Spring Data Access (Part 1)

This section of the reference documentation deals with data access and the interaction between the data access layer and the business or service layer.
It covers Spring's comprehensive transaction management support in detail and then introduces the various data access frameworks and technologies with which the Spring Framework integrates.

1. Transaction management
Comprehensive transaction support is one of the most important reasons to use the Spring framework. The Spring framework provides a consistent abstraction for transaction management, providing the following benefits:

  • A consistent programming model across different transaction APIs, such as the Java Transaction API (JTA), JDBC, Hibernate, and the Java Persistence API (JPA).
  • Support for declarative transaction management.
  • A simpler API for programmatic transaction management than complex transaction APIs, such as JTA.
  • Excellent integration with Spring's data access abstractions.

The following sections describe the transaction features and technologies of the Spring framework:

  • Advantages of the Spring Framework's transaction support model describes why you would use the Spring Framework's transaction abstraction instead of EJB Container-Managed Transactions (CMT) or choosing to drive local transactions through a proprietary API, such as Hibernate.
  • Understanding the Spring Framework transaction abstraction outlines the core classes and describes how to configure and obtain DataSource instances from a variety of sources.
  • Synchronizing resources with transactions describes how the application code ensures that resources are created, reused, and cleaned up properly.
  • Declarative transaction management describes support for declarative transaction management.
  • Programmatic transaction management covers support for programmatic (that is, explicitly coded) transaction management.
  • Transaction-bound events describes how you can use application events within a transaction.

This chapter also discusses best practices, application server integration, and solutions to common problems.
1.1 Advantages of the Spring Framework's transaction support model

Traditionally, Java EE developers have had two choices for transaction management: global or local transactions, both of which have significant limitations. The next two sections review global and local transaction management, followed by a discussion of how the Spring Framework's transaction management support addresses the limitations of both models.

1.1.1 Global transactions
Global transactions let you work with multiple transactional resources, typically relational databases and message queues. The application server manages global transactions through JTA, which is a cumbersome API (partly because of its exception model). Furthermore, a JTA UserTransaction is normally sourced from JNDI, meaning that you also need to use JNDI in order to use JTA. The use of global transactions limits any potential reuse of application code, as JTA is normally available only in an application server environment.

Previously, the preferred way to use global transactions was through EJB CMT (Container-Managed Transactions). CMT is a form of declarative transaction management (as distinguished from programmatic transaction management). EJB CMT removes the need for transaction-related JNDI lookups, although the use of EJB itself necessitates the use of JNDI. It removes most but not all of the need to write Java code to control transactions. The significant downside is that CMT is tied to JTA and to an application server environment. Also, it is available only if you choose to implement business logic in EJBs (or at least behind a transactional EJB facade). The negatives of EJB in general are so great that this is not an attractive proposition, especially in the face of compelling alternatives for declarative transaction management.

1.1.2 Local transactions
Local transactions are resource-specific, such as a transaction associated with a JDBC connection. Local transactions may be easier to use but have a significant disadvantage: they cannot work across multiple transactional resources. For example, code that manages transactions by using a JDBC connection cannot run within a global JTA transaction. Because the application server is not involved in transaction management, it cannot help ensure correctness across multiple resources. (Note that most applications use a single transaction resource.) Another downside is that local transactions are invasive to the programming model.

1.1.3 The Spring Framework's consistent programming model
Spring resolves the disadvantages of global and local transactions. It lets application developers use a consistent programming model in any environment. You write your code once, and it can benefit from different transaction management strategies in different environments. The Spring Framework provides both declarative and programmatic transaction management. Most users prefer declarative transaction management, which we recommend in most cases.

With programmatic transaction management, developers work with the Spring Framework transaction abstraction, which can run over any underlying transaction infrastructure. With the preferred declarative model, developers typically write little or no code related to transaction management and, hence, do not depend on the Spring Framework transaction API or any other transaction API.

Do you need an application server for transaction management?
The Spring framework's transaction management support changes the traditional rules of when an enterprise Java application needs an application server.

In particular, you do not need an application server for declarative transactions through EJBs. In fact, even if your application server has powerful JTA capabilities, you may find that the Spring Framework's declarative transactions offer more power and a more productive programming model than EJB CMT.

Typically, you need an application server's JTA capability only if your application needs to handle transactions across multiple resources, which is not a requirement for many applications. Many high-end applications use a single, highly scalable database (such as Oracle RAC) instead. Stand-alone transaction managers (such as Atomikos Transactions and JOTM) are other options. Of course, you may need other application server capabilities, such as Java Message Service (JMS) and the Java EE Connector Architecture (JCA).

The Spring Framework gives you the choice of when to scale your application up to a fully loaded application server. Gone are the days when the only alternative to using EJB CMT or JTA was to write code with local transactions (such as those on JDBC connections) and face hefty rework if you needed that code to run within global, container-managed transactions. With the Spring Framework, only some of the bean definitions in your configuration file need to change (rather than your code).

1.2 Understanding the Spring Framework transaction abstraction
The key to the Spring transaction abstraction is the notion of a transaction strategy. A transaction strategy is defined by the org.springframework.transaction.PlatformTransactionManager interface, shown in the following listing:

public interface PlatformTransactionManager {

    TransactionStatus getTransaction(TransactionDefinition definition) throws TransactionException;

    void commit(TransactionStatus status) throws TransactionException;

    void rollback(TransactionStatus status) throws TransactionException;
}

This is primarily a service provider interface (SPI), although you can also use it programmatically from your application code. Because PlatformTransactionManager is an interface, it can be easily mocked or stubbed as necessary. It is not tied to a lookup strategy, such as JNDI. PlatformTransactionManager implementations are defined like any other object (or bean) in the Spring Framework IoC container. This benefit alone makes Spring Framework transactions a worthwhile abstraction, even when you work with JTA. You can test transactional code much more easily than if it used JTA directly.

Again, in keeping with Spring's philosophy, the TransactionException that can be thrown by any of the PlatformTransactionManager interface's methods is unchecked (that is, it extends the java.lang.RuntimeException class). Transaction infrastructure failures are almost invariably fatal. In the rare cases where application code can actually recover from a transaction failure, the application developer can still choose to catch and handle TransactionException. The salient point is that developers are not forced to do so.
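To make the SPI usage concrete, the following is a minimal sketch of driving a transaction programmatically through the interface. It assumes a PlatformTransactionManager (such as the DataSourceTransactionManager defined later in this chapter) is supplied from the IoC container; the TransferService class and its empty method body are hypothetical placeholders:

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

// Hypothetical service that drives a transaction directly through the SPI.
public class TransferService {

    private final PlatformTransactionManager txManager;

    public TransferService(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    public void transfer() {
        // Obtain a new or existing transaction according to the definition
        TransactionStatus status = txManager.getTransaction(new DefaultTransactionDefinition());
        try {
            // ... work with transactional resources here ...
            txManager.commit(status);
        } catch (RuntimeException ex) {
            // TransactionException is unchecked as well, so this catch is optional
            txManager.rollback(status);
            throw ex;
        }
    }
}
```

Note that the catch block is a choice, not an obligation: because all the exceptions involved are unchecked, the code could equally let them propagate.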

The getTransaction(..) method returns a TransactionStatus object, depending on a TransactionDefinition parameter. The returned TransactionStatus might represent a new transaction or an existing transaction, if a matching transaction exists in the current call stack. The implication in the latter case is that, as with Java EE transaction contexts, a TransactionStatus is associated with a thread of execution.
The TransactionDefinition interface specifies:

  • Propagation: Typically, all code within a transaction scope runs in that transaction. However, you can specify the behavior if a transactional method is run when a transaction context already exists. For example, code can continue running in the existing transaction (the common case), or the existing transaction can be suspended and a new transaction created. Spring offers all of the transaction propagation options familiar from EJB CMT. To read about the semantics of transaction propagation in Spring, see transaction propagation.
  • Isolation: The degree to which this transaction is isolated from the work of other transactions. For example, can this transaction see uncommitted writes from other transactions?
  • Timeout: How long this transaction runs before timing out and being automatically rolled back by the underlying transaction infrastructure.
  • Read-only status: You can use a read-only transaction when your code reads but does not modify data. Read-only transactions can be a useful optimization in some cases, such as when you use Hibernate.

These settings reflect standard transactional concepts. If necessary, refer to resources that discuss transaction isolation levels and other core transaction concepts. Understanding these concepts is essential to using the Spring Framework or any transaction management solution.
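The four settings above map directly onto properties of DefaultTransactionDefinition, Spring's default TransactionDefinition implementation. A brief sketch (the specific values are illustrative, not recommendations):

```java
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class TxSettings {

    // Builds a definition demonstrating the four standard transaction settings.
    static TransactionDefinition example() {
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED); // join the current transaction, or create one
        def.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);  // do not see uncommitted writes of others
        def.setTimeout(30);    // seconds before timeout and automatic rollback
        def.setReadOnly(true); // hint that the code reads but does not modify data
        return def;
    }
}
```

Such a definition is what you would pass to getTransaction(..) in the programmatic model; in the declarative model, the same settings are expressed in metadata instead.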

The TransactionStatus interface provides a simple way for transaction code to control transaction execution and query transaction status. These concepts should be familiar because they are common to all transaction APIs. The following listing shows the TransactionStatus interface:

public interface TransactionStatus extends SavepointManager {

    boolean isNewTransaction();

    boolean hasSavepoint();

    void setRollbackOnly();

    boolean isRollbackOnly();

    void flush();

    boolean isCompleted();
}

Regardless of whether you opt for declarative or programmatic transaction management in Spring, defining the correct PlatformTransactionManager implementation is absolutely essential. This implementation is typically supplied through dependency injection.

PlatformTransactionManager implementations normally require knowledge of the environment in which they work: JDBC, JTA, Hibernate, and so on. The following examples show how you can define a local PlatformTransactionManager implementation (in this case, with plain JDBC).
You can define a JDBC DataSource by creating a bean similar to the following:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
</bean>

The related PlatformTransactionManager bean definition then has a reference to the DataSource definition. It should resemble the following example:

<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

If you use JTA in a Java EE container, then you use a container DataSource, obtained through JNDI, in conjunction with Spring's JtaTransactionManager. The following example shows what the JTA and JNDI lookup version would look like:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jee="http://www.springframework.org/schema/jee"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/jee
        https://www.springframework.org/schema/jee/spring-jee.xsd">

    <jee:jndi-lookup id="dataSource" jndi-name="jdbc/jpetstore"/>

    <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager" />

    <!-- other <bean/> definitions here -->

</beans>

The JtaTransactionManager does not need to know about the DataSource (or any other specific resources), because it uses the container's global transaction management infrastructure.

Note: the preceding definition of the dataSource bean uses the <jee:jndi-lookup/> tag from the jee namespace. For more information, see the JEE schema.

You can also easily use Hibernate local transactions, as shown in the following examples. In this case, you need to define a Hibernate LocalSessionFactoryBean, which your application code can use to obtain Hibernate Session instances.

The DataSource bean definition is similar to the local JDBC example shown earlier, so it is not shown in the following example.

Note: if the DataSource (used by any non-JTA transaction manager) is looked up through JNDI and managed by a Java EE container, it should be non-transactional, because the Spring Framework (rather than the Java EE container) manages the transactions.

The txManager bean in this case is of the HibernateTransactionManager type. In the same way that the DataSourceTransactionManager needs a reference to the DataSource, the HibernateTransactionManager needs a reference to the SessionFactory. The following example declares sessionFactory and txManager beans:

<bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="mappingResources">
        <list>
            <value>org/springframework/samples/petclinic/hibernate/petclinic.hbm.xml</value>
        </list>
    </property>
    <property name="hibernateProperties">
        <value>
            hibernate.dialect=${hibernate.dialect}
        </value>
    </property>
</bean>

<bean id="txManager" class="org.springframework.orm.hibernate5.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

If you use Hibernate with Java EE container-managed JTA transactions, you should use the same JtaTransactionManager as in the earlier JTA example for JDBC, as the following example shows:

<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

Note: if you use JTA, your transaction manager definition should look the same, regardless of which data access technology you use, be it JDBC, Hibernate, JPA, or any other supported technology. This is because JTA transactions are global transactions, which can enlist any transactional resource.

In all of these cases, the application code does not need to be changed. You can change the way transactions are managed only by changing the configuration, even if the change means moving from a local transaction to a global transaction, or vice versa.

1.3 Synchronizing resources with transactions
How to create different transaction managers and how they are linked to related resources that need to be synchronized to transactions (for example, DataSourceTransactionManager to a JDBC DataSource, HibernateTransactionManager to a Hibernate SessionFactory, and so forth) should now be clear. This section describes how the application code (directly or indirectly, by using a persistence API such as JDBC, Hibernate, or JPA) ensures that these resources are created, reused, and cleaned up properly. The section also discusses how transaction synchronization is (optionally) triggered through the relevant PlatformTransactionManager.

1.3.1 High-level synchronization approach
The preferred approach is to use Spring's highest-level template-based persistence integration APIs or to use native ORM APIs with transaction-aware factory beans or proxies for managing the native resource factories. These transaction-aware solutions internally handle resource creation and reuse, cleanup, optional transaction synchronization of the resources, and exception mapping. Thus, user data access code does not have to address these tasks but can focus purely on non-boilerplate persistence logic. Generally, you use the native ORM API or take a template approach for JDBC access by using the JdbcTemplate. These solutions are detailed in subsequent sections of this reference documentation.

1.3.2 Low-level synchronization approach
Classes such as DataSourceUtils (for JDBC), EntityManagerFactoryUtils (for JPA), and SessionFactoryUtils (for Hibernate) exist at a lower level. When you want the application code to deal directly with the resource types of the native persistence APIs, you use these classes to ensure that proper Spring Framework-managed instances are obtained, transactions are (optionally) synchronized, and exceptions that occur in the process are properly mapped to a consistent API.

For example, in the case of JDBC, instead of the traditional JDBC approach of calling the getConnection() method on the DataSource, you can use Spring's org.springframework.jdbc.datasource.DataSourceUtils class, as follows:

Connection conn = DataSourceUtils.getConnection(dataSource);

If an existing transaction already has a connection synchronized (linked) to it, that instance is returned. Otherwise, the method call triggers the creation of a new connection, which is (optionally) synchronized to any existing transaction and made available for subsequent reuse in that same transaction. As mentioned earlier, any SQLException is wrapped in a Spring Framework CannotGetJdbcConnectionException, one of the Framework's hierarchy of unchecked DataAccessException types. This approach gives you more information than can be obtained easily from the SQLException and ensures portability across databases and even across different persistence technologies.

This approach also works without Spring transaction management (transaction synchronization is optional), so you can use it whether or not you use Spring for transaction management.
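A fuller sketch of this low-level pattern, including the matching release call; the dataSource parameter and the work done with the connection are assumed to come from elsewhere in your application:

```java
import java.sql.Connection;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class LowLevelJdbc {

    // Hypothetical helper that works with a raw JDBC Connection.
    static void doWork(DataSource dataSource) {
        // Returns the connection bound to the current transaction, if any;
        // otherwise obtains a fresh one (optionally synchronizing it).
        Connection conn = DataSourceUtils.getConnection(dataSource);
        try {
            // ... use conn directly with the JDBC API ...
        } finally {
            // Closes the connection only if it is NOT bound to an
            // ongoing Spring-managed transaction.
            DataSourceUtils.releaseConnection(conn, dataSource);
        }
    }
}
```

Pairing getConnection with releaseConnection (rather than conn.close()) is what lets the same code run correctly both inside and outside a Spring-managed transaction.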

Of course, once you have used Spring's JDBC support, JPA support, or Hibernate support, you generally prefer not to use DataSourceUtils or the other helper classes, because you would much rather work through the Spring abstractions than with the relevant APIs directly. For example, if you use the Spring JdbcTemplate or the jdbc.object package to simplify the use of JDBC, correct connection retrieval occurs behind the scenes and you need not write any special code.

1.3.3 TransactionAwareDataSourceProxy
At the very lowest level exists the TransactionAwareDataSourceProxy class. This is a proxy for a target DataSource, which wraps the target DataSource to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI DataSource, as provided by a Java EE server.

You should almost never need or want to use this class, except when existing code must be called and passed a standard JDBC DataSource interface implementation. In that case, it is possible for this code to be usable while also participating in Spring-managed transactions. You can write your new code by using the higher-level abstractions mentioned earlier.
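A sketch of wiring such a proxy in front of legacy code; the legacy DAO that receives the wrapped DataSource is hypothetical and stands for any existing code written against the plain JDBC DataSource interface:

```java
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;

public class LegacyWiring {

    // Wraps the target DataSource so that plain getConnection() calls made by
    // legacy code participate in Spring-managed transactions.
    static DataSource wrap(DataSource target) {
        return new TransactionAwareDataSourceProxy(target);
    }
}
```

The legacy code continues to see an ordinary DataSource, so it needs no changes; only the wiring (here done in code, but typically done as a bean definition) differs.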

1.4 Declarative transaction management

Note: most Spring framework users choose declarative transaction management. This option has the least impact on the application code, so it is best suited to the ideal of a non-invasive lightweight container.

The Spring Framework's declarative transaction management is made possible with AOP. However, although the transactional aspect code comes with the Spring Framework distribution and may be used in a boilerplate fashion, you generally do not have to understand AOP concepts to make effective use of this code.

Declarative transaction management in the Spring Framework is similar to EJB CMT, in that you can specify transaction behavior (or lack of it) down to the individual method level. You can make a setRollbackOnly() call within a transaction context, if necessary. The differences between the two types of transaction management are:

  • Unlike EJB CMT, which is tied to JTA, the Spring Framework's declarative transaction management works in any environment. It can work with JTA transactions or local transactions (by using JDBC, JPA, or Hibernate) by adjusting the configuration files.
  • You can apply the Spring Framework's declarative transaction management to any class, not merely special classes such as EJBs.
  • The Spring Framework offers declarative rollback rules, a feature with no EJB equivalent. Both programmatic and declarative support for rollback rules is provided.
  • The Spring Framework lets you customize transactional behavior by using AOP. For example, you can insert custom behavior in the case of a transaction rollback. You can also add arbitrary advice along with the transactional advice. With EJB CMT, you cannot influence the container's transaction management, except with setRollbackOnly().
  • The Spring Framework does not support propagation of transaction contexts across remote calls, as high-end application servers do. If you need this feature, we recommend that you use EJB. However, consider carefully before using such a feature, because, normally, one does not want transactions to span remote calls.

The concept of rollback rules is important. They let you specify which exceptions (and throwables) should cause automatic rollback. You can specify this declaratively, in configuration, rather than in Java code. So, although you can still call setRollbackOnly() on the TransactionStatus object to roll back the current transaction, most often you can specify a rule that, for example, MyApplicationException must always result in rollback. The significant advantage of this option is that business objects do not depend on the transaction infrastructure. For example, they typically do not need to import the Spring transaction APIs or other Spring APIs.

Although the EJB container's default behavior automatically rolls back a transaction on a system exception (usually a runtime exception), EJB CMT does not roll back a transaction automatically on an application exception (that is, a checked exception other than java.rmi.RemoteException). While the Spring default behavior for declarative transaction management follows the EJB convention (rollback is automatic only on unchecked exceptions), it is often useful to customize this behavior.

1.4.1 Understanding the Spring Framework's declarative transaction implementation

It is not sufficient merely to tell you to annotate your classes with the @Transactional annotation, add @EnableTransactionManagement to your configuration, and expect you to understand how it all works. To provide a deeper understanding, this section explains the inner workings of the Spring Framework's declarative transaction infrastructure in the context of transaction-related issues.

The most important concepts to grasp with regard to the Spring Framework's declarative transaction support are that this support is enabled via AOP proxies and that the transactional advice is driven by metadata (currently XML- or annotation-based). The combination of AOP with transactional metadata yields an AOP proxy that uses a TransactionInterceptor in conjunction with an appropriate PlatformTransactionManager implementation to drive transactions around method invocations.

Spring AOP will be discussed in the AOP section.

The following figure shows a conceptual view of calling methods on a transaction agent:

1.4.2 An example of declarative transaction implementation
Consider the following interface and its attendant implementation. This example uses Foo and Bar classes as placeholders so that you can concentrate on the transaction usage without focusing on a particular domain model. For the purposes of this example, the fact that the DefaultFooService class throws UnsupportedOperationException instances in the body of each implemented method is fine. That behavior lets you see transactions being created and then rolled back in response to the UnsupportedOperationException instance. The following listing shows the FooService interface:

// the service interface that we want to make transactional

package x.y.service;

public interface FooService {

    Foo getFoo(String fooName);

    Foo getFoo(String fooName, String barName);

    void insertFoo(Foo foo);

    void updateFoo(Foo foo);

}

The following example shows the implementation of the above interface:

package x.y.service;

public class DefaultFooService implements FooService {

    @Override
    public Foo getFoo(String fooName) {
        // ...
    }

    @Override
    public Foo getFoo(String fooName, String barName) {
        // ...
    }

    @Override
    public void insertFoo(Foo foo) {
        // ...
    }

    @Override
    public void updateFoo(Foo foo) {
        // ...
    }
}

Suppose that the first two methods of the FooService interface, getFoo(String) and getFoo(String, String), must run in the context of a transaction with read-only semantics and that the other methods, insertFoo(Foo) and updateFoo(Foo), must run in the context of a transaction with read-write semantics. The following configuration is explained in detail in the next few paragraphs:

<?xml version="1.0" encoding="UTF-8"?>
<!-- from the file 'context.xml' -->
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <!-- this is the service object that we want to make transactional -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- the transactional advice (what 'happens'; see the <aop:advisor/> bean below) -->
    <tx:advice id="txAdvice" transaction-manager="txManager">
        <!-- the transactional semantics... -->
        <tx:attributes>
            <!-- all methods starting with 'get' are read-only -->
            <tx:method name="get*" read-only="true"/>
            <!-- other methods use the default transaction settings (see below) -->
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <!-- ensure that the above transactional advice runs for any execution
        of an operation defined by the FooService interface -->
    <aop:config>
        <aop:pointcut id="fooServiceOperation" expression="execution(* x.y.service.FooService.*(..))"/>
        <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceOperation"/>
    </aop:config>

    <!-- don't forget the DataSource -->
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
        <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
        <property name="username" value="scott"/>
        <property name="password" value="tiger"/>
    </bean>

    <!-- similarly, don't forget the PlatformTransactionManager -->
    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <!-- other <bean/> definitions here -->

</beans>

Examine the preceding configuration. It assumes that you want to make a service object, the fooService bean, transactional. The transaction semantics to apply are encapsulated in the <tx:advice/> definition. The <tx:advice/> definition reads as "all methods starting with get are to run in the context of a read-only transaction, and all other methods are to run with the default transaction semantics". The transaction-manager attribute of the <tx:advice/> tag is set to the name of the PlatformTransactionManager bean that is going to drive the transactions (in this case, the txManager bean).

Note: you can omit the transaction-manager attribute in the transactional advice (<tx:advice/>) if the bean name of the PlatformTransactionManager that you want to wire in is transactionManager. If the PlatformTransactionManager bean that you want to wire in has any other name, you must use the transaction-manager attribute explicitly, as in the preceding example.

The <aop:config/> definition ensures that the transactional advice defined by the txAdvice bean runs at the appropriate points in the program. First, we define a pointcut that matches the execution of any operation defined in the FooService interface (fooServiceOperation). Then we associate the pointcut with the txAdvice by using an advisor. The result indicates that, at the execution of a fooServiceOperation, the advice defined by txAdvice is run.

The expression defined within the <aop:pointcut/> element is an AspectJ pointcut expression. See the AOP section for more details on pointcut expressions in Spring.
A common requirement is to make an entire service layer transactional. The best way to do this is to change the pointcut expression to match any operation in your service layer. The following example shows how to do so:

<aop:config>
    <aop:pointcut id="fooServiceMethods" expression="execution(* x.y.service.*.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceMethods"/>
</aop:config>

Note: the preceding example assumes that all your service interfaces are defined in the x.y.service package. See the AOP section for more details.

Now that we have analyzed the configuration, you may be asking yourself, "What does all this configuration actually do?"
The configuration shown earlier is used to create a transactional proxy around the object that is created from the fooService bean definition. The proxy is configured with the transactional advice so that, when an appropriate method is invoked on the proxy, a transaction is started, suspended, marked as read-only, and so on, depending on the transaction configuration associated with that method. Consider the following program that test drives the configuration shown earlier:

public final class Boot {

    public static void main(final String[] args) throws Exception {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml", Boot.class);
        FooService fooService = (FooService) ctx.getBean("fooService");
        fooService.insertFoo(new Foo());
    }
}

The output from running the preceding program should resemble the following (the Log4J output and the stack trace from the UnsupportedOperationException thrown by the insertFoo(..) method of the DefaultFooService class have been truncated for clarity):

<!-- the Spring container is starting up... -->
[AspectJInvocationContextExposingAdvisorAutoProxyCreator] - Creating implicit proxy for bean 'fooService' with 0 common interceptors and 1 specific interceptors

<!-- the DefaultFooService is actually proxied -->
[JdkDynamicAopProxy] - Creating JDK dynamic proxy for [x.y.service.DefaultFooService]

<!-- ... the insertFoo(..) method is now being invoked on the proxy -->
[TransactionInterceptor] - Getting transaction for x.y.service.FooService.insertFoo

<!-- the transactional advice kicks in here... -->
[DataSourceTransactionManager] - Creating new transaction with name [x.y.service.FooService.insertFoo]
[DataSourceTransactionManager] - Acquired Connection [org.apache.commons.dbcp.PoolableConnection@a53de4] for JDBC transaction

<!-- the insertFoo(..) method from DefaultFooService throws an exception... -->
[RuleBasedTransactionAttribute] - Applying rules to determine whether transaction should rollback on java.lang.UnsupportedOperationException
[TransactionInterceptor] - Invoking rollback for transaction on x.y.service.FooService.insertFoo due to throwable [java.lang.UnsupportedOperationException]

<!-- and the transaction is rolled back (by default, RuntimeException instances cause rollback) -->
[DataSourceTransactionManager] - Rolling back JDBC transaction on Connection [org.apache.commons.dbcp.PoolableConnection@a53de4]
[DataSourceTransactionManager] - Releasing JDBC Connection after transaction
[DataSourceUtils] - Returning JDBC Connection to DataSource

Exception in thread "main" java.lang.UnsupportedOperationException at x.y.service.DefaultFooService.insertFoo(DefaultFooService.java:14)
<!-- AOP infrastructure stack trace elements removed for clarity -->
at $Proxy0.insertFoo(Unknown Source)
at Boot.main(Boot.java:11)

1.4.3 Rolling back a declarative transaction
The previous section outlined the basics of how to declaratively specify transaction settings for classes (usually service tier classes) in an application. This section describes how to control the rollback of a transaction in a simple declarative way.

The recommended way to indicate to the Spring framework's transaction infrastructure that a transaction's work is to be rolled back is to throw an exception from code that is currently executing in the context of a transaction. The Spring framework's transaction infrastructure code catches any unhandled exception as it bubbles up the call stack and determines whether to mark the transaction for rollback.

In its default configuration, the Spring framework's transaction infrastructure code marks a transaction for rollback only in the case of runtime, unchecked exceptions. That is, when the thrown exception is an instance or subclass of RuntimeException. (Error instances also, by default, result in a rollback.) Checked exceptions that are thrown from a transactional method do not result in a rollback in the default configuration.
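Restated as code, the default rule reduces to a simple predicate over the thrown exception. The following is a minimal sketch of the documented behavior, not Spring's actual implementation:

```java
public class DefaultRollbackRule {

    // Default rule: unchecked exceptions (RuntimeException and Error)
    // trigger a rollback; checked exceptions do not.
    public static boolean rollbackOn(Throwable ex) {
        return (ex instanceof RuntimeException) || (ex instanceof Error);
    }

    public static void main(String[] args) {
        System.out.println(rollbackOn(new IllegalStateException())); // true
        System.out.println(rollbackOn(new OutOfMemoryError()));      // true
        System.out.println(rollbackOn(new java.io.IOException()));   // false (checked)
    }
}
```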

You can accurately configure which exception types mark transactions as rolled back, including checked exceptions. The following XML fragment shows how to configure rollback for checked, application specific exception types:

<tx:advice id="txAdvice" transaction-manager="txManager">
    <tx:attributes>
        <tx:method name="get*" read-only="true" rollback-for="NoProductInStockException"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>

If you do not want a transaction rolled back when an exception is thrown, you can also specify "no rollback rules". The following example tells the Spring framework's transaction infrastructure to commit the attendant transaction even in the face of an unhandled InstrumentNotFoundException:

<tx:advice id="txAdvice">
    <tx:attributes>
        <tx:method name="updateStock" no-rollback-for="InstrumentNotFoundException"/>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>

When the Spring framework's transaction infrastructure catches an exception and consults the configured rollback rules to determine whether to mark the transaction for rollback, the strongest matching rule wins. Therefore, in the case of the following configuration, any exception other than an InstrumentNotFoundException results in a rollback of the attendant transaction:

<tx:advice id="txAdvice">
    <tx:attributes>
        <tx:method name="*" rollback-for="Throwable" no-rollback-for="InstrumentNotFoundException"/>
    </tx:attributes>
</tx:advice>
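"Strongest match" means the rule whose exception name matches closest to the thrown exception in its superclass chain. The following self-contained sketch (hypothetical and simplified, not Spring's rollback-rule implementation) illustrates this winner-by-depth evaluation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RollbackRules {

    // Depth at which ruleName matches in the exception's superclass chain,
    // or -1 if it never matches.
    static int depth(Class<?> type, String ruleName) {
        int depth = 0;
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            if (c.getName().contains(ruleName)) {
                return depth;
            }
            depth++;
        }
        return -1;
    }

    // Rules map exception-name patterns to true (rollback-for) or false
    // (no-rollback-for). The shallowest match wins; with no match at all,
    // fall back to the default RuntimeException rule.
    static boolean shouldRollback(Class<? extends Throwable> ex, Map<String, Boolean> rules) {
        int winnerDepth = Integer.MAX_VALUE;
        Boolean winner = null;
        for (Map.Entry<String, Boolean> rule : rules.entrySet()) {
            int d = depth(ex, rule.getKey());
            if (d >= 0 && d < winnerDepth) {
                winnerDepth = d;
                winner = rule.getValue();
            }
        }
        return (winner != null) ? winner : RuntimeException.class.isAssignableFrom(ex);
    }

    static class InstrumentNotFoundException extends RuntimeException {}

    public static void main(String[] args) {
        Map<String, Boolean> rules = new LinkedHashMap<>();
        rules.put("Throwable", true);                    // rollback-for="Throwable"
        rules.put("InstrumentNotFoundException", false); // no-rollback-for="..."

        // InstrumentNotFoundException matches its own rule at depth 0,
        // which beats the Throwable rule deeper in the chain: no rollback.
        System.out.println(shouldRollback(InstrumentNotFoundException.class, rules)); // false
        // Any other exception matches only the Throwable rule: rollback.
        System.out.println(shouldRollback(IllegalStateException.class, rules));       // true
    }
}
```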

You can also indicate a required rollback programmatically. Although simple, this process is quite invasive and tightly couples your code to the Spring framework's transaction infrastructure. The following example shows how to programmatically indicate a required rollback:

public void resolvePosition() {
    try {
        // some business logic...
    } catch (NoProductInStockException ex) {
        // trigger rollback programmatically
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}

You are strongly encouraged to use the declarative approach to rollback, if at all possible. Programmatic rollback is available should you need it, but its usage flies in the face of achieving a clean POJO-based architecture.

1.4.4 Configuring different transaction semantics for different beans
Consider the scenario where you have a number of service layer objects and you want to apply a totally different transactional configuration to each of them. You can do so by defining distinct <aop:advisor/> elements with differing pointcut and advice-ref attribute values.

As a point of comparison, first assume that all of your service layer classes are defined in a root x.y.service package. To make all beans that are instances of classes defined in that package (or in subpackages) and whose names end in Service have the default transactional configuration, you could write the following:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <aop:config>

        <aop:pointcut id="serviceOperation"
                expression="execution(* x.y.service..*Service.*(..))"/>

        <aop:advisor pointcut-ref="serviceOperation" advice-ref="txAdvice"/>

    </aop:config>

    <!-- these two beans will be transactional... -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>
    <bean id="barService" class="x.y.service.extras.SimpleBarService"/>

    <!-- ... and these two beans won't -->
    <bean id="anotherService" class="org.xyz.SomeService"/> <!-- (not in the right package) -->
    <bean id="barManager" class="x.y.service.SimpleBarManager"/> <!-- (doesn't end in 'Service') -->

    <tx:advice id="txAdvice">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <!-- other transaction infrastructure beans such as a PlatformTransactionManager omitted... -->

</beans>

The following example shows how to configure two distinct beans with totally different transactional settings:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <aop:config>

        <aop:pointcut id="defaultServiceOperation"
                expression="execution(* x.y.service.*Service.*(..))"/>

        <aop:pointcut id="noTxServiceOperation"
                expression="execution(* x.y.service.ddl.DefaultDdlManager.*(..))"/>

        <aop:advisor pointcut-ref="defaultServiceOperation" advice-ref="defaultTxAdvice"/>

        <aop:advisor pointcut-ref="noTxServiceOperation" advice-ref="noTxAdvice"/>

    </aop:config>

    <!-- this bean will be transactional (see the 'defaultServiceOperation' pointcut) -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- this bean will also be transactional, but with totally different transactional settings -->
    <bean id="anotherFooService" class="x.y.service.ddl.DefaultDdlManager"/>

    <tx:advice id="defaultTxAdvice">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <tx:advice id="noTxAdvice">
        <tx:attributes>
            <tx:method name="*" propagation="NEVER"/>
        </tx:attributes>
    </tx:advice>

    <!-- other transaction infrastructure beans such as a PlatformTransactionManager omitted... -->

</beans>

1.4.5 <tx:advice/> settings
This section summarizes the various transactional settings that you can specify by using the <tx:advice/> tag. The default <tx:advice/> settings are:

  • The propagation setting is REQUIRED.
  • The isolation level is DEFAULT.
  • The transaction is read-write.
  • The transaction timeout defaults to the default timeout of the underlying transaction system, or none if timeouts are not supported.
  • Any RuntimeException triggers a rollback, while any checked exception does not.

You can change these default settings. The following is a summary of the various attributes of the <tx:method/> tags nested within <tx:advice/> and <tx:attributes/> tags:

  • name (required): The name of the method(s) with which the transaction attributes are to be associated. The wildcard (*) character can be used to associate the same transaction attribute settings with a number of methods (for example, get*, handle*, on*Event, and so forth).
  • propagation (optional; default: REQUIRED): The transaction propagation behavior.
  • isolation (optional; default: DEFAULT): The transaction isolation level. Only applicable to propagation settings of REQUIRED or REQUIRES_NEW.
  • timeout (optional; default: -1): The transaction timeout in seconds. Only applicable to propagation settings of REQUIRED or REQUIRES_NEW.
  • read-only (optional; default: false): Read-write versus read-only transaction. Only applicable to propagation settings of REQUIRED or REQUIRES_NEW.
  • rollback-for (optional): A comma-delimited list of exception types that trigger rollback. For example, com.foo.MyBusinessException,ServletException.
  • no-rollback-for (optional): A comma-delimited list of exception types that do not trigger rollback. For example, com.foo.MyBusinessException,ServletException.

1.4.6 Using @Transactional
In addition to the XML-based declarative approach to transaction configuration, you can use an annotation-based approach. Declaring transaction semantics directly in the Java source code puts the declarations much closer to the affected code. There is not much danger of undue coupling, because code that is meant to be used transactionally is almost always deployed that way anyway.

Note: the standard javax.transaction.Transactional annotation is also supported as a drop-in replacement for Spring's own annotation. Refer to the JTA 1.2 documentation for more details.

The ease of use afforded by the @Transactional annotation is best illustrated with an example, which is explained in the text that follows. Consider the following class definition:

// the service class that we want to make transactional
@Transactional
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // ...
    }

    public Foo getFoo(String fooName, String barName) {
        // ...
    }

    public void insertFoo(Foo foo) {
        // ...
    }

    public void updateFoo(Foo foo) {
        // ...
    }
}

Used at the class level, as above, the annotation indicates the default for all methods of the declaring class (as well as its subclasses). Alternatively, each method can be annotated individually. Note that a class-level annotation does not apply to ancestor classes up the class hierarchy; in such a scenario, methods need to be locally redeclared in order to participate in a subclass-level annotation.

When a POJO class such as the one above is defined as a bean in a Spring context, you can make the bean instance transactional through an @EnableTransactionManagement annotation in a @Configuration class. See the javadoc for full details.
In XML configuration, the <tx:annotation-driven/> tag provides similar convenience:

<!-- from the file 'context.xml' -->
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <!-- this is the service object that we want to make transactional -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- enable the configuration of transactional behavior based on annotations -->
    <tx:annotation-driven transaction-manager="txManager"/><!-- a PlatformTransactionManager is still required --> 

    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <!-- (this dependency is defined somewhere else) -->
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <!-- other <bean/> definitions here -->

</beans>

Note: you can omit the transaction-manager attribute in the <tx:annotation-driven/> tag if the bean name of the PlatformTransactionManager that you want to wire in has the name transactionManager. If the PlatformTransactionManager bean that you want to dependency-inject has any other name, you have to use the transaction-manager attribute, as in the preceding example.

Method visibility and @Transactional
When you use proxies, you should apply the @Transactional annotation only to methods with public visibility. If you do annotate protected, private, or package-visible methods with the @Transactional annotation, no error is raised, but the annotated method does not exhibit the configured transactional settings. If you need to annotate non-public methods, consider using AspectJ (described later).

You can apply the @Transactional annotation to an interface definition, a method on an interface, a class definition, or a public method on a class. However, the mere presence of the @Transactional annotation is not enough to activate the transactional behavior. The @Transactional annotation is merely metadata that can be consumed by some runtime infrastructure that is @Transactional-aware and that can use the metadata to configure the appropriate beans with transactional behavior. In the preceding example, the <tx:annotation-driven/> element switches on the transactional behavior.

Note: the Spring team recommends that you annotate only concrete classes (and methods of concrete classes) with the @Transactional annotation, as opposed to annotating interfaces. You certainly can place the @Transactional annotation on an interface (or an interface method), but this works only as you would expect it to if you use interface-based proxies. The fact that Java annotations are not inherited from interfaces means that, if you use class-based proxies (proxy-target-class="true") or the weaving-based aspect (mode="aspectj"), the transaction settings are not recognized by the proxying and weaving infrastructure, and the object is not wrapped in a transactional proxy.

In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation (in effect, a method within the target object calling another method of the target object) does not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behavior, so you should not rely on this feature in your initialization code (that is, @PostConstruct).
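The self-invocation caveat can be demonstrated with a plain JDK dynamic proxy, the same mechanism Spring uses for interface-based proxies (hypothetical service names; the counter stands in for the transactional advice):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

// Only the call that enters through the proxy is intercepted; the internal
// this.insertFoo() call bypasses the "advice" entirely.
public class SelfInvocationDemo {

    interface FooService {
        void insertFoo();
        void insertFooTwice();
    }

    static class DefaultFooService implements FooService {
        public void insertFoo() { /* ... */ }
        public void insertFooTwice() {
            insertFoo(); // self-invocation: goes through 'this', not the proxy
        }
    }

    // Returns how many invocations were seen by the interceptor.
    static int runDemo() {
        AtomicInteger intercepted = new AtomicInteger();
        FooService target = new DefaultFooService();
        FooService proxy = (FooService) Proxy.newProxyInstance(
                FooService.class.getClassLoader(),
                new Class<?>[] {FooService.class},
                (Object p, Method m, Object[] a) -> {
                    intercepted.incrementAndGet(); // stands in for transactional advice
                    return m.invoke(target, a);
                });
        proxy.insertFooTwice(); // two logical method executions...
        return intercepted.get(); // ...but only one interception
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints 1
    }
}
```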

Consider using AspectJ mode (see the mode attribute in the following settings) if you expect self-invocations to be wrapped with transactions as well. In this case, there is no proxy in the first place. Instead, the target class is woven (that is, its byte code is modified) to turn @Transactional into runtime behavior on any kind of method.

The settings that you can specify on the <tx:annotation-driven/> element (with the corresponding @EnableTransactionManagement attribute in parentheses) are as follows:

  • transaction-manager (annotation attribute: N/A, see the TransactionManagementConfigurer javadoc; default: transactionManager): The name of the transaction manager to use. Required only if the name of the transaction manager is not transactionManager, as in the preceding example.
  • mode (annotation attribute: mode; default: proxy): The default mode (proxy) processes annotated beans to be proxied by using Spring's AOP framework (following proxy semantics, as discussed earlier, applying to method calls coming in through the proxy only). The alternative mode (aspectj) instead weaves the affected classes with Spring's AspectJ transaction aspect, modifying the target class byte code to apply to any kind of method call. AspectJ weaving requires spring-aspects.jar on the classpath as well as load-time weaving (or compile-time weaving) enabled. (See the Spring configuration section for details on how to set up load-time weaving.)
  • proxy-target-class (annotation attribute: proxyTargetClass; default: false): Applies to proxy mode only. Controls what type of transactional proxies are created for classes annotated with the @Transactional annotation. If set to true, class-based proxies are created. If set to false, or if the attribute is omitted, standard JDK interface-based proxies are created. (See proxying mechanisms for a detailed examination of the different proxy types.)
  • order (annotation attribute: order; default: Ordered.LOWEST_PRECEDENCE): Defines the order of the transaction advice that is applied to beans annotated with @Transactional. (For more information about the rules related to ordering of AOP advice, see advice ordering.) No specified ordering means that the AOP subsystem determines the order of the advice.

Note: the default advice mode for processing @Transactional annotations is proxy, which allows for interception of calls through the proxy only. Local calls within the same class cannot get intercepted that way. For a more advanced mode of interception, consider switching to aspectj mode in combination with compile-time or load-time weaving.

Note: the proxy-target-class attribute controls what type of transactional proxies are created for classes annotated with the @Transactional annotation. If proxy-target-class is set to true, class-based proxies are created. If proxy-target-class is false or if the attribute is omitted, standard JDK interface-based proxies are created. (See the discussion of proxying mechanisms in the AOP section for the different proxy types.)

Note: @EnableTransactionManagement and <tx:annotation-driven/> look for @Transactional only on beans in the same application context in which they are defined. This means that, if you put annotation-driven configuration in a WebApplicationContext for a DispatcherServlet, it checks for @Transactional beans only in your controllers and not in your services. See the MVC section for more information.

In the evaluation of a method's transactional settings, the most derived location takes precedence. In the case of the following example, the DefaultFooService class is annotated at the class level with the settings for a read-only transaction, but the @Transactional annotation on the updateFoo(Foo) method in the same class takes precedence over the transactional settings defined at the class level:

@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // ...
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // ...
    }
}

@Transactional settings
The @Transactional annotation is metadata that specifies that an interface, class, or method must have transactional semantics (for example, "start a brand-new read-only transaction when this method is invoked, suspending any existing transaction"). The default @Transactional settings are as follows:

  • The propagation setting is PROPAGATION_REQUIRED.
  • The isolation level is ISOLATION_DEFAULT.
  • The transaction is read-write.
  • The transaction timeout defaults to the default timeout of the underlying transaction system, or none if timeouts are not supported.
  • Any RuntimeException triggers a rollback, while any checked exception does not.

You can change these default settings. The following is a summary of the various properties of the @Transactional annotation:

  • value (String): Optional qualifier that specifies the transaction manager to be used.
  • propagation (enum: Propagation): Optional propagation setting.
  • isolation (enum: Isolation): Optional isolation level. Applies only to propagation values of REQUIRED or REQUIRES_NEW.
  • timeout (int, in seconds of granularity): Optional transaction timeout. Applies only to propagation values of REQUIRED or REQUIRES_NEW.
  • readOnly (boolean): Read-write versus read-only transaction. Only applicable to propagation values of REQUIRED or REQUIRES_NEW.
  • rollbackFor (array of Class objects, which must be derived from Throwable): Optional array of exception classes that must cause rollback.
  • rollbackForClassName (array of class names, which must be derived from Throwable): Optional array of names of exception classes that must cause rollback.
  • noRollbackFor (array of Class objects, which must be derived from Throwable): Optional array of exception classes that must not cause rollback.
  • noRollbackForClassName (array of String class names, which must be derived from Throwable): Optional array of names of exception classes that must not cause rollback.

Currently, you cannot have explicit control over the name of a transaction, where "name" means the transaction name that appears in a transaction monitor (for example, WebLogic's transaction monitor) and in logging output. For declarative transactions, the transaction name is always the fully qualified class name + "." + the method name of the transactionally advised class. For example, if the handlePayment(..) method of the BusinessService class started a transaction, the name of the transaction would be com.example.BusinessService.handlePayment.

Multiple transaction managers with @Transactional
Most Spring applications need only a single transaction manager, but there may be situations where you want multiple independent transaction managers in a single application. You can use the value attribute of the @Transactional annotation to optionally specify the identity of the PlatformTransactionManager to be used. This can either be the bean name or the qualifier value of the transaction manager bean. For example, using the qualifier notation, you can combine the following Java code with the following transaction manager bean declarations in the application context:

public class TransactionalService {

    @Transactional("order")
    public void setSomething(String name) { ... }

    @Transactional("account")
    public void doSomething() { ... }
}

The following listing shows the bean declaration:

<tx:annotation-driven/>

<bean id="transactionManager1" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    ...
    <qualifier value="order"/>
</bean>

<bean id="transactionManager2" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    ...
    <qualifier value="account"/>
</bean>

In this case, the two methods on TransactionalService run under separate transaction managers, differentiated by the order and account qualifiers. The default <tx:annotation-driven/> target bean name, transactionManager, is still used if no specifically qualified PlatformTransactionManager bean is found.

Custom shortcut annotations
If you find you repeatedly use the same attributes with @Transactional on many different methods, Spring's meta-annotation support lets you define custom shortcut annotations for your specific use cases. For example, consider the following annotation definitions:

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional("order")
public @interface OrderTx {
}

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional("account")
public @interface AccountTx {
}

The preceding annotations let us write the example from the previous section as follows:

public class TransactionalService {

    @OrderTx
    public void setSomething(String name) {
        // ...
    }

    @AccountTx
    public void doSomething() {
        // ...
    }
}

In the preceding example, we used the syntax to define the transaction manager qualifier, but we could also have included propagation behavior, rollback rules, timeouts, and other features.
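Conceptually, the shortcut works because the framework reads the transactional metadata off the annotation type itself. The following self-contained sketch (with a hypothetical @Tx standing in for @Transactional; Spring's real lookup is more elaborate) shows one level of meta-annotation resolution with plain reflection:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class MetaAnnotationDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Tx { String value(); } // stand-in for @Transactional

    @Retention(RetentionPolicy.RUNTIME)
    @Tx("order")
    @interface OrderTx {} // custom shortcut annotation

    static class TransactionalService {
        @OrderTx
        public void setSomething(String name) { /* ... */ }
    }

    // Resolve the qualifier by walking the method's annotations and looking
    // for @Tx declared on each annotation type (one level of meta-annotation).
    static String resolveQualifier(Method method) {
        for (Annotation ann : method.getAnnotations()) {
            Tx tx = ann.annotationType().getAnnotation(Tx.class);
            if (tx != null) {
                return tx.value();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Method m = TransactionalService.class.getMethod("setSomething", String.class);
        System.out.println(resolveQualifier(m)); // prints "order"
    }
}
```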

1.4.7 Transaction propagation
This section describes some semantics of transaction propagation in Spring. Note that this section is not a proper introduction to transaction propagation; rather, it details some of the semantics regarding transaction propagation in Spring.

In Spring-managed transactions, be aware of the difference between physical transactions and logical transactions, and how the propagation settings apply to this difference.

Understanding PROPAGATION_REQUIRED

PROPAGATION_REQUIRED enforces a physical transaction: either locally for the current scope, if no transaction exists yet, or by participating in an existing "outer" transaction defined for a larger scope. This is a fine default in common call stack arrangements within the same thread (for example, a service facade that delegates to several repository methods, where all the underlying resources have to participate in the service-level transaction).

Note: by default, a participating transaction joins the characteristics of the outer scope, silently ignoring the local isolation level, timeout value, or read-only flag (if any). Consider switching the validateExistingTransaction flag to true on your transaction manager if you want isolation level declarations to be rejected when participating in an existing transaction with a different isolation level. This non-lenient mode also rejects read-only mismatches (that is, an inner read-write transaction trying to participate in a read-only outer scope).

When the propagation setting is PROPAGATION_REQUIRED, a logical transaction scope is created for each method upon which the setting is applied. Each such logical transaction scope can determine rollback-only status individually, with an outer transaction scope being logically independent from the inner transaction scope. In the case of standard PROPAGATION_REQUIRED behavior, all these scopes are mapped to the same physical transaction. So a rollback-only marker set in the inner transaction scope does affect the outer transaction's chance to actually commit.

However, in the case where an inner transaction scope sets the rollback-only marker, the outer transaction has not decided on the rollback itself, so the rollback (silently triggered by the inner transaction scope) is unexpected. A corresponding UnexpectedRollbackException is thrown at that point. This is expected behavior, so that the caller of a transaction can never be misled to assume that a commit was performed when it really was not. So, if an inner transaction (of which the outer caller is not aware) silently marks a transaction as rollback-only, the outer caller still calls commit. The outer caller needs to receive an UnexpectedRollbackException to indicate clearly that a rollback was performed instead.
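The interplay described above can be sketched as a toy model (not Spring's implementation; all names hypothetical): an inner scope marks the shared physical transaction rollback-only, and the outer scope's commit attempt surfaces this instead of silently "succeeding":

```java
public class RollbackOnlyDemo {

    static class UnexpectedRollbackException extends RuntimeException {
        UnexpectedRollbackException(String msg) { super(msg); }
    }

    static class PhysicalTransaction {
        private boolean rollbackOnly;

        void setRollbackOnly() { this.rollbackOnly = true; }

        void commit() {
            if (rollbackOnly) {
                rollback();
                // The outer caller must learn that a rollback happened.
                throw new UnexpectedRollbackException(
                        "Transaction was marked as rollback-only");
            }
            // otherwise: actually commit
        }

        void rollback() { /* undo work, release resources */ }
    }

    // Returns true if the commit attempt correctly reported the rollback.
    static boolean demo() {
        PhysicalTransaction tx = new PhysicalTransaction();
        tx.setRollbackOnly(); // the inner logical scope failed
        try {
            tx.commit();      // the outer scope still calls commit
            return false;
        } catch (UnexpectedRollbackException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```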

Understanding PROPAGATION_REQUIRES_NEW

PROPAGATION_REQUIRES_NEW, in contrast to PROPAGATION_REQUIRED, always uses an independent physical transaction for each affected transaction scope, never participating in an existing transaction for an outer scope. In such an arrangement, the underlying resource transactions are different and, hence, can commit or roll back independently, with an outer transaction not affected by an inner transaction's rollback status and with an inner transaction's locks released immediately after its completion. Such an independent inner transaction can also declare its own isolation level, timeout, and read-only settings and not inherit an outer transaction's characteristics.

Understanding PROPAGATION_NESTED
PROPAGATION_NESTED uses a single physical transaction with multiple savepoints that it can roll back to. Such partial rollbacks let an inner transaction scope trigger a rollback for its scope, with the outer transaction being able to continue the physical transaction despite some operations having been rolled back. This setting is typically mapped onto JDBC savepoints, so it works only with JDBC resource transactions. See Spring's DataSourceTransactionManager.
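To make the savepoint mechanics concrete, the following toy model (plain Java, no JDBC; hypothetical names) mimics how an inner nested scope rolls back to a savepoint while the outer transaction commits the remaining work:

```java
import java.util.ArrayList;
import java.util.List;

// One physical transaction whose inner scopes are savepoints: rolling back
// to a savepoint discards only the work done after it.
public class SavepointDemo {

    private final List<String> work = new ArrayList<>();

    int createSavepoint() { return work.size(); }

    void execute(String statement) { work.add(statement); }

    void rollbackTo(int savepoint) {
        work.subList(savepoint, work.size()).clear();
    }

    List<String> commit() { return work; }

    static List<String> runDemo() {
        SavepointDemo tx = new SavepointDemo();
        tx.execute("INSERT outer-1");
        int sp = tx.createSavepoint();   // inner (nested) scope begins
        tx.execute("INSERT inner-1");
        tx.rollbackTo(sp);               // inner scope fails: partial rollback
        tx.execute("INSERT outer-2");    // outer transaction continues
        return tx.commit();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // [INSERT outer-1, INSERT outer-2]
    }
}
```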

1.4.8 Advising transactional operations
Suppose you want to run both a transactional operation and some basic profiling advice. How do you effect this in the context of <tx:annotation-driven/>?
When you invoke the updateFoo(Foo) method, you want to see the following actions:

  • The configured profiling aspect starts.
  • The transactional advice runs.
  • The method on the advised object runs.
  • The transaction commits.
  • The profiling aspect reports the exact duration of the whole transactional method invocation.

This chapter is not concerned with any detailed explanation of AOP (except as it applies to transactions). See the AOP section for detailed coverage of AOP and its configuration.

The following code shows the simple profiling aspect discussed earlier:

package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;
import org.springframework.core.Ordered;

public class SimpleProfiler implements Ordered {

    private int order;

    // allows us to control the ordering of advice
    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    // this method is the around advice
    public Object profile(ProceedingJoinPoint call) throws Throwable {
        Object returnValue;
        StopWatch clock = new StopWatch(getClass().getName());
        try {
            clock.start(call.toShortString());
            returnValue = call.proceed();
        } finally {
            clock.stop();
            System.out.println(clock.prettyPrint());
        }
        return returnValue;
    }
}

The ordering of advice is controlled through the Ordered interface. For full details on advice ordering, see Advice ordering.
The following configuration creates a fooService bean that has the profiling and transactional aspects applied to it in the desired order:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- this is the aspect -->
    <bean id="profiler" class="x.y.SimpleProfiler">
        <!-- execute before the transactional advice (hence the lower order number) -->
        <property name="order" value="1"/>
    </bean>

    <tx:annotation-driven transaction-manager="txManager" order="200"/>

    <aop:config>
            <!-- this advice will execute around the transactional advice -->
            <aop:aspect id="profilingAspect" ref="profiler">
                <aop:pointcut id="serviceMethodWithReturnValue"
                        expression="execution(!void x.y..*Service.*(..))"/>
                <aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
            </aop:aspect>
    </aop:config>

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
        <property name="url" value="jdbc:oracle:thin:@rj-t42:1521:elvis"/>
        <property name="username" value="scott"/>
        <property name="password" value="tiger"/>
    </bean>

    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
    </bean>

</beans>

You can configure any number of additional aspects in a similar way.
The following example creates the same settings as the previous two, but uses the pure XML declaration method:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/tx
        https://www.springframework.org/schema/tx/spring-tx.xsd
        http://www.springframework.org/schema/aop
        https://www.springframework.org/schema/aop/spring-aop.xsd">

    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- the profiling advice -->
    <bean id="profiler" class="x.y.SimpleProfiler">
        <!-- execute before the transactional advice (hence the lower order number) -->
        <property name="order" value="1"/>
    </bean>

    <aop:config>
        <aop:pointcut id="entryPointMethod" expression="execution(* x.y..*Service.*(..))"/>
        <!-- will execute after the profiling advice (c.f. the order attribute) -->

        <aop:advisor advice-ref="txAdvice" pointcut-ref="entryPointMethod" order="2"/>
        <!-- order value is higher than the profiling aspect -->

        <aop:aspect id="profilingAspect" ref="profiler">
            <aop:pointcut id="serviceMethodWithReturnValue"
                    expression="execution(!void x.y..*Service.*(..))"/>
            <aop:around method="profile" pointcut-ref="serviceMethodWithReturnValue"/>
        </aop:aspect>

    </aop:config>

    <tx:advice id="txAdvice" transaction-manager="txManager">
        <tx:attributes>
            <tx:method name="get*" read-only="true"/>
            <tx:method name="*"/>
        </tx:attributes>
    </tx:advice>

    <!-- other <bean/> definitions such as a DataSource and a PlatformTransactionManager here -->

</beans>

The result of the preceding configuration is a fooService bean that has profiling and transactional aspects applied to it in that order. If you want the profiling advice to run after the transactional advice on the way in and before the transactional advice on the way out, you can swap the value of the profiling aspect bean's order property so that it is higher than the transactional advice's order value.

You can configure other aspects in a similar way.

1.4.9. Using @Transactional with AspectJ
You can also use the Spring Framework's @Transactional support outside of a Spring container by means of an AspectJ aspect. To do so, first annotate your classes (and, optionally, your classes' methods) with the @Transactional annotation, and then link (weave) your application with the org.springframework.transaction.aspectj.AnnotationTransactionAspect defined in the spring-aspects.jar file. You must also configure the aspect with a transaction manager. You can use the Spring Framework's IoC container to take care of dependency-injecting the aspect. The simplest way to configure the transaction management aspect is to use the <tx:annotation-driven/> element and specify the mode attribute as aspectj, as described in Using @Transactional. Because we focus here on applications that run outside of a Spring container, we show you how to do it programmatically.

Note: before continuing, you may want to read Using @Transactional and AOP, respectively.

The following example shows how to create a transaction manager and configure the AnnotationTransactionAspect to use it:

// construct an appropriate transaction manager
DataSourceTransactionManager txManager = new DataSourceTransactionManager(getDataSource());

// configure the AnnotationTransactionAspect to use it; this must be done before executing any transactional methods
AnnotationTransactionAspect.aspectOf().setTransactionManager(txManager);

Note: when you use this aspect, you must annotate the implementation class (or the methods within that class, or both), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.

The @Transactional annotation on a class specifies the default transaction semantics for the execution of any public method in the class.
The @Transactional annotation on a method within the class overrides the default transaction semantics given by the class annotation (if present). You can annotate any method, regardless of its visibility.
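As an illustrative sketch (the DefaultFooService class echoes the earlier examples, but the Foo type and method bodies here are hypothetical), class-level defaults combined with a method-level override might look like this:

```java
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// class-level annotation: default semantics for all public methods
@Transactional(readOnly = true)
public class DefaultFooService {

    // inherits the class-level read-only semantics
    public Foo getFoo(String fooName) {
        // ...
        return null;
    }

    // method-level annotation overrides the class-level default
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // ...
    }
}
```

Note that, as the preceding paragraph states, this works even if updateFoo were non-public when woven with AspectJ.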

To weave your application with the AnnotationTransactionAspect, you must either build your application with AspectJ (see the AspectJ Development Guide) or use load-time weaving. For a discussion of load-time weaving with AspectJ, see Load-time weaving with AspectJ in the Spring Framework.

1.5 Programmatic transaction management
The Spring Framework provides two means of programmatic transaction management:

  • The TransactionTemplate.
  • A PlatformTransactionManager implementation directly.

The Spring team generally recommends the TransactionTemplate for programmatic transaction management. The second approach is similar to using the JTA UserTransaction API, although exception handling is less cumbersome.

1.5.1 Using the TransactionTemplate
The TransactionTemplate adopts the same approach as other Spring templates, such as the JdbcTemplate. It uses a callback approach (to free application code from having to do the boilerplate acquisition and release of transactional resources) and results in code that is intention driven, in that your code focuses solely on what you want to do.

Note: as the examples that follow show, using the TransactionTemplate absolutely couples you to Spring's transaction infrastructure and APIs. Whether or not programmatic transaction management is suitable for your development needs is a decision that you have to make yourself.

Application code that must run in a transactional context and that explicitly uses the TransactionTemplate resembles the next example. You, as an application developer, can write a TransactionCallback implementation (typically expressed as an anonymous inner class) that contains the code that you need to run in the context of a transaction. You can then pass an instance of your custom TransactionCallback to the execute(..) method exposed on the TransactionTemplate. The following example shows how to do so:

public class SimpleService implements Service {

    // single TransactionTemplate shared amongst all methods in this instance
    private final TransactionTemplate transactionTemplate;

    // use constructor-injection to supply the PlatformTransactionManager
    public SimpleService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public Object someServiceMethod() {
        return transactionTemplate.execute(new TransactionCallback() {
            // the code in this method executes in a transactional context
            public Object doInTransaction(TransactionStatus status) {
                updateOperation1();
                return resultOfUpdateOperation2();
            }
        });
    }
}

If there is no return value, you can use the convenient TransactionCallbackWithoutResult class and an anonymous class as follows:

transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        updateOperation1();
        updateOperation2();
    }
});

The code in the callback can roll back the transaction by calling the setRollbackOnly() method on the provided TransactionStatus object, as follows:

transactionTemplate.execute(new TransactionCallbackWithoutResult() {

    protected void doInTransactionWithoutResult(TransactionStatus status) {
        try {
            updateOperation1();
            updateOperation2();
        } catch (SomeBusinessException ex) {
            status.setRollbackOnly();
        }
    }
});

Specifying transaction settings
You can specify transaction settings (such as the propagation mode, the isolation level, the timeout, and so forth) on the TransactionTemplate either programmatically or in configuration. By default, TransactionTemplate instances have the default transactional settings. The following example shows the programmatic customization of the transactional settings for a specific TransactionTemplate:

public class SimpleService implements Service {

    private final TransactionTemplate transactionTemplate;

    public SimpleService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);

        // the transaction settings can be set here explicitly if so desired
        this.transactionTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
        this.transactionTemplate.setTimeout(30); // 30 seconds
        // and so forth...
    }
}

The following example defines a TransactionTemplate with some custom transaction settings by using the Spring XML configuration:

<bean id="sharedTransactionTemplate"
        class="org.springframework.transaction.support.TransactionTemplate">
    <property name="isolationLevelName" value="ISOLATION_READ_UNCOMMITTED"/>
    <property name="timeout" value="30"/>
</bean>

You can then inject the shared transaction template into any number of services you need.
Finally, instances of the TransactionTemplate class are thread-safe, in that instances do not maintain any conversational state. TransactionTemplate instances do, however, maintain configuration state. So, while a number of classes may share a single instance of a TransactionTemplate, if a class needs to use a TransactionTemplate with different settings (for example, a different isolation level), you need to create two distinct TransactionTemplate instances.
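To illustrate the last point, the following sketch (the ReportingService class and its field names are invented for this example) keeps one template per distinct set of settings rather than reconfiguring a shared instance:

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class ReportingService {

    // one TransactionTemplate per distinct set of transaction settings
    private final TransactionTemplate readOnlyTemplate;
    private final TransactionTemplate serializableTemplate;

    public ReportingService(PlatformTransactionManager transactionManager) {
        this.readOnlyTemplate = new TransactionTemplate(transactionManager);
        this.readOnlyTemplate.setReadOnly(true);

        this.serializableTemplate = new TransactionTemplate(transactionManager);
        this.serializableTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
    }
}
```

Both templates can safely share the same PlatformTransactionManager, since the configuration state lives on the templates themselves.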

1.5.2 Using the PlatformTransactionManager
You can also use the org.springframework.transaction.PlatformTransactionManager directly to manage your transaction. To do so, pass the implementation of the PlatformTransactionManager you use to your bean through a bean reference. Then, by using the TransactionDefinition and TransactionStatus objects, you can initiate transactions, roll back, and commit. The following example shows how to do so:

DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

TransactionStatus status = txManager.getTransaction(def);
try {
    // execute your business logic here
}
catch (MyException ex) {
    txManager.rollback(status);
    throw ex;
}
txManager.commit(status);

1.6 Choosing between programmatic and declarative transaction management
Programmatic transaction management is usually a good idea only if you have a small number of transactional operations. For example, if you have a web application that requires transactions only for certain update operations, you may not want to set up transactional proxies by using Spring or any other technology. In this case, using the TransactionTemplate may be a good approach. Being able to set the transaction name explicitly is also something that can be done only by using the programmatic approach to transaction management.

On the other hand, if your application has numerous transactional operations, declarative transaction management is usually worthwhile. It keeps transaction management out of business logic and is not difficult to configure. When using the Spring Framework, rather than EJB CMT, the configuration cost of declarative transaction management is greatly reduced.

1.7 Transaction-bound events
As of Spring 4.2, the listener of an event can be bound to a phase of the transaction. The typical example is to handle the event when the transaction has completed successfully. Doing so lets events be used with more flexibility when the outcome of the current transaction actually matters to the listener.

You can register a regular event listener by using the @EventListener annotation. If you need to bind it to the transaction, use @TransactionalEventListener. When you do so, the listener is bound to the commit phase of the transaction by default.

The next example shows this concept. Suppose that the component publishes an order creation event, and we want to define a listener that should only handle the event after the transaction that publishes the event has successfully committed. The following example sets an event listener like this:

@Component
public class MyComponent {

    @TransactionalEventListener
    public void handleOrderCreatedEvent(CreationEvent<Order> creationEvent) {
        // ...
    }
}

The @TransactionalEventListener annotation exposes a phase attribute that lets you customize the phase of the transaction to which the listener should be bound. The valid phases are BEFORE_COMMIT, AFTER_COMMIT (the default), AFTER_ROLLBACK, and AFTER_COMPLETION, which aggregates the transaction completion (be it a commit or a rollback).

If no transaction is running, the listener is not invoked at all, since we cannot honor the required semantics. You can, however, override that behavior by setting the fallbackExecution attribute of the annotation to true.
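A sketch of both attributes, reusing the CreationEvent and Order types from the preceding example (the component and method names here are invented):

```java
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
public class MyOtherComponent {

    // invoked only if the publishing transaction rolls back
    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void handleFailedOrderCreation(CreationEvent<Order> creationEvent) {
        // ...
    }

    // invoked even when no transaction is running
    @TransactionalEventListener(fallbackExecution = true)
    public void handleOrderCreatedEvent(CreationEvent<Order> creationEvent) {
        // ...
    }
}
```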

1.8 Application server-specific integration
Spring's transaction abstraction is generally application server-agnostic. Additionally, Spring's JtaTransactionManager class (which can optionally perform a JNDI lookup for the JTA UserTransaction and TransactionManager objects) autodetects the location for the latter object, which varies by application server. Having access to the JTA TransactionManager allows for enhanced transaction semantics, in particular support for transaction suspension. See the JtaTransactionManager javadoc for details.

Spring's JtaTransactionManager is the standard choice to run on Java EE application servers and is known to work on all common servers. Advanced functionality, such as transaction suspension, works on many servers as well (including GlassFish, JBoss, and Geronimo) without any special configuration required. However, for fully supported transaction suspension and further advanced integration, Spring includes special adapters for WebLogic Server and WebSphere. These adapters are discussed in the following sections.

For standard scenarios, including WebLogic Server and WebSphere, consider using the convenient <tx:jta-transaction-manager/> configuration element. When configured, this element automatically detects the underlying server and chooses the best transaction manager available for the platform. This means that you need not explicitly configure server-specific adapter classes (as discussed in the following sections). Rather, they are chosen automatically, with the standard JtaTransactionManager as the default fallback.
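As a minimal configuration sketch (the surrounding <beans> element and namespace declarations are as in the earlier examples), this element can replace an explicit transaction-manager bean definition:

```xml
<!-- detects WebSphere/WebLogic and picks the matching JtaTransactionManager
     variant, falling back to the standard JtaTransactionManager elsewhere -->
<tx:jta-transaction-manager/>

<!-- the resulting bean is registered under the name "transactionManager" -->
<tx:annotation-driven transaction-manager="transactionManager"/>
```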

1.8.1 IBM WebSphere
On WebSphere 6.1.0.9 and above, the recommended Spring JTA transaction manager to use is WebSphereUowTransactionManager. This special adapter uses IBM's UOWManager API, which is available in WebSphere Application Server 6.1.0.9 and later. With this adapter, Spring-driven transaction suspension (suspend and resume as initiated by PROPAGATION_REQUIRES_NEW) is officially supported by IBM.

1.8.2 Oracle WebLogic Server
On WebLogic Server 9.0 or above, you would typically use the WebLogicJtaTransactionManager instead of the stock JtaTransactionManager class. This special WebLogic-specific subclass of the normal JtaTransactionManager supports the full power of Spring's transaction definitions in a WebLogic-managed transaction environment, beyond standard JTA semantics. Features include transaction names, per-transaction isolation levels, and proper resumption of transactions in all cases.

1.9 Solutions to common problems
This section describes solutions to some common problems.

1.9.1 Using the wrong transaction manager for a specific DataSource
Use the correct PlatformTransactionManager implementation based on your choice of transactional technologies and requirements. Used properly, the Spring Framework merely provides a straightforward and portable abstraction. If you use global transactions, you must use the org.springframework.transaction.jta.JtaTransactionManager class (or an application server-specific subclass of it) for all your transactional operations. Otherwise, the transaction infrastructure tries to perform local transactions on resources such as container DataSource instances. Such local transactions do not make sense, and a good application server treats them as errors.

1.10 Further resources
For more information about transaction support of the Spring framework, see:

  • Distributed transactions in Spring, with and without XA is a JavaWorld presentation in which Spring's David Syer guides you through seven patterns for distributed transactions in Spring applications, three of them with XA and four without.
  • Java Transaction Design Strategies is a book available from InfoQ that provides a well-paced introduction to transactions in Java. It also includes side-by-side examples of how to configure and use transactions with both the Spring Framework and EJB3.

2. DAO support
Data access object (DAO) support in Spring is designed to simplify the use of data access technologies (such as JDBC, Hibernate, or JPA) in a consistent way. This makes it fairly easy for you to switch between the persistence techniques described above, and it also eliminates the need to worry about catching exceptions specific to each technique.

2.1 consistent exception hierarchy
Spring provides a convenient transformation from technology specific exceptions (such as SQLException) to its own exception class hierarchy, which takes DataAccessException as the root exception. These exceptions encapsulate the original exception so there is no risk of losing any information about possible errors.

In addition to JDBC exceptions, Spring can also wrap JPA- and Hibernate-specific exceptions, converting them to a set of focused runtime exceptions. This lets you handle most non-recoverable persistence exceptions in only the appropriate layers, without annoying boilerplate catch-and-throw blocks and exception declarations in your DAOs. (You can still trap and handle exceptions anywhere you need to, though.) As mentioned above, JDBC exceptions (including database-specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.

The preceding discussion holds true for the various template classes in Spring's support for various ORM frameworks. If you use the interceptor-based classes, the application must care about handling HibernateException and PersistenceException itself, preferably by delegating to the convertHibernateAccessException(..) or convertJpaAccessException() methods of SessionFactoryUtils and EntityManagerFactoryUtils, respectively. These methods convert the exceptions to exceptions that are compatible with the exceptions in the org.springframework.dao exception hierarchy. As PersistenceException instances are unchecked, they can simply get thrown, too (sacrificing generic DAO abstraction in terms of exceptions, though).

The following image shows the exception hierarchy that Spring provides. (Note that the class hierarchy detailed in the image shows only a subset of the entire DataAccessException hierarchy.)

[Figure: the DataAccessException hierarchy]

2.2 Annotations for configuring DAO or repository classes
The best way to guarantee that your data access objects (DAOs) or repositories provide exception translation is to use the @Repository annotation. This annotation also lets the component scanning support find and configure your DAOs and repositories without having to provide XML configuration entries for them. The following example shows how to use the @Repository annotation:

@Repository 
public class SomeMovieFinder implements MovieFinder {
    // ...
}

Any DAO or repository implementation needs access to a persistence resource, depending on the persistence technology used. For example, a JDBC-based repository needs access to a JDBC DataSource, and a JPA-based repository needs access to an EntityManager. The easiest way to accomplish this is to have this resource dependency injected by using one of the @Autowired, @Inject, @Resource, or @PersistenceContext annotations. The following example works for a JPA repository:

@Repository
public class JpaMovieFinder implements MovieFinder {

    @PersistenceContext
    private EntityManager entityManager;

    // ...
}

If you use the classic Hibernate APIs, you can inject SessionFactory, as the following example shows:

@Repository
public class HibernateMovieFinder implements MovieFinder {

    private SessionFactory sessionFactory;

    @Autowired
    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // ...
}

The last example we show here is for typical JDBC support. You could have the DataSource injected into an initialization method or a constructor, where you would create a JdbcTemplate and other data access support classes (such as SimpleJdbcCall and others) by using this DataSource. The following example autowires a DataSource:

@Repository
public class JdbcMovieFinder implements MovieFinder {

    private JdbcTemplate jdbcTemplate;

    @Autowired
    public void init(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // ...
}

Note: for details on how to configure the application context to take advantage of these annotations, see the specific coverage of each persistence technology.

3. Using JDBC to access data
The value provided by the Spring Framework JDBC abstraction is perhaps best shown by the sequence of actions outlined in the following table. The table shows which actions Spring takes care of and which actions are your responsibility.

Table 4. Spring JDBC - who does what?

Action                                                      Spring   You

Define connection parameters.                                          X
Open the connection.                                          X
Specify the SQL statement.                                             X
Declare parameters and provide parameter values.                       X
Prepare and execute the statement.                            X
Set up the loop to iterate through the results (if any).      X
Do the work for each iteration.                                        X
Process any exception.                                        X
Handle transactions.                                          X
Close the connection, the statement, and the resultset.       X
The Spring Framework takes care of all the low-level details that can make JDBC such a tedious API.

3.1 Choosing an approach for JDBC database access
You can choose among several approaches to form the basis for your JDBC database access. In addition to three flavors of the JdbcTemplate, a new SimpleJdbcInsert and SimpleJdbcCall approach takes advantage of database metadata, and the RDBMS Object style takes a more object-oriented approach similar in appearance to JDO Query design. Once you start using one of these approaches, you can still mix and match to include a feature from a different approach. All approaches require a JDBC 2.0-compliant driver, and some advanced features require a JDBC 3.0 driver.

  • JdbcTemplate is the classic and most popular Spring JDBC method. This "lowest level" approach and all others use the JdbcTemplate behind the scenes.
  • NamedParameterJdbcTemplate wraps a JdbcTemplate to provide named parameters instead of the traditional JDBC ? placeholders. This approach provides better documentation and ease of use when you have multiple parameters for an SQL statement.
  • SimpleJdbcInsert and SimpleJdbcCall optimize database metadata to limit the number of necessary configurations. This approach simplifies coding, so you only need to provide the name of the table or procedure, and the parameter mapping that matches the column name. This works only if the database provides enough metadata. If the database does not provide this metadata, you must provide an explicit configuration of the parameters.
  • RDBMS objects, including MappingSqlQuery, SqlUpdate, and StoredProcedure, require you to create reusable, thread safe objects during initialization of the data access layer. This method mimics a JDO query, where query strings are defined, parameters are declared, and queries are compiled. Once you do, you can call the execute method multiple times with various parameter values.
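As a brief sketch of the metadata-driven style (the t_actor table and its first_name/last_name columns are borrowed from examples later in this chapter; the ActorDao class is invented), SimpleJdbcInsert needs only the table name, reading the column metadata from the database:

```java
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;

public class ActorDao {

    private final SimpleJdbcInsert insertActor;

    public ActorDao(DataSource dataSource) {
        // only the table name is configured; columns come from database metadata
        this.insertActor = new SimpleJdbcInsert(dataSource).withTableName("t_actor");
    }

    public void add(String firstName, String lastName) {
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("first_name", firstName);
        parameters.put("last_name", lastName);
        this.insertActor.execute(parameters);
    }
}
```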

3.2 Package hierarchy
The Spring Framework's JDBC abstraction framework consists of four different packages:

  • core: The org.springframework.jdbc.core package contains the JdbcTemplate class and its various callback interfaces, plus a variety of related classes. A subpackage named org.springframework.jdbc.core.simple contains the SimpleJdbcInsert and SimpleJdbcCall classes. Another subpackage named org.springframework.jdbc.core.namedparam contains the NamedParameterJdbcTemplate class and the related support classes. See Using JDBC core classes to control basic JDBC processing and error handling, JDBC batch operations, and Simplifying JDBC operations with the SimpleJdbc classes.
  • datasource: The org.springframework.jdbc.datasource package contains a utility class for easy DataSource access and various simple DataSource implementations that you can use for testing and running unmodified JDBC code outside of a Java EE container. A subpackage named org.springframework.jdbc.datasource.embedded provides support for creating embedded databases by using Java database engines, such as HSQL, H2, and Derby. See Controlling database connections and Embedded database support.
  • object: The org.springframework.jdbc.object package contains classes that represent RDBMS queries, updates, and stored procedures as thread-safe, reusable objects. See Modeling JDBC operations as Java objects. This approach is modeled by JDO, although objects returned by queries are naturally disconnected from the database. This higher level of JDBC abstraction depends on the lower-level abstraction in the org.springframework.jdbc.core package.
  • support: The org.springframework.jdbc.support package provides SQLException translation functionality and some utility classes. Exceptions thrown during JDBC processing are translated to exceptions defined in the org.springframework.dao package. This means that code using the Spring JDBC abstraction layer does not need to implement JDBC- or RDBMS-specific error handling. All translated exceptions are unchecked, which gives you the option of catching the exceptions from which you can recover while letting other exceptions be propagated to the caller. See Using SQLExceptionTranslator.

3.3 Using JDBC core classes to control basic JDBC processing and error handling
This section describes how to use JDBC core classes to control basic JDBC processing, including error handling. It includes the following topics:

3.3.1 Using JdbcTemplate
JdbcTemplate is the central class in the JDBC core package. It handles the creation and release of resources, which helps you avoid common errors, such as forgetting to close the connection. It performs the basic tasks of the core JDBC workflow (such as statement creation and execution), leaving application code to provide SQL and extract results. The JdbcTemplate class:

  • Runs SQL queries
  • Updates statements and stored procedure calls
  • Performs iteration over ResultSet instances and extraction of returned parameter values
  • Catches JDBC exceptions and translates them to the generic, more informative, exception hierarchy defined in the org.springframework.dao package. (See Consistent exception hierarchy.)

When you use the JdbcTemplate for your code, you need only to implement callback interfaces, giving them a clearly defined contract. Given a Connection provided by the JdbcTemplate class, the PreparedStatementCreator callback interface creates a prepared statement, providing SQL and any necessary parameters. The same is true for the CallableStatementCreator interface, which creates callable statements. The RowCallbackHandler interface extracts values from each row of a ResultSet.

You can use JdbcTemplate within a DAO implementation through direct instantiation with a DataSource reference, or you can configure it in a Spring IoC container and give it to DAOs as a bean reference.

Note: the DataSource should always be configured as a bean in the Spring IoC container. In the first case the bean is given to the service directly; in the second case it is given to the prepared template.

All SQL issued by this class is logged at the DEBUG level under the category corresponding to the fully qualified class name of the template instance (typically JdbcTemplate, but it may be different if you use a custom subclass of the JdbcTemplate class).

The following sections provide some examples of JdbcTemplate usage. These examples are not an exhaustive list of all of the functionality exposed by the JdbcTemplate. See the attendant javadoc for that.

Querying (SELECT)
The following query gets the number of rows in the relationship:

int rowCount = this.jdbcTemplate.queryForObject("select count(*) from t_actor", Integer.class);

The following queries use bound variables:

int countOfActorsNamedJoe = this.jdbcTemplate.queryForObject(
        "select count(*) from t_actor where first_name = ?", Integer.class, "Joe");

The following query finds a string:

String lastName = this.jdbcTemplate.queryForObject(
        "select last_name from t_actor where id = ?",
        new Object[]{1212L}, String.class);

The following query finds and populates a single domain object:

Actor actor = this.jdbcTemplate.queryForObject(
        "select first_name, last_name from t_actor where id = ?",
        new Object[]{1212L},
        new RowMapper<Actor>() {
            public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
                Actor actor = new Actor();
                actor.setFirstName(rs.getString("first_name"));
                actor.setLastName(rs.getString("last_name"));
                return actor;
            }
        });

The following query finds and populates a large number of domain objects:

List<Actor> actors = this.jdbcTemplate.query(
        "select first_name, last_name from t_actor",
        new RowMapper<Actor>() {
            public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
                Actor actor = new Actor();
                actor.setFirstName(rs.getString("first_name"));
                actor.setLastName(rs.getString("last_name"));
                return actor;
            }
        });

If the last two snippets of code actually existed in the same application, it would make sense to remove the duplication present in the two RowMapper anonymous inner classes and extract them out into a single class (typically a static nested class) that can then be referenced by DAO methods as needed. For example, it may be better to write the preceding code snippet as follows:

public List<Actor> findAllActors() {
    return this.jdbcTemplate.query( "select first_name, last_name from t_actor", new ActorMapper());
}

private static final class ActorMapper implements RowMapper<Actor> {

    public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
        Actor actor = new Actor();
        actor.setFirstName(rs.getString("first_name"));
        actor.setLastName(rs.getString("last_name"));
        return actor;
    }
}

Updating (INSERT, UPDATE, and DELETE) with JdbcTemplate

You can use the update(..) method to perform insert, update, and delete operations. Parameter values are usually provided as varargs or, alternatively, as an object array.
The following example inserts a new entry:

this.jdbcTemplate.update(
        "insert into t_actor (first_name, last_name) values (?, ?)",
        "Leonor", "Watling");

The following example updates an existing entry:

this.jdbcTemplate.update(
        "update t_actor set last_name = ? where id = ?",
        "Banjo", 5276L);

The following example deletes an entry:

this.jdbcTemplate.update(
        "delete from actor where id = ?",
        Long.valueOf(actorId));

Other JdbcTemplate operations
You can use the execute(..) method to run any arbitrary SQL. Consequently, the method is often used for DDL statements. It is heavily overloaded with variants that take callback interfaces, binding variable arrays, and so on. The following example creates a table:

this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");

The following example calls a stored procedure:

this.jdbcTemplate.update(
        "call SUPPORT.REFRESH_ACTORS_SUMMARY(?)",
        Long.valueOf(unionId));

More complex stored procedure support will be covered later.

JdbcTemplate best practices
Once configured, instances of the JdbcTemplate class are thread-safe. This is important because it means that you can configure a single instance of a JdbcTemplate and then safely inject this shared reference into multiple DAOs (or repositories). The JdbcTemplate is stateful, in that it maintains a reference to a DataSource, but this state is not conversational state.

A common practice when using the JdbcTemplate class (and the associated NamedParameterJdbcTemplate class) is to configure a DataSource in your Spring configuration file and then dependency-inject that shared DataSource bean into your DAO classes. The JdbcTemplate is created in the setter for the DataSource. This leads to DAOs that resemble the following:

public class JdbcCorporateEventDao implements CorporateEventDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // JDBC-backed implementations of the methods on the CorporateEventDao follow...
}

The following example shows the corresponding XML configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        https://www.springframework.org/schema/context/spring-context.xsd">

    <bean id="corporateEventDao" class="com.example.JdbcCorporateEventDao">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </bean>

    <context:property-placeholder location="jdbc.properties"/>

</beans>

An alternative to explicit configuration is to use component scanning and annotation support for dependency injection. In this case, you can annotate the class with @Repository (which makes it a candidate for component scanning) and annotate the DataSource setter method with @Autowired. The following example shows how to do so:

@Repository 
public class JdbcCorporateEventDao implements CorporateEventDao {

    private JdbcTemplate jdbcTemplate;

    @Autowired 
    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource); 
    }

    // JDBC-backed implementations of the methods on the CorporateEventDao follow...
}
  • Annotate the class with @Repository.
  • Annotate the DataSource setter method with @Autowired.
  • Create a new JdbcTemplate with the DataSource.

The following example shows the corresponding XML configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        https://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        https://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Scans within the base package of the application for @Component classes to configure as beans -->
    <context:component-scan base-package="org.springframework.docs.test" />

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </bean>

    <context:property-placeholder location="jdbc.properties"/>

</beans>

If you use Spring's JdbcDaoSupport class and your various JDBC-backed DAO classes extend from it, your subclass inherits a setDataSource(..) method from the JdbcDaoSupport class. You can choose whether to inherit from this class. The JdbcDaoSupport class is provided as a convenience only.
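As a sketch of this convenience (reusing the CorporateEventDao example from above), a JdbcDaoSupport-based DAO needs no DataSource setter of its own:

```java
public class JdbcCorporateEventDao extends JdbcDaoSupport implements CorporateEventDao {

    // setDataSource(..) is inherited from JdbcDaoSupport; inject the
    // DataSource exactly as before and call getJdbcTemplate() inside
    // the DAO methods.

    // JDBC-backed implementations of the methods on the CorporateEventDao follow...
}
```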

Regardless of which of these template initialization styles you choose to use (or not), you seldom need to create a new instance of the JdbcTemplate class each time you want to run SQL. Once configured, a JdbcTemplate instance is thread-safe. If your application accesses multiple databases, you may want multiple JdbcTemplate instances, which requires multiple DataSources and, subsequently, multiple differently configured JdbcTemplate instances.

3.3.2 Using NamedParameterJdbcTemplate
The NamedParameterJdbcTemplate class adds support for programming JDBC statements with named parameters, as opposed to programming JDBC statements using only classic placeholder ('?') arguments. The NamedParameterJdbcTemplate class wraps a JdbcTemplate and delegates to the wrapped JdbcTemplate to do much of its work. This section describes only those areas of the NamedParameterJdbcTemplate class that differ from the JdbcTemplate itself - namely, programming JDBC statements by using named parameters. The following example shows how to use NamedParameterJdbcTemplate:

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActorsByFirstName(String firstName) {

    String sql = "select count(*) from T_ACTOR where first_name = :first_name";

    SqlParameterSource namedParameters = new MapSqlParameterSource("first_name", firstName);

    return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}

Note the use of the named parameter notation in the value assigned to the sql variable and the corresponding value that is plugged into the namedParameters variable (of type MapSqlParameterSource).

Alternatively, you can pass named parameters and their corresponding values to a NamedParameterJdbcTemplate instance by using a Map-based style. The remaining methods exposed by the NamedParameterJdbcOperations interface and implemented by the NamedParameterJdbcTemplate class follow a similar pattern and are not covered here.

The following example shows the use of the Map-based style:

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActorsByFirstName(String firstName) {

    String sql = "select count(*) from T_ACTOR where first_name = :first_name";

    Map<String, String> namedParameters = Collections.singletonMap("first_name", firstName);

    return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}

One nice feature related to the NamedParameterJdbcTemplate (and existing in the same Java package) is the SqlParameterSource interface. You have already seen an example of an implementation of this interface in one of the previous code snippets (the MapSqlParameterSource class). An SqlParameterSource is a source of named parameter values to a NamedParameterJdbcTemplate. The MapSqlParameterSource class is a simple implementation that is an adapter around a java.util.Map, where the keys are the parameter names and the values are the parameter values.

Another SqlParameterSource implementation is the BeanPropertySqlParameterSource class. This class wraps an arbitrary JavaBean (that is, an instance of a class that follows the JavaBean Convention) and uses the properties of the wrapped JavaBean as the source of the named parameter values.

The following example shows a typical JavaBean:

public class Actor {

    private Long id;
    private String firstName;
    private String lastName;

    public String getFirstName() {
        return this.firstName;
    }

    public String getLastName() {
        return this.lastName;
    }

    public Long getId() {
        return this.id;
    }

    // setters omitted...

}

The following example uses a NamedParameterJdbcTemplate to return a count of the members of the class shown in the previous example:

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActors(Actor exampleActor) {

    // notice how the named parameters match the properties of the above 'Actor' class
    String sql = "select count(*) from T_ACTOR where first_name = :firstName and last_name = :lastName";

    SqlParameterSource namedParameters = new BeanPropertySqlParameterSource(exampleActor);

    return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}

Remember that the NamedParameterJdbcTemplate class wraps a classic JdbcTemplate. If you need access to the wrapped JdbcTemplate instance to access functionality that is present only in the JdbcTemplate class, you can use the getJdbcOperations() method to access the wrapped JdbcTemplate through the JdbcOperations interface.
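As a minimal sketch (reusing the DAO field and table name from the examples above), unwrapping the template for a classic positional query could look like this:

```java
// getJdbcOperations() exposes the wrapped JdbcTemplate through the
// JdbcOperations interface, here used for a plain positional query.
int count = this.namedParameterJdbcTemplate.getJdbcOperations()
        .queryForObject("select count(*) from T_ACTOR", Integer.class);
```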

For guidance on using the NamedParameterJdbcTemplate class in the context of an application, see the JdbcTemplate best practices.

3.3.3 Using SQLExceptionTranslator
SQLExceptionTranslator is an interface to be implemented by classes that can translate between SQLExceptions and Spring's own org.springframework.dao.DataAccessException, which is agnostic in regard to data access strategy. Implementations can be generic (for example, using SQLState codes for JDBC) or proprietary (for example, using Oracle error codes) for greater precision.

SQLErrorCodeSQLExceptionTranslator is the implementation of SQLExceptionTranslator that is used by default. This implementation uses specific vendor codes. It is more precise than the SQLState implementation. The error code translations are based on codes held in a JavaBean type class called SQLErrorCodes. This class is created and populated by an SQLErrorCodesFactory, which (as the name suggests) is a factory for creating SQLErrorCodes based on the contents of a configuration file named sql-error-codes.xml. This file is populated with vendor codes and is based on the DatabaseProductName taken from DatabaseMetaData. The codes for the actual database you are using are used.

The SQLErrorCodeSQLExceptionTranslator applies matching rules in the following sequence:

  1. Any custom translation implemented by a subclass. Normally, the provided concrete SQLErrorCodeSQLExceptionTranslator is used, so this rule does not apply. It applies only if you have actually provided a subclass implementation.
  2. Any custom implementation of the SQLExceptionTranslator interface that is provided as the customSqlExceptionTranslator property of the SQLErrorCodes class.
  3. The list of instances of the CustomSQLErrorCodesTranslation class (provided for the customTranslations property of the SQLErrorCodes class) is searched for a match.
  4. Error code matching is applied.
  5. The fallback translator is used. SQLExceptionSubclassTranslator is the default fallback translator. If this translation is not available, the next fallback translator is the SQLStateSQLExceptionTranslator.

Note: by default, the SQLErrorCodesFactory is used to define error codes and custom exception translations. They are looked up in a file named sql-error-codes.xml on the classpath, and the matching SQLErrorCodes instance is located based on the database name from the database metadata of the database in use.

You can extend SQLErrorCodeSQLExceptionTranslator, as the following example shows:

public class CustomSQLErrorCodesTranslator extends SQLErrorCodeSQLExceptionTranslator {

    protected DataAccessException customTranslate(String task, String sql, SQLException sqlEx) {
        if (sqlEx.getErrorCode() == -12345) {
            return new DeadlockLoserDataAccessException(task, sqlEx);
        }
        return null;
    }
}

In the preceding example, the specific error code (-12345) is translated, while other errors are left to be translated by the default translator implementation. To use this custom translator, you must pass it to the JdbcTemplate through the setExceptionTranslator method, and you must use this JdbcTemplate for all of the data access processing where this translator is needed. The following example shows how you can use this custom translator:

private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {

    // create a JdbcTemplate and set data source
    this.jdbcTemplate = new JdbcTemplate();
    this.jdbcTemplate.setDataSource(dataSource);

    // create a custom translator and set the DataSource for the default translation lookup
    CustomSQLErrorCodesTranslator tr = new CustomSQLErrorCodesTranslator();
    tr.setDataSource(dataSource);
    this.jdbcTemplate.setExceptionTranslator(tr);

}

public void updateShippingCharge(long orderId, long pct) {
    // use the prepared JdbcTemplate for this update
    this.jdbcTemplate.update("update orders" +
        " set shipping_charge = shipping_charge * ? / 100" +
        " where id = ?", pct, orderId);
}

The custom translator is passed a data source in order to look up the error codes in sql-error-codes.xml.

3.3.4 Running statements
Running an SQL statement requires very little code. You need a DataSource and a JdbcTemplate, including the convenience methods that are provided with the JdbcTemplate. The following example shows what a minimal but fully functional class that creates a new table needs to include:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ExecuteAStatement {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void doExecute() {
        this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");
    }
}

3.3.5 Running queries
Some query methods return a single value. To retrieve a count or a specific value from one row, use queryForObject(..). The latter converts the returned JDBC type to the Java class that is passed in as an argument. If the type conversion is invalid, an InvalidDataAccessApiUsageException is thrown. The following example contains two query methods, one for an int and one that queries for a String:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class RunAQuery {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int getCount() {
        return this.jdbcTemplate.queryForObject("select count(*) from mytable", Integer.class);
    }

    public String getName() {
        return this.jdbcTemplate.queryForObject("select name from mytable", String.class);
    }
}

In addition to the single result query methods, several methods return a list with an entry for each row that the query returned. The most generic of these is queryForList(..), which returns a List where each element is a Map containing one entry for each column, using the column name as the key. If you were to add a method to the preceding example to retrieve a list of all the rows, it might look as follows:

private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

public List<Map<String, Object>> getList() {
    return this.jdbcTemplate.queryForList("select * from mytable");
}

The list returned is as follows:

[{name=Bob, id=1}, {name=Mary, id=2}]

3.3.6 Updating the database
The following example updates the columns of a primary key:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ExecuteAnUpdate {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void setName(int id, String name) {
        this.jdbcTemplate.update("update mytable set name = ? where id = ?", name, id);
    }
}

In the preceding example, the SQL statement has placeholders for row parameters. You can pass the parameter values in as varargs or, alternatively, as an array of objects. Thus, you should explicitly wrap primitives in the primitive wrapper classes, or you should use auto-boxing.

3.3.7 Retrieving auto-generated keys
An update() convenience method supports the retrieval of primary keys generated by the database. This support is part of the JDBC 3.0 standard. See Chapter 13.6 of the specification for details. The method takes a PreparedStatementCreator as its first argument, and this is the way the required insert statement is specified. The other argument is a KeyHolder, which contains the generated key on successful return from the update. There is no standard single way to create an appropriate PreparedStatement (which explains why the method signature is the way it is). The following example works on Oracle but may not work on other platforms:

final String INSERT_SQL = "insert into my_test (name) values(?)";
final String name = "Rob";

KeyHolder keyHolder = new GeneratedKeyHolder();
jdbcTemplate.update(
    new PreparedStatementCreator() {
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            PreparedStatement ps = connection.prepareStatement(INSERT_SQL, new String[] {"id"});
            ps.setString(1, name);
            return ps;
        }
    },
    keyHolder);

// keyHolder.getKey() now contains the generated key

3.4 Controlling database connections
This section describes how to control database connections.

3.4.1 Using DataSource

Spring obtains the connection to the database through the data source. The data source is part of the JDBC specification and is a general connection factory. It allows containers or frameworks to hide connection pooling and transaction management issues from application code. As a developer, you don't need to know the details of how to connect to the database. This is the responsibility of the administrator who sets up the data source. You are likely to play both roles when developing and testing code, but you do not need to know how to configure the production data source.

When you use Spring's JDBC layer, you can get the data source from JNDI, or you can configure your own data source using the connection pool implementation provided by a third party. The popular implementations are Apache Jakarta Commons DBCP and C3P0. The implementation in the Spring distribution is for testing purposes only and does not provide pools.

This section uses Spring's DriverManagerDataSource implementation, and several other implementations are described later.

Note: you should only use the DriverManagerDataSource class for testing purposes because it does not provide pools and performs poorly when making multiple requests for a connection.

To configure a DriverManagerDataSource:

  1. Obtain a connection with DriverManagerDataSource as you typically obtain a JDBC connection.
  2. Specify the fully qualified class name of the JDBC driver so that the DriverManager can load the driver class.
  3. Provide a URL that varies between JDBC drivers. (See the documentation for your driver for the correct value.)
  4. Provide a username and a password to connect to the database.

The following example shows how to configure a DriverManagerDataSource in Java:

DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
dataSource.setUrl("jdbc:hsqldb:hsql://localhost:");
dataSource.setUsername("sa");
dataSource.setPassword("");

The following example shows the corresponding XML configuration:

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<context:property-placeholder location="jdbc.properties"/>

The next two examples show the basic connection and configuration of DBCP and C3P0. To learn more about options that help control pool functionality, refer to the product documentation for the corresponding connection pool implementation.
The following example shows the DBCP configuration:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<context:property-placeholder location="jdbc.properties"/>

The following example shows the C3P0 configuration:

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="${jdbc.driverClassName}"/>
    <property name="jdbcUrl" value="${jdbc.url}"/>
    <property name="user" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<context:property-placeholder location="jdbc.properties"/>

3.4.2 Using DataSourceUtils
The DataSourceUtils class is a convenient and powerful helper class that provides static methods to obtain connections from JNDI and close connections if necessary. It supports thread-bound connections with, for example, DataSourceTransactionManager.
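As a sketch of this usage (assuming a dataSource variable is in scope), manual connection handling pairs getConnection with releaseConnection:

```java
Connection con = DataSourceUtils.getConnection(dataSource);
try {
    // ... work with the connection directly ...
}
finally {
    // Closes the connection only if it is not bound to an
    // ongoing Spring-managed transaction on this thread.
    DataSourceUtils.releaseConnection(con, dataSource);
}
```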

3.4.3 Implementing SmartDataSource
The SmartDataSource interface should be implemented by classes that can provide a connection to a relational database. It extends the DataSource interface to let classes that use it query whether the connection should be closed after a given operation. This is useful when you know that you need to reuse a connection.

3.4.4 Extending AbstractDataSource
AbstractDataSource is the abstract base class for Spring's DataSource implementations. It implements code that is common to all DataSource implementations. You should extend the AbstractDataSource class if you write your own DataSource implementation.

3.4.5 Using SingleConnectionDataSource
The SingleConnectionDataSource class is an implementation of the SmartDataSource interface that wraps a single Connection that is not closed after each use. This is not multi-threading capable.

If any client code calls close on the assumption of a pooled connection (as when using persistence tools), you should set the suppressClose property to true. This setting returns a close-suppressing proxy that wraps the physical connection. Note that you can no longer cast this to a native Oracle Connection or a similar object.

SingleConnectionDataSource is mainly a test class. For example, it allows code to be easily tested outside the application server with a simple JNDI environment. Unlike DriverManagerDataSource, it always reuses the same connection to avoid creating too many physical connections.
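A sketch of a test-oriented configuration (the HSQLDB URL reuses the one shown earlier in this chapter) with close suppression enabled:

```java
// One physical connection, reused for every getConnection() call;
// suppressClose=true hands callers close-suppressing proxies, so
// pool-assuming code cannot close the underlying connection.
SingleConnectionDataSource dataSource = new SingleConnectionDataSource(
        "jdbc:hsqldb:hsql://localhost:", "sa", "", true);
```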

3.4.6 Using DriverManagerDataSource
The DriverManagerDataSource class is an implementation of the standard DataSource interface that configures a plain JDBC driver through bean properties and returns a new Connection every time.

This implementation is useful for test and stand-alone environments outside of a Java EE container, either as a DataSource bean in a Spring IoC container or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls close the connection, so any DataSource-aware persistence code should work. However, using JavaBean-style connection pools (such as commons-dbcp) is so easy, even in a test environment, that it is almost always preferable to use such a connection pool over DriverManagerDataSource.

3.4.7 Using TransactionAwareDataSourceProxy
TransactionAwareDataSourceProxy is a proxy for a target DataSource. The proxy wraps that target DataSource to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI DataSource, as provided by a Java EE server.
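For illustration, wrapping the dataSource bean shown earlier might look like the following (the proxy bean name is an assumption):

```xml
<bean id="dataSourceProxy" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
    <property name="targetDataSource" ref="dataSource"/>
</bean>
```

Legacy code that expects a plain DataSource can then be handed the dataSourceProxy bean and still participate in Spring-managed transactions.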

Note: it is rarely desirable to use this class, except when existing code must be called and passed a standard JDBC DataSource interface implementation. In that case, you can keep that code usable while having it participate in Spring-managed transactions. It is generally preferable to write your own new code by using the higher-level abstractions for resource management, such as JdbcTemplate or DataSourceUtils.

For more information, see the TransactionAwareDataSourceProxy javadoc.

3.4.8 Using DataSourceTransactionManager
The DataSourceTransactionManager class is a PlatformTransactionManager implementation for a single JDBC data source. It binds a JDBC connection from the specified data source to the currently executing thread, potentially allowing for one thread-bound connection per data source.

Application code is required to retrieve the JDBC connection through DataSourceUtils.getConnection(DataSource) instead of Java EE's standard DataSource.getConnection. It throws unchecked org.springframework.dao exceptions instead of checked SQLExceptions. All framework classes (such as JdbcTemplate) use this strategy implicitly. If not used with this transaction manager, the lookup strategy behaves exactly like the common one. Thus, it can be used in any case.

The DataSourceTransactionManager class supports custom isolation levels and timeouts that get applied as appropriate JDBC statement query timeouts. To support the latter, application code must either use the JdbcTemplate or call the DataSourceUtils.applyTransactionTimeout(..) method for each created statement.

In single-resource cases, you can use this implementation instead of a JTA transaction manager, as it does not require the container to support JTA. Switching between both is just a matter of configuration, as long as you stick to the required connection lookup pattern. Note that JTA does not support custom isolation levels.
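As a configuration sketch (reusing the dataSource bean defined earlier in this chapter), the transaction manager bean definition could look like this:

```xml
<bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
```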

3.5 JDBC batch operations
Most JDBC drivers provide improved performance if you batch multiple calls to the same prepared statement. By grouping updates into batches, you limit the number of round trips to the database.

3.5.1 Basic batch operations with JdbcTemplate
You accomplish JdbcTemplate batch processing by implementing two methods of a special interface, BatchPreparedStatementSetter, and passing that implementation in as the second parameter in your batchUpdate method call. You can use the getBatchSize method to provide the size of the current batch. You can use the setValues method to set the values for the parameters of the prepared statement. This method is called the number of times that you specified in the getBatchSize call. The following example updates the t_actor table based on entries in a list, and the entire list is used as the batch:

public class JdbcActorDao implements ActorDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[] batchUpdate(final List<Actor> actors) {
        return this.jdbcTemplate.batchUpdate(
                "update t_actor set first_name = ?, last_name = ? where id = ?",
                new BatchPreparedStatementSetter() {
                    public void setValues(PreparedStatement ps, int i) throws SQLException {
                        Actor actor = actors.get(i);
                        ps.setString(1, actor.getFirstName());
                        ps.setString(2, actor.getLastName());
                        ps.setLong(3, actor.getId().longValue());
                    }
                    public int getBatchSize() {
                        return actors.size();
                    }
                });
    }

    // ... additional methods
}

If you process a stream of updates or read from a file, you might have a preferred batch size, but the last batch might not have that number of entries. In this case, you can use the InterruptibleBatchPreparedStatementSetter interface, which lets you interrupt a batch once the input source is exhausted. The isBatchExhausted method lets you signal the end of the batch.
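A sketch of such a setter follows; the iterator-based input source is a hypothetical stand-in for a file or stream reader. Here, getBatchSize reports the preferred maximum, while isBatchExhausted ends the batch early once the source runs dry:

```java
final Iterator<Actor> source = actors.iterator(); // hypothetical input source
int[] updateCounts = this.jdbcTemplate.batchUpdate(
        "update t_actor set first_name = ?, last_name = ? where id = ?",
        new InterruptibleBatchPreparedStatementSetter() {
            private boolean exhausted;
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                if (!source.hasNext()) {
                    // nothing set for this row; isBatchExhausted stops the batch
                    this.exhausted = true;
                    return;
                }
                Actor actor = source.next();
                ps.setString(1, actor.getFirstName());
                ps.setString(2, actor.getLastName());
                ps.setLong(3, actor.getId().longValue());
            }
            public int getBatchSize() {
                return 100; // preferred (maximum) batch size
            }
            public boolean isBatchExhausted(int i) {
                return this.exhausted;
            }
        });
```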

3.5.2 Batch operations with a list of objects
Both the JdbcTemplate and the NamedParameterJdbcTemplate provide an alternate way of providing the batch update. Instead of implementing a special batch interface, you provide all the parameter values in the call as a list. The framework loops over these values and uses an internal prepared statement setter. The API varies, depending on whether you use named parameters. For the named parameters, you provide an array of SqlParameterSource, one entry for each member of the batch. You can use the SqlParameterSourceUtils.createBatch convenience method to create this array, passing in an array of bean-style objects (with getter methods corresponding to parameters), String-keyed Map instances (containing the corresponding parameters as values), or a mix of both.

The following example shows a batch update using named parameters:

public class JdbcActorDao implements ActorDao {

    private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public int[] batchUpdate(List<Actor> actors) {
        return this.namedParameterJdbcTemplate.batchUpdate(
                "update t_actor set first_name = :firstName, last_name = :lastName where id = :id",
                SqlParameterSourceUtils.createBatch(actors));
    }

    // ... additional methods
}

For an SQL statement that uses the classic '?' placeholders, you pass in a list containing an object array with the update values. The object array must have one entry for each placeholder in the SQL statement, and the entries must be in the same order as they are defined in the SQL statement.
The following example is the same as the preceding example, except that it uses classic JDBC '?' placeholders:

public class JdbcActorDao implements ActorDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[] batchUpdate(final List<Actor> actors) {
        List<Object[]> batch = new ArrayList<Object[]>();
        for (Actor actor : actors) {
            Object[] values = new Object[] {
                    actor.getFirstName(), actor.getLastName(), actor.getId()};
            batch.add(values);
        }
        return this.jdbcTemplate.batchUpdate(
                "update t_actor set first_name = ?, last_name = ? where id = ?",
                batch);
    }

    // ... additional methods
}

All of the batch update methods described earlier return an int array containing the number of affected rows for each batch entry. This count is reported by the JDBC driver. If the count is not available, the JDBC driver returns a value of -2.
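The -2 value mentioned above is defined by JDBC itself as java.sql.Statement.SUCCESS_NO_INFO. As a small illustrative sketch (plain Java, not part of the Spring API; the class and method names here are hypothetical), calling code can treat that sentinel as "unknown" rather than adding it into a row total:

```java
import java.sql.Statement;

public class BatchUpdateCounts {

    // Sums the affected-row counts from a batch update, skipping
    // SUCCESS_NO_INFO (-2) entries, for which the driver executed the
    // statement but could not report a count.
    static long totalAffected(int[] counts) {
        long total = 0;
        for (int c : counts) {
            if (c == Statement.SUCCESS_NO_INFO) {
                continue; // count unknown; do not add -2 to the total
            }
            total += c;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] counts = {1, 1, Statement.SUCCESS_NO_INFO, 1};
        System.out.println(totalAffected(counts)); // prints 3
    }
}
```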

Note: In this case, with automatic setting of values on an underlying PreparedStatement, the corresponding JDBC type for each value needs to be derived from the given Java type. While this usually works well, there is a potential for issues (for example, with Map-contained null values). Spring, by default, calls ParameterMetaData.getParameterType in such a case, which can be expensive with some JDBC drivers. You should use a recent driver version and consider setting the spring.jdbc.getParameterType.ignore property to true (as a JVM system property or in a spring.properties file in the root of your classpath) if you encounter a performance issue (as reported for Oracle 12c in SPR-16139).

Alternatively, you might consider specifying the corresponding JDBC type explicitly, either through a BatchPreparedStatementSetter (as shown earlier), through an explicit type array given to a List<Object[]> based call, through registerSqlType calls on a custom MapSqlParameterSource instance, or through a BeanPropertySqlParameterSource that derives the SQL type from the Java-declared property type, even for a null value.

3.5.3. Batch operations with multiple batches
The previous example of a batch update deals with batches that are so large that you want to break them up into several smaller batches. You can do this with the methods mentioned earlier by making multiple calls to the batchUpdate method, but there is now a more convenient method. In addition to the SQL statement, this method takes a Collection of objects that contain the parameters, the number of updates to make for each batch, and a ParameterizedPreparedStatementSetter to set the values of the parameters for the prepared statement. The framework loops over the provided values and breaks the update calls into batches of the specified size.

The following example shows a batch update that uses a batch size of 100:

public class JdbcActorDao implements ActorDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[][] batchUpdate(final Collection<Actor> actors) {
        int[][] updateCounts = jdbcTemplate.batchUpdate(
                "update t_actor set first_name = ?, last_name = ? where id = ?",
                actors,
                100,
                new ParameterizedPreparedStatementSetter<Actor>() {
                    public void setValues(PreparedStatement ps, Actor argument) throws SQLException {
                        ps.setString(1, argument.getFirstName());
                        ps.setString(2, argument.getLastName());
                        ps.setLong(3, argument.getId().longValue());
                    }
                });
        return updateCounts;
    }

    // ... additional methods
}

The batch update method for this call returns an array of int arrays, containing an array entry for each batch with an array of the number of affected rows for each update. The top-level array's length indicates the number of batches run, and the second-level arrays' lengths indicate the number of updates in that batch. The number of updates in each batch should be the batch size provided for all batches (except the last one, which may be smaller), depending on the total number of update objects provided. The update count for each update statement is the one reported by the JDBC driver. If the count is not available, the JDBC driver returns a value of -2.
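The shape of that two-level array follows directly from the partition arithmetic. The following standalone sketch (plain Java, independent of Spring; the class and method names are illustrative only) shows how a total item count splits into batches of a given maximum size:

```java
import java.util.Arrays;

public class BatchShape {

    // Returns the size of each batch when n items are split into batches
    // of at most batchSize, mirroring the lengths of the second-level
    // arrays in the int[][] returned by the batched update call.
    static int[] batchSizes(int n, int batchSize) {
        int batches = (n + batchSize - 1) / batchSize; // ceiling division
        int[] sizes = new int[batches];
        for (int i = 0; i < batches; i++) {
            sizes[i] = Math.min(batchSize, n - i * batchSize);
        }
        return sizes;
    }

    public static void main(String[] args) {
        // 250 actors with a batch size of 100 -> three batches: 100, 100, 50
        System.out.println(Arrays.toString(batchSizes(250, 100))); // [100, 100, 50]
    }
}
```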

3.6. Simplifying JDBC operations with the SimpleJdbc classes
The SimpleJdbcInsert and SimpleJdbcCall classes provide a simplified configuration by taking advantage of database metadata that can be retrieved through the JDBC driver. This means that you have less to configure up front, although you can override or turn off the metadata processing if you prefer to provide all the details in your code.

3.6.1. Inserting data by using SimpleJdbcInsert
Let us start by looking at the SimpleJdbcInsert class with the minimal amount of configuration options. You should instantiate the SimpleJdbcInsert in the initialization method of the data access layer. For this example, the initializing method is the setDataSource method. You do not need to subclass the SimpleJdbcInsert class. Instead, you can create a new instance and set the table name by using the withTableName method. Configuration methods for this class follow the fluent style of returning the SimpleJdbcInsert instance, which lets you chain all configuration methods. The following example uses only one configuration method (we show examples of multiple methods later):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource).withTableName("t_actor");
    }

    public void add(Actor actor) {
        Map<String, Object> parameters = new HashMap<String, Object>(3);
        parameters.put("id", actor.getId());
        parameters.put("first_name", actor.getFirstName());
        parameters.put("last_name", actor.getLastName());
        insertActor.execute(parameters);
    }

    // ... additional methods
}

The execute method used here takes a plain java.util.Map as its only parameter. The important thing to note here is that the keys used for the Map must match the column names of the table, as defined in the database. This is because we read the metadata to construct the actual insert statement.
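To make the role of the map keys concrete, the following sketch builds the kind of INSERT statement that SimpleJdbcInsert assembles internally from those keys. This is purely illustrative (the class and method here are hypothetical, not Spring API), but it shows why each key must be a real column name:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class InsertSqlSketch {

    // Illustrative only: derives the column list and placeholder list of
    // an INSERT statement from the parameter-map keys, as SimpleJdbcInsert
    // does internally (after validating the keys against table metadata).
    static String buildInsert(String table, Map<String, Object> params) {
        String columns = String.join(", ", params.keySet());
        String placeholders = params.keySet().stream()
                .map(k -> "?")
                .collect(Collectors.joining(", "));
        return "insert into " + table + " (" + columns + ") values (" + placeholders + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> parameters = new LinkedHashMap<>();
        parameters.put("id", 1L);
        parameters.put("first_name", "John");
        parameters.put("last_name", "Doe");
        System.out.println(buildInsert("t_actor", parameters));
        // insert into t_actor (id, first_name, last_name) values (?, ?, ?)
    }
}
```

A misspelled key would produce a column name the database does not know, which is why the keys must match the table definition exactly.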

3.6.2. Retrieving auto-generated keys by using SimpleJdbcInsert

The next example uses the same insert as the preceding example, but, instead of passing in the id, it retrieves the auto-generated key and sets it on the new Actor object. When it creates the SimpleJdbcInsert, in addition to specifying the table name, it specifies the name of the generated key column with the usingGeneratedKeyColumns method. The following listing shows how it works:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        Map<String, Object> parameters = new HashMap<String, Object>(2);
        parameters.put("first_name", actor.getFirstName());
        parameters.put("last_name", actor.getLastName());
        Number newId = insertActor.executeAndReturnKey(parameters);
        actor.setId(newId.longValue());
    }

    // ... additional methods
}

The main difference when you run the insert by using this second approach is that you do not add the id to the Map, and you call the executeAndReturnKey method. This returns a java.lang.Number object with which you can create an instance of the numerical type that is used in your domain class. You cannot rely on all databases to return a specific Java class here; java.lang.Number is the base class that you can rely on. If you have multiple auto-generated columns or the generated values are non-numeric, you can use a KeyHolder that is returned from the executeAndReturnKeyHolder method.
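Because the concrete Number subclass varies by database and driver (Integer, Long, BigDecimal, and so on), converting through the Number API is the portable path. A minimal standalone check (the helper name here is illustrative, not part of Spring):

```java
import java.math.BigDecimal;

public class GeneratedKeyConversion {

    // Different drivers return different Number subclasses for generated
    // keys; Number.longValue() normalizes all of them to the long id
    // used in the domain class.
    static long asId(Number key) {
        return key.longValue();
    }

    public static void main(String[] args) {
        System.out.println(asId(Integer.valueOf(7)));   // one driver may return Integer
        System.out.println(asId(Long.valueOf(7L)));     // another may return Long
        System.out.println(asId(new BigDecimal("7")));  // Oracle typically returns BigDecimal
    }
}
```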

3.6.3. Specifying columns for a SimpleJdbcInsert
You can limit the columns for an insert by specifying a list of column names with the usingColumns method, as the following example shows:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingColumns("first_name", "last_name")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        Map<String, Object> parameters = new HashMap<String, Object>(2);
        parameters.put("first_name", actor.getFirstName());
        parameters.put("last_name", actor.getLastName());
        Number newId = insertActor.executeAndReturnKey(parameters);
        actor.setId(newId.longValue());
    }

    // ... additional methods
}

The execution of the insert is the same as if you had relied on the metadata to determine which columns to use.

3.6.4. Using SqlParameterSource to provide parameter values

Using a Map to provide parameter values works fine, but it is not the most convenient class to use. Spring provides a couple of implementations of the SqlParameterSource interface that you can use instead. The first one is BeanPropertySqlParameterSource, which is a very convenient class if you have a JavaBean-compliant class that contains your values. It uses the corresponding getter methods to extract the parameter values. The following example shows how to use BeanPropertySqlParameterSource:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        SqlParameterSource parameters = new BeanPropertySqlParameterSource(actor);
        Number newId = insertActor.executeAndReturnKey(parameters);
        actor.setId(newId.longValue());
    }

    // ... additional methods
}

Another option is MapSqlParameterSource, which resembles a Map but provides a more convenient addValue method that can be chained. The following example shows how to use it:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        SqlParameterSource parameters = new MapSqlParameterSource()
                .addValue("first_name", actor.getFirstName())
                .addValue("last_name", actor.getLastName());
        Number newId = insertActor.executeAndReturnKey(parameters);
        actor.setId(newId.longValue());
    }

    // ... additional methods
}

As you can see, the configuration is the same. Only the executing code has to change to use these alternative input classes.

3.6.5. Calling stored procedures with SimpleJdbcCall
The SimpleJdbcCall class uses metadata in the database to look up names of in and out parameters so that you do not have to declare them explicitly. You can declare parameters if you prefer to do that, or if you have parameters (such as ARRAY or STRUCT) that do not have an automatic mapping to a Java class. The first example shows a simple procedure that returns only scalar values in VARCHAR and DATE format from a MySQL database. The example procedure reads a specified actor entry and returns the first_name, last_name, and birth_date columns in the form of out parameters. The following listing shows the first example:

CREATE PROCEDURE read_actor (
    IN in_id INTEGER,
    OUT out_first_name VARCHAR(100),
    OUT out_last_name VARCHAR(100),
    OUT out_birth_date DATE)
BEGIN
    SELECT first_name, last_name, birth_date
    INTO out_first_name, out_last_name, out_birth_date
    FROM t_actor where id = in_id;
END;

The in_id parameter contains the id of the actor that you are looking up. The out parameters return the data read from the table.
You can declare SimpleJdbcCall in a manner similar to declaring SimpleJdbcInsert. You should instantiate and configure the class in the initialization method of your data-access layer. Compared to the StoredProcedure class, you need not create a subclass and you need not declare parameters that can be looked up in the database metadata. The following example of a SimpleJdbcCall configuration uses the preceding stored procedure (the only configuration option, in addition to the DataSource, is the name of the stored procedure):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        this.procReadActor = new SimpleJdbcCall(dataSource)
                .withProcedureName("read_actor");
    }

    public Actor readActor(Long id) {
        SqlParameterSource in = new MapSqlParameterSource()
                .addValue("in_id", id);
        Map out = procReadActor.execute(in);
        Actor actor = new Actor();
        actor.setId(id);
        actor.setFirstName((String) out.get("out_first_name"));
        actor.setLastName((String) out.get("out_last_name"));
        actor.setBirthDate((Date) out.get("out_birth_date"));
        return actor;
    }

    // ... additional methods
}

The code you write for the execution of the call involves creating an SqlParameterSource containing the IN parameter. The name provided for the input value must match that of the parameter name declared in the stored procedure. The case does not have to match, because you use metadata to determine how database objects should be referred to in a stored procedure. What is specified in the source for the stored procedure is not necessarily the way it is stored in the database. Some databases transform names to all upper case, while others use lower case or use the case as specified.

The execute method takes the IN parameters and returns a Map that contains any out parameters keyed by the name, as specified in the stored procedure. In this case, they are out_first_name, out_last_name, and out_birth_date.
The last part of the execute method creates an Actor instance to use to return the data retrieved. Again, it is important to use the names of the out parameters as they are declared in the stored procedure. Also, the case in the names of the out parameters stored in the results map matches that of the out parameter names in the database, which could vary between databases. To make your code more portable, you should do a case-insensitive lookup or instruct Spring to use a LinkedCaseInsensitiveMap. To do the latter, you can create your own JdbcTemplate and set the setResultsMapCaseInsensitive property to true. You can then pass this customized JdbcTemplate instance into the constructor of your SimpleJdbcCall. The following example shows this configuration:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_actor");
    }

    // ... additional methods
}

By doing so, you avoid conflicts in the case used for the names of your returned out parameters.
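The effect of a case-insensitive results map can be approximated in plain Java with a TreeMap ordered by String.CASE_INSENSITIVE_ORDER (Spring's own LinkedCaseInsensitiveMap additionally preserves insertion order). This sketch is only an illustration of the lookup behavior, not the Spring implementation:

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveLookup {

    public static void main(String[] args) {
        // A database may report the out parameter as OUT_FIRST_NAME,
        // out_first_name, or Out_First_Name, depending on the vendor.
        Map<String, Object> out = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        out.put("OUT_FIRST_NAME", "John");

        // Lookups succeed regardless of the case used in the calling code.
        System.out.println(out.get("out_first_name")); // John
        System.out.println(out.get("Out_First_Name")); // John
    }
}
```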
3.6.6. Explicitly declaring parameters to use for a SimpleJdbcCall

Earlier in this chapter, we described how parameters are deduced from metadata, but you can declare them explicitly if you wish. You can do so by creating and configuring SimpleJdbcCall with the declareParameters method, which takes a variable number of SqlParameter objects as input. See the next section for details on how to define an SqlParameter.

Note: Explicit declarations are necessary if the database you use is not a Spring-supported database. Currently, Spring supports metadata lookup of stored procedure calls for the following databases: Apache Derby, DB2, MySQL, Microsoft SQL Server, Oracle, and Sybase. We also support metadata lookup of stored functions for MySQL, Microsoft SQL Server, and Oracle.

You can opt to declare one, some, or all of the parameters explicitly. The parameter metadata is still used where you do not declare parameters explicitly. To bypass all processing of metadata lookups for potential parameters and use only the declared parameters, you can call the withoutProcedureColumnMetaDataAccess method as part of the declaration. Suppose that you have two or more different call signatures declared for a database function. In this case, you call useInParameterNames to specify the list of IN parameter names to include for a given signature.

The following example shows a fully declared procedure call, using information from the previous example:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_actor")
                .withoutProcedureColumnMetaDataAccess()
                .useInParameterNames("in_id")
                .declareParameters(
                        new SqlParameter("in_id", Types.NUMERIC),
                        new SqlOutParameter("out_first_name", Types.VARCHAR),
                        new SqlOutParameter("out_last_name", Types.VARCHAR),
                        new SqlOutParameter("out_birth_date", Types.DATE)
                );
    }

    // ... additional methods
}

The execution and end result of the two examples are the same. The second example explicitly specifies all the details, rather than relying on metadata.

3.6.7. How to define SqlParameters
To define a parameter for the SimpleJdbc classes and the RDBMS operations classes (covered in Modeling JDBC operations as Java objects), you can use SqlParameter or one of its subclasses. To do so, you typically specify the parameter name and SQL type in the constructor. The SQL type is specified by using the java.sql.Types constants. Earlier in this chapter, we saw declarations similar to the following:

new SqlParameter("in_id", Types.NUMERIC),
new SqlOutParameter("out_first_name", Types.VARCHAR),

The first line (with SqlParameter) declares an IN parameter. You can use IN parameters both for stored procedure calls and for queries that use SqlQuery and its subclasses (covered in Understanding SqlQuery).
The second line (with SqlOutParameter) declares an out parameter to be used in a stored procedure call. There is also an SqlInOutParameter for InOut parameters (those that provide an IN value to the procedure and that also return a value).

Note: Only parameters declared as SqlParameter and SqlInOutParameter are used to provide input values. This differs from the StoredProcedure class, which (for backwards compatibility reasons) lets input values be provided for parameters declared as SqlOutParameter.

For IN parameters, in addition to the name and the SQL type, you can specify a scale for numeric data or a type name for custom database types. For out parameters, you can provide a RowMapper to handle the mapping of rows returned from a REF cursor. Another option is to specify an SqlReturnType, which provides an opportunity to define customized handling of the return values.

3.6.8. Calling a stored function by using SimpleJdbcCall
You can call a stored function in almost the same way as you call a stored procedure, except that you provide a function name rather than a procedure name. You use the withFunctionName method as part of the configuration to indicate that you want to make a call to a function, and the corresponding string for a function call is generated. A specialized execute call (executeFunction) is used to run the function, and it returns the function return value as an object of a specified type, which means that you do not have to retrieve the return value from the results map. A similar convenience method (named executeObject) is also available for stored procedures that have only one out parameter. The following example (for MySQL) is based on a stored function named get_actor_name that returns an actor's full name:

CREATE FUNCTION get_actor_name (in_id INTEGER)
RETURNS VARCHAR(200) READS SQL DATA
BEGIN
    DECLARE out_name VARCHAR(200);
    SELECT concat(first_name, ' ', last_name)
        INTO out_name
        FROM t_actor where id = in_id;
    RETURN out_name;
END;

To call this function, we create a SimpleJdbcCall in the initialization method again, as shown in the following example:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall funcGetActorName;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.funcGetActorName = new SimpleJdbcCall(jdbcTemplate)
                .withFunctionName("get_actor_name");
    }

    public String getActorName(Long id) {
        SqlParameterSource in = new MapSqlParameterSource()
                .addValue("in_id", id);
        String name = funcGetActorName.executeFunction(String.class, in);
        return name;
    }

    // ... additional methods
}

The executeFunction method returns a String that contains the return value of the function call.

3.6.9. Returning a ResultSet or REF cursor from a SimpleJdbcCall
Calling a stored procedure or function that returns a result set is a bit tricky. Some databases return result sets during the JDBC results processing, while others require an explicitly registered out parameter of a specific type. Both approaches need additional processing to loop over the result set and process the returned rows. With SimpleJdbcCall, you can use the returningResultSet method and declare a RowMapper implementation to be used for a specific parameter. If the result set is returned during the results processing, there are no names defined, so the returned results must match the order in which you declare the RowMapper implementations. The name specified is still used to store the processed list of results in the results map that is returned from the execute statement.

The next example (for MySQL) uses a stored procedure that does not accept parameters and returns all the rows in the t_actor table:

CREATE PROCEDURE read_all_actors()
BEGIN
 SELECT a.id, a.first_name, a.last_name, a.birth_date FROM t_actor a;
END;

To call this procedure, you declare the RowMapper. Because the class to which you want to map follows the JavaBean rules, you can use a BeanPropertyRowMapper that is created by passing in the required class to map to in the newInstance method. The following example shows how to do so:

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadAllActors;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadAllActors = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_all_actors")
                .returningResultSet("actors",
                BeanPropertyRowMapper.newInstance(Actor.class));
    }

    public List getActorsList() {
        Map m = procReadAllActors.execute(new HashMap<String, Object>(0));
        return (List) m.get("actors");
    }

    // ... additional methods
}

The execute call passes in an empty Map, because this call does not take any parameters. The list of actors is then retrieved from the results map and returned to the caller.

3.7. Modeling JDBC operations as Java objects
The org.springframework.jdbc.object package contains classes that let you access the database in a more object-oriented manner. As an example, you can run queries and get the results back as a list that contains business objects with the relational column data mapped to the properties of the business object. You can also run stored procedures and run update, delete, and insert statements.

Note: Many Spring developers believe that the various RDBMS operation classes described below (with the exception of the StoredProcedure class) can often be replaced with straight JdbcTemplate calls. Often, it is simpler to write a DAO method that calls a method on a JdbcTemplate directly (as opposed to encapsulating a query as a full-blown class).

However, if you get measurable value from using RDBMS action classes, you should continue to use them.

3.7.1. Understanding SqlQuery
SqlQuery is a reusable, thread-safe class that encapsulates an SQL query. Subclasses must implement the newRowMapper(..) method to provide a RowMapper instance that can create one object per row obtained from iterating over the ResultSet that is created during the execution of the query. The SqlQuery class is rarely used directly, because the MappingSqlQuery subclass provides a much more convenient implementation for mapping rows to Java classes. Other implementations that extend SqlQuery are MappingSqlQueryWithParameters and UpdatableSqlQuery.

3.7.2. Using MappingSqlQuery
MappingSqlQuery is a reusable query in which concrete subclasses must implement the abstract mapRow(..) method to convert each row of the supplied ResultSet into an object of the type specified. The following example shows a custom query that maps the data from the t_actor relation to an instance of the Actor class:

public class ActorMappingQuery extends MappingSqlQuery<Actor> {

    public ActorMappingQuery(DataSource ds) {
        super(ds, "select id, first_name, last_name from t_actor where id = ?");
        declareParameter(new SqlParameter("id", Types.INTEGER));
        compile();
    }

    @Override
    protected Actor mapRow(ResultSet rs, int rowNumber) throws SQLException {
        Actor actor = new Actor();
        actor.setId(rs.getLong("id"));
        actor.setFirstName(rs.getString("first_name"));
        actor.setLastName(rs.getString("last_name"));
        return actor;
    }
}

The class extends MappingSqlQuery, parameterized with the Actor type. The constructor for this custom query takes a DataSource as the only parameter. In this constructor, you can call the constructor on the superclass with the DataSource and the SQL that should be run to retrieve the rows for this query. This SQL is used to create a PreparedStatement, so it may contain placeholders for any parameters to be passed in during execution. You must declare each parameter by using the declareParameter method, passing in an SqlParameter. The SqlParameter takes a name and the JDBC type as defined in java.sql.Types. After you define all parameters, you can call the compile() method so that the statement can be prepared and later run. This class is thread-safe after it is compiled, so, as long as these instances are created when the DAO is initialized, they can be kept as instance variables and be reused. The following example shows how to define such a class:

private ActorMappingQuery actorMappingQuery;

@Autowired
public void setDataSource(DataSource dataSource) {
    this.actorMappingQuery = new ActorMappingQuery(dataSource);
}

public Actor getActor(Long id) {
    return actorMappingQuery.findObject(id);
}

The method in the preceding example retrieves the actor with the id that is passed in as the only parameter. Since we want only one object to be returned, we call the findObject convenience method with the id as the parameter. If we had instead a query that returned a list of objects and took additional parameters, we would use one of the execute methods that takes an array of parameter values passed in as varargs. The following example shows such a method:

public List<Actor> searchForActors(int age, String namePattern) {
    List<Actor> actors = actorSearchMappingQuery.execute(age, namePattern);
    return actors;
}

3.7.3. Using SqlUpdate
The SqlUpdate class encapsulates an SQL update. As with queries, an update object is reusable, and, as with all RdbmsOperation classes, an update can have parameters and is defined in SQL. This class provides a number of update(..) methods analogous to the execute(..) methods of query objects. The SqlUpdate class is concrete. It can be subclassed (for example, to add a custom update method). However, you do not have to subclass the SqlUpdate class, since it can easily be parameterized by setting SQL and declaring parameters. The following example creates a custom update method named execute:

import java.sql.Types;
import javax.sql.DataSource;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.SqlUpdate;

public class UpdateCreditRating extends SqlUpdate {

    public UpdateCreditRating(DataSource ds) {
        setDataSource(ds);
        setSql("update customer set credit_rating = ? where id = ?");
        declareParameter(new SqlParameter("creditRating", Types.NUMERIC));
        declareParameter(new SqlParameter("id", Types.NUMERIC));
        compile();
    }

    /**
     * @param id for the Customer to be updated
     * @param rating the new value for credit rating
     * @return number of rows updated
     */
    public int execute(int id, int rating) {
        return update(rating, id);
    }
}

3.7.4. Using StoredProcedure
The StoredProcedure class is a superclass for object abstraction of RDBMS stored procedures.

This class is abstract, and its various execute(..) methods have protected access, preventing use other than through a subclass that offers tighter typing.

The inherited sql property is the name of the stored procedure in the RDBMS.
To define parameters for the StoredProcedure class, you can use an SqlParameter or one of its subclasses. You must specify the parameter name and SQL type in the constructor, as the following code snippet shows:

new SqlParameter("in_id", Types.NUMERIC),
new SqlOutParameter("out_first_name", Types.VARCHAR),

The SQL type is specified using the java.sql.Types constants.
The first line (with SqlParameter) declares an IN parameter. You can use IN parameters both for stored procedure calls and for queries that use SqlQuery and its subclasses (covered in Understanding SqlQuery).

The second line (with SqlOutParameter) declares an out parameter to be used in the stored procedure call. There is also an SqlInOutParameter for InOut parameters (those that provide an in value to the procedure and that also return a value).

For in parameters, in addition to the name and the SQL type, you can specify a scale for numeric data or a type name for custom database types. For out parameters, you can provide a RowMapper to handle the mapping of rows returned from a REF cursor. Another option is to specify an SqlReturnType, which lets you define customized handling of the return values.

The next example of a simple DAO uses a StoredProcedure to call a function (sysdate()), which comes with any Oracle database. To use the stored procedure functionality, you have to create a class that extends StoredProcedure. In this example, the StoredProcedure class is an inner class. However, if you need to reuse the StoredProcedure, you can declare it as a top-level class. This example has no input parameters, but an output parameter is declared as a date type by using the SqlOutParameter class. The execute() method runs the procedure and extracts the returned date from the results Map. The results Map has an entry for each declared output parameter (in this case, only one), using the parameter name as the key. The following listing shows our custom StoredProcedure class:

import java.sql.Types;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class StoredProcedureDao {

    private GetSysdateProcedure getSysdate;

    @Autowired
    public void init(DataSource dataSource) {
        this.getSysdate = new GetSysdateProcedure(dataSource);
    }

    public Date getSysdate() {
        return getSysdate.execute();
    }

    private class GetSysdateProcedure extends StoredProcedure {

        private static final String SQL = "sysdate";

        public GetSysdateProcedure(DataSource dataSource) {
            setDataSource(dataSource);
            setFunction(true);
            setSql(SQL);
            declareParameter(new SqlOutParameter("date", Types.DATE));
            compile();
        }

        public Date execute() {
            // the 'sysdate' sproc has no input parameters, so an empty Map is supplied...
            Map<String, Object> results = execute(new HashMap<String, Object>());
            Date sysdate = (Date) results.get("date");
            return sysdate;
        }
    }

}

The following StoredProcedure example has two output parameters (in this case, the Oracle REF cursor):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class TitlesAndGenresStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "AllTitlesAndGenres";

    public TitlesAndGenresStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        declareParameter(new SqlOutParameter("genres", OracleTypes.CURSOR, new GenreMapper()));
        compile();
    }

    public Map<String, Object> execute() {
        // again, this sproc has no input parameters, so an empty Map is supplied
        return super.execute(new HashMap<String, Object>());
    }
}

Notice how the overloaded variants of the declareParameter(..) method used in the TitlesAndGenresStoredProcedure constructor are passed RowMapper implementation instances. This is a very convenient and powerful way to reuse existing functionality. The next two examples provide code for the two RowMapper implementations.

The TitleMapper class maps a ResultSet to a Title domain object for each row in the supplied ResultSet, as follows:

import java.sql.ResultSet;
import java.sql.SQLException;
import com.foo.domain.Title;
import org.springframework.jdbc.core.RowMapper;

public final class TitleMapper implements RowMapper<Title> {

    public Title mapRow(ResultSet rs, int rowNum) throws SQLException {
        Title title = new Title();
        title.setId(rs.getLong("id"));
        title.setName(rs.getString("name"));
        return title;
    }
}

The GenreMapper class maps a ResultSet to a Genre domain object for each row in the supplied ResultSet, as follows:

import java.sql.ResultSet;
import java.sql.SQLException;
import com.foo.domain.Genre;
import org.springframework.jdbc.core.RowMapper;

public final class GenreMapper implements RowMapper<Genre> {

    public Genre mapRow(ResultSet rs, int rowNum) throws SQLException {
        return new Genre(rs.getString("name"));
    }
}

To pass parameters to a stored procedure defined in RDBMS with one or more input parameters, you can write a strongly typed execute(..) method, which delegates to the untyped execute(Map) method in the superclass, as shown in the following example:

import java.sql.Types;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class TitlesAfterDateStoredProcedure extends StoredProcedure {

    private static final String SPROC_NAME = "TitlesAfterDate";
    private static final String CUTOFF_DATE_PARAM = "cutoffDate";

    public TitlesAfterDateStoredProcedure(DataSource dataSource) {
        super(dataSource, SPROC_NAME);
        declareParameter(new SqlParameter(CUTOFF_DATE_PARAM, Types.DATE));
        declareParameter(new SqlOutParameter("titles", OracleTypes.CURSOR, new TitleMapper()));
        compile();
    }

    public Map<String, Object> execute(Date cutoffDate) {
        Map<String, Object> inputs = new HashMap<String, Object>();
        inputs.put(CUTOFF_DATE_PARAM, cutoffDate);
        return super.execute(inputs);
    }
}

3.8. Common problems with parameter and data value handling
Common issues with parameters and data values arise in the different approaches provided by the Spring Framework's JDBC support. This section covers how to address them.

3.8.1. Providing SQL type information for parameters
In most cases, Spring determines the SQL type of a parameter based on the type of parameter passed in. It is possible to explicitly provide the SQL type to be used when setting parameter values. This is sometimes necessary to correctly set NULL values.

You can provide SQL type information in the following ways:

  • Many update and query methods of the JdbcTemplate take an additional parameter in the form of an int array. This array is used to indicate the SQL type of the corresponding parameter, by using constant values from the java.sql.Types class. Provide one entry for each parameter.
  • You can use the SqlParameterValue class to wrap a parameter value that needs this additional information. To do so, create a new instance for each value and pass in the SQL type and the parameter value in the constructor. You can also provide an optional scale parameter for numeric values.
  • For methods that work with named parameters, you can use the SqlParameterSource classes, BeanPropertySqlParameterSource or MapSqlParameterSource. They both have methods for registering the SQL type for any of the named parameter values.
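To make the first option concrete, here is a minimal, self-contained sketch (the table, column names, and values are hypothetical; the helper method exists only for illustration). Only java.sql.Types from the JDK is used, and the commented-out line shows where the corresponding JdbcTemplate call would go in a real DAO:

```java
import java.sql.Types;

public class SqlTypeHintSketch {

    // Illustrative check: the int array must provide one entry per '?' placeholder.
    static boolean typesMatchPlaceholders(String sql, int[] argTypes) {
        return sql.chars().filter(ch -> ch == '?').count() == argTypes.length;
    }

    public static void main(String[] args) {
        // Setting a column to NULL is exactly the case where an explicit
        // SQL type hint matters, because Spring cannot infer a type from null.
        String sql = "UPDATE t_actor SET last_name = ? WHERE id = ?";
        Object[] params = { null, 1L };
        int[] argTypes = { Types.VARCHAR, Types.NUMERIC };

        // With a JdbcTemplate in scope, the call would be:
        // jdbcTemplate.update(sql, params, argTypes);

        System.out.println(typesMatchPlaceholders(sql, argTypes)); // prints "true"
    }
}
```

The JdbcTemplate.update(String, Object[], int[]) variant shown in the comment is the actual Spring signature; everything else in the sketch serves only to illustrate the one-entry-per-parameter rule.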

3.8.2. Handling BLOB and CLOB objects
You can store images, other binary data, and large chunks of text in the database. These large objects are called BLOBs (Binary Large OBject) for binary data and CLOBs (Character Large OBject) for character data. In Spring, you can handle these large objects by using the JdbcTemplate directly, or by using the higher abstractions provided by RDBMS objects and the SimpleJdbc classes. All of these approaches use an implementation of the LobHandler interface for the actual management of the LOB (Large OBject) data. LobHandler provides access to a LobCreator class, through the getLobCreator method, that is used for creating new LOB objects to be inserted.

LobCreator and LobHandler provide the following support for LOB input and output:

BLOB

  • byte[]: getBlobAsBytes and setBlobAsBytes
  • InputStream: getBlobAsBinaryStream and setBlobAsBinaryStream

CLOB

  • String: getClobAsString and setClobAsString
  • InputStream: getClobAsAsciiStream and setClobAsAsciiStream
  • Reader: getClobAsCharacterStream and setClobAsCharacterStream

The next example shows how to create and insert a BLOB. Later, we show how to read it back from the database.
This example uses a JdbcTemplate and an implementation of the AbstractLobCreatingPreparedStatementCallback. It implements one method, setValues. This method provides a LobCreator that we use to set the values for the LOB columns in the SQL insert statement.

For this example, we assume that there is a variable lobHandler, which has been set to an instance of DefaultLobHandler. This value is usually set through dependency injection.
The following example shows how to create and insert a BLOB:

final File blobIn = new File("spring2004.jpg");
final InputStream blobIs = new FileInputStream(blobIn);
final File clobIn = new File("large.txt");
final InputStream clobIs = new FileInputStream(clobIn);
final InputStreamReader clobReader = new InputStreamReader(clobIs);

jdbcTemplate.execute(
    "INSERT INTO lob_table (id, a_clob, a_blob) VALUES (?, ?, ?)",
    new AbstractLobCreatingPreparedStatementCallback(lobHandler) {  //1
        protected void setValues(PreparedStatement ps, LobCreator lobCreator) throws SQLException {
            ps.setLong(1, 1L);
            lobCreator.setClobAsCharacterStream(ps, 2, clobReader, (int)clobIn.length()); //2 
            lobCreator.setBlobAsBinaryStream(ps, 3, blobIs, (int)blobIn.length());  //3
        }
    }
);

blobIs.close();
clobReader.close();
  1. Passes in the lobHandler, which (in this example) is a plain DefaultLobHandler.
  2. Uses the setClobAsCharacterStream method to pass in the contents of the CLOB.
  3. Uses the setBlobAsBinaryStream method to pass in the contents of the BLOB.

Note: If you invoke the setBlobAsBinaryStream, setClobAsAsciiStream, or setClobAsCharacterStream method on the LobCreator returned from DefaultLobHandler.getLobCreator(), you can optionally specify a negative value for the contentLength argument. If the specified content length is negative, the DefaultLobHandler uses the JDBC 4.0 variants of the set-stream methods without a length parameter. Otherwise, it passes the specified length on to the driver.

Consult the documentation for the JDBC driver you use to verify that it supports streaming a LOB without providing the content length.

Now it is time to read the LOB data from the database. Again, you use a JdbcTemplate with the same instance variable lobHandler, which references a DefaultLobHandler. The following example shows how to do so:

List<Map<String, Object>> l = jdbcTemplate.query("select id, a_clob, a_blob from lob_table",
    new RowMapper<Map<String, Object>>() {
        public Map<String, Object> mapRow(ResultSet rs, int i) throws SQLException {
            Map<String, Object> results = new HashMap<String, Object>();
            String clobText = lobHandler.getClobAsString(rs, "a_clob");  
            results.put("CLOB", clobText);
            byte[] blobBytes = lobHandler.getBlobAsBytes(rs, "a_blob");  
            results.put("BLOB", blobBytes);
            return results;
        }
    });

Use the getClobAsString method to retrieve the contents of the CLOB.
Use the getBlobAsBytes method to retrieve the contents of the BLOB.

3.8.3. Passing in lists of values for IN clauses
The SQL standard allows for selecting rows based on an expression that includes a variable list of values. A typical example would be SELECT * FROM T_ACTOR WHERE id IN (1, 2, 3). You cannot declare an indefinite number of placeholders in a prepared statement. You need to prepare a number of variations with the desired number of placeholders, or you need to generate the SQL string dynamically once you know how many placeholders are required. The named parameter support provided in NamedParameterJdbcTemplate takes the latter approach. You can pass in the values as a java.util.List. This list is used to insert the required placeholders and pass in the values during statement execution.

Note: Be careful when passing in many values. The JDBC standard does not guarantee that you can use more than 100 values for an IN expression list. Various databases exceed this number, but they usually have a hard limit on how many values are allowed. For example, Oracle's limit is 1000.

In addition to the primitive values in the value list, you can create a java.util.List of object arrays. This list can support multiple expressions being defined for the IN clause, such as SELECT * FROM T_ACTOR WHERE (id, last_name) IN ((1, 'Johnson'), (2, 'Harrop')). This, of course, requires that your database supports this syntax.
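The placeholder expansion described above can be sketched in plain Java. The expandInClause helper below is hypothetical; NamedParameterJdbcTemplate performs the equivalent substitution internally when you bind a java.util.List to a named parameter:

```java
import java.util.List;
import java.util.stream.Collectors;

public class InClauseSketch {

    // Hypothetical helper: substitute one '?' per value, as the named parameter
    // support does when a java.util.List is bound to an IN clause parameter.
    static String expandInClause(String baseSql, String column, List<?> values) {
        String placeholders = values.stream()
                .map(v -> "?")
                .collect(Collectors.joining(", "));
        return baseSql + " WHERE " + column + " IN (" + placeholders + ")";
    }

    public static void main(String[] args) {
        String sql = expandInClause("SELECT * FROM T_ACTOR", "id", List.of(1, 2, 3));
        System.out.println(sql); // prints: SELECT * FROM T_ACTOR WHERE id IN (?, ?, ?)
    }
}
```

In application code, you would not expand the SQL yourself. You would write SELECT * FROM T_ACTOR WHERE id IN (:ids) and pass a Map that binds the list to ids to one of the NamedParameterJdbcTemplate query methods.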

3.8.4. Handling complex types for stored procedure calls
When you call stored procedures, you can sometimes use complex types specific to the database. To accommodate these types, Spring provides a SqlReturnType for handling them when they are returned from the stored procedure call and SqlTypeValue when they are passed in as a parameter to the stored procedure.

The SqlReturnType interface has a single method (named getTypeValue) that must be implemented. This interface is used as part of the declaration of an SqlOutParameter. The following example shows returning the value of an Oracle STRUCT object of the user-declared type ITEM_TYPE:

public class TestItemStoredProcedure extends StoredProcedure {

    public TestItemStoredProcedure(DataSource dataSource) {
        // ...
        declareParameter(new SqlOutParameter("item", OracleTypes.STRUCT, "ITEM_TYPE",
            new SqlReturnType() {
                public Object getTypeValue(CallableStatement cs, int colIndx, int sqlType, String typeName) throws SQLException {
                    STRUCT struct = (STRUCT) cs.getObject(colIndx);
                    Object[] attr = struct.getAttributes();
                    TestItem item = new TestItem();
                    item.setId(((Number) attr[0]).longValue());
                    item.setDescription((String) attr[1]);
                    item.setExpirationDate((java.util.Date) attr[2]);
                    return item;
                }
            }));
        // ...
    }
}

You can use SqlTypeValue to pass the value of a Java object (such as a TestItem) to a stored procedure. The SqlTypeValue interface has a single method (named createTypeValue) that you must implement. The active connection is passed in, and you can use it to create database-specific objects, such as StructDescriptor instances or ArrayDescriptor instances. The following example creates a StructDescriptor instance:

final TestItem testItem = new TestItem(123L, "A test item",
        new SimpleDateFormat("yyyy-M-d").parse("2010-12-31"));

SqlTypeValue value = new AbstractSqlTypeValue() {
    protected Object createTypeValue(Connection conn, int sqlType, String typeName) throws SQLException {
        StructDescriptor itemDescriptor = new StructDescriptor(typeName, conn);
        Struct item = new STRUCT(itemDescriptor, conn,
                new Object[] {
                    testItem.getId(),
                    testItem.getDescription(),
                    new java.sql.Date(testItem.getExpirationDate().getTime())
                });
        return item;
    }
};

You can now add this SqlTypeValue to the map that contains the input parameters of the stored procedure execution call.

Another use of SqlTypeValue is to pass an array of values to an Oracle stored procedure. Oracle has its own internal array class, which must be used in this case. You can use SqlTypeValue to create an instance of Oracle array and fill it with the value in Java array, as shown in the following example:

final Long[] ids = new Long[] {1L, 2L};

SqlTypeValue value = new AbstractSqlTypeValue() {
    protected Object createTypeValue(Connection conn, int sqlType, String typeName) throws SQLException {
        ArrayDescriptor arrayDescriptor = new ArrayDescriptor(typeName, conn);
        ARRAY idArray = new ARRAY(arrayDescriptor, conn, ids);
        return idArray;
    }
};

3.9. Embedded database support
The org.springframework.jdbc.datasource.embedded package provides support for embedded Java database engines. Support for HSQL, H2, and Derby is provided natively. You can also use an extensible API to plug in new embedded database types and DataSource implementations.

3.9.1. Why use an embedded database?
An embedded database can be useful during the development phase of a project because of its lightweight nature. Benefits include ease of configuration, quick startup time, testability, and the ability to rapidly evolve your SQL during development.

3.9.2. Creating an embedded database by using Spring XML
If you want to expose an embedded database instance as a bean in a Spring ApplicationContext, you can use the embedded-database tag in the spring-jdbc namespace:

<jdbc:embedded-database id="dataSource" generate-name="true">
    <jdbc:script location="classpath:schema.sql"/>
    <jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>

The preceding configuration creates an embedded HSQL database that is populated with SQL from the schema.sql and test-data.sql resources in the root of the classpath. In addition, as a best practice, the embedded database is assigned a uniquely generated name. The embedded database is made available to the Spring container as a bean of type javax.sql.DataSource that can then be injected into data access objects as needed.

3.9.3. Creating an embedded database programmatically
The EmbeddedDatabaseBuilder class provides a fluent API for constructing an embedded database programmatically. You can use this when you need to create an embedded database in a stand-alone environment or in a stand-alone integration test, as in the following example:

EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .generateUniqueName(true)
        .setType(H2)
        .setScriptEncoding("UTF-8")
        .ignoreFailedDrops(true)
        .addScript("schema.sql")
        .addScripts("user_data.sql", "country_data.sql")
        .build();

// perform actions against the db (EmbeddedDatabase extends javax.sql.DataSource)

db.shutdown();

For more information about all supported options, see the javadoc for EmbeddedDatabaseBuilder.
You can also use EmbeddedDatabaseBuilder to create an embedded database by using Java configuration, as shown in the following example:

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .generateUniqueName(true)
                .setType(H2)
                .setScriptEncoding("UTF-8")
                .ignoreFailedDrops(true)
                .addScript("schema.sql")
                .addScripts("user_data.sql", "country_data.sql")
                .build();
    }
}

3.9.4. Selecting the embedded database type
This section describes how to select one of the three embedded databases that Spring supports. It includes the following topics:

Using HSQL
Spring supports HSQL 1.8.0 and above. HSQL is the default embedded database if no type is explicitly specified. To specify HSQL explicitly, set the type attribute of the embedded-database tag to HSQL. If you use the builder API, call the setType(EmbeddedDatabaseType) method with EmbeddedDatabaseType.HSQL.

Using H2
Spring supports H2 databases. To enable H2, set the type attribute of the embedded-database tag to H2. If you use the builder API, call the setType(EmbeddedDatabaseType) method with EmbeddedDatabaseType.H2.

Using Derby
Spring supports Apache Derby 10.5 and above. To enable Derby, set the type attribute of the embedded-database tag to DERBY. If you use the builder API, call the setType(EmbeddedDatabaseType) method with EmbeddedDatabaseType.DERBY.

3.9.5. Testing data access logic with an embedded database
Embedded databases provide a lightweight way to test data access code. The next example is a data access integration test template that uses an embedded database. Using such a template can be useful for one-off tests when the embedded database does not need to be reused across test classes. However, if you want to create an embedded database that is shared within a test suite, consider using the Spring TestContext Framework and configuring the embedded database as a bean in the Spring ApplicationContext, as described earlier in this section (by using Spring XML or programmatically). The following listing shows the test template:

public class DataAccessIntegrationTestTemplate {

    private EmbeddedDatabase db;

    @BeforeEach
    public void setUp() {
        // creates an HSQL in-memory database populated from default scripts
        // classpath:schema.sql and classpath:data.sql
        db = new EmbeddedDatabaseBuilder()
                .generateUniqueName(true)
                .addDefaultScripts()
                .build();
    }

    @Test
    public void testDataAccess() {
        JdbcTemplate template = new JdbcTemplate(db);
        template.query( /* ... */ );
    }

    @AfterEach
    public void tearDown() {
        db.shutdown();
    }

}

3.9.6. Generating unique names for embedded databases
Development teams often encounter errors with embedded databases if their test suite inadvertently attempts to recreate additional instances of the same database. This can happen quite easily if an XML configuration file or @Configuration class is responsible for creating an embedded database and the corresponding configuration is then reused across multiple testing scenarios within the same test suite (that is, within the same JVM process); for example, integration tests against an embedded database whose ApplicationContext configuration differs only with regard to which bean definition profiles are active.

The root cause of such errors is the fact that Spring's EmbeddedDatabaseFactory (used internally by both the <jdbc:embedded-database> XML namespace element and the EmbeddedDatabaseBuilder for Java configuration) sets the name of the embedded database to testdb if it is not otherwise specified. For the case of <jdbc:embedded-database>, the embedded database is typically assigned a name equal to the bean's id (often something like dataSource). Thus, subsequent attempts to create an embedded database do not result in a new database. Instead, the same JDBC connection URL is reused, and attempts to create a new embedded database actually point to an existing embedded database created from the same configuration.

To address this common issue, Spring Framework 4.2 provides support for generating unique names for embedded databases. To enable the use of generated names, use one of the following options:

  • EmbeddedDatabaseFactory.setGenerateUniqueDatabaseName()
  • EmbeddedDatabaseBuilder.generateUniqueName()
  • <jdbc:embedded-database generate-name="true" ... >

3.9.7. Extending the embedded database support
You can extend Spring JDBC embedded database support in two ways:

  • Implement EmbeddedDatabaseConfigurer to support new embedded database types.
  • Implement DataSourceFactory to support new data source implementations, such as managing connection pools for embedded database connections.

We encourage you to contribute extensions of the Spring JDBC embedded database support back to the Spring community at GitHub Issues.

3.10. Initializing a DataSource
The org.springframework.jdbc.datasource.init package provides support for initializing an existing DataSource. The embedded database support provides one option for creating and initializing a DataSource for an application. However, you may sometimes need to initialize an instance that runs on a server somewhere.

3.10.1. Initializing a database by using Spring XML
If you want to initialize a database and you can provide a reference to a DataSource bean, you can use the initialize-database tag in the spring-jdbc namespace:

<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="classpath:com/foo/sql/db-schema.sql"/>
    <jdbc:script location="classpath:com/foo/sql/db-test-data.sql"/>
</jdbc:initialize-database>

The preceding example runs the two specified scripts against the database. The first script creates a schema, and the second populates tables with a test data set. The script locations can also be patterns with wildcards in the usual Ant style used for resources in Spring (for example, classpath*:/com/foo/**/sql/*-data.sql). If you use a pattern, the scripts are run in the lexical order of their URL or filename.
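Because matched scripts run in lexical order, numeric prefixes can sort in surprising ways ("10-" sorts before "2-"). The following self-contained sketch (the file names are hypothetical) shows the ordering that would be applied, and suggests zero-padding prefixes when order matters:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ScriptOrderSketch {

    public static void main(String[] args) {
        // Hypothetical names matched by a pattern such as classpath*:/com/foo/**/sql/*.sql
        List<String> matched = new ArrayList<>(
                List.of("2-data.sql", "10-data.sql", "1-schema.sql"));

        Collections.sort(matched); // lexical (String) ordering, as the initializer uses

        // "10-data.sql" runs before "2-data.sql"; prefer 01-, 02-, ..., 10- prefixes.
        System.out.println(matched); // prints: [1-schema.sql, 10-data.sql, 2-data.sql]
    }
}
```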

The default behavior of the database initializer is to unconditionally run the provided scripts. This may not always be what you want; for example, if you run the scripts against a database that already has test data in it. The likelihood of accidentally deleting data is reduced by following the common pattern (shown earlier) of creating the tables first and then inserting the data. The first step fails if the tables already exist.

However, to gain more control over the creation and deletion of existing data, the XML namespace provides a few additional options. The first is a flag to switch the initialization on and off. You can set this according to the environment (such as pulling a boolean value from system properties or from an environment bean). The following example gets a value from a system property:

<jdbc:initialize-database data-source="dataSource"
    enabled="#{systemProperties.INITIALIZE_DATABASE}"> 
    <jdbc:script location="..."/>
</jdbc:initialize-database>

The second option to control what happens with existing data is to be more tolerant of failures. To this end, you can control the ability of the initializer to ignore certain errors in the SQL it runs from the scripts, as the following example shows:

<jdbc:initialize-database data-source="dataSource" ignore-failures="DROPS">
    <jdbc:script location="..."/>
</jdbc:initialize-database>

In the preceding example, we are saying that we expect that, sometimes, the scripts are run against an empty database, so some DROP statements in the scripts would fail. Consequently, failed SQL DROP statements are ignored, but other failures cause an exception. This is useful if your SQL dialect does not support DROP ... IF EXISTS (or similar) but you want to unconditionally remove all test data before re-creating it. In that case, the first script is usually a set of DROP statements, followed by a set of CREATE statements.

You can set the ignore-failures option to NONE (the default), DROPS (ignore failed drops), or ALL (ignore all failures).
Each statement should be separated by ; or a new line (if the ; character is not present in the script at all). You can control this globally or script by script, as the following example shows:

<jdbc:initialize-database data-source="dataSource" separator="@@"> 
    <jdbc:script location="classpath:com/myapp/sql/db-schema.sql" separator=";"/> 
    <jdbc:script location="classpath:com/myapp/sql/db-test-data-1.sql"/>
    <jdbc:script location="classpath:com/myapp/sql/db-test-data-2.sql"/>
</jdbc:initialize-database>

In this case, the two test-data scripts use @@ as the statement separator, and only db-schema.sql uses ;. This configuration specifies that the default separator is @@ and overrides that default for the db-schema script.
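For illustration, a test-data script written for the @@ separator configured above might look like the following (the table and rows are hypothetical):

```sql
INSERT INTO t_actor (id, first_name, last_name) VALUES (1, 'Alice', 'Anderson')@@
INSERT INTO t_actor (id, first_name, last_name) VALUES (2, 'Bob', 'Brown')@@
```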
If you need more control than the XML namespace, you can use the DataSourceInitializer directly and define it as a component in your application.

Initialization of other components that depend on the database
A large class of applications (those that do not use the database until after the Spring context has started) can use the database initializer with no further complications. If your application is not one of those, you might need to read the rest of this section.

The database initializer depends on a DataSource instance and runs the scripts provided in its initialization callback (analogous to an init-method in an XML bean definition, a @PostConstruct method in a component, or the afterPropertiesSet() method in a component that implements InitializingBean). If other beans depend on the same data source and use that data source in an initialization callback, there might be a problem because the data has not yet been initialized. A common example of this is a cache that initializes eagerly and loads data from the database on application startup.

To solve this problem, you have two options: change the cache initialization policy to a later stage, or make sure to initialize the database initializer first.
If the application is under your control, it may be easy to change the cache initialization policy, otherwise it is not. Some suggestions on how to achieve this include:

  • Initialize the cache lazily on first usage, which improves application startup time.
  • Have your cache, or a separate component that initializes the cache, implement Lifecycle or SmartLifecycle. When the application context starts, you can automatically start a SmartLifecycle by setting its autoStartup flag, and you can manually start a Lifecycle by calling ConfigurableApplicationContext.start() on the enclosing context.
  • Use a Spring ApplicationEvent or similar custom observer mechanism to trigger the cache initialization. ContextRefreshedEvent is always published by the context when it is ready for use (after all beans have been initialized), so that is often a useful hook (this is how SmartLifecycle works by default).

It is also easy to ensure that the database initializer is initialized first. Some suggestions on how to implement this include:

  • Rely on the default behavior of the Spring BeanFactory, which is that beans are initialized in registration order. You can easily arrange that by adopting the common practice of a set of <import/> elements in XML configuration that order your application modules and ensure that the database and database initialization are listed first.
  • Separate the DataSource and the business components that use it, and control their startup order by putting them in separate ApplicationContext instances (for example, the parent context contains the DataSource, and the child context contains the business components). This structure is common in Spring web applications but can be more generally applied.
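In addition to the suggestions above, a simple option (sketched here with hypothetical bean names and script location) is Spring's bean-level depends-on attribute, which forces the initializer bean to be created, and therefore its scripts run, before the dependent bean:

```xml
<jdbc:initialize-database id="dbInitializer" data-source="dataSource">
    <jdbc:script location="classpath:db-schema.sql"/>
</jdbc:initialize-database>

<!-- myCache is created (and its init callback run) only after dbInitializer -->
<bean id="myCache" class="com.foo.Cache" depends-on="dbInitializer"/>
```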

 


Tags: JDBC Spring Database SQL

Posted on Sat, 11 Jan 2020 01:12:55 -0800 by crash58