Why is Spring's jdbcTemplate.batchUpdate() so slow?

Backend · open · 8 answers · 2307 views

Asked by 终归单人心 on 2020-12-04 18:11

I'm trying to find the fastest way to do batch inserts.

I tried to insert several batches with jdbcTemplate.update(String sql), where …

8 Answers
  • 2020-12-04 18:54

    Solution given by @Rakesh worked for me. Significant improvement in performance: earlier it took 8 minutes; with this solution it takes less than 2 minutes.

    DataSource ds = jdbcTemplate.getDataSource();
    Connection connection = ds.getConnection();
    connection.setAutoCommit(false);
    String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
    PreparedStatement ps = connection.prepareStatement(sql);
    final int batchSize = 1000;
    int count = 0;
    
    for (Employee employee: employees) {
    
        ps.setString(1, employee.getName());
        ps.setString(2, employee.getCity());
        ps.setString(3, employee.getPhone());
        ps.addBatch();
    
        ++count;
    
        if(count % batchSize == 0 || count == employees.size()) {
            ps.executeBatch();
            ps.clearBatch(); 
        }
    }
    
    connection.commit();
    ps.close();
    
  • I don't know if this will work for you, but here's a Spring-free way that I ended up using. It was significantly faster than the various Spring methods I tried. I even tried using the JDBC template batch update method the other answer describes, but even that was slower than I wanted. I'm not sure what the deal was and the Internets didn't have many answers either. I suspected it had to do with how commits were being handled.

    This approach is just straight JDBC using the java.sql packages and PreparedStatement's batch interface. This was the fastest way that I could get 24M records into a MySQL DB.

    I more or less just built up collections of "record" objects and then called the below code in a method that batch inserted all the records. The loop that built the collections was responsible for managing the batch size.

    I was trying to insert 24M records into a MySQL DB, and it was going ~200 records per second using Spring batch. When I switched to this method, it went up to ~2500 records per second, so my 24M-record load went from a theoretical 1.5 days to about 2.5 hours.
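    As a sanity check on those numbers, here is a quick back-of-the-envelope helper (not part of the original code, just arithmetic):

    ```java
    // Back-of-the-envelope check of the load times quoted above.
    class Throughput {
        // Hours needed to load `records` rows at `perSecond` rows/sec.
        static double hours(long records, double perSecond) {
            return records / perSecond / 3600.0;
        }
    }
    ```

    24M rows at ~200 rows/s is about 33 hours (roughly 1.4 days); at ~2500 rows/s it is about 2.7 hours, which matches the figures above.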

    First create a connection...

    Connection conn = null;
    try{
        Class.forName("com.mysql.jdbc.Driver");
        conn = DriverManager.getConnection(connectionUrl, username, password);
    }catch(SQLException e){}catch(ClassNotFoundException e){}
    

    Then create a prepared statement and load it with batches of values for insert, and then execute as a single batch insert...

    PreparedStatement ps = null;
    try{
        conn.setAutoCommit(false);
        ps = conn.prepareStatement(sql); // INSERT INTO TABLE(x, y, i) VALUES(1,2,3)
        for(MyRecord record : records){
            try{
                ps.setString(1, record.getX());
                ps.setString(2, record.getY());
                ps.setString(3, record.getI());
    
                ps.addBatch();
            } catch (Exception e){
                ps.clearParameters();
                logger.warn("Skipping record...", e);
            }
        }
    
        ps.executeBatch();
        conn.commit();
    } catch (SQLException e){
    } finally {
        if(null != ps){
            try {ps.close();} catch (SQLException e){}
        }
    }
    

    Obviously I've removed error handling, and the query and Record object are notional.

    Edit: Since your original question was comparing the insert into foobar values (?,?,?), (?,?,?)...(?,?,?) method to Spring batch, here's a more direct response to that:

    It looks like your original method is likely the fastest way to do bulk data loads into MySQL without using something like the LOAD DATA INFILE approach. A quote from the MySQL docs (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):

    If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.

    You could modify the Spring JDBC Template batchUpdate method to do an insert with multiple VALUES specified per 'setValues' call, but you'd have to manually keep track of the index values as you iterate over the set of things being inserted. And you'd run into a nasty edge case at the end when the total number of things being inserted isn't a multiple of the number of VALUES lists you have in your prepared statement.

    If you use the approach I outline, you could do the same thing (use a prepared statement with multiple VALUES lists) and then when you get to that edge case at the end, it's a little easier to deal with because you can build and execute one last statement with exactly the right number of VALUES lists. It's a bit hacky, but most optimized things are.
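    The multi-VALUES trick described above can be sketched as a small SQL builder (table and column names here are illustrative, not from the original answer):

    ```java
    import java.util.Collections;

    // Sketch of the multi-VALUES approach described above: builds
    // "INSERT INTO t (x, y, i) VALUES (?,?,?),(?,?,?),..." for a given
    // row count, so the final partial batch can get a statement with
    // exactly the right number of VALUES lists.
    class MultiValuesSql {
        static String multiValuesInsert(String table, String[] columns, int rows) {
            String placeholders =
                "(" + String.join(",", Collections.nCopies(columns.length, "?")) + ")";
            return "INSERT INTO " + table + " (" + String.join(", ", columns)
                + ") VALUES " + String.join(",", Collections.nCopies(rows, placeholders));
        }
    }
    ```

    For the edge case at the end, you call it once more with the leftover row count and bind just those parameters.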

  • 2020-12-04 19:02

    Change your SQL insert to a single parameterized row, i.e. INSERT INTO TABLE(x, y, i) VALUES(?, ?, ?). The framework creates a loop for you. For example:

    public void insertBatch(final List<Customer> customers){
    
      String sql = "INSERT INTO CUSTOMER " +
        "(CUST_ID, NAME, AGE) VALUES (?, ?, ?)";
    
      getJdbcTemplate().batchUpdate(sql, new BatchPreparedStatementSetter() {
    
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Customer customer = customers.get(i);
            ps.setLong(1, customer.getCustId());
            ps.setString(2, customer.getName());
            ps.setInt(3, customer.getAge() );
        }
    
        @Override
        public int getBatchSize() {
            return customers.size();
        }
      });
    }
    

    If you have something like this, Spring will do something like:

    for(int i = 0; i < getBatchSize(); i++){
       execute the prepared statement with the parameters for the current iteration
    }
    

    The framework first creates a PreparedStatement from the query (the sql variable), then the setValues method is called and the statement is executed. That is repeated as many times as you specify in the getBatchSize() method. So the right way to write the insert statement is with only one VALUES clause. You can take a look at http://docs.spring.io/spring/docs/3.0.x/reference/jdbc.html
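    Note that getBatchSize() here returns the whole list size, so everything goes into one JDBC batch. If the list is very large, one option is to chunk it and call batchUpdate once per chunk; a minimal partitioning sketch (the chunk size is up to you):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    class Chunker {
        // Splits a list into consecutive chunks of at most `size` elements,
        // e.g. so each chunk can be fed to a separate batchUpdate call.
        static <T> List<List<T>> chunks(List<T> list, int size) {
            List<List<T>> out = new ArrayList<>();
            for (int i = 0; i < list.size(); i += size) {
                out.add(list.subList(i, Math.min(i + size, list.size())));
            }
            return out;
        }
    }
    ```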

  • 2020-12-04 19:07

    I also faced the same issue with the Spring JDBC template. Probably the statement was executed and committed on every insert (or on small chunks), which slowed things down.

    I replaced the jdbcTemplate.batchUpdate() code with plain JDBC batch-insertion code and found a major performance improvement.

    DataSource ds = jdbcTemplate.getDataSource();
    Connection connection = ds.getConnection();
    connection.setAutoCommit(false);
    String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
    PreparedStatement ps = connection.prepareStatement(sql);
    final int batchSize = 1000;
    int count = 0;
    
    for (Employee employee: employees) {
    
        ps.setString(1, employee.getName());
        ps.setString(2, employee.getCity());
        ps.setString(3, employee.getPhone());
        ps.addBatch();
    
        ++count;
    
        if(count % batchSize == 0 || count == employees.size()) {
            ps.executeBatch();
            ps.clearBatch(); 
        }
    }
    
    connection.commit();
    ps.close();
    

    See also: JDBC batch insert performance

  • 2020-12-04 19:07

    I found a major improvement setting the argTypes array in the call.

    In my case, with Spring 4.1.4 and Oracle 12c, for insertion of 5000 rows with 35 fields:

    jdbcTemplate.batchUpdate(insert, parameters); // Takes 7 seconds
    
    jdbcTemplate.batchUpdate(insert, parameters, argTypes); // Takes 0.08 seconds!!!
    

    The argTypes param is an int array where you set each field in this way:

    int[] argTypes = new int[35];
    argTypes[0] = Types.VARCHAR;
    argTypes[1] = Types.VARCHAR;
    argTypes[2] = Types.VARCHAR;
    argTypes[3] = Types.DECIMAL;
    argTypes[4] = Types.TIMESTAMP;
    .....
    

    I debugged org.springframework.jdbc.core.JdbcTemplate and found that most of the time was spent trying to determine the type of each field, and this was done for every record.
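    Putting it together, a minimal sketch of the typed overload (the table, columns, and row helper are illustrative names, not from the original answer; JdbcTemplate does provide a batchUpdate(String, List<Object[]>, int[]) overload):

    ```java
    import java.sql.Types;

    // Illustrative sketch: declaring JDBC types up front lets Spring skip
    // per-value type detection on every row. CUSTOMER and its columns are
    // made-up names.
    class ArgTypesExample {
        static final String SQL =
            "INSERT INTO CUSTOMER (CUST_ID, NAME, AGE) VALUES (?, ?, ?)";
        static final int[] ARG_TYPES = { Types.NUMERIC, Types.VARCHAR, Types.INTEGER };

        // One Object[] per row, in column order, for the List<Object[]> parameter.
        static Object[] row(long custId, String name, int age) {
            return new Object[] { custId, name, age };
        }
    }

    // With a real JdbcTemplate the call would then be:
    // jdbcTemplate.batchUpdate(ArgTypesExample.SQL, parameters, ArgTypesExample.ARG_TYPES);
    ```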

    Hope this helps!

  • 2020-12-04 19:11

    These parameters in the JDBC connection URL can make a big difference in the speed of batched statements. In my experience, they speed things up:

    ?useServerPrepStmts=false&rewriteBatchedStatements=true
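    For example, a full MySQL connection URL with these flags might look like this (host, port, and database name are placeholders):

    ```
    jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=false&rewriteBatchedStatements=true
    ```

    rewriteBatchedStatements=true lets the MySQL Connector/J driver rewrite a JDBC batch of single-row inserts into multi-row INSERT statements on the wire.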

    See: JDBC batch insert performance
