hibernate insert batch with partitioned postgresql

Asked by 一生所求 on 2021-02-07 21:27 · unresolved · 1556 views

Is there a solution for batch inserts via Hibernate into a partitioned PostgreSQL table? Currently I'm getting an error like this...

ERROR org.hibernate.jdbc.Abstra         


        
6 answers
  • 2021-02-07 21:39

    They suggest using two triggers on a partitioned table, or the @SQLInsert annotation, here: http://www.redhat.com/f/pdf/jbw/jmlodgenski_940_scaling_hibernate.pdf pages 21-26 (it also mentions an @SQLInsert specifying a custom SQL string).

    Here is an example with an after trigger to delete the extra row in the master: https://gist.github.com/copiousfreetime/59067
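    A minimal sketch of the routing trigger described in those slides follows; the table names (measurement, measurement_2021) are hypothetical, so adapt them to your schema. Returning NEW instead of NULL keeps the JDBC update count at 1, which is what Hibernate's batch check expects, but it also means the row lands in the master table, so a companion AFTER trigger (like the one in the gist above) has to delete that duplicate.

```sql
-- Sketch only: "measurement" and "measurement_2021" are hypothetical names.
CREATE OR REPLACE FUNCTION measurement_route() RETURNS trigger AS $$
BEGIN
   INSERT INTO measurement_2021 VALUES (NEW.*);  -- route the row to a child partition
   RETURN NEW;  -- NEW (not NULL), so the driver reports an update count of 1
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_route_trg
   BEFORE INSERT ON measurement
   FOR EACH ROW EXECUTE PROCEDURE measurement_route();
```

    Because RETURN NEW lets the insert also reach the master table, this only works when paired with an AFTER trigger that removes the extra master copy, as the linked gist does.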

  • 2021-02-07 21:44

    Thanks! It did the trick, no problems have popped up so far. One thing though: I had to implement a BatcherFactory class and register it in persistence.xml, like this:

    <property name="hibernate.jdbc.factory_class" value="path.to.my.batcher.factory.implementation"/>
    

    From that factory I called my Batcher implementation with the code above.

    PS: Hibernate Core 3.2.6 GA

    Thanks once again.

  • 2021-02-07 21:45

    I faced the same problem while inserting documents through Hibernate. After a lot of searching I found that Hibernate expects the insert to report an affected row, so in the trigger procedure return NEW instead of NULL, which resolves the problem, as shown below:

    RETURN NEW;

  • 2021-02-07 21:46

    It appears that if you use RULES instead of triggers for the insert, the right row count can be returned, but only with a single RULE that has no WHERE clause.
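    As a minimal sketch of that single-rule variant, assuming (hypothetically) that every row belongs in one child table:

```sql
-- With exactly one unconditional DO INSTEAD rule, the INSERT reports the real
-- row count. "tablename_single_insert" and "tablename_child" are made-up names.
CREATE RULE tablename_single_insert AS
   ON INSERT TO tablename
   DO INSTEAD INSERT INTO tablename_child VALUES (NEW.*);
```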


    Another option may be to create a view that 'wraps' the partitioned table; its trigger then returns the NEW row to indicate a successful insert, without accidentally adding an extra unwanted row to the master table.

    CREATE VIEW tablename_view AS SELECT * FROM tablename; -- create trivial wrapping view
    
    CREATE OR REPLACE FUNCTION partitioned_insert_trigger() -- partitioned insert trigger
    RETURNS TRIGGER AS $$
    BEGIN
       IF (NEW.partition_key >= 5500000000 AND
           NEW.partition_key <  6000000000) THEN
          INSERT INTO tablename_55_59 VALUES (NEW.*);
       ELSIF (NEW.partition_key >= 5000000000 AND
              NEW.partition_key <  5500000000) THEN
          INSERT INTO tablename_50_54 VALUES (NEW.*);
       ELSIF (NEW.partition_key >= 500000000 AND
              NEW.partition_key  <  1000000000) THEN
          INSERT INTO tablename_5_9 VALUES (NEW.*);
       ELSIF (NEW.partition_key >= 0 AND
              NEW.partition_key <  500000000) THEN
          INSERT INTO tablename_0_4 VALUES (NEW.*);
       ELSE
          RAISE EXCEPTION 'partition key is out of range.  Fix the trigger function';
       END IF;
       RETURN NEW; -- RETURN NEW in this case, typically you'd return NULL from this trigger, but for views we return NEW
    END;
    $$
    LANGUAGE plpgsql;
    
    CREATE TRIGGER insert_view_trigger
       INSTEAD OF INSERT ON tablename_view
       FOR EACH ROW EXECUTE PROCEDURE partitioned_insert_trigger(); -- create "INSTEAD OF" trigger
    

    ref: http://www.postgresql.org/docs/9.2/static/trigger-definition.html

    If you go the view-wrapper route, one option is to also define trivial "instead of" triggers for DELETE and UPDATE; then you can simply use the view's name in place of the normal table name in all transactions.

    Another option that uses the view is to create an insert rule so that any inserts on the main table go through the view (which uses its trigger). For example (assuming you already have partitioned_insert_trigger, tablename_view, and insert_view_trigger created as listed above):

    CREATE RULE use_right_inserter_tablename AS
          ON INSERT TO tablename
          DO INSTEAD INSERT INTO tablename_view VALUES (NEW.*);
    

    Then it will use your new working view wrapper insert.
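    With that rule in place, a plain INSERT against the base table is rewritten to target the view, whose INSTEAD OF trigger routes the row into the right partition. The column names in this usage sketch are hypothetical:

```sql
-- Hypothetical columns: the rule rewrites this into an insert on tablename_view,
-- and the INSTEAD OF trigger then sends the row to the matching child table.
INSERT INTO tablename (partition_key, payload) VALUES (5600000000, 'example');
```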

  • 2021-02-07 21:47

    I found another solution for the same problem on this webpage:

    It suggests the same solution that @rogerdpack described: change RETURN NULL to RETURN NEW, and add a new trigger that deletes the duplicated tuple from the master table with the query:

    DELETE FROM ONLY master_table;
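    Hooked into an AFTER INSERT trigger, that delete can target just the row that was inserted; a sketch assuming a hypothetical id column on master_table:

```sql
-- Companion cleanup trigger (sketch): ONLY restricts the delete to the master
-- table itself, so the copy routed into the child partition is untouched.
CREATE OR REPLACE FUNCTION master_table_cleanup() RETURNS trigger AS $$
BEGIN
   DELETE FROM ONLY master_table WHERE id = NEW.id;  -- assumes an "id" column
   RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER master_table_cleanup_trg
   AFTER INSERT ON master_table
   FOR EACH ROW EXECUTE PROCEDURE master_table_cleanup();
```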
    
  • 2021-02-07 21:55

    You might want to try using a custom Batcher by setting the hibernate.jdbc.factory_class property. Making sure Hibernate doesn't check the update counts of batch operations might fix your problem. You can achieve that by making your custom Batcher extend the class BatchingBatcher and overriding the method doExecuteBatch(...) to look like:

        @Override
        protected void doExecuteBatch(PreparedStatement ps) throws SQLException, HibernateException {
            if ( batchSize == 0 ) {
                log.debug( "no batched statements to execute" );
            }
            else {
                if ( log.isDebugEnabled() ) {
                    log.debug( "Executing batch size: " + batchSize );
                }
                try {
                    // checkRowCounts( ps.executeBatch(), ps );  // skipped: don't verify update counts
                    ps.executeBatch();
                }
                catch ( RuntimeException re ) {
                    log.error( "Exception executing batch: ", re );
                    throw re;
                }
                finally {
                    batchSize = 0;
                }
            }
        }

    Note that the new method doesn't check the results of executing the prepared statements. Keep in mind that this change might affect Hibernate in some unexpected way (or maybe not).
