ORA-04091: table [blah] is mutating, trigger/function may not see it

清歌不尽 2020-12-02 00:54

I recently started working on a large complex application, and I've just been assigned a bug due to this error:

ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it

4 Answers
  • 2020-12-02 01:03

    I think I disagree with your description of what the trigger is trying to do. It looks to me like it is meant to enforce this business rule: For a given value of t1_appnt_event, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)

    Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.

    There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:

    • The table has a numeric primary key
    • The primary key and the t1_prnt_t1_pk are both always positive numbers

    If these assumptions are true, you could create a function like this:

    create or replace function f( a number, b number ) return number deterministic as
    begin
      if a is null then return 0-b; else return a; end if;
    end;
    /
    

    and an index like this:

    CREATE UNIQUE INDEX my_index ON my_table
      ( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
    

    So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the negated primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.

    This is perhaps overly "clever", but it might help you get around your problem.

    Update from Paul Tomblin: I went with the variant of the original idea that igor put in the comments:

     CREATE UNIQUE INDEX cappec_ccip_uniq_idx 
     ON tbl1 (t1_appnt_event, 
        CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
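
    To illustrate how that index enforces the rule, here is a minimal sketch; the INSERT statements (and the assumption that t1_pk is the primary key column) are made up for the example, not taken from the real schema:

    -- Both rows index as (123, 1) because t1_prnt_t1_pk is not null,
    -- so the second insert fails with ORA-00001 (unique constraint violated).
    INSERT INTO tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) VALUES (1, 123, 456);
    INSERT INTO tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) VALUES (2, 123, 789);

    -- Rows with a NULL t1_prnt_t1_pk index as (123, t1_pk), which is unique
    -- per row, so any number of them can share the same event.
    INSERT INTO tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) VALUES (3, 123, NULL);
    INSERT INTO tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) VALUES (4, 123, NULL);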
    
  • 2020-12-02 01:03

    I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).

    If you really need to get around the mutating table error, the usual way is to create a package with a package-scoped variable: a collection of something that identifies the changed rows (I think ROWID is possible, otherwise you have to use the primary key; I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger fills this variable with every row modified by the statement, and an AFTER statement trigger then reads those rows back and validates them.

    Something like this (the syntax is probably wrong; I haven't worked with Oracle for a few years):

    CREATE OR REPLACE PACKAGE trigger_pkg AS
       PROCEDURE before_stmt_trigger;
       PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
       PROCEDURE after_stmt_trigger;
    END trigger_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
       -- Package-scoped collection of the ROWIDs touched by the current statement.
       TYPE rowid_tbl IS TABLE OF ROWID;
       modified_rows rowid_tbl;

       -- Reset the collection at the start of each triggering statement.
       PROCEDURE before_stmt_trigger IS
       BEGIN
          modified_rows := rowid_tbl();
       END before_stmt_trigger;

       -- Remember each row the statement modifies.
       PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
       BEGIN
          modified_rows.EXTEND;
          modified_rows(modified_rows.COUNT) := p_rowid;
       END for_each_row_trigger;

       -- After the statement finishes the table is no longer mutating,
       -- so the saved rows can safely be re-read and validated.
       PROCEDURE after_stmt_trigger IS
       BEGIN
          FOR i IN 1 .. modified_rows.COUNT LOOP
             SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
             -- do whatever you want to
          END LOOP;
       END after_stmt_trigger;
    END trigger_pkg;
    /

    CREATE OR REPLACE TRIGGER before_stmt_trigger BEFORE INSERT OR UPDATE ON mytable
    BEGIN
       trigger_pkg.before_stmt_trigger;
    END;
    /

    CREATE OR REPLACE TRIGGER after_stmt_trigger AFTER INSERT OR UPDATE ON mytable
    BEGIN
       trigger_pkg.after_stmt_trigger;
    END;
    /

    -- AFTER each row (not BEFORE), so :new.rowid is already assigned for inserts.
    CREATE OR REPLACE TRIGGER for_each_row_trigger
    AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW
    WHEN (new.mycolumn IS NOT NULL)
    BEGIN
       trigger_pkg.for_each_row_trigger(:new.rowid);
    END;
    /
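
    (As a side note: on Oracle 11g and later, a compound trigger can bundle the same row-collecting idea into a single object. A rough sketch, reusing the hypothetical mytable/mycolumn names from above rather than the real schema:)

    CREATE OR REPLACE TRIGGER mytable_compound_trg
    FOR INSERT OR UPDATE ON mytable
    COMPOUND TRIGGER
       -- State declared here is reset for each triggering statement.
       TYPE rowid_tbl IS TABLE OF ROWID;
       modified_rows rowid_tbl := rowid_tbl();

       AFTER EACH ROW IS
       BEGIN
          -- Collect the rowid of every modified row.
          modified_rows.EXTEND;
          modified_rows(modified_rows.COUNT) := :new.rowid;
       END AFTER EACH ROW;

       AFTER STATEMENT IS
       BEGIN
          FOR i IN 1 .. modified_rows.COUNT LOOP
             NULL; -- re-read the row here and validate it, as in after_stmt_trigger above
          END LOOP;
       END AFTER STATEMENT;
    END mytable_compound_trg;
    /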
    
  • With any trigger-based (or application-code-based) solution you need to add locking to prevent data corruption in a multi-user environment. Even if your trigger worked, or was re-written to avoid the mutating table issue, it would not stop two users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id=123 and t1_prnt_t1_pk is not null:

    Session 1> update tbl1 
               set t1_appnt_evnt_id=123 
               where t1_prnt_t1_pk =456;
               /* OK, trigger sees count of 0 */
    
    Session 2> update tbl1
               set t1_appnt_evnt_id=123
               where t1_prnt_t1_pk =789;
               /* OK, trigger sees count of 0 because 
                  session 1 hasn't committed yet */
    
    Session 1> commit;
    
    Session 2> commit;
    

    You now have a corrupted database!

    The way to avoid this (in trigger or application code) would be to lock the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:

    select appe_id 
    into   v_app_id
    from parent_table
    where appe_id = :new.t1_appnt_evnt_id
    for update;    
    

    Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
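
    For concreteness, a rough sketch of where that lock would sit in a row-level trigger; the parent_table and appe_id names follow the snippet above and are assumptions, not the real schema:

    CREATE OR REPLACE TRIGGER tbl1_lock_parent_trg
    BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON tbl1
    FOR EACH ROW
    WHEN (new.t1_prnt_t1_pk IS NOT NULL)
    DECLARE
       v_app_id parent_table.appe_id%TYPE;
    BEGIN
       -- Serialise on the parent row: a second session touching the same
       -- parent blocks here until this transaction commits or rolls back.
       SELECT appe_id
       INTO   v_app_id
       FROM   parent_table
       WHERE  appe_id = :new.t1_appnt_evnt_id
       FOR UPDATE;
       -- The duplicate check itself still has to happen at statement level
       -- (e.g. via the package approach in the previous answer), because
       -- reading tbl1 from this row trigger would raise ORA-04091 again.
    END;
    /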

    It would be much simpler and safer to implement Dave Costa's index!

    Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works insofar as the mutating table error goes away, but it makes the data integrity problem even worse! So just don't...

  • 2020-12-02 01:22

    I had a similar error with Hibernate, and flushing the session by using

    getHibernateTemplate().saveOrUpdate(o);
    getHibernateTemplate().flush();
    

    solved the problem for me. (I'm not posting my code, because I was sure everything was written properly and should have worked, but it did not until I added the flush() call shown above.) Maybe this can help someone.
