INSERT of 10 million queries under 10 minutes in Oracle?

伪装坚强ぢ 2020-12-15 09:08

I am working on a file loader program.

The purpose of this program is to take an input file, do some conversions on its data, and then upload the data into an Oracle database.

4 Answers
  • 2020-12-15 09:10

    You should try bulk inserting your data. For that purpose you can use OCI*ML; there is a discussion of it here, and a notable article here. Or you may try Oracle's bulk loader, SQL*Loader (sqlldr), itself to increase your upload speed. To do that, serialize the data into a CSV file and call sqlldr with the CSV as its input.
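
    For instance, a minimal sketch of the SQL*Loader route, reusing the ABC table and columns from the examples below; the file names and credentials are hypothetical:

    -- load.ctl (hypothetical file name)
    LOAD DATA
    INFILE 'data.csv'
    APPEND
    INTO TABLE abc
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (SSM_ID, invocation_id, calc_id, analytic_id, analytic_value,
     override, update_source)

    and the call itself:

    sqlldr userid=user/pass@db control=load.ctl log=load.log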

    Another possible optimization is the transaction strategy: try inserting all the data in one transaction per thread/connection.

    Another approach is to use a multitable INSERT ALL:

    INSERT ALL
       INTO ABC (SSM_ID, invocation_id, calc_id, analytic_id, analytic_value,
                 override, update_source)
          VALUES ('c', 'b', NULL, 'test', 123, 'N', 'asdf')
       INTO ABC (SSM_ID, invocation_id, calc_id, analytic_id, analytic_value,
                 override, update_source)
          VALUES ('a', 'b', NULL, 'test', 123, 'N', 'asdf')
       INTO ABC (SSM_ID, invocation_id, calc_id, analytic_id, analytic_value,
                 override, update_source)
          VALUES ('b', 'b', NULL, 'test', 123, 'N', 'asdf')
    SELECT 1 FROM DUAL;
    

    instead of an INSERT ... SELECT ... UNION ALL statement.

    Your sample data looks interdependent, which suggests inserting one significant row and then extending it into four rows with a post-insert SQL query.

    Also, turn off all indexes before the insert batch (or drop them and re-create them once the bulk load is done). A table index reduces insert performance while you don't actually use it at that time: it computes an entry for every inserted row and performs the corresponding maintenance.
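
    For example, a sketch of the disable/rebuild cycle around the load, assuming a hypothetical index name ABC_IX1 (note this does not work for indexes that enforce unique constraints):

    ALTER INDEX abc_ix1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    -- ... run the bulk insert ...
    ALTER INDEX abc_ix1 REBUILD;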

    Using prepared-statement syntax should speed up the upload routine, because the server will already have a parsed, cached statement.
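
    A minimal OCCI sketch of that, combined with the one-transaction-per-connection strategy above; the connection handling is omitted, and the column subset, bind sizes, and batch shape are assumptions:

    // Prepared INSERT executed as one batched round trip via OCCI
    // statement iterations, with a single commit for the whole batch.
    #include <occi.h>
    #include <string>
    #include <vector>
    using namespace oracle::occi;

    void bulkInsert(Connection *conn, const std::vector<std::string> &ids)
    {
        Statement *stmt = conn->createStatement(
            "insert into abc (SSM_ID, analytic_id) values (:1, :2)");
        stmt->setMaxIterations(ids.size());   // rows per round trip
        stmt->setMaxParamSize(1, 32);         // required before string binds
        stmt->setMaxParamSize(2, 32);
        for (std::size_t i = 0; i < ids.size(); ++i) {
            stmt->setString(1, ids[i]);
            stmt->setString(2, "test");
            if (i + 1 < ids.size())
                stmt->addIteration();         // queue this row, keep binding
        }
        stmt->executeUpdate();                // executes all queued iterations
        conn->commit();                       // one commit per batch
        conn->terminateStatement(stmt);
    }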

    Then, optimize your C++ code: move operations out of the loop:

     //! Prepare the query text once, outside the loop
       std::string insert_query = "insert into ";
       insert_query += Context::instance().getUpdateTable();
       insert_query += " (SSM_ID, invocation_id, calc_id,"
                       " analytic_id, analytic_value, override, update_source)\n";
       while (startOffset < statements.size())
       { ... }
    
  • 2020-12-15 09:15

    By the way, did you try to increase the number of physical clients, not just threads, by running in a cloud on several VMs or on several physical machines? I recently read comments, I think from the Aerospike developers, explaining that many people are unable to reproduce their results simply because it is not that easy to make a client actually send that many queries per second (above 1M per second in their case). For instance, for their benchmark they had to run 4 clients in parallel. Maybe this particular Oracle driver is just not fast enough to support more than 7-8 thousand requests per second on a single machine?

  • 2020-12-15 09:18

    I know others have mentioned this and you don't want to hear it, but use SQL*Loader or external tables. My average load time for tables of approximately the same width is 12.57 seconds for just over 10m rows. These utilities have been explicitly designed to load data into the database quickly and are pretty good at it. This may incur some additional time penalties depending on the format of your input file, but there are quite a few options and I've rarely had to change files prior to loading.
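
    An external-table sketch, under assumed column types and a hypothetical directory object; the database then reads the flat file in place and the load becomes a plain INSERT ... SELECT:

    CREATE OR REPLACE DIRECTORY load_dir AS '/data/loads';

    CREATE TABLE abc_ext (
       ssm_id          VARCHAR2(10),
       invocation_id   VARCHAR2(10),
       calc_id         VARCHAR2(10),
       analytic_id     VARCHAR2(30),
       analytic_value  NUMBER,
       override        VARCHAR2(1),
       update_source   VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY load_dir
       ACCESS PARAMETERS (
          RECORDS DELIMITED BY NEWLINE
          FIELDS TERMINATED BY ','
       )
       LOCATION ('data.csv')
    );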

    If you're unwilling to do this then you don't have to upgrade your hardware yet; you need to remove every possible impediment to loading this quickly. To enumerate them, remove:

    1. The index
    2. The trigger
    3. The sequence
    4. The partition

    With all of these you're obliging the database to perform more work, and because you're doing this transactionally, you're not using the database to its full potential.
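
    For example, the triggers can be switched off around the load (a sketch; this keeps their definitions in place):

    ALTER TABLE abc DISABLE ALL TRIGGERS;
    -- ... load ...
    ALTER TABLE abc ENABLE ALL TRIGGERS;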

    Load the data into a separate table, say ABC_LOAD. After the data has been completely loaded perform a single INSERT statement into ABC.

    insert into abc
    select abc_seq.nextval, a.*
      from abc_load a
    

    When you do this (and even if you don't) ensure that the sequence cache size is correct; to quote:

    When an application accesses a sequence in the sequence cache, the sequence numbers are read quickly. However, if an application accesses a sequence that is not in the cache, then the sequence must be read from disk to the cache before the sequence numbers are used.

    If your applications use many sequences concurrently, then your sequence cache might not be large enough to hold all the sequences. In this case, access to sequence numbers might often require disk reads. For fast access to all sequences, be sure your cache has enough entries to hold all the sequences used concurrently by your applications.

    This means that if you have 10 threads concurrently writing 500 records each using this sequence, then you need a cache size of 5,000. The ALTER SEQUENCE documentation states how to change this:

    alter sequence abc_seq cache 5000
    

    If you follow my suggestion I'd up the cache size to something around 10.5m.

    Look into using the APPEND hint (see also Oracle Base); this instructs Oracle to use a direct-path insert, which appends data directly to the end of the table rather than looking for space to put it. You won't be able to use this if your table has indexes, but you could use it in ABC_LOAD:

    insert /*+ append */ into ABC (SSM_ID, invocation_id , calc_id, ... )
    select 'c','b',NULL, 'test', 123 , 'N', 'asdf' from dual
    union all select 'a','b',NULL, 'test', 123 , 'N', 'asdf' from dual
    union all select 'b','b',NULL, 'test', 123 , 'N', 'asdf' from dual
    union all select 'c','g',NULL, 'test', 123 , 'N', 'asdf' from dual
    

    If you use the APPEND hint, I'd add a TRUNCATE of ABC_LOAD after you've inserted into ABC; otherwise that table will grow indefinitely. This should be safe as you will have finished using the table by then.
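
    TRUNCATE, unlike DELETE, also resets the table's high-water mark, which matters here because direct-path inserts always write above it:

    TRUNCATE TABLE abc_load;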

    You don't mention what version or edition of Oracle you're using. There are a number of extra little tricks you can use:

    • Oracle 12c

      This version supports identity columns; you could get rid of the sequence completely.

      CREATE TABLE ABC (
         seq_no NUMBER GENERATED AS IDENTITY (INCREMENT BY 5000),
         ...
      );
      
    • Oracle 11g r2

      If you keep the trigger, you can assign the sequence value directly instead of selecting it from DUAL:

      :new.seq_no := ABC_seq.nextval;
      
    • Oracle Enterprise Edition

      If you're using Oracle Enterprise Edition, you can speed up the INSERT from ABC_LOAD by using the PARALLEL hint:

      insert /*+ parallel */ into abc
      select abc_seq.nextval, a.*
        from abc_load a
      

      This can cause its own problems (too many parallel processes etc.), so test. It might help for the smaller batch inserts, but it's less likely to, as you'll lose time computing which thread should process what.
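
      Note that for the insert itself (and not just the query) to actually run in parallel, parallel DML also has to be enabled in the session first:

      ALTER SESSION ENABLE PARALLEL DML;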


    tl;dr

    Use the utilities that come with the database.

    If you can't use them then get rid of everything that might slow the insert down and do it in bulk, 'cause that's what the database is good at.

  • 2020-12-15 09:33

    If you have a text file, you should try SQL*Loader with direct path. It is really fast, and it is designed for exactly this kind of massive data load. Have a look at the options that can improve performance.
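
    A sketch of such an invocation (the control and log file names are hypothetical); DIRECT=TRUE bypasses conventional SQL processing, and SKIP_INDEX_MAINTENANCE defers index work until after the load:

    sqlldr userid=user/pass@db control=load.ctl log=load.log \
           direct=true skip_index_maintenance=true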

    As a secondary advantage for ETL, your clear-text file will be smaller and easier to audit than 10^7 INSERT statements.

    If you need to apply some transformations, you can do them afterwards inside Oracle.
