Question
I am currently inserting records one-by-one into a table from C++ code using OCI. The data is in a hashmap of structs; I iterate over the elements of the map, binding the attributes of each struct to the columns of a record in the table, e.g.:
define insert query
use OCIBindByName() for all the columns of the record
iterate over map
    assign bind variables from the attributes of the struct
    OCIStmtExecute()
end
This is pretty slow, so I'd like to speed it up by doing a bulk insert. What is a good way to do this? Should I use an array of structs to insert all the records in one OCIStmtExecute? Do you have any example code that shows how to do this?
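For context, the row-at-a-time pattern described above might look roughly like the following sketch. The Rec struct, the emp table, and the column names are made up for illustration, and error checking and commits are omitted:

#include <oci.h>
#include <cstring>
#include <string>
#include <unordered_map>

// Hypothetical record type; assumes name is null-terminated.
struct Rec { int id; char name[32]; };

void insert_one_by_one(OCISvcCtx *svchp, OCIError *errhp, OCIEnv *envhp,
                       const std::unordered_map<std::string, Rec> &recs)
{
    const char *sql = "INSERT INTO emp (id, name) VALUES (:id, :name)";
    OCIStmt *stmthp = nullptr;
    OCIHandleAlloc(envhp, (void **)&stmthp, OCI_HTYPE_STMT, 0, nullptr);
    OCIStmtPrepare(stmthp, errhp, (const OraText *)sql, (ub4)strlen(sql),
                   OCI_NTV_SYNTAX, OCI_DEFAULT);

    Rec buf;                      // bind once to this buffer, refill it per row
    OCIBind *b1 = nullptr, *b2 = nullptr;
    OCIBindByName(stmthp, &b1, errhp, (const OraText *)":id", -1,
                  &buf.id, (sb4)sizeof(buf.id), SQLT_INT,
                  nullptr, nullptr, nullptr, 0, nullptr, OCI_DEFAULT);
    OCIBindByName(stmthp, &b2, errhp, (const OraText *)":name", -1,
                  buf.name, (sb4)sizeof(buf.name), SQLT_STR,
                  nullptr, nullptr, nullptr, 0, nullptr, OCI_DEFAULT);

    for (const auto &kv : recs) {
        buf = kv.second;          // assign bind variables from the struct
        OCIStmtExecute(svchp, stmthp, errhp, 1, 0,   // one row per round trip
                       nullptr, nullptr, OCI_DEFAULT);
    }
    OCIHandleFree(stmthp, OCI_HTYPE_STMT);
}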
Answer 1:
Here is some sample code showing how I implemented this in OCI*ML. In summary, the way to do this is (say for a table with one column of integers):
1. malloc() a block of memory of sizeof(int) × the number of rows and populate it. This could be an array.
2. Call OCIBindByPos() with that pointer for *valuep and the size for value_sz.
3. Call OCIStmtExecute() with iters set to the number of rows from step 1.
In my experience, speedups of 100× are certainly possible.
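A minimal sketch of those three steps for a single integer column might look like this. The handle setup, the my_table table, and the column name are assumptions rather than part of the original answer, and error checking and the commit are omitted:

#include <oci.h>
#include <cstring>
#include <vector>

void bulk_insert_ints(OCISvcCtx *svchp, OCIError *errhp, OCIEnv *envhp,
                      const std::vector<int> &values)
{
    const char *sql = "INSERT INTO my_table (n) VALUES (:1)";

    OCIStmt *stmthp = nullptr;
    OCIHandleAlloc(envhp, (void **)&stmthp, OCI_HTYPE_STMT, 0, nullptr);
    OCIStmtPrepare(stmthp, errhp, (const OraText *)sql, (ub4)strlen(sql),
                   OCI_NTV_SYNTAX, OCI_DEFAULT);

    // Step 1: one contiguous block holding every row's value.
    // std::vector already provides that; values.data() plays the role of
    // the malloc()'d block in the answer.
    OCIBind *bindp = nullptr;
    OCIBindByPos(stmthp, &bindp, errhp, 1,
                 (void *)values.data(),      // *valuep: start of the block
                 (sb4)sizeof(int),           // value_sz: size of one element
                 SQLT_INT,
                 nullptr, nullptr, nullptr, 0, nullptr, OCI_DEFAULT);

    // Step 3: execute once, with iters = number of rows.
    OCIStmtExecute(svchp, stmthp, errhp,
                   (ub4)values.size(),       // iters
                   0, nullptr, nullptr, OCI_DEFAULT);

    OCIHandleFree(stmthp, OCI_HTYPE_STMT);
}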
Answer 2:
What you probably want is a 'bulk insert'. Bulk inserts from an array are done with array binds: you bind the data of the first row (the first structure of the array) and set the skips, which are generally the size of the structure. After that you can execute the statement once with the number of array elements. Binding and executing row by row creates overhead, which is why bulk inserts are used.
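Here is a sketch of that array-bind idea, using OCIBindArrayOfStruct to set the skips. The Row struct, the emp table, and the column names are hypothetical, strings are assumed to be null-terminated, and error checking and the commit are omitted:

#include <oci.h>
#include <cstring>
#include <vector>

struct Row {
    int  id;
    char name[32];
};

void bulk_insert_rows(OCISvcCtx *svchp, OCIError *errhp, OCIEnv *envhp,
                      const std::vector<Row> &rows)
{
    const char *sql = "INSERT INTO emp (id, name) VALUES (:id, :name)";

    OCIStmt *stmthp = nullptr;
    OCIHandleAlloc(envhp, (void **)&stmthp, OCI_HTYPE_STMT, 0, nullptr);
    OCIStmtPrepare(stmthp, errhp, (const OraText *)sql, (ub4)strlen(sql),
                   OCI_NTV_SYNTAX, OCI_DEFAULT);

    OCIBind *bind_id = nullptr, *bind_name = nullptr;

    // Bind against the FIRST element only; the skip tells OCI how far to
    // jump to reach the same field in the next element (the struct size).
    OCIBindByName(stmthp, &bind_id, errhp, (const OraText *)":id", -1,
                  (void *)&rows[0].id, (sb4)sizeof(int), SQLT_INT,
                  nullptr, nullptr, nullptr, 0, nullptr, OCI_DEFAULT);
    OCIBindArrayOfStruct(bind_id, errhp, sizeof(Row), 0, 0, 0);

    OCIBindByName(stmthp, &bind_name, errhp, (const OraText *)":name", -1,
                  (void *)rows[0].name, (sb4)sizeof(rows[0].name), SQLT_STR,
                  nullptr, nullptr, nullptr, 0, nullptr, OCI_DEFAULT);
    OCIBindArrayOfStruct(bind_name, errhp, sizeof(Row), 0, 0, 0);

    // One round trip for the whole array.
    OCIStmtExecute(svchp, stmthp, errhp, (ub4)rows.size(),
                   0, nullptr, nullptr, OCI_DEFAULT);

    OCIHandleFree(stmthp, OCI_HTYPE_STMT);
}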
Answer 3:
bulk insert example.txt
by
{
    delimiter=','  // or whatever delimiter your text file uses
    size=200kb     // or the size of your text file
}
Answer 4:
Use DPL (Direct Path Loading). Refer to docs.oracle.com for more info.
Source: https://stackoverflow.com/questions/7733934/bulk-insert-using-oci