Instead of opening several transactions (read from a table, write to a table, write to another table, etc.), is it possible to do all of this from a single transaction, as long as you keep that transaction alive somehow?
Short answer: If you provide an event handler for a "success" or "error" event, you can place a new request inside that event handler and not have to worry about the transaction getting automatically closed.
Long answer: Transaction committing should generally be completely transparent. The only rule is that you can't hold a transaction open while doing non-database work: you can't start a transaction and then hold it open while doing some XMLHttpRequests, or while waiting for the user to click a button.
As soon as you stop placing requests against a transaction and the last request callback finishes, the transaction is automatically closed.
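To make that concrete, here is a hedged sketch of code that breaks the rule (using fetch in place of XMLHttpRequest; the store name "items" and the URL are invented for illustration). Because the fetch yields to the event loop with no requests pending, the transaction commits before the response arrives, and the later put throws a TransactionInactiveError:

function brokenSave(db)
{
    var tx = db.transaction(["items"], "readwrite");
    var store = tx.objectStore("items");

    // The fetch yields to the event loop with no requests pending,
    // so the transaction auto-commits before the response arrives.
    fetch("/api/item")
        .then(function (response) { return response.json(); })
        .then(function (item)
        {
            // Throws TransactionInactiveError: the transaction is already finished.
            store.put(item);
        });
}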
However, you can start a transaction, use it to read some data, and then write some results.
So make sure that you have all the data that you need before you start the transaction, then do all reads and writes that you want to do in the request callbacks. Once you are done the transaction will automatically finish.
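As a sketch of that read-then-write pattern, assuming made-up store names "inbox" and "archive" in the same database:

function archiveMessage(db, messageId)
{
    // One transaction spans both stores.
    var tx = db.transaction(["inbox", "archive"], "readwrite");
    var getReq = tx.objectStore("inbox").get(messageId);

    getReq.onsuccess = function ()
    {
        var message = getReq.result;
        if (message)
        {
            // Still inside a request callback, so the transaction is still active.
            tx.objectStore("archive").put(message);
            tx.objectStore("inbox").delete(messageId);
        }
    };

    tx.oncomplete = function ()
    {
        console.log("archive finished");
    };
}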
To keep the transaction active, issue each subsequent request from the success (or error) callback of the previous one. Refer to the following sample code.
function put_data(db, tableName, data_array)
{
    // A single readwrite transaction covers every put below.
    var objectStore = db.transaction([tableName], "readwrite").objectStore(tableName);
    put_record(data_array, objectStore, 0);
}

function put_record(data_array, objectStore, row_index)
{
    if (row_index < data_array.length)
    {
        var req = objectStore.put(data_array[row_index]);
        // Scheduling the next put from inside the callback keeps the
        // transaction alive until the whole array has been written.
        req.onsuccess = function ()
        {
            put_record(data_array, objectStore, row_index + 1);
        };
        req.onerror = function ()
        {
            console.error("error", this.error);
            put_record(data_array, objectStore, row_index + 1);
        };
    }
}
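For completeness, a hedged usage sketch for the functions above; the database name "app-db", store name "items", and keyPath "id" are assumptions made up for this example:

var openReq = indexedDB.open("app-db", 1);

openReq.onupgradeneeded = function ()
{
    // Runs only when the database is created or upgraded.
    openReq.result.createObjectStore("items", { keyPath: "id" });
};

openReq.onsuccess = function ()
{
    put_data(openReq.result, "items", [
        { id: 1, name: "first" },
        { id: 2, name: "second" }
    ]);
};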
IndexedDB transactions commit as soon as the last callback is fired, so the way to keep them alive is to pass them along via callbacks.
I'm sourcing my transaction info from Jonas Sicking, a Mozilla dev and co-spec writer for IndexedDB, who commented on this excellent blog post to say the following:
The following sentence isn't correct: "Transactions today auto-commit when the transaction variable goes out of scope and no more requests can be placed against it".
Transactions never automatically commit when a variable goes out of scope. Generally they only commit when the last success/error callback fires and that callback schedules no more requests. So it's not related to the scope of any variables.
The only exception to this is if you create a transaction but place no requests against it. In that case the transaction is "committed" (whatever that means for a transaction which has no requests) as soon as you return to the event loop. In this scenario you could technically "commit" the transaction as soon as all references to it go out of scope, but it's not a particularly interesting use case to optimize.
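To make that commit timing concrete, a minimal sketch (store name assumed):

function emptyTransaction(db)
{
    var tx = db.transaction(["items"], "readonly");
    // No requests are placed against tx, so it finishes as soon as this
    // function returns to the event loop; any request placed against it
    // from a later task would throw a TransactionInactiveError.
}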
Short answer: Don't keep it alive.
To prevent race conditions, IndexedDB is designed around implicit commit, so you must NOT try to keep a transaction alive explicitly. If your algorithm seems to require it, change the algorithm so that it no longer does.
Reuse a transaction for performance and for executing ordered requests; in those cases the transaction is implicitly kept alive, as in the sketch below.
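A hedged sketch of that reuse pattern (store name "items" assumed): several requests placed against the same transaction run in the order they were queued, and the transaction stays alive until the last one completes.

function saveBatch(db, records)
{
    var tx = db.transaction(["items"], "readwrite");
    var store = tx.objectStore("items");

    // Queuing all puts up front keeps the transaction implicitly alive;
    // IndexedDB executes them in the order they were placed.
    records.forEach(function (record)
    {
        store.put(record);
    });

    tx.oncomplete = function ()
    {
        console.log("all records written");
    };
    tx.onerror = function ()
    {
        console.error("batch failed", tx.error);
    };
}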