Question
I found 4 "proper" ways to do this:
- In the cheat sheet for ActiveRecord users, the substitutes for ActiveRecord's `increment` and `increment_counter` are supposed to be `album.values[:column] -= 1 # or += 1 for increment` and `album.update(:counter_name => Sequel.+(:counter_name, 1))`
- In a SO solution, `update_sql` is suggested for the same effect: `DB[:query_volume].update_sql(:queries => Sequel.expr(3) + :queries)`
- In a random thread I found this one: `dataset.update_sql(:exp => 'exp + 10'.lit)`
- In the Sequel API docs for `update` I found this solution: http://sequel.jeremyevans.net/rdoc/classes/Sequel/Dataset.html#method-i-update
Yet none of these solutions actually updates the value and returns the result in a safe, atomic way. Solutions based on "adding a value and then saving" should, afaik, fail nondeterministically in multiprocessing environments, resulting in errors such as:
- album's counter is 0
- thread A and thread B both fetch
album
- thread A and thread B both increment the value in the hash/model/etc
- thread A and thread B both update the counter to same value
- as a result: A and B both set the counter to 1 and work with counter value 1
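The lost-update interleaving above can be made concrete in plain Ruby. The two "threads" are simulated step by step so the bad schedule always happens:

```ruby
# Simulate the interleaving above with two logical threads, A and B,
# executed in the worst-case order so the lost update is deterministic.
counter = { value: 0 }           # album's counter is 0

a_read = counter[:value]         # thread A fetches
b_read = counter[:value]         # thread B fetches

a_new = a_read + 1               # A increments its local copy
b_new = b_read + 1               # B increments its local copy

counter[:value] = a_new          # A writes back 1
counter[:value] = b_new          # B writes back 1 -- A's update is lost

puts counter[:value]             # 1, although two increments ran
```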
`Sequel.expr` and `Sequel.+`, on the other hand, don't actually return a value but a `Sequel::SQL::NumericExpression`, and (afaik) you have no way of getting the value out short of doing another DB roundtrip, which means this can happen:
- album's counter is 0
- thread A and B both increment the value, value is incremented by 2
- thread A and B both fetch the row from the DB
- as a result: A and B both set the counter to 2 and work with counter value 2
So, short of writing custom locking code, what's the solution? If there's none, short of writing custom locking code :) what's the best way to do it?
Update 1
I'm generally not happy with answers saying that I want too much from life, as 1 answer suggests :)
The albums are just an example from the docs.
Imagine, for example, that you have a transaction counter on an e-commerce POS that can accept 2 transactions at the same time on different hosts. To the bank you need to send them with an integer counter that is unique within 24h (called a systan): send 2 trx with the same systan and 1 will be declined, or worse, gaps in the counts are alerted (because they hint at "missing transactions"), so it's not possible to use the DB's ID value.
A less severe example, but more related to my use case: several file exports get triggered simultaneously in a background worker, and every file destination has its own counter. Gaps in the counters are alerted, and the workers are on different hosts (so mutexes are not useful). And I have a feeling I'll soon be solving the more severe problem anyway.
DB sequences are no good either, because it would mean doing DDL on the addition of every terminal, and we're talking 1000s here. Even in my less severe use case, DDLing on web portal actions is still a PITA, and might not even work depending on the caching scheme below (due to the implementation of ActiveRecord and Sequel - and in my case I use both - it might require a server restart just to register a merchant).
Redis can do this, but it seems insane to add another infrastructure component just for counters when you're sitting on an ACID-compliant database.
Answer 1:
If you are using PostgreSQL, you can use UPDATE RETURNING: `DB[:table].returning(:counter).update(:counter => Sequel.expr(1) + :counter)`
However, without support for UPDATE RETURNING or something similar, there is no way to atomically increment and return the incremented value in a single step.
Answer 2:
The answer is: in a multithreaded environment, don't use DB counters. When faced with this dilemma:
- If I need a unique integer counter, use a threadsafe counter generator that parcels out counters as threads require them. This can be a simple integer or something more complex like a Twitter Snowflake-like generator.
- If I need a unique identifier, I use something like a UUID
In your particular situation, where you need a count of albums - is there a reason you need this on the database rather than as a derived field on the model?
Update 1:
Given that you're dealing with something approximating file exports with workers on multiple hosts, you either need to parcel out the ids in advance (i.e. seed a worker with a job and the next available id from a single canonical source) or have the workers call in to a central service which allocates transaction ids on a first come first served basis.
I can't think of another way to do it. I've never worked with a POS system, but the telecoms network provisioning systems I've worked on have generally used a single transaction generator service which namespaced ids as appropriate.
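The "single canonical source" idea can be sketched in a few lines of plain Ruby. This is a process-local toy: in a real deployment the allocator would sit behind a network service or the ACID database itself, and the class name is made up for illustration:

```ruby
# Minimal sketch of a first-come-first-served id allocator.
class CounterAllocator
  def initialize(start = 0)
    @next_id = start
    @lock = Mutex.new
  end

  # Hand out the next id atomically: unique and gap-free across threads.
  def next_id
    @lock.synchronize { @next_id += 1 }
  end
end

alloc = CounterAllocator.new
ids = 4.times.map { Thread.new { alloc.next_id } }.map(&:value)
puts ids.sort.inspect  # => [1, 2, 3, 4]
```

Because every consumer goes through one serialized `next_id` call, ids come out unique and gap-free, which is the property the systan and file-export counters need.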
Source: https://stackoverflow.com/questions/38713501/sequel-ruby-how-to-increment-and-use-a-db-counter-in-a-safe-way