Pop multiple values from Redis data structure atomically?

Submitted by 徘徊边缘 on 2019-12-04 22:15:30

Starting from Redis 3.2, the command SPOP has a [count] argument to retrieve multiple elements from a set.

See http://redis.io/commands/spop#count-argument-extension

Use LRANGE with LTRIM in a pipeline. The pipeline runs as one atomic transaction, so your worry above about WATCH and EXEC does not apply here: LRANGE and LTRIM execute as a single transaction, with no chance for commands from other clients to slip in between them. Try it out.

thom_nic

To expand on Eli's response with a complete example for list collections, using lrange and ltrim builtins instead of Lua:

127.0.0.1:6379> lpush a 0 1 2 3 4 5 6 7 8 9
(integer) 10
127.0.0.1:6379> lrange a 0 3        # read 4 items off the top of the stack
1) "9"
2) "8"
3) "7"
4) "6"
127.0.0.1:6379> ltrim a 4 -1        # remove those 4 items
OK
127.0.0.1:6379> lrange a 0 999      # remaining items
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
6) "0"

If you wanted to make the operation atomic, you would wrap the lrange and ltrim in multi and exec commands.

Also, as noted elsewhere, you should probably ltrim by the number of returned items, not the number of items you asked for. E.g. if you did lrange a 0 99 but got 50 items, you would ltrim a 50 -1, not ltrim a 100 -1.

To implement queue semantics instead of a stack, replace lpush with rpush.

If you want a Lua script, this should be fast and easy:

local result = redis.call('lrange',KEYS[1],0,ARGV[1]-1)
redis.call('ltrim',KEYS[1],ARGV[1],-1)
return result

Then you don't have to loop.

Update: I tried to do this with srandmember (in 2.6) using the following script:

local members = redis.call('srandmember', KEYS[1], ARGV[1])
redis.call('srem', KEYS[1], unpack(members))
return members

but I get an error:

error: -ERR Error running script (call to f_6188a714abd44c1c65513b9f7531e5312b72ec9b): 
Write commands not allowed after non deterministic commands

I don't know if future versions will allow this, but I assume not. I think it would be a problem for replication.

Redis 4.0+ now supports modules which add all kinds of new functionality and data types with much faster and safer processing than Lua scripts or multi/exec pipelines.

Redis Labs, the current sponsor behind Redis, has a useful set of extension modules called redex here: https://github.com/RedisLabsModules/redex

The rxlists module adds several list operations including LMPOP and RMPOP so you can atomically pop multiple values from a Redis list. The logic is still O(n) (basically doing a single pop in a loop) but all you have to do is install the module once and just send that custom command. I use it on lists with millions of items and thousands popped at once generating 500MB+ of network traffic without issue.

Here is a python snippet that can achieve this using redis-py and pipeline:

from redis import StrictRedis

client = StrictRedis()

def get_messages(q_name, prefetch_count=100):
    pipe = client.pipeline()
    pipe.lrange(q_name, 0, prefetch_count - 1)  # Get msgs (w/o pop)
    pipe.ltrim(q_name, prefetch_count, -1)  # Trim (pop) list to new value
    messages, trim_success = pipe.execute()
    return messages

I was thinking that I could just do a for loop of pops, but that would not be efficient, even with a pipeline, especially if the list queue is smaller than prefetch_count. I have a full RedisQueue class implemented here if you want to look. Hope it helps!

I think you should look at Lua support in Redis. If you write a Lua script and execute it on Redis, it is guaranteed to run atomically (because Redis is single-threaded). No other queries will be served before the end of your Lua script (i.e. don't implement a big task in Lua or Redis will get slow).

So, in this script you add your SPOP and RPOP calls, append the result of each Redis command to a Lua table, and then return the table to your Redis client.

What the documentation is saying about MULTI is that it is optimistic locking: with WATCH, the client retries the transaction until the watched value is no longer modified underneath it. If there are many writes to the watched value, this will be slower than the 'pessimistic' locking of many SQL databases (PostgreSQL, MySQL, ...), which in some manner 'stop the world' so that the query executes first. Pessimistic locking is not implemented in Redis; you could build it yourself, but it is complex and you probably don't need it (with few writes to this value, optimistic locking should be quite enough).

You can probably try a Lua script (script.lua) like this:

local result = {}
for i = 1, tonumber(ARGV[1]) do
    local val = redis.call('RPOP',KEYS[1])
    if val then
        table.insert(result,val)
    end
end
return result

You can call it this way:

redis-cli eval "$(cat script.lua)" 1 "listName" 1000