Cassandra Timing out because of TTL expiration

Submitted by 久未见 on 2019-12-24 14:08:39

Question


I'm using DataStax Community v2.1.2-1 (AMI v2.5) with the preinstalled default settings, plus the read timeout increased to 10 seconds. Here is the issue:

create table simplenotification_ttl (
    user_id varchar,
    real_time timestamp,
    insert_time timeuuid,
    read boolean,
    msg varchar,
    PRIMARY KEY (user_id, real_time, insert_time));

Insert Query:

insert into simplenotification_ttl (user_id, real_time, insert_time, read)
  values ('test_3', 14401440123, now(), false) using TTL 800;

For the same 'test_3' I inserted 33,000 tuples. [This problem does not happen with 24,000 tuples.]

Gradually I see:

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 

 count
-------
 15681

(1 rows)

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 

 count
-------
 12737

(1 rows)

cqlsh:notificationstore> select count(*)  from simplenotification_ttl where user_id = 'test_3'; 
errors={}, last_host=127.0.0.1

I have experimented with this many times, even on different tables. Once this happens, even if I insert with the same user_id and do a retrieval with LIMIT 1, it times out.

I need the TTL to work properly, i.e., give a count of 0 after the stipulated time. How do I solve this issue? Thanks.

[My other node-related setup: 2 m3.large nodes with EC2Snitch.]


Answer 1:


You're running into a problem where the number of tombstones (markers for deleted or TTL-expired values) scanned by your query passes a threshold, and the query then times out.

You can see this if you turn on tracing and then try your select statement, for example:

cqlsh> tracing on;
cqlsh> select count(*) from test.simple;

 activity                                                                        | timestamp    | source       | source_elapsed
---------------------------------------------------------------------------------+--------------+--------------+----------------
...snip...
 Scanned over 100000 tombstones; query aborted (see tombstone_failure_threshold) | 23:36:59,324 |  172.31.0.85 |         123932
                                                    Scanned 1 rows and matched 1 | 23:36:59,325 |  172.31.0.85 |         124575
                           Timed out; received 0 of 1 responses for range 2 of 4 | 23:37:09,200 | 172.31.13.33 |       10002216

You're kind of running into an anti-pattern for Cassandra, where data is stored for just a short time before being deleted. There are a few options for handling this better, including revisiting your data model if needed (see the sketch after the list below). Here are some resources:

  • The cassandra.yaml configuration file - See section on tombstone settings
  • Cassandra anti-patterns: Queues and queue-like datasets
  • About deletes
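
On the data-model point, a common mitigation is to fold a time bucket into the partition key, so reads only touch current partitions and never scan the tombstones piling up in old ones. This is a hedged sketch, not part of the original answer; the table name and day format are invented for illustration:

  create table simplenotification_ttl_by_day (
      user_id varchar,
      day text,            -- e.g. '2014-12-09'; part of the partition key
      real_time timestamp,
      insert_time timeuuid,
      read boolean,
      msg varchar,
      PRIMARY KEY ((user_id, day), real_time, insert_time));

  -- Queries name the bucket(s) of interest explicitly, so expired
  -- tombstones in older day buckets are never scanned.
  select * from simplenotification_ttl_by_day
   where user_id = 'test_3' and day = '2014-12-09';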

For your sample problem, I tried lowering the gc_grace_seconds setting to 300 (5 minutes). That causes the tombstones to be cleaned up more frequently than the default 10 days, but that may or may not be appropriate for your application. Read up on the implications of deletes and adjust as needed.
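
For reference, the change itself is a one-liner in cqlsh (a minimal sketch; the notificationstore keyspace name is taken from the prompt in the question):

  -- gc_grace_seconds only makes tombstones eligible for removal;
  -- they are physically dropped during compaction, so counts shrink
  -- as the SSTables holding expired cells get compacted.
  alter table notificationstore.simplenotification_ttl
      with gc_grace_seconds = 300;

Keep in mind that a lower gc_grace_seconds means repairs must run more frequently than that interval, or deleted data can be resurrected.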



Source: https://stackoverflow.com/questions/27376784/cassandra-timing-out-because-of-ttl-expiration
