Cassandra CQLSH OperationTimedOut error=Client request timeout. See Session.execute[_async](timeout)

Submitted by 夙愿已清 on 2019-12-06 17:40:58

Question


I want to transfer data from one Cassandra cluster (reached via 192.168.0.200) to another Cassandra cluster (reached via 127.0.0.1). The data is only 523 rows, but each row is about 1 MB. I am using the COPY TO and COPY FROM commands. When I issue the COPY TO command, I get the following error:

Error for (8948428671687021382, 9075041744804640605):
OperationTimedOut - errors={
'192.168.0.200': 'Client request timeout. See Session.execute[_async](timeout)'},
last_host=192.168.0.200 (will try again later attempt 1 of 5).

I tried to change the ~/.cassandra/cqlshrc file to:

[connection]
client_timeout = 5000

But this hasn't helped.


Answer 1:


It's not clear which version of Cassandra you're using here, so I'm going to assume 3.0.x.

The COPY command is useful, but it is not always the best choice (e.g. if you have a lot of data). For this case, though, you might want to check some of your timeout settings in Cassandra.

The docs here also show a PAGETIMEOUT setting, which may help you.
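For reference, here is a minimal sketch of the relevant server-side timeout settings in cassandra.yaml (these option names apply to Cassandra 3.x; the values below are illustrative, with the defaults noted in comments):

```yaml
# cassandra.yaml -- server-side timeouts (Cassandra 3.x; values illustrative)
read_request_timeout_in_ms: 10000     # default 5000; single-partition reads
range_request_timeout_in_ms: 20000    # default 10000; range scans, which COPY TO relies on
write_request_timeout_in_ms: 5000     # default 2000; single-partition writes
```

A rolling restart of the nodes is needed for these to take effect.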

Moving data between two clusters can be done in a number of other ways. You could use any of the following:

  1. The sstableloader
  2. One of the drivers, such as the Java driver
  3. Using Spark to copy data from one cluster to another, as in this example
  4. Using OpsCenter to clone a cluster
  5. The Cassandra bulk loader (I've known a number of people to use this)
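As an illustration, option 1 can be sketched as follows (the target address, keyspace, table name, and data path are all placeholders for this example; sstableloader streams a table's SSTables into the target cluster and must be run once per table):

```shell
# Flush memtables to disk on the source node first, then point
# sstableloader at the table's data directory, giving it a contact
# point (-d) in the destination cluster.
nodetool flush keyspace1 tablename
sstableloader -d 127.0.0.1 /var/lib/cassandra/data/keyspace1/tablename-<table-id>/
```

The destination keyspace and table schema must already exist before streaming.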

Of course, #3 and #4 need DSE Cassandra, but it's just to give you an idea. I wasn't sure whether you were using Apache Cassandra or DataStax Enterprise Cassandra.

Anyway, hope this helps!




Answer 2:


You may want to increase the request timeout (default: 10 seconds), not the connect timeout.

Try:

cqlsh --request-timeout=6000

or add:

[connection]
request_timeout = 6000

to your ~/.cassandra/cqlshrc file.
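Putting that together, a sketch of the resulting ~/.cassandra/cqlshrc (request_timeout is the option name recent cqlsh versions read; client_timeout, which the question tried, is the older name from earlier versions):

```ini
[connection]
; value is in seconds in recent cqlsh versions (default 10)
request_timeout = 6000
```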




Answer 3:


Regarding the COPY timeout, the correct way is to use the PAGETIMEOUT parameter, as already pointed out.

COPY keyspace.table TO '/dev/null' WITH PAGETIMEOUT=10000;
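With rows as large as ~1 MB, it can also help to shrink the page size while raising the timeout. A sketch (the option names come from cqlsh's COPY TO; the values are illustrative, defaults noted in comments):

```sql
COPY keyspace.table TO 'dump.csv'
     WITH PAGESIZE=100     -- rows fetched per page (default 1000); smaller pages for large rows
     AND PAGETIMEOUT=100   -- seconds to wait per page (default 10)
     AND MAXATTEMPTS=10;   -- retries per failed page (default 5)
```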

Setting --request-timeout=6000 with cqlsh does not help in that situation.




Answer 4:


Besides the above, also check the following:

1. Check tombstones
In Cassandra, tombstones degrade read performance and can cause this error: OperationTimedOut: errors={'127.0.0.1': 'Client request timeout. See Session.execute_async'}, last_host=127.0.0.1
Note: inserting rows with null values in columns creates tombstones, so null inserts should be avoided. There are options to help with this, such as unset values (https://docs.datastax.com/en/latest-csharp-driver-api/html/T_Cassandra_Unset.htm) and the ignoreNulls property in the Spark connector (https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md).
You can check your table's statistics with the following command:
nodetool tablestats keyspace1.tablename

2. Remove tombstones
If you are working on a single node, you can allow tombstones to be purged by altering the table: ALTER TABLE keyspace1.tablename WITH gc_grace_seconds = 0;
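If you do drop gc_grace_seconds to purge tombstones, it is worth restoring the default afterwards so that tombstones can again propagate before being collected. A sketch (864000 seconds, i.e. 10 days, is the Cassandra default; keyspace and table names are the illustrative ones from above):

```sql
-- First trigger a compaction so the expired tombstones are actually dropped
-- (run from a shell): nodetool compact keyspace1 tablename
ALTER TABLE keyspace1.tablename WITH gc_grace_seconds = 864000;  -- restore default (10 days)
```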

3. read_request_timeout_in_ms: configure this value in the cassandra.yaml file to increase the timeout for read requests.



Source: https://stackoverflow.com/questions/39955968/cassandra-cqlsh-operationtimedout-error-client-request-timeout-see-session-exec
