cassandra-2.1

Can't write to cluster if replication_factor is greater than 1

Submitted by 不羁的心 on 2019-12-12 02:25:32
Question: I'm using Spark 1.6.1, Cassandra 2.2.3 and the Cassandra-Spark connector 1.6. I have already written to a multi-node cluster, but only with replication_factor: 1. Now I'm trying to write to a 6-node cluster with one seed node and a keyspace whose replication_factor is greater than 1, but Spark does not respond and refuses to do the write. As I mentioned, it works when I write through the coordinator with the keyspace's replication factor set to 1. This is the log I'm getting; it always stops here, or after half an hour it starts to
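One common cause of this symptom is that with replication_factor > 1 a write must be acknowledged by more than one replica, so unreachable replicas make the write hang rather than fail fast. A minimal sketch of the acknowledgement arithmetic (a hypothetical helper, not part of the Spark-Cassandra connector):

```python
# Hypothetical helper (not connector API): how many replica acknowledgements
# a write needs for common Cassandra consistency levels.
def required_acks(replication_factor: int, consistency: str) -> int:
    consistency = consistency.upper()
    if consistency == "ONE":
        return 1
    if consistency == "QUORUM":
        return replication_factor // 2 + 1  # majority of replicas
    if consistency == "ALL":
        return replication_factor
    raise ValueError(f"unsupported consistency level: {consistency}")

# With replication_factor=1 every level needs just 1 ack, which is why the
# earlier writes succeeded; with replication_factor=3, QUORUM needs 2 live
# replicas and ALL needs all 3.
print(required_acks(1, "QUORUM"))  # 1
print(required_acks(3, "QUORUM"))  # 2
print(required_acks(3, "ALL"))     # 3
```

If fewer replicas than `required_acks` are reachable for a token range, the write cannot complete at that consistency level.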

Cassandra Python driver OperationTimedOut issue

Submitted by 吃可爱长大的小学妹 on 2019-12-11 13:43:37
Question: I have a Python script that interacts with Cassandra via the DataStax Python driver. It had been running since March 14th, 2016 with no problems until today. 2016-06-02 13:53:38,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds), last_host=None',)}) 2016-06-02 13:54:18,362 ERROR ('Unable to connect to any servers', {'172.16.47.155': OperationTimedOut('errors=Timed out creating connection (5 seconds),
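For intermittent connection timeouts like this, a usual first mitigation is to retry the connect with exponential backoff (and, separately, to raise the driver's connect timeout). A minimal pure-Python sketch; `connect` here is a stand-in callable, not the driver's API:

```python
import time

def connect_with_retry(connect, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call `connect` (a stand-in for something like Cluster.connect) with
    exponential backoff; return its result, or re-raise the last error."""
    last_error = None
    for attempt in range(attempts):
        try:
            return connect()
        except Exception as exc:  # the real driver raises OperationTimedOut
            last_error = exc
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    raise last_error

# Demo with a fake connect that times out twice, then succeeds.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("Timed out creating connection (5 seconds)")
    return "session"

print(connect_with_retry(flaky_connect, sleep=lambda s: None))  # session
```

If every retry fails over a long window, the problem is more likely network reachability or an overloaded node than a driver setting.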

Spark Cassandra NoClassDefFoundError guava/cache/CacheLoader

Submitted by 天涯浪子 on 2019-12-11 07:35:49
Question: Running Cassandra 2.2.8 on Windows 7 with JDK 8 and Spark 2. I have these on the classpath: Cassandra core 3.12, spark-cassandra-2.11, spark-cassandra-java-2.11, spark-2.11, spark-network-common_2.11, guava-16.0.jar, scala-2.11.jar, etc. I'm trying to run a basic example; it compiles fine, but when I try to run it, the very first line, SparkConf conf = new SparkConf();, throws java.lang.NoClassDefFoundError: org/spark_project/guava/cache/CacheLoader. A missing spark-network-common is supposed to cause this error, but I

Unable to see all keyspaces in C* 2.1.7 after I downgraded from 3.0 to 2.1.7

Submitted by 匆匆过客 on 2019-12-11 07:19:44
Question: I had been using Cassandra 2.1.7. For some reason I upgraded to 3.0.12, later realized that some dependent apps won't work with 3.0.12, and downgraded back to C* 2.1.7 as before. But now I'm not able to see my keyspaces in C*. (Just FYI: the data directory is the same in both cassandra.yaml files.) Do I have to make any changes? I appreciate your help. Answer 1: When you upgrade from 2.x to 3.x you have to run the nodetool upgradesstables command. I assume this is what you did. Now when you

Cassandra write benchmark, low (20%) CPU usage

Submitted by 為{幸葍}努か on 2019-12-11 04:23:54
Question: I'm building a 3x m1.large Cassandra cluster on Amazon EC2, using the DataStax Auto-Clustering AMI 2.5.1-pv with Cassandra DataStax Community version 2.2.0-1. In write benchmarks on 'production' data, the cluster seems to handle around 3k to 5k write requests per second with no read load. Nearly all the time the nodes are doing: compaction of system.hints, compaction of mykeyspace.mybigtable, compaction of the mybigtable index. However, what worries me is the low CPU usage. All of the 3 nodes

Cassandra eats memory

Submitted by 依然范特西╮ on 2019-12-10 23:46:32
Question: I have Cassandra 2.1 with the following properties set: MAX_HEAP_SIZE="5G" HEAP_NEWSIZE="800M" memtable_allocation_type: heap_buffers. The top utility shows that Cassandra eats 14.6G of virtual memory: KiB Mem: 16433148 total, 16276592 used, 156556 free, 22920 buffers KiB Swap: 16777212 total, 0 used, 16777212 free. 9295960 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23120 cassand+ 20 0 14.653g 5.475g 29132 S 318.8 34.9 27:07.43 java It also dies with various OutOfMemoryError
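Note that VIRT in top includes memory-mapped SSTables and other address space that is not actually consuming RAM; the number to compare against the heap is RES. A small illustrative sketch (the 50% off-heap allowance is an assumption for illustration, not a Cassandra rule) of that comparison using the figures above:

```python
# Illustrative sanity check: compare resident memory from top against the
# configured heap plus an assumed off-heap allowance.
def heap_to_bytes(setting: str) -> int:
    """Parse a MAX_HEAP_SIZE-style value like '5G' or '800M' into bytes."""
    units = {"M": 1024 ** 2, "G": 1024 ** 3}
    return int(float(setting[:-1]) * units[setting[-1].upper()])

def looks_like_heap_pressure(max_heap: str, resident_bytes: int,
                             offheap_allowance: float = 0.5) -> bool:
    """True if resident memory exceeds heap by more than the allowance
    (0.5 here is an assumed budget for memtables/caches off-heap)."""
    heap = heap_to_bytes(max_heap)
    return resident_bytes > heap * (1 + offheap_allowance)

res = int(5.475 * 1024 ** 3)  # RES column from the top output above
print(looks_like_heap_pressure("5G", res))  # False: RES ~ heap + ~10%
```

Here RES (~5.5G) sits close to the 5G heap, so the 14.6G VIRT figure by itself is not evidence of a leak; the OutOfMemoryError points at the heap itself being too small for the workload.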

Leveled Compaction Strategy with low disk space

Submitted by China☆狼群 on 2019-12-08 13:17:08
Question: We have Cassandra 1.1.1 servers using the Leveled Compaction Strategy. The workload consists of read and delete operations. Every half a year we delete approximately half of the data while new data comes in. Sometimes disk usage goes up to 75% even though we know real data takes only about 40-50%; the rest of the space is occupied by tombstones. To avoid disk overflow we force compaction of our tables by dropping all SSTables to level 0. For that we remove the .json manifest file and restart

What are best practices for deleting/altering cassandra columns of collection data-type?

Submitted by £可爱£侵袭症+ on 2019-12-08 09:45:58
Question: In our Cassandra table, every time we change the data type of a collection-type column it starts causing issues. For example, to change a data type from text to map<text, float> we do this: drop the existing column; wait for Cassandra to assimilate the change; add a column with the same name but the new data type. This is reflected fine on all nodes, but during compaction the Cassandra logs start complaining with: RuntimeException: 6d6...73 is not defined as a collection. I figured out the comparator entries are not

How can I create User Defined Functions in Cassandra with Custom Java Class?

Submitted by 江枫思渺然 on 2019-12-05 10:51:32
I couldn't find this anywhere online. How can I create a custom user-defined function in Cassandra? For example: CREATE OR REPLACE FUNCTION customfunc(custommap map<text, int>) CALLED ON NULL INPUT RETURNS map<int,bigint> LANGUAGE java AS 'return MyClass.mymethod(custommap);'; where "MyClass" is a class that I can register on the classpath. Just adding my 2 cents to this thread, as I tried building an external class method to support something similar. After trying for hours with the DataStax Sandbox 5.1 I could not get this to work: it couldn't seem to find my class and kept raising type errors. My

Cassandra load balancing with TokenAwarePolicy and shuffleReplicas

Submitted by 爷,独闯天下 on 2019-12-04 13:37:26
We have a 6-node cluster deployed to one AWS region with 3 Availability Zones. We are using Ec2Snitch, which should place one replica in each availability zone, and the DataStax Java driver. The servers doing writes and reads are distributed across availability zones the same way the nodes are (one server per AZ). What we want to achieve is the best possible read performance; writes matter less, in the sense that we need to write data but not necessarily fast. We use replication factor 3 but read and write with consistency level ONE. We are investigating shuffle replicas in
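The trade-off being investigated can be sketched with a toy model (pure Python, not the DataStax driver): a token-aware policy always queries a replica that owns the key, and shuffleReplicas only controls whether the primary replica is always tried first or the replica order is randomized:

```python
import random

# Toy model of TokenAwarePolicy replica ordering (not the driver's API).
def query_plan(replicas, shuffle_replicas, rng=random):
    """Return the replicas for a key in the order they would be tried.
    `replicas` lists the owning nodes, primary first."""
    plan = list(replicas)
    if shuffle_replicas:
        rng.shuffle(plan)  # spread read load evenly across all replicas
    return plan            # otherwise always lead with the primary replica

replicas = ["az1-node", "az2-node", "az3-node"]  # hypothetical node names

# Shuffling off: every read for this key hits az1-node first (good row-cache
# locality, uneven load). Shuffling on: any replica may be first (even load,
# but at consistency ONE a recently written row may not have reached it yet).
print(query_plan(replicas, shuffle_replicas=False)[0])  # az1-node
print(sorted(query_plan(replicas, shuffle_replicas=True)) == replicas)  # True
```

With replication factor 3 and one replica per AZ, shuffling also makes it likely that a read leaves the client's own AZ, which is worth weighing against the load-balancing benefit.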