datastax-java-driver

What should the data type be for timeuuid in a DataStax mapper class?

早过忘川 submitted on 2019-12-04 21:14:45
The data type of one of the columns in a Cassandra table is timeuuid. While creating my Mapper class as per the docs, I am not sure which data type I should use for the timeuuid column. I understand that it should be an equivalent Java data type, and hence I tried java.util.Date. Refer to the column definition and Mapper class field definition below:

start timeuuid

@PartitionKey(1)
@Column(name = "start")
private UUID start;

I get the below during CRUD operations:

Codec not found for requested operation: [timeuuid -> java.util.Date]

I have customized the UUIDs class of DataStax to get a TimeUUID from a time. Here is the …
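In the DataStax Java driver, both uuid and timeuuid columns map to java.util.UUID, not java.util.Date, so the mapper field should simply be a UUID. A minimal sketch against driver 3.x; the keyspace and table names are illustrative:

import java.util.UUID;

import com.datastax.driver.core.utils.UUIDs;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "ks", name = "events")  // hypothetical keyspace and table
public class Event {

    @PartitionKey
    @Column(name = "start")
    private UUID start;  // timeuuid maps to java.util.UUID out of the box

    public UUID getStart() { return start; }
    public void setStart(UUID start) { this.start = start; }
}

To generate values for such a column, UUIDs.timeBased() produces a time-based (version 1) UUID, and UUIDs.unixTimestamp(uuid) recovers the timestamp from one, which removes the need for a customized UUIDs class.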

User Defined Type (UDT) behavior in Cassandra

一世执手 submitted on 2019-12-04 20:13:56
If someone has some experience using UDTs (User Defined Types), I would like to understand how backward compatibility works. Say I have the following UDT:

CREATE TYPE addr (
  street1 text,
  zip text,
  state text
);

If I modify "addr" to have a couple more attributes (say, zip_code2 int and name text):

CREATE TYPE addr (
  street1 text,
  zip text,
  state text,
  zip_code2 int,
  name text
);

how do the older rows that do not have these attributes work? Is it even compatible? Thanks

The new UDT definition would be compatible with the old definition. User-defined types can have …
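In practice an existing type is evolved with ALTER TYPE rather than redefined, and rows written before the change simply return null for the fields they never had. A minimal sketch with the Java driver, assuming a hypothetical ks keyspace and a users table with an address column of type addr:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.UDTValue;

public class UdtEvolution {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks")) {

            // Add the new fields to the existing type.
            session.execute("ALTER TYPE addr ADD zip_code2 int");
            session.execute("ALTER TYPE addr ADD name text");

            // Rows written before the ALTER return null for the new fields.
            Row row = session.execute("SELECT address FROM users LIMIT 1").one();
            UDTValue addr = row.getUDTValue("address");
            Integer zip2 = addr.isNull("zip_code2") ? null : addr.getInt("zip_code2");
            System.out.println("zip_code2 = " + zip2);
        }
    }
}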

Exception in thread "main" java.lang.NoClassDefFoundError

谁都会走 submitted on 2019-12-04 16:46:15
Getting the error

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureCallback

while running the code below. Please advise which JAR file I am missing. I am executing from the Eclipse IDE.

package Datastax;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.Session;

public class DataStaxPOC {

    private Cluster cluster;

    public void connect(String node) {
        cluster = Cluster.builder().addContactPoint(node).build();
        Metadata metadata = cluster.getMetadata();
        System.out…
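com/google/common/util/concurrent/FutureCallback comes from Google's Guava library, which the DataStax Java driver depends on, so the driver JAR alone is not enough on the classpath. The simplest fix is to let a build tool pull in the driver's transitive dependencies (Guava, Netty, metrics); a sketch of the Maven coordinates, with an illustrative version:

<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.7.2</version>  <!-- example version; use the one matching your setup -->
</dependency>

If you are adding JARs to the Eclipse build path by hand instead, add a Guava JAR (and the driver's other dependencies) to the build path alongside the driver itself.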

Cassandra load balancing with TokenAwarePolicy and shuffleReplicas

爷,独闯天下 submitted on 2019-12-04 13:37:26
We have a 6-node cluster deployed in one AWS region across 3 Availability Zones. We are using Ec2Snitch, which should place one replica in each availability zone, and the DataStax Java driver. The servers doing reads and writes are distributed across availability zones the same way the nodes are (one server per AZ). What we want to achieve is the best possible read performance; writes are less important, in the sense that we need to write data but not necessarily fast. We use replication factor 3 but read and write with consistency level ONE. We are investigating shuffle replicas in …
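For reference, driver 3.x exposes this as a constructor flag on TokenAwarePolicy. A minimal sketch; the contact point is illustrative:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class ShuffleReplicasSetup {
    public static void main(String[] args) {
        // shuffleReplicas = true (the default) spreads each request across all
        // replicas of the partition; false always tries the "primary" replica
        // first, which favors cache locality at the cost of even load.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder().build(), false))
                .build();
        System.out.println(cluster.getClusterName());
        cluster.close();
    }
}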

Exception when connecting to Cassandra with CQL using DataStax Java driver 1.0.4

牧云@^-^@ submitted on 2019-12-04 12:47:53
I have Cassandra 1.2.11 running on my laptop. I can connect to it using nodetool and cqlsh, but when I try to use the DataStax 1.0.4 Java API to connect using CQL 3.0, I get the following error:

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1 ([localhost/127.0.0.1] Unexpected error during transport initialization (com.datastax.driver.core.TransportException: [localhost/127.0.0.1] Channel has been closed)))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:186)

I am using the …
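A common cause with Cassandra 1.2 is that the native protocol server the Java driver uses is switched off by default (nodetool and cqlsh in 1.2 go through other interfaces, which is why they keep working). Assuming that is the case here, these are the relevant cassandra.yaml settings; restart the node after changing them and point Cluster.builder() at port 9042:

start_native_transport: true
native_transport_port: 9042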

Cassandra row level locking support with DataStax driver

安稳与你 submitted on 2019-12-04 06:50:58
Cassandra row-level locking support when the same row is accessed by concurrent users: we are in the design phase of our shopping cart application, considering Cassandra as the inventory database. The requirement is that multiple users may access the same product row in the inventory DB at the same time. For example:

Product table:

productID   productQuantity
1000        1

If the first user selects product '1000' and adds a product quantity of '1' to the shopping cart, other users accessing the same product should not be able to select it until it is freed by the first user (product quantity updated to 0). So does Cassandra provide …
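Cassandra has no pessimistic row-level locks, but it does offer compare-and-set semantics through lightweight transactions (a conditional IF clause executed via Paxos), which is the usual way to model this kind of inventory reservation. A minimal sketch with the Java driver, assuming a hypothetical shop keyspace and the product table above:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class ReserveProduct {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("shop")) {

            // Conditional update: applied only if the quantity is still 1.
            ResultSet rs = session.execute(
                    "UPDATE product SET productQuantity = 0 " +
                    "WHERE productID = 1000 IF productQuantity = 1");

            // wasApplied() reports whether this client won the race.
            System.out.println(rs.wasApplied()
                    ? "product reserved"
                    : "another user already took it");
        }
    }
}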

Upsert/Read into/from Cassandra database using Datastax API (using new Binary protocol)

帅比萌擦擦* submitted on 2019-12-04 06:16:37
I have started working with the Cassandra database. I am planning to use the DataStax API to upsert/read into/from the Cassandra database. I am totally new to this DataStax API (which uses the new binary protocol) and I am not able to find much documentation with proper examples.

create column family profile
  with key_validation_class = 'UTF8Type'
  and comparator = 'UTF8Type'
  and default_validation_class = 'UTF8Type'
  and column_metadata = [
    {column_name : crd, validation_class : …
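With the binary protocol, the Thrift-era column family above becomes a CQL table, and both upserts and reads go through a Session. A minimal sketch with the DataStax Java driver; the keyspace and column names are illustrative:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ProfileDao {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks")) {

            // In CQL an INSERT is an upsert: existing columns are overwritten.
            PreparedStatement upsert = session.prepare(
                    "INSERT INTO profile (key, crd) VALUES (?, ?)");
            session.execute(upsert.bind("user1", "some-value"));

            // Read the row back.
            Row row = session.execute(
                    "SELECT crd FROM profile WHERE key = 'user1'").one();
            System.out.println("crd = " + row.getString("crd"));
        }
    }
}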

How to prevent Cassandra commit logs filling up disk space

谁说我不能喝 submitted on 2019-12-04 01:15:23
I'm running a two-node DataStax AMI cluster on AWS. Yesterday, Cassandra started refusing connections from everything, and the system logs showed nothing. After a lot of tinkering, I discovered that the commit logs had filled up all the disk space on the allotted mount, and this seemed to be causing the connection refusals (I deleted some of the commit logs, restarted, and was able to connect). I'm on DataStax AMI 2.5.1 and Cassandra 2.1.7. If I decide to wipe and restart everything from scratch, how …
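For what it's worth, Cassandra caps commit log growth with commitlog_total_space_in_mb in cassandra.yaml; when the cap is hit, the oldest segments are flushed to SSTables and recycled, so the cap should be sized below the capacity of the commit log mount. A sketch of the relevant settings, with illustrative values:

commitlog_total_space_in_mb: 4096    # total cap; keep below the mount's capacity
commitlog_segment_size_in_mb: 32     # size of each individual segment file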

Cassandra Bulk-Write performance with Java Driver is atrocious compared to MongoDB

喜夏-厌秋 submitted on 2019-12-03 22:04:42
I have built an importer for MongoDB and Cassandra. Basically, all operations of the importer are the same, except for the last part, where the data is shaped to match the needed Cassandra table schema or the wanted MongoDB document structure. The write performance of Cassandra is really bad compared to MongoDB, and I think I'm doing something wrong. Basically, my abstract importer class loads the data, reads out all data, and passes it to the extending MongoDBImporter or CassandraImporter class to …
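Independent of the schema-shaping code, the most common cause of poor Cassandra bulk-write throughput is issuing statements synchronously one at a time (or packing unrelated rows into huge batches). A common pattern is a prepared statement plus executeAsync, with a semaphore capping the number of in-flight requests; a minimal sketch, with keyspace and table names assumed:

import java.util.concurrent.Semaphore;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

public class BulkWriter {
    private static final int MAX_IN_FLIGHT = 256;  // tune to your cluster

    public static void main(String[] args) throws InterruptedException {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks")) {

            PreparedStatement ps = session.prepare(
                    "INSERT INTO items (id, payload) VALUES (?, ?)");
            Semaphore permits = new Semaphore(MAX_IN_FLIGHT);

            for (int i = 0; i < 1_000_000; i++) {
                permits.acquire();  // block instead of flooding the cluster
                ResultSetFuture f = session.executeAsync(ps.bind("id-" + i, "payload-" + i));
                f.addListener(permits::release, Runnable::run);  // release when done
            }
            permits.acquire(MAX_IN_FLIGHT);  // wait for the tail of writes to finish
        }
    }
}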

Atomic Batches in Cassandra

时光总嘲笑我的痴心妄想 submitted on 2019-12-03 14:46:30
What do you mean by "batch statements are atomic" in Cassandra? The docs are a bit confusing, to be precise. Does it mean that queries are atomic across nodes in the cluster? Say, for example, I have a batch with 100 queries. If the 40th query in the batch fails, what happens to the 39 queries already executed in the batch? I understand that there is a batchlog created under the hood and that it takes care of consistency for partial batches. Does it remove the other 39 entries and provide the required atomic nature of batch queries? In MySQL, we set autocommit to false and hence we can roll back. …
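Atomic in the Cassandra docs means all-or-nothing eventually, not isolated and not rollback-based: once a logged batch has been written to the batchlog, every statement in it will eventually be applied, so a mid-batch failure cannot leave only 39 of 100 statements permanently applied; there is, however, no MySQL-style rollback, and readers may see the statements land at different times. A minimal sketch of a logged batch with the Java driver, using an illustrative table:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class LoggedBatchExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks")) {

            PreparedStatement ps = session.prepare(
                    "INSERT INTO audit (id, event) VALUES (?, ?)");

            // LOGGED writes the batch to the batchlog first, so all statements
            // are eventually applied even if a coordinator or replica fails.
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(ps.bind("1", "created"));
            batch.add(ps.bind("2", "updated"));
            session.execute(batch);
        }
    }
}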