datastax-java-driver

Exception when connecting to Cassandra with CQL using DataStax Java driver 1.0.4

守給你的承諾、 submitted on 2019-12-06 05:35:19
Question: I have Cassandra 1.2.11 running on my laptop. I can connect to it using nodetool and cqlsh, but when I try to use the DataStax 1.0.4 Java driver to connect using CQL 3.0 I get the following error: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1 ([localhost/127.0.0.1] Unexpected error during transport initialization (com.datastax.driver.core.TransportException: [localhost/127.0.0.1] Channel has been closed))) at com
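A common cause with Cassandra 1.2.x is that the native transport (port 9042) is not enabled, so the driver's connection is closed during initialization; enabling start_native_transport in cassandra.yaml and pointing the driver at the native port usually resolves it. A minimal connection sketch, assuming a local node and default ports (not taken from the original post):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectTest {
    public static void main(String[] args) {
        // Assumes start_native_transport: true in cassandra.yaml and the default
        // native protocol port 9042 (the Thrift port 9160 will not work here).
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to: " + cluster.getMetadata().getClusterName());
        cluster.shutdown();  // driver 1.0.x uses shutdown(); later versions use close()
    }
}
```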

DataStax Java driver 3.0.0: Enumerated annotation not found

China☆狼群 submitted on 2019-12-06 03:02:04
I hope I am reading the docs correctly: http://docs.datastax.com/en/developer/java-driver/3.0/java-driver/reference/crudOperations.html . Under "The Enumerated annotation" they say: "If your class contains an enum type field, you use the Enumerated annotation." I have a Java enum and I want to use the @Enumerated annotation, but I can't seem to find it in version 3.0.0 of the driver; it was present in 2.1.9. $ find . -type f -name \*.jar | while read i; do echo ====== $i =====; jar -tf $i | grep Enumerated; done ====== ./cassandra-driver-core/2.1.4/cassandra-driver-core-2.1.4-javadoc.jar ===== ====== ./cassandra-driver-core
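For what it's worth, in driver 3.x the mapper no longer ships @Enumerated; enums are instead handled through custom codecs from the separate cassandra-driver-extras module (EnumNameCodec / EnumOrdinalCodec) registered on the cluster's CodecRegistry. A sketch under that assumption (the Status enum here is made up for illustration):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.extras.codecs.enums.EnumNameCodec;

public class EnumCodecExample {
    // Hypothetical enum, only for illustration.
    enum Status { ACTIVE, BLOCKED }

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        // Stores the enum as its name() in a text column;
        // EnumOrdinalCodec would store it as an int instead.
        cluster.getConfiguration().getCodecRegistry()
               .register(new EnumNameCodec<Status>(Status.class));
        cluster.close();
    }
}
```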

Cassandra: Adding new column to the table

邮差的信 submitted on 2019-12-06 02:20:30
Hi, I just added a new column business_sys to my table my_table: ALTER TABLE my_table ALTER business_sys TYPE set<text>; But then I dropped this column because I wanted to change its type: ALTER TABLE my_table DROP business_sys; When I tried to add the same column name with a different type, I got the error message "Cannot add a collection with the name business_sys because a collection with the same name and a different type has already been used in the past". I tried to execute this command to add a new column with a different type: ALTER TABLE my_table ADD business_sys
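Cassandra remembers dropped collection columns by name and type, so the usual workaround (an assumption on my part, not stated in the post) is to re-add the collection under a new column name rather than reusing business_sys. A sketch in Java, with the keyspace and the new type as placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AlterTableWorkaround {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");  // keyspace name is a placeholder
        // Reusing the dropped name fails, so add the collection under a new name.
        session.execute("ALTER TABLE my_table ADD business_sys_v2 set<int>");
        cluster.close();
    }
}
```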

Cassandra row-level locking support with the DataStax driver

ⅰ亾dé卋堺 submitted on 2019-12-06 00:56:11
Question: Does Cassandra support row-level locking when concurrent users access the same row? We are in the design phase of our shopping cart application and are considering Cassandra as the inventory database. The requirement is to handle multiple users accessing the same product row in the inventory DB at the same time. For example, given a Product table with columns productID and productQuantity and a row (1000, 1): if the first user selects product '1000' and adds a quantity of '1' to the shopping cart, other users accessing the same product should not be able to select
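Cassandra itself has no row-level locks; the closest built-in tool for this kind of check-and-reserve is a lightweight transaction (conditional update), which the cluster serializes through Paxos. A sketch along those lines, with the keyspace, table, and column names as placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class ReserveProduct {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("shop");  // hypothetical keyspace

        // Conditional update (lightweight transaction): it only succeeds if the
        // quantity is still 1, so two concurrent users cannot both reserve it.
        ResultSet rs = session.execute(
                "UPDATE product SET productQuantity = 0 " +
                "WHERE productID = 1000 IF productQuantity = 1");

        // wasApplied() reports whether the condition held and the update happened.
        if (rs.wasApplied()) {
            System.out.println("Reserved product 1000");
        } else {
            System.out.println("Another user reserved it first");
        }
        cluster.close();
    }
}
```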

Is there a good way to check whether a Datastax Session.executeAsync() has thrown an exception?

点点圈 submitted on 2019-12-05 21:15:30
Question: I'm trying to speed up our code by calling session.executeAsync() instead of session.execute() for DB writes. We have use cases where the DB connection might be down; currently execute() throws an exception when the connection is lost (no hosts reachable in the cluster). We can catch these exceptions and retry, save the data somewhere else, etc. With executeAsync(), it doesn't look like there's any way to fulfill this use case - the returned ResultSetFuture object needs to be
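With executeAsync() the failure does not surface until the future is resolved: either call getUninterruptibly() on the ResultSetFuture (which rethrows driver exceptions such as NoHostAvailableException) or attach a callback with Guava's Futures.addCallback. A sketch of the callback approach, assuming Guava is on the classpath (the driver already depends on it); the method and statement names are placeholders:

```java
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

public class AsyncWrite {
    // Sketch only: 'session' and the CQL string are assumed to exist elsewhere.
    static void writeAsync(Session session, String cql) {
        ResultSetFuture future = session.executeAsync(cql);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override
            public void onSuccess(ResultSet result) {
                // Write acknowledged by the cluster.
            }
            @Override
            public void onFailure(Throwable t) {
                // NoHostAvailableException and friends end up here;
                // retry or persist the data elsewhere.
                System.err.println("Async write failed: " + t);
            }
        });
    }
}
```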

What is the most efficient way to map/transform/cast a Cassandra BoundStatement's ResultSet to a Java class built using the Object-mapping API?

流过昼夜 submitted on 2019-12-05 15:57:05
Is there a built-in way in the DataStax Java driver for Apache Cassandra to map the ResultSet coming from a BoundStatement to the domain object Java classes built with the Object-mapping API? I am a newbie moving from the Mapper + Accessor approach to the BoundStatement approach and would like to keep using the domain object Java classes built with the Object-mapping API, so that I make minimal changes to the implementation of my DAO methods while moving to BoundStatement. I am looking to do it in a generic way and avoid iterating over each ResultSet row and doing a row.get one by one for each domain
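The object mapper can wrap a raw ResultSet directly: Mapper.map(ResultSet) returns a Result<T> whose rows are converted to mapped objects lazily as you iterate, so the BoundStatement plumbing can stay while the mapping remains generic. A sketch, where User stands in for any existing @Table-annotated domain class (the class and its table are hypothetical):

```java
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.Result;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;

public class MapResultSetExample {
    // Hypothetical mapped class; real code would use the existing domain classes.
    @Table(keyspace = "ks", name = "user_info")
    public static class User {
        @PartitionKey
        private String userName;
        public String getUserName() { return userName; }
        public void setUserName(String userName) { this.userName = userName; }
    }

    static Result<User> loadUsers(Session session) {
        ResultSet rs = session.execute("SELECT * FROM ks.user_info");
        Mapper<User> mapper = new MappingManager(session).mapper(User.class);
        // map() wraps the ResultSet; rows become User objects as you iterate.
        return mapper.map(rs);
    }
}
```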

How to efficiently use batch writes to Cassandra using the DataStax Java driver?

亡梦爱人 submitted on 2019-12-05 10:46:10
I need to write in batches to Cassandra using the DataStax Java driver, and this is the first time I am trying to use batches with it, so I have some confusion. Below is my code, in which I build a Statement object, add it to a Batch, and set the ConsistencyLevel to QUORUM as well. Session session = null; Cluster cluster = null; // we build cluster and session object here and we use DowngradingConsistencyRetryPolicy as well // cluster = builder.withSocketOptions(socketOpts).withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE) public void insertMetadata
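For reference, a trimmed sketch of the pattern described above: prepare the statement once, add bound statements to a BatchStatement, and set the consistency level on the batch. Table and column names are placeholders, not the original code:

```java
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.util.Map;

public class BatchInsertExample {
    static void insertMetadata(Session session, Map<String, String> rows) {
        PreparedStatement ps = session.prepare(
                "INSERT INTO ks.metadata (id, payload) VALUES (?, ?)");

        // UNLOGGED batches only pay off when every statement hits the same partition;
        // otherwise separate (async) inserts are usually faster than batching.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        batch.setConsistencyLevel(ConsistencyLevel.QUORUM);

        for (Map.Entry<String, String> e : rows.entrySet()) {
            batch.add(ps.bind(e.getKey(), e.getValue()));
        }
        session.execute(batch);
    }
}
```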

How will I know whether the record was a duplicate or was inserted successfully?

…衆ロ難τιáo~ submitted on 2019-12-05 06:38:40
Here is my CQL table: CREATE TABLE user_login ( userName varchar PRIMARY KEY, userId uuid, fullName varchar, password text, blocked boolean ); I have this DataStax Java driver code: PreparedStatement prepareStmt = instances.getCqlSession().prepare("INSERT INTO " + AppConstants.KEYSPACE + ".user_info(userId, userName, fullName, bizzCateg, userType, blocked) VALUES(?, ?, ?, ?, ?, ?);"); batch.add(prepareStmt.bind(userId, userData.getEmail(), userData.getName(), userData.getBizzCategory(), userData.getUserType(), false)); PreparedStatement pstmtUserLogin = instances.getCqlSession().prepare("INSERT
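Worth noting: a plain INSERT in Cassandra is an upsert and silently overwrites an existing primary key, so it never reports a duplicate. To detect duplicates, the INSERT needs IF NOT EXISTS and a check of wasApplied() on the result; also keep in mind that conditional statements have restrictions inside batches (all statements must target the same partition). A sketch against the user_login table from the question, with the keyspace name as a placeholder:

```java
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import java.util.UUID;

public class InsertIfNotExistsExample {
    static boolean registerLogin(Session session, String userName, UUID userId) {
        PreparedStatement ps = session.prepare(
                "INSERT INTO ks.user_login (userName, userId, blocked) " +
                "VALUES (?, ?, false) IF NOT EXISTS");
        ResultSet rs = session.execute(ps.bind(userName, userId));
        // With IF NOT EXISTS, wasApplied() is false when the row already existed,
        // which is exactly the "was it a duplicate?" signal.
        return rs.wasApplied();
    }
}
```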

Cassandra Query Failures: All host(s) tried for query failed (no host was tried)

自闭症网瘾萝莉.ら submitted on 2019-12-05 02:03:49
Question: I am not able to run queries against the Cassandra node. I am able to make the connection to the cluster and connect. However, when running a query, it fails: Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried) at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:217) at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:44) at com.datastax.driver.core.RequestHandler
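"(no host was tried)" typically means the load balancing policy produced an empty query plan rather than a host actually failing; one common cause is a DCAwareRoundRobinPolicy whose local datacenter name does not match what the cluster reports. A sketch under that assumption (driver 2.1+ builder API; "DC1" is a placeholder that must match the name shown by nodetool status):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class DcAwareConnect {
    public static void main(String[] args) {
        // If the local DC name is wrong, the policy yields no hosts and queries
        // fail with "All host(s) tried for query failed (no host was tried)".
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")
                                .build())
                .build();
        System.out.println(cluster.getMetadata().getClusterName());
        cluster.close();
    }
}
```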

Atomic Batches in Cassandra

こ雲淡風輕ζ submitted on 2019-12-04 23:09:17
Question: What does it mean that batch statements are atomic in Cassandra? The docs are a bit confusing, to be precise. Does it mean that queries are atomic across nodes in the cluster? Say, for example, I have a batch with 100 queries. If the 40th query in the batch fails, what happens to the 39 queries already executed in the batch? I understand that there is a batchlog created under the hood and it takes care of the consistency for partial batches. Does it remove the rest of the 39 entries and provide the