datastax-java-driver

How to prevent Cassandra commit logs filling up disk space

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-01 03:59:37
I'm running a two-node DataStax AMI cluster on AWS. Yesterday, Cassandra started refusing connections from everything. The system logs showed nothing. After a lot of tinkering, I discovered that the commit logs had filled all the disk space on the allotted mount, and this seemed to be causing the connection refusals (I deleted some of the commit logs, restarted, and was able to connect). I'm on DataStax AMI 2.5.1 and Cassandra 2.1.7. If I decide to wipe and restart everything from scratch, how do I ensure that this does not happen again? You could try lowering the commitlog_total_space_in_mb setting in cassandra.yaml.
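The setting mentioned in the answer lives in cassandra.yaml. A minimal sketch of how it might be capped; 2048 MB is an illustrative value, not a recommendation — size it below the free space on the commit log volume:

```yaml
# cassandra.yaml — cap total commit log space so it cannot fill the mount.
# When the cap is reached, Cassandra flushes memtables for the oldest
# segments so those segments can be recycled.
commitlog_total_space_in_mb: 2048

# Smaller segments are reclaimed sooner (default is 32 MB).
commitlog_segment_size_in_mb: 32
```

After changing the value, restart the node; also verify the commit log directory (commitlog_directory) is on a volume with adequate headroom.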

Datastax Cassandra Driver throwing CodecNotFoundException

Submitted by ぐ巨炮叔叔 on 2019-12-01 03:49:35
The exact exception is as follows: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal]. These are the versions of software I am using: Spark 1.5, Datastax-cassandra 3.2.1, CDH 5.5.1. The code I am trying to execute is a Spark program using the Java API; it basically reads data (CSVs) from HDFS and loads it into Cassandra tables. I am using the spark-cassandra-connector. I had a lot of issues initially with a conflict involving Google's Guava library, which I was able to resolve by shading the Guava library and
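The error means the target column is varchar while the bound Java value is a java.math.BigDecimal, and the driver ships no codec between those two types. One option is registering a custom TypeCodec with the driver; a simpler workaround is to convert at the application boundary — bind a String and parse it back on reads. A pure-Java sketch of that conversion (no driver types involved; names are illustrative):

```java
import java.math.BigDecimal;

public class BigDecimalVarcharBridge {

    // Value to bind to a varchar column.
    // toPlainString() avoids scientific notation such as "1E+2".
    static String toVarchar(BigDecimal value) {
        return value == null ? null : value.toPlainString();
    }

    // Parse a varchar column value back into a BigDecimal on reads.
    static BigDecimal fromVarchar(String column) {
        return column == null ? null : new BigDecimal(column);
    }

    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("100.25");
        String bound = toVarchar(price);        // bind this to the varchar column
        BigDecimal back = fromVarchar(bound);   // reconstruct on read
        System.out.println(bound);
        System.out.println(back.compareTo(price) == 0);
    }
}
```

The alternative — changing the table column to decimal, or registering a custom codec via the driver's CodecRegistry — avoids the conversion entirely, at the cost of a schema change or extra driver plumbing.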

Cassandra read timeout

Submitted by 混江龙づ霸主 on 2019-12-01 02:43:59
Question: I am pulling a large amount of data from Cassandra 2.0, but unfortunately I am getting a timeout exception. My table: CREATE KEYSPACE StatisticsKeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 }; CREATE TABLE StatisticsKeyspace.HourlyStatistics( KeywordId text, Date timestamp, HourOfDay int, Impressions int, Clicks int, AveragePosition double, ConversionRate double, AOV double, AverageCPC double, Cost double, Bid double, PRIMARY KEY(KeywordId, Date, HourOfDay) ); CREATE
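Read timeouts on large pulls usually mean a single query is scanning too much at once. With this schema the partition key is KeywordId, so restricting each query to one partition and a bounded clustering range keeps individual reads small enough to finish within the timeout. A hedged CQL sketch (the keyword and date range are illustrative):

```sql
-- Read one partition at a time instead of a full-table scan,
-- bounding the range on the first clustering column (Date).
SELECT * FROM StatisticsKeyspace.HourlyStatistics
WHERE KeywordId = 'some-keyword'
  AND Date >= '2014-01-01' AND Date < '2014-02-01'
LIMIT 5000;
```

If the queries are already partition-scoped, the other knobs are the driver's fetch size (page size) and, as a last resort, read_request_timeout_in_ms in cassandra.yaml — though raising the server timeout treats the symptom rather than the query shape.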

Datastax Mismatch for Key Issue

Submitted by 落花浮王杯 on 2019-12-01 01:46:41
Our current setup contains DSE 5.0.2 with a 3-node cluster. Currently we are facing heavy load and node-failure issues. The debug.log details are given below: DEBUG [ReadRepairStage:8] 2016-09-27 14:11:58,781 ReadCallback.java:234 - Digest mismatch: org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(5503649670304043860, 343233) (45cf191fb10d902dc052aa76f7f0b54d vs ffa7b4097e7fa05de794371092c51c68) at org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159] at org.apache.cassandra.service
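A digest mismatch logged at DEBUG level means two replicas returned different versions of the same row and read repair reconciled them; occasional mismatches are normal, but a steady stream under heavy load suggests replicas are drifting (for example from dropped mutations on overloaded nodes). Regular anti-entropy repair keeps replicas in sync. A sketch of the usual commands (the keyspace name is hypothetical):

```shell
# Repair only this node's primary token ranges; run on each node in turn
# so ranges are not repaired redundantly.
nodetool repair -pr my_keyspace

# Check thread-pool stats for dropped mutations, which cause replica drift.
nodetool tpstats
```

If tpstats shows dropped MUTATION messages, the drift is a symptom of overload, and repair alone will not fix it — the load or capacity problem has to be addressed as well.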

Inserting into cassandra table from spark dataframe results in org.codehaus.commons.compiler.CompileException: File 'generated.java' Error

Submitted by 倖福魔咒の on 2019-11-30 09:47:45
Question: I am using spark-sql 2.4.1, datastax-java-cassandra-connector_2.11-2.4.1.jar and Java 8. I create the Cassandra table like this: create company(company_id int PRIMARY_KEY, company_name text); The JavaBean is as below: class CompanyRecord( Integer company_id; String company_name; //getter and setters //default & parametarized constructors ) The Spark code below saves the data into the Cassandra table: Dataset<Row> latestUpdatedDs = joinUpdatedRecordsDs.select("company_id", "company_name"); /// select
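Two things in the snippet commonly trip up this path: the CQL shown is not valid (it should be CREATE TABLE ... PRIMARY KEY, not PRIMARY_KEY), and Spark's bean encoder requires a public class with a public no-arg constructor and conventional getters/setters, or its generated.java code will not compile. A minimal sketch of a conforming bean — the corrected CQL in the comment is an assumption about the intended schema, not the poster's exact code:

```java
import java.io.Serializable;

// Intended table (valid CQL, assumed):
//   CREATE TABLE company (company_id int PRIMARY KEY, company_name text);
public class CompanyRecord implements Serializable {
    private Integer company_id;
    private String company_name;

    // Spark's bean encoder requires a public no-arg constructor.
    public CompanyRecord() {}

    public CompanyRecord(Integer companyId, String companyName) {
        this.company_id = companyId;
        this.company_name = companyName;
    }

    // Conventional getters/setters let the encoder map columns to fields.
    public Integer getCompany_id() { return company_id; }
    public void setCompany_id(Integer companyId) { this.company_id = companyId; }
    public String getCompany_name() { return company_name; }
    public void setCompany_name(String companyName) { this.company_name = companyName; }

    public static void main(String[] args) {
        CompanyRecord r = new CompanyRecord(1, "Acme");
        System.out.println(r.getCompany_id() + " " + r.getCompany_name());
    }
}
```

The field names deliberately match the Cassandra column names, since the connector maps bean properties to columns by name.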

“All host(s) tried for query failed” Error

Submitted by 时光总嘲笑我的痴心妄想 on 2019-11-30 09:40:05
My Java code is as follows: import com.datastax.driver.core.Cluster; import com.datastax.driver.core.Metadata; import com.datastax.driver.core.Session; public class CustomerController { public void execute() { Cluster cluster = Cluster.builder() .addContactPoints("172.16.11.126", "172.16.11.130") .withPort(9042) .build(); Session session = cluster.connect(); String command = "drop keyspace if exists bookstore"; session.execute(command); cluster.close(); } } When I run the code, I get the following error: Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException:
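NoHostAvailableException at connect time usually means nothing is accepting connections on the contact points and port (Cassandra not running, rpc_address bound elsewhere, or a firewall/security group blocking 9042) rather than a problem in the driver code itself. Before digging into driver configuration, it can help to confirm each node is reachable with a plain TCP probe — a sketch using the question's addresses:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class CqlPortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        for (String host : new String[] {"172.16.11.126", "172.16.11.130"}) {
            System.out.println(host + ":9042 reachable: "
                    + isReachable(host, 9042, 2000));
        }
    }
}
```

If the probe fails, check that Cassandra is up (nodetool status), that rpc_address in cassandra.yaml matches the address the client uses, and that port 9042 is open between client and nodes; only if the probe succeeds is it worth looking at driver-side settings.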

How to use Asynchronous/Batch writes feature with Datastax Java driver

Submitted by 允我心安 on 2019-11-30 09:24:24
I am planning to use the DataStax Java driver for writing to Cassandra. I am mainly interested in the batch-write and asynchronous features of the DataStax Java driver, but I have not been able to find any tutorials that explain how to incorporate these features into my code below, which uses the DataStax Java driver. /** * Performs an upsert of the specified attributes for the specified id. */ public void upsertAttributes(final String userId, final Map<String, String> attributes, final String columnFamily) { try { // make a sql here using the above input parameters. String sql = sqlPart1.toString()+sqlPart2
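In the DataStax Java driver (2.x/3.x API), batching and asynchronous execution go through BatchStatement and Session.executeAsync rather than hand-built SQL strings. A sketch of what the upsert loop could look like — the keyspace, table, and column names are hypothetical, and this assumes the driver (and its bundled Guava) is on the classpath:

```java
// Sketch only: requires the DataStax Java driver 2.x/3.x on the classpath.
BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
PreparedStatement stmt = session.prepare(
    "INSERT INTO my_ks.user_attributes (user_id, name, value) VALUES (?, ?, ?)");

for (Map.Entry<String, String> e : attributes.entrySet()) {
    batch.add(stmt.bind(userId, e.getKey(), e.getValue()));
}

// Synchronous batch execution...
session.execute(batch);

// ...or fire the statement asynchronously and react when it completes.
ResultSetFuture future = session.executeAsync(batch);
Futures.addCallback(future, new FutureCallback<ResultSet>() {
    public void onSuccess(ResultSet rs) { /* upsert applied */ }
    public void onFailure(Throwable t)  { /* log, retry, or propagate */ }
});
```

One caveat worth noting: Cassandra batches exist for atomicity on related writes, not throughput — for bulk loading unrelated rows, issuing many individual executeAsync calls is usually faster than one large batch.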

How to get current timestamp with CQL while using Command Line?

Submitted by 谁说胖子不能爱 on 2019-11-30 04:10:33
I am trying to insert into my CQL table from the command line. I am able to insert everything, but I am wondering: if I have a timestamp column, how can I insert into the timestamp column from the command line? Basically, I want to insert the current timestamp whenever I insert into my CQL table. Currently, I am hardcoding the timestamp whenever I insert into the CQL table below: CREATE TABLE TEST (ID TEXT, NAME TEXT, VALUE TEXT, LAST_MODIFIED_DATE TIMESTAMP, PRIMARY KEY (ID)); INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE) VALUES ('1', 'elephant', 'SOME_VALUE', 1382655211694)
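CQL can generate the timestamp server-side from a timeuuid, so no hardcoded millisecond value is needed: dateof(now()) works on Cassandra 2.0/2.1, and toTimestamp(now()) is the form introduced in 2.2. Against the table above:

```sql
-- Cassandra 2.0/2.1:
INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE)
VALUES ('1', 'elephant', 'SOME_VALUE', dateof(now()));

-- Cassandra 2.2+:
INSERT INTO TEST (ID, NAME, VALUE, LAST_MODIFIED_DATE)
VALUES ('2', 'lion', 'SOME_VALUE', toTimestamp(now()));
```

now() returns a timeuuid for the current moment, and dateof()/toTimestamp() extract the timestamp portion, so each insert records the time at which the coordinator executed it.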