datastax-java-driver

ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition

Submitted by 只谈情不闲聊 on 2019-12-11 15:24:04
Question: I am using Spark 2.2.1 with Scala 2.11.8 on OpenJDK 64-Bit Server VM 1.8.0_131. I added the jar dependency in code: JavaSparkContext sc = new JavaSparkContext(conf); sc.addJar("./target/CassandraSparkJava-1.0-SNAPSHOT-jar-with-dependencies.jar"); Executing the code below, I am facing ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition Dataset<org.apache.spark.sql.Row> dataset = sparksession.read().format("org.apache.spark.sql.cassandra")
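A common cause of this error is that the connector classes never reach the executor classpath: sc.addJar() after the context is created can be too late for classes the planner needs. A usual fix (a sketch, assuming coordinates and a version matching Spark 2.2 / Scala 2.11) is to declare the Spark Cassandra connector as a Maven dependency so it is shaded into the fat jar, or to pass it at submit time with --packages:

```xml
<!-- Assumed artifact/version for Spark 2.2 + Scala 2.11; shading this into
     the fat jar (or supplying it via spark-submit --packages) puts
     com.datastax.spark.connector.* on the executor classpath. -->
<dependency>
  <groupId>com.datastax.spark</groupId>
  <artifactId>spark-cassandra-connector_2.11</artifactId>
  <version>2.0.10</version>
</dependency>
```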

Iterating a GraphTraversal with GraphFrame causes UnsupportedOperationException Row to Vertex conversion

Submitted by 北慕城南 on 2019-12-11 12:08:01
Question: The following GraphTraversal<Row, Edge> traversal = gf().E().hasLabel("foo").limit(5); while (traversal.hasNext()) {} causes this exception: java.lang.UnsupportedOperationException: Row to Vertex conversion is not supported: Use .df().collect() instead of the iterator at com.datastax.bdp.graph.spark.graphframe.DseGraphTraversal.iterator$lzycompute(DseGraphTraversal.scala:92) at com.datastax.bdp.graph.spark.graphframe.DseGraphTraversal.iterator(DseGraphTraversal.scala:78) at com

Datastax: Re-preparing already prepared query warning

Submitted by a 夏天 on 2019-12-11 09:37:11
Question: I have this code: UUID notUuid = UUIDs.timeBased(); PreparedStatement pstmt = cqlSession.prepare("INSERT INTO mytable(userId, notifId, notification, time, read, deleted) VALUES(?, ?, ?, ?, ?, ?)"); BoundStatement boundStatement = new BoundStatement(pstmt); cqlSession.execute(boundStatement.bind(userId, notUuid, notfMsg, System.currentTimeMillis(), MigificConstants.UNREAD, "false")); When I run this code, the log shows: Re-preparing already prepared query INSERT INTO mytable(userId,
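The warning usually means prepare() is being called repeatedly with the same CQL string, typically because the preparation happens inside a per-request method. A statement only needs to be prepared once per session and the resulting PreparedStatement reused. A minimal sketch of a per-query cache; the prepared-statement type is abstracted to a generic here, so nothing below is driver API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: cache one prepared statement per CQL string so the
// expensive prepare step runs once, not on every request. In real code the
// preparer function would be something like session::prepare.
public class StatementCache<P> {
    private final Map<String, P> cache = new ConcurrentHashMap<>();
    private final Function<String, P> preparer;

    public StatementCache(Function<String, P> preparer) {
        this.preparer = preparer;
    }

    // Returns the cached statement, preparing it only on first use.
    public P get(String cql) {
        return cache.computeIfAbsent(cql, preparer);
    }
}
```

With this in place, repeated calls with the same CQL string return the same prepared statement and the driver's re-prepare warning disappears.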

How to map JavaBean columns to Cassandra table fields?

Submitted by 两盒软妹~` on 2019-12-11 04:59:21
Question: I am using spark-sql 2.4.1, datastax-java-cassandra-connector_2.11-2.4.1.jar and Java 8. I have a Cassandra table like: create company(company_id int PRIMARY_KEY, company_name text); and a JavaBean as below: @Table(name = "company") class CompanyRecord( @PartitionKey(0) @Column(name="company_id") Integer companyId; @Column(name="company_name") String companyName; //getters and setters //default & parameterized constructors ) I have the Spark code below to save the data into the Cassandra table. Dataset<Row>
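For reference, the Spark connector can match JavaBean property names to column names by convention, roughly by lower-snake-casing camelCase property names (so explicit @Column annotations are only needed when names diverge). The converter below is an illustrative approximation of that convention, not connector code:

```java
// Sketch of the conventional name translation: camelCase bean property
// names map to lower-snake-case Cassandra column names.
public class ColumnNames {
    public static String toColumnName(String property) {
        StringBuilder sb = new StringBuilder();
        for (char c : property.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // An uppercase letter starts a new snake_case segment.
                sb.append('_').append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Under this convention, companyId maps to company_id and companyName to company_name, matching the table definition in the question.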

Which additional libraries are required for client compression?

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-11 03:03:28
Question: The DataStax Java driver supports client-to-node connection compression using Snappy and LZ4. When starting, the Java driver states: WARN [2015-04-28 16:13:59,906] com.datastax.driver.core.FrameCompressor: Cannot find LZ4 class, you should make sure the LZ4 library is in the classpath if you intend to use it. LZ4 compression will not be available for the protocol. Two questions: Which "LZ4 library" is the driver referring to in the above log message? Is there a Maven repo for it perhaps? I
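The class the driver looks for lives in the jpountz LZ4 artifact, which the driver declares only as an optional dependency; adding it explicitly to your build makes LZ4 compression available. The exact coordinates depend on the driver version (newer releases moved to org.lz4:lz4-java); for driver versions of this era the classic artifact is:

```xml
<!-- LZ4 for the native-protocol frame compressor. Version is an example;
     match it to what your driver version expects. -->
<dependency>
  <groupId>net.jpountz.lz4</groupId>
  <artifactId>lz4</artifactId>
  <version>1.3.0</version>
</dependency>
<!-- Snappy compression, if chosen instead, comes from
     org.xerial.snappy:snappy-java. -->
```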

Cassandra - Write doesn't fail, but values aren't inserted

Submitted by 本秂侑毒 on 2019-12-10 18:29:05
Question: I have a cluster of 3 Cassandra 2.0 nodes. In my application I wrote a test which tries to write some data to Cassandra and read it back. In general this works fine. The curious thing is that after I restart my computer, the test fails: after writing, I read back the value I just wrote and get null instead, although there was no exception while writing. If I manually truncate the used column family, the test passes again. After that I can execute this test how often I
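One plausible cause of this symptom (an assumption, since the question is truncated) is Cassandra's timestamp-based last-write-wins conflict resolution: if the client or a node clock jumps backwards after a restart, a new write can carry an older timestamp than an existing cell or tombstone and be silently shadowed, with no error reported. A minimal local sketch of the resolution rule, not driver or server code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of last-write-wins: a write is only visible if its
// timestamp is newer than what is already stored, mirroring how a
// backwards clock jump can make a successful write appear "lost".
public class LastWriteWins {
    static final class Cell {
        final String value;
        final long writetime; // microseconds, like Cassandra's WRITETIME
        Cell(String value, long writetime) {
            this.value = value;
            this.writetime = writetime;
        }
    }

    private final Map<String, Cell> rows = new HashMap<>();

    public void write(String key, String value, long writetime) {
        Cell current = rows.get(key);
        if (current == null || writetime > current.writetime) {
            rows.put(key, new Cell(value, writetime));
        }
        // Otherwise the write is accepted without error but never visible.
    }

    public String read(String key) {
        Cell c = rows.get(key);
        return c == null ? null : c.value;
    }
}
```

This also explains why truncating the column family "fixes" the test: truncation removes the older-but-higher-timestamped cells that were shadowing the new writes.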

Range query in Cassandra

Submitted by 拟墨画扇 on 2019-12-10 11:18:38
Question: I'm using Cassandra 2.1.2 with the corresponding DataStax Java driver and the object mapping provided by DataStax. Given the following table definition: CREATE TABLE IF NOT EXISTS ses.tim (id text PRIMARY KEY, start bigint, cid int); the mapping: @Table(keyspace = "ses", name = "tim") class MyObj { @PartitionKey private String id; private Long start; private int cid; } and the accessor: @Accessor interface MyAccessor { @Query("SELECT * FROM ses.tim WHERE id = :iid") MyObj get(@Param("iid") String id);
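With id as the sole primary key, a range predicate on start cannot be served efficiently: start is a regular column, so any such query needs ALLOW FILTERING and scans the table. If range queries on start are the goal (an assumption, since the question is truncated), the usual remodel is to make start a clustering column, for example:

```sql
-- Hypothetical remodel: partition by cid, cluster by start, so range
-- queries over start within one partition are efficient.
CREATE TABLE IF NOT EXISTS ses.tim2 (
  cid   int,
  start bigint,
  id    text,
  PRIMARY KEY (cid, start)
);

SELECT * FROM ses.tim2 WHERE cid = 1 AND start >= 100 AND start < 200;
```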

What are the implications of using lightweight transactions?

Submitted by 左心房为你撑大大i on 2019-12-10 02:29:13
Question: In particular I was looking at this page, where it says: "If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used." I'm confused as to what using LWTs for read operations looks like, specifically how this relates to per-query consistency (and serial consistency) levels. The description of SERIAL read consistency raises further questions: "Allows reading the current (and possibly uncommitted) state
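Conceptually, an LWT write is a compare-and-set executed through Paxos, and a read at SERIAL consistency participates in the same protocol, so it observes (and completes) in-flight proposals rather than reading stale replica state. As a local analogy only, not driver code, Java's AtomicReference shows the same conditional-update semantics that `UPDATE ... IF value = ?` provides:

```java
import java.util.concurrent.atomic.AtomicReference;

// Local analogy: an LWT behaves like compare-and-set on a single cell.
// SERIAL reads exist so that readers see the CAS-consistent state.
public class CasDemo {
    public static void main(String[] args) {
        AtomicReference<String> cell = new AtomicReference<>("a");

        // Like: UPDATE t SET v = 'b' WHERE k = ? IF v = 'a'  -> [applied]=true
        boolean applied = cell.compareAndSet("a", "b");

        // Condition no longer holds: like [applied]=false with the current value returned.
        boolean rejected = cell.compareAndSet("a", "c");

        System.out.println(applied + " " + rejected + " " + cell.get());
    }
}
```

Reading the cell outside the CAS mechanism (analogous to a plain QUORUM read mixed with LWT writes) risks observing a value that a concurrent CAS is about to supersede, which is what the documentation's warning is about.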

How to know affected rows in Cassandra (CQL)?

Submitted by 你离开我真会死。 on 2019-12-08 17:30:25
Question: There doesn't seem to be any direct way to know the number of affected rows in Cassandra for UPDATE and DELETE statements. For example, if I have a query like this: DELETE FROM xyztable WHERE PKEY IN (1,2,3,4,5,6); Now, of course, since I've passed 6 keys, it is obvious that 6 rows will be affected. But, as in the RDBMS world, is there any way to know the affected rows for UPDATE/DELETE statements in the DataStax driver? I've read that Cassandra gives no feedback on write operations here. Except that I could not see any
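Regular Cassandra writes indeed return no row count: the coordinator only acknowledges that enough replicas accepted the mutation, without checking whether a row existed. The closest equivalent is a lightweight transaction with an IF EXISTS condition, whose result set carries an [applied] boolean (exposed as wasApplied() in the Java driver), at the cost of a Paxos round trip and only per single primary key. A hedged sketch:

```sql
-- Conditional delete: the result row reports whether a row was actually
-- deleted, unlike a plain DELETE which always "succeeds".
DELETE FROM xyztable WHERE pkey = 1 IF EXISTS;
-- Response contains one column: [applied] = true | false
```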

Creating a custom index on a collection using CQL 3.0

Submitted by 有些话、适合烂在心里 on 2019-12-07 22:26:08
Question: I have been looking at the CQL 3.0 data modelling documentation, which describes a column family of songs with tags, created like this: CREATE TABLE songs ( id uuid PRIMARY KEY, title text, tags set<text> ); I would like to get a list of all songs which have a specific tag, so I need to add an appropriate index. I can create an index on the title column easily enough, but if I try to index the tags column, which is a collection, like this: CREATE INDEX ON songs ( tags ); I get the following
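Secondary indexes on collection columns are not supported in CQL 3.0 (Cassandra 1.2/2.0), which is why the statement is rejected. Support arrived in Cassandra 2.1 (CQL 3.1), where the same CREATE INDEX syntax works on a set/list/map column and the tag lookup uses CONTAINS:

```sql
-- Requires Cassandra 2.1+ (CQL 3.1): index the collection, then query
-- for membership with CONTAINS.
CREATE INDEX songs_tags_idx ON songs (tags);
SELECT * FROM songs WHERE tags CONTAINS 'jazz';
```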