Querying Data in Cassandra via Spark in a Java Maven Project

Asked by 情歌与酒 on 2021-01-24 22:47

I'm trying to write a simple program that creates a schema, inserts some rows, then pulls some information back and prints it out. However, I'm getting an error. I'm using the Data

1 Answer
  •  长情又很酷
    2021-01-24 23:44

    Try running your CQL via cqlsh and you should get the same/similar error:

    aploetz@cqlsh:stackoverflow> CREATE TABLE dept (id INT PRIMARY KEY, dname TEXT);
    aploetz@cqlsh:stackoverflow> INSERT INTO dept (id, dname) VALUES (1553,Commerce);
    
    

    Text literals in CQL must be single-quoted. Put single quotes around "Commerce" and it should work:

    session.execute(
                      "INSERT INTO tester.dept (id, dname) " +
                      "VALUES (" +
                          "1553," +
                          "'Commerce'" +
                          ");");
    
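    Since the bug here is hand-built quoting inside a concatenated string, it may help to see the quoting rule in isolation. The helper below is hypothetical (it is not part of the DataStax driver): it wraps a string in single quotes and doubles any embedded single quotes, which is how CQL escapes them.

    ```java
    // Hypothetical helper, not from any driver: builds a CQL text literal.
    // CQL rule: text values are single-quoted, and an embedded single quote
    // is escaped by doubling it (e.g. O'Brien -> 'O''Brien').
    public class CqlQuote {
        static String textLiteral(String s) {
            return "'" + s.replace("'", "''") + "'";
        }

        public static void main(String[] args) {
            String cql = "INSERT INTO tester.dept (id, dname) VALUES (1553, "
                    + textLiteral("Commerce") + ");";
            System.out.println(cql);
            // prints: INSERT INTO tester.dept (id, dname) VALUES (1553, 'Commerce');
        }
    }
    ```

    In practice, the cleaner fix is to skip string concatenation entirely and use a prepared statement (`session.prepare(...)` with bind markers), which lets the driver handle quoting and escaping for you.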

    You mentioned that you're now getting a new error. Try running that query from cqlsh as well:

    aploetz@cqlsh:stackoverflow> SELECT * FROM emp WHERE role = 'IT Engineer';
    code=2200 [Invalid query] message="No indexed columns present in by-columns clause with Equal operator"
    

    This is happening because role is not part of your primary key. Cassandra doesn't allow you to filter on arbitrary column values. The best way to solve this is to create an additional query table, empByRole, with role as the partition key. Like this:

    CREATE TABLE empByRole 
        (id INT, fname TEXT, lname TEXT, role TEXT,
        PRIMARY KEY (role,id)
    );
    
    aploetz@cqlsh:stackoverflow> INSERT INTO empByRole (id, fname, lname, role) VALUES (0001,'Angel','Pay','IT Engineer');
    aploetz@cqlsh:stackoverflow> SELECT * FROM empByRole WHERE role = 'IT Engineer';
    
     role        | id | fname | lname
    -------------+----+-------+-------
     IT Engineer |  1 | Angel |   Pay
    
    (1 rows)
    
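    As an aside, the "No indexed columns" wording in the error hints at another option: a secondary index on role would also make the query legal. It might be sketched like this (note that secondary indexes tend to perform poorly on very high- or very low-cardinality columns, so a query table like empByRole is usually the better fit):

    ```
    aploetz@cqlsh:stackoverflow> CREATE INDEX ON emp (role);
    aploetz@cqlsh:stackoverflow> SELECT * FROM emp WHERE role = 'IT Engineer';
    ```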
