Question
I have a DSE Spark cluster with 2 nodes. One DSE Analytics node with Spark cannot start after I install it; without Spark it starts just fine. But on my other node Spark is enabled and it starts and works just fine. Why is that, and how can I solve it? Thanks.
Here is my error log:
ERROR [main] 2016-02-27 20:35:43,353 CassandraDaemon.java:294 - Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Cannot start node if snitch's data center (Analytics) differs from previous data center (Cassandra). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:629) ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:290) [cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:335) [dse-core-4.8.4.jar:4.8.4]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:564) [cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at com.datastax.bdp.DseModule.main(DseModule.java:74) [dse-core-4.8.4.jar:4.8.4]
INFO [Thread-2] 2016-02-27 20:35:43,355 DseDaemon.java:418 - DSE shutting down...
Answer 1:
You previously started this node with the DseSimpleSnitch, which named the datacenter "Cassandra" because Analytics was not enabled.
Now when you start this node, the records on disk say the datacenter name should be "Cassandra", but since the node was started in Analytics mode the actual datacenter name is "Analytics". Clear out /var/lib/cassandra and it will wipe out the old data and start fresh.
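For example, a minimal sketch of that cleanup, assuming a package install managed by service and default data paths, and that this node's data can safely be discarded:

sudo service dse stop             # stop DSE on this node (assumed package install)
sudo rm -rf /var/lib/cassandra/*  # discard the old system tables; all local data on this node is lost
sudo service dse start            # restart; the node rejoins with the Analytics datacenter name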
In the future, if you set your nodes to use the GossipingPropertyFileSnitch (or another snitch that lets you name the datacenter explicitly), you can avoid this problem, because changing the workload will not change the datacenter name.
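A minimal sketch of that configuration, assuming default config file locations; the dc/rack values below are placeholders you would choose yourself and must stay the same for the life of the node:

# in cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch

# in cassandra-rackdc.properties (read by GossipingPropertyFileSnitch)
dc=DC1
rack=RAC1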
Answer 2:
This check was added recently to prevent people from accidentally changing rack/DC names and taking their applications down.
Alternatively, if this is just a dev system and you can afford downtime, you can turn the check off (this assumes you know what you're doing).
Add the following line to your cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true -Dcassandra.ignore_dc=true"
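After restarting, you can double-check which datacenter each node ended up in; a quick check, assuming nodetool is on the PATH:

nodetool status
# output groups nodes under headers such as "Datacenter: Analytics" / "Datacenter: Cassandra"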
Source: https://stackoverflow.com/questions/35670343/two-node-dse-spark-cluster-error-setting-up-second-node-why