Question
I have a ScalaTest suite that's failing, and I've narrowed the cause down to the code that runs before the tests and truncates a data table. I can recreate the problem by running the following code (the same truncate executed twice):
session.execute(s"TRUNCATE ${dao.tableName};")
session.execute(s"TRUNCATE ${dao.tableName};")
Throws:
Error during truncate: Cannot achieve consistency level ALL
com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
at com.datastax.driver.core.exceptions.TruncateException.copy(TruncateException.java:35)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at com.datastax.driver.core.Session.execute(Session.java:77)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply$mcV$sp(PostingGroupDaoTest.scala:43)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
at org.scalatest.FunSuite$$anon$1.apply(FunSuite.scala:1265)
at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
at ledger.testsupport.JUnitFunSuiteTest.withFixture(JUnitFunSuiteTest.scala:10)
at org.scalatest.FunSuite$class.invokeWithFixture$1(FunSuite.scala:1262)
at ...
Caused by: com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
at com.datastax.driver.core.Responses$Error.asException(Responses.java:91)
at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:122)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:224)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:361)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:510)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
I'm using the DataStax driver 2.0.0-RC2 and have a cluster of three nodes.
Any ideas as to what's going wrong here?
Answer 1:
It turns out this was an issue with a node that had got into an inconsistent state after running out of disk space.
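For anyone debugging the same failure: since a truncate needs every node to respond, it can help to confirm from the driver that all nodes are reachable before the truncate runs. A minimal sketch using the driver's cluster metadata; the cluster value is assumed to be the Cluster instance the session was created from:

import scala.collection.JavaConverters._

// Print the up/down state of every node the driver currently knows about.
cluster.getMetadata.getAllHosts.asScala.foreach { host =>
  println(s"${host.getAddress} up=${host.isUp}")
}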
Answer 2:
This is because of the consistency level. You cannot truncate the data on all nodes using consistency level ALL. Set the consistency level to ONE or TWO instead; the truncate will then be applied on one node, and after some time it will propagate to the other nodes.
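To make that concrete, here is a minimal sketch of setting a per-statement consistency level with the 2.0 driver from Scala. The session and dao.tableName are assumed to be the same values as in the question:

import com.datastax.driver.core.{ConsistencyLevel, SimpleStatement}

// Build the truncate as a Statement so a per-query consistency level can be attached.
val truncate = new SimpleStatement(s"TRUNCATE ${dao.tableName};")
truncate.setConsistencyLevel(ConsistencyLevel.ONE) // instead of ALL
session.execute(truncate)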
Source: https://stackoverflow.com/questions/21457800/cassandra-truncating-a-table-twice-throws-consistency-exception