Unable to connect to snappydata store with spark-shell command

Submitted by 你离开我真会死。 on 2019-12-10 12:18:41

Question


SnappyData v0.5

My goal is to start a "spark-shell" from my SnappyData install's /bin directory and issue Scala commands against existing tables in my SnappyData store.

I am on the same host as my SnappyData store, locator, and lead (and yes, they are all running).

To do this, I am running this command as per the documentation here:

Connecting to a Cluster with spark-shell

~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:1527 --conf spark.ui.port=4041

I get this error trying to create a spark-shell to my store:

[TRACE 2016/08/12 15:21:55.183 UTC GFXD:error:FabricServiceAPI tid=0x1] XJ040 error occurred while starting server : java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
    at com.pivotal.gemfirexd.internal.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:124)
    at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:110)
    at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:136)
    at com.pivotal.gemfirexd.internal.impl.jdbc.Util.generateCsSQLException(Util.java:245)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:3380)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.(EmbedConnection.java:450)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection30.(EmbedConnection30.java:94)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection40.(EmbedConnection40.java:75)
    at com.pivotal.gemfirexd.internal.jdbc.Driver40.getNewEmbedConnection(Driver40.java:95)
    at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:351)
    at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:219)
    at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:195)
    at com.pivotal.gemfirexd.internal.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:141)
    at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServiceImpl.startImpl(FabricServiceImpl.java:290)
    at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServerImpl.start(FabricServerImpl.java:60)
    at io.snappydata.impl.ServerImpl.start(ServerImpl.scala:32)

Caused by: com.gemstone.gemfire.GemFireConfigException: Unable to contact a Locator service (timeout=5000ms). Operation either timed out or Locator does not exist. Configured list of locators is "[dev-snappydata-1(null):1527]".
    at com.gemstone.gemfire.distributed.internal.membership.jgroup.GFJGBasicAdapter.getGemFireConfigException(GFJGBasicAdapter.java:533)
    at com.gemstone.org.jgroups.protocols.TCPGOSSIP.sendGetMembersRequest(TCPGOSSIP.java:212)
    at com.gemstone.org.jgroups.protocols.PingSender.run(PingSender.java:82)
    at java.lang.Thread.run(Thread.java:745)


Answer 1:


Hmm! I assume you are running spark-shell from your desktop and connecting to the cluster in AWS? I'm not sure this will work, because the local JVM launched by spark-shell will attempt to join SnappyData's peer-to-peer cluster, which is unlikely to succeed across that boundary.

Snappy-shell, on the other hand, merely uses the JDBC client to connect (and hence will work).
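For comparison, a JDBC-based connection via snappy-shell would look roughly like this (a sketch; the host and port 1527 come from the question, and `connect client` is the shell's JDBC connect command):

```shell
# snappy-shell talks JDBC to the locator's client port (1527),
# so it never needs to join the peer-to-peer cluster.
./snappy-shell
# Then, at the snappy> prompt:
#   connect client '10.0.18.66:1527';
#   show tables;
```

This is why the JDBC route succeeds from outside the cluster while the embedded spark-shell route fails.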

And you cannot use the locator's client port (1527) for this anyway. See here.

Can you try snappydata.store.locators=10.0.18.66:10334, NOT 1527, as the port? Unlikely this will work, but worth a try.
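The suggested invocation would look like this (a sketch; the host 10.0.18.66 is taken from the question, and 10334 is the locator's default peer-discovery port rather than the 1527 JDBC client port):

```shell
# Run from the SnappyData install's /bin directory.
# 10334 = locator peer-discovery port; 1527 = JDBC client port (wrong here).
./spark-shell --master local[*] \
  --conf snappydata.store.locators=10.0.18.66:10334 \
  --conf spark.ui.port=4041
```

If this still times out, the peer-discovery and membership ports are probably not reachable from where spark-shell is running.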

Maybe there is a way to open up all the required ports and access to these nodes on AWS. Not recommended for production, though.

I am curious to see other responses from the engineering team. Until then, you may have to start the spark-shell from within the network (on an AWS node).



Source: https://stackoverflow.com/questions/38921733/unable-to-connect-to-snappydata-store-with-spark-shell-command
