JanusGraph

Janusgraph 0.3.2 + HBase 1.4.9 - Can't set graph.timestamps

核能气质少年 submitted on 2019-12-11 08:28:58
Question: I am running JanusGraph 0.3.2 in a Docker container and trying to use an AWS EMR cluster running HBase 1.4.9 as the storage backend. I can run gremlin-server.sh, but if I try to save something, I get the stack trace pasted below. It looks to me like the locks are being created with different timestamp lengths, making it appear that no lock exists. I tried adding the graph.timestamps setting to the config file, but still got the same error. Here is my configuration gremlin-server.yml host:
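A hedged sketch of the relevant JanusGraph properties for this setup (the hostname and table name are placeholders, not from the question). HBase stores cell timestamps in milliseconds, so graph.timestamps=MILLI is the value usually suggested for this backend; note the setting is fixed once the graph has been initialized, so it only takes effect on a freshly created graph:

```
# JanusGraph storage properties (sketch; hostname/table are assumptions)
storage.backend=hbase
storage.hostname=emr-master.example.com
storage.hbase.table=janusgraph
# HBase uses millisecond timestamps; must be set before the graph
# is initialized for the first time
graph.timestamps=MILLI
```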

Gremlin.net textContains equivalent

余生长醉 submitted on 2019-12-11 02:08:31
Question: I am using the Gremlin.Net library to connect to a JanusGraph server. I am using Cassandra and Elasticsearch for data storage and indexing. In the Gremlin language and the Gremlin console I use textContains to search within the text of a property. I am using a mixed index for that, but I cannot find the equivalent in the Gremlin.Net library. Can anyone help? Answer 1: Gremlin.Net will not have that. TinkerPop doesn't have the text or geo search predicates that JanusGraph and other systems have. At this point,
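One common workaround (a sketch, not part of the original answer): since textContains is a JanusGraph server-side predicate, the query can be sent as a plain Groovy script string, for example via GremlinClient.SubmitAsync in Gremlin.Net, and the predicate then resolves on the server:

```
// the script the client submits as a string; textContains is resolved
// by JanusGraph on the server ('description' is a placeholder property)
g.V().has('description', textContains('hello')).valueMap().toList()
```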

How to increase performance of shortest path using Gremlin?

痞子三分冷 submitted on 2019-12-09 03:35:26
Question: I'm using JanusGraph with Gremlin and this dataset containing 2.6k nodes and 6.6k edges (3.3k edges in each direction). I've run the query for 10 minutes without finding the shortest path. Using Gephi, the shortest path is almost instantaneous. Here's my query: g.V(687).repeat(out().simplePath()).until(hasId(1343)).path().limit(1) Answer 1: With simplePath() your query still processes a lot more paths than necessary. For example, if 688 is a direct neighbor of 687 , but also a neighbor of 1000 , which is 10 hops away on another path, why would you want to follow the path from 1000 to 688 , if you've already seen
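A common mitigation, sketched along the lines of the answer (the maximum depth of 5 is an assumption and should match the graph's expected diameter): bound the repeat() so traversers wandering down long detours are cut off instead of being explored exhaustively:

```
// stop expanding once a traverser has looped more than 5 times,
// then keep only traversers that actually reached the target vertex
g.V(687).
  repeat(out().simplePath()).
  until(hasId(1343).or().loops().is(gt(5))).
  hasId(1343).
  path().limit(1)
```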

Create vertex and edge in one Gremlin query if they do not exist

你说的曾经没有我的故事 submitted on 2019-12-08 03:10:19
Question: I found the following code to create an edge if it does not already exist: g.V().hasLabel("V1") .has("userId", userId).as("a") .V().hasLabel("V1").has("userId", userId2) .coalesce( bothE("link").where(outV().as("a")), addE("link").from("a") ) It works fine, but I want to create both the vertices and the edge, if they do not exist, in one query. I tried the following code on a new graph; it just creates the new vertices but no relation between them. g.V().hasLabel("V1") .has("userId", userId).fold() .coalesce( unfold
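A sketch of one likely explanation and fix (an assumption based on common TinkerPop behaviour, not stated in the excerpt): fold() is a reducing barrier, so a path label like as("a") set before it is no longer visible afterwards. Wrapping the second lookup's fold() in map() keeps that barrier local, so the label survives; userId1/userId2 are placeholders:

```
// upsert both vertices, then upsert the edge between them
g.V().hasLabel('V1').has('userId', userId1).fold().
  coalesce(unfold(),
           addV('V1').property('userId', userId1)).as('a').
  map(__.V().hasLabel('V1').has('userId', userId2).fold()).
  coalesce(unfold(),
           addV('V1').property('userId', userId2)).as('b').
  coalesce(bothE('link').where(otherV().as('a')),
           addE('link').from('a').to('b'))
```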

Using JanusGraph with Solr

此生再无相见时 submitted on 2019-12-06 08:08:41
Setting up JanusGraph, I noticed the following in the console: 09:04:12,175 INFO ReflectiveConfigOptionLoader:173 - Loaded and initialized config classes: 10 OK out of 12 attempts in PT0.023S 09:04:12,230 INFO Reflections:224 - Reflections took 28 ms to scan 1 urls, producing 2 keys and 2 values 09:04:12,291 WARN GraphDatabaseConfiguration:1445 - Local setting index.search.index-name=entity (Type: GLOBAL_OFFLINE) is overridden by globally managed value (janusgraph). Use the ManagementSystem interface instead of the local configuration to control this setting. 09:04:12,294 WARN
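The warning means index.search.index-name is a GLOBAL_OFFLINE setting whose value stored in the graph takes precedence over the local properties file. A sketch of changing it the way the warning suggests, through the ManagementSystem (the value 'entity' is just the one from the log); GLOBAL_OFFLINE changes only take effect after all JanusGraph instances have been restarted:

```
// open a management transaction and update the globally managed value
mgmt = graph.openManagement()
mgmt.set('index.search.index-name', 'entity')
mgmt.commit()
// restart every JanusGraph instance for the change to take effect
```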

Setup and configuration of JanusGraph for a Spark cluster and Cassandra

余生颓废 submitted on 2019-12-06 03:08:29
Question: I am running JanusGraph (0.1.0) with Spark (1.6.1) on a single machine. I did my configuration as described here. When accessing the graph in the Gremlin console with the SparkGraphComputer, it is always empty. I cannot find any error in the log files; it is just an empty graph. Is anyone using JanusGraph with Spark who can share their configuration and properties? Using a JanusGraph, I get the expected output: gremlin> graph=JanusGraphFactory.open('conf/test.properties') ==>standardjanusgraph[cassandrathrift:[127.0.0.1]] gremlin> g=graph.traversal() ==>graphtraversalsource[standardjanusgraph
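For comparison, a hedged sketch of a read-only HadoopGraph properties file for running SparkGraphComputer over a Cassandra-backed JanusGraph (class and property names follow the JanusGraph OLAP documentation; the hostname is an assumption and exact names may differ between versions):

```
# HadoopGraph read configuration for SparkGraphComputer (sketch)
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat

# where the JanusGraph data actually lives
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
cassandra.input.partitioner.class=org.apache.cassandra.dht.Murmur3Partitioner

# Spark settings for a single-machine run
spark.master=local[*]
spark.serializer=org.apache.spark.serializer.KryoSerializer
```

A mismatch between the partitioner configured here and the one the Cassandra cluster actually uses is a frequent cause of silently empty OLAP graphs.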

IllegalStateException : Gremlin Server must be configured to use the JanusGraphManager

流过昼夜 submitted on 2019-12-04 05:53:30
Question: Set<String> graphNames = JanusGraphFactory.getGraphNames(); for (String name : graphNames) { System.out.println(name); } The above snippet produces the following exception: java.lang.IllegalStateException: Gremlin Server must be configured to use the JanusGraphManager. at com.google.common.base.Preconditions.checkState(Preconditions.java:173) at org.janusgraph.core.JanusGraphFactory.getGraphNames(JanusGraphFactory.java:175) at com.JanusTest.controllers.JanusController.getPersonDetail
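A sketch of the Gremlin Server settings the exception is asking for (file paths are placeholders): JanusGraphFactory.getGraphNames only works inside a server process that instantiates the JanusGraphManager, which is enabled in gremlin-server.yaml roughly as follows:

```
# gremlin-server.yaml fragment (sketch; paths are assumptions)
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
  ConfigurationManagementGraph: conf/janusgraph-configurationmanagement.properties
}
```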

Is Cassandra unable to store relationships that cross the partition size limit?

有些话、适合烂在心里 submitted on 2019-12-02 03:54:47
I've noticed that relationships cannot be properly stored in C* due to its 100MB partition size recommendation, and denormalization doesn't help in this case. And the fact that C* can have 2B cells per partition doesn't help either, as 2B cells of just longs take 16GB ?!?!? Doesn't that cross the 100MB partition size limit? Which is what I don't understand in general: C* proclaims it can have 2B cells, but a partition's size should not cross 100MB ??? What is the idiomatic way to do this? People say that this is an ideal use case for TitanDB or JanusGraph, which scale well to billions of nodes and edges. How do these databases
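The arithmetic behind the question's 16 GB figure, as a quick check (not part of the original post):

```
// 2 billion cells, each holding an 8-byte long
println(2_000_000_000L * 8 / 1_000_000_000)  // 16 GB, vastly over the
                                             // ~100 MB per-partition guidance
```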