Getting the results of a Cypher query on a Neo4j database into a pandas DataFrame with py2neo is really straightforward:
>>> from pandas import DataFrame
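A minimal sketch of the rest, assuming an existing py2neo Graph connection named graph (the connection details, label, and property names here are placeholders; to_data_frame() is available on the Cursor in recent py2neo versions, while older versions can use DataFrame(graph.data(...)) instead):

>>> from py2neo import Graph
>>> graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))
>>> df = graph.run("MATCH (a:Label1) RETURN a.property AS property").to_data_frame()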
You can use DataFrame.iterrows() to iterate through the DataFrame and execute a query for each row, passing in the values from the row as parameters.
for index, row in df.iterrows():
    graph.run('''
        MATCH (a:Label1 {property:$label1})
        MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
That will execute one transaction per row. We can batch multiple queries into one transaction for better performance.
tx = graph.begin()
for index, row in df.iterrows():
    tx.evaluate('''
        MATCH (a:Label1 {property:$label1})
        MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
tx.commit()
Typically we can batch ~20k database operations in a single transaction.
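A minimal sketch of committing in batches of that size (the BATCH_SIZE constant and the enumerate counter are additions for illustration, not part of the original answer):

BATCH_SIZE = 20000  # rule-of-thumb batch size from above; tune for your setup

tx = graph.begin()
for i, (index, row) in enumerate(df.iterrows(), start=1):
    tx.evaluate('''
        MATCH (a:Label1 {property:$label1})
        MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
    if i % BATCH_SIZE == 0:  # commit this batch and start a fresh transaction
        tx.commit()
        tx = graph.begin()
tx.commit()  # commit any remaining rows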
I found out that the proposed solution doesn't work for me: the code above creates new nodes even though the nodes already exist. This is because MERGE matches or creates the entire pattern, so if the whole pattern doesn't match, the missing parts (including the b node) are created. To make sure you don't create any duplicates, match both the a and b nodes before the MERGE:
tx = graph.begin()
for index, row in df.iterrows():
    tx.evaluate('''
        MATCH (a:Label1 {property:$label1}), (b:Label2 {property:$label2})
        MERGE (a)-[r:R_TYPE]->(b)
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
tx.commit()
Also, in my case I had to add relationship properties at the same time (see the code below). Moreover, I had 500k+ relationships to add, so, as expected, I ran into a Java heap space error. I solved the problem by placing begin() and commit() inside the loop, so that a new transaction is created for each new relationship:
for index, row in df.iterrows():
    tx = graph.begin()
    tx.evaluate('''
        MATCH (a:Label1 {property:$label1}), (b:Label2 {property:$label2})
        MERGE (a)-[r:R_TYPE {property_name:$p}]->(b)
    ''', parameters={'label1': row['label1'], 'label2': row['label2'], 'p': row['property']})
    tx.commit()
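One transaction per row avoids the heap error but pays a round trip per relationship. A hedged alternative sketch, not from the answers above: pass the rows as a single query parameter and let Neo4j iterate server-side with UNWIND (the column names and the property_name key are carried over from the example above):

rows = df.to_dict('records')  # one dict per row: {'label1': ..., 'label2': ..., 'property': ...}
graph.run('''
    UNWIND $rows AS row
    MATCH (a:Label1 {property: row.label1}), (b:Label2 {property: row.label2})
    MERGE (a)-[r:R_TYPE {property_name: row.property}]->(b)
''', parameters={'rows': rows})

For very large DataFrames you can slice rows into chunks (e.g. 20k at a time, matching the rule of thumb above) and run the query once per chunk.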