Question
I would like to use spaCy to extract word-relation information in the form of agent, action, and patient. For example, "Autonomous cars shift insurance liability toward manufacturers" -> ("autonomous cars", "shift", "liability") or ("autonomous cars", "shift", "liability toward manufacturers"). In other words, "who did what to whom" and "what applied the action to something else." I don't know much about my input data, so I can't make many assumptions.
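To make the first part concrete, here is the kind of naive approach I can imagine, based on spaCy's dependency labels (nsubj/dobj etc.); the model name `en_core_web_sm` is just a placeholder, and I don't know whether this pattern is robust enough:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a parser

def extract_triples(text):
    """Collect (agent, action, patient) triples from subject/object dependencies."""
    doc = nlp(text)
    triples = []
    for sent in doc.sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for subj in subjects:
                for obj in objects:
                    # Use the full subtrees so "autonomous cars" and
                    # "insurance liability" stay together as phrases.
                    agent = doc[subj.left_edge.i : subj.right_edge.i + 1]
                    patient = doc[obj.left_edge.i : obj.right_edge.i + 1]
                    triples.append((agent.text, token.lemma_, patient.text))
    return triples

print(extract_triples("Autonomous cars shift insurance liability toward manufacturers."))
# e.g. [('Autonomous cars', 'shift', 'insurance liability')]  (depends on the parse)
```

But this feels ad hoc: passives, clausal objects, and prepositional arguments ("toward manufacturers") would all need extra cases.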
I also want to extract logical relationships: conditionals such as "Whenever/if the sun is in the sky, the bird flies," and cause/effect cases such as "Heat makes ice cream melt."
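For the conditional case I could imagine looking for adverbial clauses introduced by a subordinator, roughly like this (the marker list is something I would have to curate myself):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
CONDITION_MARKERS = {"if", "whenever", "when", "unless"}

def conditions(doc):
    """Return (condition clause, consequent verb lemma) pairs."""
    pairs = []
    for token in doc:
        # An adverbial clause whose children include a conditional/temporal marker.
        if token.dep_ == "advcl" and any(c.lower_ in CONDITION_MARKERS for c in token.children):
            clause = doc[token.left_edge.i : token.right_edge.i + 1]
            pairs.append((clause.text, token.head.lemma_))
    return pairs

print(conditions(nlp("Whenever the sun is in the sky, the bird flies.")))
# e.g. [('Whenever the sun is in the sky', 'fly')]  (depends on the parse)
```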
For dependencies, spaCy's documentation recommends iterating through each sentence word by word and finding the root that way, but I'm not sure what traversal pattern to use so that I can extract the information reliably and in a form I can organize. My use case is structuring these sentences into a form I can use for queries and logical inference, comparable to a mini Prolog data store of my own.
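As a sketch of what I mean by a mini Prolog data store: start at `sent.root`, read off the labelled arguments, and emit a fact. The fact format and the normalisation below are just my own guess at a structure, not anything spaCy prescribes:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def phrase(token):
    # Full subtree of a token, joined with underscores so it can serve as a Prolog atom.
    span = token.doc[token.left_edge.i : token.right_edge.i + 1]
    return "_".join(t.lower_ for t in span if not t.is_punct)

def sentence_to_fact(sent):
    root = sent.root                        # main verb of the sentence
    args = {child.dep_: child for child in root.children}
    subj = args.get("nsubj") or args.get("nsubjpass")
    obj = args.get("dobj") or args.get("attr") or args.get("ccomp")
    if subj is None or obj is None:
        return None                         # skip sentences that don't fit this shape
    return f"{root.lemma_}({phrase(subj)}, {phrase(obj)})."

doc = nlp("Autonomous cars shift insurance liability toward manufacturers.")
for sent in doc.sents:
    print(sentence_to_fact(sent))
# e.g. shift(autonomous_cars, insurance_liability).  (depends on the parse)
```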
For cause/effect, I could hard-code some rules, but then I still need a way of reliably traversing the dependency tree and extracting the information. (I will probably combine this with coreference resolution using neuralcoref, plus word vectors and ConceptNet to resolve ambiguities, but this is a little tangential.)
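For the hard-coded rules, spaCy's DependencyMatcher (in v3; earlier versions had an experimental equivalent) looks like one way to express them over the tree instead of traversing it by hand; the lemma list below is just an example of such a rule:

```python
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)

# "X makes/causes Y ..." : a causal verb with a subject (cause) and a
# clausal or nominal complement (effect).
pattern = [
    {"RIGHT_ID": "cause_verb",
     "RIGHT_ATTRS": {"LEMMA": {"IN": ["make", "cause"]}}},
    {"LEFT_ID": "cause_verb", "REL_OP": ">", "RIGHT_ID": "cause",
     "RIGHT_ATTRS": {"DEP": "nsubj"}},
    {"LEFT_ID": "cause_verb", "REL_OP": ">", "RIGHT_ID": "effect",
     "RIGHT_ATTRS": {"DEP": {"IN": ["ccomp", "xcomp", "dobj"]}}},
]
matcher.add("CAUSE_EFFECT", [pattern])

doc = nlp("Heat makes ice cream melt.")
for match_id, token_ids in matcher(doc):
    verb, cause, effect = (doc[i] for i in token_ids)  # order follows the pattern
    effect_span = doc[effect.left_edge.i : effect.right_edge.i + 1]
    print(f"cause: {cause.text!r} -> effect: {effect_span.text!r}")
# e.g. cause: 'Heat' -> effect: 'ice cream melt'  (depends on the parse)
```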
In short, the question is really how to extract this information, i.e., how best to traverse the dependency tree.
On a tangential note, I am wondering whether I also need a constituency tree for phrase-level parsing to achieve this. I think Stanford CoreNLP provides one, but spaCy might not.
Source: https://stackoverflow.com/questions/62601716/in-spacy-nlp-how-extract-the-agent-action-and-patient-as-well-as-cause-eff