Question
If I am building a graph on AWS Neptune with 20 million nodes and 100 million edges, how much RAM and disk space would I require? Can someone give me a rough order-of-magnitude estimate?
Answer 1:
Storage capacity in Amazon Neptune is dynamically allocated as you write data into a Neptune cluster. A new cluster starts out with 10GB allocated and then grows in 10GB segments as your data grows. As such, there's no need to pre-provision or calculate storage capacity prior to use. A Neptune cluster can hold up to 64TB of data, which corresponds to hundreds of billions of vertices, edges, and properties (or triples, if using RDF on Neptune).
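As a rough sanity check, here is a back-of-envelope sketch for 20 million nodes and 100 million edges. The bytes-per-element and properties-per-element figures are assumptions chosen for illustration (loosely consistent with "64TB holds hundreds of billions of elements"), not published Neptune numbers; actual usage depends on property counts, string sizes, and indexing.

```python
# Back-of-envelope storage estimate for a Neptune graph.
# ASSUMPTION: per-element cost and property counts below are illustrative guesses.

NODES = 20_000_000
EDGES = 100_000_000
PROPS_PER_ELEMENT = 3          # assumed average number of properties per node/edge
BYTES_PER_ELEMENT = 300        # assumed average on-disk cost per stored element

elements = (NODES + EDGES) * (1 + PROPS_PER_ELEMENT)
estimated_bytes = elements * BYTES_PER_ELEMENT

SEGMENT = 10 * 1024**3         # Neptune grows storage in 10GB segments
segments = -(-estimated_bytes // SEGMENT)   # ceiling division

print(f"~{estimated_bytes / 1024**3:.0f} GiB estimated, "
      f"allocated as {segments} x 10 GiB segments")
```

Under these assumptions the graph lands in the low hundreds of gigabytes, i.e. a couple of dozen 10GB segments, which Neptune allocates automatically as you load data.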
RAM (and CPU, for that matter) needs are driven by query complexity, not by graph size. RAM is also used for the buffer pool cache, which holds the vertices, edges, and properties that were most recently queried.
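If you still want a rough RAM guess, a similar sketch follows; the hot-set fraction and per-element cache cost are purely illustrative assumptions, since (as the answer says) the real requirement follows your query patterns rather than the graph size.

```python
# Rough working-set sizing for the buffer pool cache.
# ASSUMPTION: hot fraction and in-memory cost per element are illustrative guesses.

HOT_FRACTION = 0.10              # assume ~10% of the graph is queried frequently
BYTES_PER_CACHED_ELEMENT = 500   # assumed in-memory cost per cached element

elements = 20_000_000 + 100_000_000
hot_bytes = elements * HOT_FRACTION * BYTES_PER_CACHED_ELEMENT

print(f"~{hot_bytes / 1024**3:.1f} GiB of buffer pool for the hot working set")
```

With those numbers the hot set fits in a few gigabytes of buffer pool; heavier traversal workloads or larger hot sets would push you toward a larger instance class.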
Source: https://stackoverflow.com/questions/63439818/estimating-graph-db-size-on-aws-neptune