Which database to choose (Cassandra, MongoDB, ?) for storing and querying event / log / metrics data?

Backend · Unresolved · 3 answers · 2138 views
Asked by 一整个雨季 on 2021-02-14 09:29

In SQL terms, we're storing data like this:

table events (
  id,
  timestamp,
  dimension1,
  dimension2,
  dimension3,
  ...
)

All dimension values …

3 Answers
  • 2021-02-14 09:59

    "Was also looking at MongoDB, but their group() function has severe limitations as far as I could read (max of 10,000 rows)."

    To clarify, this is 10,000 rows returned. In your example, this will work for up to 10,000 combinations of dimension1/dimension2. If that's too large, then you can also use the slower Map/Reduce. Note that if you're running a query with more than 10k results, it may be best to use Map/Reduce and save this data. 10k is a large query result to otherwise just "throw away".
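    As an aside not in the original answer: MongoDB's later aggregation framework expresses the same dimension1/dimension2 roll-up without group()'s row limit. Here the pipeline is shown as plain Python dicts; the field names come from the question's schema, everything else is an illustrative sketch (with pymongo it would be run as coll.aggregate(pipeline)).

    ```python
    # A "count per dimension1/dimension2 combination" roll-up, written as
    # a MongoDB aggregation pipeline. Field names come from the question's
    # schema; the rest is an illustrative sketch.
    pipeline = [
        {"$group": {
            # Group key: one bucket per combination of the two dimensions.
            "_id": {"d1": "$dimension1", "d2": "$dimension2"},
            # {"$sum": 1} counts the events that fall into each bucket.
            "count": {"$sum": 1},
        }},
        # Largest buckets first.
        {"$sort": {"count": -1}},
    ]
    ```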

    "Do you have experience with any of these databases, and would you recommend it as a solution to the problem described above?"

    Many people actually use MongoDB to do this type of summary "real-time", but they do it using "counters" instead of "aggregation". Instead of "rolling-up" detailed data, they'll do a regular insert and then they'll increment some counters.

    In particular, they use atomic modifiers like $inc and $push to update data in a single request.

    Take a look at hummingbird for someone doing this right now. There's also an open source event-logging system backed by MongoDB: Graylog2. ServerDensity also does server event logging backed by MongoDB.

    Looking at these may give you some inspiration for the types of logging you want to do.
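    A minimal sketch of that counter pattern, assuming a pymongo-style API and an hourly bucket; the collection layout and field names are illustrative, not from the original answer:

    ```python
    from datetime import datetime

    def counter_update(event):
        """Build the (filter, update) pair for a pre-aggregated counter
        document, as used with MongoDB's atomic update operators.
        The field layout here is an illustrative assumption."""
        ts = event["timestamp"]
        # Truncate the timestamp to the chosen bucket size (hourly here).
        bucket = ts.replace(minute=0, second=0, microsecond=0)
        # One counter document per (dimension combination, time bucket).
        filter_doc = {
            "dimension1": event["dimension1"],
            "dimension2": event["dimension2"],
            "bucket": bucket,
        }
        # $inc atomically increments the counter; with upsert=True the
        # document is created the first time this key is seen.
        update_doc = {"$inc": {"count": 1}}
        return filter_doc, update_doc

    # With pymongo this pair would be applied as (not executed here):
    #   coll.update_one(filter_doc, update_doc, upsert=True)
    ```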

  • 2021-02-14 10:11

    I started to go down this path for a similar purpose (metrics gathering and reporting), and here's where I ended up...

    Getting the data in is the easy part. Getting the data out is the hard part.

    If you have time and talent, you could learn and use a combination of open source tools as described here: http://kibana.org/infrastructure.html. The parts list:

    • Syslog-ng - A replacement for the syslogd daemon
    • Logstash - Powerful log pipeline
    • RabbitMQ or Redis - For queuing messages
    • Elasticsearch - Full text document storage and search
    • Graphite - From Orbitz, Scalable real-time graphing
    • Statsd - From Etsy, counts occurrences of fields and ships to graphite
    • Graphital - A ruby daemon to send host level performance data to graphite
    • Kibana - A browser based log analysis front end for Logstash and Elasticsearch
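    To make the StatsD entry in the list above concrete, here is a sketch of the wire format it speaks: a plain text line ("name:value|c" for counters) fired over UDP. The helper names and the default host/port are assumptions for illustration.

    ```python
    import socket

    def statsd_count(name, value=1, rate=1.0):
        """Format a StatsD counter line ("<name>:<value>|c").
        A "|@<rate>" suffix is appended only when sampling."""
        line = f"{name}:{value}|c"
        if rate < 1.0:
            line += f"|@{rate}"
        return line

    def send_metric(line, host="127.0.0.1", port=8125):
        """Fire-and-forget UDP send, as a StatsD client does.
        Host and port are the conventional defaults, assumed here."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(line.encode("ascii"), (host, port))
        sock.close()
    ```

    Because the transport is fire-and-forget UDP, instrumented application code never blocks on the metrics backend, which is what lets StatsD sit in the hot path of request handling.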

    If you have more money than time, consider Splunk. It's expensive, but it's a good choice for a lot of situations. For example, I'm working with a client that is extremely short on people but doesn't mind spending money, so Splunk has been a good fit: it's more of a turn-key solution than learning and stitching together a composite of tools.

  • 2021-02-14 10:20

    "Group by" and "stupidly fast" do not go together. That's just the nature of that beast... Hence the limitations on Mongo's group operation; Cassandra doesn't even support it natively (although it does for Hive or Pig queries via Hadoop... but those are not intended to be stupidly fast).

    Systems like Twitter's Rainbird (built on Cassandra) do real-time analytics by denormalizing and pre-computing the counts: http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011
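    The pre-computation idea can be sketched in a few lines: on every write, increment a counter for each time granularity at once, so a later "group by minute/hour/day" is a plain key lookup instead of a scan. This in-memory sketch only illustrates the shape of the technique; the bucket formats and function names are assumptions, and Rainbird does this with Cassandra counter columns, not a Python dict.

    ```python
    from collections import Counter
    from datetime import datetime

    # One counter per (dimension, granularity, time-bucket) key.
    GRANULARITIES = {
        "minute": "%Y-%m-%d %H:%M",
        "hour": "%Y-%m-%d %H",
        "day": "%Y-%m-%d",
    }

    counts = Counter()

    def record(dimension, ts):
        """Write path: fan one event out to every granularity's bucket."""
        for gran, fmt in GRANULARITIES.items():
            counts[(dimension, gran, ts.strftime(fmt))] += 1

    def query(dimension, gran, bucket):
        """Read path: the "group by" was already done at write time."""
        return counts[(dimension, gran, bucket)]
    ```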
