Which clustered NoSQL DB for message storing?

Backend · 3 answers · 957 views
长发绾君心 · 2020-12-15 13:03

Yet another question about which NoSQL to choose. However, I haven't yet found anyone asking about this particular purpose: message storing...

I have an Erlang Chat Server...

3 Answers
  • 2020-12-15 13:07

    I can't speak to Riak at all, but I'd question your choice to discard Mongo. It's quite good as long as you leave journaling turned off and don't completely starve it of RAM.

    I know quite a lot about HBase, and it sounds like it would meet your needs easily. Might be overkill depending on how many users you have. It trivially supports things like storing many messages per user, and has functionality for automatic expiration of writes. Depending on how you architect your schema it may or may not be atomic, but that shouldn't matter for your use case.

    The downside is that there is a lot of operational overhead in setting it up correctly: you need to know Hadoop, get HDFS running, make sure your NameNode is reliable, and so on, before standing up HBase.
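    For a sense of the schema this enables, here is a minimal sketch using happybase, a Python Thrift client for HBase. The table name, column family, 90-day TTL, and row-key layout are illustrative assumptions, not anything from the question.

    ```python
    import time

    import happybase  # Python client for HBase's Thrift gateway

    # Assumes a Thrift gateway on localhost; all names here are made up.
    conn = happybase.Connection('localhost')

    # A column-family TTL approximates "delete messages older than X
    # months": HBase expires the cells automatically.
    conn.create_table('messages', {'m': dict(time_to_live=90 * 24 * 3600)})
    table = conn.table('messages')

    def store_message(user_key: str, body: str) -> None:
        # Row key = user + inverted timestamp, so one user's messages
        # are contiguous and a prefix scan returns them newest-first.
        inv_ts = 2**63 - int(time.time() * 1000)
        table.put(f'{user_key}:{inv_ts:019d}'.encode(),
                  {b'm:body': body.encode()})

    def messages_for(user_key: str) -> list:
        return [data[b'm:body'].decode()
                for _, data in table.scan(row_prefix=f'{user_key}:'.encode())]
    ```

    The TTL is the "automatic expiration of writes" mentioned above; per-user reads and deletes fall out of the prefix scan.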

  • 2020-12-15 13:10

    I can't speak for Cassandra or HBase, but let me address the Riak part.

    Yes, Riak would be appropriate for your scenario (and I've seen several companies and social networks use it for a similar purpose).

    To implement this, you would need the plain Riak Key/Value operations, plus some sort of indexing engine. Your options are (in rough order of preference):

    1. CRDT Sets. If your 1-N collection is reasonably sized (say, fewer than 50 messages per user), you can store the keys of the child collection in a CRDT Set Data Type (see the sketch after this list).

    2. Riak Search. If your collection size is large, and especially if you need to search your objects on arbitrary fields, you can use Riak Search. It spins up Apache Solr in the background, and indexes your objects according to a schema you define. It has pretty awesome searching, aggregation and statistics, geospatial capabilities, etc.

    3. Secondary Indexes. You can run Riak on top of an eLevelDB storage back end, and enable Secondary Index (2i) functionality.
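    As a concrete illustration of option 1, here is a minimal sketch using the official Python Riak client. It assumes a `sets` bucket type has already been created and activated (`riak-admin bucket-type create sets '{"props":{"datatype":"set"}}'`), and the bucket, user, and message keys are made up for the example.

    ```python
    from riak import RiakClient

    client = RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)

    # One CRDT Set per user, holding the keys of that user's messages.
    key_sets = client.bucket_type('sets').bucket('user_message_keys')

    user_set = key_sets.new('user:1001')
    user_set.add('msg:af12')     # record a child key in the 1-N collection
    user_set.add('msg:af13')
    user_set.store()

    fetched = key_sets.get('user:1001')
    print(len(fetched.value))    # message count, straight from the Set
    fetched.discard('msg:af12')  # drop a key when its message is deleted
    fetched.store()
    ```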

    Run a few performance tests to pick the fastest approach.

    As far as schema, I would recommend using two buckets (for the setup you describe): a User bucket, and a Message bucket.

    Index the Message bucket (either by associating a Search index with it, or by storing a user_key via 2i). This lets you do all of the required operations, and the message log does not have to fit into memory:

    • Store from 1 to X messages per registered user - Once you create a User object and get a user key, storing an arbitrary number of messages per user is easy: they are straight-up writes to the Message bucket, each message storing the appropriate user_key as a secondary index.
    • Get the number of stored messages per user - No problem. Get the list of message keys belonging to a user (via a search query, by retrieving the Set object where you keep the keys, or via a 2i query on user_key) and count them on the client side.
    • Retrieve all messages from a user at once - See the previous item. Get the list of keys of all messages belonging to the user (via Search, Sets, or 2i), then multi-fetch the values for those keys (all of the official Riak clients have a client-side multi-fetch capability).
    • Delete all messages from a user at once - Very similar. Get the list of message keys for the user and issue client-side deletes for them.
    • Delete all messages that are older than X months - Add an index on date. Then retrieve all message keys older than X months (via Search or 2i) and issue client-side deletes for them. (See the 2i sketch after this list.)
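    Here is a minimal sketch of the 2i variant (option 3) against the bullets above, using the official Python Riak client. The bucket name and index fields are assumptions for the example (though the `_bin`/`_int` suffixes are Riak's real 2i naming convention), and it presumes the eLevelDB backend so 2i is available.

    ```python
    import time
    import uuid

    from riak import RiakClient

    client = RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
    messages = client.bucket('messages')

    def store_message(user_key, body):
        now = int(time.time())
        obj = messages.new(str(uuid.uuid4()),
                           data={'user': user_key, 'body': body, 'ts': now})
        obj.add_index('user_key_bin', user_key)   # owner index
        obj.add_index('created_at_int', now)      # date index
        obj.store()

    def message_keys_for(user_key):
        return messages.get_index('user_key_bin', user_key).results

    def count_messages(user_key):                 # count client-side
        return len(message_keys_for(user_key))

    def fetch_all(user_key):                      # multi-fetch the values
        return [o.data for o in messages.multiget(message_keys_for(user_key))]

    def delete_all(user_key):                     # client-side deletes
        for key in message_keys_for(user_key):
            messages.delete(key)

    def delete_older_than(cutoff_ts):             # range query on the date index
        for key in messages.get_index('created_at_int', 0, cutoff_ts).results:
            messages.delete(key)
    ```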
  • 2020-12-15 13:16

    I'd recommend using a distributed key/value store like Riak or Couchbase and keeping the whole message log for each user serialized (as binary Erlang terms or JSON/BSON) as one value.

    With your use cases it would look like this:

    • Store from 1 to X messages per registered user - When a user comes online, spawn a stateful gen_server which fetches and deserializes the whole message log from storage on startup, receives new messages, and appends them to its copy of the log; at the end of the session it serializes the changed log, sends it back to storage, and terminates. (See the sketch after this list.)
    • Get the number of stored messages per user - Get the log out, deserialize, and count; or store the count alongside in a separate k/v pair.
    • Retrieve all messages from a user at once - Just pull the value from storage.
    • Delete all messages from a user at once - Just delete the value from storage.
    • Delete all messages that are older than X months - Get, filter, put back.
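    To make the shape of this concrete outside of the gen_server, here is a minimal sketch of the same load-on-start/flush-on-exit pattern using the Python Riak client, which (de)serializes JSON values automatically via `obj.data`. The bucket name and message shape are assumptions for the example.

    ```python
    import time

    from riak import RiakClient

    client = RiakClient(protocol='pbc', host='127.0.0.1', pb_port=8087)
    logs = client.bucket('message_logs')  # illustrative bucket name

    class SessionLog:
        """Load the whole log on session start, write it back on exit."""
        def __init__(self, user_key):
            self.user_key = user_key
            self.messages = logs.get(user_key).data or []  # one JSON value

        def append(self, body):
            self.messages.append({'ts': int(time.time()), 'body': body})

        def drop_older_than(self, cutoff_ts):  # "older than X months"
            self.messages = [m for m in self.messages if m['ts'] >= cutoff_ts]

        def close(self):
            # Serialize the changed log and send it back as one value.
            logs.new(self.user_key, data=self.messages).store()
    ```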

    The obvious limitation: the message log has to fit in memory.

    If you decide to store each message individually, the distributed database will have to sort them after retrieval if you want them in time order, so storing them individually hardly helps with larger-than-memory datasets. If you do need that, you will end up with a more involved scheme anyway.
