Downsides of storing binary data in Riak?

攒了一身酷 2021-02-12 22:33

What are the problems, if any, of storing binary data in Riak?

Does it affect the maintainability and performance of the cluster?

What would the performance d

5 Answers
  • 2021-02-12 23:01

    One problem may be that it is difficult, if not impossible, to use JavaScript map/reduce across your binary data. You'll probably need Erlang for that.

  • 2021-02-12 23:17

    I personally haven't noticed any issues storing data such as images and documents (both DOC and PDF) in Riak. I don't have performance numbers, but I might be able to gather some if I remember to.

    Something of note: with Riak you can use Luwak, which provides an API for storing large files. This has been pretty useful.
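
    For illustration, here is a minimal sketch of storing and fetching a document over Riak's HTTP interface from Python (with the requests library). The node address, bucket, key, and file name are placeholders; adjust them for your cluster. Luwak has its own file-oriented interface on top of this, so the sketch only shows the plain key/value path.

        import requests

        RIAK = "http://127.0.0.1:8098"  # assumed HTTP listener of a local Riak node
        url = RIAK + "/buckets/documents/keys/report.pdf"

        # Store the binary document with an appropriate Content-Type.
        with open("report.pdf", "rb") as f:
            requests.put(url, data=f.read(),
                         headers={"Content-Type": "application/pdf"}).raise_for_status()

        # Fetch it back; resp.content holds the raw bytes of the PDF.
        resp = requests.get(url)
        resp.raise_for_status()
        pdf_bytes = resp.content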

  • 2021-02-12 23:18

    With Riak, the recommended maximum is 2MB per object. Above that, it's recommended either to use Riak CS, which has been tested with objects up to 5TB (stored in Riak as 1MB objects), or to break your large object into 2MB chunks yourself and link them together by a common key plus a suffix.
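
    To make the do-it-yourself option concrete, here is a rough sketch of manual chunking over Riak's HTTP interface with Python's requests library. The 2MB chunk size matches the recommendation above; the node address, bucket, and key-plus-suffix naming scheme are assumptions, not anything Riak enforces.

        import json
        import requests

        RIAK = "http://127.0.0.1:8098"  # assumed HTTP listener of a local Riak node
        CHUNK = 2 * 1024 * 1024         # 2MB, per the recommendation above

        def put_chunked(bucket, key, blob):
            # Store each 2MB slice under "<key>-<n>".
            chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
            for n, chunk in enumerate(chunks):
                requests.put(
                    f"{RIAK}/buckets/{bucket}/keys/{key}-{n}",
                    data=chunk,
                    headers={"Content-Type": "application/octet-stream"},
                ).raise_for_status()
            # Store a small JSON manifest under the base key so readers
            # know how many chunks to fetch and reassemble.
            requests.put(
                f"{RIAK}/buckets/{bucket}/keys/{key}",
                data=json.dumps({"chunks": len(chunks), "size": len(blob)}),
                headers={"Content-Type": "application/json"},
            ).raise_for_status()

        def get_chunked(bucket, key):
            manifest = requests.get(f"{RIAK}/buckets/{bucket}/keys/{key}").json()
            return b"".join(
                requests.get(f"{RIAK}/buckets/{bucket}/keys/{key}-{n}").content
                for n in range(manifest["chunks"])
            )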

  • 2021-02-12 23:21

    The only problem I can think of is storing binary data larger than 50MB, which they advise against. The whole point of Riak is exactly that:

    Another reason one might pick Riak is for flexibility in modeling your data. Riak will store any data you tell it to in a content-agnostic way — it does not enforce tables, columns, or referential integrity. This means you can store binary files right alongside more programmer-transparent formats like JSON or XML.

    Source: Schema Design in Riak - Introduction
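
    As a minimal illustration of that content-agnostic behaviour (Riak's HTTP interface from Python; the bucket, keys, and file are made up), a JSON document and a raw binary value can sit side by side in the same bucket:

        import json
        import requests

        RIAK = "http://127.0.0.1:8098"  # assumed HTTP listener of a local Riak node

        # Riak only records the Content-Type you supply and enforces no schema,
        # so JSON and raw bytes can share a bucket.
        requests.put(f"{RIAK}/buckets/users/keys/alice",
                     data=json.dumps({"name": "Alice", "avatar": "alice-avatar"}),
                     headers={"Content-Type": "application/json"}).raise_for_status()

        with open("alice.png", "rb") as f:
            requests.put(f"{RIAK}/buckets/users/keys/alice-avatar",
                         data=f.read(),
                         headers={"Content-Type": "image/png"}).raise_for_status()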

  • 2021-02-12 23:22

    Adding to @Oscar-Godson's excellent answer, you're likely to experience problems with values much larger than 50MB. Bitcask is best suited for values that are up to a few KB. If you're storing large values, you may want to consider alternative storage backends, such as innostore.
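
    A rough sketch of what that looks like: in the Riak versions that shipped innostore, the key/value backend is chosen per node in the riak_kv section of app.config. The exact settings below are an assumption drawn from the docs of that era, and innostore also needed its own section pointing at a data directory, which is omitted here.

        %% app.config fragment (sketch only) - pick the storage backend per node
        {riak_kv, [
            %% the default is {storage_backend, riak_kv_bitcask_backend}
            {storage_backend, riak_kv_innostore_backend}
        ]},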

    I don't have experience with storing binary values, but we have a medium-sized cluster in production (5 nodes, on the order of 100M values, tens of TBs), and we're seeing frequent errors related to inserting and retrieving values that are hundreds of KBs in size. Performance in this case is inconsistent - sometimes it works, other times it doesn't - so if you're going to test, test at scale.

    We're also seeing problems with large values when running MapReduce queries - they simply time out. However, that may be less relevant to binary values (as @Matt-Ranney mentioned).

    Also see @Stephen-C's answer here.
