I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.
The aim of this solution is to take data input from an external source, churn it and make the result available to a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally reach millions per second.
The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.
I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:
- The solution must at least have a reasonably well-maintained Java client API, as the rest of the app is written in Java and we are seasoned Java developers;
- The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster;
- The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down;
- Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster);
- There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
- Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24/7 support contract is a must.
- The whole thing has to be hosted in our data centers so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options would be available.
- Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.
Thanks in advance for any ideas.
Have a look at Hazelcast. It is a pure-Java, open-source (Apache licence), highly scalable in-memory data grid product, and it offers 24/7 support. It addresses each of your requirements; I have tried to explain them one by one below:
- It has a native Java Client.
- It is 100% dynamic: nodes can be added and removed on the fly without changing anything.
- Again, everything is dynamic, so failover is handled without any reconfiguration.
- You can configure the number of backups (copies of the data) to keep.
- Hazelcast supports persistence.
- Everything that Hazelcast offers is free (open source), and enterprise-level support is available.
- Hazelcast is a single jar file and is super easy to use: just add the jar to your classpath. Have a look at the screencast on the main page.
- Hazelcast is strictly consistent. You can never read stale data.
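A minimal sketch of what this looks like from Java, assuming Hazelcast 3.x; the map name "results" and the backup count are placeholders:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class CacheNode {
    public static void main(String[] args) {
        // Keep one synchronous backup copy of every entry (your "degree of overlapping").
        Config config = new Config();
        config.getMapConfig("results").setBackupCount(1);

        // Starting an instance makes this JVM join (or form) the cluster; nodes added
        // later join the same way, with no restart of the existing members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // The writer node publishes processed data; reader nodes call get() on the
        // same distributed map.
        IMap<String, byte[]> results = hz.getMap("results");
        results.put("latest", new byte[] {1, 2, 3});
        byte[] snapshot = results.get("latest");
        System.out.println("cached " + snapshot.length + " bytes");
    }
}
```

Frontends that should not hold data themselves can connect with HazelcastClient.newHazelcastClient(...) instead of starting a full cluster member.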
I suggest you use Redisson, a Redis-based in-memory data grid for Java. It implements BitSet, BloomFilter, Set, SortedSet, Map, ConcurrentMap, List, Queue, Deque, BlockingQueue, BlockingDeque, ReadWriteLock, Semaphore, Lock, AtomicLong, CountDownLatch, Publish/Subscribe, RemoteService, ExecutorService, LiveObjectService and SchedulerService on top of the Redis server. It supports master/slave, sentinel and cluster server modes, and automatic cluster/sentinel topology discovery is supported as well. The lib is free and open source.
It also works perfectly in the cloud thanks to AWS ElastiCache support.
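For illustration, a minimal sketch of reading and writing a distributed map through Redisson; the node address and map name are placeholders, and cluster mode is assumed:

```java
import org.redisson.Redisson;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonExample {
    public static void main(String[] args) {
        // Point the client at one cluster node; Redisson discovers the rest of the
        // topology (and follows failovers) automatically.
        Config config = new Config();
        config.useClusterServers().addNodeAddress("redis://127.0.0.1:7000");

        RedissonClient redisson = Redisson.create(config);

        // The same RMap is visible to every client connected to the cluster.
        RMap<String, String> results = redisson.getMap("results");
        results.put("latest", "processed value");
        System.out.println(results.get("latest"));

        redisson.shutdown();
    }
}
```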
Depending on what you prefer, I would follow the others in suggesting Hazelcast if you lean towards AP from the CAP theorem, but if you need CP, I would choose Redis.
You may want to check out Java-specific solutions like Oracle Coherence: http://www.oracle.com/global/ru/products/middleware/coherence/index.html
However, I consider such solutions too complex and prefer simpler ones like memcached. The big disadvantages of memcached for your purposes are the apparent lack of record locking and the absence of a built-in way to replicate data for failover. That is why I would look into key-value data stores; many of them would satisfy your needs completely.
Here is a list of key-value data stores that may help you with your task: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores Just pick one that you feel comfortable with.
I am doing a similar project, but targeting the .NET platform instead. Apart from the already mentioned solutions, I think you should take a look at ScaleOut StateServer and Alachisoft NCache. Neither of these alternatives is cheap, I'm afraid, but in my judgement they are a safer bet than open source for commercial solutions.
- Both provide Java client APIs, even though I have only played around with the .NET APIs.
- StateServer features self-discovery of new cache nodes, and NCache has a management console where new cache nodes can be added.
- Both should be able to handle failovers seamlessly.
- StateServer can have 1 or 2 passive copies of the data. NCache features more caching topologies to choose between.
- If you mean write-through/write-behind to a database, that is available in both.
- I have no idea how many cache servers you plan to use, but here are the full price specs: ScaleOut StateServer and Alachisoft NCache.
- Both are installed and configured locally on your server and they both have GUI Management.
- I am not sure exactly what "strictly consistent" involves, so I'll leave that for you to investigate.
Overall, StateServer is the best option if you want to skip configuring every little detail of the cache cluster, while NCache offers a great many features and caching topologies to choose from.
Depending on how the clients access the data (if the same client reads the same data many times), it might be a good idea to mix local caching on the clients with the distributed caching in the cluster (available for both NCache and StateServer); just a thought.
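For what it's worth, here is a rough sketch of that idea in Java. The RemoteCache interface is a hypothetical stand-in for whichever distributed-cache client you end up with, and the TTL just bounds how stale a frontend is allowed to be:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class NearCache<K, V> {

    /** Hypothetical abstraction over the distributed cache client. */
    public interface RemoteCache<K, V> {
        V get(K key);
    }

    private static final long TTL_MILLIS = 500; // tolerate data this stale on the client

    private final RemoteCache<K, V> remote;
    private final ConcurrentMap<K, Entry<V>> local = new ConcurrentHashMap<>();

    public NearCache(RemoteCache<K, V> remote) {
        this.remote = remote;
    }

    public V get(K key) {
        Entry<V> cached = local.get(key);
        if (cached != null && System.currentTimeMillis() - cached.storedAt < TTL_MILLIS) {
            return cached.value;           // serve from the frontend's own memory
        }
        V fresh = remote.get(key);         // fall back to the distributed cache
        local.put(key, new Entry<>(fresh));
        return fresh;
    }

    private static final class Entry<V> {
        final V value;
        final long storedAt = System.currentTimeMillis();
        Entry(V value) { this.value = value; }
    }
}
```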
Have a look at Terracotta's JVM clustering; it's open source ;) It has no API because it works at the JVM level: when you store a value in a replicated object, it is sent to all other nodes. Even locking and all those things work transparently, without adding any new code.
The specified use case seems to fit Netflix's Hollow. This is a read-only replicated cache with a single producer and multiple consumers.
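A rough sketch of that producer/consumer split, loosely following Hollow's quick-start; the filesystem-backed publisher/announcer classes, the blob directory and the Result type are assumptions here, so check the current Hollow docs before relying on any of it:

```java
import com.netflix.hollow.api.consumer.HollowConsumer;
import com.netflix.hollow.api.consumer.fs.HollowFilesystemAnnouncementWatcher;
import com.netflix.hollow.api.consumer.fs.HollowFilesystemBlobRetriever;
import com.netflix.hollow.api.producer.HollowProducer;
import com.netflix.hollow.api.producer.fs.HollowFilesystemAnnouncer;
import com.netflix.hollow.api.producer.fs.HollowFilesystemPublisher;

import java.nio.file.Path;
import java.nio.file.Paths;

public class HollowSketch {
    public static void main(String[] args) {
        Path blobDir = Paths.get("/tmp/hollow-blobs"); // shared blob store (a local dir here)

        // Producer side: the single processing node publishes an immutable snapshot per cycle.
        HollowProducer producer = HollowProducer
                .withPublisher(new HollowFilesystemPublisher(blobDir))
                .withAnnouncer(new HollowFilesystemAnnouncer(blobDir))
                .build();
        producer.runCycle(state -> state.add(new Result("key-1", "processed value")));

        // Consumer side: each frontend keeps a read-only, in-memory replica and
        // refreshes to the latest announced version when it becomes available.
        HollowConsumer consumer = HollowConsumer
                .withBlobRetriever(new HollowFilesystemBlobRetriever(blobDir))
                .withAnnouncementWatcher(new HollowFilesystemAnnouncementWatcher(blobDir))
                .build();
        consumer.triggerRefresh();
    }

    // Hypothetical POJO written into the Hollow dataset.
    static class Result {
        String key;
        String value;
        Result(String key, String value) { this.key = key; this.value = value; }
    }
}
```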
Have you thought about using a standard messaging solution like RabbitMQ? It is an open-source implementation of the AMQP protocol.
Your application looks more or less like a publish/subscribe system: the publisher node does the processing and puts messages (the processed data) into a queue on the servers, and subscribers can get messages from the server in various ways. AMQP decouples the producer and the consumer of messages and is very flexible in how you can combine the two sides.
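To make the shape of that concrete, here is a minimal sketch using a recent RabbitMQ Java client and a fanout exchange; the exchange name, host and payload are placeholders, and publisher and subscriber would of course run in separate processes in practice:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class PubSubSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // A fanout exchange copies every message to all bound queues,
            // i.e. every subscribed frontend sees every update.
            channel.exchangeDeclare("processed-data", "fanout");

            // Subscriber side: each frontend binds its own queue and consumes updates.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "processed-data", "");
            channel.basicConsume(queue, true,
                    (tag, delivery) ->
                            System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    tag -> { });

            // Publisher side: the processing node pushes the freshly computed result.
            byte[] payload = "processed value".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("processed-data", "", null, payload);

            Thread.sleep(1000); // give the demo consumer a moment before closing
        }
    }
}
```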
Source: https://stackoverflow.com/questions/3045164/choosing-a-distributed-shared-memory-solution