I'm new to Druid. I've already read "Druid vs Elasticsearch", but I still don't know what Druid is good at.
Below is my problem:
I have a Solr cluster with 70 nodes.
I have a very big table in Solr with 1 billion rows, and each row has 100 fields.
Users combine range queries over different fields (at least 20 combinations in a single query) to count the distinct number of customer IDs, but Solr's distinct-count algorithm is slow and uses a lot of memory, so if the query result exceeds 200 thousand, the Solr query node crashes.
Does Druid have better performance than Solr for distinct counts?
Druid is vastly different from search-specific databases like ES/Solr. It is a database designed for analytics, where you can do rollups, column filtering, probabilistic computations, etc.
Druid does count distinct through its use of HyperLogLog, which is a probabilistic data structure. So if you don't need 100% accuracy, you can definitely try Druid; I have seen drastic improvements in response times in one of my projects. But if you care about exact accuracy, then Druid might not be the best solution (even though it is quite possible to achieve in Druid as well, at the cost of performance and extra space) - see more here: https://groups.google.com/forum/#!topic/druid-development/AMSOVGx5PhQ
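To make the accuracy/memory tradeoff concrete, here is a minimal, illustrative HyperLogLog sketch in Python. This is not Druid's actual implementation; the register count (2^14), the SHA-1 hash, and the small-range correction are assumptions chosen for a readable sketch. It shows how a fixed-size array of registers can estimate a large distinct count within a small error:

```python
import hashlib
import math

def hll_count(values, p=14):
    """Approximate distinct count with a basic HyperLogLog sketch.

    Uses m = 2^p registers; expected relative error is roughly 1.04/sqrt(m)
    (about 0.8% for p=14), with only m bytes of register state.
    """
    m = 1 << p
    registers = [0] * m
    for v in values:
        # 64-bit hash of the value (SHA-1 truncated; any good hash works)
        h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
        idx = h >> (64 - p)                 # first p bits pick a register
        rest = h & ((1 << (64 - p)) - 1)    # remaining bits
        # rank = position of the leftmost 1-bit in the remaining bits
        rank = (64 - p) - rest.bit_length() + 1
        registers[idx] = max(registers[idx], rank)

    # Standard HLL estimator (bias-correction constant for large m)
    alpha = 0.7213 / (1 + 1.079 / m)
    est = alpha * m * m / sum(2.0 ** -r for r in registers)

    # Small-range correction: fall back to linear counting when many
    # registers are still empty
    zeros = registers.count(0)
    if est <= 2.5 * m and zeros:
        est = m * math.log(m / zeros)
    return int(est)

# 100,000 truly distinct values, estimated from ~16 KB of register state
print(hll_count(range(100_000)))
```

The estimate typically lands within about 1% of the true count, which is the kind of error Druid's approximate distinct count accepts in exchange for speed and bounded memory.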
ES typically needs the raw data because it's designed for search. That means the index is huge, and nested aggregations are expensive. (I know I'm skipping a lot of details here.)
Druid is designed for metric calculation over time-series data. It has a clear distinction between dimensions and metrics. Based on the dimension fields, the metric fields are pre-aggregated at ingestion time. This step can reduce the amount of stored data dramatically, depending on the cardinality of the dimension data. In other words, Druid works best when the dimensions are categorical values.
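As a toy illustration of that rollup step (the event fields and values here are hypothetical, not your schema), pre-aggregating at ingestion collapses all raw rows that share the same dimension values into one stored row:

```python
from collections import defaultdict

# Hypothetical raw events: (hour, country) are dimensions, clicks is a metric
raw_events = [
    ("2016-08-24T10", "US", 1),
    ("2016-08-24T10", "US", 3),
    ("2016-08-24T10", "DE", 2),
    ("2016-08-24T11", "US", 5),
]

# Roll up at "ingestion": one stored row per unique dimension combination,
# with the metric summed
rollup = defaultdict(int)
for hour, country, clicks in raw_events:
    rollup[(hour, country)] += clicks

print(len(raw_events), "raw rows ->", len(rollup), "stored rows")
```

The lower the dimension cardinality relative to the event volume, the bigger the reduction, which is why Druid favors categorical dimensions.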
You mentioned range queries. Range filters on metrics work great, but if you mean querying by numerical dimensions, that is still a work in progress in Druid.
As for the distinct count, both ES and Druid support HyperLogLog. In Druid, you have to specify the fields at ingestion time in order to apply HyperLogLog at query time. It's pretty fast and efficient.
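For a sense of what this looks like on the Druid side, here is a sketch of a native query that combines range (bound) filters with the query-time `cardinality` aggregator. The dataSource, dimension names, and interval are hypothetical placeholders; for a HyperLogLog metric pre-built at ingestion you would use the `hyperUnique` aggregator with its `fieldName` instead:

```python
import json

# Sketch of a Druid native query: count distinct customer IDs under
# a combination of range (bound) filters. All names are placeholders.
query = {
    "queryType": "timeseries",
    "dataSource": "customer_events",       # hypothetical datasource
    "granularity": "all",
    "intervals": ["2016-01-01/2016-09-01"],
    "filter": {
        "type": "and",
        "fields": [
            {"type": "bound", "dimension": "price", "lower": "10", "upper": "500"},
            {"type": "bound", "dimension": "age", "lower": "18", "upper": "65"},
        ],
    },
    "aggregations": [
        # Query-time HyperLogLog over a dimension
        {"type": "cardinality", "name": "distinct_customers",
         "fields": ["customer_id"]}
    ],
}

# This JSON would be POSTed to the broker's /druid/v2 endpoint
print(json.dumps(query, indent=2))
```

Each extra filter combination is just another entry in the `and` filter, so the 20-combination case stays a single query rather than 20 separate scans.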
Recent versions (6.x, AFAIK) of Elasticsearch support your use case, and you will get a result from all three (Druid, ES, Solr). But to answer your last question about performance: I feel Druid will be the most performant, with minimal resource requirements, for your use case.
Though ES supports analytics and aggregations, its primary design is based on free-text search requirements. Because ES does more than your requirement calls for, it will use extra resources and may not be the right fit unless you want to do more than just the distinct count.
Quoting from Druid's website, https://druid.apache.org/docs/latest/comparisons/druid-vs-elasticsearch.html:
Druid focuses on OLAP workflows. Druid is optimized for high performance (fast aggregation and ingestion) at low cost and supports a wide range of analytic operations.
Source: https://stackoverflow.com/questions/39119568/druid-vs-elasticsearch