I want to get the count of groups that satisfy a certain condition. In SQL terms, I want to do the following in Elasticsearch.
SELECT COUNT(*) FROM
(
SE
You can use one of the pipeline aggregations, namely the bucket selector aggregation. The query would look like this:
POST my_index/tdrs/_search
{
  "aggregations": {
    "reseller_sale_sum": {
      "terms": {
        "field": "senderResellerId",
        "order": {
          "sales": "desc"
        },
        "size": 5
      },
      "aggregations": {
        "sales": {
          "sum": {
            "field": "requestAmountValue"
          }
        },
        "max_sales": {
          "bucket_selector": {
            "buckets_path": {
              "var1": "sales"
            },
            "script": "params.var1 > 10000"
          }
        }
      }
    }
  },
  "size": 0
}
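For intuition, the pipeline above first groups documents by reseller, sums each group's sales, and only then filters the resulting buckets. A minimal Python sketch of that logic, using the same sample documents and the 10000 threshold from this example:

```python
from collections import defaultdict

# The same sample documents that are indexed in this example
docs = [
    {"senderResellerId": "ID_1", "requestAmountValue": 7000},
    {"senderResellerId": "ID_1", "requestAmountValue": 5000},
    {"senderResellerId": "ID_2", "requestAmountValue": 1000},
]

# terms aggregation on senderResellerId + sum sub-aggregation on requestAmountValue
sales = defaultdict(int)
for doc in docs:
    sales[doc["senderResellerId"]] += doc["requestAmountValue"]

# bucket_selector: keep only the buckets whose summed sales exceed 10000
selected = {reseller: total for reseller, total in sales.items() if total > 10000}
print(selected)  # {'ID_1': 12000}
```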
After putting the following documents in the index:
"hits": [
  {
    "_index": "my_index",
    "_type": "tdrs",
    "_id": "AV9Yh5F-dSw48Z0DWDys",
    "_score": 1,
    "_source": {
      "requestAmountValue": 7000,
      "senderResellerId": "ID_1"
    }
  },
  {
    "_index": "my_index",
    "_type": "tdrs",
    "_id": "AV9Yh684dSw48Z0DWDyt",
    "_score": 1,
    "_source": {
      "requestAmountValue": 5000,
      "senderResellerId": "ID_1"
    }
  },
  {
    "_index": "my_index",
    "_type": "tdrs",
    "_id": "AV9Yh8TBdSw48Z0DWDyu",
    "_score": 1,
    "_source": {
      "requestAmountValue": 1000,
      "senderResellerId": "ID_2"
    }
  }
]
The result of the query is:
"aggregations": {
  "reseller_sale_sum": {
    "doc_count_error_upper_bound": 0,
    "sum_other_doc_count": 0,
    "buckets": [
      {
        "key": "ID_1",
        "doc_count": 2,
        "sales": {
          "value": 12000
        }
      }
    ]
  }
}
I.e. only those senderResellerId buckets whose cumulative sales are > 10000 are returned.
To implement an equivalent of SELECT COUNT(*) FROM (... HAVING), one may use a combination of the bucket script aggregation with the sum bucket aggregation. Though there seems to be no direct way to count how many buckets bucket_selector actually selected, we may define a bucket_script that produces 0 or 1 depending on the condition, and a sum_bucket that sums these values:
POST my_index/tdrs/_search
{
  "aggregations": {
    "reseller_sale_sum": {
      "terms": {
        "field": "senderResellerId",
        "order": {
          "sales": "desc"
        }
      },
      "aggregations": {
        "sales": {
          "sum": {
            "field": "requestAmountValue"
          }
        },
        "max_sales": {
          "bucket_script": {
            "buckets_path": {
              "var1": "sales"
            },
            "script": "params.var1 > 10000 ? 1 : 0"
          }
        }
      }
    },
    "max_sales_stats": {
      "sum_bucket": {
        "buckets_path": "reseller_sale_sum>max_sales"
      }
    }
  },
  "size": 0
}
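Again for intuition, the bucket_script / sum_bucket combination computes a 0/1 indicator per bucket and then sums the indicators. Sketched in Python with the same sample data:

```python
from collections import defaultdict

docs = [
    {"senderResellerId": "ID_1", "requestAmountValue": 7000},
    {"senderResellerId": "ID_1", "requestAmountValue": 5000},
    {"senderResellerId": "ID_2", "requestAmountValue": 1000},
]

# terms + sum aggregations: total sales per reseller
sales = defaultdict(int)
for doc in docs:
    sales[doc["senderResellerId"]] += doc["requestAmountValue"]

# bucket_script: 1 if the bucket passes the condition, else 0
indicators = {reseller: (1 if total > 10000 else 0) for reseller, total in sales.items()}

# sum_bucket: the sum of the indicators is the number of matching buckets
count = sum(indicators.values())
print(count)  # 1
```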
The output will be:
"aggregations": {
  "reseller_sale_sum": {
    "doc_count_error_upper_bound": 0,
    "sum_other_doc_count": 0,
    "buckets": [
      ...
    ]
  },
  "max_sales_stats": {
    "value": 1
  }
}
The desired bucket count is located in max_sales_stats.value.
I have to point out two things:

1. Pipeline aggregations work on the outputs produced by other aggregations rather than on document sets, adding information to the output tree. This means that the bucket_selector aggregation is applied after, and on the result of, the terms aggregation on senderResellerId. For example, if there are more distinct senderResellerId values than the size of the terms aggregation allows, you will not get all the ids with sum(sales) > 10000, but only those that appear in the output of the terms aggregation. Consider using sorting and/or setting a sufficient size parameter.
2. This also applies to the second case, COUNT() (... HAVING), which will only count those buckets that are actually present in the output of the aggregation.
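To illustrate the caveat, here is a small Python sketch with hypothetical data: three resellers all exceed the threshold, but a terms aggregation with "size": 2 only hands the top two buckets to the pipeline aggregations, so the count comes out too low:

```python
from collections import defaultdict

# Hypothetical data: three resellers, all above the 10000 threshold
docs = [
    {"senderResellerId": "ID_A", "requestAmountValue": 20000},
    {"senderResellerId": "ID_B", "requestAmountValue": 15000},
    {"senderResellerId": "ID_C", "requestAmountValue": 12000},
]

sales = defaultdict(int)
for doc in docs:
    sales[doc["senderResellerId"]] += doc["requestAmountValue"]

# terms aggregation with "size": 2 keeps only the top 2 buckets,
# so the pipeline aggregations never see ID_C at all
top_buckets = dict(sorted(sales.items(), key=lambda kv: -kv[1])[:2])

count = sum(1 for total in top_buckets.values() if total > 10000)
print(count)  # 2, although 3 resellers actually exceed the threshold
```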
In case this query is too heavy or the number of buckets too big, consider denormalizing your data or storing this sum directly in the document, so that you can use a plain range query to achieve your goal.
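For example, if the per-reseller sum were precomputed and stored under a hypothetical field such as total_sales, the HAVING condition would become a plain filter, i.e. a range query like {"range": {"total_sales": {"gt": 10000}}} against per-reseller summary documents. In Python terms, the equivalent filter is simply:

```python
# Hypothetical per-reseller summary documents with a precomputed sum
summaries = [
    {"senderResellerId": "ID_1", "total_sales": 12000},
    {"senderResellerId": "ID_2", "total_sales": 1000},
]

# Equivalent of the range query {"range": {"total_sales": {"gt": 10000}}}
matching = [s for s in summaries if s["total_sales"] > 10000]
print(len(matching))  # 1
```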