I have a \"status\" collection like this strcture -
{
_id: ObjectId(\"545a0b63b03dbcd1238b4567\"),
status: 1004,
comment: \"Rem dolor ipsam place
I like the other answer here, mostly for its use of date math instead of the aggregation date operators, which, while helpful, can also be a little obscure.

The only thing I want to add is that you can also return a Date object from the aggregation framework with this approach, as opposed to a "numeric" timestamp as the result. It's just a little extra math on the same principles, using $add:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "$add": [
                { "$subtract": [
                    { "$subtract": [ "$current_date", new Date(0) ] },
                    { "$mod": [
                        { "$subtract": [ "$current_date", new Date(0) ] },
                        1000 * 60 * 15
                    ]}
                ]},
                new Date(0)
            ]
        },
        "count": { "$sum": 1 }
    }}
])
The Date(0) constructs in JavaScript here represent the same "epoch" date in a shorter form, since 0 milliseconds from the epoch is the epoch. The main point is that when the "addition" involves a BSON Date and a numeric value, the inverse of the described subtraction behaviour applies, and the end result is an actual Date.

All drivers will return the native Date type to their language with this approach.
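For illustration, a result document from this pipeline comes back with a real Date as the grouping key, something like the following (values made up here):

{ "_id" : ISODate("2014-11-05T11:30:00Z"), "count" : 3 }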
A somewhat neater form for MongoDB db.version() < 3.0:
db.collection.aggregate([
    { $match: { created_at: { $exists: 1 } } },
    { $group: {
        _id: { $add: [
            { $dayOfYear: "$created_at" },
            { $multiply: [ { $year: "$created_at" }, 1000 ] }
        ]},
        count: { $sum: 1 }
    }},
    { $sort: { _id: -1 } }
])
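Note the _id produced here is just a number packing the year and the day of year together, so it can be unpacked again in client code if needed. A quick sketch (the value is made up):

// e.g. _id = 2014309  ->  year 2014, day of year 309
var id = 2014309;
var year = Math.floor(id / 1000);   // 2014
var dayOfYear = id % 1000;          // 309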
There are a couple of ways to do this.
The first is with Date Aggregation Operators, which allow you to dissect the "date" values in documents. Specifically for "grouping" as the primary intent:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "year": { "$year": "$created_at" },
            "dayOfYear": { "$dayOfYear": "$created_at" },
            "hour": { "$hour": "$created_at" },
            "interval": {
                "$subtract": [
                    { "$minute": "$created_at" },
                    { "$mod": [ { "$minute": "$created_at" }, 15 ] }
                ]
            }
        },
        "count": { "$sum": 1 }
    }}
])
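The grouping key here is a compound document, so a result looks something like this (values are illustrative; "interval" is the 15-minute boundary the document falls into, e.g. a created_at minute of 34 lands in the 30 slot):

{ "_id" : { "year" : 2014, "dayOfYear" : 309, "hour" : 11, "interval" : 30 }, "count" : 5 }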
The second way is to use a little trick: when one date object is subtracted from another (or another direct math operation is applied), the result is a numeric value representing the milliseconds between the two. So subtracting the epoch date gives you the epoch-milliseconds representation, and then it's just date math for the interval:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "$subtract": [
                { "$subtract": [ "$created_at", new Date("1970-01-01") ] },
                { "$mod": [
                    { "$subtract": [ "$created_at", new Date("1970-01-01") ] },
                    1000 * 60 * 15
                ]}
            ]
        },
        "count": { "$sum": 1 }
    }}
])
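Since the _id from this form is just epoch milliseconds, turning it back into a date on the client is a one-liner, for example in the shell (the value shown is illustrative):

var bucketStart = new Date(1415187000000);   // ISODate("2014-11-05T11:30:00Z")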
So it depends on what kind of output format you want for the grouping interval. Both basically represent the same thing and carry sufficient data to reconstruct a "date" object in your code.

You can put anything else you want in the "grouping operator" portion after the grouping _id. I'm just using the basic "count" example here, in lieu of any real statement from you as to what you actually want to do.
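As a sketch of that, other accumulators can sit alongside the count in the same $group, using the fields from the question (the output names here are only for illustration):

db.collection.aggregate([
    { "$group": {
        "_id": { /* any of the interval expressions above */ },
        "count": { "$sum": 1 },
        "firstSeen": { "$min": "$created_at" },   // earliest created_at in each bucket
        "statuses": { "$push": "$status" }        // collect the status values per bucket
    }}
])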
There have been some additions to the Date Aggregation Operators since the original writing, but from MongoDB 4.0 there is actual "real casting of types", as opposed to the basic math tricks done here with BSON Date conversion.

For instance, we can use $toLong and $toDate as new helpers here:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "$toDate": {
                "$subtract": [
                    { "$toLong": "$created_at" },
                    { "$mod": [ { "$toLong": "$created_at" }, 1000 * 60 * 15 ] }
                ]
            }
        },
        "count": { "$sum": 1 }
    }}
])
That's a bit shorter, and it does not require defining an external BSON Date for the "epoch" value as a constant when building the pipeline, so it's pretty consistent for all language implementations.

Those are just two of the "helper" methods for type conversion, which all tie back to the $convert method: a "longer" form of the implementation that allows custom handling of null or errors in conversion.
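As a rough sketch of that longer form, the $toLong used above corresponds to a $convert expression along these lines, with the extra handling made explicit:

{ "$convert": {
    "input": "$created_at",
    "to": "long",
    "onError": null,   // value to use if the input cannot be converted
    "onNull": null     // value to use if the input is missing or null
}}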
It's even possible with such casting to get the Date
information from the ObjectId
of the primary key, as this would be a reliable source of "creation" date:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "$toDate": {
                "$subtract": [
                    { "$toLong": { "$toDate": "$_id" } },
                    { "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
                ]
            }
        },
        "count": { "$sum": 1 }
    }}
])
So "casting types" with this sort of conversion can be pretty powerful tool.
Warning -
ObjectId
values are limited to precision to the second only for the internal time value that makes up part of their data allowing the $toDate conversion. The actual inserted "time" is most probably dependent on the driver in use. Where precision is required, it's still recommended to use a discrete BSON Date field instead of relying onObjectId
values.
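For reference, that embedded time can be inspected directly in the shell, and you can see it only resolves to the second. Using the _id from the question:

ObjectId("545a0b63b03dbcd1238b4567").getTimestamp()
// ISODate("2014-11-05T11:34:59Z")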
@Neil Lunn's answer at https://stackoverflow.com/a/26814496/8474325 for MongoDB 4.x upwards is fantastic. But there is a small mistake in the code where he uses ObjectId for the aggregation. The line { "$toDate": "_id" } has to be changed to { "$toDate": "$_id" } for the code to work.

Here's the corrected code:
db.collection.aggregate([
    { "$group": {
        "_id": {
            "$toDate": {
                "$subtract": [
                    { "$toLong": { "$toDate": "$_id" } },
                    { "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
                ]
            }
        },
        "count": { "$sum": 1 }
    }}
])
Another useful way:
db.collection.aggregate([
    { $group: {
        _id: {
            overallTime: { $dateToString: { format: "%Y-%m-%dT%H", date: "$created_at" } },
            interval: { $trunc: { $divide: [ { $minute: "$created_at" }, 15 ] } }
        }
    }}
])
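With this one, the key comes out as an hour string plus a quarter-hour index from 0 to 3, so a result looks roughly like this (illustrative values):

{ "_id" : { "overallTime" : "2014-11-05T11", "interval" : 2 } }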
And even easier for minute, hour, and day intervals:

var format = "%Y-%m-%dT%H:%M"; // 1 min
var format = "%Y-%m-%dT%H";    // 1 hour
var format = "%Y-%m-%d";       // 1 day

db.collection.aggregate([
    { $group: {
        _id: { $dateToString: { format: format, date: "$created_at" } }
    }}
])
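Depending on which format line you keep, the grouping key is just the truncated date string, for example (illustrative values):

{ "_id" : "2014-11-05T11:34" }   // 1 min
{ "_id" : "2014-11-05T11" }      // 1 hour
{ "_id" : "2014-11-05" }         // 1 day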