I have a logfile of timestamped values (concurrent users) for different "zones" of a chatroom webapp, in the format "Timestamp; Zone; Value". For each zone there is one value per timestamp. I want to compute the maximum value per zone and day. Would that take two map/reduce passes (first splitting the data by zone, then by day), or can it be done in one?
You can do this with just one MR job using secondary sorting. Here are the steps:

1. Define the composite key as the concatenation of zone, yyyy-mm-dd, and value, i.e. zone:yyyy-mm-dd:value. As I will explain below, you don't even need to emit any value from the mapper; NullWritable is good enough for the value.

2. Implement a key comparator such that the zone:yyyy-mm-dd part of the key is ordered ascending and the value part descending. This ensures that, among all keys for a given zone:yyyy-mm-dd, the first key in the group carries the highest value.

3. Define the partitioner and grouping comparator of the composite key based on the zone and day part only, i.e. zone:yyyy-mm-dd.

4. In your reducer input, the first key of each key group will therefore contain the zone, the day, and the maximum value for that zone/day combination. The value part of the reducer input will be a list of NullWritables, which can be ignored.
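Here is a minimal sketch of those pieces using Hadoop's `org.apache.hadoop.mapreduce` API. The class names (`MaxPerZoneDay`, `ZoneDayKey`, etc.) are illustrative, and the mapper assumes the "Timestamp; Zone; Value" line format from the question with ISO-style timestamps whose first ten characters are yyyy-mm-dd:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxPerZoneDay {

    // Composite key: zone:yyyy-mm-dd for partitioning/grouping, value for the secondary sort.
    public static class ZoneDayKey implements WritableComparable<ZoneDayKey> {
        private String zoneDay;   // "zone:yyyy-mm-dd"
        private long value;       // concurrent users

        public ZoneDayKey() {}
        public ZoneDayKey(String zoneDay, long value) {
            this.zoneDay = zoneDay;
            this.value = value;
        }
        public String getZoneDay() { return zoneDay; }
        public long getValue() { return value; }

        @Override public void write(DataOutput out) throws IOException {
            out.writeUTF(zoneDay);
            out.writeLong(value);
        }
        @Override public void readFields(DataInput in) throws IOException {
            zoneDay = in.readUTF();
            value = in.readLong();
        }
        // zone:day ascending, value descending, so the first key of each
        // group carries that day's maximum.
        @Override public int compareTo(ZoneDayKey other) {
            int cmp = zoneDay.compareTo(other.zoneDay);
            return cmp != 0 ? cmp : -Long.compare(value, other.value);
        }
    }

    // Mapper: parse "Timestamp; Zone; Value" and emit only the composite key.
    public static class MaxMapper extends Mapper<LongWritable, Text, ZoneDayKey, NullWritable> {
        @Override protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split(";");
            if (parts.length != 3) return;                  // skip malformed lines
            String day = parts[0].trim().substring(0, 10);  // assumes "yyyy-mm-dd..." timestamps
            String zone = parts[1].trim();
            long value = Long.parseLong(parts[2].trim());
            ctx.write(new ZoneDayKey(zone + ":" + day, value), NullWritable.get());
        }
    }

    // Partitioner and grouping comparator both look only at zone:yyyy-mm-dd,
    // so all readings of one zone/day end up in a single reduce call.
    public static class ZoneDayPartitioner extends Partitioner<ZoneDayKey, NullWritable> {
        @Override public int getPartition(ZoneDayKey key, NullWritable value, int numPartitions) {
            return (key.getZoneDay().hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    public static class ZoneDayGroupingComparator extends WritableComparator {
        public ZoneDayGroupingComparator() { super(ZoneDayKey.class, true); }
        @Override public int compare(WritableComparable a, WritableComparable b) {
            return ((ZoneDayKey) a).getZoneDay().compareTo(((ZoneDayKey) b).getZoneDay());
        }
    }

    // Reducer: the first key of each group already holds the maximum;
    // the NullWritable values are ignored.
    public static class MaxReducer extends Reducer<ZoneDayKey, NullWritable, Text, LongWritable> {
        @Override protected void reduce(ZoneDayKey key, Iterable<NullWritable> values, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(key.getZoneDay()), new LongWritable(key.getValue()));
        }
    }
}
```

In the driver you would wire it together with job.setMapOutputKeyClass(ZoneDayKey.class), job.setMapOutputValueClass(NullWritable.class), job.setPartitionerClass(ZoneDayPartitioner.class) and job.setGroupingComparatorClass(ZoneDayGroupingComparator.class); the descending sort on the value part comes from the key's compareTo.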
I don't know that you'd need two map/reduce steps - you could certainly do it with one, it's just that your results would be lists instead of single entries. Otherwise, yes, you'd split it up by zones, then split it by date.
I'd probably split it up by zone, then have each zone return a list of the highest elements by day, since the reduction would be really easy at that point. To get a benefit out of another map/reduce step you'd have to have a very large dataset and a lot of machines to split across - at which point I'd probably do a reduction on the entire key.
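A minimal sketch of that per-zone variant might look like the following (class names and the "Timestamp; Zone; Value" parsing are assumptions); the reducer simply keeps a running maximum per day in memory for its zone:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxPerDayByZone {

    // Mapper: the zone is the key, "day<TAB>value" is the value.
    public static class ZoneMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split(";");
            if (parts.length != 3) return;                  // skip malformed lines
            String day = parts[0].trim().substring(0, 10);  // assumes "yyyy-mm-dd..." timestamps
            ctx.write(new Text(parts[1].trim()), new Text(day + "\t" + parts[2].trim()));
        }
    }

    // Reducer: one call per zone; track the maximum per day and emit one line per day.
    public static class DailyMaxReducer extends Reducer<Text, Text, Text, LongWritable> {
        @Override protected void reduce(Text zone, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            Map<String, Long> maxPerDay = new HashMap<>();
            for (Text v : values) {
                String[] dayValue = v.toString().split("\t");
                long value = Long.parseLong(dayValue[1]);
                maxPerDay.merge(dayValue[0], value, Math::max);
            }
            for (Map.Entry<String, Long> e : maxPerDay.entrySet()) {
                ctx.write(new Text(zone + ":" + e.getKey()), new LongWritable(e.getValue()));
            }
        }
    }
}
```

The trade-off is that all of a zone's records pass through a single reduce call, though the in-memory map only grows with the number of distinct days.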
Secondary sort in MapReduce is solved with the composite key pattern: you create a key like (ZoneId, Timestamp), and in the reducer you iterate first over zones and then over timestamps, so you can easily evaluate the per-day maximum.
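As a rough sketch, such a composite key could look like this (the class name and field types are assumptions, with the timestamp as epoch milliseconds):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Composite key for the (ZoneId, Timestamp) secondary-sort pattern.
public class ZoneTimestampKey implements WritableComparable<ZoneTimestampKey> {
    private String zoneId;
    private long timestamp;   // e.g. epoch millis

    public ZoneTimestampKey() {}
    public ZoneTimestampKey(String zoneId, long timestamp) {
        this.zoneId = zoneId;
        this.timestamp = timestamp;
    }
    public String getZoneId() { return zoneId; }
    public long getTimestamp() { return timestamp; }

    @Override public void write(DataOutput out) throws IOException {
        out.writeUTF(zoneId);
        out.writeLong(timestamp);
    }
    @Override public void readFields(DataInput in) throws IOException {
        zoneId = in.readUTF();
        timestamp = in.readLong();
    }
    // Sort by zone first, then by timestamp, so each zone's readings arrive
    // at the reducer in chronological order.
    @Override public int compareTo(ZoneTimestampKey other) {
        int cmp = zoneId.compareTo(other.zoneId);
        return cmp != 0 ? cmp : Long.compare(timestamp, other.timestamp);
    }
}
```

As in the sketch further up, the partitioner and grouping comparator would look at the zone id only, so each reduce call walks one zone's readings in chronological order and can track the running maximum per day.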