In terms of performance, you can do this same kind of operation using the data.table package, which has built-in aggregation and is very fast thanks to indexing and a C-based implementation. For instance, given that df already exists from your example:
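(If df is no longer in your workspace, a stand-in with the same shape can be built first; the values below are random placeholders, not the question's actual data:)
set.seed(1)  # placeholder data with the same columns as the question's df
df <- data.frame(group1 = rep(1:3, each = 4),
                 group2 = sample(c("A", "B", "C"), 12, replace = TRUE),
                 values = rnorm(12))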
library("data.table")
dt <- as.data.table(df)
setkey(dt, group1)
dt <- dt[, list(group2, values, meanValue = mean(values)), by = group1]
dt
      group1 group2      values   meanValue
 [1,]      1      A  0.82122120  0.18810771
 [2,]      1      C  0.78213630  0.18810771
 [3,]      1      C  0.61982575  0.18810771
 [4,]      1      A -1.47075238  0.18810771
 [5,]      2      B  0.59390132  0.03354688
 [6,]      2      A  0.07456498  0.03354688
 [7,]      2      B -0.05612874  0.03354688
 [8,]      2      A -0.47815006  0.03354688
 [9,]      3      B  0.91897737 -0.20205707
[10,]      3      C -1.98935170 -0.20205707
[11,]      3      B -0.15579551 -0.20205707
[12,]      3      A  0.41794156 -0.20205707
I have not benchmarked it, but in my experience it is a lot faster.
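As a rough illustration (not a rigorous benchmark; the data size and the base-R aggregate() comparison are my own choices), something like this shows the gap on a larger table:
n <- 1e6
big <- data.frame(group1 = sample(1:1000, n, replace = TRUE),
                  values = rnorm(n))
bigdt <- as.data.table(big)
system.time(aggregate(values ~ group1, data = big, FUN = mean))  # base R
system.time(bigdt[, mean(values), by = group1])                  # data.table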
If you decide to go down the data.table road, which I think is worth exploring if you work with large data sets, you really need to read the docs, because there are some differences from data.frame that can bite you if you are unaware of them. Notably, though, a data.table generally works with any function expecting a data.frame, since it claims to be one (data.table inherits from data.frame).
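One such difference, as a small sketch: inside a data.table's [ ], column names are evaluated as variables within the table, which is convenient but not what data.frame code expects:
df[df$group1 == 1, ]  # data.frame: columns referenced via df$
dt[group1 == 1]       # data.table: group1 is looked up inside dt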
[ Feb 2011 ]
[ Aug 2012 ] Update from Matthew:
New in v1.8.2, released to CRAN in July 2012, is := by group. This is very similar to the answer above, but it adds the new column by reference to dt, so there is no copy and no need for a merge step or for relisting existing columns to return alongside the aggregate. There is no need to setkey first, and it copes with non-contiguous groups (i.e. groups that aren't grouped together). This is significantly faster for large datasets, and has a simple and short syntax:
dt <- as.data.table(df)
dt[, meanValue := mean(values), by = group1]
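Since := modifies dt by reference, the original row order is preserved and no assignment back is needed; the column can be dropped the same way if it is no longer wanted:
dt[, meanValue := NULL]  # removes the column, again by reference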