How could I extract the coefficients (b0 and b1), with their respective standard errors, for each experimental unit (plot) in a linear mixed model such as this one:
Better fits for a linear model
With this same dataset (df) and the fitted model (fitL1), how could I get a data frame like this one...
plot      b0   b0_se    b1  b1_se
   1 2898.69   53.85  -7.5    4.3
 ...     ...     ...   ...    ...
The first comment is that this is actually a non-trivial theoretical question: there is a rather long thread on r-sig-mixed-models that goes into some of the technical details; you should definitely have a look, even though it gets a bit scary. The basic issue is that the estimated coefficient values for each group are the sum of the fixed-effect parameter and the BLUP/conditional mode for that group; these are different classes of objects (one is a parameter, the other is the conditional mean of a random variable), which creates some technical difficulties.
The second point is that (unfortunately) I don't know of an easy way to do this in lme, so my answer uses lmer (from the lme4 package).
If you are comfortable doing the easiest thing and ignoring the (possibly ill-defined) covariance between the fixed-effect parameters and the BLUPs, you can use the code below.
Two alternatives would be (1) to fit your model with a Bayesian hierarchical approach (e.g. the MCMCglmm package) and compute the standard deviations of the posterior predictions for each level, or (2) to use parametric bootstrapping to compute the BLUPs/conditional modes and then take the standard deviations of the bootstrap distributions (a rough bootMer sketch follows the code and output below).
Please remember that as usual this advice comes with no warranty.
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
## per-subject coefficients (fixed effects + conditional modes)
cc <- coef(fm1)$Subject
## variances of the fixed-effect estimates
fixed.vars <- diag(vcov(fm1))
## extract variances of the conditional modes
r1 <- ranef(fm1, condVar = TRUE)
cmode.vars <- t(apply(cv <- attr(r1[[1]], "postVar"), 3, diag))
## add the two variance components (ignoring their covariance) and take square roots
seVals <- sqrt(sweep(cmode.vars, 2, fixed.vars, "+"))
res <- cbind(cc, seVals)
## reorder columns and rename: intercept, its SE, slope, its SE
res2 <- setNames(res[, c(1, 3, 2, 4)],
                 c("int", "int_se", "slope", "slope_se"))
##          int   int_se     slope slope_se
## 308 253.6637 13.86649 19.666258   2.7752
## 309 211.0065 13.86649  1.847583   2.7752
## 310 212.4449 13.86649  5.018406   2.7752
## 330 275.0956 13.86649  5.652955   2.7752
## 331 273.6653 13.86649  7.397391   2.7752
## 332 260.4446 13.86649 10.195115   2.7752
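For alternative (2) above, here is a rough, untested sketch of how one might use lme4's bootMer() for this; the number of simulations is arbitrary, and the default resampling settings are an assumption rather than a recommendation:
coefFun <- function(fit) unlist(coef(fit)$Subject)   ## per-subject intercepts and slopes as a vector
bb <- bootMer(fm1, coefFun, nsim = 200, type = "parametric")
## bootstrap standard error of each per-subject coefficient, reshaped to
## match coef(fm1)$Subject (unlist/matrix both go column by column)
bootSE <- matrix(apply(bb$t, 2, sd), ncol = 2,
                 dimnames = dimnames(coef(fm1)$Subject))
Whether this resampling scheme matches what you mean by "the bootstrap distribution of the BLUPs" is exactly the kind of subtlety discussed in the r-sig-mixed-models thread.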
To get you partway there using nlme...
You can pull the components of summary() using:
summary(fitL1)$tTable[,1] #fixed-effect parameter estimates
summary(fitL1)$tTable[,2] #fixed-effect parameter standard errors
etc.
You can further subset those by rows:
summary(fitL1)$tTable[1,1] #the first fixed-effect parameter estimate
summary(fitL1)$tTable[1,2] #the first fixed-effect parameter standard error
These let you extract individual parameter estimates or standard errors and combine them into a data frame, for example:
df <- data.frame(Estimate = summary(fitL1)$tTable[1, 1],
                 SE       = summary(fitL1)$tTable[1, 2])
df
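If you want all of the fixed-effect estimates and standard errors at once rather than one at a time, the same tTable columns can be reused (a small sketch, assuming the usual nlme tTable layout where column 1 is the estimate and column 2 its standard error):
tt <- summary(fitL1)$tTable
data.frame(Estimate = tt[, 1], SE = tt[, 2])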
To adjust these parameters for each plot (the random effect, I presume), you can pull the random coefficients with:
fitL1$coefficients$random
and add them to the parameter estimates (b0 (intercept), b1, etc.); a minimal sketch of that addition follows below. However, I am not sure how the standard errors should be adjusted for each plot.
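A sketch of that addition using ranef() (the accessor form of fitL1$coefficients$random), under the assumption that fitL1 has a random intercept and slope for plot so that the random-effect columns line up with the fixed effects:
fe <- fixef(fitL1)                 ## fixed-effect estimates (b0, b1)
re <- ranef(fitL1)                 ## per-plot deviations from the fixed effects
sweep(as.matrix(re), 2, fe, "+")   ## per-plot b0 and b1
Equivalently, coef(fitL1) returns these combined (fixed + random) coefficients directly; this covers the point estimates only, not the per-plot standard errors.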
Source: https://stackoverflow.com/questions/26198958/extracting-coefficients-and-their-standard-error-from-lme