Question
Using lm, I would like to fit the model: y = b0 + b1*x1 + b2*x2 + b1*b2*x1*x2
My question is: how can I specify that the coefficient of the interaction should equal the product of the coefficients of the main effects?
I've seen that you can fix a coefficient at a specific value using offset() and I(), but I don't know how to specify a relationship between coefficients.
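For example, offset() can fix a single coefficient at a known constant (a minimal sketch, assuming we wanted to fix the slope of x1 at 0.5):
lm(y ~ x2 + offset(0.5 * x1))  # b1 is fixed at 0.5; only b0 and b2 are estimated
But that pins a coefficient to a constant, not to a function of the other coefficients.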
Here is a simple simulated dataset:
n <- 50 # Sample size
x1 <- rnorm(n, 1:n, 0.5) # Independent variable 1
x2 <- rnorm(n, 1:n, 0.5) # Independent variable 2
b0 <- 1
b1 <- 0.5
b2 <- 0.2
y <- b0 + b1*x1 + b2*x2 + b1*b2*x1*x2 + rnorm(n,0,0.1)
To fit Model 1: y = b0 + b1*x1 + b2*x2 + b3*x1*x2, I would use:
summary(lm(y ~ x1 + x2 + x1:x2))
But how do I fit Model 2: y = b0 + b1*x1 + b2*x2 + b1*b2*x1*x2?
One of the main differences between the two models is the number of parameters to estimate. In Model 1, we estimate four parameters: b0 (intercept), b1 (slope of variable 1), b2 (slope of variable 2), and b3 (slope of the interaction between variables 1 and 2). In Model 2, we estimate only three parameters: b0 (intercept), b1 (slope of variable 1, which also enters the interaction slope), and b2 (slope of variable 2, which also enters the interaction slope).
The reason I want to do this is that, when investigating whether there is a significant interaction between x1 and x2, Model 2 (y = b0 + b1*x1 + b2*x2 + b1*b2*x1*x2) can be a better null model than y = b0 + b1*x1 + b2*x2.
Many thanks!
Marie
Answer 1:
Because of the constraint that you impose on the coefficients, the model you specify is not a linear model, so lm cannot be used to fit it. You would need to use non-linear regression, such as nls.
> summary(nls(y ~ b0 + b1*x1 + b2*x2 + b1*b2*x1*x2, start=list(b0=0, b1=1, b2=1)))
Formula: y ~ b0 + b1 * x1 + b2 * x2 + b1 * b2 * x1 * x2
Parameters:
Estimate Std. Error t value Pr(>|t|)
b0 0.987203 0.049713 19.86 <2e-16 ***
b1 0.494438 0.007803 63.37 <2e-16 ***
b2 0.202396 0.003359 60.25 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1121 on 47 degrees of freedom
Number of iterations to convergence: 5
Achieved convergence tolerance: 2.545e-06
You can really see that the model is non-linear when you rewrite it as:
> summary(nls(y ~ b0+(1+b1*x1)*(1+b2*x2)-1, start=list(b0=0, b1=1, b2=1)))
Formula: y ~ b0 + (1 + b1 * x1) * (1 + b2 * x2) - 1
Parameters:
Estimate Std. Error t value Pr(>|t|)
b0 0.987203 0.049713 19.86 <2e-16 ***
b1 0.494438 0.007803 63.37 <2e-16 ***
b2 0.202396 0.003359 60.25 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1121 on 47 degrees of freedom
Number of iterations to convergence: 5
Achieved convergence tolerance: 2.25e-06
Answer 2:
Brian provides a way to fit the constrained model you specify, but if you're interested in whether the unconstrained model fits better than your constrained model, you can use the delta method to test that hypothesis.
# Let's make some fake data where the constrained model is true
n <- 100
b0 <- 2
b1 <- .2
b2 <- -1.3
b3 <- b1 * b2
sigma <- 1
x1 <- rnorm(n)
# make x1 and x2 correlated for giggles
x2 <- x1 + rnorm(n)
# Generate data according to the model
y <- b0 + b1*x1 + b2*x2 + b3*x1*x2 + rnorm(n, 0, sigma)
# Fit full model y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + error
o <- lm(y ~ x1 + x2 + x1:x2)
# If we want to do a hypothesis test of Ho: b3 = b1*b2
# this is the same as Ho: b3 - b1*b2 = 0
library(msm)
# Get estimate of the difference specified in the null
est <- unname(coef(o)["x1:x2"] - coef(o)["x1"] * coef(o)["x2"])
# Use the delta method to get a standard error for
# this difference
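# (msm's deltamethod refers to the parameters by position: x1 = intercept,
# x2 = slope of x1, x3 = slope of x2, x4 = interaction slope)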
standerr <- deltamethod(~ x4 - x3*x2, coef(o), vcov(o))
# Calculate a test statistic. We're relying on asymptotic
# arguments here so hopefully we have a decent sample size
z <- est/standerr
# Calculate p-value
pval <- 2 * pnorm(-abs(z))
pval
I explain what the delta method is used for and more on how to use it in R in this blog post.
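For intuition, here is a minimal sketch of what deltamethod() computes under the hood (assuming coef(o) is ordered as (Intercept), x1, x2, x1:x2):
b <- coef(o)
V <- vcov(o)
# Gradient of g(b) = b4 - b2*b3, evaluated at the estimates: (0, -b3, -b2, 1)
grad <- c(0, -b[3], -b[2], 1)
sqrt(drop(t(grad) %*% V %*% grad))  # should match standerr above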
Expanding on Brian's answer, you could alternatively do this by comparing the full model to the constrained model. However, you have to use nls to fit the full model as well, so that the two models can easily be compared.
o2 <- nls(y ~ b0 + b1*x1 + b2*x2 + b1*b2*x1*x2, start=list(b0=0, b1=1, b2=1))
o3 <- nls(y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2, start = list(b0 = 0, b1 = 1, b2 = 1, b3 = 1))
anova(o2, o3)
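As a supplementary check (not in the original answer), the two nls fits can also be compared on information criteria:
AIC(o2, o3)  # the constrained fit will typically have the lower AIC when the constraint actually holds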
Answer 3:
There's no way to do what you're asking for in lm, and there's no reason for it to be able to do it. You run lm to get estimates of your coefficients. If you don't want to estimate a coefficient, then don't include the predictor in the model. You can use coef to extract the coefficients you want and multiply them out afterwards.
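A minimal sketch of that approach (fitting the additive model, then forming the product of the estimated slopes afterwards):
o_add <- lm(y ~ x1 + x2)      # additive model, no interaction term
b1_hat <- coef(o_add)["x1"]   # extract the estimated main-effect slopes
b2_hat <- coef(o_add)["x2"]
unname(b1_hat * b2_hat)       # implied interaction slope under the constraint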
Note that leaving the interaction out is a different model and will produce different b1 and b2 estimates. You could alternatively leave I(x1 * x2) in and simply not use its coefficient.
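For illustration (assuming x1 and x2 are numeric, so I(x1 * x2) is equivalent to x1:x2 here):
o_int <- lm(y ~ x1 + x2 + I(x1 * x2))  # keep the interaction in the fit
coef(o_int)[c("x1", "x2")]             # use the main-effect estimates, ignore the rest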
As for why you want to do this: there's no good a priori justification that your constrained model actually fits better than the simple additive model. Having more free parameters necessarily means a model fits at least as well, but you haven't added a free parameter; you've added a constraint that, in the real world, could make the model fit worse. In that case, would you still consider it a better "baseline" for comparison with the model that includes the interaction?
Source: https://stackoverflow.com/questions/19316979/how-can-i-specify-a-relationship-between-parameter-estimates-in-lm