Question
I have data and I expect several linear correlations of the form
y_i = a_i + b_i * t_i, i = 1 .. N,
where N is a priori unknown. The short version of the question is: given a fit,
- how can I extract N?
- how can I extract the equations?
In the reproducible example below, I have data (t, y) with corresponding parameters p1 (levels p1_1, p1_2) and p2 (levels p2_1, p2_2, p2_3). Thus the data looks like (t, y, p1, p2), which allows at most 2*3 = 6 different best-fit lines, and the linear fit then has at most 2*2*3 = 12 non-zero coefficients.
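As a sanity check on that count, the full interaction design matrix has exactly 2*2*3 = 12 columns. A minimal sketch (to be run after building the mydata frame from the reproducible example below):
X <- model.matrix(y ~ t * p1 * p2, data = mydata)
ncol(X)     # 12 = (1 + 1) * (1 + 1) * (1 + 2): {intercept, t} x p1 dummies x p2 dummies
colnames(X) # term names match the coefficient listings below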
I run into the following problems: Assume I have the three equations
y1 = 5 + 3*t (for p1=p1_1, p2=p2_1)
y2 = 3 + t (for p1=p1_2, p2=p2_2)
y3 = 1 - t (for p1=p1_2, p2=p2_3)
Running cv.glmnet(y ~ t * p1 * p2, ...) yields
(Intercept) 5
t 3 => y1 = 5 + 3t
p1p1_2 -2 => y2 = 3 + 3t?
p2p2_2 .
p2p2_3 -2 => y3 = 1 + 3t?
t:p1p1_2 -2 => y4 = 3 + t (or y4 = 1 + t?)
t:p2p2_2 .
t:p2p2_3 -2 => y5 = 1 - t
p1p1_2:p2p2_2 .
p1p1_2:p2p2_3 -0.1 => y6 = 0.9 - t?
t:p1p1_2:p2p2_2 .
t:p1p1_2:p2p2_3 .
Desired result: the program should suggest 4 equations (y1, the corrected y4, y5, and y6); hopefully there is a good reason (which one?) to ignore y6.
Running lm(y ~ t * p1 * p2) yields
(Intercept) 5
t 3 => y1 = 5 + 3t
p1p1_2 -4 => y2 = 1 + 3t?
p2p2_2 2 => y3 = 3 + 3t
p2p2_3 .
t:p1p1_2 -4 => y5 = 1 - t (or y4 = 3 - t?)
t:p2p2_2 2 => y6 = 3 + t?
t:p2p2_3 .
p1p1_2:p2p2_2 .
p1p1_2:p2p2_3 .
t:p1p1_2:p2p2_2 .
t:p1p1_2:p2p2_3 .
Desired result: the program should suggest 3 equations (y1, y3, and y6).
Am I overlooking something obvious?
Reproducible example
The third parameter column, p3, is a dummy factor containing noise. For simplicity, this column is not considered in the fits.
# Create testdata
sigma <- 0.5
t <- seq(0, 10, length.out = 1000) # large pool of t values to sample from
# Create 3 linear equations of the form y_i = a*t_i + b
a <- c(3, 1, -1) # slope
b <- c(5, 3, 1) # offset
# create t_i, y_ti (theory) and y_i (including noise)
d <- list()
y <- list()
y_t <- list()
for (i in 1:3) {
  set.seed(33*i)
  d[[i]] <- sort(sample(t, 50, replace = FALSE))
  set.seed(33*i)
  noise <- rnorm(length(d[[i]]), 0, sigma) # one noise value per sampled point
  y[[i]] <- a[i]*d[[i]] + b[i] + noise
  y_t[[i]] <- a[i]*d[[i]] + b[i]
}
# Final data set
df1 <- data.frame(t=d[[1]], y=y[[1]], p1=rep("p1_1"), p2=rep("p2_1"),
                  p3=sample(c("p3_1", "p3_2", "p3_3"), length(d[[1]]), replace = TRUE))
df2 <- data.frame(t=d[[2]], y=y[[2]], p1=rep("p1_2"), p2=rep("p2_2"),
                  p3=sample(c("p3_1", "p3_2", "p3_3"), length(d[[2]]), replace = TRUE))
df3 <- data.frame(t=d[[3]], y=y[[3]], p1=rep("p1_2"), p2=rep("p2_3"),
                  p3=sample(c("p3_1", "p3_2", "p3_3"), length(d[[3]]), replace = TRUE))
mydata <- rbind(df1, df2, df3)
mydata$p1 <- factor(mydata$p1)
mydata$p2 <- factor(mydata$p2)
mydata$p3 <- factor(mydata$p3)
mydata <- mydata[sample(nrow(mydata)), ]
# What the raw data looks like:
plot(x = mydata$t, y = mydata$y)
cols <- rainbow(length(levels(mydata$p1))*length(levels(mydata$p2))*length(levels(mydata$p3)))
rm(.Random.seed, envir = .GlobalEnv) # drop the fixed seed so the shuffle below is random
cols <- sample(cols) # most likely similar colors are not next to each other ;-)
# Fit using lm disabled - just uncomment and comment the part below
# fit <- lm(y ~ t * p1 * p2, data = mydata)
# coef <- as.matrix(fit$coefficients)
# mydata$pred <- predict(fit)
# Fit using glmnet
set.seed(42)
fit_type <- c("lambda.min", "lambda.1se")[1]
x <- model.matrix(y ~ t * p1 * p2, data = mydata)[,-1]
fit <- glmnet::cv.glmnet(x, mydata$y, intercept = TRUE, nfolds = 10, alpha = 1)
coef <- coef(fit, s = fit_type) # coefficients at the chosen lambda; coef() dispatches to coef.cv.glmnet
mydata$pred <- predict(fit, newx = x, s = fit_type)
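# Optional check (a sketch based on the fit above): list only the non-zero
# coefficients -- these are the terms that enter the reconstructed equations.
cf <- as.matrix(coef(fit, s = fit_type))
print(cf[cf[, 1] != 0, , drop = FALSE])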
# plots
plot(d[[1]], y_t[[1]], type = "l", lty = 3, col = "black", main = "Raw data",
     xlim = c(0, 10), ylim = c(min(mydata$y), max(mydata$y)), xlab = "t", ylab = "y")
lines(d[[2]], y_t[[2]], col = "black", lty = 3)
lines(d[[3]], y_t[[3]], col = "black", lty = 3)
# The following for loops are hard-coded for now. Eventually this should be automated
# using the fit output (and the knowledge of how to extract N and the corresponding lines).
pn <- 0
for (p1 in 1:length(levels(mydata$p1))) {
  for (p2 in 1:length(levels(mydata$p2))) {
    pn <- pn + 1
    tmp <- mydata[mydata$p1 == levels(mydata$p1)[p1] & mydata$p2 == levels(mydata$p2)[p2], ]
    points(x = tmp$t, y = tmp$y, col = cols[pn])             # original data
    points(x = tmp$t, y = tmp$pred, col = cols[pn], pch = 3) # estimated data from predict
    if (length(tmp$pred) > 0) {
      abline(lm(tmp$pred ~ tmp$t), col = cols[pn])
    }
  }
}
Related posts:
- linear regression based on subgroups: shows how to use multilevel analysis. For me it still does not explain how to obtain the best-fit lines. The ggplot2 plot displays 6 of them, but to me this is a mystery. Please note that I use a different set of test data which is much easier to interpret (lines well separated, less noise, integer a and b).
- different levels with different colors: explains how to display the lines if the number of lines is known and all levels are relevant.
Answer 1:
I think you are misinterpreting the regression results. If an equation contains the terms p1_m and p2_n, then it must also contain the interactions t:p1_m and t:p2_n; you cannot include one without the other. In the sample data there are three (p1, p2) level combinations:
> unique(mydata[, 3:4])
#       p1   p2
# 96  p1_2 p2_2
# 1   p1_1 p2_1
# 135 p1_2 p2_3
Looking at the lm results, we reconstruct the equations as:
y = 5 + 3t + p1p1_2 + (t:p1p1_2)*t + p2p2_2 + (t:p2p2_2)*t = 3 + t
y = 5 + 3t + p1p1_1 + (t:p1p1_1)*t + p2p2_1 + (t:p2p2_1)*t = 5 + 3t
y = 5 + 3t + p1p1_2 + (t:p1p1_2)*t + p2p2_3 + (t:p2p2_3)*t = 1 - t
These match the equations that you specify at the beginning, so there is no ambiguity.
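This reconstruction can also be automated. A minimal sketch (the names fit_lm, groups, and eqs are mine; it assumes the lm variant of the fit on the mydata frame from the question): predict at t = 0 and t = 1 for each observed (p1, p2) combination, which yields one intercept/slope pair per line, and N is simply the number of distinct combinations.
fit_lm <- lm(y ~ t * p1 * p2, data = mydata)
groups <- unique(mydata[, c("p1", "p2")])
eqs <- do.call(rbind, lapply(seq_len(nrow(groups)), function(i) {
  # evaluate the fitted line for this group at t = 0 and t = 1
  nd <- data.frame(t = c(0, 1), p1 = groups$p1[i], p2 = groups$p2[i])
  pr <- predict(fit_lm, newdata = nd)
  data.frame(p1 = groups$p1[i], p2 = groups$p2[i],
             intercept = unname(pr[1]), slope = unname(pr[2] - pr[1]))
}))
eqs       # one row per best-fit line: y = intercept + slope * t
nrow(eqs) # N, the number of distinct lines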
Source: https://stackoverflow.com/questions/41181636/how-can-i-extract-the-number-of-lines-and-the-corresponding-equations-from-a-lin