lm

Warning message "'newdata' had 1 row but variables found have 16 rows" in R

Submitted by 一笑奈何 on 2019-12-10 17:49:46
Question: I am supposed to use the `predict` function to predict when fjbjor is 5.5, but I always get this warning message. I have tried many ways and it always appears, so can anyone see what I am doing wrong here? This is my code:

```r
fit.lm <- lm(fjbjor ~ amagn, data = bjor)
summary(fit.lm)
new.bjor <- data.frame(fjbjor = 5.5)
predict(fit.lm, new.bjor)
```

and this comes out:

```
       1        2        3        4        5        6        7        8        9       10       11
5.981287 2.864521 9.988559 5.758661 4.645530 2.419269 4.645530 5.313409 6.871792 3.309773 4.200278
      12       13       14       15
```
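A likely fix, sketched with simulated stand-in data (the original `bjor` data frame is not shown, so the numbers here are made up): `predict()` matches columns of `newdata` against the model's *predictors*. Supplying the response (`fjbjor`) instead means nothing matches, so `predict()` falls back to the 16 fitted values and warns that 'newdata' had 1 row but the variables have 16 rows.

```r
set.seed(1)
bjor <- data.frame(amagn = runif(16, 1, 10))
bjor$fjbjor <- 2 + 0.5 * bjor$amagn + rnorm(16, sd = 0.3)

fit.lm <- lm(fjbjor ~ amagn, data = bjor)

# Right: name the predictor in newdata, not the response.
pred <- predict(fit.lm, newdata = data.frame(amagn = 5.5))

# If the goal really is to find amagn when fjbjor is 5.5, swap the
# roles in the regression instead.
fit.rev <- lm(amagn ~ fjbjor, data = bjor)
pred.rev <- predict(fit.rev, newdata = data.frame(fjbjor = 5.5))
```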

Using `%>%` with `lm` and `rbind`

Submitted by 蹲街弑〆低调 on 2019-12-10 17:26:26
Question: I have a dataframe Z looking like

```
t x y d
0 1 2 1
1 2 3 1
2 3 4 1
0 1 2 2
1 2 3 2
2 3 4 2
```

with d being a factor column. I now want to fit a linear model with `lm` to y over t for both levels of d and add it as a new column to the dataframe. I tried

```r
Z %>% filter(d == 1) %>% lm(y ~ t)
```

but this gives me an error saying "Error in as.data.frame.default(data) : cannot coerce class ""formula"" to a data.frame". However, `lm(y ~ t, data = Z)` works fine. Any help would be appreciated.

Answer 1: We need to
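A minimal sketch of the usual fix, assuming the dplyr pipe: `%>%` feeds the data frame into `lm()`'s *first* argument, which is the formula, hence the "cannot coerce class 'formula'" error. Naming the `data` argument with the `.` placeholder resolves it.

```r
library(dplyr)

# Reconstructed from the table in the question
Z <- data.frame(t = rep(0:2, 2), x = rep(1:3, 2), y = rep(2:4, 2),
                d = factor(rep(1:2, each = 3)))

# Pipe into the named `data` argument instead of the first (formula) slot.
fit1 <- Z %>% filter(d == 1) %>% lm(y ~ t, data = .)
coef(fit1)
```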

R: predict.lm() not recognizing an object

Submitted by 谁说胖子不能爱 on 2019-12-10 17:16:17
Question:

```r
> reg.len <- lm(chao1.ave ~ lg.std.len, b.div)  # b.div is my data frame imported from a CSV file
> reg.len

Call:
lm(formula = chao1.ave ~ lg.std.len, data = b.div)

Coefficients:
(Intercept)   lg.std.len
      282.4       -115.7

> newx <- seq(0.6, 1.4, 0.01)
> prd.len <- predict(reg.len, newdata = data.frame(x = newx), interval = "confidence", level = 0.90, type = "response")
Error in eval(expr, envir, enclos) : object 'lg.std.len' not found
```

I've tried doing the lm like this: `lm(b.div$chao1.ave ~ b.div$lg.std.len)`,
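The likely fix, sketched with made-up data (`b.div` is not available, so the simulated values below are illustrative only): `predict()` looks up columns of `newdata` by the names used in the model formula, so the new column must be called `lg.std.len`, not `x`.

```r
set.seed(42)
b.div <- data.frame(lg.std.len = runif(20, 0.6, 1.4))
b.div$chao1.ave <- 282.4 - 115.7 * b.div$lg.std.len + rnorm(20, sd = 10)

reg.len <- lm(chao1.ave ~ lg.std.len, data = b.div)

# Name the newdata column exactly as in the formula.
newx <- seq(0.6, 1.4, 0.01)
prd.len <- predict(reg.len,
                   newdata = data.frame(lg.std.len = newx),
                   interval = "confidence", level = 0.90)
head(prd.len)  # one row (fit, lwr, upr) per value of newx
```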

Test if the slope in simple linear regression equals to a given constant in R

Submitted by 喜你入骨 on 2019-12-10 16:22:03
Question: I want to test if the slope in a simple linear regression is equal to a given constant other than zero.

```r
> x <- c(1,2,3,4)
> y <- c(2,5,8,13)
> fit <- lm(y ~ x)
> summary(fit)

Call:
lm(formula = y ~ x)

Residuals:
   1    2    3    4
 0.4 -0.2 -0.8  0.6

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -2.0000     0.9487  -2.108  0.16955
x             3.6000     0.3464  10.392  0.00913 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7746 on 2 degrees of freedom
Multiple R
```
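One way to carry out this test, sketched from the output above: compute the t statistic (b - c)/SE(b) by hand, or equivalently refit with an `offset` so that the printed test of the slope against 0 becomes a test against the constant c (here c = 3 is an arbitrary example value).

```r
x <- c(1, 2, 3, 4)
y <- c(2, 5, 8, 13)
fit <- lm(y ~ x)

# H0: slope == c0. t = (estimate - c0) / SE, with n - 2 df.
c0 <- 3
est <- coef(summary(fit))["x", "Estimate"]    # 3.6
se  <- coef(summary(fit))["x", "Std. Error"]  # ~0.3464
t.stat <- (est - c0) / se
p.val  <- 2 * pt(abs(t.stat), df = fit$df.residual, lower.tail = FALSE)

# Equivalent trick: move c0*x into an offset; the reported slope is then
# the departure from c0, and its test against 0 is the test against c0.
fit.offset <- lm(y ~ x, offset = c0 * x)
```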

How to display different levels in a multilevel analysis with different colors

Submitted by 五迷三道 on 2019-12-10 15:55:18
Question: I am a beginner at multilevel analysis and am trying to understand how I can make graphs with the plot functions from base R. I understand the output of `fit` below, but I am struggling with the visualization. `df` is just some simple test data:

```r
t <- seq(0, 10, 1)
df <- data.frame(t = t,
                 y = 1.5 + 0.5*(-1)^t + (1.5 + 0.5*(-1)^t) * t,
                 p1 = as.factor(rep(c("p1", "p2"), 10)[1:11]))
fit <- lm(y ~ t * p1, data = df)

# I am looking for an automated version of that:
plot(df$t, df$y)
lines(df$t[df$p1 == "p1"], fit
```
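One automated version, as a base-R sketch: color the points by factor level and loop over `levels()` to draw each group's fitted line from `predict()`, so it generalizes to any number of levels.

```r
t <- seq(0, 10, 1)
df <- data.frame(t = t,
                 y = 1.5 + 0.5 * (-1)^t + (1.5 + 0.5 * (-1)^t) * t,
                 p1 = as.factor(rep(c("p1", "p2"), 10)[1:11]))
fit <- lm(y ~ t * p1, data = df)

# One color per factor level; lines come from the model's fitted values.
cols <- seq_len(nlevels(df$p1))
plot(df$t, df$y, col = cols[df$p1], pch = 19)
for (i in seq_along(levels(df$p1))) {
  idx <- df$p1 == levels(df$p1)[i]
  lines(df$t[idx], predict(fit)[idx], col = cols[i])
}
legend("topleft", legend = levels(df$p1), col = cols, lty = 1)
```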

AIC different between biglm and lm

Submitted by 一曲冷凌霜 on 2019-12-10 15:39:53
Question: I have been trying to use `biglm` to run linear regressions on a large dataset (approx 60,000,000 lines). I want to use AIC for model selection. However, I discovered when playing with `biglm` on smaller datasets that the AIC values returned by `biglm` differ from those returned by `lm`. This even applies to the example in the `biglm` help.

```r
data(trees)
ff <- log(Volume) ~ log(Girth) + log(Height)
chunk1 <- trees[1:10,]
chunk2 <- trees[11:20,]
chunk3 <- trees[21:31,]
library(biglm)
a <- biglm(ff, chunk1)
a
```
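A sketch of what is likely going on: the two AIC methods use different conventions. `stats`' AIC for an `lm` object is based on the full Gaussian log-likelihood, while `biglm`'s AIC method works from the model deviance and drops constant terms, so the absolute numbers are not comparable across the two functions. Compare candidate models within a single method.

```r
library(biglm)
data(trees)
ff <- log(Volume) ~ log(Girth) + log(Height)

fit.lm  <- lm(ff, data = trees)
fit.big <- biglm(ff, data = trees)

# Same fitted coefficients...
coef(fit.lm)
coef(fit.big)

# ...but AIC computed under different conventions, so the values differ.
AIC(fit.lm)
AIC(fit.big)
```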

Using R's lm(), should a formula object be passed as a character string?

Submitted by 和自甴很熟 on 2019-12-10 15:15:35
Question: I found a strange behavior of R when using `lm()`. Based on the `cars` dataset, the following functions are meant to plot fitted braking distance with a localized linear regression at speed 30.

```r
func1 <- function(fm, spd){
  w <- dnorm(cars$speed - spd, sd = 5)
  fit <- lm(formula = as.formula(fm), weights = w, data = cars)
  plot(fitted(fit))
}
func2 <- function(fm, spd){
  w <- dnorm(cars$speed - spd, sd = 5)
  fit <- lm(formula = fm, weights = w, data = cars)
  plot(fitted(fit))
}
func1("dist ~ speed", 30)
func2(dist ~ speed, 30)
```
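A sketch of why the two calls behave differently: `lm()` looks up variables that are not columns of `data` (here the weights `w`) in the environment attached to the formula. `as.formula(fm)` inside `func1` attaches the function's own frame, where `w` exists; a formula literal created at the call site carries the global environment, where `w` does not, so `func2` fails with "object 'w' not found". Repointing the formula's environment (in a hypothetical `func3`) is one workaround:

```r
func3 <- function(fm, spd) {
  w <- dnorm(cars$speed - spd, sd = 5)
  environment(fm) <- environment()  # make lm() look for w in this frame
  fit <- lm(formula = fm, weights = w, data = cars)
  fitted(fit)
}
head(func3(dist ~ speed, 30))
```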

Does R always return NA coefficients in linear regression with unnecessary variables?

Submitted by 微笑、不失礼 on 2019-12-10 13:49:39
Question: My question is about unnecessary predictors, namely variables that do not provide any new linear information, or variables that are linear combinations of the other predictors. As you can see, the swiss dataset has six variables.

```r
data(swiss)
names(swiss)
# "Fertility"   "Agriculture" "Examination" "Education"
# "Catholic"    "Infant.Mortality"
```

Now I introduce a new variable `ec`, a linear combination of `Examination` and `Catholic`:

```r
ec <- swiss$Examination + swiss$Catholic
```

When
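In this situation `lm()` does report NA: because `ec` is an exact linear combination of predictors already in the model, the pivoted QR decomposition detects the rank deficiency and the aliased coefficient comes back as NA (with the default `singular.ok = TRUE`). A short sketch:

```r
data(swiss)
ec <- swiss$Examination + swiss$Catholic

# ec duplicates information already in the model, so its coefficient
# cannot be identified and lm() returns NA for it.
fit <- lm(Fertility ~ . + ec, data = swiss)
coef(fit)[["ec"]]  # NA

# alias() reports the exact linear dependency:
alias(fit)
```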

lm called from inside dlply throws "0 (non-NA) cases" error [r]

Submitted by 为君一笑 on 2019-12-10 03:11:26
Question: I'm using `dlply()` with a custom function that averages slopes of `lm()` fits on data that contain some NA values, and I get the error:

```
Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
  0 (non-NA) cases
```

This error only happens when I call `dlply` with two key variables; splitting by one variable works fine. Annoyingly, I can't reproduce the error with a simple dataset, so I've posted the problem dataset in my Dropbox. Here's the code, as minimized as possible while still
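Without the original dataset the error cannot be reproduced exactly, but here is a made-up illustration of the usual cause: splitting on two keys can yield a group whose predictor or response is entirely NA, so `lm()` has 0 non-NA cases. Guarding the per-group function avoids the crash:

```r
library(plyr)

# Toy data: the (b, x) and (b, y) groups have no complete cases at all.
d <- data.frame(g1 = rep(c("a", "b"), each = 4),
                g2 = rep(c("x", "y"), times = 4),
                x  = c(1, 2, 3, 4, NA, NA, 1, 2),
                y  = c(1, 2, 3, 4, NA, NA, NA, NA))

safe.slope <- function(df) {
  ok <- complete.cases(df$x, df$y)
  if (sum(ok) < 2) return(NA_real_)  # skip groups with too few cases
  coef(lm(y ~ x, data = df[ok, ]))[["x"]]
}

res <- dlply(d, .(g1, g2), safe.slope)
```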

What does predict.glm(, type="terms") actually do?

Submitted by 走远了吗. on 2019-12-09 06:07:49
Question: I am confused about the way the `predict.glm` function in R works. According to the help:

> The "terms" option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale.

Thus, if my model has the form f(y) = X*beta, then the command `predict(model, X, type='terms')` is expected to produce the same matrix X, multiplied by beta element-wise. For example, if I train the following model:

```r
test.data = data.frame(y = c(0,0,0,1,1,1,1,1,1), x = c(1,2,3,1,2,2,3,3,3))
model =
```
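Not quite: with `type = "terms"` each column is the *centered* contribution of a term, (x_j - mean(x_j)) * beta_j, and the part dropped by centering is kept in a "constant" attribute. A small sketch (the model completion here is a guessed logistic fit, since the question is cut off):

```r
test.data <- data.frame(y = c(0,0,0,1,1,1,1,1,1),
                        x = c(1,2,3,1,2,2,3,3,3))
model <- glm(y ~ x, family = binomial, data = test.data)

tt <- predict(model, type = "terms")

# Each row of tt, plus the stored constant, sums to the linear predictor.
eta <- rowSums(tt) + attr(tt, "constant")
all.equal(unname(eta), unname(predict(model, type = "link")))  # TRUE
```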