predict

robust standard errors in ggplot2

拥有回忆 submitted on 2019-12-30 07:09:12
Question: I would like to plot a model with ggplot2. I have estimated a robust variance-covariance matrix which I would like to use when estimating the confidence interval. Can I tell ggplot2 to use my VCOV, or, alternatively, can I somehow force predict.lm to use my VCOV matrix? A dummy example:

    source("http://people.su.se/~ma/clmclx.R")
    df <- data.frame(x1 = rnorm(100), x2 = rnorm(100), y = rnorm(100),
                     group = as.factor(sample(1:10, 100, replace = TRUE)))
    lm1 <- lm(y ~ x1 + x2, data = df)
    coeftest(lm1)
    ## …
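predict.lm has no argument for a user-supplied covariance matrix, so one workaround is to build the interval by hand from the model matrix of the new data and the robust VCOV, then hand the result to ggplot2. A minimal sketch (not from the original post), using sandwich::vcovCL as a stand-in for the sourced clx() code and holding x2 at zero:

    library(sandwich)
    library(ggplot2)

    nd <- data.frame(x1 = seq(-2, 2, length.out = 50), x2 = 0)
    X  <- model.matrix(~ x1 + x2, data = nd)        # design matrix for the new data
    V  <- vcovCL(lm1, cluster = df$group)           # robust (clustered) VCOV
    nd$fit <- as.vector(X %*% coef(lm1))            # point predictions
    se     <- sqrt(diag(X %*% V %*% t(X)))          # robust SEs of the fitted values
    nd$lwr <- nd$fit - qt(0.975, df.residual(lm1)) * se
    nd$upr <- nd$fit + qt(0.975, df.residual(lm1)) * se

    ggplot(nd, aes(x1, fit)) +
      geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.3) +
      geom_line()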

predict.svm does not predict new data

对着背影说爱祢 submitted on 2019-12-29 06:59:16
Question: Unfortunately, I have problems using predict() in the following simple example:

    library(e1071)
    x <- c(1:10)
    y <- c(0, 0, 0, 0, 1, 0, 1, 1, 1, 1)
    test <- c(11:15)
    mod <- svm(y ~ x, kernel = "linear", gamma = 1, cost = 2, type = "C-classification")
    predict(mod, newdata = test)

The result is as follows:

    > predict(mod, newdata = test)
       1    2    3    4 <NA> <NA> <NA> <NA> <NA> <NA>
       0    0    0    0    0    1    1    1    1    1

Can anybody explain why predict() only gives the fitted values of the training sample (x, y) and does not care about…
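The usual explanation is that svm() was fit on free-standing vectors, so the model looks for a variable named x, while newdata = test is a bare numeric vector with no such name; predict() then falls back to the training fits. A sketch of the usual fix (my own, not from the thread), keeping everything in data frames:

    library(e1071)

    train <- data.frame(x = 1:10,
                        y = factor(c(0, 0, 0, 0, 1, 0, 1, 1, 1, 1)))
    mod <- svm(y ~ x, data = train, kernel = "linear", cost = 2,
               type = "C-classification")
    predict(mod, newdata = data.frame(x = 11:15))   # scores the five new points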

Predict.lm in R fails to recognize newdata

故事扮演 submitted on 2019-12-29 02:00:54
Question: I'm running a linear regression where the predictor is categorized by another value and am having trouble generating modeled responses for newdata. First, I generate some random values for the predictor and the error terms. I then construct the response. Note that the predictor's coefficient depends on the value of a categorical variable. I compose a design matrix based on the predictor and its category.

    set.seed(1)
    category = c(rep("red", 5), rep("blue", 5))
    x1 = rnorm(10, mean = 1, sd = 1)
    …
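Since the post is cut off, the sketch below is an assumption about the usual cause: predict.lm can only recognize newdata when the model was fit with a formula on a data frame whose variable names the new data repeats, not on a hand-built design matrix. Illustrative R:

    set.seed(1)
    dat <- data.frame(category = c(rep("red", 5), rep("blue", 5)),
                      x1       = rnorm(10, mean = 1, sd = 1))
    # response whose slope on x1 depends on the category
    dat$y <- ifelse(dat$category == "red", 2, -2) * dat$x1 + rnorm(10, sd = 0.1)

    fit <- lm(y ~ x1:category, data = dat)
    predict(fit, newdata = data.frame(x1 = c(0.5, 1.5),
                                      category = c("red", "blue")))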

predict() and newdata - How does this work?

大城市里の小女人 submitted on 2019-12-25 07:59:31
Question: Someone recently posted a question on this paper here: https://static.googleusercontent.com/media/www.google.com/en//googleblogs/pdfs/google_predicting_the_present.pdf The R code of the paper can be found at the very end of the paper. Essentially, the paper investigates one-month-ahead predictions of sales through search queries. I think I understood the model and method, but there is one detail that puzzles me. It is this part:

    ##### Divide data by two parts - model fitting & prediction
    dat1 …
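For context, the comment being asked about marks a rolling fit/predict split. A generic sketch of that idea (my own illustration, not the paper's code; the column names sales, lag1 and query_index, and the starting index, are hypothetical):

    # fit on everything up to month t-1, predict month t, then roll forward
    one_step_ahead <- function(df, t) {
      dat1 <- df[1:(t - 1), ]            # model-fitting part
      dat2 <- df[t, , drop = FALSE]      # prediction part (one month)
      fit  <- lm(sales ~ lag1 + query_index, data = dat1)
      predict(fit, newdata = dat2)
    }
    # preds <- sapply(18:nrow(df), function(t) one_step_ahead(df, t))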

factor(0) when using predict for SVM in R

倖福魔咒の submitted on 2019-12-24 16:30:13
Question: I have a data frame trainData which contains 198 rows and looks like:

             Matchup Win HomeID AwayID A_TWPCT A_WST6 A_SEED B_TWPCT B_WST6 B_SEED
    1 2010_1115_1457   1   1115   1457   0.531      5     16   0.567      4     16
    2 2010_1124_1358   1   1124   1358   0.774      5      3    0.75      5     14
    ...

The testData is similar. In order to use SVM, I have to change the response variable Win to a factor. I tried the below:

    trainDataSVM <- data.frame(Win    = as.factor(trainData$Win),
                               A_WST6 = trainData$A_WST6,
                               A_SEED = trainData$A_SEED,
                               B_WST6 = trainData$B_WST6, …
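factor(0) means predict() returned an empty factor. One common cause (an assumption on my part, since the post is truncated) is that every row of newdata gets dropped, for example because of NAs in the predictors, since predict.svm uses na.omit by default. A quick check, assuming the model uses the seed and streak columns for both teams:

    library(e1071)

    fit <- svm(Win ~ A_WST6 + A_SEED + B_WST6 + B_SEED,
               data = trainDataSVM, kernel = "linear")
    testDataSVM <- testData[, c("A_WST6", "A_SEED", "B_WST6", "B_SEED")]
    colSums(is.na(testDataSVM))          # any NAs here silently drop rows
    preds <- predict(fit, newdata = testDataSVM)
    length(preds)                        # should equal nrow(testDataSVM)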

{ “error”: “Serving signature name: ”serving_default“ not found in signature def” }

蓝咒 submitted on 2019-12-24 12:26:40
Question: I used GCP (Google Cloud Platform) to train my model and was able to export it. I served the exported model with a local Docker image of TensorFlow Serving 1.8 (CPU), and I get the following result as output for a REST POST call:

    { "error": "Serving signature name: \"serving_default\" not found in signature def" }

Answer 1: View the SignatureDefs of your model using the SavedModel CLI command as shown below:

    saved_model_cli show --dir /usr/local/google/home/abc/serving/tensorflow_serving/servables …
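Not part of the original thread: once saved_model_cli reports the signature names the model actually exports, the REST request can name one explicitly instead of relying on serving_default. A sketch in R with httr and jsonlite (the model name my_model, port 8501, the signature name and the input structure are hypothetical):

    library(httr)
    library(jsonlite)

    body <- list(signature_name = "predict",                 # use whatever saved_model_cli lists
                 instances      = list(list(x = c(1, 2, 3))))
    res <- POST("http://localhost:8501/v1/models/my_model:predict",
                body = toJSON(body, auto_unbox = TRUE),
                content_type_json())
    content(res)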

Predict cannot display the standard errors of the predictions, with se.fit=TRUE

寵の児 submitted on 2019-12-24 03:49:11
Question: As said in help(predict.nls), when se.fit = TRUE the standard errors of the predictions should be calculated. However, my code below does not display them, only the predictions.

    alloy <- data.frame(x = c(10, 30, 51, 101, 203, 405, 608, 810, 1013, 2026, 4052, 6078,
                              8104, 10130),
                        y = c(0.3561333, 0.3453, 0.3355, 0.327453, 0.3065299, 0.2839316,
                              0.2675214, 0.2552821, 0.2455726, 0.2264957, 0.2049573,
                              0.1886496, 0.1755897, 0.1651624))
    model <- nls(y ~ a * x^(-b), data = alloy, start = list(a = .5, b = .1))
    predict…
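In fact, predict.nls currently ignores se.fit, so the standard errors have to be computed by hand, for example with the delta method. A sketch (my own, with arbitrary new x values) for this particular model y = a * x^(-b):

    est  <- coef(model)
    V    <- vcov(model)
    newx <- c(50, 500, 5000)

    # gradient of a * x^(-b) with respect to (a, b), evaluated at the estimates
    G <- cbind(newx^(-est["b"]),
               -est["a"] * newx^(-est["b"]) * log(newx))
    fit <- est["a"] * newx^(-est["b"])
    se  <- sqrt(rowSums((G %*% V) * G))   # diag(G %*% V %*% t(G))
    cbind(fit = fit, se = se)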

'undefined columns selected' using 'predict' with newdata lme4

僤鯓⒐⒋嵵緔 submitted on 2019-12-24 03:48:12
Question: I'm trying to use the predict function on new data after running an nlmer model, and am running into issues. The dataset I use for the model looks like this (except patient_id has an actual value, which I can't share):

    > data
       patient_id variable    value M_visit_time M_agesero D_intercept D_agesero
    1:     SECRET       vl 4.542850     1.624658  33.16164           0   0.00000
    2:     SECRET       vl 4.408664     2.010959  33.16164           0   0.00000
    3:     SECRET       vl 4.493095     2.219178  33.16164           0   0.00000
    4:     SECRET       vl 4.540980     2.583562  33.16164           0   0.00000
    …
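A generic check, not from the thread: "undefined columns selected" from predict() with newdata usually means the new data frame is missing (or misspells) a variable the model formula uses, so it helps to compare the two sets of names before predicting (here fit is assumed to be the fitted nlmer model and newdata the new data frame):

    needed <- all.vars(formula(fit))
    setdiff(needed, names(newdata))     # any names printed here must be added to newdata
    # predict(fit, newdata = newdata)   # retry once the set difference is empty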

Rolling prediction in a data frame using dplyr and rollapply

拈花ヽ惹草 submitted on 2019-12-23 18:33:34
Question: My first question here :) My goal is: given a data frame with predictors (each column is a predictor, each row an observation), fit a regression using lm and then predict the value for the last observation, using a rolling window. The data frame looks like:

    > DfPredictor[1:40,]
           Y       X1     X2    X3      X4     X5
    1 3.2860 192.5115 2.1275 83381 11.4360 8.7440
    2 3.2650 190.1462 2.0050 88720 11.4359 8.8971
    3 3.2213 192.9773 2.0500 74130 11.4623 8.8380
    4 3.1991 193.7058 2.1050 73930 11.3366 8.7536
    5 3.2224 193.5407 2.0275 …
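A sketch of one way to do the rolling fit-and-predict (my own, using zoo rather than dplyr; the window width of 20 is an arbitrary choice):

    library(zoo)

    w <- 20
    roll_pred <- rollapplyr(seq_len(nrow(DfPredictor)), width = w, FUN = function(idx) {
      train <- DfPredictor[head(idx, -1), ]                 # all but the last row of the window
      fit   <- lm(Y ~ X1 + X2 + X3 + X4 + X5, data = train)
      predict(fit, newdata = DfPredictor[tail(idx, 1), , drop = FALSE])
    })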