predict

Using “predict” in nls

Submitted by 前提是你 on 2019-12-23 17:06:47
Question: I have data from the USGS National Water Data website. I am currently trying to plot and fit curves to the data to use in prediction for the different measurements taken within the dataset (dissolved oxygen, pH, gage height and temperature), all in relation to discharge rate. I used the nls() command and I am using a book of equations to find which curve to use; for this example I have specifically used Schumacher's equation (p. 48 in the book). Find the link to data: curve book: http://www.for.gov.bc.ca
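A minimal sketch of what such a fit could look like, assuming Schumacher's curve in its common exponential form y = exp(a + b/x) and a made-up data frame wq with columns discharge and do_mg; the column names, values and starting estimates are illustrative, not taken from the USGS data.

```r
# Sketch: fitting a Schumacher-type curve with nls()
# Assumes a data frame 'wq' with columns 'discharge' and 'do_mg' (hypothetical names/values).
# Schumacher's equation in exponential form: y = exp(a + b / x)
wq <- data.frame(
  discharge = c(50, 120, 300, 650, 1200, 2500),   # made-up example values
  do_mg     = c(11.0, 8.2, 7.3, 6.9, 6.8, 6.8)
)

fit <- nls(do_mg ~ exp(a + b / discharge),
           data  = wq,
           start = list(a = 2, b = 10))   # rough starting values

summary(fit)

# Predict dissolved oxygen over a grid of discharge values and overlay the curve
newdat <- data.frame(discharge = seq(50, 2500, length.out = 100))
newdat$pred <- predict(fit, newdata = newdat)

plot(do_mg ~ discharge, data = wq)
lines(pred ~ discharge, data = newdat)
```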

Keras Val_acc is good but prediction for same data is poor

Submitted by 社会主义新天地 on 2019-12-23 06:35:08
Question: I am using Keras for a two-class CNN classification. While training, my val_acc is above 95 percent, but when I predict results for the same validation data the accuracy is less than 60 percent; is that even possible? This is my code: from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Convolution2D, MaxPooling2D from keras.layers import Activation, Dropout, Flatten, Dense from keras import backend as K from keras.callbacks import

Simple Way to Combine Predictions from Multiple Models for Subset Data in R

Submitted by 喜你入骨 on 2019-12-22 17:52:59
Question: I would like to build separate models for the different segments of my data. I have built the models like so: log1 <- glm(y ~ ., family = "binomial", data = train, subset = x1==0) log2 <- glm(y ~ ., family = "binomial", data = train, subset = x1==1 & x2<10) log3 <- glm(y ~ ., family = "binomial", data = train, subset = x1==1 & x2>=10) If I run the predictions on the training data, R remembers the subsets and the prediction vectors have the length of the respective subset. However, if I
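One common way to score a full data set with segment-specific models is to predict each model only on the rows that match its segment and write the results back by row index. The sketch below assumes a test data frame with the same columns as train; the helper function name is made up.

```r
# Sketch: combine predictions from segment-specific glm models on new data.
# Assumes 'newdata' contains the columns x1, x2 and the predictors used in the models.
score_segments <- function(newdata, log1, log2, log3) {
  p <- rep(NA_real_, nrow(newdata))
  idx1 <- newdata$x1 == 0
  idx2 <- newdata$x1 == 1 & newdata$x2 < 10
  idx3 <- newdata$x1 == 1 & newdata$x2 >= 10
  p[idx1] <- predict(log1, newdata = newdata[idx1, ], type = "response")
  p[idx2] <- predict(log2, newdata = newdata[idx2, ], type = "response")
  p[idx3] <- predict(log3, newdata = newdata[idx3, ], type = "response")
  p
}

# Usage (assuming log1, log2, log3 and a 'test' data frame exist):
# test$pred <- score_segments(test, log1, log2, log3)
```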

GLM prediction for surface plot in scatter3D() in R

Submitted by 余生长醉 on 2019-12-22 13:02:59
Question: I'm trying to produce a surface plot with overlain points from a binomial GLM using the scatter3D() function. To do this I am using predict() to predict the z-surface for different values of x and y. # Data: library(plot3D) structure(list( x = c(0.572082281112671, -0.295024245977402, 0.295024245977402, 0.861117839813232, 0.572082281112671, -1.74020183086395, 0.861117839813232, 0.283046782016754, 0.861117839813232, 0.283046782016754, -0.295024245977402, 1.43918883800507, 1.43918883800507, -0
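The usual pattern here is to predict over a regular x/y grid, reshape the predictions into a matrix, and pass that matrix via the surf argument of scatter3D(). A minimal sketch with simulated data follows; the data frame, model formula and grid size are illustrative, not the ones from the question.

```r
# Sketch: build a predicted z-surface from a binomial GLM for plot3D::scatter3D().
library(plot3D)

# Simulated binary response as a stand-in for the question's data
dat <- data.frame(x = rnorm(50), y = rnorm(50))
dat$z <- rbinom(50, 1, plogis(0.5 * dat$x - 0.8 * dat$y))

fm <- glm(z ~ x + y, family = binomial, data = dat)

# Grid of x/y values and the predicted probability surface
x.pred <- seq(min(dat$x), max(dat$x), length.out = 30)
y.pred <- seq(min(dat$y), max(dat$y), length.out = 30)
grid   <- expand.grid(x = x.pred, y = y.pred)
z.pred <- matrix(predict(fm, newdata = grid, type = "response"),
                 nrow = length(x.pred), ncol = length(y.pred))

# Observed points plus the fitted probability surface
scatter3D(dat$x, dat$y, dat$z, pch = 16,
          surf = list(x = x.pred, y = y.pred, z = z.pred, facets = NA))
```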

Running predict() after tobit() in package AER

Submitted by ╄→гoц情女王★ on 2019-12-22 12:31:38
Question: I am doing a tobit analysis on a dataset where the dependent variable (let's call it y) is left-censored at 0. So this is what I do: library(AER) fit <- tobit(data=mydata, formula=y ~ a + b + c) This is fine. Now I want to run the predict() function to get the fitted values. Ideally I am interested in the predicted values of the unobserved latent variable "y*" and the observed censored variable "y" [see Reference 1]. I checked the documentation for predict.survreg [Reference 2] and I don't
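A hedged sketch of how the two quantities could be obtained from such a fit, assuming the standard left-censored-at-zero tobit formulas: the latent mean is the linear predictor, and the mean of the observed variable is E[y] = Φ(μ/σ)·μ + σ·φ(μ/σ). These formulas are stated here as an assumption about the model, not as documented predict() behaviour.

```r
# Sketch: fitted values after AER::tobit() (left-censored at 0).
# 'fit' is the tobit model from the question; the censored-mean formula below follows
# the textbook tobit result and is an illustration, not the output of predict() itself.
library(AER)

# Latent variable y*: the linear predictor X %*% beta
mu    <- predict(fit)    # for a gaussian survreg/tobit fit this is the linear predictor
sigma <- fit$scale       # estimated scale parameter

# Expected value of the observed, censored variable y (censoring at 0):
# E[y] = pnorm(mu/sigma) * mu + sigma * dnorm(mu/sigma)
ey <- pnorm(mu / sigma) * mu + sigma * dnorm(mu / sigma)

head(cbind(latent = mu, censored = ey))
```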

Multiclass Classification with LightGBM

Submitted by 允我心安 on 2019-12-22 04:45:17
Question: I am trying to build a classifier for a multi-class classification problem (3 classes) using LightGBM in Python. I used the following parameters: params = {'task': 'train', 'boosting_type': 'gbdt', 'objective': 'multiclass', 'num_class':3, 'metric': 'multi_logloss', 'learning_rate': 0.002296, 'max_depth': 7, 'num_leaves': 17, 'feature_fraction': 0.4, 'bagging_fraction': 0.6, 'bagging_freq': 17} All the categorical features of the dataset are label encoded with LabelEncoder. I trained the

glmer - predict with binomial data (cbind count data)

Submitted by 南笙酒味 on 2019-12-21 04:38:11
Question: I am trying to predict values over time (Days on the x-axis) for a glmer model that was run on my binomial data. Total.Alive and Total.Dead are count data. This is my model, with the corresponding steps below: full.model.dredge <- glmer(cbind(Total.Alive, Total.Dead) ~ (CO2.Treatment + Lime.Treatment + Day)^3 + (Day|Container) + (1|index), data=Survival.data, family="binomial") We have accounted for overdispersion, as you can see in the code (1|index). We then use the dredge command to determine the best fitted
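A hedged sketch of the usual predict-over-new-data pattern for a glmer fit: build a grid over Day and the treatment variables, then predict with re.form = NA so the random effects are dropped and only population-level (fixed-effect) predictions are returned. Variable names follow the question; the grid below assumes the treatments are factors in Survival.data.

```r
# Sketch: population-level predictions over Day for the glmer model in the question.
library(lme4)

# New data covering the Day range, crossed with the treatment levels
# (assumes CO2.Treatment and Lime.Treatment are factors; adjust if they are numeric).
newdat <- expand.grid(
  Day            = seq(min(Survival.data$Day), max(Survival.data$Day), length.out = 50),
  CO2.Treatment  = levels(Survival.data$CO2.Treatment),
  Lime.Treatment = levels(Survival.data$Lime.Treatment)
)

# Predicted proportion alive, ignoring the random effects (re.form = NA)
newdat$p.alive <- predict(full.model.dredge, newdata = newdat,
                          re.form = NA, type = "response")

head(newdat)
```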

How to predict x values from a linear model (lm)

Submitted by 孤者浪人 on 2019-12-20 19:44:56
Question: I have this data set: x <- c(0, 40, 80, 120, 160, 200) y <- c(6.52, 5.10, 4.43, 3.99, 3.75, 3.60) I calculated a linear model using lm(): model <- lm(y ~ x) I want to know the predicted values of x if I have new y values, e.g. ynew <- c(5.5, 4.5, 3.5), but if I use the predict() function, it calculates only new y values. How can I predict new x values if I have new y values? Answer 1: Since this is a typical problem in chemistry (predicting values from a calibration), package chemCal provides inverse
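A hedged sketch of the two usual approaches for the data in the question: algebraically inverting the fitted line, and chemCal's inverse.predict(), which (as I understand it) expects the observed y value(s) of a single sample per call and is therefore applied to each new y value separately here.

```r
# Sketch: predicting x from new y values for the calibration line in the question.
x <- c(0, 40, 80, 120, 160, 200)
y <- c(6.52, 5.10, 4.43, 3.99, 3.75, 3.60)
model <- lm(y ~ x)
ynew  <- c(5.5, 4.5, 3.5)

# 1) Simple algebraic inversion of y = a + b*x  =>  x = (y - a) / b
xnew <- (ynew - coef(model)[1]) / coef(model)[2]
xnew

# 2) chemCal::inverse.predict(), one new y value per call
#    (also returns a confidence interval for the estimated x)
library(chemCal)
lapply(ynew, function(yv) inverse.predict(model, yv))
```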

Predict y value for a given x in R

Submitted by 南笙酒味 on 2019-12-20 03:07:31
Question: I have a linear model: mod=lm(weight~age, data=f2) I would like to input an age value and have the corresponding weight from this model returned. This is probably simple, but I have not found a simple way to do this. Answer 1: If your purposes are related to just one prediction, you can just grab your coefficients with coef(mod), or you can build a simple equation like this: coef(mod)[1] + "Your_Value"*coef(mod)[2] Answer 2: It's usually more robust to use the predict method of lm: f2 <- data.frame(age
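A minimal sketch of the predict() call that Answer 2 starts to spell out, using an arbitrary example age; the column name in the new data frame must match the predictor name used in the model formula.

```r
# Sketch: predicted weight for a given age from mod <- lm(weight ~ age, data = f2)
newage <- data.frame(age = 10)            # 10 is an arbitrary example value
predict(mod, newdata = newage)

# With a confidence interval for the mean response:
predict(mod, newdata = newage, interval = "confidence")
```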