Question
I'm experiencing some trouble when using the polr function (from the MASS package).
Here is a subset of the data I have:
# response variable (note: the name "rep" masks base R's rep() function;
# calls like rep(1:5, 10) below still find the function, but a different name would be safer)
rep = factor(c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06, 0.03))
# "rep" is discrete variable which represents proportion so that it varies between 0 and 1
# It is discrete proportions because it is the proportion of TRUE over a finite list of TRUE/FALSE. example: if the list has 3 arguments, the proportions value can only be 0,1/3,2/3 or 1
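# To illustrate with a made-up example (the 'checks' vector below is
# hypothetical, not from my data):
checks = c(TRUE, FALSE, TRUE)
mean(checks)  # 2/3 -- with 3 elements only 0, 1/3, 2/3 and 1 are possible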
# predictor variables
set.seed(10)
pred.1 = sample(x=rep(1:5,10),size=50)
pred.2 = sample(x=rep(c('a','b','c','d','e'),10),size=50)
# "pred" are discrete variables
# polr
polr(rep~pred.1+pred.2)
The subset I gave you works fine! But my entire data set, and some subsets of it, does not work, and I can't find anything in my data that differs from this subset except the quantity. So here is my question: are there any limitations, for example on the number of levels, that would lead to the following error message:
Error in optim(s0, fmin, gmin, method = "BFGS", ...) :
  initial value in 'vmmin' is not finite
and the following warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
(I had to translate these two messages into English, so they might not be 100% accurate.)
Sometimes I only get the warning message, and sometimes everything is fine, depending on which subset of my data I use.
For information, my rep variable has a total of 101 levels (and contains nothing other than the kind of data I described).
So it is a terrible question that I am asking, because I can't give you my full dataset and I don't know where the problem is. Can you guess where my problem comes from based on this information?
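For reference, here is a toy example (made-up data, unrelated to mine) showing how a predictor that perfectly separates a binary response reproduces that same glm.fit warning:
# perfect separation: y is 0 whenever x <= 3 and 1 whenever x >= 4
y = c(0, 0, 0, 1, 1, 1)
x = 1:6
glm(y ~ x, family = binomial)
# warns: glm.fit: fitted probabilities numerically 0 or 1 occurred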
Thank you
Answer 1:
Following @joran's advice that your problem is probably the 100-level factor, I'm going to recommend something that probably isn't statistically valid but will likely still be effective in your particular situation: don't use logistic regression at all. Just drop it. Perform a simple linear regression, then discretize the output as needed by rounding each prediction to the nearest observed level. Give it a shot and see how well it works for you.
rep.v = c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06,0.03)
set.seed(10)
pred.1 = factor(sample(x=rep(1:5,10),size=50))
pred.2 = factor(sample(x=rep(c('a','b','c','d','e'),10),size=50))
model = lm(rep.v ~ pred.1 + pred.2)  # pred.1 and pred.2 are already factors
output = predict(model, newdata = data.frame(pred.1, pred.2))  # predict.lm's argument is 'newdata', not 'newx'
# Here's one way you could accomplish the discretization/rounding:
# snap each prediction to the nearest level observed in the data
f.levels = unique(rep.v)
rounded = sapply(output, function(x) {
  d = abs(f.levels - x)    # distance to every observed level
  f.levels[which.min(d)]   # nearest level (first match on ties)
})
> rounded
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
0.06 0.07 0.00 0.06 0.15 0.00 0.07 0.00 0.13 0.06 0.06 0.15 0.15 0.92 0.15 0.92 0.15 0.15 0.06 0.06 0.00 0.07 0.15 0.15
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
0.15 0.15 0.00 0.00 0.15 0.00 0.15 0.15 0.07 0.15 0.00 0.07 0.15 0.00 0.15 0.15 0.00 0.15 0.15 0.15 0.92 0.15 0.15 0.00
49 50
0.13 0.15
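As an aside, if you prefer to avoid sapply, the same rounding step can be written in a vectorized form; this is just an equivalent sketch of the step above:
# 50-by-k matrix of distances from each prediction to each observed level,
# then take the nearest level for every row
d.mat = abs(outer(output, f.levels, "-"))
rounded.2 = f.levels[apply(d.mat, 1, which.min)]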
Answer 2:
orm from the rms package can handle ordered outcomes with a large number of categories.
library(rms)
orm(rep ~ pred.1 + pred.2)
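Note that orm is built for continuous or many-level ordinal responses, so (if I recall its interface correctly) the proportions can also be passed as numbers rather than a factor:
# orm treats a numeric response as ordinal, one intercept per distinct value,
# so converting the factor back to numeric also works
orm(as.numeric(as.character(rep)) ~ pred.1 + pred.2)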
Source: https://stackoverflow.com/questions/17839292/polr-ordinal-logistic-regression-in-r