multinomial

Multinomial distribution in PyMC

烂漫一生 submitted on 2020-01-02 22:11:12
Question: I am a newbie to PyMC. I have read the required material on GitHub and was doing fine until I got stuck on this problem. I want to create a collection of multinomial random variables which I can later sample using MCMC. The best I can do is:

rv = [Multinomial("rv", count[i], p_d[i]) for i in xrange(0, len(count))]
for i in rv:
    print i.value
    i.random()
for i in rv:
    print i.value

But this is no good, since I want to be able to call rv.value and rv.random(); otherwise I won't be able to sample
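For intuition on what such a list of Multinomial variables draws, here is a minimal NumPy sketch (not PyMC itself; `count` and `p_d` are made-up stand-ins for the question's data). Each element is an independent multinomial draw with its own total and probability vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the question's `count` and `p_d`
count = [10, 20, 30]                     # total draws per variable
p_d = [[0.2, 0.3, 0.5],
       [0.1, 0.6, 0.3],
       [0.4, 0.4, 0.2]]                  # per-variable category probabilities

# One multinomial draw per (count, p) pair, mirroring the list of Multinomial RVs
rv_values = [rng.multinomial(n, p) for n, p in zip(count, p_d)]

for v in rv_values:
    print(v)
```

Each draw sums to its own `count[i]`, which is the invariant an MCMC sampler preserves when resampling these variables.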

GBM multinomial distribution, how to use predict() to get predicted class?

独自空忆成欢 submitted on 2019-12-30 08:27:47
Question: I am using the multinomial distribution from the gbm package in R. When I use the predict function, I get a series of values:

5.086328 -4.738346 -8.492738 -5.980720 -4.351102 -4.738044 -3.220387 -4.732654

but I want to get the probability of each class occurring. How do I recover the probabilities? Thank you.
Answer 1: Take a look at ?predict.gbm; you'll see that there is a "type" parameter to the function. Try out predict(<gbm object>, <new data>, type="response").
Answer 2: predict.gbm(..., type=
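For the multinomial distribution, gbm's raw predictions are per-class scores on the link scale, and type="response" amounts to a softmax over them. A rough Python sketch of that conversion, assuming the eight numbers quoted above are the per-class scores for one observation:

```python
import numpy as np

# The eight per-class scores quoted in the question (link scale)
scores = np.array([5.086328, -4.738346, -8.492738, -5.980720,
                   -4.351102, -4.738044, -3.220387, -4.732654])

# Softmax: subtract the max first for numerical stability
z = scores - scores.max()
probs = np.exp(z) / np.exp(z).sum()

print(probs)           # probability for each class
print(probs.argmax())  # index of the most likely class
```

Here the first class dominates, since its score is far above the rest.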

Multinomial regression with imputed data

耗尽温柔 submitted on 2019-12-24 06:21:32
Question: I need to impute missing data and then conduct multinomial regression with the generated datasets. I have tried using mice for the imputation and then the multinom function from nnet for the multinomial regression, but this gives me unreadable output. Here is an example using the nhanes2 dataset available with the mice package:

library(mice)
library(nnet)
test <- mice(nhanes2, meth=c('sample','pmm','logreg','norm'))
# age is categorical, bmi is continuous
m <- with(test, multinom(age ~ bmi, model = T)
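The overall workflow (impute several times, fit the same multinomial model on each completed dataset, then pool) can be sketched in Python with scikit-learn's IterativeImputer standing in for mice. This is only a rough analog on toy data, not the question's nhanes2 example, and it pools only the point estimates (the first half of Rubin's rules):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in: three predictors, a 3-level categorical outcome,
# and 30 values knocked out of the first predictor
X = rng.normal(size=(200, 3))
y = rng.integers(0, 3, size=200)
X[rng.choice(200, 30, replace=False), 0] = np.nan

# m = 5 imputed datasets, each fit separately (like with(mice_obj, multinom(...)))
coefs = []
for seed in range(5):
    imp = IterativeImputer(random_state=seed, sample_posterior=True)
    X_imp = imp.fit_transform(X)
    fit = LogisticRegression(max_iter=1000).fit(X_imp, y)  # multinomial fit
    coefs.append(fit.coef_)

# Pool the point estimates by averaging across imputations
pooled = np.mean(coefs, axis=0)
print(pooled.shape)  # one row of coefficients per outcome category
```

In R, pool(m) on the mira object produced by with() does this pooling (including the variance combination) for you; the "unreadable output" is usually just the unpooled list of per-imputation fits.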

How to get average marginal effects (AMEs) with standard errors of a multinomial logit model?

核能气质少年 submitted on 2019-12-23 22:43:04
Question: I want to get the average marginal effects (AMEs) of a multinomial logit model, with standard errors. I have tried several methods, but so far none has led to the goal.

Best attempt: My best attempt was to get the AMEs by hand using mlogit, which I show below.

library(mlogit)
ml.d <- mlogit.data(df1, choice="Y", shape="wide")  # shape data for `mlogit()`
ml.fit <- mlogit(Y ~ 1 | D + x1 + x2, reflevel="1", data=ml.d)  # fit the model
# coefficient names
c.names <- names(ml.fit$model)[-
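One language-agnostic way to get the AME point estimates by hand is numeric differentiation: perturb the covariate, average the change in each predicted choice probability. A sketch in Python with scikit-learn standing in for the mlogit fit (the data here are made up, not the question's df1/Y/x1/x2):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy stand-in: two covariates, a 3-choice outcome
X = rng.normal(size=(500, 2))
y = rng.integers(0, 3, size=500)

model = LogisticRegression(max_iter=1000).fit(X, y)

# AME of covariate 0 on each choice probability, by central finite differences
h = 1e-5
X_hi, X_lo = X.copy(), X.copy()
X_hi[:, 0] += h
X_lo[:, 0] -= h
ame = (model.predict_proba(X_hi) - model.predict_proba(X_lo)).mean(axis=0) / (2 * h)

print(ame)        # one AME per outcome category
print(ame.sum())  # AMEs across categories sum to ~0, since probabilities sum to 1
```

For the standard errors the question asks about, bootstrapping this whole computation (refit, recompute AMEs, take the spread) is one workable route; the delta method is the analytic alternative.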

Multinomial Naive Bayes Classifier

怎甘沉沦 submitted on 2019-12-22 06:46:59
Question: I have been looking for a multinomial naive Bayes classifier on CRAN, and so far all I can come up with is the binomial implementation in package e1071. Does anyone know of a package that has a multinomial Bayes classifier?
Answer 1: bnlearn not doing it for you? http://www.bnlearn.com/ It is on CRAN, claims to implement "naive Bayes" network classifiers, and says "Discrete (multinomial) data sets are supported".
Source: https://stackoverflow.com/questions/8874058/multinomial-naive-bayes-classifier
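Outside CRAN, the canonical multinomial naive Bayes implementation is scikit-learn's MultinomialNB; a tiny sketch on made-up document-term counts, for reference:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy document-term counts: 2 classes, 6 "documents", 4 "words"
X = np.array([[3, 0, 1, 0],
              [4, 1, 0, 0],
              [2, 0, 2, 1],
              [0, 3, 0, 4],
              [1, 4, 0, 3],
              [0, 2, 1, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = MultinomialNB(alpha=1.0).fit(X, y)   # alpha = Laplace smoothing
print(clf.predict([[3, 0, 1, 0]]))         # → class 0 (word-0/word-2 heavy)
print(clf.predict([[0, 3, 0, 4]]))         # → class 1 (word-1/word-3 heavy)
```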

Efficient Matlab implementation of Multinomial Coefficient

风流意气都作罢 submitted on 2019-12-21 06:16:40
Question: I want to calculate the multinomial coefficient n! / (n0! n1! n2!), where n = n0 + n1 + n2. The Matlab implementation of this operator can easily be written as the function:

function N = nchooseks(k1,k2,k3)
    N = factorial(k1+k2+k3)/(factorial(k1)*factorial(k2)*factorial(k3));
end

However, when an index is larger than 170, its factorial is infinite, which generates NaN in some cases, e.g. 180!/(175! 3! 2!) -> Inf/Inf -> NaN. Other posts have solved this overflow issue for C and Python.
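The standard fix is to work in log space: sum log-gamma values instead of multiplying factorials, then exponentiate at the end. A Python sketch of the same nchooseks, checked against the exact integer value for the 180!/(175! 3! 2!) case:

```python
from math import lgamma, exp, comb

def nchooseks(k1, k2, k3):
    """Multinomial coefficient (k1+k2+k3)! / (k1! k2! k3!), computed in
    log space to avoid the overflow of factorial() above 170."""
    n = k1 + k2 + k3
    log_c = lgamma(n + 1) - lgamma(k1 + 1) - lgamma(k2 + 1) - lgamma(k3 + 1)
    return exp(log_c)

result = nchooseks(175, 3, 2)          # finite, no Inf/Inf -> NaN
exact = comb(180, 175) * comb(5, 3)    # exact integer value for comparison
print(result, exact)
```

Matlab's gammaln is the direct counterpart of lgamma, so the same one-line substitution works in the original function.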

In gbm multinomial dist, how to use predict to get categorical output? [duplicate]

☆樱花仙子☆ submitted on 2019-12-20 12:31:09
Question: This question already has answers here: GBM multinomial distribution, how to use predict() to get predicted class? (2 answers). Closed 4 years ago.
My response is a categorical variable (some letters), so I used distribution='multinomial' when fitting the model, and now I want to predict the response and obtain the output in terms of those letters, instead of a matrix of probabilities. However, predict(model, newdata, type='response') gives probabilities, the same as the result of type=
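The usual answer to the linked duplicate boils down to: take the most probable column of the probability matrix and map it back to the class labels. Sketched in Python (the labels and probabilities here are invented):

```python
import numpy as np

classes = np.array(["a", "b", "c"])  # hypothetical response levels

# A probability matrix like the one predict(..., type="response") returns:
# one row per observation, one column per class
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])

predicted = classes[probs.argmax(axis=1)]  # most probable class per row
print(predicted)                           # ['a' 'c' 'b']
```

In R the equivalent one-liner is colnames(p)[apply(p, 1, which.max)] on the (possibly dropped-to-2D) probability array.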

How does multinom() treat NA values by default?

旧巷老猫 submitted on 2019-12-20 07:35:42
Question: When I run multinom(), say Y ~ X1 + X2 + X3, if for one particular row X1 is NA (i.e. missing) but Y, X2, and X3 all have values, would the entire row be thrown out (as it would be in SAS)? How are missing values treated in multinom()?
Answer 1: Here is a simple example (from ?multinom in the nnet package) to explore the different na.action options:

> library(nnet)
> library(MASS)
> example(birthwt)
> (bwt.mu <- multinom(low ~ ., bwt))

Intentionally create a NA value:

> bwt[1,"age"] <- NA #
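R's default na.action is na.omit, so by default the model frame drops the whole row when any model variable is missing, i.e. listwise deletion, as in SAS. The same behavior sketched with pandas on toy data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Y":  [1, 2, 3, 1],
    "X1": [0.5, np.nan, 1.2, 0.9],  # one missing predictor value
    "X2": [1, 0, 1, 0],
    "X3": [2.0, 3.1, 0.7, 1.5],
})

# Listwise deletion: drop any row with a missing value among the model variables
complete = df.dropna(subset=["Y", "X1", "X2", "X3"])
print(len(complete))  # 3 — the row with the NA is gone, despite its valid Y, X2, X3
```

In R this default can be changed per fit via the na.action argument (e.g. na.exclude, na.fail).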

Dealing with negative values in sklearn MultinomialNB

☆樱花仙子☆ submitted on 2019-12-18 15:34:54
Question: I am normalizing my text input before running MultinomialNB in sklearn like this:

vectorizer = TfidfVectorizer(max_df=0.5, stop_words='english', use_idf=True)
lsa = TruncatedSVD(n_components=100)
mnb = MultinomialNB(alpha=0.01)
train_text = vectorizer.fit_transform(raw_text_train)
train_text = lsa.fit_transform(train_text)
train_text = Normalizer(copy=False).fit_transform(train_text)
mnb.fit(train_text, train_labels)

Unfortunately, MultinomialNB does not accept the negative values created

KeyError while printing trace in PyMC

独自空忆成欢 submitted on 2019-12-12 01:39:16
Question: I had read that by default some names are assigned to Stochastic variables. I am writing the relevant portion of my code below.

lam = pm.Uniform('lam', lower=0.0, upper=5, doc='lam')
parameters = pm.Dirichlet('parameters', [1,1,1,1], doc='parameters')
rv = [pm.Multinomial("rv"+str(i), count[i], prob_distribution[i],
                     value=data[i], observed=True)
      for i in xrange(0, len(count))]
m = pm.MCMC([lam, parameters, rv])
m.sample(10)
print m.trace('lam')[:]
print m.trace('parameters_0')[:]

The last