Bayesian

Hyperparameter tuning for TensorFlow

对着背影说爱祢 submitted on 2019-12-29 04:02:35
Question: I am searching for a hyperparameter tuning package for code written directly in TensorFlow (not Keras or TFLearn). Could you make some suggestions? Answer 1: Usually you don't need to have your hyperparameter optimisation logic coupled with the optimised model (unless your hyperparameter optimisation logic is specific to the kind of model you are training, in which case you would need to tell us a bit more). There are several tools and packages available for the task. Here is a good paper on the
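The answer's point about decoupling the tuning loop from the model can be made concrete. Below is a minimal, framework-agnostic random-search sketch in plain Python; `train_and_eval` is a hypothetical stand-in for your TensorFlow training run, and the toy objective is made up purely for illustration:

```python
import random

def random_search(space, train_and_eval, n_trials=20, seed=0):
    """Sample hyperparameters uniformly from `space` and keep the best trial.

    `space` maps a parameter name to a (low, high) range or a list of choices.
    `train_and_eval` is a user-supplied callable returning a validation score
    (higher is better); it is the only place the model framework appears.
    """
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {}
        for name, spec in space.items():
            if isinstance(spec, list):
                params[name] = rng.choice(spec)      # categorical choice
            else:
                low, high = spec
                params[name] = rng.uniform(low, high)  # continuous range
        score = train_and_eval(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective standing in for a real training run: peaks near lr=0.1, 64 units.
def fake_train(params):
    return -(params["lr"] - 0.1) ** 2 - (params["units"] - 64) ** 2 / 1e4

space = {"lr": (1e-4, 1.0), "units": [16, 32, 64, 128]}
best, score = random_search(space, fake_train, n_trials=200)
```

The same loop works unchanged whether the inner callable trains a raw TensorFlow graph, a Keras model, or anything else.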

Weka: why does getMargin return all zeros?

和自甴很熟 submitted on 2019-12-25 00:23:13
Question: I am using the Weka Java API. I trained a BayesNet on an Instances object (data set) data . /** * Initialization */ Instances data = ...; BayesNet bn = new EditableBayesNet(data); SearchAlgorithm learner = new TAN(); SimpleEstimator estimator = new SimpleEstimator(); /** * Training */ bn.initStructure(); learner.buildStructure(bn, data); estimator.estimateCPTs(bn); getMargin returns the marginal distribution for a node. Ideally, assuming node A has 3 possible values, and its node index is 0. Then, bn
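For context, the quantity getMargin is meant to return can be illustrated independently of Weka. A hypothetical plain-Python sketch of marginalising one node out of a full joint table (this is the definition of the marginal, not Weka's implementation, which propagates evidence rather than enumerating):

```python
from itertools import product

def marginal(joint, var_index, cardinalities):
    """Marginal distribution of one variable from a full joint table.

    `joint` maps a tuple of variable states (one index per variable) to a
    probability; `cardinalities` lists the number of states per variable.
    """
    probs = [0.0] * cardinalities[var_index]
    for states in product(*(range(c) for c in cardinalities)):
        probs[states[var_index]] += joint.get(states, 0.0)
    return probs

# Two binary variables A, B with P(A=0)=0.3 and B independent of A.
joint = {(0, 0): 0.15, (0, 1): 0.15, (1, 0): 0.35, (1, 1): 0.35}
print(marginal(joint, 0, [2, 2]))  # → [0.3, 0.7]
```

An all-zeros marginal from a network therefore usually points at the inference step not having been run (or evidence not set), rather than at the definition above.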

Stan version of a JAGS model which includes a sum of discrete values - Is it possible?

时光毁灭记忆、已成空白 submitted on 2019-12-24 14:34:08
Question: I was trying to run this model in Stan. I have a running JAGS version of it (which returns highly autocorrelated parameters), and I know how to formulate it as the CDF of a double exponential (with two rates), which would probably run without problems. However, I would like to use this version as a starting point for similar but more complex models. By now I have the suspicion that a model like this is not possible in Stan, maybe because of the discreteness introduced by taking the sum of a Boolean
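Stan has no discrete parameters, so the usual workaround is to marginalise the discrete latent out of the likelihood in log space (Stan's `log_sum_exp` / `log_mix` idiom). A tiny Python illustration of that marginalisation for a two-state latent z, with made-up numbers (not the poster's model):

```python
import numpy as np

def log_marginal(log_p_z, log_lik_given_z):
    """log p(y) = log( P(z=0) p(y|z=0) + P(z=1) p(y|z=1) ), in log space.

    np.logaddexp does the stable log-sum, so the (possibly tiny)
    per-component likelihoods are never exponentiated on their own.
    """
    return np.logaddexp(log_p_z[0] + log_lik_given_z[0],
                        log_p_z[1] + log_lik_given_z[1])

# z in {0, 1} with P(z=1)=0.3; made-up per-component log-likelihoods.
lp = np.log([0.7, 0.3])
ll = np.array([-5.0, -2.0])
print(log_marginal(lp, ll))
```

In a Stan model the same sum appears inside the `model` block as a `target +=` statement; the discrete variable itself never needs to be sampled.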

JAGS Poisson count censored data

偶尔善良 submitted on 2019-12-24 13:32:36
Question: R, Bayesian-stats and JAGS newbie here. I'm modeling some right-censored count data, and Poisson seems to be my best guess. I want to build a hierarchical model, as it leaves me more room to fine-tune the parameters. Can I simply write something like this: A[i,j] <- dpois(a[i,j]) a[i,j] <- b[i,]*x[i,j] + c[i] for all j, where x[i,j] are my variables, or should I separate the censored time interval from the previous ones or something? b[,] and c have a prior. Thank you! Answer 1: This
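Separately from the JAGS syntax, the likelihood that right-censoring implies can be written down directly: a censored count contributes a survival probability P(X ≥ c) instead of a pmf term. A hedged Python/scipy sketch, assuming "right-censored" means the true count is at least the recorded value:

```python
import numpy as np
from scipy.stats import poisson

def censored_poisson_loglik(counts, censored, mu):
    """Log-likelihood of right-censored Poisson data.

    Where `censored` is True the recorded count is only a lower bound on the
    true value, so that observation contributes log P(X >= c) (a survival
    term, logsf(c - 1) for an integer distribution) rather than log P(X == c).
    """
    counts = np.asarray(counts)
    censored = np.asarray(censored, dtype=bool)
    ll = np.where(
        censored,
        poisson.logsf(counts - 1, mu),  # log P(X >= c)
        poisson.logpmf(counts, mu),     # log P(X == c)
    )
    return ll.sum()

# Three fully observed counts and one right-censored at 7 (made-up data).
print(censored_poisson_loglik([2, 4, 3, 7], [False, False, False, True], mu=3.0))
```

In JAGS the same idea is usually expressed with `dinterval` on the censored observations; the survival term above is what that construction evaluates under the hood.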

How to specify initial probability values for variables in a dynamic Bayesian network using Bayes Server

回眸只為那壹抹淺笑 submitted on 2019-12-24 08:25:38
Question: I am trying to create a dynamic Bayesian network for parameter learning using Bayes Server in C# in my Unity game. The implementation is based on this article. A brief explanation of the model shown in the figure below: when a player starts playing the level, I assign them an initial probability of 0.5 that they already know the material they are learning, which is represented as the Prior node in the network, with an associated variable called priorKnowledge . This prior node is
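Bayes Server's C# API aside, the evidence update behind such a knowledge node can be sketched in a few lines. This uses the standard Bayesian-knowledge-tracing form; `guess` and `slip` are conventional BKT parameter names chosen here for illustration, not Bayes Server identifiers, and the values are arbitrary:

```python
def update_knowledge(p_known, correct, guess=0.2, slip=0.1):
    """Posterior probability that the skill is known after one graded answer.

    Standard knowledge-tracing evidence update: a correct answer may be a
    lucky guess, an incorrect one a slip, so the posterior weighs both paths.
    """
    if correct:
        num = p_known * (1 - slip)
        den = p_known * (1 - slip) + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = p_known * slip + (1 - p_known) * (1 - guess)
    return num / den

# Starting from the 0.5 prior in the question, one correct answer:
p = update_knowledge(0.5, correct=True)
print(round(p, 3))  # → 0.818
```

A dynamic Bayesian network generalises this single update: each time slice repeats the same conditional structure, and the engine carries the posterior forward as the next slice's prior.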

Determining High Density Region for a distribution in R

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-24 04:13:08
Question: Background: Normally, R gives quantiles for well-known distributions. Out of these quantiles, the lower 2.5% up to the upper 97.5% covers 95% of the area under these distributions. Question: Suppose I have an F distribution (df1 = 10, df2 = 90). In R, how can I determine the 95% of the area under this distribution such that this 95% covers only the HIGH DENSITY region, not the 95% that R normally gives (see my R code below)? Note: Clearly, the highest density is at the "mode" (dashed line in the
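One language-agnostic way to get such a region is density thresholding on a grid: keep the highest-density points until they accumulate 95% of the mass; for a unimodal density the kept points form one interval. (In R, the hdrcde package implements this idea.) A Python/scipy sketch for the F(10, 90) case, where the grid bounds and resolution are arbitrary choices:

```python
import numpy as np
from scipy.stats import f

def hdr_interval(dist, coverage=0.95, lo=1e-6, hi=10.0, n=100_000):
    """Approximate highest-density region of a unimodal distribution.

    Evaluates the pdf on a grid, sorts grid points by density, and keeps the
    densest ones until their summed mass reaches `coverage`.
    """
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    dens = dist.pdf(x)
    order = np.argsort(dens)[::-1]          # indices, densest first
    mass = np.cumsum(dens[order]) * dx      # accumulated probability mass
    kept = order[: np.searchsorted(mass, coverage) + 1]
    return x[kept.min()], x[kept.max()]

dist = f(10, 90)
lo95, hi95 = hdr_interval(dist)
print(lo95, hi95)  # HDR endpoints; the interval sits left of the equal-tail one
```

Unlike the equal-tail interval from qf(0.025)/qf(0.975), this interval has (approximately) equal density at both endpoints, which is the defining property of an HDR.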

Bayesian Optimisation applied in CatBoost

巧了我就是萌 submitted on 2019-12-23 22:25:05
Question: This is my attempt at applying BayesSearchCV in CatBoost: from catboost import CatBoostClassifier from skopt import BayesSearchCV from sklearn.model_selection import StratifiedKFold # Classifier bayes_cv_tuner = BayesSearchCV( estimator = CatBoostClassifier( silent=True ), search_spaces = { 'depth': (2, 16), 'l2_leaf_reg': (1, 500), 'bagging_temperature': (1e-9, 1000, 'log-uniform'), 'border_count': (1, 255), 'rsm': (0.01, 1.0, 'uniform'), 'random_strength': (1e-9, 10, 'log-uniform'), 'scale_pos_weight

Java, Weka: NaiveBayesUpdateable: Cannot handle numeric class

左心房为你撑大大i submitted on 2019-12-23 20:14:03
Question: I am trying to use the NaiveBayesUpdateable classifier from Weka. My data contains both nominal and numeric attributes: @relation cars @attribute country {FR, UK, ...} @attribute city {London, Paris, ...} @attribute car_make {Toyota, BMW, ...} @attribute price numeric %% car price @attribute sales numeric %% number of cars sold I need to predict the number of sales (numeric!) based on the other attributes. When I run: // Train classifier ArffLoader loader = new ArffLoader(); loader.setFile(new File
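NaiveBayesUpdateable requires a nominal class attribute, so the usual workarounds are either a regression-capable scheme or discretising the numeric target before training (e.g. with Weka's Discretize filter). As an illustration of what such discretisation does, here is an equal-width binning in plain Python (a conceptual sketch, not the Weka filter itself):

```python
def equal_width_bins(values, n_bins=3):
    """Map numeric values to nominal bin labels of equal width."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal values
    labels = []
    for v in values:
        # Clamp the top value into the last bin instead of creating bin n_bins.
        idx = min(int((v - lo) / width), n_bins - 1)
        labels.append(f"bin{idx}")
    return labels

print(equal_width_bins([10, 20, 30, 40, 50, 60], n_bins=3))
# → ['bin0', 'bin0', 'bin1', 'bin1', 'bin2', 'bin2']
```

After binning, `sales` becomes a nominal attribute with values bin0..binN, which a naive Bayes classifier can handle; the price paid is that predictions are bin labels rather than exact counts.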

How to calculate log(sum of terms) from its component log-terms

萝らか妹 submitted on 2019-12-23 15:14:12
Question: (1) The simple version of the problem: how to calculate log(P1+P2+...+Pn), given log(P1), log(P2), ..., log(Pn), without taking the exp of any term to recover the original Pi. I don't want to recover the original Pi because they are so small they may cause numeric underflow. (2) The long version of the problem: I am using Bayes' theorem to calculate a conditional probability P(Y|E): P(Y|E) = P(E|Y)*P(Y) / P(E) I have a thousand probabilities multiplied together: P(E|Y) = P(E1|Y) * P(E2|Y
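The standard answer to (1) is the log-sum-exp trick: factor out the largest log-term before exponentiating, so every exponent is ≤ 0 and nothing underflows. A plain-Python sketch, applied to normalising the log-joint terms from (2) into a posterior:

```python
import math

def log_sum_exp(log_terms):
    """log(P1 + ... + Pn) from log(P1), ..., log(Pn), without underflow."""
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

def posterior_from_log_joints(log_joints):
    """Normalise log P(E|Y=k)P(Y=k) terms into posterior probabilities.

    log P(Y=k|E) = log_joints[k] - log_sum_exp(log_joints); exponentiating
    the *difference* is safe even when each joint term alone would underflow.
    """
    norm = log_sum_exp(log_joints)
    return [math.exp(lj - norm) for lj in log_joints]

# Each joint term here is around e^-1000 and would underflow to 0.0 on its own.
post = posterior_from_log_joints([-1000.0, -1001.0, -1002.0])
print(post)
```

Note that P(E) never needs to be computed on the probability scale: it is exactly the log_sum_exp of the log-joints, so the whole Bayes computation stays in log space until the final, well-scaled division.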

What pymc3 Monte-Carlo stepper can I use for a custom categorical distribution?

穿精又带淫゛_ submitted on 2019-12-23 04:53:25
Question: I am working on implementing hidden Markov chains in pymc3. I have gotten pretty far in implementing the hidden states. Below, I show a simple 2-state Markov chain: import numpy as np import pymc3 as pm import theano.tensor as tt # Markov chain sample with 2 states that was created # to have prob 0->1 = 0.1 and prob 1->0 = 0.3 sample = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0,
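Whichever pymc3 stepper ends up handling the discrete states, it helps to have a ground truth to compare against: the transition probabilities of a 0/1 chain can be estimated directly by counting. A plain-Python sketch with a short made-up sample (the one in the question is cut off):

```python
def transition_probs(chain):
    """Empirical transition matrix of a 2-state (0/1) Markov chain.

    Counts each observed prev -> cur step, then normalises each row into
    conditional probabilities P(cur | prev).
    """
    counts = [[0, 0], [0, 0]]
    for prev, cur in zip(chain, chain[1:]):
        counts[prev][cur] += 1
    return [
        [c / sum(row) if sum(row) else 0.0 for c in row]
        for row in counts
    ]

# Short made-up sample, not the (truncated) one from the question.
chain = [0, 0, 0, 1, 1, 0, 0, 1, 0, 0]
print(transition_probs(chain))
```

These counting estimates are the maximum-likelihood values; a sampler for the same chain (e.g. with Dirichlet priors on the rows) should concentrate near them as the sample grows.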