Question
How can one use pymc to parameterize a probabilistic graphical model?
Suppose I have a PGM with two nodes X and Y, and the graph is X -> Y. X takes two values {0,1}, and Y also takes two values {0,1}.
I want to use pymc to learn the parameters of the distribution and populate the graphical model with it for running inferences.
The way I could think of is as follows:
X_p = pm.Uniform("X_p", 0, 1)
X = pm.Bernoulli("X", X_p, value=X_Vals, observed=True)
Y0_p = pm.Uniform("Y0_p", 0, 1)
Y0 = pm.Bernoulli("Y0", Y0_p, value=Y0Vals, observed=True)
Y1_p = pm.Uniform("Y1_p", 0, 1)
Y1 = pm.Bernoulli("Y1", Y1_p, value=Y1Vals, observed=True)
Here Y0Vals are the values of Y corresponding to observations where X = 0, and Y1Vals are the values of Y corresponding to observations where X = 1.
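For concreteness, assuming the full observations live in NumPy arrays `X_Vals` and `Y_Vals` (the sample data below is hypothetical), the conditional slices can be built with boolean indexing:

```python
import numpy as np

# Hypothetical observed data for the two binary nodes
X_Vals = np.array([0, 0, 1, 1, 1, 0, 1, 0])
Y_Vals = np.array([0, 1, 1, 1, 0, 0, 1, 1])

# Y0Vals / Y1Vals: observations of Y, split by the observed X value
Y0Vals = Y_Vals[X_Vals == 0]
Y1Vals = Y_Vals[X_Vals == 1]
```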
The plan is to draw MCMC samples from these and use the posterior means of Y0_p and Y1_p to populate the discrete Bayesian network's probability tables. So the table for P(X) is (X_p, 1-X_p) over (X=1, X=0), while that of P(Y|X) is:

          Y=1      Y=0
    X=0   Y0_p     1-Y0_p
    X=1   Y1_p     1-Y1_p
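Since `pm.Bernoulli`'s parameter is the probability of observing 1, Y0_p estimates P(Y=1 | X=0). And because Uniform(0,1) is Beta(1,1), which is conjugate to the Bernoulli likelihood, the means the MCMC run should converge to can be checked in closed form. A minimal sketch (sample data hypothetical):

```python
import numpy as np

def posterior_mean(vals, a=1.0, b=1.0):
    """Posterior mean of a Bernoulli parameter under a Beta(a, b) prior.

    Uniform(0, 1) is Beta(1, 1), so this is the value the MCMC mean
    of X_p, Y0_p or Y1_p should approach with enough samples.
    """
    vals = np.asarray(vals)
    return (a + vals.sum()) / (a + b + len(vals))

# Hypothetical observed data
X_Vals = np.array([0, 0, 1, 1, 1, 0, 1, 0])
Y_Vals = np.array([0, 1, 1, 1, 0, 0, 1, 1])

X_p = posterior_mean(X_Vals)                # estimate of P(X=1)
Y0_p = posterior_mean(Y_Vals[X_Vals == 0])  # estimate of P(Y=1 | X=0)
Y1_p = posterior_mean(Y_Vals[X_Vals == 1])  # estimate of P(Y=1 | X=1)
```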
Questions:
- Is this the correct way of doing this?
- Doesn't this get clumsy, especially if X has hundreds of discrete values, or if a variable has two parents X and Y with 10 discrete values each?
- Is there something better I can do?
- Are there any good books that detail how we can do this kind of interconnection?
Source: https://stackoverflow.com/questions/37658557/how-to-use-pymc-to-parameterize-a-probabilistic-graphical-model