Question
Use case
Suppose I have an observation y_0 at X_0, which I'd like to model with a Gaussian process with hyperparameters theta. Suppose I then determine a distribution over the hyperparameters theta by hierarchically sampling the posterior.
Now I'd like to evaluate the log posterior probability of another observation, say y_1 at X_1, averaged over the hyperparameter distribution:
E_theta [ log P(y_1 | y_0, X_0, X_1, theta) ]
Ideally, I'd draw from the posterior in theta, calculate log P(y_1 | y_0, X_0, X_1, theta) for each draw, and then average the log probabilities (equivalently, take the log of the geometric mean of the probabilities).
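To make that averaging concrete, here is a minimal sketch of the Monte Carlo estimate, with hypothetical stand-ins (theta_samples and log_p are placeholders for the real posterior draws and predictive density, not part of any actual model):
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: posterior draws of a scalar hyperparameter theta,
# and log_p(theta) standing in for log P(y_1 | y_0, X_0, X_1, theta).
theta_samples = rng.normal(1.0, 0.1, size=1000)

def log_p(theta):
    return -0.5 * theta**2  # placeholder density, for illustration only

# Arithmetic mean of the log probabilities = log of the geometric mean
# of the probabilities = Monte Carlo estimate of E_theta[log P].
estimate = np.mean([log_p(t) for t in theta_samples])
print(estimate)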
Question 1
Suppose I have the result of a sampling, i.e. a trace, for example:
with pm.Model() as model:
    ...
    trace = pm.sample(1000)
How do I evaluate another tensor over these samples (or a subset of them)? I can only find a partial solution using pm.Deterministic defined as part of the model:
with model:
    h = pm.Deterministic('h', thing_to_calculate_over_the_posterior)
    ### Make sure nothing then depends on h, so it doesn't affect the sampling
    trace = pm.sample(1000)
h_sampled = trace['h']
This doesn't feel quite right. I feel like you should be able to evaluate anything over the trace (or a subset of it) after it has been sampled.
Question 2
In pymc3, is there a method in pm.gp that will create the tensor representing log P(y_1 | y_0, X_0, X_1, theta), or must I create it myself? The latter entails writing out the posterior mean and covariance (something already done inside pm.gp) and then calling a Cholesky decomposition, etc.
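For reference, the "write it out yourself" route amounts to the standard GP predictive density. A minimal numpy/scipy sketch, assuming a squared-exponential kernel and treating y1 as a noisy observation (the helper names here are illustrative, not pm.gp API):
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.stats import multivariate_normal

def rbf(A, B, ls):
    # squared-exponential kernel on column vectors A (n,1) and B (m,1)
    return np.exp(-0.5 * ((A - B.T) / ls)**2)

def gp_predictive_logp(X0, y0, X1, y1, ls, noise):
    # condition the GP on (X0, y0), then evaluate the density at (X1, y1)
    K = rbf(X0, X0, ls) + noise**2 * np.eye(len(X0))
    Ks = rbf(X0, X1, ls)
    c = cho_factor(K)                   # Cholesky of the training covariance
    mean = Ks.T @ cho_solve(c, y0)      # posterior predictive mean
    cov = rbf(X1, X1, ls) - Ks.T @ cho_solve(c, Ks)
    cov += noise**2 * np.eye(len(X1))   # assumes y1 carries observation noise
    return multivariate_normal.logpdf(y1, mean=mean, cov=cov)

# toy usage
X0 = np.linspace(0, 1, 5)[:, None]; y0 = np.sin(X0).ravel()
X1 = np.linspace(0, 1, 3)[:, None]; y1 = np.cos(X1).ravel()
print(gp_predictive_logp(X0, y0, X1, y1, ls=0.5, noise=0.1))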
Answer 1:
I'll answer with an example for this use case. Essentially, separate the sampling from the conditional definition, then use Method 1 or Method 2 below.
import numpy as np
import pymc3 as pm
# Data generation
X0 = np.linspace(0, 10, 100)[:,None]
y0 = X0**(0.5) + np.exp(-X0/5)*np.sin(10*X0)
y0 += 0.1*np.random.normal(size=y0.shape)
y0 = np.squeeze(y0)
# y1
X1 = np.linspace(0, 15, 200)[:,None]
y1 = X1**(0.5) + np.exp(-X1/6)*np.sin(8*X1)
y1 = np.squeeze(y1)
# 'Solve' the inference problem
with pm.Model() as model:
    l = pm.HalfNormal('l', 5.)
    cov_func = pm.gp.cov.ExpQuad(1, ls=l)
    gp = pm.gp.Marginal(cov_func=cov_func)
    y0_ = gp.marginal_likelihood('y0', X0, y0, 0.1)
    trace = pm.sample(100)
# Define the object P(y1 | X1, y0, X0)
with model:
    y1_ = gp.conditional('y1', X1, given={'X': X0, 'y': y0, 'noise': 0.1})
# Note: the given=... is not strictly required, as it's cached from above
###
# Method 1
logp = y1_.logp
logp_vals1 = []
for point in trace:
    point['y1'] = y1
    logp_vals1.append(logp(point))
# Note: this is approximately 100x faster than logp_vals1.append(y1_.logp(point)),
# because logp is a property with a lot of overhead.
###
# Method 2
import theano
y1_shr = theano.shared(y1)
with model:
    logp = pm.Deterministic('logp', y1_.distribution.logp(y1_shr))
logp_val2 = [pm.distributions.draw_values([logp], point) for point in trace]
Method 1 appears to be 2-3 times faster on my machine.
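Whichever method you use, the use-case quantity E_theta [ log P(y_1 | y_0, X_0, X_1, theta) ] is then just the mean of the collected log probabilities, e.g. continuing the script above:
# Monte Carlo estimate of E_theta[ log P(y1 | y0, X0, X1, theta) ]
expected_logp = np.mean(logp_vals1)
# (logp_val2 from Method 2 can be averaged the same way after np.squeeze)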
Source: https://stackoverflow.com/questions/48565385/how-to-calculate-log-posterior-of-a-gp-over-a-trace-in-pymc3