Generating predictions from inferred parameters in pymc3

Asked by 南笙 on 2020-12-25 15:15

I run into a common problem I'm wondering if someone can help with. I often would like to use pymc3 in two modes: training (i.e. actually running inference on parameters) and prediction (i.e. using the inferred parameters to generate predictions for new data).

2 Answers
  • 2020-12-25 15:21

    Note: This functionality is now incorporated into the core code as the pm.sample_ppc method. Check out the docs for more info.

    Based on this link (dead as of July 2017) sent to me by twiecki, there are a couple of tricks that solve my issue. The first is to put the training data into a shared theano variable. This lets us change the data later without breaking the theano computation graph.

    import theano

    X1_shared = theano.shared(X1)
    X2_shared = theano.shared(X2)
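    The point of the shared variable is that downstream graph nodes read its current value at run time, so swapping the data does not require rebuilding the graph. A minimal pure-Python analogy of that behavior (not theano itself; `SharedBox` is a hypothetical stand-in):

    ```python
    class SharedBox:
        """Hypothetical stand-in for theano.shared: a mutable container
        that downstream computations read at call time."""
        def __init__(self, value):
            self._value = value

        def get_value(self):
            return self._value

        def set_value(self, value):
            self._value = value

    # A "computation" built against the box sees whatever value it holds now.
    x = SharedBox([1, 2, 3])
    double = lambda: [2 * v for v in x.get_value()]

    result_train = double()    # uses the original data: [2, 4, 6]
    x.set_value([10, 20])      # swap in new data without rebuilding `double`
    result_new = double()      # uses the new data: [20, 40]
    ```

    This is the same idea as calling set_value on the theano shared variables after inference, as shown later in this answer.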
    

    Next, build the model and run the inference as usual, but using the shared variables.

    import pymc3 as pm
    from pymc3 import Model, Normal, HalfNormal, find_MAP, NUTS, sample

    basic_model = Model()

    with basic_model:

        # Priors for unknown model parameters
        alpha = Normal('alpha', mu=0, sd=10)
        beta = Normal('beta', mu=0, sd=10, shape=2)
        sigma = HalfNormal('sigma', sd=1)

        # Expected value of outcome
        mu = alpha + beta[0]*X1_shared + beta[1]*X2_shared

        # Likelihood (sampling distribution) of observations
        Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)

        start = find_MAP()
        step = NUTS(scaling=start)
        trace = sample(2000, step, start=start)
    

    Finally, there's a function under development (it will likely eventually get added to pymc3) that lets you generate posterior predictive samples for new data.

    from collections import defaultdict

    import numpy as np
    import pymc3 as pm

    def run_ppc(trace, samples=100, model=None):
        """Generate posterior predictive samples from a model given a trace."""
        if model is None:
            model = pm.modelcontext(model)

        ppc = defaultdict(list)
        # Draw `samples` random parameter sets from the trace and simulate
        # each observed variable under those parameters.
        for idx in np.random.randint(0, len(trace), samples):
            param = trace[idx]
            for obs in model.observed_RVs:
                ppc[obs.name].append(obs.distribution.random(point=param))

        return ppc
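    Stripped of pymc3, the loop above just indexes random draws from the trace and simulates the likelihood at each draw. A self-contained numpy sketch of the same pattern, using a fabricated normal-regression "trace" (all names here are illustrative, not pymc3 API):

    ```python
    from collections import defaultdict
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake "trace": 500 posterior draws of alpha, beta, sigma.
    trace = [{"alpha": rng.normal(1, 0.1),
              "beta": rng.normal(2, 0.1),
              "sigma": abs(rng.normal(0.5, 0.05))} for _ in range(500)]

    X_new = np.linspace(0, 1, 10)  # "new data", standing in for the shared variables

    def run_ppc_sketch(trace, samples=100):
        ppc = defaultdict(list)
        # Pick `samples` random parameter sets from the trace...
        for idx in rng.integers(0, len(trace), samples):
            p = trace[idx]
            mu = p["alpha"] + p["beta"] * X_new          # expected value, as in the model
            # ...and simulate the likelihood under each one.
            ppc["Y_obs"].append(rng.normal(mu, p["sigma"]))
        return ppc

    ppc = run_ppc_sketch(trace, samples=200)
    # ppc["Y_obs"] holds 200 arrays, each the same shape as X_new
    ```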
    

    Next, pass in the new data that you want to run predictions on:

    X1_shared.set_value(X1_new)
    X2_shared.set_value(X2_new)
    

    Finally, you can generate posterior predictive samples for the new data.

    ppc = run_ppc(trace, model=model, samples=200)
    

    The variable ppc is a dictionary with keys for each observed variable in the model. So, in this case ppc['Y_obs'] would contain a list of arrays, each of which is generated using a single set of parameters from trace.
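    A common next step is to stack that list of arrays and summarize it pointwise. A self-contained sketch, with a fabricated list standing in for ppc['Y_obs']:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for ppc['Y_obs']: 200 simulated datasets of 10 points each.
    y_obs_samples = [rng.normal(0, 1, size=10) for _ in range(200)]

    stacked = np.stack(y_obs_samples)   # shape (200, 10): draws x data points
    mean = stacked.mean(axis=0)         # posterior predictive mean per data point
    lo, hi = np.percentile(stacked, [2.5, 97.5], axis=0)  # 95% predictive interval
    ```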

    Note that you can even modify the parameters extracted from the trace. For example, I had a model using a GaussianRandomWalk variable, and I wanted to generate predictions into the future. While you could allow pymc3 to sample into the future (i.e. allow the random walk variable to diverge), I just wanted to use a fixed value of the coefficient corresponding to the last inferred value. This logic can be implemented in the run_ppc function.
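    That kind of modification can be sketched in plain numpy: take a sampled random-walk path and hold its last inferred value fixed over the forecast horizon (hypothetical variable names, not the author's actual code):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in for one posterior draw of a GaussianRandomWalk coefficient
    # over 50 observed time steps.
    walk = np.cumsum(rng.normal(0, 0.1, size=50))

    horizon = 20
    # Instead of letting the walk keep diverging into the future, pin the
    # coefficient at its last inferred value for every forecast step.
    future_coef = np.full(horizon, walk[-1])
    extended = np.concatenate([walk, future_coef])
    # `extended` now covers 70 steps: 50 inferred, 20 held constant.
    ```

    In run_ppc, the equivalent change is editing the relevant entry of `param` before calling obs.distribution.random.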

    It's also worth mentioning that the run_ppc function is extremely slow. It takes about as much time as running the actual inference. I suspect this has to do with some inefficiency related to how theano is used.

    EDIT: The link originally included seems to be dead.

  • 2020-12-25 15:37

    The answer above from @santon is correct; I am just adding to it.

    You no longer need to write your own run_ppc method: pymc3 now provides the sample_posterior_predictive method, which does the same thing.
