Question
I wasn't sure whether to open an issue on GitHub for this, but I think it is not so much an issue as a gap in my understanding, so I'm posting it here.
I would like to put priors on the hyperparameters of the kernels in a GPflow model (an RBF kernel in this case). This is easy to do; for example, I can write

kern.variance.prior = gpf.priors.Gaussian(0, 1)

on the kernel variance parameter.
What I am unsure about is what this statement does with constrained parameters, such as the variance above. It is constrained to be positive, and the manual says that GPflow also maintains an unconstrained representation, log(exp(theta) - 1).
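For concreteness, the unconstrained representation log(exp(theta) - 1) described above is the inverse of the softplus function. A minimal plain-Python sketch (my own illustration, not GPflow's code) showing how any real unconstrained value maps to a positive constrained one:

```python
import math

def forward(unconstrained):
    # softplus: maps any real number to a positive value
    return math.log1p(math.exp(unconstrained))

def backward(constrained):
    # inverse softplus: the unconstrained representation log(exp(theta) - 1)
    return math.log(math.expm1(constrained))

theta_u = backward(1.0)     # unconstrained representation of variance = 1.0
print(forward(theta_u))     # round-trips back to 1.0
print(forward(-5.0) > 0)    # even very negative inputs map to positive values
```

This is why the unconstrained representation is convenient for gradient-based optimisation: the optimiser can move freely over the whole real line while the parameter stays positive.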
What I would like to understand is which representation the prior is placed on. Will this normal distribution be placed on the unconstrained representation, or directly on the constrained one? The latter would be a little strange, since a Gaussian has support on negative values (perhaps I should use only distributions with positive support?).
Thanks!
Answer 1:
Yes, the distribution is placed on the constrained (+ve) parameter.
Note that the change of variables is accounted for using the Jacobian of the transform.
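To see what that Jacobian correction looks like, here is a minimal plain-Python sketch (my own illustration, not GPflow's implementation), assuming the transform is the softplus described in the question. The prior density is evaluated at the constrained value, and the log-derivative of the transform is added so that the density over the unconstrained variable is correct:

```python
import math

def softplus(x):
    # maps unconstrained x to a positive constrained value
    return math.log1p(math.exp(x))

def log_gaussian_pdf(x, mu=0.0, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def unconstrained_log_density(x):
    # prior evaluated at the constrained parameter value...
    theta = softplus(x)
    # ...plus the log-Jacobian of the transform:
    # d/dx softplus(x) = sigmoid(x), so log-Jacobian = -log(1 + exp(-x))
    log_jacobian = -math.log1p(math.exp(-x))
    return log_gaussian_pdf(theta) + log_jacobian
```

A useful sanity check: integrating exp(unconstrained_log_density) over the whole real line gives the mass that the N(0, 1) prior puts on the positive half-line, i.e. 0.5, because the constrained parameter can only take positive values.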
True, in this case putting a Gaussian prior on a positive variable makes little sense. The outcome might be that you effectively get a truncated Gaussian prior, but I'd have to check; that's not how it's intended to be used!
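To quantify why this makes little sense: if the prior is evaluated only where the constrained parameter can live, a N(0, 1) prior wastes half its mass on values the parameter can never take. A small sketch (my own check, not GPflow behaviour):

```python
import math

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# mass of N(0, 1) on the positive half-line: only half the prior's support
mass_positive = 1 - gaussian_cdf(0.0)
print(mass_positive)  # 0.5
```

A distribution with positive support, such as a Gamma, would place all of its mass on admissible values instead.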
Perhaps GPflow should warn users if priors are not compatible with constraints? PRs welcome.
Source: https://stackoverflow.com/questions/57067948/interpreting-priors-on-constrained-parameters-in-gpflow