gpflow

How to fix some dimensions of a kernel lengthscale in gpflow?

我只是一个虾纸丫 · Submitted on 2021-02-11 04:54:12
Question: I have a 2-D kernel, k = gpflow.kernels.RBF(lengthscales=[24*5, 1e-5]); m = gpflow.models.GPR(data=(X, Y), kernel=k, mean_function=None), and I want to fix the lengthscale in the second dimension and optimise only the first. I can disable optimisation of all lengthscales with gpflow.set_trainable(m.kernel.lengthscales, False), but I cannot pass just one dimension to this method. In GPy we would call m.kern.lengthscale[1:].fixed() or similar. Maybe I could use a transform to roughly achieve this (e.g. …

GPFlow-2.0 - issue with default_float and likelihood variance

非 Y 不嫁゛ · Submitted on 2021-01-29 15:00:52
Question: I am trying to use gpflow (2.0rc) with float64 and have been struggling to get even simple examples to work. I configure gpflow using gpflow.config.set_default_float(np.float64). I am using GPR: # Model construction: k = gpflow.kernels.Matern52(variance=1.0, lengthscale=0.3); m = gpflow.models.GPR((X, Y), kernel=k); m.likelihood.variance = 0.01. Indeed, if I print a summary, both parameters have dtype float64. However, if I try to predict with this model, I get an error: tensorflow.python…

Kernel's hyper-parameters; initialization and setting bounds

有些话、适合烂在心里 · Submitted on 2021-01-29 12:31:15
Question: I think many other people like me might be interested in how they can use GPflow for their particular problems. The key is how customizable GPflow is, and a good example would be very helpful. In my case, I read and tried many comments in raised issues without any real success. Setting kernel model parameters is not straightforward (creating them with default values, then changing them via the delete-object method). The transform method is vague. It would be really helpful if you could add an example …

Interpreting priors on constrained parameters in GPFlow

狂风中的少年 · Submitted on 2021-01-28 22:16:50
Question: I wasn't sure whether to open a GitHub issue for this, but I think it reflects my own lack of understanding rather than a bug, so I post it here. I would like to put priors on the hyperparameters of the kernels in a GPflow model (an RBF kernel in this case). This is easy to do; for example, I can write kern.variance.prior = gpf.priors.Gaussian(0, 1) for the kernel variance parameter. What I am unsure about is what this statement does with constrained parameters, such as the …

Save the model in gpflow 2

倖福魔咒の · Submitted on 2020-05-17 07:46:27
Question: I am trying to save a GPflow model (in GPflow version 2.0): model = gpflow.models.VGP((X, Y_data), kernel=kernel, likelihood=likelihood, num_latent_gps=1). Since the gpflow package no longer has a saver module, could anyone help me with an alternative? Answer 1: There are different ways of saving a GPflow model, and the right one depends on your use case. You can either use TensorFlow's checkpointing (saving the trained weights) or TensorFlow's SavedModel format (saving weights and …

Setting hyperparameter optimization bounds in GPflow 2.0

拜拜、爱过 · Submitted on 2020-01-15 08:20:07
Question: In GPflow 1.0, if I wanted to set hard bounds on a parameter such as a lengthscale (i.e. limit its optimisation range), transforms.Logistic(a=4., b=6.) would bound the parameter between 4 and 6. GPflow 2.0's documentation says that transforms are handled by TensorFlow Probability's Bijector classes. Which Bijector class handles setting hard limits on parameters, and what is the proper way to implement it? A similar question was asked here (Kernel's hyper-parameters; …

Bounding hyperparameter optimization with Tensorflow bijector chain in GPflow 2.0

我们两清 · Submitted on 2020-01-06 05:27:09
Question: While doing GP regression in GPflow 2.0, I want to set hard bounds on a lengthscale (i.e. limit its optimisation range). Following this thread (Setting hyperparameter optimization bounds in GPflow 2.0), I constructed a TensorFlow Bijector chain (see the bounded_lengthscale function below). However, the bijector chain does not prevent the model from optimising outside the intended bounds. What do I need to change so that the bounded_lengthscale function puts hard bounds on …

ImportError: cannot import name 'AdamOptimizer' in gpflow

独自空忆成欢 · Submitted on 2019-12-23 02:26:54
Question: I want to use AdamOptimizer with GPflow, but I cannot import it as the source code in this link (line 26) suggests. I am unsure what I am missing. I have tried different gpflow versions (1.1.1 and 1.3). Thanks. Answer 1: I guess this happens because you are using TF >= 1.14. The released GPflow packages <= 1.4.1 only support TF <= 1.13.1. The GPflow develop branch now supports TF 1.14, but this has not yet been released. There is an unofficial (in-progress) GPflow 2 with TF 2 …