How to create sum of different kernel objects in TensorFlow Probability?


Question


I have a question about specifying a kernel function in TensorFlow Probability.

Usually, if I want to create a kernel object, I will write

import tensorflow as tf
import tensorflow_probability as tfp
tfp_kernels = tfp.positive_semidefinite_kernels

kernel_obj = tfp_kernels.ExponentiatedQuadratic(*args, **kwargs)

I know that kernel objects support batch broadcasting. But what if I want to build a kernel object that is the sum of several different kernel objects, as in additive Gaussian processes?

I am not sure how to "sum" kernel objects in TensorFlow. What I am able to do is create several separate kernel objects K1, ..., KJ. It seems that there is no similar question online.

Thanks for the help in advance.


Update: I tried a direct +, but there is something strange with the covariance matrix.

I made up the following example:

import numpy as np
tfd = tfp.distributions

feature1 = np.array([1, 2, 3, 5], dtype=np.float32)[:, np.newaxis]
feature2 = np.array([4.2, 6.5, 7.4, 8.3], dtype=np.float32)[:, np.newaxis]
features = np.concatenate([feature1, feature2], axis=1)

k1 = tfp_kernels.ExponentiatedQuadratic(amplitude=tf.cast(2.0, tf.float32),
                                        length_scale=tf.cast(2.0, tf.float32),
                                        feature_ndims=1,
                                        name='k1')

k2 = tfp_kernels.ExponentiatedQuadratic(amplitude=tf.cast(1.5, tf.float32),
                                        length_scale=tf.cast(1.5, tf.float32),
                                        feature_ndims=1,
                                        name='k2')

K = k1 + k2


gp_1 = tfd.GaussianProcess(kernel=k1,
                           index_points=feature1,
                           jitter=tf.cast(0, tf.float32),
                           name='gp_1')

gp_2 = tfd.GaussianProcess(kernel=k2,
                           index_points=feature2,
                           jitter=tf.cast(0, tf.float32),
                           name='gp_2')

gp_K1 = tfd.GaussianProcess(kernel=K,
                            index_points=feature1,
                            jitter=tf.cast(0, tf.float32),
                            name='gp_K1')

gp_K2 = tfd.GaussianProcess(kernel=K,
                            index_points=feature2,
                            jitter=tf.cast(0, tf.float32),
                            name='gp_K2')

gp_K = tfd.GaussianProcess(kernel=K,
                           index_points=features,
                           jitter=tf.cast(0, tf.float32),
                           name='gp_K')


gp_1_cov = gp_1.covariance()
gp_2_cov = gp_2.covariance()
gp_K1_cov = gp_K1.covariance()
gp_K2_cov = gp_K2.covariance()
gp_K_cov = gp_K.covariance()

with tf.Session() as my_sess:
    [gp_1_cov_, gp_2_cov_, gp_K1_cov_, gp_K2_cov_, gp_K_cov_] = my_sess.run(
        [gp_1_cov, gp_2_cov, gp_K1_cov, gp_K2_cov, gp_K_cov])

print(gp_1_cov_)
print(gp_2_cov_)
print(gp_K1_cov_)
print(gp_K2_cov_)
print(gp_K_cov_)

The first four covariance matrices are fine, and I double-checked them by comparing k(x_i, x_j) element-wise.

However, I don't know how the last one is computed. I tried:

  1. feature_1 with kernel_1 and feature_2 with kernel_2
  2. feature_1 with kernel_2 and feature_2 with kernel_1

Below are the results of the last three matrices:

[[6.25       5.331647   3.3511252  0.60561347]
 [5.331647   6.25       5.331647   1.6031142 ]
 [3.3511252  5.331647   6.25       3.3511252 ]
 [0.60561347 1.6031142  3.3511252  6.25      ]]
[[6.25       2.7592793  1.3433135  0.54289836]
 [2.7592793  6.25       5.494186   3.7630994 ]
 [1.3433135  5.494186   6.25       5.494186  ]
 [0.54289836 3.7630994  5.494186   6.25      ]]
[[6.25       2.3782768  0.769587   0.06774138]
 [2.3782768  6.25       4.694947   1.0143608 ]
 [0.769587   4.694947   6.25       2.9651313 ]
 [0.06774138 1.0143608  2.9651313  6.25      ]]

They don't match my results. Does anyone know how the last matrix is computed when the kernel is a sum and the index_points are the concatenated features?

Or more generally, how do I specify the kernel so that I can fit a model such as an additive Gaussian process, where different index_points correspond to different kernel functions, i.e. y_i = f_1(x_{1,i}) + f_2(x_{2,i}) + ..., within the TensorFlow Probability framework?
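For reference, here is a minimal sketch of the covariance I would expect for the additive model, under the assumption that each component kernel should see only its own feature column (k1 on feature1, k2 on feature2), built directly with each kernel's .matrix method:

k1_cov = k1.matrix(feature1, feature1)  # k1 evaluated on column 1 only
k2_cov = k2.matrix(feature2, feature2)  # k2 evaluated on column 2 only
expected_additive_cov = k1_cov + k2_cov  # covariance of f_1(x_1) + f_2(x_2)

with tf.Session() as sess:
    print(sess.run(expected_additive_cov))

This does not match the last matrix printed above, which is why I suspect the summed kernel is not doing a per-column split.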


Answer 1:


You can just write k_sum = k1 + k2! Check out the base class PositiveSemidefiniteKernel, where we've overridden the addition and multiplication operators, if you want to see how it works.
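A minimal sketch of this (index points chosen only for illustration): the summed kernel evaluates both component kernels on the same, full feature vectors and adds the results elementwise, which is also why the last covariance matrix in the question differs from a per-column additive covariance.

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfp_kernels = tfp.positive_semidefinite_kernels

k1 = tfp_kernels.ExponentiatedQuadratic(amplitude=2.0, length_scale=2.0)
k2 = tfp_kernels.ExponentiatedQuadratic(amplitude=1.5, length_scale=1.5)
k_sum = k1 + k2  # operator overloading on the PSD-kernel base class

x = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)

with tf.Session() as sess:
    sum_mat, mat1, mat2 = sess.run(
        [k_sum.matrix(x, x), k1.matrix(x, x), k2.matrix(x, x)])

# The summed kernel's matrix is the elementwise sum of the component matrices.
print(np.allclose(sum_mat, mat1 + mat2))

If you want each component kernel to act only on its own subset of input dimensions (the additive-GP setup in the question), you would need to restrict each kernel's inputs to the relevant feature columns yourself before summing the resulting covariances, as in the per-column sketch in the question.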



Source: https://stackoverflow.com/questions/56199905/how-to-create-sum-of-different-kernel-objects-in-tensorflow-probability
