Question
My model has a LogNormal RV, C, of shape (W,D). Each row in W and each column in D has a parameter that is being fit. I have tried to specify my observations as a (W,D) matrix; however, that leads to a Theano compile error:
raise Exception('Compilation failed (return status=%s): %s' %
Exception: ('The following error happened while compiling the node', Alloc(Elemwise{switch,no_inplace}.0, TensorConstant{10}, TensorConstant{10}), '\n', 'Compilation failed (return status=3): ', '[Alloc(<TensorType(float64, row)>, TensorConstant{10}, TensorConstant{10})]')
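For context, the row-per-W / column-per-D broadcasting I am relying on does produce the right shapes in plain numpy (an illustrative sketch with the same concatenate/repeat pattern as the model; the zero values are placeholders):

```python
import numpy as np

W, D = 10, 10
# one free parameter per row (first fixed to 0) and per column (last fixed to 0)
alpha = np.zeros((W - 1, 1))
beta = np.zeros((1, D - 1))

# prepend/append the fixed zero, then repeat to a full (W, D) matrix
alpha_aug = np.concatenate([np.zeros((1, 1)), alpha], axis=0).repeat(D, axis=1)
beta_aug = np.concatenate([beta, np.zeros((1, 1))], axis=1).repeat(W, axis=0)

print(alpha_aug.shape, beta_aug.shape)  # (10, 10) (10, 10)
```

So the shape arithmetic itself checks out; the error seems to come from somewhere else.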
The model I am using is from the answer here. The model, along with the observed data, is below:
import numpy as np
import pymc3 as pm
import theano.tensor as tt
logloss = np.array([[6.85856503, 6.35784227, 7.15773548, 6.7262334 , 4.34380542,
                     4.68213123, 4.20469262, 2.07944154, 1.38629436, 0.],
                    [6.74405919, 6.57228254, 6.45833828, 5.43807931, 3.58351894,
                     2.94443898, 3.25809654, 2.56494936, 1.60943791, np.nan],
                    [6.89060912, 7.11314211, 6.42810527, 6.90975328, 5.33271879,
                     3.25809654, 3.61091791, 3.97029191, np.nan, np.nan],
                    [7.41276402, 6.93537045, 6.18208491, 6.06610809, 5.70378247,
                     6.04025471, 2.48490665, np.nan, np.nan, np.nan],
                    [6.83733281, 6.91572345, 6.53087763, 6.55961524, 3.58351894,
                     4.81218436, np.nan, np.nan, np.nan, np.nan],
                    [7.05789794, 7.12286666, 5.98393628, 5.28320373, 3.63758616,
                     np.nan, np.nan, np.nan, np.nan, np.nan],
                    [7.2984451 , 7.31455283, 6.8721281 , 6.64509097, np.nan,
                     np.nan, np.nan, np.nan, np.nan, np.nan],
                    [7.12286666, 6.73340189, 6.26720055, np.nan, np.nan,
                     np.nan, np.nan, np.nan, np.nan, np.nan],
                    [7.18992217, 6.9902565 , np.nan, np.nan, np.nan,
                     np.nan, np.nan, np.nan, np.nan, np.nan],
                    [7.25347038, np.nan, np.nan, np.nan, np.nan,
                     np.nan, np.nan, np.nan, np.nan, np.nan]])
logprem = np.array([8.66768002, 8.49862181, 8.60410456, 8.54966038, 8.55910259,
                    8.56216656, 8.51559191, 8.60630237, 8.56140145, 8.50956416])
D = 10
W = 10
# reshape to a column vector, then broadcast the premiums to (W, D)
logprem = tt._shared(logprem.reshape((W, 1))).repeat(D, axis=1)
with pm.Model() as model:
    logelr = pm.Normal('logelr', -0.4, np.sqrt(10))
    # col vector
    alpha = pm.Normal("alpha", 0, sigma=np.sqrt(10), shape=(W-1, 1))
    # row vector
    beta = pm.Normal("beta", 0, sigma=np.sqrt(10), shape=(1, D-1))
    # prepend zero and broadcast to (W,D)
    alpha_aug = tt.concatenate([tt.zeros((1, 1)), alpha], axis=0).repeat(D, axis=1)
    # append zero and broadcast to (W,D)
    beta_aug = tt.concatenate([beta, tt.zeros((1, 1))], axis=1).repeat(W, axis=0)
    # technically the indices will be reversed
    # e.g., a[0], a[9] here correspond to a_10, a_1 in the paper, resp.
    a = pm.Uniform('a', 0, 1, shape=D)
    # Note: the [::-1] sorts it in the order specified
    # such that (sigma_0 > sigma_1 > ...)
    sigma = pm.Deterministic('sigma', tt.extra_ops.cumsum(a)[::-1].reshape((1, D)))
    # now everything here has shape (W,D) or is scalar (logelr)
    mu = logprem + logelr + alpha_aug + beta_aug
    # sigma will be broadcast automatically
    C = pm.Lognormal('C', mu=mu, sigma=sigma, shape=(W, D), observed=logloss)
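One thing I have considered for the `nan` entries is passing a numpy masked array as `observed`, since PyMC3 treats masked cells as missing values to impute. I am not certain this is related to the compile error, but the masking step itself is simple (shown here on a small stand-in array, not the real data):

```python
import numpy as np

# mask invalid (nan) cells so they can be handed to PyMC3 as missing observations
logloss_small = np.array([[1.0, np.nan],
                          [2.0, 3.0]])
masked = np.ma.masked_invalid(logloss_small)

print(masked.mask.sum())  # 1 masked (missing) cell
# in the model this would be: observed=np.ma.masked_invalid(logloss)
```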
Source: https://stackoverflow.com/questions/60433749/multidimensional-pymc3-observations