I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-D tensors) and return a scalar value in TensorFlow.
Given two vectors X and Y, how do I compute their dot product X·Y as a scalar?
In newer versions (I think since 0.12), you should be able to do
tf.einsum('i,i->', x, y)
(Before that, the reduction to a scalar seemed not to be allowed/possible.)
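A minimal sketch of this einsum call, assuming TensorFlow 2.x with eager execution (names `x`, `y` are illustrative):

```python
import tensorflow as tf  # assuming TensorFlow 2.x, eager execution

x = tf.constant([1., 2., 3.])
y = tf.constant([4., 5., 6.])

# 'i,i->' contracts the shared index i down to a rank-0 (scalar) tensor
dot = tf.einsum('i,i->', x, y)
print(float(dot))  # 32.0 = 1*4 + 2*5 + 3*6
```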
One of the easiest ways to calculate the dot product between two tensors (a vector is a 1-D tensor) is to use tf.tensordot:
a = tf.placeholder(tf.float32, shape=(5,))
b = tf.placeholder(tf.float32, shape=(5,))
dot_a_b = tf.tensordot(a, b, 1)
with tf.Session() as sess:
    print(dot_a_b.eval(feed_dict={a: [1, 2, 3, 4, 5], b: [6, 7, 8, 9, 10]}))
# result: 130.0
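For what it's worth, in TensorFlow 2.x the same tf.tensordot call works eagerly, without placeholders or a Session (a sketch):

```python
import tensorflow as tf  # assuming TensorFlow 2.x, eager execution

a = tf.constant([1., 2., 3., 4., 5.])
b = tf.constant([6., 7., 8., 9., 10.])

# axes=1 contracts the last axis of a with the first axis of b
dot_a_b = tf.tensordot(a, b, axes=1)
print(float(dot_a_b))  # 130.0
```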
Let us assume that you have two column vectors
u = tf.constant([[2.], [3.]])
v = tf.constant([[5.], [7.]])
If you want a 1x1 matrix, you can use
tf.einsum('ij,ik->jk', u, v)
If you are interested in a scalar, you can use
tf.einsum('ij,ik->', u, v)
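Sketched end to end with the u and v above (TensorFlow 2.x eager execution assumed):

```python
import tensorflow as tf  # assuming TensorFlow 2.x, eager execution

u = tf.constant([[2.], [3.]])  # column vectors of shape (2, 1)
v = tf.constant([[5.], [7.]])

m = tf.einsum('ij,ik->jk', u, v)  # 1x1 matrix containing 2*5 + 3*7
s = tf.einsum('ij,ik->', u, v)    # the same value as a scalar
print(m.numpy())   # [[31.]]
print(float(s))    # 31.0
```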
ab = tf.reduce_sum(a*b)
Take a simple example as follows:
import tensorflow as tf
a = tf.constant([1,2,3])
b = tf.constant([2,3,4])
print(a.get_shape())
print(b.get_shape())
c = a*b
ab = tf.reduce_sum(c)
with tf.Session() as sess:
    print(c.eval())
    print(ab.eval())
# output
# (3,)
# (3,)
# [2 6 12]
# 20
You can do tf.multiply(x, y) (tf.mul in older versions), followed by tf.reduce_sum().
Another option is tf.matmul with one of the transpose options set to True. The signature is:
tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)
For column vectors (shape n x 1), use
tf.matmul(a, b, transpose_a=True)
and for row vectors (shape 1 x n), use
tf.matmul(a, b, transpose_b=True)
Either way you get a 1x1 matrix holding the dot product.
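A minimal sketch of the column-vector case, assuming TensorFlow 2.x eager execution:

```python
import tensorflow as tf  # assuming TensorFlow 2.x, eager execution

a = tf.constant([[2.], [3.]])  # column vectors, shape (2, 1)
b = tf.constant([[5.], [7.]])

# a^T @ b -> (1, 2) @ (2, 1) = a (1, 1) matrix holding the dot product
dot = tf.matmul(a, b, transpose_a=True)
print(dot.numpy()[0, 0])  # 31.0
```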