@Ehsan
Your explanation is very good. The key point is that every Variable has to be initialized before the saver.save(...) call.
@Everyone
Also, TensorBoard embedding simply visualizes instances of saved Variable classes. It doesn't care whether they represent words, images, or anything else.
The official doc https://www.tensorflow.org/get_started/embedding_viz does not point out that it is a direct visualization of a matrix, which, in my opinion, introduces a lot of confusion.
Maybe you wonder what it means to visualize a matrix. A matrix can be interpreted as a collection of points in a space.
If I have a matrix with shape (100, 200), I can interpret it as a collection of 100 points, where each point has 200 dimensions. In other words, 100 points in a 200-dimensional space.
In the word2vec case, we have 100 words, each represented by a vector of length 200. TensorBoard embedding simply uses PCA or t-SNE to visualize this collection (matrix).
Therefore, you can throw in any random matrix. If you throw in an image with shape (1080, 1920), it will visualize each row of that image as if it were a single point.
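To make the "matrix = points" idea concrete, here is a minimal sketch (plain NumPy, not the TensorBoard code itself) of what the projector conceptually does with PCA: treat each row of a (100, 200) matrix as one point and project it down to 2-D. The matrix here is random data just for illustration.

```python
import numpy as np

# A stand-in "embedding" matrix: 100 points, each with 200 dimensions.
rng = np.random.default_rng(0)
matrix = rng.standard_normal((100, 200))

# Minimal PCA via SVD: center the rows, then project each row onto
# the top-2 principal directions.
centered = matrix - matrix.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
points_2d = centered @ vt[:2].T  # one 2-D point per row

print(points_2d.shape)  # (100, 2) -- 100 plottable points
```

This is exactly why an (1080, 1920) image "works": each of the 1080 rows just becomes one point in a 1920-dimensional space before projection.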
That being said, you can visualize the embedding of any Variable instance by simply saving it:
saver = tf.train.Saver([a, _list, of, wanted, variables])
# ...some code you may or may not have...
saver.save(sess, os.path.join(LOG_DIR, 'filename.ckpt'))
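One extra step worth mentioning: to attach labels to the points (instead of seeing bare indices), the TF 1.x projector plugin also reads an optional `projector_config.pbtxt` from LOG_DIR. The names below (`my_embedding`, `metadata.tsv`) are placeholders, not anything from your code; a minimal sketch:

```
embeddings {
  # must match the name of the saved Variable
  tensor_name: "my_embedding"
  # optional TSV file with one label per row of the matrix
  metadata_path: "metadata.tsv"
}
```

Without this config the projector still works; you just lose the per-point labels.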
I will try to make a detailed tutorial later.