how do I use a very large (>2M) word embedding in tensorflow?

Submitted by 痴心易碎 on 2019-12-20 19:57:10

Question


I am running a model with a very large word embedding (>2M words). When I use tf.embedding_lookup, it expects the full embedding matrix, which is big, so I get an out-of-GPU-memory error. If I reduce the size of the embedding, everything works fine.

Is there a way to deal with larger embedding?


Answer 1:


The recommended approach is to use a partitioner to split this large tensor into several shards:

embedding = tf.get_variable("embedding", [1000000000, 20],
                            partitioner=tf.fixed_size_partitioner(3))

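For illustration, a lookup against the partitioned variable looks the same as against an ordinary one; this is a minimal sketch, where the 2,000,000 x 20 shape and the ids placeholder are assumptions, not part of the original answer:

import tensorflow as tf

# hypothetical shapes: 2M-word vocabulary, 20-dimensional vectors
embedding = tf.get_variable("embedding", [2000000, 20],
                            partitioner=tf.fixed_size_partitioner(3))

ids = tf.placeholder(tf.int32, shape=[None])       # word ids to look up
vectors = tf.nn.embedding_lookup(embedding, ids)   # handles the shards transparently

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(vectors, feed_dict={ids: [0, 42, 1999999]}).shape)  # (3, 20)
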
This will split the tensor into 3 shards along axis 0, but the rest of the program will see it as an ordinary tensor. The biggest benefit comes from using a partitioner together with parameter-server replication, like this:

with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
  embedding = tf.get_variable("embedding", [1000000000, 20],
                              partitioner=tf.fixed_size_partitioner(3))

The key function here is tf.train.replica_device_setter. It allows you to run 3 different processes, called parameter servers, that store all of the model's variables. The large embedding tensor will be split across these servers, one shard per server.
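For completeness, here is a hedged sketch of how the cluster wiring might look with TensorFlow 1.x between-graph replication; the host:port addresses, job names, and task indices are placeholders I chose for illustration, not part of the original answer:

import tensorflow as tf

# hypothetical cluster: 3 parameter servers plus 1 worker
cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222", "ps1:2222", "ps2:2222"],
    "worker": ["worker0:2222"],
})

# on each ps task you would run:  tf.train.Server(cluster, job_name="ps", task_index=i).join()
# on the worker task:
server = tf.train.Server(cluster, job_name="worker", task_index=0)

with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    # each of the 3 shards is placed on a different ps task (round-robin by default)
    embedding = tf.get_variable("embedding", [1000000000, 20],
                                partitioner=tf.fixed_size_partitioner(3))
    ids = tf.placeholder(tf.int32, shape=[None])
    vectors = tf.nn.embedding_lookup(embedding, ids)

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())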



Source: https://stackoverflow.com/questions/43288147/how-do-i-use-a-very-large-2m-word-embedding-in-tensorflow
