Question
I am very new to TensorFlow and have been messing around with a simple chatbot-building project from this link.
There were many warnings saying that things would be deprecated in TensorFlow 2.0 and that I should upgrade, so I did. I then used the automatic TensorFlow code upgrader to update all the necessary files to 2.0. This produced a few errors.
When processing the model.py file, it returned these warnings:
133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check. `partition_strategy` has been removed from tf.nn.sampled_softmax_loss. The 'div' strategy will be used by default.
148:31: WARNING: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
148:31: ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
171:33: ERROR: Using member tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
197:27: ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
The main problem I am having is working with the code from the now nonexistent contrib module. How can I adapt the following three code blocks so they work in TensorFlow 2.0?
# Define the network
# Here we use an embedding model: it takes integers as input and converts them into word vectors
# for better word representation
decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    self.encoderInputs,  # List<[batch=?, inputDim=1]>, list of size args.maxLength
    self.decoderInputs,  # For training, we force the correct output (feed_previous=False)
    encoDecoCell,
    self.textData.getVocabularySize(),
    self.textData.getVocabularySize(),  # Both encoder and decoder have the same number of classes
    embedding_size=self.args.embeddingSize,  # Dimension of each word
    output_projection=outputProjection.getWeights() if outputProjection else None,
    feed_previous=bool(self.args.test)  # When we test (self.args.test), we use the previous output as the next input (feed_previous)
)
# Finally, we define the loss function
self.lossFct = tf.contrib.legacy_seq2seq.sequence_loss(
    decoderOutputs,
    self.decoderTargets,
    self.decoderWeights,
    self.textData.getVocabularySize(),
    softmax_loss_function=sampledSoftmax if outputProjection else None  # If None, use the default softmax
)
encoDecoCell = tf.contrib.rnn.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=self.args.dropout
)
Answer 1:
tf.contrib is essentially code contributed by the TensorFlow community. It worked like this:
- Members of the community could submit code, which was then distributed with the standard TensorFlow package. Their code was reviewed by the TensorFlow team and tested as part of TensorFlow's tests.
In TensorFlow 2, contrib was removed, and each project in contrib had one of three futures: move to core, move to a separate repository, or be deleted.
You can check which category each project falls into at this link.
As for an alternative solution: migrating code from TensorFlow 1 to TensorFlow 2 will not happen automatically; you have to change it manually.
You can use the following alternatives instead.
tf.contrib.rnn.DropoutWrapper can be changed to tf.compat.v1.nn.rnn_cell.DropoutWrapper.
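This is a drop-in swap that keeps the same keyword arguments. A minimal sketch, using a hypothetical 64-unit BasicLSTMCell in place of the original encoDecoCell and a fixed keep probability standing in for self.args.dropout:

```python
import tensorflow as tf

# Hypothetical stand-in for the original encoDecoCell
cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(64)

# Drop-in replacement for tf.contrib.rnn.DropoutWrapper:
# same keyword arguments, same behavior.
encoDecoCell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
    cell,
    input_keep_prob=1.0,
    output_keep_prob=0.9,  # stand-in for self.args.dropout
)

# The wrapper forwards the wrapped cell's sizes unchanged
print(encoDecoCell.output_size)
```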
For sequence-to-sequence models, you can use TensorFlow Addons.
The TensorFlow Addons project includes many sequence-to-sequence tools that let you easily build production-ready encoder-decoder models.
For example, you can use something like below.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

vocab_size, embed_size = 10000, 128  # example sizes

encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)

# Shared embedding layer for encoder and decoder
embeddings = keras.layers.Embedding(vocab_size, embed_size)
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)

# Encoder: final LSTM state initializes the decoder
encoder = keras.layers.LSTM(512, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]

# Decoder: TrainingSampler feeds the ground-truth tokens at each step
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(512)
output_layer = keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,
                                                 output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
    decoder_embeddings, initial_state=encoder_state,
    sequence_length=sequence_lengths)

Y_proba = tf.nn.softmax(final_outputs.rnn_output)
model = keras.Model(inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
                    outputs=[Y_proba])
In the same way, you need to change all the methods that use tf.contrib to their compatible equivalents.
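For example, tf.contrib.legacy_seq2seq.sequence_loss has no direct replacement in core TensorFlow 2, but its weighted per-timestep cross-entropy can be reproduced with tf.nn.sparse_softmax_cross_entropy_with_logits (TensorFlow Addons also ships tfa.seq2seq.sequence_loss). A minimal sketch, with made-up shapes standing in for decoderOutputs, decoderTargets, and decoderWeights:

```python
import tensorflow as tf

batch, steps, vocab = 2, 5, 10
logits = tf.zeros([batch, steps, vocab])            # stand-in for decoderOutputs
targets = tf.zeros([batch, steps], dtype=tf.int32)  # stand-in for decoderTargets
weights = tf.ones([batch, steps])                   # stand-in for decoderWeights

# Per-timestep cross-entropy, then a weighted average over all timesteps,
# which is what the legacy sequence_loss computed.
per_step = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=targets, logits=logits)                  # shape [batch, steps]
loss = tf.reduce_sum(per_step * weights) / tf.reduce_sum(weights)

print(float(loss))  # uniform logits over 10 classes -> log(10) ≈ 2.3026
```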
I hope this answers your question.
Source: https://stackoverflow.com/questions/58326767/adapting-tensorflow-rnn-seq2seq-model-code-for-tensorflow-2-0