Using a pretrained gensim Word2Vec embedding in Keras

盖世英雄少女心 2021-01-05 00:41

I have trained word2vec in gensim. In Keras, I want to use it to build a matrix of sentences using that word embedding. As storing the matrix of all the sentences is very space…

3 Answers
  • 2021-01-05 01:15

    With the new Gensim version this is pretty easy:

    w2v_model.wv.get_keras_embedding(train_embeddings=False)
    

    There you have your Keras embedding layer.
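
    A minimal sketch of plugging that layer into a model (assuming a Gensim version that still provides wv.get_keras_embedding, e.g. 3.x; the LSTM/Dense head is only illustrative):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # get_keras_embedding returns a frozen Keras Embedding layer whose weights
    # are the trained Word2Vec vectors, one row per word in gensim's own index order
    embedding_layer = w2v_model.wv.get_keras_embedding(train_embeddings=False)

    model = Sequential()
    model.add(embedding_layer)   # inputs must be gensim vocabulary indices, not a Keras Tokenizer's
    model.add(LSTM(64))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')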

  • 2021-01-05 01:20

    Let's say you have the following data that you need to encode:

    docs = ['Well done!',
            'Good work',
            'Great effort',
            'nice work',
            'Excellent!',
            'Weak',
            'Poor effort!',
            'not good',
            'poor work',
            'Could have done better.']
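
    The model.fit call at the end of this answer uses a labels variable, so assume the documents carry binary sentiment labels; the values below are an assumption for illustration (1 = positive, 0 = negative):

    from numpy import array

    # assumed binary labels for the ten docs above (first five positive, last five negative)
    labels = array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])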
    

    You must then tokenize it using the Tokenizer from Keras and find the vocab_size:

    from keras.preprocessing.text import Tokenizer

    t = Tokenizer()
    t.fit_on_texts(docs)
    vocab_size = len(t.word_index) + 1
    

    You can then encode it to sequences like this:

    encoded_docs = t.texts_to_sequences(docs)
    print(encoded_docs)
    

    You can then pad the sequences so that they all have a fixed length:

    from keras.preprocessing.sequence import pad_sequences

    max_length = 4
    padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
    

    Then use the word2vec model (saved in word2vec text format, here embedding_word2vec.txt) to make the embedding matrix:

    from numpy import asarray, zeros

    # load embedding as a dict of word -> vector
    def load_embedding(filename):
        # load embedding into memory, skip the first line (word2vec text-format header)
        with open(filename, 'r') as file:
            lines = file.readlines()[1:]
        # create a map of words to vectors
        embedding = dict()
        for line in lines:
            parts = line.split()
            # key is the string word, value is a numpy array for the vector
            embedding[parts[0]] = asarray(parts[1:], dtype='float32')
        return embedding

    # create a weight matrix for the Embedding layer from a loaded embedding
    def get_weight_matrix(embedding, vocab):
        # total vocabulary size plus 1, since index 0 is reserved for padding/unknown words
        vocab_size = len(vocab) + 1
        # define weight matrix dimensions with all 0
        weight_matrix = zeros((vocab_size, 100))
        # step over the vocab, storing vectors using the Tokenizer's integer mapping
        for word, i in vocab.items():
            vector = embedding.get(word)
            if vector is not None:   # leave rows of zeros for out-of-vocabulary words
                weight_matrix[i] = vector
        return weight_matrix

    # load embedding from file
    raw_embedding = load_embedding('embedding_word2vec.txt')
    # get vectors in the right order
    embedding_vectors = get_weight_matrix(raw_embedding, t.word_index)
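
    If the embedding is still in memory as a gensim model rather than a text file, a sketch like the following builds the same matrix directly (assuming the trained model is a variable named w2v_model):

    import numpy as np

    # build the weight matrix straight from the gensim model, in the Tokenizer's index order
    weight_matrix = np.zeros((vocab_size, w2v_model.vector_size))
    for word, i in t.word_index.items():
        if word in w2v_model.wv:   # words missing from the word2vec vocab keep a zero row
            weight_matrix[i] = w2v_model.wv[word]
    embedding_vectors = weight_matrix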
    

    Once you have the embedding matrix, you can use it in an Embedding layer like this:

    from keras.layers import Embedding

    e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)
    

    This layer can then be used to build a model like this:

    from keras.models import Sequential
    from keras.layers import Dense, Flatten

    model = Sequential()
    e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)
    model.add(e)
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    # compile the model
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
    # summarize the model
    print(model.summary())
    # fit the model on the padded docs and their labels
    model.fit(padded_docs, labels, epochs=50, verbose=0)
    

    All of the code is adapted from this blog post; follow it to learn more about embeddings using GloVe.

    For using word2vec, see this post.

  • 2021-01-05 01:27

    My code for a gensim-trained w2v model. Assume all the words trained in the w2v model are now in a list variable called all_words.

    from keras.preprocessing.text import Tokenizer
    from keras.layers import Embedding
    import gensim
    import numpy as np

    w2v = gensim.models.Word2Vec.load("models/w2v.model")

    # fitting the Tokenizer on the vocabulary list itself gives word all_words[i] the index i + 1
    t = Tokenizer()
    vocab_size = len(all_words) + 1
    t.fit_on_texts(all_words)

    def get_weight_matrix():
        # define weight matrix dimensions with all 0 (row 0 stays zero for padding)
        weight_matrix = np.zeros((vocab_size, w2v.vector_size))
        # step over the vocab, storing vectors using the Tokenizer's integer mapping
        for i in range(len(all_words)):
            weight_matrix[i + 1] = w2v.wv[all_words[i]]
        return weight_matrix

    embedding_vectors = get_weight_matrix()
    # FIXED_LENGTH is the length the input sequences are padded to
    emb_layer = Embedding(vocab_size, output_dim=w2v.vector_size, weights=[embedding_vectors],
                          input_length=FIXED_LENGTH, trainable=False)
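
    A minimal sketch of wiring this layer into a model (the texts here are hypothetical; the point is that inputs must be encoded with the same Tokenizer t and padded to FIXED_LENGTH):

    from keras.preprocessing.sequence import pad_sequences
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # encode hypothetical texts with the same Tokenizer so the indices match the matrix rows
    sequences = t.texts_to_sequences(["some example sentence", "another example"])
    padded = pad_sequences(sequences, maxlen=FIXED_LENGTH, padding='post')

    model = Sequential()
    model.add(emb_layer)
    model.add(LSTM(32))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')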
    