Doubts regarding `Understanding Keras LSTMs`


I am new to LSTMs, and while going through Understanding Keras LSTMs I had some silly doubts related to a beautiful answer by Daniel Moller.

Here are some of my doubts:

1 Answer

    Question 3

    Understanding question 3 is sort of a key to understanding the others, so let's try it first.

    All recurrent layers in Keras perform hidden loops. These loops are totally invisible to us, but we can see the results of each iteration at the end.

    The number of invisible iterations is equal to the time_steps dimension. So, the recurrent calculations of an LSTM happen across the steps.

    If we pass an input with X steps, there will be X invisible iterations.

    Each iteration in an LSTM will take 3 inputs:

    • The respective slice of the input data for this step
    • The inner state of the layer
    • The output of the last iteration

    So, take the following example image, where our input has 5 steps:

    (image: many to many)

    What will Keras do in a single prediction?

    • Step 0:
      • Take the first step of the inputs, input_data[:,0,:], a slice shaped as (batch, 2)
      • Take the inner state (which is zero at this point)
      • Take the last output step (which doesn't exist for the first step)
      • Pass through the calculations to:
        • Update the inner state
        • Create one output step (output 0)
    • Step 1:
      • Take the next step of the inputs: input_data[:,1,:]
      • Take the updated inner state
      • Take the output generated in the last step (output 0)
      • Pass through the same calculation to:
        • Update the inner state again
        • Create one more output step (output 1)
    • Step 2:
      • Take input_data[:,2,:]
      • Take the updated inner state
      • Take output 1
      • Pass through:
        • Update the inner state
        • Create output 2
    • And so on until step 4.

    • Finally:

      • If stateful=False: automatically resets inner state, resets last output step
      • If stateful=True: keep inner state, keep last output step

    You will not see any of these steps. It will look like just a single pass.
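
    Just to make the idea concrete, here is a rough sketch of that hidden loop in plain numpy. This is not Keras's actual implementation; lstm_step is a made-up stand-in for the real gate calculations, and all sizes are arbitrary:

        import numpy as np

        def lstm_step(x_t, state, last_output):
            # Stand-in for the real LSTM gate math: it only shows that each
            # iteration combines the input slice, the inner state and the last output.
            new_state = 0.5 * state + x_t.sum(axis=-1, keepdims=True) + 0.1 * last_output
            new_output = np.tanh(new_state)
            return new_state, new_output

        batch, steps, features, units = 4, 5, 2, 1
        input_data = np.random.rand(batch, steps, features)

        state = np.zeros((batch, units))        # inner state starts at zero
        last_output = np.zeros((batch, units))  # no previous output at step 0
        outputs = []

        for t in range(steps):                  # one invisible iteration per time step
            x_t = input_data[:, t, :]           # the slice for this step: (batch, features)
            state, last_output = lstm_step(x_t, state, last_output)
            outputs.append(last_output)

        outputs = np.stack(outputs, axis=1)     # (batch, steps, units)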

    But you can choose between:

    • return_sequences = True: every output step is returned, shape (batch, steps, units)
      • This is exactly many to many. You get the same number of steps in the output as you had in the input
    • return_sequences = False: only the last output step is returned, shape (batch, units)
      • This is many to one. You generate a single result for the entire input sequence.

    Now, this answers the second part of your question 2: Yes, predict will compute everything without you noticing. But:

    The number of output steps will be equal to the number of input steps
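
    Here is a minimal sketch of the two options with tf.keras (the input sizes, 4 sequences with 5 steps and 2 features, and the 3 units are arbitrary assumptions):

        import numpy as np
        from tensorflow.keras.layers import LSTM

        x = np.random.rand(4, 5, 2).astype("float32")      # (batch, steps, features)

        many_to_many = LSTM(3, return_sequences=True)(x)    # every output step
        many_to_one = LSTM(3, return_sequences=False)(x)    # only the last step

        print(many_to_many.shape)   # (4, 5, 3) -> (batch, steps, units)
        print(many_to_one.shape)    # (4, 3)    -> (batch, units)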

    Question 4

    Now, before going to the question 2, let's look at 4, which is actually the base of the answer.

    Yes, the batch division should be done manually. Keras will not change your batches. So, why would I want to divide a sequence?

    • 1: the sequence is too big and one batch doesn't fit the computer's or the GPU's memory (a small sketch of this division follows below)
    • 2: you want to do what happens in question 2: manipulate the batches between each step iteration.
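
    For case 1, this is what the manual division looks like (the sizes are made up): one long sequence split into chunks that are fed to a stateful model one after the other:

        import numpy as np

        long_sequence = np.random.rand(1, 1000, 2)   # (batch=1, steps=1000, features=2)

        # Ten chunks of 100 steps each; with stateful=True the model treats them
        # as one continuous sequence, as long as the states are not reset between chunks.
        chunks = [long_sequence[:, i:i + 100, :] for i in range(0, 1000, 100)]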

    Question 2

    In question 2, we are "predicting the future". So, what is the number of output steps? Well, it's the number you want to predict. Suppose you're trying to predict the number of clients you will have based on the past. You can decide to predict for one month in the future, or for 10 months. Your choice.

    Now, you're right to think that predict will calculate the entire thing at once, but remember question 3 above where I said:

    The number of output steps is equal to the number of input steps

    Also remember that the first output step is the result of the first input step, the second output step is the result of the second input step, and so on.

    But we want the future, not something that matches the previous steps one by one. We want the result step that follows the "last" step.

    So, we face a limitation: how can we define a fixed number of output steps if we don't have their respective inputs? (The inputs for the distant future are also future, so they don't exist.)

    That's why we break our sequence into sequences of only one step. So predict will also output only one step.

    When we do this, we have the ability to manipulate the batches between each iteration. And we have the ability to take output data (which we didn't have before) as input data.

    And stateful is necessary because we want each of these steps to be connected as a single sequence (don't discard the states).
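
    A hedged sketch of that manual loop (tf.keras style; the layer sizes and names are assumptions, and the model is supposed to already be trained to predict the next step):

        import numpy as np
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Input, LSTM, Dense

        model = Sequential([
            Input(shape=(None, 1), batch_size=1),   # 1 sequence per batch, any number of steps
            LSTM(32, stateful=True, return_sequences=True),
            Dense(1),
        ])
        # ... assume the model was trained here on shifted sequences ...

        known_sequence = np.random.rand(1, 20, 1)               # the steps we already know
        model.reset_states()                                    # start a fresh sequence
        last_step = model.predict(known_sequence)[:, -1:, :]    # last predicted step

        future = []
        for _ in range(10):                       # predict 10 future steps, one at a time
            last_step = model.predict(last_step)  # the output of one step becomes the next input
            future.append(last_step)

        future = np.concatenate(future, axis=1)   # (1, 10, 1)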

    Question 5

    The best practical application of stateful=True that I know is the answer to question 2. We want to manipulate the data between steps.

    This might be a dummy example, but another application is, for instance, receiving data from a user on the internet. Each day the user uses your website, you give one more step of data to your model (and you want to continue this user's previous history as the same sequence).

    Question 1

    Then, finally question 1.

    I'd say: always avoid stateful=True, unless you need it.
    You don't need it to build a one to many network, so it's better not to use it.

    Notice that the stateful=True example for this is the same as the "predict the future" example, but you start from a single step. It's harder to implement and will be slower because of the manual loops. But you can control the number of output steps, and this might be something you want in some cases.

    There will be a difference in calculations too, and in this case I really can't say whether one is better than the other. I don't believe there will be a big difference, but networks are some kind of "art", and testing might bring funny surprises.
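
    For reference, a minimal sketch of the non-stateful "one to many" built with RepeatVector (all sizes are assumptions):

        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Input, Dense, RepeatVector, LSTM

        model = Sequential([
            Input(shape=(10,)),                # a single input "step": one feature vector
            Dense(32),
            RepeatVector(5),                   # copy it 5 times -> (batch, 5, 32)
            LSTM(16, return_sequences=True),   # produce 5 output steps
            Dense(1),                          # one value per output step
        ])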

    Answers for EDIT:

    A

    We should not confuse "states" with "weights". They're two different variables.

    • Weights: the learnable parameters, they're never reset. (If you reset the weights, you lose everything the model learned)
    • States: current memory of a batch of sequences (relates to which step of the sequence I am at now and what I have learned "from the specific sequences in this batch" up to this step).

    Imagine you are watching a movie (a sequence). Every second makes you build memories like the names of the characters, what they did, and what their relationships are.

    Now imagine you get a movie you never saw before and start watching the last second of the movie. You will not understand the end of the movie because you need the previous story of this movie. (The states)

    Now imagine you finished watching an entire movie. Now you will start watching a new movie (a new sequence). You don't need to remember what happened in the last movie you saw. If you try to "join the movies", you will get confused.

    In this example:

    • Weights: your ability to understand and interpret movies, your ability to memorize important names and actions
    • States: on a paused movie, states are the memory of what happened from the beginning up to now.

    So, states are "not learned". States are "calculated", built step by step for each individual sequence in the batch. That's why:

    • resetting states means starting new sequences from step 0 (starting a new movie)
    • keeping states means continuing the same sequences from the last step (continuing a movie that was paused, or watching part 2 of that story)

    States are exactly what make recurrent networks work as if they had "memory from the past steps".
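
    A small sketch of the difference (tf.keras style; the reset method name may differ between Keras versions, and the sizes are made up):

        import numpy as np
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Input, LSTM, Dense

        model = Sequential([
            Input(shape=(5, 3), batch_size=2),   # stateful layers need a fixed batch size
            LSTM(4, stateful=True),
            Dense(1),
        ])

        x = np.random.rand(2, 5, 3)
        model.predict(x)        # builds up states for these 2 sequences ("watching the movies")
        model.reset_states()    # states back to zero: "start new movies";
                                # the weights are untouched: you forget the movie,
                                # not how to watch movies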

    B

    In an LSTM, the last output step is part of the "states".

    An LSTM state contains:

    • a memory matrix updated at every step by the calculations (the cell state)
    • the output of the last step (the hidden state)

    So, yes: every step produces its own output, but every step uses the output of the last step as state. This is how an LSTM is built.

    • If you want to "continue" the same sequence, you want memory of the last step results
    • If you want to "start" a new sequence, you don't want memory of the last step results (these results will stay stored if you don't reset the states)
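
    You can see both parts of the state with return_state (a minimal sketch; the sizes are assumptions):

        import numpy as np
        from tensorflow.keras.layers import LSTM

        x = np.random.rand(4, 5, 2).astype("float32")

        outputs, hidden_state, cell_state = LSTM(3, return_sequences=True, return_state=True)(x)

        print(outputs.shape)        # (4, 5, 3) -> one output per step
        print(hidden_state.shape)   # (4, 3)    -> equal to outputs[:, -1, :], the last output step
        print(cell_state.shape)     # (4, 3)    -> the internal memory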

    C

    You stop when you want. How many steps in the future do you want to predict? That's your stopping point.

    Imagine I have a sequence with 20 steps. And I want to predict 10 steps in the future.

    In a standard (non stateful) network, we can use:

    • input 19 steps at once (from 0 to 18)
    • output 19 steps at once (from 1 to 19)

    This is "predicting the next step" (notice the shift = 1 step). We can do this because we have all the input data available.

    But when we want the 10 future steps, we cannot output them at once because we don't have the necessary 10 input steps (these input steps are future, we need the model to predict them first).

    So we need to predict one future step from existing data, then use this step as input for the next future step.

    But I want all these steps to be connected. If I use stateful=False, the model will see a lot of "sequences of length 1", but we want one sequence of length 30.
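
    A sketch of that shifted pair for the 20-step example (the array is made up):

        import numpy as np

        sequence = np.random.rand(1, 20, 1)   # (batch, 20 steps, 1 feature)

        x_train = sequence[:, :-1, :]         # steps 0 to 18 -> 19 input steps
        y_train = sequence[:, 1:, :]          # steps 1 to 19 -> 19 output steps (shift = 1)
        # model.fit(x_train, y_train, ...) teaches "predict the next step";
        # the 10 future steps are then generated one by one with the stateful loop from question 2.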

    D

    This is a very good question and you got me ....

    The stateful one to many was an idea I had when writing that answer, but I never used this. I prefer the "repeat" option.

    You could train step by step using train_on_batch, but only in the case where you have the expected outputs of each step. Otherwise I think it's very complicated or impossible to train.
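
    A hedged sketch of that idea (only viable when the expected output of every step is known; the model and array names are assumptions):

        # stateful_model: a compiled stateful model taking (1, 1, features) slices
        # sequence_x, sequence_y: (1, steps, features) inputs and expected outputs
        def train_step_by_step(stateful_model, sequence_x, sequence_y):
            stateful_model.reset_states()             # this is a new sequence
            for t in range(sequence_x.shape[1]):      # one training call per step
                stateful_model.train_on_batch(sequence_x[:, t:t + 1], sequence_y[:, t:t + 1])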

    E

    That's one common approach.

    • Generate a condensed vector with a network (this vector can be a result, or the states generated, or both things)
    • Use this condensed vector as the initial input/state of another network, generate step by step manually, and stop when an "end of sentence" word or character is produced by the model.

    There are also fixed size models without the manual loop. You suppose your sentence has a maximum length of X words. Sentences shorter than this are padded with "end of sentence" or "null" words/characters. A Masking layer is very useful in these models.
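
    A minimal sketch of the fixed-size approach with a Masking layer (the padding value, vocabulary size and lengths are assumptions):

        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Input, Masking, LSTM, Dense

        max_words, vocab_size = 30, 50         # sentences padded/truncated to 30 steps

        model = Sequential([
            Input(shape=(max_words, vocab_size)),
            Masking(mask_value=0.0),           # padded steps (all zeros) are skipped
            LSTM(64, return_sequences=True),
            Dense(vocab_size, activation="softmax"),   # one word prediction per step
        ])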

    F

    You provide only the input. The other two things (last output and inner states) are already stored in the stateful layer.

    I made the input = last output only because our specific model is predicting the next step. That's what we want it to do. For each input, the next step.

    We taught this with the shifted sequence in training.

    G

    It doesn't matter. We want only the last step.

    • The number of sequences is kept by the first :.
    • And only the last step is considered by -1:.

    But if you want to know, you can print predicted.shape. It is equal to totalSequences.shape in this model.
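
    A tiny sketch of that slicing (the shape is an arbitrary example):

        import numpy as np

        predicted = np.random.rand(8, 20, 1)   # (sequences, steps, features)
        last_step = predicted[:, -1:]          # all sequences, only the last step
        print(last_step.shape)                 # (8, 1, 1)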

    Edit 2

    I

    First, we can't use "one to many" models to predict the future, because we don't have data for that. There is no way to understand a "sequence" if you don't have the data for its steps.

    So, this type of model should be used for other types of applications. As I said before, I don't really have a good answer for this question. It's better to have a "goal" first, then we decide which kind of model is better for that goal.

    II

    With "step by step" I mean the manual loop.

    If you don't have the outputs of later steps, I think it's impossible to train. It's probably not a useful model at all. (But I'm not the one that knows everything)

    If you have the outputs, yes, you can train the entire sequences with fit without worrying about manual loops.

    III

    And you're right about III. You won't use a repeat vector in many to many because you have varying input data.

    "One to many" and "many to many" are two different techniques, each one with their advantages and disadvantages. One will be good for certain applications, the other will be good for other applications.
