Liquid State Machine: How does it work and how do you use it?

傲寒 2021-01-30 09:38

I am now learning about LSMs (Liquid State Machines), and I am trying to understand how they are used for learning.

I am pretty confused by what I have read on the web.

3 Answers
  • 2021-01-30 09:59

    From your questions, it seems that you are on the right track. Anyhow, Liquid State Machines and Echo State Machines are complex topics that combine computational neuroscience, physics, and machine learning, touching on chaos, dynamical systems, and feedback systems. So it's OK if you feel it's hard to wrap your head around them.

    To answer your questions:

    1. Most implementations of Liquid State Machines leave the reservoir of neurons untrained. There have been some attempts to train the reservoir, but they have not had the dramatic success that would justify the computational power needed for this aim. (See: Reservoir Computing Approaches to Recurrent Neural Network Training, or The p-Delta Learning Rule for Parallel Perceptrons.)

      My opinion is that if you want to use the liquid as a classifier, in terms of separability or generalization of patterns, you can gain much more from the way the neurons connect to each other (see Hazan, H. and Manevitz, L., Topological constraints and robustness in liquid state machines, Expert Systems with Applications, Volume 39, Issue 2, Pages 1597-1606, February 2012, or Which Model to Use for the Liquid State Machine?), or from the biological approach, in my opinion the most interesting one (What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?).
    2. You are right: you need to wait at least until you have finished giving the input; otherwise you risk detecting the input itself, and not the activity that occurs in the liquid as a result of your input, as you should.
    3. Yes, you can think of your liquid's complexity as the kernel in an SVM, which tries to project the data points into some hyperspace, with the detector in the liquid as the part that tries to separate the classes in the dataset. As a rule of thumb, the number of neurons and the way they connect to each other determine the degree of complexity of the liquid.
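    The SVM-kernel analogy in point 3 can be seen in a toy example (illustrative only, not an actual liquid): a fixed nonlinear projection can turn a problem that is not linearly separable, like XOR, into one that a simple linear detector can separate.

```python
import numpy as np

# XOR is not linearly separable in the original 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# A fixed nonlinear feature map (standing in for the liquid's projection)
# adds the product term x0*x1, lifting the points into a 3-D space.
def expand(x):
    return np.array([x[0], x[1], x[0] * x[1]])

Z = np.array([expand(x) for x in X])

# In the expanded space a single hyperplane separates the two classes.
w = np.array([1.0, 1.0, -2.0])
preds = (Z @ w > 0.5).astype(float)   # recovers the XOR labels
```

    The liquid plays the same role as `expand` here, only with a much richer, time-dependent projection.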

    Regarding LIF (Leaky Integrate & Fire) neurons, as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically inspired neurons, while the Echo State Network uses more analog units. So, in terms of "very short-term memory", in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual neuron reacts based only on the current state, and therefore the memory is stored in the activity between the units.
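    The per-neuron memory mentioned above can be sketched with a minimal leaky integrate-and-fire update (a toy Euler-step sketch with illustrative parameter values, not any particular LSM implementation):

```python
import numpy as np

def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: the membrane
    potential v leaks toward rest and integrates the input; crossing the
    threshold emits a spike and resets the potential."""
    v = v + dt * (-v / tau + input_current)
    spiked = v >= v_thresh
    v = v_reset if spiked else v
    return v, spiked

# Drive one neuron with a constant current: its potential (its "history")
# builds up over many timesteps before each spike.
v, spikes = 0.0, []
for t in range(100):
    v, s = lif_step(v, input_current=0.1)
    spikes.append(s)
```

    The key point is that `v` carries information across timesteps, which is the "own history" each LIF unit remembers.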

  • 2021-01-30 10:03

    To understand LSMs you first have to understand the comparison with a liquid. Consider the following analogy:

    • You randomly throw stones into the water. Depending on what kind of stones you have thrown in, there is a different wave pattern after x timesteps.
    • From this wave pattern you can draw conclusions about the features of the different stones.
    • From those features you can tell what kind of stones you threw in.

    The LSM models this behavior. We have:

    • An input layer which is randomly connected to the reservoir of neurons. Think of it as the stones you throw into the water.
    • A reservoir of randomly connected neurons. These represent your water, which interacts with your stones in a specific way.

      • In terms of the LSM we have special neurons (they try to model real neurons). They add up activation over the timesteps and only fire once a certain amount of activation is reached; in addition, a cooldown factor representing the natrium-kalium pumps in the brain is applied.
      • After x timesteps you'll have a pattern of spiking neurons at that time.
    • An output layer which interprets that pattern and uses it for classification.
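    The three layers above can be sketched end to end (a toy sketch with made-up sizes and thresholds, not a tuned model): random input weights play the role of the stones, a random recurrent spiking reservoir plays the role of the water, and the spike pattern after the last timestep is what the output layer would read.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, timesteps = 3, 50, 30

W_in = rng.normal(0.0, 1.0, (n_res, n_in))     # random input -> reservoir ("stones")
W_res = rng.normal(0.0, 0.1, (n_res, n_res))   # random recurrent wiring ("water")

def run_liquid(inputs):
    """Drive the spiking reservoir with an input sequence and return
    the spike pattern present after the final timestep."""
    v = np.zeros(n_res)                # membrane potentials
    spikes = np.zeros(n_res)
    for x in inputs:                   # inputs has shape (timesteps, n_in)
        current = W_in @ x + W_res @ spikes
        v = 0.9 * v + current          # leaky accumulation of activation
        spikes = (v >= 1.0).astype(float)   # fire on reaching threshold...
        v = np.where(spikes > 0, 0.0, v)    # ...then reset ("cooldown")
    return spikes                      # the "wave pattern" the readout sees

pattern = run_liquid(rng.normal(0.0, 1.0, (timesteps, n_in)))
```

    A classifier on top of `pattern` would then be the output layer.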

  • 2021-01-30 10:15

    I just want to add two additional points for other readers. First, the "natrium-kalium" pumps mentioned above are sodium-potassium pumps in English. Second is the relationship between liquid state machines (LSMs) and finite state machines (FSMs), since some readers may already come to this topic with an understanding of finite state machines.

    The relationship between an LSM and an FSM is mostly an analogy. However, the units (neurons) of an LSM can individually be modeled as FSMs with regard to whether or not they fire action potentials (change state). A difficulty with this is that the timing of the state changes of each unit and its neighbors is not fixed. So when we consider the states of all the units and how they change in time, we get an infinite transition table, which puts the LSM in the class of transition systems, not FSMs (maybe this is a little obvious). However, we then add the linear discriminator: a simple deterministic readout layer which is trained to pick out patterns in the LSM corresponding to desired computations. The readout system monitors a subset of units and usually has well-defined temporal rules. In other words, it ignores many state transitions and is sensitive to only a few. This makes it somewhat like an FSM.
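    A minimal version of such a readout can be sketched as a least-squares linear discriminator trained on recorded liquid states (the states and labels here are made up for illustration; a real LSM would supply the spike patterns):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recorded liquid states: each row is a 0/1 spike pattern,
# and the label marks which input class produced it. Here the pattern
# corresponding to the desired computation involves only the first five
# units -- the readout must learn to ignore the rest.
states = rng.integers(0, 2, (200, 20)).astype(float)
labels = (states[:, :5].sum(axis=1) > 2).astype(float)

# Closed-form least-squares readout: simple and deterministic, as above.
X = np.hstack([states, np.ones((200, 1))])   # bias column
w, *_ = np.linalg.lstsq(X, labels, rcond=None)
preds = (X @ w > 0.5).astype(float)
train_accuracy = float(np.mean(preds == labels))
```

    The learned weights `w` are near zero for the ignored units, which is the sense in which the readout is "sensitive to only a few" transitions.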

    You may read that combinations of units in the LSM can form FSMs, such that the readout identifies FSMs "virtually contained within it". This comes from a writer who is thinking about the LSM as a computer model first and foremost (when, in principle, you might elucidate the units and connections comprising a "virtual FSM" and construct an actual analogous FSM). Such a statement can be confusing for anyone thinking about the LSM as a biological system, where it is better to think of the readout as an element which selects and combines features of the LSM in a manner that ignores the high-dimensional variability and produces a reliable, low-dimensional, FSM-like result.
