I am currently learning about Liquid State Machines (LSMs), and I am trying to understand how they are used for learning.
I am pretty confused by what I have read on the web.
From your questions, it seems that you are on the right track. That said, Liquid State Machines and Echo State Networks are complex topics that sit at the intersection of computational neuroscience, physics, and machine learning, touching on chaos, dynamical systems, and feedback systems. So it's OK if you feel like it's hard to wrap your head around.
To answer your questions:
Regarding LIF (Leaky Integrate-and-Fire) neurons: as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically inspired spiking neurons, while the Echo State Network uses more analog units. So, in terms of "very short-term memory": in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual neuron reacts based only on its current input, and the memory is therefore stored in the activity between the units.
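To make that unit-level contrast concrete, here is a minimal sketch (names and parameter values are my own illustration, not from any specific paper): a LIF unit carries memory in its membrane potential across time steps, while an echo-state-style tanh unit, taken on its own, has no such internal state.

```python
import numpy as np

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire unit.

    The membrane potential v integrates input and leaks over time,
    so the unit's own history persists inside it.
    """
    v = leak * v + input_current
    if v >= threshold:
        return 0.0, 1.0   # fire a spike and reset the potential
    return v, 0.0

def esn_unit(input_current):
    """An echo-state-style unit: its output depends only on the current input."""
    return np.tanh(input_current)

# The LIF unit responds differently to the *same* input depending on history:
v = 0.0
spikes = []
for t in range(5):
    v, s = lif_step(v, 0.4)   # constant input at every step
    spikes.append(s)
print(spikes)  # → [0.0, 0.0, 1.0, 0.0, 0.0]: the spike appears only after charge accumulates
```

The spike at step three only happens because of what arrived at steps one and two; `esn_unit(0.4)` would return the same value every time.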
To understand LSMs you first have to understand the analogy with a liquid. Consider the following image:
To model this behavior, the LSM has:
A reservoir of randomly connected neurons. These represent the water, which interacts with your stones in a specific way.
An output layer which interprets that pattern and uses it for classification.
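The two-part architecture above can be sketched in a few lines. This is a hedged, echo-state-style toy (analog tanh units rather than spiking neurons, for brevity; all sizes and weight scales are arbitrary choices of mine): a random recurrent reservoir is "the water", the inputs are "the stones", and different input sequences leave different ripple patterns for a readout to interpret.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_in = 50, 1
W_res = 0.1 * rng.normal(size=(n_res, n_res))  # random recurrent weights: "the water"
W_in = rng.normal(size=(n_res, n_in))          # input weights: where "the stones" land

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; return the final state (the 'ripples')."""
    x = np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W_res @ x + W_in @ np.atleast_1d(u))
    return x

# Two different input sequences leave distinguishable ripple patterns:
state_a = run_reservoir([1.0, 0.0, 0.0])
state_b = run_reservoir([0.0, 0.0, 1.0])
print(np.allclose(state_a, state_b))  # False: a readout layer can tell them apart
```

The key point is that the reservoir itself is never trained; only the output layer that reads the ripple pattern is.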
I just want to add two points for other readers. First, "natrium-kalium" pumps are sodium-potassium pumps in English. Second, the relationship between liquid state machines (LSMs) and finite state machines (FSMs), since some readers may already come with an understanding of finite state machines.
The relationship between LSMs and FSMs is mostly a mere analogy. However, the units (neurons) of an LSM can be individually modeled as FSMs with regard to whether or not they fire action potentials (change state). A difficulty with this is that the timing of the state changes of each unit and its neighbors is not fixed. So when we consider the states of all the units and how they change in time, we get an infinite transition table, which puts the LSM in the class of a transition system, not an FSM (maybe this is a little obvious). However, we then add the linear discriminator: a simple deterministic readout layer which is trained to pick out patterns in the LSM corresponding to desired computations. The readout system monitors a subset of units and usually has well-defined temporal rules. In other words, it ignores many state transitions and is sensitive to only a few. This makes it somewhat like an FSM.
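Here is a hedged sketch of training that linear discriminator. The reservoir states are stand-ins (random vectors with labels generated from a hidden linear rule, entirely my own construction); in a real LSM they would be recorded unit activities. The readout is fit by ordinary least squares, which is the only training step in the whole system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recorded reservoir states: one row per time window, one column per monitored unit.
# (Random stand-ins here; in practice these come from the LSM itself.)
n_samples, n_units = 200, 30
X = rng.normal(size=(n_samples, n_units))

# Labels the readout must learn, generated from a hidden linear rule for this demo.
w_hidden = rng.normal(size=n_units)
y = np.sign(X @ w_hidden)

# Fit the linear readout weights by least squares; the reservoir stays untouched.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)  # a simple linear readout recovers the pattern well
```

Because the discriminator only weights and thresholds a few monitored signals, it collapses the reservoir's enormous state space into a small, reliable set of output states, which is exactly the FSM-like behavior described above.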
You may read that combinations of units in the LSM can form FSMs, such that the readout identifies FSMs "virtually contained within it". This comes from a writer who is thinking about the LSM as a computer model first and foremost (when in principle you might elucidate the units and connections comprising a "virtual FSM" and construct an actual analogous FSM). Such a statement can be confusing for anyone thinking about the LSM as a biological system, where it is better to think about the readout as an element which selects and combines features of the LSM in a manner that ignores the high-dimensional variability and produces a reliable, low-dimensional, FSM-like result.