While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I wondered what Turing-completeness of recurrent neural networks could mean. One reading is: the (finite) transition table of each and every Turing machine (with a finite-state head and an infinite tape) can be modelled by a finite recurrent neural network (finitely many neurons, each with finitely many states, e.g. only two). The transition table defines three functions:
next-state(current-state, current-symbol)
next-symbol(current-state, current-symbol)
direction(current-state, current-symbol)
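To make the three functions concrete, here is a toy transition table in Python. The tiny machine (it just skips to the end of a block of 1s, appends a 1, and halts) is something I made up for illustration, not taken from the cited paper:

```python
# Toy Turing machine, purely illustrative.
# States: 'q0' (scan right), 'halt'. Symbols: 0, 1 (0 doubles as blank).

TRANSITIONS = {
    # (current-state, current-symbol): (next-state, next-symbol, direction)
    ('q0', 1): ('q0',   1, +1),   # skip over 1s, moving right
    ('q0', 0): ('halt', 1,  0),   # first blank: write a 1 and stop
}

def next_state(state, symbol):  return TRANSITIONS[(state, symbol)][0]
def next_symbol(state, symbol): return TRANSITIONS[(state, symbol)][1]
def direction(state, symbol):   return TRANSITIONS[(state, symbol)][2]
```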
This is how a recurrent neural network may perform this task (just a very raw sketch):
The green neurons read the symbol in the current cell (in binary representation), the gray neurons (initially mute) encode the current state, the red neurons write the new symbol to the current cell, and the yellow neurons determine whether to go left or right. The blue neurons are the inner neurons (initially mute).
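Since the figure cannot carry all the details, here is how I read its dataflow, expressed with the toy table defined above (my own encoding choices, not the papers' construction): each colored group holds a small value, and one synchronous update computes the three table functions.

```python
# One step of the sketched network, group by group (toy table from above).
green = 1        # symbol read from the current cell (here: a single bit)
gray  = 'q0'     # current state, held by the (initially mute) state neurons

# The blue inner neurons combine state and symbol ...
blue = (gray, green)

# ... and drive the three output groups via the table functions:
red    = next_symbol(*blue)   # symbol written back to the current cell
yellow = direction(*blue)     # +1 = right, -1 = left, 0 = stay
gray   = next_state(*blue)    # state carried into the next step

print(red, yellow, gray)      # -> 1 1 q0
```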
The claim is that for every Turing machine there is such a recurrent neural network.
I wonder if there is a systematic way to construct such a network from a given transition table.
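For what it's worth, here is one construction I can imagine (my own sketch, not from the cited papers): one-hot-encode states and symbols, give each table row a "blue" detector neuron that fires exactly for its (state, symbol) pair, and let the gray/red/yellow neurons be OR-units over those detectors. All units are binary threshold neurons, so two states per neuron suffice. This only compiles the finite control for a single step; feeding the gray output back as the next gray input and wiring up the tape is the harder part that the cited papers address.

```python
import numpy as np

def build_step(transitions, states, symbols):
    """Compile a transition table into binary threshold neurons,
    one layer per colored group. Every unit computes step(W @ x - theta)."""
    S, K, R = len(states), len(symbols), len(transitions)
    rows = list(transitions.items())

    # Blue layer: one detector per table row, firing iff its
    # (state, symbol) pair is active (AND of two one-hot bits).
    W_blue = np.zeros((R, S + K))
    for i, ((q, a), _) in enumerate(rows):
        W_blue[i, states.index(q)] = 1
        W_blue[i, S + symbols.index(a)] = 1
    theta_blue = 2                      # both inputs must be on

    # Output layers: OR over the detectors whose row demands them.
    W_gray   = np.zeros((S, R))         # next state (one-hot)
    W_red    = np.zeros((K, R))         # next symbol (one-hot)
    W_yellow = np.zeros((2, R))         # [left, right] move bits
    for i, (_, (q2, a2, d)) in enumerate(rows):
        W_gray[states.index(q2), i] = 1
        W_red[symbols.index(a2), i] = 1
        if d < 0: W_yellow[0, i] = 1
        if d > 0: W_yellow[1, i] = 1

    def step(state, symbol):
        x = np.zeros(S + K)             # one-hot gray + green input
        x[states.index(state)] = 1
        x[S + symbols.index(symbol)] = 1
        blue = (W_blue @ x >= theta_blue).astype(int)   # inner neurons
        fire = lambda W: (W @ blue >= 1).astype(int)    # OR-neurons
        return fire(W_gray), fire(W_red), fire(W_yellow)

    return step

# Usage with the toy table above:
step = build_step(TRANSITIONS, states=['q0', 'halt'], symbols=[0, 1])
print(step('q0', 1))   # one-hot next state, next symbol, [left, right]
```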