While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets" by Hava T. Siegelmann and Eduardo D. Sontag, 1991), the following occurred to me.
I think an important point about the Turing machine is that for any given input and program, the machine will only need a finite amount of tape, assuming it halts at some point. That's why I would say the term "Turing complete" is useful: you only need finite memory to run one specific Turing-complete program on some specific input (if the program halts). But if you have a non-Turing-complete machine/language/technology, it won't be able to simulate certain algorithms, no matter how much memory you add.
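To make the finite-tape point concrete, here is a minimal sketch of a Turing machine simulator (the encoding and the names `delta` and `run` are my own, purely for illustration). The tape is a dict, so only the cells the machine actually visits ever exist; if the machine halts, it has touched only finitely many of them:

```python
def run(delta, tape_input, state="q0", halt="qH", blank="_"):
    """Run a one-tape Turing machine described by delta:
    (state, symbol) -> (next_state, next_symbol, move)."""
    tape = {i: s for i, s in enumerate(tape_input)}
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)           # unvisited cells read as blank
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
    return tape                                  # finite dict: the cells actually used

# Toy machine: invert every bit, halt on the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("qH", "_", "R"),
}
print(run(delta, "0110"))   # {0: '1', 1: '0', 2: '0', 3: '1', 4: '_'}
```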
Basically it means that with a programming language or architecture that is Turing complete, you can execute a wide variety of algorithms -- essentially, any algorithm at all. Non-Turing-complete languages are much more limited in what they can express.
The answer is very simple: if you can emulate a NOR or a NAND gate with it, then it is Turing complete, assuming that the rest (memory and the wiring between gates) is just a matter of combining things together.
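For the gate-emulation half of that claim, the standard constructions are easy to write down (a sketch; note that Turing completeness additionally requires unbounded memory and a way to loop, which is the "combining things together" part):

```python
def nand(a, b):
    return not (a and b)

# Every other Boolean gate falls out of NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (a != b)   # sanity check against the truth table
```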
Turing-completeness of recurrent neural networks could mean: the (finite) transition tables of each and every Turing machine (with a finite-state head and an infinite tape) can be modelled by a finite recurrent neural network (finitely many neurons, each with finitely many states, in particular only two states). The transition tables define three functions:
next-state(current-state, current-symbol)
next-symbol(current-state, current-symbol)
direction(current-state, current-symbol)
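Written out, all three functions fit into a single finite lookup table. For example, for a made-up two-state, two-symbol machine (purely illustrative):

```python
# (current_state, current_symbol) -> (next_state, next_symbol, direction)
transitions = {
    ("s0", 0): ("s0", 1, "R"),
    ("s0", 1): ("s1", 0, "L"),
    ("s1", 0): ("s1", 1, "R"),
    ("s1", 1): ("s0", 0, "L"),
}
```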
This is how a recurrent neural network may perform this task (just a very rough sketch): The green neurons read the symbol in the current cell (in binary representation), the gray neurons (initially mute) determine the current state, the red neurons write the new symbol to the current cell, and the yellow neurons determine whether to go left or right. The blue neurons are the inner neurons (initially mute).
The claim is that for each and every Turing machine there is such a recurrent neural network.
I wonder if there is a systematic way to construct such a network from given transition tables.
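One way to do it, at least as a sketch: one-hot encode states and symbols, dedicate one binary threshold neuron to each (state, symbol) pair, and wire that neuron to the outputs the transition table prescribes. The encoding below is my own illustration (not the construction from the Siegelmann–Sontag paper):

```python
import numpy as np

states  = ["s0", "s1"]
symbols = [0, 1]
table   = {("s0", 0): ("s0", 1, "R"), ("s0", 1): ("s1", 0, "L"),
           ("s1", 0): ("s1", 1, "R"), ("s1", 1): ("s0", 0, "L")}

pairs = [(q, a) for q in states for a in symbols]
step  = lambda x: (x >= 1.5).astype(float)   # threshold neuron: fires iff both inputs fire

# One row per (state, symbol) pair, filled directly from the table.
W_state = np.zeros((len(pairs), len(states)))
W_sym   = np.zeros((len(pairs), len(symbols)))
W_dir   = np.zeros((len(pairs), 2))          # columns: [left, right]
for i, (q, a) in enumerate(pairs):
    nq, na, d = table[(q, a)]
    W_state[i, states.index(nq)] = 1
    W_sym[i, symbols.index(na)]  = 1
    W_dir[i, 0 if d == "L" else 1] = 1

def rnn_step(state_vec, sym_vec):
    # Pair-detector layer: neuron i fires iff its state AND its symbol are both active.
    pair = step(np.array([state_vec[states.index(q)] + sym_vec[symbols.index(a)]
                          for q, a in pairs]))
    # The three functions are then plain linear readouts of the pair layer.
    return pair @ W_state, pair @ W_sym, pair @ W_dir

q = np.array([1.0, 0.0])   # current state s0, one-hot
a = np.array([0.0, 1.0])   # current symbol 1, one-hot
print(rnn_step(q, a))      # -> next state s1, write symbol 0, move left
```

Every neuron here takes only the values 0 and 1, matching the two-state restriction above; running the machine is then just iterating rnn_step while shifting the one-hot symbol input along the tape.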
The point of stating that a mathematical model is Turing complete is to reveal the capability of the model to perform any computation, given a sufficient (i.e., unbounded) amount of resources, not to show whether a specific implementation of the model actually has those resources. Non-Turing-complete models cannot handle certain computations, no matter how many resources they are given; for instance, no finite automaton, however many states it has, can recognize the language of balanced parentheses. This reveals a difference in how the two kinds of models operate, even when their resources are limited. Of course, to prove this property you do have to assume that the models can use an unbounded amount of resources, but the property is relevant even when resources are limited.