How useful is Turing completeness? Are neural nets Turing complete?

迷失自我 2021-01-30 04:55

While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991) …

11 answers
  • 2021-01-30 05:48

    I think an important point about the Turing machine is that, for any given program and input, the machine will only ever need a finite amount of tape, assuming it halts at some point. That's why I would say the term "Turing complete" is useful: you only need finite memory to run one specific program on one specific input (if the program halts). But if you have a non-Turing-complete machine/language/technology, it won't be able to simulate certain algorithms, no matter how much memory you add.
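
    As a hedged illustration (my own toy code, not part of the original answer): a Turing-machine simulator whose tape is a dict allocates cells only when they are visited, so a halting run demonstrably touches only finitely many of them.

    ```python
    from collections import defaultdict

    def run_tm(transitions, start_state, halt_state, tape_input):
        # transitions maps (state, symbol) -> (next_state, write_symbol, move),
        # where move is -1 (left) or +1 (right).
        tape = defaultdict(lambda: '_')          # '_' is the blank symbol
        for i, sym in enumerate(tape_input):
            tape[i] = sym
        state, head = start_state, 0
        while state != halt_state:
            state, tape[head], move = transitions[(state, tape[head])]
            head += move
        return dict(tape)                        # only the visited cells exist

    # Toy machine: walk right over the 1s and halt on the first blank.
    walker = {('scan', '1'): ('scan', '1', +1),
              ('scan', '_'): ('halt', '_', +1)}
    print(len(run_tm(walker, 'scan', 'halt', '111')))  # -> 4 cells ever touched
    ```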

  • 2021-01-30 05:48

    Basically it means that with a programming language or architecture that is Turing complete
    you can execute a wide variety of algorithms; in fact, essentially any algorithm.

    Non-Turing-complete languages are much more limited in what they can compute.
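
    A hedged sketch of that limitation (my example, not the answerer's): a language offering only bounded `for` loops always halts, so it is not Turing complete; an unbounded search such as the Collatz iteration below needs a genuine `while` loop whose trip count cannot be fixed in advance.

    ```python
    def collatz_steps(n):
        # No bound on the number of iterations is known up front, so this
        # cannot be rewritten as a fixed-trip-count `for` loop.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))  # -> 111
    ```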

  • 2021-01-30 05:50

    The answer is very simple: if you can emulate a NOR or a NAND gate with it, then it is Turing complete, assuming that the rest is just a matter of combining things together.
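
    A minimal sketch of that first step (weights chosen by me for illustration): a single threshold neuron can compute NAND, so networks of such neurons can realize any Boolean circuit; full Turing completeness additionally needs unbounded memory, which is what the "combining things together" caveat covers.

    ```python
    def nand_neuron(a, b):
        # Fire (output 1) iff the weighted sum exceeds the threshold.
        return 1 if (-2 * a) + (-2 * b) + 3 > 0 else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', nand_neuron(a, b))  # outputs 1, 1, 1, 0
    ```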

  • 2021-01-30 05:53

    Turing-completeness of recurrent neural networks could mean: the (finite) transition table of each and every Turing machine (with a finite-state head and an infinite tape) can be modelled by a finite recurrent neural network (finitely many neurons, each with finitely many states, in particular only two states). The transition tables define three functions (a concrete encoding follows the list):

    • next-state(current-state, current-symbol)

    • next-symbol(current-state, current-symbol)

    • direction(current-state, current-symbol)
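
    In Python, with rules invented purely for illustration, the three functions are just three projections of one finite lookup table:

    ```python
    # Hypothetical rules: (state, symbol) -> (next-state, next-symbol, direction)
    transition = {
        ('q0', '0'): ('q1', '1', 'R'),
        ('q0', '1'): ('q0', '1', 'R'),
        ('q1', '0'): ('q0', '0', 'L'),
        ('q1', '1'): ('q1', '0', 'L'),
    }

    def next_state(state, symbol):  return transition[(state, symbol)][0]
    def next_symbol(state, symbol): return transition[(state, symbol)][1]
    def direction(state, symbol):   return transition[(state, symbol)][2]

    print(next_state('q0', '0'), next_symbol('q0', '0'), direction('q0', '0'))  # -> q1 1 R
    ```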

    This is how a recurrent neural network may perform this task (just a very raw sketch):

    [figure: sketch of a recurrent network wired to emulate the transition table]

    The green neurons read the symbol in the current cell (in binary representation), the gray neurons (initially mute) determine the current state, the red neurons write the new symbol to the current cell, and the yellow neurons determine whether to go left or right. The blue neurons are the inner neurons (initially mute).

    The claim is that for each and every Turing machine there is such a recurrent neural network.

    I wonder if there is a systematic way to construct such a network from given transition tables.
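
    One systematic construction, sketched under my own assumptions (one-hot encodings, states and rules invented here): encode each (state, symbol) pair as a one-hot vector; the three functions then become fixed 0/1 matrices, and one step of the machine is a single matrix-vector product, which is exactly the kind of operation one layer of a recurrent network performs.

    ```python
    import numpy as np

    states, symbols, moves = ['q0', 'q1'], ['0', '1'], ['L', 'R']
    transition = {('q0', '0'): ('q1', '1', 'R'), ('q0', '1'): ('q0', '1', 'R'),
                  ('q1', '0'): ('q0', '0', 'L'), ('q1', '1'): ('q1', '0', 'L')}

    # Build one 0/1 matrix per function of the transition table.
    pairs = [(q, s) for q in states for s in symbols]
    W_state  = np.zeros((len(states),  len(pairs)))
    W_symbol = np.zeros((len(symbols), len(pairs)))
    W_move   = np.zeros((len(moves),   len(pairs)))
    for j, (q, s) in enumerate(pairs):
        nq, ns, mv = transition[(q, s)]
        W_state[states.index(nq), j] = 1    # next-state(q, s)
        W_symbol[symbols.index(ns), j] = 1  # next-symbol(q, s)
        W_move[moves.index(mv), j] = 1      # direction(q, s)

    # One machine step = one linear map applied to a one-hot input.
    x = np.zeros(len(pairs))
    x[pairs.index(('q1', '0'))] = 1
    print(states[np.argmax(W_state @ x)],    # -> q0
          symbols[np.argmax(W_symbol @ x)],  # -> 0
          moves[np.argmax(W_move @ x)])      # -> L
    ```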

  • 2021-01-30 05:54

    The point of stating that a mathematical model is Turing complete is to reveal the capability of the model to perform any calculation given a sufficient (i.e. infinite) amount of resources, not to show whether a specific implementation of the model actually has those resources. Non-Turing-complete models cannot handle a certain set of calculations even with unlimited resources, which reveals a difference in how the two kinds of model operate, even when their resources are limited. Of course, to prove this property you do have to assume that the model can use an infinite amount of resources, but the property is relevant even when resources are limited.
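
    A concrete hedged example (my framing, not the answerer's): pure regular expressions describe a non-Turing-complete model, and no amount of memory given to the implementation lets a single regular expression recognize balanced parentheses, whereas a Turing-complete language handles it with a trivial counter.

    ```python
    def balanced(text):
        # A counter suffices here; a finite automaton (pure regex) provably
        # cannot track unbounded nesting depth.
        depth = 0
        for ch in text:
            depth += {'(': 1, ')': -1}.get(ch, 0)
            if depth < 0:
                return False
        return depth == 0

    print(balanced('(()())'), balanced('(()'))  # -> True False
    ```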
