While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets" by Hava T. Siegelmann and Eduardo D. Sontag, 1991), I had the following thought.
I think an important point about the Turing machine is that for any given program and input, the machine only needs a finite amount of tape, assuming it eventually halts. That's why I would say the term "Turing complete" is still useful: you only need finite memory to run one specific program on one specific input (if the program halts). But a machine/language/technology that is not Turing complete won't be able to simulate certain algorithms, no matter how much memory you add.
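To make the finite-tape point concrete, here is a minimal sketch (my own illustration, not from the paper) of a Turing machine simulator in Python. The tape is a dictionary that grows only when the head visits a new cell, so a halting run can only ever touch finitely many cells, bounded by the input length plus the number of steps taken. The example machine and all names here are hypothetical.

```python
from collections import defaultdict

def run(delta, input_string, start="q0", halt="qH", blank="_"):
    """Simulate a Turing machine. delta maps (state, symbol) to
    (new_state, symbol_to_write, head_move), head_move in {-1, 0, +1}."""
    # The tape is a dict that grows on demand: unvisited cells read as blank.
    tape = defaultdict(lambda: blank, enumerate(input_string))
    state, head, steps = start, 0, 0
    while state != halt:
        state, tape[head], move = delta[(state, tape[head])]
        head += move
        steps += 1
    return tape, steps

# Hypothetical example machine: flip bits left to right, halt on blank.
delta = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qH", "_", 0),
}
tape, steps = run(delta, "0110")
# Halts after 5 steps having touched only 5 cells: the tape actually used
# is finite, bounded by len(input) + number of steps.
print(sorted(tape.items()), "steps:", steps)
```

If the machine never halts on some input, this simulator loops forever and the tape dict can grow without bound; that non-halting case is exactly where unbounded memory matters and where the distinction from a fixed-memory machine shows up.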