Does Python optimize function calls from loops?

不思量自难忘° 2021-02-04 04:50

Say I have code that calls some function millions of times from a loop, and I want the code to be fast:

def outer_function(file):
    for line in file:
        inner_function(line)


        
5 Answers
  •  挽巷 (OP)  2021-02-04 05:16

    Which Python? PyPy's JIT compiler will - after a few dozen to a few hundred iterations or so, depending on how many opcodes are executed on each iteration - start tracing execution, forget about Python function calls along the way, and compile the gathered information into a piece of optimized machine code which likely doesn't have any remnant of the logic that made the function call itself happen. Traces are linear; the JIT's backend doesn't even know there was a function call, it just sees the instructions from both functions mixed together as they were executed. (This is the perfect case, e.g. when there is no branching in the loop or all iterations take the same branch. Some code is unsuited to this kind of JIT compilation and invalidates the traces quickly, before they yield much speedup, although this is rather rare.)

    Now, CPython, which is what many people mean when they speak of "Python" or the Python interpreter, isn't that clever. It's a straightforward bytecode VM and will dutifully execute the logic associated with calling a function again and again on every iteration. But then again, why are you using an interpreter at all if performance is that important? Consider writing that hot loop in native code (e.g. as a C extension, or in Cython) if it's that important to keep such overhead as low as humanly possible.
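
    To get a feel for how much the call machinery alone costs under CPython, a rough microbenchmark along these lines can help. This is only an illustrative sketch; the names (inner_function, with_call, inlined) and the trivial len()-based workload are made up here, not taken from the question:

    import timeit

    def inner_function(line):
        # Deliberately trivial work, so the call overhead dominates.
        return len(line)

    def with_call(lines):
        total = 0
        for line in lines:
            total += inner_function(line)   # one Python-level call per iteration
        return total

    def inlined(lines):
        total = 0
        for line in lines:
            total += len(line)              # same work, no extra call frame
        return total

    lines = ["some text\n"] * 1_000_000

    print("call   :", timeit.timeit(lambda: with_call(lines), number=5))
    print("inlined:", timeit.timeit(lambda: inlined(lines), number=5))

    On CPython the version with the per-line call is typically measurably slower; under PyPy, once the loop has been traced and compiled, the difference tends to shrink toward noise, matching the description above.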

    Either way, though, unless you're doing only a tiny bit of number crunching per iteration, you won't see large improvements from eliminating the call overhead.
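
    To illustrate that last point, the same comparison with a heavier body per iteration (here an arbitrary regex search; again just a hypothetical sketch, not code from the question) shows the relative cost of the extra call dropping considerably:

    import re
    import timeit

    PATTERN = re.compile(r"\d+")

    def heavy_inner(line):
        # Non-trivial work per line: a regex search instead of a cheap len().
        return 1 if PATTERN.search(line) else 0

    def heavy_with_call(lines):
        total = 0
        for line in lines:
            total += heavy_inner(line)      # call overhead now a small share of the work
        return total

    def heavy_inlined(lines):
        total = 0
        for line in lines:
            total += 1 if PATTERN.search(line) else 0
        return total

    lines = ["record 42: some text\n"] * 200_000

    print("call   :", timeit.timeit(lambda: heavy_with_call(lines), number=5))
    print("inlined:", timeit.timeit(lambda: heavy_inlined(lines), number=5))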
