When should I use __forceinline instead of inline?

前端 · unresolved · 12 answers · 1343 views
旧巷少年郎 asked on 2021-01-01 09:34

Visual Studio includes support for __forceinline. The Microsoft Visual Studio 2005 documentation states:

The __forceinline keyword overrides the cost/benefit analysis and relies on the judgment of the programmer instead.

In what scenario is it assumed that I know better than my compiler on this issue?

12 Answers
  • 2021-01-01 09:34

    I've developed software for limited resource devices for 9 years or so and the only time I've ever seen the need to use __forceinline was in a tight loop where a camera driver needed to copy pixel data from a capture buffer to the device screen. There we could clearly see that the cost of a specific function call really hogged the overlay drawing performance.

  • 2021-01-01 09:36

    A Case For noinline

    I wanted to pitch in with an unusual suggestion and actually vouch for __noinline in MSVC, or the noinline attribute/pragma in GCC and ICC, as the thing to try first, before __forceinline and its equivalents, when staring at profiler hotspots. YMMV, but I've gotten much more mileage (measured improvements) out of telling the compiler what to never inline than what to always inline. It also tends to be far less invasive and can produce much more predictable and understandable hotspots when profiling the changes.

    While it might seem very counter-intuitive and somewhat backward to try to improve performance by telling the compiler what not to inline, I'd claim based on my experience that it's much more harmonious with how optimizing compilers work and far less invasive to their code generation. A detail to keep in mind that's easy to forget is this:

    Inlining a callee can often cause the caller, or the caller's caller, to cease to be inlined.

    This is what makes force inlining a rather invasive change to the code generation, one that can have chaotic results on your profiling sessions. I've even had cases where force inlining a function reused in several places completely reshuffled the top ten hotspots, scattering the highest self-samples around in very confusing ways. Sometimes it got to the point where I felt like I was fighting the optimizer, making one thing faster here only to trade it for a slowdown elsewhere in an equally common use case, especially in cases that are tricky for optimizers, like bytecode interpretation. I've found noinline approaches much easier to use successfully to eradicate a hotspot without exchanging one for another elsewhere.

    It would be possible to inline functions much less invasively if we could inline at the call site instead of deciding whether every single call to a function should be inlined. Unfortunately, I've not found many compilers supporting such a feature besides ICC. If we are reacting to a hotspot, it makes much more sense to me to respond by inlining at the call site rather than making every single call of a particular function forcefully inlined. Lacking wide support for this among most compilers, I've gotten far more successful results with noinline.

    Optimizing With noinline

    So the idea of optimizing with noinline is still with the same goal in mind: to help the optimizer inline our most critical functions. The difference is that instead of trying to tell the compiler what they are by forcefully inlining them, we are doing the opposite and telling the compiler what functions definitely aren't part of the critical execution path by forcefully preventing them from being inlined. We are focusing on identifying the rare-case non-critical paths while leaving the compiler still free to determine what to inline in the critical paths.

    Say you have a loop that executes a million iterations, and there is a function called baz which is called only very rarely in that loop, once every few thousand iterations on average, in response to very unusual user inputs, even though it only has 5 lines of code and no complex expressions. You've already profiled this code, and the profiler shows in the disassembly that calling a function foo, which then calls baz, has the largest number of samples, with lots of samples distributed around calling instructions. The natural temptation might be to force inline foo. I would suggest instead trying to mark baz as noinline and timing the results. I've managed to make certain critical loops execute 3 times faster this way.

    Analyzing the resulting assembly, the speedup came from the foo function now being inlined as a result of no longer inlining baz calls into its body.

    I've often found in cases like these that marking the analogous baz with noinline produces even bigger improvements than force inlining foo. I'm not a computer architecture wizard and don't understand precisely why, but glancing at the disassembly and the distribution of samples in the profiler in such cases, the result of force inlining foo was that the compiler was still inlining the rarely-executed baz on top of foo, making foo more bloated than necessary by inlining rare-case function calls. By simply marking baz with noinline, we allow foo to be inlined when it wasn't before, without also inlining baz.

    Why the extra code resulting from inlining baz as well slowed down the overall function is still not something I understand precisely; in my experience, jump instructions to more distant paths of code always seemed to take more time than closer jumps, but I'm at a loss as to why (maybe something to do with jump instructions taking more time with larger operands, or something to do with the instruction cache). What I can say for sure is that favoring noinline in such cases offered superior performance to force inlining, and also didn't have such disruptive effects on subsequent profiling sessions.

    So anyway, I'd suggest giving noinline a try instead, reaching for it first before force inlining.

    Human vs. Optimizer

    In what scenario is it assumed that I know better than my compiler on this issue?

    I'd refrain from being so bold as to assume. At least I'm not good enough to do that. If anything, I've learned over the years the humbling fact that my assumptions are often wrong once I check and measure things with the profiler. I have gotten past the stage (over a couple of decades of making my profiler my best friend) of taking completely blind stabs in the dark only to face humbling defeat and revert my changes, but at my best I'm still making, at most, educated guesses. Still, I've always known better than my compiler, and hopefully most of us programmers have always known better than our compilers, how our product is supposed to be designed and how it is most likely going to be used by our customers. That at least gives us some edge in understanding the common-case and rare-case branches of code, an edge that compilers don't possess (at least without PGO, and I've never gotten the best results with PGO). Compilers don't possess this type of runtime information and foresight of common-case user inputs. It is when I combine this user-end knowledge with a profiler in hand that I've found the biggest improvements, nudging the optimizer here and there, teaching it things like what to inline or, more commonly in my case, what to never inline.

  • 2021-01-01 09:38

    The compiler is making its decisions based on static code analysis, whereas if you profile, as don says, you are carrying out a dynamic analysis that can reach much farther. The number of calls to a specific piece of code is often largely determined by the context in which it is used, e.g. the data, and profiling a typical set of use cases will reveal this. Personally, I gather this information by enabling profiling on my automated regression tests. In addition to forcing inlines, I have unrolled loops and carried out other manual optimizations on the basis of such data, to good effect. It is also imperative to profile again afterwards, as sometimes your best efforts can actually lead to decreased performance. Again, automation makes this a lot less painful.

    More often than not though, in my experience, tweaking algorithms gives much better results than straight code optimization.

  • 2021-01-01 09:47

    The only way to be sure is to measure performance with and without. Unless you are writing highly performance critical code, this will usually be unnecessary.

  • 2021-01-01 09:47

    When you know that a function is going to be called in one place many times as part of a complicated calculation, then it is a good idea to use __forceinline. For instance, a matrix multiplication for animation may be called so many times that the calls to the function will start to be noticed by your profiler. As the others have said, the compiler can't really know about that, especially in a dynamic situation where the execution of the code is unknown at compile time.

  • 2021-01-01 09:47

    Actually, Boost is loaded with it.

    For example, from the flat_tree implementation in Boost.Container:

     BOOST_CONTAINER_FORCEINLINE flat_tree&  operator=(BOOST_RV_REF(flat_tree) x)
        BOOST_NOEXCEPT_IF( (allocator_traits_type::propagate_on_container_move_assignment::value ||
                            allocator_traits_type::is_always_equal::value) &&
                             boost::container::container_detail::is_nothrow_move_assignable<Compare>::value)
     {  m_data = boost::move(x.m_data); return *this;  }
    
     BOOST_CONTAINER_FORCEINLINE const value_compare &priv_value_comp() const
     { return static_cast<const value_compare &>(this->m_data); }
    
     BOOST_CONTAINER_FORCEINLINE value_compare &priv_value_comp()
     { return static_cast<value_compare &>(this->m_data); }
    
     BOOST_CONTAINER_FORCEINLINE const key_compare &priv_key_comp() const
     { return this->priv_value_comp().get_comp(); }
    
     BOOST_CONTAINER_FORCEINLINE key_compare &priv_key_comp()
     { return this->priv_value_comp().get_comp(); }
    
     public:
     // accessors:
     BOOST_CONTAINER_FORCEINLINE Compare key_comp() const
     { return this->m_data.get_comp(); }
    
     BOOST_CONTAINER_FORCEINLINE value_compare value_comp() const
     { return this->m_data; }
    
     BOOST_CONTAINER_FORCEINLINE allocator_type get_allocator() const
     { return this->m_data.m_vect.get_allocator(); }
    
     BOOST_CONTAINER_FORCEINLINE const stored_allocator_type &get_stored_allocator() const
     {  return this->m_data.m_vect.get_stored_allocator(); }
    
     BOOST_CONTAINER_FORCEINLINE stored_allocator_type &get_stored_allocator()
     {  return this->m_data.m_vect.get_stored_allocator(); }
    
     BOOST_CONTAINER_FORCEINLINE iterator begin()
     { return this->m_data.m_vect.begin(); }
    
     BOOST_CONTAINER_FORCEINLINE const_iterator begin() const
     { return this->cbegin(); }
    
     BOOST_CONTAINER_FORCEINLINE const_iterator cbegin() const
     { return this->m_data.m_vect.begin(); }
    