As the title says: what's the difference in practice between the inline keyword and the #define preprocessor directive?
Macros (created with #define) are always replaced as written, and can have double-evaluation problems.
inline, on the other hand, is purely advisory: the compiler is free to ignore it. Under the C99 standard, an inline function can also have external linkage, creating a function definition which can be linked against.
#define is a preprocessor tool with macro semantics. Consider the following examples, where max(a,b) is a macro defined as

#define max(a,b) ((a)>(b)?(a):(b))
Ex 1:
val = max(100, GetBloodSample(BS_LDL))
would spill extra innocent blood, because the function is actually called twice. This can mean a significant performance difference in real applications.
Ex 2:
val = max(3, schroedingerCat.GetNumPaws())
This demonstrates a serious difference in program logic: because GetNumPaws() is evaluated twice and may return a different value each time, the expression can unexpectedly return a number less than 3, which the caller would not expect.
Ex 3:
val = max(x, y++)
might increment y more than once.
With inline functions, none of these will happen.
The main reason is that the macro concept targets transparency of implementation (textual code replacement), while inline targets proper language concepts, making the invocation semantics more transparent to the user.
Functions (whether inline or not) and macros fulfill different purposes. Their difference should not be seen as ideological, as some take it, and what is even more important, they may work nicely together.
Macros are text replacement done at preprocessing time, and they can do things like

#define P99_ISSIGNED(T) ((T)-1 < (T)0)

which gives you a compile-time expression of whether or not an integral type is signed. That is, they are ideally used when the type of an expression is not known (at the definition) and you want to do something with it. On the other hand, the pitfall with macros is that their arguments may be evaluated several times, which is bad because of side effects.
Functions (inline or not), on the other hand, are typed, which makes them more strict or, phrased negatively, less flexible. Consider the functions

inline uintmax_t absU(uintmax_t a) { return a; }
inline uintmax_t absS(uintmax_t a) {
  return (-a < a) ? -a : a;
}

The first implements the trivial abs function for an unsigned integral type. The second implements it for a signed type. (Yes, it takes an unsigned argument; this is on purpose.)

We may use these with any integral type. But the return type will always be of the largest width, and there is a certain difficulty in knowing how to choose between the two.
Now with the following macro

#define ABS(T, A) ((T)(P99_ISSIGNED(T) ? absS : absU)(A))

we have implemented an abs that picks the right function for the signedness of T and converts the result back to that type.
(Well, I admit that doing this with abs is a bit artificial, but I hope you get the picture.)
Function-like macros give you absolutely zero sanity checking at the place where they're defined. When you screw up with a macro, it'll work fine in one place and be broken somewhere else, and you won't know why until you've lost several hours of work/sleep time. Function-like macros do not operate on data, they operate on source code. Sometimes this is good, for example when you want reusable debug statements that use the __FILE__ and __LINE__ builtins, but most of the time, it can be done just as well with an inline function, which is checked for syntax at the point of definition just like any other function.
Well, a multi-line #define is harder to write and edit than an inline function. You can define an inline function just like any normal function and can declare variables inside it without problems. Imagine that you want to call a code block several times within another function, and that code block needs its own variables: it's easier to do with inline functions (yes, you can do so with #defines and do { ... } while (0), but it's something you have to think about).
Also, with debugging enabled, you normally get a "simulated" stack frame for inline functions, which can make debugging easier (at least that's what you get when you compile/link with gcc's -g and debug with GDB, IIRC). You can place breakpoints inside your inlined function.
Apart from that, the result should be almost identical, AFAIK.