What's the point of telling the compiler specifically to include the file only once? Wouldn't it make sense by default? Is there even any reason to include a single file multiple times?
Including a file multiple times is useful, e.g., with the X-macro technique:
data.inc:
X(ONE)
X(TWO)
X(THREE)
use_data_inc_twice.c:
enum data_e {
#define X(V) V,
#include "data.inc"
#undef X
};
char const* data_e__strings[]={
#define X(V) [V]=#V,
#include "data.inc"
#undef X
};
I don't know of any other use.
There are multiple related questions here:
Why is #pragma once not automatically enforced?
Because there are situations in which you want to include files more than once.
Why would you want to include a file multiple times?
Several reasons have been given in other answers (Boost.Preprocessor, X-Macros, including data files). I would like to add a particular example of "avoid code duplication": OpenFOAM encourages a style where #include-ing bits and pieces within functions is a common concept. See for example this discussion.
Ok, but why is it not the default with an opt-out?
Because it is not actually specified by the standard. #pragmas are by definition implementation-specific extensions.
Why has #pragma once not become a standardized feature yet (as it is widely supported)?
Because pinning down what is "the same file" in a platform-agnostic way is actually surprisingly hard. See this answer for more information.
You can use #include anywhere in a file, not just at global scope: for example, inside a function (and multiple times if needed). Sure, it's ugly and not good style, but it's possible and occasionally sensible (depending on the file you include). If #include were only ever a one-time thing, that wouldn't work. #include just does dumb text substitution (cut'n'paste), after all, and not everything you include has to be a header file. You might, for example, #include a file containing auto-generated raw data to initialize a std::vector. Like:
std::vector<int> data = {
#include "my_generated_data.txt"
};
And have "my_generated_data.txt" be something generated by the build system during compilation.
Or maybe I'm lazy/silly/stupid and put this in a file called "stupid.file" (very contrived example):
const noexcept;
and then I do
class foo {
void f1()
#include "stupid.file"
int f2(int)
#include "stupid.file"
};
Another, slightly less contrived, example would be a source file where many functions need to use a large number of names from a namespace, but you don't want to just say using namespace foo; globally, since that would pollute the global namespace with a lot of other stuff you don't want. So you create a file "foo" containing
using std::vector;
using std::array;
using std::rotate;
... You get the idea ...
And then you do this in your source file
void f1() {
#include "foo" // needs "stuff"
}
void f2() {
// Doesn't need "stuff"
}
void f3() {
#include "foo" // also needs "stuff"
}
Note: I'm not advocating doing things like this. But it is possible and done in some codebases and I don't see why it should not be allowed. It does have its uses.
It could also be that the file you include behaves differently depending on the value of certain macros (#defines). So you may want to include the file in multiple locations, after first having changed some value, so that you get different behaviour in different parts of your source file.
In the firmware for the product I mainly work on, we need to be able to specify where functions and global/static variables should be allocated in memory. Real-time processing needs to live in L1 memory on chip so the processor can access it directly, fast. Less important processing can go in L2 memory on chip. And anything that doesn't need to be handled particularly promptly can live in the external DDR and go through caching, because it doesn't matter if it's a little slower.
The #pragma to allocate where things go is a long, non-trivial line. It'd be easy to get it wrong. The effect of getting it wrong would be that the code/data would be silently put into default (DDR) memory, and the effect of that might be closed-loop control stopping working for no reason that's easy to see.
So I use include files, which contain just that pragma. My code now looks like this.
Header file...
#ifndef HEADERFILE_H
#define HEADERFILE_H
#include "set_fast_storage.h"
/* Declare variables */
#include "set_slow_storage.h"
/* Declare functions for initialisation on startup */
#include "set_fast_storage.h"
/* Declare functions for real-time processing */
#include "set_storage_default.h"
#endif
And source...
#include "headerfile.h"
#include "set_fast_storage.h"
/* Define variables */
#include "set_slow_storage.h"
/* Define functions for initialisation on startup */
#include "set_fast_storage.h"
/* Define functions for real-time processing */
You'll notice multiple inclusions of the same file there, even just in the header. If I mistype something now, the compiler will tell me it can't find the include file "set_fat_storage.h" and I can easily fix it.
So in answer to your question, this is a real, practical example of where multiple inclusion is required.
No, this would significantly hinder the options available to, for example, library writers. For example, Boost.Preprocessor allows one to use pre-processor loops, and the only way to achieve those is by multiple inclusions of the same file.
And Boost.Preprocessor is a building block for many very useful libraries.
You seem to be operating under the assumption that the very purpose of the #include feature's existence in the language is to support decomposing programs into multiple compilation units. That is incorrect.
It can perform that role, but that was not its intended purpose. C was originally developed as a slightly higher-level language than PDP-11 Macro-11 assembly for reimplementing Unix. It had a macro preprocessor because that was a feature of Macro-11. It had the ability to textually include macros from another file because that was a feature of Macro-11 that the existing Unix code they were porting to their new C compiler had made use of.
Now it turns out that #include is useful for separating code into compilation units, as (arguably) a bit of a hack. However, the fact that this hack existed meant that it became The Way It Is Done in C. The fact that a way existed meant no new method ever needed to be created to provide this functionality, so nothing safer (e.g., not vulnerable to multiple inclusion) was ever created. Since it was already in C, it got copied into C++ along with most of the rest of C's syntax and idioms.
There is a proposal for giving C++ a proper module system so that this 45-year-old preprocessor hack can finally be dispensed with. I don't know how imminent this is, though. I've been hearing about it being in the works for more than a decade.