I've tried to find an answer to this using SO. There are a number of questions that list the various pros and cons of building a header-only library in C++, but I haven't been able to find one that quantifies those tradeoffs.
Update
This was Realz Slaw's original answer. His answer above (the accepted one) is his second attempt. I feel that his second attempt answers the question entirely. - Homer6
Well, for comparison, you can look up the idea of a "unity build" (nothing to do with the graphics engine). Basically, a "unity build" is one where you include all the cpp files into a single file and compile them all as one compilation unit. I think this should provide a good comparison, as AFAICT this is equivalent to making your project header-only. You'd be surprised about the 2nd "con" you listed: the whole point of unity builds is to decrease compile times (a sketch of one follows the quote below). Supposedly, unity builds compile faster because they:
...are a way of reducing build over-head (specifically opening and closing files and reducing link times by reducing the number of object files generated) and as such are used to drastically speed up build times.
― altdevblogaday
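For illustration, a unity build is nothing more than one translation unit that includes all of the project's implementation files. A minimal sketch, with hypothetical file names:

```cpp
// unity.cpp -- compile only this file instead of compiling each
// .cpp separately. The whole project becomes a single compilation
// unit, which is roughly what an all-header project amounts to.
#include "physics.cpp"   // hypothetical project sources
#include "renderer.cpp"
#include "input.cpp"
#include "main.cpp"
```

The build then consists of a single compiler invocation producing a single object file to link.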
Compilation time comparison (from here):
Three major references for "unity build":
I assume you want reasons for the pros and cons listed.
Pros for header-only
[...]
3) It may be a lot faster. (quantifiable) The code might be optimized better. The reason is that when the units are separate, a function is just a function call and must be left as one; the compiler knows nothing about the call except the declaration, for example whether the function reads or writes global state.
Furthermore, if the function's internal code is known, it might be worthwhile to inline it (that is, to dump its code directly into the calling function). Inlining avoids the function-call overhead. Inlining also allows a whole host of other optimizations to occur, for example constant propagation: say we call factorial(10). If the compiler doesn't know the code of factorial(), it is forced to leave the call as it is; but if we know the source code of factorial(), we can actually substitute 10 into the function's variables, and if we are lucky we can even end up with the answer at compile time, without running anything at all at runtime. Other optimizations after inlining include dead-code elimination and (possibly) better branch prediction.
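To make the factorial example concrete, here is a minimal sketch; whether the call is actually folded to a constant depends on the compiler and optimization level:

```cpp
// factorial.h -- the definition is visible to every includer,
// so the compiler is free to inline it.
inline int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// caller.cpp
#include "factorial.h"

int answer() {
    // With the body visible, an optimizing compiler can reduce this
    // call to the constant 3628800 at compile time. If factorial()
    // were defined in a separate .cpp, only its declaration would be
    // visible here, and a real call would have to be emitted.
    return factorial(10);
}
```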
4) May give compiler/linker better opportunities for optimization (explanation/quantifiable, if possible)
I think this follows from (3).
Cons for header-only
1) It bloats the code. (quantifiable) (how does that affect both execution time and the memory footprint) Header-only can bloat the code in a few ways that I know of.
The first is template bloat: the compiler instantiates unnecessary templates for types that are never used. This isn't particular to header-only libraries but to templates in general, and modern compilers have improved on this enough to make it a minimal concern.
The second, more obvious way is the (over)inlining of functions. If a large function is inlined everywhere it is used, the calling functions grow in size. This might have been a concern for executable size and executable-image memory years ago, but disk space and RAM have grown enough to make that almost pointless to worry about. The more important issue is that the now-larger function can ruin the instruction cache: it no longer fits, and the cache has to be refilled as the CPU executes through it. Inlining also increases register pressure (there is a limit on the number of registers, the on-CPU storage that the CPU can operate on directly), which means the compiler has to juggle registers in the middle of the now-larger function because there are too many live variables.
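As a sketch of the mechanism (whether a compiler actually inlines such a function depends on its heuristics):

```cpp
#include <vector>

// heavy.h -- a large function defined in a header. If the compiler
// chooses to inline it at every call site, each caller grows by
// roughly the size of this body, and hot code may stop fitting in
// the instruction cache.
inline void process(std::vector<double>& v) {
    for (double& x : v)          // imagine a few hundred lines here
        x = x * 2.0 + 1.0;
}
// a.cpp, b.cpp, c.cpp: each calls process(), and each may end up
// with its own inlined copy of the body.
```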
2) Longer compile times. (quantifiable)
Well, header-only compilation can logically result in longer compile times for many reasons (notwithstanding the performance of "unity builds"; logic isn't necessarily the real world, where other factors get involved). One reason: if an entire project is header-only, we lose incremental builds. Any change in any part of the project means the entire project has to be rebuilt, while with separate compilation units, a change in one cpp file just means that one object file must be rebuilt and the project relinked. A sketch of that layout follows.
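A minimal sketch of the separate-compilation layout that incremental builds rely on (hypothetical names):

```cpp
// widget.h -- interface only; rarely changes.
int widget_count();

// widget.cpp -- implementation. Editing this file means
// recompiling widget.o only, then relinking.
#include "widget.h"
int widget_count() { return 42; }

// main.cpp -- depends only on widget.h, so it is not recompiled
// when widget.cpp changes.
#include "widget.h"
int main() { return widget_count(); }
```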
In my (anecdotal) experience, this is a big hit. Header-only increases performance a lot in some special cases, but productivity wise, it is usually not worth it. When you start getting a larger codebase, compilation time from scratch can take > 10 minutes each time. Recompiling on a tiny change starts getting tiresome. You don't know how many times I forgot a ";" and had to wait 5 mins to hear about it, only to go back and fix it, and then wait another 5 mins to find something else I just introduced by fixing the ";".
Performance is great, but productivity matters more; header-only will waste a large chunk of your time and demotivate/distract you from your programming goal.
Edit: I should mention that interprocedural optimization (see also link-time optimization and whole-program optimization) tries to achieve the optimization advantages of the "unity build". Implementations of this are still a bit shaky in most compilers AFAIK, but eventually it might erase this performance advantage of header-only.
I hope this isn't too similar to what Realz said.
Executable (/object) size: (executable 0% / object up to 50% bigger on header only)
I would assume functions defined in a header file will be copied into every object that includes it. When it comes to generating the executable, I'd say it should be rather easy to cut out duplicate functions (no idea which linkers do/don't do this; I assume most do), so there is (probably) no real difference in executable size, but definitely one in object size. The difference should largely depend on how much code is actually in the headers versus the rest of the project. Not that object size really matters these days, except for link time.
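For instance (a small sketch; the details are linker-specific, but mainstream toolchains do fold these copies):

```cpp
// util.h -- a function defined, not just declared, in the header.
inline int clamp01(int x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

// a.cpp and b.cpp both #include "util.h" and call clamp01().
// If the compiler emits an out-of-line copy (e.g. for a call it
// decides not to inline), each object file carries its own copy
// with weak/COMDAT linkage, so a.o and b.o grow. At link time the
// linker keeps a single copy, so the executable does not.
```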
Runtime: (1%)
I'd say basically identical (a function address is a function address), except for inline functions. I'd expect inline functions to make less than a 1% difference in your average program, because function calls do have some overhead but this is nothing compared to the overhead of actually doing anything with a program.
Memory footprint: (0%)
Same things in the executable = same memory footprint (during runtime), assuming the linker cuts out duplicate functions. If duplicate functions aren't cut out, it can make quite a difference.
Compile time (for both the entire project and after changing one file): (full build: up to 50% faster for either one; single-file change: up to 99% faster for non-header-only)
Huge difference. Changing something in a header file causes everything that includes it to recompile, while a change in a cpp file just requires that one object to be recreated and a re-link. Call it an easy 50% slower full compile for header-only libraries. However, with pre-compiled headers or unity builds, a full compile with header-only libraries would probably be faster; but one change requiring a lot of files to recompile is a huge disadvantage, and I'd say that makes it not worth it. Full recompiles aren't needed often. Also, you can include something in a cpp file but not in its header file (this can happen often; see the sketch below), so in a properly designed program (tree-like dependency structure / modularity), when you change a function declaration or something similar (which always requires changes to the header file), header-only would cause a lot of things to recompile, but with non-header-only you can limit this greatly.
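A sketch of that technique, which keeps a heavy include out of the header (hypothetical names):

```cpp
// parser.h -- exposes the interface without including the heavy
// dependency; a forward declaration is enough for a reference.
class Database;                  // forward declaration
bool parse_into(Database& db);

// parser.cpp -- only the implementation needs the full definition,
// so the heavy header is included here. Files that include parser.h
// do not recompile when database.h changes.
#include "parser.h"
#include "database.h"            // hypothetical heavy header
bool parse_into(Database& db) { /* ... */ return true; }
```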
Link time: (up to 50% faster for header-only)
The objects are likely bigger, thus it would take longer to process them. Probably linearly proportional to how much bigger the files are. From my limited experience in big projects (where compile + link time is long enough to actually matter), link time is almost negligible compared to compile time (unless you keep making small changes and building, then I'd expect you'd feel it, which I suppose can happen often).
Summary (notable points):
Box2D benchmark data: box2d_data_gcc.csv
Botan benchmark data: botan_data_gcc.csv
Box2D SUMMARY (78 Units)
Botan SUMMARY (301 Units)
NICE CHARTS:
Box2D executable size:
Box2D compile/link/build/run time:
Box2D compile/link/build/run max memory usage:
Botan executable size:
Botan compile/link/build/run time:
Botan compile/link/build/run max memory usage:
TL;DR
The projects tested, Box2D and Botan, were chosen because they are potentially computationally expensive, contain a good number of units, and actually had few or no errors compiling as a single unit. Many other projects were attempted but consumed too much time to "fix" into compiling as one unit. The memory footprint is measured by polling the footprint at regular intervals and taking the maximum, and thus might not be fully accurate.
Also, this benchmark does not do automatic header dependency generation (to detect header changes). In a project using a different build system, this may add time to all benchmarks.
There are 3 compilers in the benchmark, each with 5 configurations.
Compilers:

- gcc
- clang
- icc/icpc

Compiler configurations:

- -O3 -march=native
- -Os
- -O3 -flto -march=native with clang and gcc, -O3 -ipo -march=native with icpc/icc
- -Os with -flto/-ipo likewise
I think each of these can have a different bearing on the comparison between single-unit and multi-unit builds. I included LTO/IPO so we can see how the "proper" way to achieve single-unit effectiveness compares.
Explanation of csv fields:

- Test Name - name of the benchmark. Examples: Botan, Box2D.
- Compiler - name of the compiler used. Examples: gcc, icc, clang.
- Compiler Configuration - name of a configuration of compiler options used. Example: gcc opt native.
- Compiler Version String - first line of output of the compiler's version report. Example: g++ --version produces g++ (GCC) 4.6.1 on my system.
- Header only - a value of True if this test case was built as a single unit, False if it was built as a multi-unit project.
- Units - number of units in the test case, even if it is built as a single unit.
- Compile Time, Link Time, Build Time, Run Time - as they sound.
- Re-compile Time AVG, Re-compile Time MAX, Re-link Time AVG, Re-link Time MAX, Re-build Time AVG, Re-build Time MAX - the times across rebuilding the project after touching a single file. Each unit is touched, and for each one the project is rebuilt; the average and maximum times are recorded in these fields.
- Compile Memory, Link Memory, Build Memory, Run Memory, Executable Size - as they sound.

To reproduce the benchmarks:
"units"
- a list of c/cpp/cc
files that make up the units of this project"executable"
- A name of the executable to be compiled."link_libs"
- A space separated list of installed libraries to link to."include_directores"
- A list of directories to include in the project."command"
- optional. special command to execute to run the benchmark. For example, "command": "botan_test --benchmark"
test_base_cases
in run.py with the information for the project, including the data file name.data.csv
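For illustration, an entry in the botan_bench.data file might look something like this; the field values are made up here, and the exact syntax run.py expects may differ:

```
{
    "units": ["./src/block/aes/aes.cpp", "./checks/checks.cpp"],
    "executable": "botan_test",
    "link_libs": "ssl crypto",
    "include_directores": ["./include"],
    "command": "botan_test --benchmark"
}
```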
To produce the bar charts:

- Edit the fields list in chart.py to decide which graphs to produce.
- Run python chart.py data.csv; test.png should now contain the result.

For Botan, I:

- Ran ./configure.py --disable-asm --with-openssl --enable-modules=asn1,benchmark,block,cms,engine,entropy,filters,hash,kdf,mac,bigint,ec_gfp,mp_generic,numbertheory,mutex,rng,ssl,stream,cvc, which generates the header files and Makefile.
- Used grep -o "\./src.*cpp" Makefile and grep -o "\./checks.*" Makefile to obtain the .cpp units and put them into the botan_bench.data file.
- Modified ./checks/checks.cpp to not call the x509 unit tests, and removed the x509 check, because of a conflict between a Botan typedef and openssl.

Benchmarks were run on an Intel(R) Core(TM) i7 CPU Q 720 @ 1.60GHz.