I have a psychological tic which makes me reluctant to use large libraries (like GLib or Boost) in lower-level languages like C and C++. In my mind, I think:
It depends on how the linker works. Some linkers are lazy and include all of the code in a library; more efficient linkers extract only the needed code from a library. I have had experience with both types.

Smaller libraries raise fewer worries with either type of linker. The worst case with a small library is a small amount of unused code. However, many small libraries may increase the build time, so the trade-off is build time vs. code space.
An interesting test of the linker is the classic Hello World program:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
printf("Hello World\n");
return EXIT_SUCCESS;
}
The printf function has a lot of dependencies due to all the formatting that it may need to perform. A lazy but fast linker may include an entire "standard library" to resolve all the symbols; a more efficient linker will include only printf and its dependencies, which makes the linking slower.

The above program can be compared to this one using puts:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
puts("Hello World");
return EXIT_SUCCESS;
}
Generally, the puts version should be smaller than the printf version, because puts has no formatting needs and thus fewer dependencies. (Note that puts appends a newline itself, so no "\n" is needed to match the printf output.) A lazy linker, however, will generate the same code size for the puts program as for the printf program.
In summary, library size decisions depend largely on the linker, specifically on how efficient it is. When in doubt, many small libraries rely less on the linker's efficiency, but they make the build process more complicated and slower.