Does using large libraries inherently make slower code?

情深已故 2021-02-06 20:40

I have a psychological tic which makes me reluctant to use large libraries (like GLib or Boost) in lower-level languages like C and C++. In my mind, I think:

17 Answers
  • 2021-02-06 21:43

    The term I like for frameworks, library sets, and some types of development tools is platform technologies. Platform technologies have costs beyond their impact on code size and performance.

    1. If your project is itself intended to be used as a library or framework, you may end up pushing your platform technology choices on developers that use your library.

    2. If you distribute your project in source form, you may end up pushing platform technology choices on your end users.

    3. If you do not statically link all your chosen frameworks and libraries, you may end up burdening your end users with library versioning issues.

    4. Compile time affects developer productivity. Incremental linking, precompiled headers, proper header dependency management, etc., can help manage compile times, but they do not eliminate the compiler performance problems associated with the massive amounts of inline code some platform technologies introduce. (One common mitigation, the pimpl idiom, is sketched after this list.)

    5. For projects that are distributed as source, compile time affects the end users of the project.

    6. Many platform technologies have their own development environment requirements. These requirements can accumulate, making it difficult and time-consuming for new developers on a project to replicate the environment needed to compile and debug.

    7. Using some platform technologies in effect creates a new programming language for the project. This makes it harder for new developers to contribute.

    All projects have platform technology dependencies, but for many projects there are real benefits to keeping these dependencies to a minimum.
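
    On point 4, a minimal sketch of the pimpl idiom, which keeps a heavy header-only dependency out of every client's compile. The Widget class is illustrative, and Boost.Multiprecision stands in for any heavy header:

        // widget.h -- public header: clients see no heavy includes
        #include <memory>

        class Widget {
        public:
            Widget();
            ~Widget();                    // must be defined where Impl is complete
            void process();
        private:
            struct Impl;                  // forward declaration only
            std::unique_ptr<Impl> pimpl_;
        };

        // widget.cpp -- the only translation unit that pays for the heavy header
        #include "widget.h"
        #include <boost/multiprecision/cpp_int.hpp>  // stand-in for a heavy header-only dependency

        struct Widget::Impl {
            boost::multiprecision::cpp_int counter = 0;
        };

        Widget::Widget() : pimpl_(std::make_unique<Impl>()) {}
        Widget::~Widget() = default;
        void Widget::process() { ++pimpl_->counter; }

    Only widget.cpp recompiles when the heavy dependency changes; clients including widget.h never parse it.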

  • 2021-02-06 21:46
    1. With performance concerns, the thing to do, in general, is not to entertain them up front: acting on them before you know they are a problem is guessing, and guessing is the central idea behind "premature optimization". The thing to do with performance problems is to diagnose them when you have them, and not before. The problems are almost never something you would have guessed. (A minimal sketch of the measure-first step follows this list.)

    2. If you do that a fair amount, you will come to recognize the design approaches that tend to cause performance problems, whether in your code or in a library. (Libraries can certainly have performance problems.) When you learn that and apply it to projects, then in a sense you are optimizing prematurely, but it has the desired effect anyway of avoiding problems. If I can summarize what you will probably learn, it is that too many layers of abstraction and overblown class hierarchies (especially those full of notification-style updating) are very often the cause of performance problems.
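
    A minimal sketch of that measure-first step, with a hypothetical suspectWork function standing in for the code under suspicion. A real diagnosis would use a sampling profiler, but even crude timing beats guessing:

        #include <chrono>
        #include <cmath>
        #include <cstdio>

        // Illustrative stand-in for the code you suspect is slow.
        double suspectWork() {
            double s = 0.0;
            for (int i = 1; i <= 100000; ++i) s += std::sqrt(static_cast<double>(i));
            return s;
        }

        int main() {
            using clock = std::chrono::steady_clock;
            constexpr int reps = 100;
            double sink = 0.0;            // keep the result live so the work isn't optimized away
            const auto start = clock::now();
            for (int i = 0; i < reps; ++i) sink += suspectWork();
            const long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                                     clock::now() - start).count();
            std::printf("avg %lld us per call (sink=%f)\n", us / reps, sink);
        }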

    At the same time, I share your circumspection about 3rd-party libraries and such. Too many times I have worked on projects where some 3rd-party package was "leveraged" for "synergy", and then the vendor either went up in smoke, abandoned the product, or let it go obsolete because Microsoft changed things in the OS. Then our product, which leaned heavily on the 3rd-party package, would stop working, requiring a big expenditure on our part while the original programmers were long gone.

  • 2021-02-06 21:46

    FFTW and ATLAS are two quite large libraries. Oddly enough, they play large roles in the fastest software in the world, applications optimized to run on supercomputers. No, using large libraries doesn't make your code slow, especially when the alternative is implementing FFT or BLAS routines for yourself.
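
    For instance, a minimal sketch of a forward transform with FFTW's C API (assuming FFTW 3 is installed; link with -lfftw3):

        #include <fftw3.h>
        #include <cstdio>

        int main() {
            const int n = 8;
            // FFTW's own allocator guarantees the alignment its SIMD code paths want.
            fftw_complex* in  = fftw_alloc_complex(n);
            fftw_complex* out = fftw_alloc_complex(n);

            for (int i = 0; i < n; ++i) { in[i][0] = i; in[i][1] = 0.0; }  // real ramp, zero imaginary

            // Planning is where FFTW does its heavy optimization work.
            fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
            fftw_execute(plan);

            for (int i = 0; i < n; ++i)
                std::printf("out[%d] = %+.3f %+.3fi\n", i, out[i][0], out[i][1]);

            fftw_destroy_plan(plan);
            fftw_free(in);
            fftw_free(out);
        }

    Matching the performance of that library-backed half-page with a hand-rolled FFT is a research project, not an afternoon.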

  • 2021-02-06 21:47

    From a code-performance perspective, a large library will:

    • occupy more memory, if it has a runtime binary (most of Boost doesn't; it's "header-only"). While the OS will load only the actually used parts of the library into RAM, it can still load more than you need, because the granularity of what is loaded is the page size (though that's only 4 KB on my system).
    • take more time for the dynamic linker to load if, again, it needs a runtime binary. Each time your program starts, the dynamic linker has to match every function you import from the external library with its actual address in memory. That takes some time, but only a little (it does matter at the scale of loading many programs, such as the startup of a desktop environment, but you have no choice there).

      And yes, each call to an external function of a shared (dynamically linked) library will cost one extra jump and a couple of pointer adjustments at runtime; the sketch below makes the resolution step explicit.
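
    The address resolution the dynamic linker performs can be made explicit with dlopen/dlsym; a minimal sketch for a POSIX system, resolving cos from the C math library (the library name libm.so.6 is glibc-specific):

        #include <dlfcn.h>
        #include <cstdio>

        int main() {
            // Ask the dynamic linker to map the shared library into the process.
            void* handle = dlopen("libm.so.6", RTLD_LAZY);
            if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

            // dlsym does by hand what the linker does for every imported symbol:
            // look up a name and hand back its address in memory.
            using cos_fn = double (*)(double);
            auto my_cos = reinterpret_cast<cos_fn>(dlsym(handle, "cos"));
            if (my_cos) std::printf("cos(0) = %f\n", my_cos(0.0));

            dlclose(handle);
        }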

    And from a developer's productivity perspective, it will:

    • add an external dependency. You will be depending on someone else. Even if the library is free software, modifying it will cost you extra effort. Some developers of veeery low-level programs (I'm talking about OS kernels) hate to rely on anyone; that's their professional quirk. Thus the rants.

      However, that can also be considered a benefit. If other people are used to Boost, they will find familiar concepts and terms in your program and will be more effective at understanding and modifying it.

    • Bigger libraries usually bring library-specific concepts that take time to understand. Consider Qt: it has signals and slots and the moc-related infrastructure. Compared to the size of the whole of Qt, learning them takes only a small fraction of the time. But if you use just a small part of such a big library, that can be an issue (a sketch of the signal/slot concept follows).
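
      For example, a minimal sketch of the signal/slot concept, assuming Qt 5 or later and a build set up with qmake or CMake's AUTOMOC so that moc runs (the Counter class is illustrative):

        #include <QObject>
        #include <QDebug>

        class Counter : public QObject {
            Q_OBJECT                          // the macro moc keys on; it generates the signal plumbing
        public:
            void increment() { emit valueChanged(++m_value); }
        signals:
            void valueChanged(int newValue);  // declared here; moc writes the body
        private:
            int m_value = 0;
        };

        // Typical usage:
        //   Counter c;
        //   QObject::connect(&c, &Counter::valueChanged,
        //                    [](int v) { qDebug() << "value is now" << v; });
        //   c.increment();                    // prints: value is now 1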

  • 2021-02-06 21:47

    FWIW, I work on Microsoft Windows, and when we build Windows, builds compiled for SIZE are faster than builds compiled for SPEED, because you take fewer page-fault hits.
