I have a psychological tic which makes me reluctant to use large libraries (like GLib or Boost) in lower-level languages like C and C++. In my mind, I think: big library means slow code.
Bigger doesn't inherently imply slower. Contrary to some of the other answers, there's also no inherent performance difference between libraries that live entirely in headers and libraries that are compiled into object files.
Header-only libraries can have an indirect advantage. Most template-based libraries have to be header-only (or a lot of the code ends up in headers anyway), and templates give the compiler a lot of opportunities for optimization, because it sees the full definition at every call site. Taking a typical non-template, object-file library and moving all of its code into headers, however, usually won't accomplish much (and can lead to code bloat).
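To make that concrete, here's a minimal sketch of the difference the compiler sees (the names are my own invention, not from any real library):

```cpp
// clamp.hpp -- header-only: every caller sees the full definition,
// so the compiler can inline it and fold away constant arguments.
template <typename T>
T clamp_value(T v, T lo, T hi) {
    return v < lo ? lo : (hi < v ? hi : v);
}

// clamp.cpp -- the same logic compiled into an object file. Other
// translation units see only a declaration, so the compiler can't
// inline it there, though link-time optimization can recover much
// of the difference -- which is why neither layout is inherently
// faster.
int clamp_value_obj(int v, int lo, int hi) {
    return v < lo ? lo : (hi < v ? hi : v);
}
```

With optimization enabled, a call like `clamp_value(x, 0, 255)` typically collapses to a couple of instructions at the call site; the object-file version pays a real function call unless the linker optimizes across translation units.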
The real answer for a particular library will usually depend on its overall structure. It's easy to think of "Boost" as something huge. In fact, it's a huge collection of libraries, most of which are individually quite small. You can't say very much (meaningfully) about Boost as a whole, because the individual libraries are written by different people, with different techniques, goals, etc. A few of them (e.g. Format, Assign) really are slower than almost anything you'd be likely to write on your own. Others (e.g. Pool) provide things you could do yourself, but probably won't, to get at least minor speed improvements. A few (e.g. uBlas) use heavy-duty template magic to run faster than all but a tiny percentage of us could hope to achieve on our own.
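To give a taste of the Pool case, here's roughly what using `boost::object_pool` looks like (`Node` is a made-up example type; treat this as a sketch, not a benchmark). The pool hands out objects from a few large blocks instead of making one general-purpose heap call per object, which is where the (usually minor) speedup comes from:

```cpp
#include <boost/pool/object_pool.hpp>

struct Node {
    int   value;
    Node* next;
    explicit Node(int v) : value(v), next(nullptr) {}
};

int main() {
    boost::object_pool<Node> pool;

    // construct() allocates from the pool's internal blocks and runs
    // Node's constructor -- no per-object call to operator new.
    Node* head = pool.construct(1);
    head->next = pool.construct(2);

    pool.destroy(head->next);  // runs ~Node and returns the slot to the pool
    pool.destroy(head);
    // Anything not destroyed explicitly is released when the pool itself is.
}
```

Whether this actually beats your default allocator depends on your allocation pattern and platform, which is exactly the "at least minor speed improvements" caveat above.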
There are, of course, quite a few libraries that really are individually large, and in many cases these really are slower than what you'd write yourself. In particular, many (most?) of them attempt to be much more general than almost anything you'd be at all likely to write on your own. While that doesn't necessarily lead to slower code, there's definitely a strong tendency in that direction. As with a lot of other code, when you're developing libraries commercially, customers tend to be a lot more interested in features than in things like size or speed.
Some libraries also devote a lot of space and code (and often at least bits of time) to solving problems you may very well not care about at all. Just for example, years ago I used an image processing library. Its support for 200+ image formats sounded really impressive (and in a way it really was), but I'm pretty sure I never used it to deal with more than about a dozen formats (and I could probably have gotten by supporting only half that many). OTOH, even with all that it was still pretty fast. Supporting fewer formats might have shrunk their market, and with it the revenue that paid for optimization work, to the point that the code would actually have been slower (just for example, it handled JPEGs faster than IJG's libjpeg).