In all the code I see online, programs are always broken up into many smaller files. For all of my projects for school, though, I've gotten by with just one gigantic C source file. Why do people break their programs up into multiple files? Is it just for ease of reading?
The main reasons are:
Maintainability: In large, monolithic programs like what you describe, there's a risk that changing code in one part of the file can have unintended effects somewhere else. Back at my first job, we were tasked with speeding up code that drove a 3D graphical display. It was a single, monolithic, 5000+-line `main` function (not that big in the grand scheme of things, but big enough to be a headache), and every change we made broke an execution path somewhere else. This was badly written code all the way around (`goto`s galore, literally hundreds of separate variables with incredibly informative names like `nv001x`, program structure that read like old-school BASIC, micro-optimizations that didn't do anything but make the code that much harder to read, brittle as hell), but keeping it all in one file made the bad situation worse. We eventually gave up and told the customer we'd either have to rewrite the whole thing from scratch, or they'd have to buy faster hardware. They wound up buying faster hardware.
Reusability: There's no point in writing the same code over and over again. If you come up with a generally useful bit of code (like, say, an XML parsing library, or a generic container), keep it in its own separately compiled source files, and simply link it in when necessary.
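For instance, here's a minimal sketch of what that separation might look like in C (the stack module and all of its names are hypothetical):

```c
/* stack.h -- the public interface; any program that wants a stack
 * includes this header and links against stack.o.
 * (The module and all of its names are hypothetical.) */
#ifndef STACK_H
#define STACK_H

typedef struct {
    int data[64];
    int top;           /* number of elements currently on the stack */
} Stack;

void stack_init(Stack *s);
int  stack_push(Stack *s, int value);   /* 0 on success, -1 if full  */
int  stack_pop(Stack *s, int *value);   /* 0 on success, -1 if empty */

#endif

/* stack.c -- the implementation, compiled once into stack.o */
#include "stack.h"

void stack_init(Stack *s) { s->top = 0; }

int stack_push(Stack *s, int value)
{
    if (s->top == 64)
        return -1;
    s->data[s->top++] = value;
    return 0;
}

int stack_pop(Stack *s, int *value)
{
    if (s->top == 0)
        return -1;
    *value = s->data[--s->top];
    return 0;
}
```

Any program that needs a stack just includes `stack.h` and links against `stack.o`, without caring how the stack is implemented.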
Testability: Breaking functions out into their own separate modules allows you to test those functions in isolation from the rest of the code; you can verify each individual function more easily.
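Continuing with the hypothetical stack module from above, a standalone test driver might look like this; it links against `stack.o` alone, so a failure here points straight at the stack code and nothing else:

```c
/* test_stack.c -- exercises the stack module in isolation.
 * Built as: cc test_stack.c stack.o -o test_stack
 * (A hand-rolled test, just to illustrate the idea.) */
#include <assert.h>
#include <stdio.h>
#include "stack.h"

int main(void)
{
    Stack s;
    int v;

    stack_init(&s);
    assert(stack_pop(&s, &v) == -1);            /* empty stack: pop fails */
    assert(stack_push(&s, 42) == 0);            /* push succeeds          */
    assert(stack_pop(&s, &v) == 0 && v == 42);  /* LIFO round trip        */

    puts("all stack tests passed");
    return 0;
}
```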
Buildability: Okay, so "buildability" isn't a real word, but rebuilding an entire system from scratch every time you change one or two lines can be time-consuming. I've worked on very large systems where complete builds could take several hours. By breaking up your code, you limit the amount of code that has to be rebuilt. Not to mention that any compiler is going to have some limit on the size of the file it can handle. That graphical driver I mentioned above? The first thing we tried in order to speed it up was to compile it with optimizations turned on (starting with -O1). The compiler ate up all available memory, then it ate all the available swap until the kernel panicked and brought down the entire system. We literally could not build that code with any optimization turned on (this was back in the days when 128 MB was a lot of very expensive memory). Had that code been broken up into multiple files (hell, just multiple functions within the same file), we wouldn't have had that problem.
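As a rough sketch of how that plays out in practice, a makefile for the hypothetical stack project above tells `make` which object files depend on which sources, so editing one file recompiles one file instead of the whole program:

```make
# Makefile (sketch, file names hypothetical). Each rule lists what a
# target depends on, so 'make' rebuilds only what actually changed:
# touch stack.c and only stack.o is recompiled before relinking.
CC     = cc
CFLAGS = -O2 -Wall

app: main.o stack.o
	$(CC) -o app main.o stack.o

main.o: main.c stack.h
	$(CC) $(CFLAGS) -c main.c

stack.o: stack.c stack.h
	$(CC) $(CFLAGS) -c stack.c
```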
Parallel Development: There isn't an "ability" word for this, but by breaking source up into multiple files and modules, you can parallelize development. I work on one file, you work on another, someone else works on a third, etc. We don't risk stepping on each other's code that way.