I would strongly recommend NOT adopting the practice you describe (forbidding binaries in source control). In fact, I would call it an organizational anti-pattern.
The single most important rule is:
You should be able to check out a project on a new machine and have it compile out of the box.
If this can be done via NuGet, fine. If not, check in the binaries. If legal/license issues prevent that, you should at least have a text file (named how_to_compile.txt or similar) in your repo that contains all the required information.
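To make "compiles out of the box" something you can actually verify, one option is a small check that runs right after checkout and confirms every third-party binary the project expects is present in the working copy. Here is a minimal sketch in Python; the `lib/` layout and the `dependencies.txt` manifest are assumptions made up for the example, not part of any standard tooling:

```python
#!/usr/bin/env python3
"""Verify that all checked-in binary dependencies are present after checkout.

Assumed (hypothetical) layout: binaries live under the repo root and are
listed, one relative path per line, in dependencies.txt.
"""
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent
MANIFEST = REPO_ROOT / "dependencies.txt"

def main() -> int:
    missing = []
    for line in MANIFEST.read_text().splitlines():
        rel = line.strip()
        if not rel or rel.startswith("#"):
            continue  # skip blank lines and comments
        if not (REPO_ROOT / rel).is_file():
            missing.append(rel)
    if missing:
        print("Missing binary dependencies (is the checkout complete?):")
        for rel in missing:
            print("  " + rel)
        return 1
    print("All binary dependencies present; the project should build out of the box.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run it as the first step of the build; if it fails, the checkout itself is incomplete and no amount of compiling will help.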
Another very strong reason to do it this way is to avoid versioning problems. Without the binaries in the repo, do you know
- which exact version of a given library was in use some years ago,
- whether that REALLY was the version actually used in the project at the time, and
- probably most important: how to get hold of that exact version again?
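One way to be able to answer all three questions is to commit, next to the binaries, a manifest recording the exact hash of every library; the VCS history then documents precisely which version was in use at any revision. A rough sketch, assuming a hypothetical `lib/` folder of DLLs:

```python
#!/usr/bin/env python3
"""Write a manifest with the SHA-256 of every checked-in binary in lib/.

Committing this manifest alongside the binaries means every historical
revision records exactly which library versions were in use.
The lib/ path and the .dll extension are assumptions for the example.
"""
import hashlib
from pathlib import Path

LIB_DIR = Path("lib")
MANIFEST = Path("lib_manifest.txt")

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> None:
    lines = []
    for binary in sorted(LIB_DIR.rglob("*.dll")):
        lines.append(f"{sha256_of(binary)}  {binary.as_posix()}")
    MANIFEST.write_text("\n".join(lines) + "\n")
    print(f"Recorded {len(lines)} binaries in {MANIFEST}")

if __name__ == "__main__":
    main()
```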
Some responses to the other arguments usually made for forbidding binaries:
- Checking in binaries greatly facilitates build automation (it does not hinder it): the build system can get everything it needs from the VCS without further ado, whereas the other way around there are always manual steps involved (see the sketch after this list).
- Performance considerations are completely irrelevant as long as you work on an intranet, and of only very minor relevance with a web-based repository (I suppose we're talking about no more than, say, 30-40 MB, which is not a big deal for today's bandwidths).
- No functionality at all is lost by checking in binaries; the claim that it is simply isn't true.
- It is also not true that normal commits etc. become slower. Commits are only slower when the large binaries themselves are added or changed, which usually happens only once.
- And if you have your binary dependencies checked in, you have at least some control over them. If you don't, you have none at all, which surely carries a much higher likelihood of errors...
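To illustrate the build automation point above: with the binaries in the repository, the whole build can be a single self-contained step that needs nothing beyond the checkout. A hedged sketch; `check_dependencies.py` (the checker from earlier) and `MySolution.sln` are hypothetical names, and the point is only that no manual download or install happens between checkout and build:

```python
#!/usr/bin/env python3
"""Single-step build driver: everything it needs comes from the checkout."""
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort if any step fails

def main() -> None:
    # 1. Verify the checked-in binaries are all there (see earlier sketch).
    run([sys.executable, "check_dependencies.py"])
    # 2. Build straight from the working copy; no external fetch involved.
    run(["msbuild", "MySolution.sln", "/p:Configuration=Release"])

if __name__ == "__main__":
    main()
```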