I've just started experimenting with SDL in C++, and I thought checking for memory leaks regularly might be a good habit to form early on.
With this in mind, I've been running my 'Hello world' programs through Valgrind to catch any leaks, and although I've removed everything except the most basic SDL_Init() and SDL_Quit() statements, Valgrind still reports 120 bytes lost and 77k still reachable.
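For concreteness, the test program is essentially just the following (a minimal sketch; the exact include path and Valgrind invocation are simply how I happen to build and run it):

    #include <SDL.h>

    /* Nothing but the most basic init and quit. */
    int main(int argc, char* argv[]) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;
        SDL_Quit();
        return 0;
    }

run under Valgrind with something like:

    valgrind --leak-check=full ./hello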
Be careful that Valgrind isn't picking up false positives in its measurements.
Many naive memory analyzers flag lost memory as a leak when it isn't really one.
Maybe have a read of some of the papers in the external links section of the Wikipedia article on Purify. I know that the documentation that comes with Purify describes several scenarios where you get false positives when trying to detect memory leaks and then goes on to describe the techniques Purify uses to get around the issues.
BTW I'm not affiliated with IBM in any way. I've just used Purify extensively and will vouch for its effectiveness.
Edit: Here's an excellent introductory article covering memory monitoring. It's Purify-specific, but the discussion of the types of memory errors is very interesting.
HTH.
cheers,
Rob
If you are really worried about memory leaks, you will need to do some calculations.
You need to test your application for, say, an hour and then calculate the leaked memory. That way, you get a leaked-memory bytes/minute value.
Now, you will need to estimate the average session length of your program. For example, for notepad.exe, 15 minutes sounds like a good estimate to me.
If (average session length) * (leaked bytes/minute) > 0.3 * (memory space normally occupied by your process), then you should probably put more effort into reducing memory leaks. I just made up 0.3; use common sense to determine your acceptable threshold.
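As a rough sketch of that back-of-the-envelope check (every figure below is invented for illustration; plug in your own measurements):

    #include <iostream>

    int main() {
        // Hypothetical numbers; replace with values measured from a
        // long test run of your own application.
        const double leakedBytesPerMinute = 512.0;  // measured leak rate
        const double avgSessionMinutes    = 15.0;   // estimated session length
        const double normalFootprintBytes = 2.0e6;  // typical memory footprint
        const double threshold            = 0.3;    // made-up tolerance factor

        const double projectedLeak = leakedBytesPerMinute * avgSessionMinutes;
        if (projectedLeak > threshold * normalFootprintBytes)
            std::cout << "Worth putting more effort into the leaks.\n";
        else
            std::cout << "Probably tolerable for a typical session.\n";
        return 0;
    }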
Remember that an important aspect of being a programmer is being a software engineer, and very often engineering is about choosing the least bad option from two or more bad options. Maths always comes in handy when you need to measure how bad an option actually is.
First of all, memory leaks are only a serious problem when they grow over time; otherwise the app just looks a little bigger from the outside (obviously there's a limit here too, hence the 'serious'). When you have a leak that grows with time, you might be in trouble. How much trouble depends on the circumstances, though. If you know where the memory is going and can make sure that you'll always have enough memory to run the program and everything else on that machine, you're still somewhat fine. If you don't know where the memory is going, however, I wouldn't ship the program and would keep digging.
With SDL on Linux in particular, there seem to be some leaks in the underlying X11 libraries. There's not much you can do about those (unless you want to try to fix the libraries themselves, which is probably not for the faint-hearted).
You can use Valgrind's suppression mechanism (see --suppressions and --gen-suppressions in the Valgrind man page) to tell it not to bother you with these errors.
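For example, a suppression entry for leaks reported inside libX11 might look something like this (a sketch only; real entries are best generated with --gen-suppressions=all, since the exact stack frames will differ on your system):

    # Hypothetical suppression for leaks whose stacks end up in libX11.
    {
       ignore_x11_leaks
       Memcheck:Leak
       fun:malloc
       ...
       obj:*/libX11.so*
    }

You would then run something like:

    valgrind --leak-check=full --suppressions=x11.supp ./myprogram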
In general, we do have to be a little more lenient with third-party libraries: while we should absolutely not accept memory leaks in our own code, and the presence of memory leaks should be a factor when choosing between alternative third-party libraries, sometimes there's no choice but to ignore them (though it may be a good idea to report them to the library maintainer).
> It does look like SDL developers don't use Valgrind, but I basically only care about those 120 bytes lost.
> With this in mind, I've been running my 'Hello world' programs through Valgrind to catch any leaks, and although I've removed everything except the most basic SDL_Init() and SDL_Quit() statements, Valgrind still reports 120 bytes lost and 77k still reachable.
Well, with Valgrind, "still reachable memory" is often not really leaked memory, especially in such a simple program. I can safely bet that there is basically no allocation in SDL_Quit(), so the "leaks" are just structures allocated once by SDL_Init().
Try adding useful work and seeing whether those amounts increase; try making a loop of useful work (like creating and destroying some SDL structure) and see whether the number of leaked bytes grows with the number of iterations. In the latter case, you should check the stack traces of the leaks and fix them.
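For instance, a loop like the following (a sketch using the classic SDL 1.x surface calls; the iteration count and surface parameters are arbitrary) makes the comparison easy: run it under Valgrind with ITERATIONS set to 1 and then to 1000, and see whether the leaked bytes scale accordingly.

    #include <SDL.h>

    int main(int argc, char* argv[]) {
        const int ITERATIONS = 1000;  /* arbitrary; vary this between runs */

        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        for (int i = 0; i < ITERATIONS; ++i) {
            /* Create and immediately destroy an off-screen surface. */
            SDL_Surface* s =
                SDL_CreateRGBSurface(SDL_SWSURFACE, 64, 64, 32, 0, 0, 0, 0);
            if (s != NULL)
                SDL_FreeSurface(s);
        }

        SDL_Quit();
        return 0;
    }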
If instead the numbers stay flat, those 77k of still-reachable memory count as "memory which should be freed at program end, but which SDL relies on the OS to free instead".
So, actually, I'm more worried right now by those 120 bytes, if they are not false positives; and such false positives are usually rare. False positives with Valgrind are mostly cases where the use of uninitialized memory is intentional (for instance, because it is actually padding).
You have to be careful with the definition of "memory leak". Something which is allocated once on first use, and freed on program exit, will sometimes be flagged by a leak detector, because the detector started counting before that first use. But it's not a leak (although it may be bad design, since it may be some kind of global).
To see whether a given chunk of code leaks, you might reasonably run it once, then clear the leak-detector, then run it again (this of course requires programmatic control of the leak detector). Things which "leak" once per run of the program usually don't matter. Things which "leak" every time they're executed usually do matter eventually.
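As a toy illustration of that run/clear/run idea, here is a hypothetical byte-counting allocator; it is far cruder than a real leak detector (it ignores over-aligned types, operator new[], and thread safety), but it shows the snapshot-and-compare pattern:

    #include <cstddef>
    #include <cstdlib>
    #include <iostream>
    #include <new>

    // Global live-byte counter fed by replaced operator new/delete.
    static std::size_t g_liveBytes = 0;

    void* operator new(std::size_t n) {
        // Stash the size just before the block so delete can subtract it.
        auto* p = static_cast<std::size_t*>(
            std::malloc(n + sizeof(std::size_t)));
        if (p == nullptr) throw std::bad_alloc();
        *p = n;
        g_liveBytes += n;
        return p + 1;
    }

    void operator delete(void* q) noexcept {
        if (q == nullptr) return;
        auto* p = static_cast<std::size_t*>(q) - 1;
        g_liveBytes -= *p;
        std::free(p);
    }

    static int* g_cache = nullptr;

    void chunk() {
        if (g_cache == nullptr)
            g_cache = new int(42);  // allocated once per program run: not
                                    // a real leak by the argument above
        int* perCall = new int(7);  // never freed: a real per-execution leak
        (void)perCall;
    }

    int main() {
        chunk();                                   // first run: one-off setup happens
        const std::size_t baseline = g_liveBytes;  // "clear" the detector
        chunk();                                   // second run
        const std::size_t perRun = g_liveBytes - baseline;
        std::cout << "per-run leak: " << perRun << " bytes\n";  // sizeof(int)
        return 0;
    }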
I've rarely found it too difficult to hit zero on this metric, which is equivalent to observing creeping memory usage as opposed to lost blocks. I had one library where it got so fiddly, with caches and UI furniture and whatnot, that I just ran my test suite three times over, and ignored any "leaks" which didn't occur in multiples of three blocks. I still caught all or almost all the real leaks, and analysed the tricky reports once I'd got the low-hanging fruit out of the way. Of course the weaknesses of using the test suite for this purpose are (1) you can only use the parts of it that don't require a new process, and (2) most of the leaks you find are the fault of the test code, not the library code...