I, and I think many others, have had great success using smart pointers to wrap up unsafe memory operations in C++, using things like RAII, et cetera. However, wrapping memory operations this way is much harder in plain C, which lacks the language support that smart pointers rely on.
Static code analysis tools like splint or Gimpel PC-Lint may help here -- you can even make these moderately "preventative" by wiring them into your automatic "continuous-integration" style build server. (You do have one of those, right? :grin:)
There are other (some more expensive) variants on this theme too...
If you are coding in Win32 you might be able to use structured exception handling to accomplish something similar. You could do something like this:
void foo()
{
    myType *pFoo = NULL;
    __try
    {
        pFoo = malloc(sizeof(myType));
        // do some stuff
    }
    __finally
    {
        free(pFoo);
    }
}
While not quite as easy as RAII, you can collect all of your cleanup code in one place and guarantee that it is executed.
OK, so here are your options. Ideally, you combine them for better results. In the case of C, paranoia is fine.
Compile-time: static analysis (tools such as splint or PC-Lint, mentioned above) and your compiler's warnings turned all the way up.
Runtime: memory checkers such as Valgrind (a session is shown further down), run as part of your tests.
You cannot do smart pointers in C because the language does not provide the necessary support (destructors, operator overloading), but you can avoid leaks with discipline: write the resource-release code immediately after you allocate. So whenever you write a malloc, write the corresponding free in a cleanup section at the same time.
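A minimal sketch of that habit (the processing in the middle is just a placeholder):

#include <stdlib.h>

int process(size_t n)
{
    int *buf = malloc(n * sizeof *buf);   // the free below was written at the same time
    if (!buf)
        return -1;

    // ... do the actual work with buf ...

    free(buf);
    return 0;
}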
In C I see the 'GOTO cleanup' pattern a lot:
int foo()
{
    int *resource = malloc(1000);
    int retVal = 0;

    //...

    if (time_to_exit())
    {
        retVal = 123;
        goto cleanup;
    }

cleanup:
    free(resource);
    return retVal;
}
In C we also use a lot of "context" structs that allocate things; the same rule can be applied to those too:
int initializeStuff(Stuff *stuff)
{
    stuff->resource = malloc(sizeof(Resource));
    if (!stuff->resource)
    {
        return -1; ///< Fail.
    }
    return 0; ///< Success.
}

void cleanupStuff(Stuff *stuff)
{
    free(stuff->resource);
}
This is analogous to object constructors and destructors. As long as you don't give away the allocated resources to other objects, nothing leaks and no pointers dangle.
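A typical call site pairs the two, just as a constructor and destructor would bracket an object's lifetime (a sketch, assuming the Stuff and Resource types above):

int useStuff(void)
{
    Stuff stuff;

    if (initializeStuff(&stuff) != 0)
        return -1;              // "construction" failed, nothing to clean up

    // ... work with stuff.resource ...

    cleanupStuff(&stuff);       // "destruction" before returning
    return 0;
}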
It's not difficult to write a custom allocator that tracks allocations and reports leaked blocks at exit (via atexit).
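A minimal sketch of such a tracker (xmalloc/xfree are placeholder names; a real one would also record file and line through macros):

#include <stdio.h>
#include <stdlib.h>

typedef struct Allocation {
    void *ptr;
    size_t size;
    struct Allocation *next;
} Allocation;

static Allocation *allocations = NULL;

static void report_leaks(void)
{
    for (Allocation *a = allocations; a; a = a->next)
        fprintf(stderr, "leaked %zu bytes at %p\n", a->size, a->ptr);
}

void *xmalloc(size_t size)
{
    static int registered = 0;
    if (!registered) {
        atexit(report_leaks);   // report whatever is still tracked at exit
        registered = 1;
    }

    void *p = malloc(size);
    if (!p)
        return NULL;

    Allocation *a = malloc(sizeof *a);
    if (a) {                    // record the block (best effort)
        a->ptr = p;
        a->size = size;
        a->next = allocations;
        allocations = a;
    }
    return p;
}

void xfree(void *p)
{
    for (Allocation **a = &allocations; *a; a = &(*a)->next) {
        if ((*a)->ptr == p) {   // stop tracking the block
            Allocation *dead = *a;
            *a = dead->next;
            free(dead);
            break;
        }
    }
    free(p);
}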
If you need to give away pointers to the allocated resources, you can create wrapper contexts for them, and each object owns a wrapper context instead of the resource itself. These wrappers share the resource and a counter object, which tracks usage and frees the resource when no one uses it any more. This is how C++11's shared_ptr and weak_ptr work; it's described in more detail here: How does weak_ptr work?
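A bare-bones version of that counter-plus-wrapper idea might look like this (a sketch only; a real shared_ptr analogue would also need thread-safe counting and a separate weak count):

#include <stdlib.h>

typedef struct {
    void *resource;
    void (*destroy)(void *);   // how to release the wrapped resource
    int refcount;
} Shared;

Shared *shared_create(void *resource, void (*destroy)(void *))
{
    Shared *s = malloc(sizeof *s);
    if (!s)
        return NULL;
    s->resource = resource;
    s->destroy = destroy;
    s->refcount = 1;
    return s;
}

Shared *shared_retain(Shared *s)
{
    s->refcount++;             // another owner now shares the resource
    return s;
}

void shared_release(Shared *s)
{
    if (s && --s->refcount == 0) {
        s->destroy(s->resource);   // last owner gone: free the resource
        free(s);
    }
}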
The question is a bit old, but I figured I would take the time to link to my smart pointer library for GNU compilers (GCC, Clang, ICC, MinGW, ...).
This implementation relies on the cleanup variable attribute, a GNU extension, to automatically free the memory when it goes out of scope; as such, it is not ISO C99 but C99 with GNU extensions.
Example:
simple1.c:
#include <stdio.h>
#include <csptr/smart_ptr.h>

int main(void) {
    smart int *some_int = unique_ptr(int, 1);

    printf("%p = %d\n", some_int, *some_int);

    // some_int is destroyed here
    return 0;
}
Compilation & Valgrind session:
$ gcc -std=gnu99 -o simple1 simple1.c -lcsptr
$ valgrind ./simple1
==3407== Memcheck, a memory error detector
==3407== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==3407== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
==3407== Command: ./simple1
==3407==
0x53db068 = 1
==3407==
==3407== HEAP SUMMARY:
==3407== in use at exit: 0 bytes in 0 blocks
==3407== total heap usage: 1 allocs, 1 frees, 48 bytes allocated
==3407==
==3407== All heap blocks were freed -- no leaks are possible
==3407==
==3407== For counts of detected and suppressed errors, rerun with: -v
==3407== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
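For context, the cleanup attribute that libcsptr builds on can also be used directly; here is a minimal sketch of the raw mechanism (a GCC/Clang extension, not ISO C):

#include <stdio.h>
#include <stdlib.h>

static void free_int(int **p)
{
    free(*p);   // called automatically with a pointer to the variable
}

int main(void) {
    __attribute__((cleanup(free_int))) int *value = malloc(sizeof *value);
    if (!value)
        return 1;
    *value = 1;
    printf("%d\n", *value);
    return 0;   // the compiler inserts free_int(&value) as value leaves scope
}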
Another approach that you might want to consider is the pooled memory approach that Apache uses. This works exceptionally well if you have dynamic memory usage that is associated with a request or other short-lived object. You can create a pool in your request structure, make sure that you always allocate memory from the pool, and then free the pool when you are done processing the request. It doesn't sound nearly as powerful as it turns out to be once you have used it a little; it is almost as nice as RAII.
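Apache's production version of this is the APR pool API, but a tiny hand-rolled arena is enough to show the idea (a sketch only: fixed capacity, no alignment handling, no growth):

#include <stdlib.h>

typedef struct {
    char *base;
    size_t used;
    size_t capacity;
} Pool;

int pool_init(Pool *p, size_t capacity)
{
    p->base = malloc(capacity);
    p->used = 0;
    p->capacity = capacity;
    return p->base ? 0 : -1;
}

void *pool_alloc(Pool *p, size_t size)
{
    if (p->used + size > p->capacity)
        return NULL;            // a real pool would chain a new block here
    void *out = p->base + p->used;
    p->used += size;
    return out;
}

void pool_destroy(Pool *p)
{
    free(p->base);              // every allocation from the pool is released at once
    p->base = NULL;
}

Per request you call pool_init once, pool_alloc as often as you like, and pool_destroy when the request is done, so there is exactly one place where memory gets released.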