Bjarne Stroustrup writes in his C++ Style and Technique FAQ, emphasis mine:
Because C++ supports an alternative that is almost always better: The "resource acquisition is initialization" technique (TC++PL3 section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor will release the resource. That way, the programmer cannot forget to release the resource. For example:
class File_handle {
    FILE* p;
public:
    File_handle(const char* n, const char* a)
        { p = fopen(n,a); if (p==0) throw Open_error(errno); }
    File_handle(FILE* pp)
        { p = pp; if (p==0) throw Open_error(errno); }
    ~File_handle() { fclose(p); }

    operator FILE*() { return p; }
    // ...
};

void f(const char* fn)
{
    File_handle f(fn,"rw"); // open fn for reading and writing
    // use file through f
}
In a system, we need a "resource handle" class for each resource. However, we don't have to have a "finally" clause for each acquisition of a resource. In realistic systems, there are far more resource acquisitions than kinds of resources, so the "resource acquisition is initialization" technique leads to less code than use of a "finally" construct.
Note that Bjarne writes "almost always better", not "always better". Now for my question: in what situation would a finally construct be better than using the alternative (RAII) in C++?
The only reason I can think of that a finally block would be "better" is when it takes less code to accomplish the same thing. For example, if you have a resource that, for some reason, doesn't use RAII, you would either need to write a class to wrap the resource and free it in the destructor, or use a finally block (if it existed).
Compare:
class RAII_Wrapper
{
    Resource *resource;
public:
    RAII_Wrapper() : resource(acquire_resource()) {}
    ~RAII_Wrapper() {
        free_resource(resource);
        delete resource;
    }
    Resource *getResource() const {
        return resource;
    }
};

void Process()
{
    RAII_Wrapper wrapper;
    do_something(wrapper.getResource());
}
versus:
void Process()
{
    Resource *resource = nullptr; // declared outside try so the finally block can see it
    try {
        resource = acquire_resource();
        do_something(resource);
    }
    finally { // hypothetical syntax; C++ has no finally
        free_resource(resource);
        delete resource;
    }
}
Most people (including me) would still argue that the first version is better, because it doesn't force you to use the try...finally block. You also only need to write the class once, not duplicate the code in every function that uses the resource.
Edit: Like litb mentioned, you should use an auto_ptr instead of deleting the pointers manually, which would simplify both cases.
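For what it's worth, here is a minimal sketch of that simplification using std::unique_ptr (the modern replacement for the now-deprecated auto_ptr) with a custom deleter; acquire_resource and free_resource are the hypothetical functions from the example above:

#include <memory>

void Process()
{
    // The deleter mirrors ~RAII_Wrapper: release the resource, then delete it.
    auto cleanup = [](Resource *r) { free_resource(r); delete r; };
    std::unique_ptr<Resource, decltype(cleanup)> resource(acquire_resource(), cleanup);
    do_something(resource.get());
}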
The difference between them is that destructors emphasise reuse of the cleanup solution by associating it with the type being used, whereas try/finally emphasises one-off cleanup routines. So try/finally is more immediately convenient when you have a unique one-off cleanup requirement associated with the point of use, rather than a reusable cleanup solution that can be associated with a type you're using.
I haven't tried this (haven't downloaded a recent gcc for months), but it should be true: with the addition of lambdas to the language, C++ can now have the effective equivalent of finally, just by writing a function called try_finally. Obvious usage:
try_finally([]
{
    // attempt to do things in here, perhaps throwing...
},
[]
{
    // this always runs, even if the above block throws...
});
Of course, you have to write try_finally, but only once and then you're good to go. Lambdas enable new control structures.
Something like:
template <class TTry, class TFinally>
void try_finally(const TTry &tr, const TFinally &fi)
{
    try
    {
        tr();
    }
    catch (...)
    {
        fi();
        throw; // rethrow the original exception after the "finally" code has run
    }
    fi();
}
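One caveat with this function-based sketch: if fi() itself throws from inside the catch block, that new exception propagates and the original one is lost, so the "finally" code should itself be non-throwing.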
And there is no link at all between the presence of a GC and a preference for try/finally instead of destructors. C++/CLI has destructors and GC. They're orthogonal choices. Try/finally and destructors are slightly different solutions to the same problem, both deterministic, needed for non-fungible resources.
C++ function objects emphasise reusability but make one-off anonymous functions painful. By adding lambdas, anonymous code blocks are now easy to make, and this avoids C++'s traditional emphasis on "forced reusability" expressed through named types.
Finally would be better when connecting with C code. It can be a pain to have to wrap existing C functionality in RAII.
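For illustration, a minimal sketch of a generic scope-exit helper that covers exactly this case (similar in spirit to gsl::finally or std::experimental::scope_exit; assuming C++17 for class template argument deduction, and using FILE* purely as an example C resource):

#include <cstdio>
#include <utility>

template <class F>
class ScopeExit {
    F f_;
public:
    explicit ScopeExit(F f) : f_(std::move(f)) {}
    ~ScopeExit() { f_(); } // runs on every exit path, like a finally block
    ScopeExit(const ScopeExit&) = delete;
    ScopeExit& operator=(const ScopeExit&) = delete;
};

void read_config(const char *path)
{
    FILE *fp = std::fopen(path, "r");
    if (!fp) return;
    ScopeExit cleanup([fp] { std::fclose(fp); });
    // ... use fp; fclose runs even if this code returns early or throws ...
}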
I think that scope guard does a good job at handling the one-off cases that finally handles well, while being better in the more general sense because it handles more than one flow path well.
The main use I'd find for finally would be when dealing with C code, as others have pointed out, where a C resource might only be used once or twice and isn't really worth wrapping in an RAII-conforming structure. That said, with lambdas, it seems easy enough to just invoke some custom logic through a dtor invoking a function object we specify in the function itself.
The other use case I'd find is for exotic miscellaneous code that should execute regardless of whether we're in a normal or exceptional execution path, like printing a timestamp or something on exiting a function no matter what. That's such a rare case though for me that it seems like overkill to have a language feature just for it, and it's still so easy to do now with lambdas without having to write a separate class just for this purpose.
For the most part, I'd find very limited use cases for it now, in ways that don't seem to justify such a big change to the language. My little pipe dream, though, is some way to tell inside an object's dtor whether the object is being destroyed through a normal execution path or an exceptional one.
That would simplify scope guards: they would no longer require a commit/dismiss call to accept the changes and keep them from being rolled back automatically when the scope guard is destroyed. The idea is to allow this:
ScopeGuard guard(...);
// Cause external side effects.
...
// If we managed to reach this point without facing an exception,
// dismiss/commit the changes so that the guard won't undo them
// on destruction.
guard.dismiss();
To simply become this:
ScopeGuard guard(...);
// Cause external side effects.
...
I've always found the need to dismiss scope guards a little awkward, as well as error-prone: I've sometimes forgotten to dismiss them, only to have them undo all the changes and leave me scratching my head for a moment as to why my operation seemed to do nothing at all, until I realized, "oops, I forgot to dismiss the scope guard." It's a minor thing, but I would find it so much more elegant to eliminate the need for explicit scope guard dismissal, which would be possible if guards could just tell, inside their destructors, whether they're being destroyed through a normal execution path (at which point the side effects should be kept) or an exceptional one (at which point the side effects should be undone).
It's the most minor thing but in the hardest area of exception-safety to get right: rolling back external side effects. I couldn't ask for more from C++ when it comes to just destroying local resources properly. It's already quite ideal for that purpose. But rolling back external side effects has always been difficult in any language that allows them to occur in the first place, and any little teeny bit of help to make that easier like this is something I'd always appreciate.
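As an aside, C++17's std::uncaught_exceptions() (note the plural) makes roughly this possible: a guard can compare the number of in-flight exceptions at construction and at destruction to decide whether it is being destroyed by stack unwinding. A minimal sketch, assuming C++17, with undo_changes standing in for a hypothetical caller-supplied rollback action:

#include <exception>
#include <functional>
#include <utility>

void undo_changes(); // hypothetical rollback action

class AutoScopeGuard {
    std::function<void()> rollback_;
    int exceptions_on_entry_;
public:
    explicit AutoScopeGuard(std::function<void()> rollback)
        : rollback_(std::move(rollback)),
          exceptions_on_entry_(std::uncaught_exceptions()) {}

    ~AutoScopeGuard() {
        // More in-flight exceptions now than at construction means this
        // destructor is running because of stack unwinding: roll back.
        if (std::uncaught_exceptions() > exceptions_on_entry_)
            rollback_();
        // On a normal exit the changes are kept; no dismiss() call needed.
    }

    AutoScopeGuard(const AutoScopeGuard&) = delete;
    AutoScopeGuard& operator=(const AutoScopeGuard&) = delete;
};

void transaction()
{
    AutoScopeGuard guard([] { undo_changes(); });
    // Cause external side effects; they are rolled back only if an
    // exception escapes this scope.
}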
Edit after six answers.
What about this one:
class Exception
{
public:
    virtual bool isException() { return true; }
};

class NoException : public Exception
{
public:
    bool isException() { return false; }
};
Object *myObject = 0;
try
{
try
{
myObject = new Object(); // Create an object (Might throw exception)
}
catch (Exception &e)
{
// Do something with exception (Might throw if unhandled)
}
throw NoException();
}
catch (Exception &e)
{
delete myObject;
if (e.isException()) throw; // rethrow the original exception (throw e would slice it)
}
Source: https://stackoverflow.com/questions/385039/are-there-cases-where-a-finally-construct-would-be-useful-in-c