My program is written in C for Linux, and has many functions with different patterns for return values: 1) one or two return n on success and -1 on failure.
For deterministic functions that can't fail, yes/no responses using a more specific (bool) return type can help maintain consistency. Going further, for higher-level interfaces one may want to think about returning or updating a system-specific messaging/result detail structure.
My preference for 0 to always mean success is based on the following ideas:
Zero enables some basic classing for organizing failures by negative vs. positive values, such as total failure vs. conditional success. I don't recommend this generally, as it tends to be too shallow to be useful and might lead to dangerous behavioral assumptions.
When success is zero, one can make a bunch of orthogonal calls and check for group success in a single condition later, simply by accumulating the return codes of the group:
int rc = 0;
rc += func1();
rc += func2();
rc += func3();
if (rc == 0) {
    /* success! */
}
Most importantly, zero in my experience is a consistent indication of success when working with standard libraries and third-party systems.
Much of the C standard library uses the strategy of returning true (or 1) on success and false (or 0) on failure, and storing the result in a passed-in location. Error codes more specific than "it failed" are stored in the special variable errno.
Something like this: int add(int *result, int a, int b), which stores a + b in *result and returns 1 (or returns 0 and sets errno to a suitable value if, for example, a + b happens to be larger than INT_MAX).
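A minimal sketch of that pattern, assuming ERANGE is an acceptable errno value for the overflow case:

#include <errno.h>
#include <limits.h>

/* Stores a + b in *result and returns 1 on success.
   On overflow it returns 0 and sets errno (ERANGE is my assumption here). */
int add(int *result, int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
        errno = ERANGE;
        return 0;
    }
    *result = a + b;
    return 1;
}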
One condition I can think of where your above methodology can fail is a function that can return any value, including -1, such as a function that adds two signed numbers. In that case, testing for -1 will surely be a bad idea.
In case something fails, it is better to set the global error condition flag provided by the C standard, errno, and use that to handle the error.
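For example, on Linux a failed library call can be reported through errno like this (the file name is just a placeholder):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("settings.conf", "r");   /* hypothetical file name */
    if (f == NULL) {
        fprintf(stderr, "fopen failed: %s\n", strerror(errno));
        return 1;
    }
    fclose(f);
    return 0;
}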
The C++ standard library, by contrast, provides exceptions, which take much of the hard work out of error handling.
Not an actual answer to your question, but some random comments you might find interesting:
it's normally obvious when to use case (1), but it gets ugly when unsigned types are involved: return (size_t)-1 still works, but it ain't pretty
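For example, a hypothetical search function using that convention might look like this:

#include <stddef.h>

/* Returns the index of c in s, or (size_t)-1 if c is not present. */
size_t find_char(const char *s, char c)
{
    for (size_t i = 0; s[i] != '\0'; i++)
        if (s[i] == c)
            return i;
    return (size_t)-1;
}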
if you're using C99, there's nothing wrong with using _Bool; imo, it's a lot cleaner than just using an int
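A trivial sketch of that (is_even is just an illustrative name; bool from <stdbool.h> expands to _Bool):

#include <stdbool.h>

bool is_even(int n)
{
    return n % 2 == 0;
}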
I use return NULL instead of return 0 in pointer contexts (personal preference), but I rarely check for it explicitly, as I find it more natural to just treat the pointer as a boolean; a common case would look like this:
struct foo *foo = create_foo();
if(!foo) /* handle error */;
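A create_foo along those lines might look like this (the struct contents are made up):

#include <stdlib.h>

struct foo { int value; };          /* made-up contents */

struct foo *create_foo(void)
{
    struct foo *p = malloc(sizeof *p);
    if (p == NULL)
        return NULL;                /* allocation failed */
    p->value = 0;
    return p;
}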
I try to avoid case (2); using EXIT_SUCCESS and EXIT_FAILURE might be feasible, but imo this approach only makes sense if there are more than two possible outcomes and you'll have to use an enum anyway
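Such an enum might look like this (the names are hypothetical):

/* More than two possible outcomes for a made-up config loader. */
enum load_result {
    LOAD_OK,
    LOAD_NOT_FOUND,
    LOAD_PARSE_ERROR
};

enum load_result load_config(const char *path);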
for more complicated programs, it might make sense to implement your own error handling scheme; there are some fairly advanced implementations using setjmp()/longjmp() around, but I prefer something errno-like with different variables for different types of errors
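A bare-bones version of the setjmp()/longjmp() approach might look like this (the handler and worker names are made up):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf error_env;              /* made-up name */

static void do_work(int should_fail)   /* made-up worker */
{
    if (should_fail)
        longjmp(error_env, 1);         /* unwind straight to the handler */
    puts("work done");
}

int main(void)
{
    if (setjmp(error_env) != 0) {      /* central error handler */
        fputs("caught an error\n", stderr);
        return 1;
    }
    do_work(0);
    do_work(1);                        /* this call triggers the longjmp */
    return 0;
}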
So how can I add clarity to my approach when deciding upon return types here?
Pick one pattern per return type and stick with it, or you'll drive yourself crazy. Model your pattern on the conventions that have long been established for the platform:
If you are making lots of system calls, then any integer-returning function should return -1 on failure.
If you are not making system calls, you are free to follow the convention of the C control structures that nonzero means success and zero means failure. (I don't know why you dislike bool.)
If a function returns a pointer, failure should be indicated by returning NULL.
If a function returns a floating-point number, failure should be indicated by returning a NaN (see the sketch below).
If a function returns a full range of signed and unsigned integers, you probably should not be coding success or failure in the return value.
Testing return values is the bane of C programmers. If failure is rare and you can write a central handler, consider using an exception macro package that can indicate failures using longjmp.
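To illustrate the floating-point case above, a failure can be signalled with NAN and detected with isnan(); safe_sqrt is a made-up name, not an existing function:

#include <math.h>
#include <stdio.h>

/* Made-up example: returns the square root of x, or NaN to signal a domain error. */
double safe_sqrt(double x)
{
    if (x < 0.0)
        return NAN;          /* NAN and isnan() come from <math.h> (C99) */
    return sqrt(x);
}

int main(void)
{
    double r = safe_sqrt(-1.0);
    if (isnan(r))
        puts("safe_sqrt failed");
    return 0;
}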
That is a matter of preference, but what I have noticed is the inconsistency. Consider this, using a pre-C99 compiler:
#define SUCCESS 1
#define ERROR 0
Then any function that returns an int should return either one or the other to minimize confusion, and you should stick to it religiously. Again, take the development team into account and stick to their standard.
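A sketch of a function that sticks to those two constants (copy_name is a made-up example):

#include <stddef.h>
#include <string.h>

#define SUCCESS 1   /* as defined above */
#define ERROR   0

/* Only ever returns SUCCESS or ERROR. */
int copy_name(char *dst, size_t dst_size, const char *src)
{
    if (strlen(src) + 1 > dst_size)
        return ERROR;
    strcpy(dst, src);
    return SUCCESS;
}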
In pre-C99 compilers, an int of zero is false, and anything nonzero is true. What you can use depends on which standard your compiler supports; if it's C99, use the _Bool type (or the bool macro from <stdbool.h>).
The big advantage of C is that you can use your personal style, but where team effort is required, stick to the team's standard that is laid out and follow it religiously; even after you leave that job, another programmer will be thankful to you.
And keep consistent.
Hope this helps, Best regards, Tom.