I am learning C++ by reading Stroustrup's "Programming: Principles and Practice Using C++". In the section about pre- and post-conditions there is the following example of a function:
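The function looks roughly like this (quoting from memory; error() is the book's error-reporting helper, and the exact comments may differ):

int area(int length, int width)
{
    // calculate area of a rectangle
    if (length<=0 || width<=0) error("area() pre-condition");
    int a = length*width;
    if (a<=0) error("area() post-condition");
    return a;
}

Are there possible values of length and width for which the pre-condition holds but the post-condition fails?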
What comes to my mind is a signed overflow. It is undefined behavior but might yield a negative value.
Try std::numeric_limits<int>::max() and 2.
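A minimal sketch of that experiment (the multiplication overflows, which is formally undefined behaviour; on common two's-complement implementations the product typically wraps to a negative value, which is exactly what makes the post-condition check fire):

#include <iostream>
#include <limits>

int main()
{
    int length = std::numeric_limits<int>::max();
    int width  = 2;
    // Signed overflow: undefined behaviour. On common two's-complement
    // implementations the result typically wraps to -2, so the
    // post-condition a > 0 would not hold.
    int a = length * width;
    std::cout << a << '\n';
}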
Yes. Suppose you are using a 16-bit machine, so int is 2 bytes with a maximum value of +32767. Then consider the following:
int area(int length, int width)        // suppose area(500, 100) is called
{
    // length = 500, width = 100
    if (length<=0 || width<=0) error("area() pre-condition");
    int a = length*width;              // a = 500 * 100 = 50000
    if (a<=0) error("area() post-condition");
    return a;
}
With a 16-bit int the result does not fit: on a two's-complement machine 50000 wraps to 50000 - 65536 = -15536 (formally the overflow is undefined behaviour). The value becomes negative, so the post-condition check fires. It all depends on the range of int.
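You can observe that wraparound on a modern machine with a 16-bit integer type; here the multiplication itself happens in int and is well defined, and only the conversion back to 16 bits narrows the value:

#include <cstdint>
#include <iostream>

int main()
{
    std::int16_t length = 500, width = 100;
    // The operands are promoted to int and multiplied without overflow
    // (50000 fits in a 32-bit int). Converting the result back to a
    // 16-bit signed type wraps on two's-complement implementations:
    // 50000 - 65536 = -15536.
    std::int16_t a = static_cast<std::int16_t>(length * width);
    std::cout << a << '\n';   // typically prints -15536
}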
Since C++11 there is a boolean value you can test: std::numeric_limits<int>::is_modulo. If this value is true, then signed arithmetic behaves in a wraparound fashion and there is no undefined behaviour in the original code. A negative value could indeed be produced, and so the test in the original code is meaningful.
For further discussion of is_modulo, see here.
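A quick way to query the trait, as a small sketch (on most mainstream compilers this prints false for int, since signed overflow is still treated as undefined behaviour):

#include <iostream>
#include <limits>

int main()
{
    // Prints "true" only if signed int arithmetic is documented to wrap.
    std::cout << std::boolalpha
              << std::numeric_limits<int>::is_modulo << '\n';
}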
The answer is that his pre-condition check is incomplete, even though it is already too restrictive (it rejects a length or width of zero). He failed to include a check that the product can be represented instead of resulting in UB:
#include <cassert>
#include <limits>

int area(int length, int width) {
    // calculate area of a rectangle
    assert(length >= 0 && width >= 0 && (!width
        || std::numeric_limits<int>::max() / width >= length));
    int a = length * width;
    assert(a >= 0); // Not strictly necessary - the math is easy enough
    return a;
}
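The same division trick can be pulled out into a hypothetical helper (the name product_fits is mine, not part of the answer above); dividing instead of multiplying is what lets the check run before any potentially overflowing operation:

#include <limits>

// True when length * width is representable in int, assuming both
// arguments are non-negative. width == 0 is handled separately so the
// division is never by zero.
bool product_fits(int length, int width)
{
    return width == 0
        || std::numeric_limits<int>::max() / width >= length;
}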
INT_MAX will fail to fulfill the post-condition when used for both length and width, on all conforming compilers.
One might be tempted to say that, since the standard guarantees that INT_MAX >= 32767, then INT_MAX*INT_MAX will always be greater than INT_MAX and thus not representable in an int, which is defined as being able to hold a maximum value of INT_MAX.
It is a good argument, and it is actually what happens most often: with most compilers you will get an overflow. But to cover all bases we need to be aware that the C++ standard states:
3.4.3
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
3 EXAMPLE An example of undefined behavior is the behavior on integer overflow.
So it is a bit more serious than not getting the right value for the area. When INT_MAX is used for both length and width (or for any other combination whose result is not representable), there is no guarantee of what the compiled program will do. Anything can happen, from the likely outcomes such as overflow or a crash to unlikely ones such as formatting your disk.
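One concrete, hedged illustration of why "imposes no requirements" matters in practice: because the compiler may assume signed overflow never happens, an optimiser is allowed to drop the very check the post-condition relies on (whether a given compiler actually does so is not guaranteed either way):

int area(int length, int width)
{
    if (length <= 0 || width <= 0) return 0;   // stand-in for error()
    int a = length * width;
    // With both operands known to be positive and signed overflow assumed
    // impossible, the compiler may conclude that a > 0 always holds and
    // legally remove this post-condition check altogether.
    if (a <= 0) return -1;                     // stand-in for error()
    return a;
}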
Are there possible values of length and width for which the pre-condition holds but the post-condition fails?
Yes, there are a number of input values that can cause the post-condition to fail: for example, if int a = length*width; overflows the positive int range (exceeds std::numeric_limits<int>::max()) and the implementation happens to yield a negative value in that case. As others noted in their answers, the situation where length*width falls outside the range (0, std::numeric_limits<int>::max()] is actually undefined behavior, so the post-condition is rendered essentially useless, because any value whatsoever might be expected for a.
The key point to fixing this is given in @Deduplicator's answer: the pre-condition needs to be improved.
In defense of Bjarne Stroustrup's reasons for giving that example: I assume he wanted to point out that such undefined behavior might lead to unexpected negative values in the post-condition check, and to surprising results for a naive assumption verified by the pre-condition.