(Note: I am not asking about the definitions of pre-increment vs. post-increment, or how they are used in C/C++. Therefore, I do not think this is a duplicate question.)
For C
Let's look at Kernighan & Ritchie's original justification (original K&R, pages 42 and 43):
The unusual aspect is that ++ and -- may be used either as prefix or as postfix. (...) In a context where no value is wanted (...) choose prefix or postfix according to taste. But there are situations where one or the other is specifically called for.
The text continues with some examples that use increments within index expressions, with the explicit goal of writing "more compact" code. So the reason behind these operators is the convenience of more compact code.
The three examples given (squeeze(), getline() and strcat()) use only postfix within expressions using indexing. The authors compare the code with a longer version that doesn't use embedded increments. This confirms that the focus is on compactness.
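To give the flavor of those examples, here is squeeze() roughly as K&R present it (reproduced from memory, so treat it as a sketch): the postfix increment in s[j++] = s[i] stores a character and advances the write index in a single expression.

/* squeeze: delete all occurrences of c from s */
void squeeze(char s[], int c) {
    int i, j;
    for (i = j = 0; s[i] != '\0'; i++)
        if (s[i] != c)
            s[j++] = s[i];  /* postfix: store at index j, then advance j */
    s[j] = '\0';
}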
K&R highlight, on page 102, the use of these operators in combination with pointer dereferencing (e.g. *--p and *p--). No further examples are given, but again, they make clear that the benefit is compactness.
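The best-known instance of this pointer style is probably K&R's string copy, where the postfix increments let the entire loop body collapse into the condition (sketched here as a hypothetical copy() function):

/* copy the string t into s, K&R style; the extra parentheses
   mark the assignment in the condition as intentional */
void copy(char *s, const char *t) {
    while ((*s++ = *t++))  /* copy a char, advance both pointers; stop after '\0' */
        ;
}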
For C++
Bjarne Stroustrup wanted to have C compatibility, so C++ inherited prefix and postfix increment and decrement.
But there's more to it: in his book "The Design and Evolution of C++", Stroustrup explains that initially he planned to have only one overload for both postfix and prefix in user-defined classes:
Several people, notably Brian Kernighan, pointed out that this restriction was unnatural from a C perspective and prevented users from defining a class that could be used as a replacement for an ordinary pointer.
This caused him to introduce the current difference in signatures that distinguishes prefix from postfix.
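That convention marks the postfix overload with an unused dummy int parameter (a minimal sketch, using a hypothetical class T):

struct T {
    T& operator++();     // prefix form, called for ++t
    T  operator++(int);  // postfix form, called for t++; the int argument only selects the overload
};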
By the way, without these operators C++ would not be C++ but C_plus_1 ;-)
Incrementing and decrementing by 1 were widely supported in hardware at the time: a single opcode, and fast. This is because "incrementing by 1" and "decrementing by 1" were very common operations in code (true to this day).
The postfix and prefix forms only affected the place where this opcode got inserted in the generated machine code. Conceptually, this mimics "increase/decrease before or after using the result". In a single statement i++; the 'before/after' concept is not used (and so it does the same as ++i;), but in printf("%d", ++i); it is. That distinction is as important nowadays as it was when the language C was designed (this particular idiom was copied from its precursor, named "B").
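A minimal demonstration of that before/after distinction:

#include <stdio.h>

int main(void) {
    int i = 5;
    printf("%d\n", i++);  /* prints 5: the old value is used, then i becomes 6 */
    printf("%d\n", ++i);  /* prints 7: i is incremented first, then used */
    return 0;
}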
From The Development of the C Language
This feature [PDP-7's "`auto-increment' memory cells"] probably suggested such operators to Thompson [Ken Thompson, who designed "B", the precursor of C]; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.
Thanks to @dyp for mentioning this document.
When you count down from n, it is very important whether you use pre-decrement or post-decrement:
#include <stdio.h>

void foopre(int n) {
    printf("pre");
    while (--n) printf(" %d", n);  /* decrements before testing: prints 4 3 2 1 */
    puts("");
}

void foopost(int n) {
    printf("post");
    while (n--) printf(" %d", n);  /* tests the old value, then decrements: prints 4 3 2 1 0 */
    puts("");
}

int main(void) {
    foopre(5);
    foopost(5);
    return 0;
}
See the code running at ideone.
Consider the following loop:
for (unsigned int i = 5; i-- > 0;)
{
    // do something with i,
    // e.g. call a function that _requires_ an unsigned parameter.
}
You can't replicate this loop with a pre-decrement operation without moving the decrement operation outside of the for(...) construct, and it's just better to have your initialization, iteration and check all in one place.
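For comparison, a pre-decrement version has to pull the decrement into the loop body (a sketch, with a hypothetical process() function standing in for the call that requires an unsigned parameter):

#include <stdio.h>

static void process(unsigned int i) { printf("%u\n", i); }  /* hypothetical stand-in */

int main(void) {
    /* post-decrement: everything stays in the for(...) header; i takes 4, 3, 2, 1, 0 */
    for (unsigned int i = 5; i-- > 0;)
        process(i);

    /* pre-decrement: the decrement has to move into the body */
    for (unsigned int i = 5; i > 0;) {
        --i;
        process(i);
    }
    return 0;
}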
A much larger issue is this: one can overload the increment and decrement operators (all four) for a class. But then the operators are critically different: the postfix operators usually result in a temporary copy of the class instance being made, whereas the prefix operators do not. That is a huge difference in semantics.
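A minimal sketch of the conventional implementations, using a hypothetical Counter class, which makes the postfix copy explicit:

struct Counter {
    int value = 0;

    Counter& operator++() {    // prefix: increment in place, return a reference; no copy
        ++value;
        return *this;
    }

    Counter operator++(int) {  // postfix: save the old state, increment, return the copy by value
        Counter old = *this;   // the temporary copy mentioned above
        ++value;
        return old;
    }
};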
To get an answer that goes beyond speculation, you would most probably have to ask Dennis Ritchie et al. personally.
In addition to the answers already given, I'd like to offer two possible reasons I came up with:
laziness / conserving space:
you might be able to save a few keystrokes / bytes in the input file by using the appropriate version in constructs like while(--i) vs. while(i--). (Take a look at pmg's answer to see why the two make a difference, if you didn't spot it on the first read.)
aesthetics
For reasons of symmetry, having just one version (either pre- or post-increment/decrement) might feel like something is missing.
EDIT: added saving a few bytes in the input file to the speculation section, now providing a pretty nice "historic" reason as well.
Anyway, the main point in putting together the list was to give examples of possible explanations that are not purely historic, but still hold today.
Of course I am not sure, but I think asking for a "historic" reason other than personal taste starts from a presumption that is not necessarily true.
The PDP-11 had a single instruction that corresponded to *p++, and another for *--p (or possibly the other way round).