I understand one of the big deals about constants is that you don't have to go through and update code where that constant is used all over the place. That's great, but let's say the value never actually needs to change. Are there any other benefits to declaring it as a constant?
Another possibility, depending on the language, is that the compiler may be able to do some optimizations in a multi-threading environment that would not otherwise be possible. Like if you say:
int b = x * f;
int c = y * f;
In a multi-threaded environment, if f is a variable, the compiler may have to generate a reload of f before the second operation. If f is a constant and the compiler knows its value is still in a register, it wouldn't need to reload it.
A lot of this depends on how good your optimizer is.
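To make that concrete, here's a minimal C sketch (the names f and scale are just for illustration). If f were a plain variable shared between threads, a conservative compiler might reload it from memory before each use; declared const, its value can sit in a register or be folded in directly:

/* f is fixed for the life of the program */
const int f = 4;

void scale(int x, int y, int *b, int *c) {
    *b = x * f;   /* f loaded once, or folded to an immediate */
    *c = y * f;   /* no reload: the compiler knows f cannot have changed */
}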
A good optimizer will replace const references with literal values during compilation. This saves processor cycles since the generated machine code is working with immediate values instead of having to load a value from memory.
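For example (a small sketch with made-up names), a folded constant never has to be fetched from memory at run time:

const int HOURS_PER_WEEK = 7 * 24;   /* folded to 168 at compile time */

int minutes_per_week(void) {
    return HOURS_PER_WEEK * 60;      /* typically emitted as the immediate 10080 */
}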
Some optimizers will recognize that a value is not modified after it is declared, and convert it to a constant for you. Do not rely on this behavior.
Also, whenever possible, your code should enforce the assumptions you've made during development. If a "variable" should never change, declaring it as a constant helps ensure that neither you nor any other developer who comes along later will unwittingly modify a "constant".
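In C, for instance, the compiler enforces this for you. This sketch (MAX_RETRIES is an invented name) deliberately fails to compile:

const int MAX_RETRIES = 3;   /* the assumption, written into the code */

void tweak(void) {
    MAX_RETRIES = 5;   /* error: assignment of read-only variable 'MAX_RETRIES' */
}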
It's possible that the compiler could reduce code size. For example, in package System you'll find (on an x86 machine):
type Bit_Order is (High_Order_First, Low_Order_First);
Default_Bit_Order : constant Bit_Order := Low_Order_First;
so that given
case System.Default_Bit_Order is
   when System.High_Order_First =>
      null;  -- Big-endian processing
   when System.Low_Order_First =>
      null;  -- Little-endian processing
end case;
the compiler can completely eliminate the 'wrong' branch while your code retains its portability. With GNAT you need to use a non-default optimisation level for this to happen: -O2, I think.
Both branches have to be compilable -- this is an optimisation, not #ifdef processing.
Certainly for languages like C and Ada, the compiler will put the value of the constant directly into the assembler instruction as an immediate operand, meaning that no register shuffling or memory reads are needed over and above what is required to run the program. This means two things: the main one is speed (probably not noticeable in many applications, unless they are embedded), and the second is memory usage (both the program's final binary size and its runtime memory footprint).
This behaviour depends on both the language and the compiler: the language dictates what assumptions you (the programmer) can make, and therefore the constraints on efficiency; and the compiler's behaviour can range from treating your constant like any other variable to pre-processing your code and optimising the binary's speed and footprint as much as possible.
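As a rough C illustration (the assembly in the comments is indicative, not exact compiler output):

int f_var = 4;           /* ordinary variable: typically needs a memory load */
const int f_const = 4;   /* constant: can become an immediate operand */

int times_var(int x)   { return x * f_var;   }   /* e.g. load f_var, then multiply */
int times_const(int x) { return x * f_const; }   /* e.g. imul eax, edi, 4 -- no load */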
Secondly, as itsmatt and jball said, it lets you logically treat the item as a constant configuration item rather than a 'variable', especially in higher-level and interpreted languages.
One benefit is that if it is truly a constant value, you wouldn't accidentally change it. The constant keeps you from screwing stuff up later. As a variable, anyone could come along and change it later when updating the code.
It should be clear which values can never change and which values can change. Constants reinforce your intention.
Real-world example from my youth:
We didn't have a built-in PI value on a system I was working on, so someone created a VARIABLE called PI. Somewhere along the way, someone (me) modified that value to 3.14519 (or something thereabouts) in a procedure I wrote, rather than the approximation of pi, 3.14159. It had been set elsewhere in the code, and all sorts of calculations were off from that point forward. Had that PI been a constant (it was changed later, of course), I wouldn't have been able to make that error... at least not easily.
As you point out, you may not gain any direct benefit from changing unchanging variables into explicit constants. However, one check made when analysing a program is to trace each variable from its point of declaration to its points of reference, so metrics such as variables accessed before declaration, or set and reset before reference, etc. can indicate probable bugs.
By explicitly declaring constants as constants the compiler and analysis tools know that you do not intend to reset the variable at any time.
This is also true for other developers who work with your code: they may inadvertently set a variable without you being aware of it. Declaring constants avoids these kinds of errors.
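As a small, runnable C illustration (start_timer and timeout are invented names), here is the "set and reset before reference" pattern an analyser can flag, and that a const declaration would turn into a compile-time error:

#include <stdio.h>

static void start_timer(int seconds) {   /* stand-in for a real API */
    printf("starting %d-second timer\n", seconds);
}

int main(void) {
    int timeout = 30;   /* intended as a fixed setting */
    timeout = 60;       /* set and reset before reference: a likely bug */
    start_timer(timeout);
    return 0;
}

/* Had timeout been declared  const int timeout = 30;  the reassignment
   above would be rejected at compile time instead of silently changing
   the program's behaviour. */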