What does the f after the numbers in the following line indicate? Is this from C or Objective-C? Is there any difference if you don't add it to a constant number?

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
It tells the compiler that this is a floating point number (I assume you are talking about C/C++ here). If there is no f after the number, it is considered a double or an integer (depending on whether there is a decimal point or not).
3.0f -> float
3.0 -> double
3 -> integer
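A quick way to see this for yourself is to check the size of each literal; a minimal C sketch (the 4/8-byte sizes assume a typical platform) could look like this:

#include <stdio.h>

int main(void) {
    /* sizeof reports the type the compiler assigned to each literal */
    printf("sizeof(3.0f) = %zu\n", sizeof(3.0f)); /* typically 4: float  */
    printf("sizeof(3.0)  = %zu\n", sizeof(3.0));  /* typically 8: double */
    printf("sizeof(3)    = %zu\n", sizeof(3));    /* typically 4: int    */
    return 0;
}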
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
uses float constants. (The constant 0.0 usually declares a double in Objective-C; putting an f on the end - 0.0f - declares the constant as a (32-bit) float.)
CGRect frame = CGRectMake(0, 0, 320, 50);
uses ints, which will be automatically converted to floats.
In this case, there's no (practical) difference between the two.
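For example, a quick check (a minimal sketch, assuming a Cocoa/CoreGraphics environment like the one in the question) shows the two forms produce identical rects:

#import <Cocoa/Cocoa.h>

int main(void) {
    CGRect a = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f); // float constants
    CGRect b = CGRectMake(0, 0, 320, 50);             // ints, converted implicitly
    NSLog(@"equal: %d", CGRectEqualToRect(a, b));     // logs 1
    return 0;
}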
When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:
#import <Cocoa/Cocoa.h>

void test() {
    CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
    NSLog(@"%f", r.size.width);
}
Then compile it to assembler with the -S option:

gcc -S test.m

Save the assembler output in the test.s file, remove the .0f from the constants, and repeat the compile command. Then do a diff of the new test.s and the previous one. I think that should show whether there are any real differences. Too many people have a vision of what they think the compiler does, but at the end of the day one should know how to verify any theories.
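Put together, the whole experiment might look like this (the intermediate file names test-float.s and test-int.s are just illustrative):

gcc -S test.m -o test-float.s
# edit test.m and remove the .0f suffixes from the constants
gcc -S test.m -o test-int.s
diff test-float.s test-int.s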
It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).
You can use ints to specify the values here and in this case it will make no difference, but using the correct type is a good habit to get into - consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.
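As a small illustration of why a literal's type can matter elsewhere, here is a minimal C sketch where the same value written as an int, a float, or a double gives different results:

#include <stdio.h>

int main(void) {
    float a = 1 / 2;     /* integer division: a is 0.0 */
    float b = 1 / 2.0f;  /* float division:   b is 0.5 */
    float c = 1 / 2.0;   /* double division, narrowed to float: c is 0.5 */
    printf("%f %f %f\n", a, b, c);
    return 0;
}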
It tells the compiler that the value is a float, i.e. a floating point number. This means it can store whole numbers, decimal values and values in exponential notation, e.g. 1, 0.4 or 1.2e+22.
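With the f suffix, all three forms are float literals; a tiny sketch:

#include <stdio.h>

int main(void) {
    float whole    = 1.0f;      /* whole number         */
    float decimal  = 0.4f;      /* decimal value        */
    float exponent = 1.2e+22f;  /* exponential notation */
    printf("%f %f %e\n", whole, decimal, exponent);
    return 0;
}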
A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float will lose precision. A lot of precision: you're throwing away about 7 significant digits. The "f" postfix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it."

The odds of producing a bug aren't that small, by the way. Many a program has keeled over on an ill-conceived floating point comparison or on assuming that 0.1 is exactly representable.
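Both pitfalls are easy to demonstrate; a minimal C sketch (the exact digits shown assume IEEE-754 floats and doubles):

#include <stdio.h>

int main(void) {
    float  f = 0.1;  /* double literal 0.1, silently narrowed to float */
    double d = 0.1;

    /* Neither value is exactly 0.1, and the float has lost precision. */
    printf("float : %.17f\n", (double)f);  /* e.g. 0.10000000149011612 */
    printf("double: %.17f\n", d);          /* e.g. 0.10000000000000001 */

    if (f == d)
        printf("equal\n");
    else
        printf("not equal\n");             /* this branch is taken */

    return 0;
}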