I see this all the time:
CGFloat someCGFloat = 1.2f;
Why is the 'f' used? If CGFloat is defined as float, the value gets converted to float anyway, doesn't it?
A floating-point literal such as 1.0 or 1.2 is a double by default; when it is assigned to a float there is an implicit conversion from double to float (the conversion happens at compile time, not at runtime). For a plain assignment like this it doesn't matter whether you write 1.2 or 1.2f. Programmers mostly add the suffix out of habit, but there are cases where it really matters.
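A quick sketch of the harmless case (using float directly, with NSLog from Foundation just for output):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        float a = 1.2;   // double literal, converted to float at compile time
        float b = 1.2f;  // float literal, no conversion needed
        NSLog(@"%d", a == b);  // prints 1: both variables hold the same float value
    }
    return 0;
}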
Here is an example where it does matter:
float var = 1.0e-45;
NSLog(@"%d", var == 1.0e-45);
This prints 0: 1.0e-45 cannot be represented exactly in single precision, so var ends up holding a slightly different value, and when it is promoted back to double for the comparison it no longer equals the double literal 1.0e-45. Writing var == 1.0e-45f changes the result.
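As a complete program (Foundation assumed for NSLog), both comparisons can be printed side by side:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        float var = 1.0e-45;            // rounded to the nearest representable float
        NSLog(@"%d", var == 1.0e-45);   // 0: var is promoted to double and no longer equals the double literal
        NSLog(@"%d", var == 1.0e-45f);  // 1: both sides are the same rounded float value
    }
    return 0;
}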
The suffix matters mostly in expressions: since the left-hand side is a float you might expect the right-hand expression to be evaluated as a float too, but that's not what happens; an unsuffixed literal keeps the expression in double.
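The same promotion bites with everyday values, not just denormals; a small sketch, again assuming Foundation for NSLog:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        float f = 0.1f;
        // In f == 0.1 the float is promoted to double, and the double nearest to 0.1
        // is not the same value as 0.1f widened to double.
        NSLog(@"%d", f == 0.1);   // 0
        NSLog(@"%d", f == 0.1f);  // 1
    }
    return 0;
}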
A more striking case involves the l suffix: shift a plain int literal by too many bits and you get a surprising result:
long var = 1 << 32; // assuming an int takes 4 bytes and a long 8 bytes
The result here is zero rather than the 4294967296 you might expect: the shift is performed on an int, and shifting a 32-bit int by 32 bits is actually undefined behavior. Writing 1l << 32 completely changes the result, because the shift is then carried out on a long.
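A minimal runnable sketch of the same pitfall (assuming a 32-bit int and a 64-bit long, with Foundation for NSLog):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // long bad = 1 << 32;  // shift count equals the width of int: undefined behavior,
        //                      // and in practice you will not get 4294967296
        long good = 1L << 32;   // the L suffix makes the left operand a 64-bit long first
        NSLog(@"%ld", good);    // prints 4294967296
    }
    return 0;
}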