I see this all the time:
CGFloat someCGFloat = 1.2f;
Why is the 'f' used? If CGFloat is defined as float, isn't the value stored as a float either way?
For a plain assignment like your snippet you don't need the 'f' suffix, and in fact you shouldn't use it. If CGFloat is single precision (as on iOS), your value will be stored as single precision with or without the 'f'. And if CGFloat is double precision (as on Mac OS), the 'f' makes the compiler round 1.2 to single precision first and then widen it, so you store a less accurate value than the plain double literal would give you.
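For example (the variable names are mine; the values shown assume CGFloat is double, as on a 64-bit Mac):

#import <CoreGraphics/CoreGraphics.h>

CGFloat withSuffix = 1.2f;    // 1.2 rounded to float, then widened: 1.2000000476837158...
CGFloat withoutSuffix = 1.2;  // 1.2 rounded once, directly to double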
On the other hand, if you're doing arithmetic you should be careful to use 'f', or to leave it off, as appropriate. If you're working with single precision and include a literal like 1.2 without the 'f', the compiler will promote the other operands to double precision for that operation. And if you're working with double precision and you include the 'f', then (like the assignment on Mac OS) you create a single precision value only to have it immediately converted to double.
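A minimal sketch of both pitfalls (the variable names are mine):

float radius = 2.0f;
float a = radius * 1.2;   // radius promoted to double, multiplied, result truncated back to float
float b = radius * 1.2f;  // the whole expression stays single precision
double c = 2.0 * 1.2f;    // 1.2 rounded to float first, then widened: needless precision loss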
A floating-point literal like 1.2 is a double by default, so assigning it to a float involves an implicit conversion from double to float (for a constant this happens at compile time, not at run time). In that case it doesn't matter whether you write 1.2f. Programmers mostly add the suffix out of habit, but there are cases where it really matters.
For example:
float var = 1.0e-45;
NSLog(@"%d", var == 1.0e-45);   // prints 0
This prints zero because 1.0e-45 is too small to be represented exactly as a single precision float, so the stored value is rounded; when var is promoted back to double for the comparison, it no longer equals the double literal 1.0e-45. Writing var == 1.0e-45f makes both sides round to the same float, which changes the result.
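The suffixed version, for comparison (assuming IEEE 754 single precision):

float var = 1.0e-45f;
NSLog(@"%d", var == 1.0e-45f);   // prints 1: both literals round to the same float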
Getting the literal suffix right matters mostly when writing expressions: since the left-hand value is a float, you might expect the expression to be evaluated as a float too, but that's not what happens; the types of the literals themselves drive the arithmetic.
A more striking case is the l suffix on an integer that gets shifted past the width of int, with a surprising result:
long var = 1 << 32; // I assume that an int takes 4 bytes and a long 8 bytes
The result is zero rather than the 4294967296 you wanted, because the shift is performed on a 32-bit int (strictly speaking, shifting an int by 32 or more is undefined behavior, so you can't even rely on the zero). Writing 1l << 32 performs the shift in 64 bits and completely changes the result.
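A before/after sketch (assuming 4-byte int and 8-byte long, as above):

long bad = 1 << 32;    // shift done in 32 bits: undefined behavior, often 0
long good = 1L << 32;  // shift done in 64 bits
NSLog(@"%ld %ld", bad, good);  // e.g. "0 4294967296"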