I am trying to learn basic C programming by following a textbook, and I must be missing something about the data types, rounding, and/or order of operations because wh
Integer division: dividing two ints yields an int, even if the result is then stored in a floating-point variable. E.g.:

float a = 3/5;

Here an integer division occurs between 3 and 5, resulting in 0. If you then store this 0 in a float variable, it is stored as 0.00.
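
For instance, a minimal standalone sketch showing the difference (the second variable and the 5.0f literal are just for illustration):

#include <stdio.h>

int main(void)
{
    float a = 3/5;      // integer division happens first: 3/5 == 0, stored as 0.00
    float b = 3/5.0f;   // one operand is float, so float division: 0.60
    printf("a = %.2f, b = %.2f\n", a, b); // prints: a = 0.00, b = 0.60
    return 0;
}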
min = sec/secpermin;
Should be
min = (float)sec/secpermin;
or
min = sec/(float)secpermin;
or, as @alk pointed out, you could also do this instead:
min = sec;
min /= secpermin; // min = min/secpermin; no typecast is needed here, as
                  // the numerator (min) is already a float.
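
Putting one of the fixes in context, a minimal sketch, assuming sec and secpermin are ints as in the question (the values 90 and 60 are hypothetical):

#include <stdio.h>

int main(void)
{
    int sec = 90;        // hypothetical value
    int secpermin = 60;  // hypothetical value
    float min;

    min = (float)sec/secpermin;  // the cast forces float division
    printf("min = %.2f\n", min); // prints: min = 1.50
    return 0;
}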
Alternatively, you can make them all float and, while printing, typecast them as int.
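
A quick sketch of that variant, again with hypothetical values:

#include <stdio.h>

int main(void)
{
    float sec = 90.0f;        // hypothetical value
    float secpermin = 60.0f;  // hypothetical value
    float min = sec/secpermin; // float division throughout: 1.50

    printf("whole minutes = %d\n", (int)min); // typecast to int while printing: 1
    return 0;
}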