Ideally, the following code would take a float in IEEE 754 representation and convert it into hexadecimal:
void convert() //gets the float input from user and turns it into hexadecimal
{
    float f;
    printf("Enter float: ");
    scanf("%f", &f);
    printf("hex is %x", f);
}
I'm not too sure what's going wrong. It's converting the number into a hexadecimal number, but a very wrong one.
123.1443 gives 40000000
43.3 gives 60000000
8 gives 0
so it's doing something, I'm just not too sure what.
Help would be appreciated
When you pass a float as an argument to a variadic function (like printf()), it is promoted to a double, which is twice as large as a float (at least on most platforms).
One way to get around this would be to cast the float to an unsigned int when passing it as an argument to printf():
printf("hex is %x", *(unsigned int*)&f);
This is also more correct, since printf() uses the format specifiers to determine how large each argument is.
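For reference, here is a minimal sketch of the question's convert() rewritten around that cast (the wrapper is mine, not part of this answer, and it still assumes sizeof(unsigned int) == sizeof(float)):
#include <stdio.h>
void convert(void) /* reads a float and prints its raw bit pattern in hex */
{
    float f;
    printf("Enter float: ");
    if (scanf("%f", &f) == 1)
        printf("hex is %x\n", *(unsigned int *)&f);
}
int main(void)
{
    convert();
    return 0;
}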
Technically, this solution violates the strict aliasing rule. You can get around this by copying the bytes of the float into an unsigned int and then passing that to printf():
unsigned int ui;
memcpy(&ui, &f, sizeof (ui));
printf("hex is %x", ui);
Both of these solutions are based on the assumption that sizeof(int) == sizeof(float), which is the case on many 32-bit systems, but isn't necessarily the case.
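A sketch that sidesteps the size assumption by using a fixed-width integer instead; the uint32_t, the _Static_assert, and the sample value are my additions rather than part of the answer above (requires C11, <stdint.h>, and <string.h>):
#include <stdio.h>
#include <string.h>
#include <stdint.h>
int main(void)
{
    float f = 123.1443f;
    uint32_t bits;
    /* refuse to compile if float is not 32 bits wide on this platform */
    _Static_assert(sizeof(float) == sizeof(uint32_t), "float is not 32 bits");
    memcpy(&bits, &f, sizeof bits);          /* copy the raw bytes; no aliasing violation */
    printf("hex is %08x\n", (unsigned)bits); /* e.g. 42f649e2 on an IEEE 754 machine */
    return 0;
}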
When supported, use %a to convert floating point to a standard hexadecimal format. Here is the only document I could find that listed the %a option.
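A quick sketch of %a (available since C99; the exact digits can vary between implementations, but the value is the same):
#include <stdio.h>
int main(void)
{
    /* %a prints a hexadecimal significand and a binary exponent */
    printf("%a\n", 0.5f);       /* typically prints 0x1p-1 */
    printf("%A\n", 123.1443f);  /* %A is the same, with upper-case letters */
    return 0;
}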
Otherwise you must pull the bits of the floating point value into an integer type of known size. If you know, for example, that both float and int are 32 bits, you can do a quick cast:
printf( "%08X" , *(unsigned int*)&aFloat );
If you want to be less dependent on size, you can use a union:
union {
    float f;
    char c[sizeof(float)]; // large enough to hold the bytes of any float
} u;
size_t i;
u.f = aFloat;
for ( i = 0 ; i < sizeof(float) ; ++i ) printf( "%02X" , u.c[i] & 0x00FF );
The order of the loop depends on the architecture's endianness: printing the bytes in memory order, as here, matches the conventional most-significant-byte-first hex string only on a big-endian machine; on a little-endian machine you would loop from the last byte down to the first.
Either way, the floating point format may not be portable to other architectures. The %a option is intended to be.
HEX to Float
I spent quite a long time trying to figure out how to convert a HEX input from a serial connection, formatted as an IEEE 754 float, into a float. Now I've got it. Just wanted to share in case it could help somebody else.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
int main(int argc, char *argv[])
{
    uint16_t tab_reg[64];                     //input values received from the serial connection (read not shown)
    union IntFloat { int32_t i; float f; };   //combined datatype for HEX to FLOAT conversion
    union IntFloat val;
    int i;
    int rs;
    char buff[50];                            //buffer for the hex string
    i = 0;
    //tab_reg[i]   = 0x508C;                  //to test the code without a data stream,
    //tab_reg[i+1] = 0x4369;                  //you may uncomment these two lines.
    printf("Raw1: %X\n", tab_reg[i]);         //print raw input values for debugging
    printf("Raw2: %X\n", tab_reg[i+1]);       //print raw input values for debugging
    rs = sprintf(buff, "0X%X%X", tab_reg[i+1], tab_reg[i]); //swap the 16-bit words, as the response arrives with the opposite word order
    printf("HEX: %s", buff);                  //show the word-swapped string
    val.i = atof(buff);                       //atof() parses the hex string; the int32_t assignment and the union reinterpret the bits as a float
    printf("\nFloat: %f\n", val.f);           //show the value as a float
    return 0;
}
Output:
Raw1: 508C
Raw2: 436A
HEX: 0X436A508C
Float: 234.314636
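For comparison, a sketch of the same hex-string-to-float step using strtoul() and memcpy() instead of the atof()/union detour; it assumes 32-bit floats, and the hard-coded string is just the example value from the output above:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
int main(void)
{
    const char *buff = "0X436A508C";                    /* word-swapped hex string, as above */
    uint32_t bits = (uint32_t)strtoul(buff, NULL, 16);  /* parse the hex digits */
    float f;
    memcpy(&f, &bits, sizeof f);                        /* reinterpret the 32 bits as a float */
    printf("Float: %f\n", f);                           /* prints 234.314636, matching the output above */
    return 0;
}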
This approach has always worked fine for me:
union converter {
    float f_val;
    unsigned int u_val;
};
union converter a;
a.f_val = 123.1443f;
printf("my hex value %x \n", a.u_val);
Stupidly simple example:
unsigned char* floatToHex(float val){
    unsigned char* hexVals = malloc(sizeof(float));
    hexVals[0] = ((unsigned char*)&val)[0];
    hexVals[1] = ((unsigned char*)&val)[1];
    hexVals[2] = ((unsigned char*)&val)[2];
    hexVals[3] = ((unsigned char*)&val)[3];
    return hexVals;
}
Pretty obvious solution when I figured it out. No bit masking, memcpy, or other tricks necessary.
In the above example, it was for a specific purpose and I knew floats were 32 bits. A better solution if you're unsure of the system:
unsigned char* floatToHex(float val){
    unsigned char* hexVals = malloc(sizeof(float));
    for(int i = 0; i < sizeof(float); i++){
        hexVals[i] = ((unsigned char*)&val)[i];
    }
    return hexVals;
}
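A quick usage sketch for the function above; the caller-side loop, the free() call, and the re-included definition are mine, and the bytes print in the machine's memory order:
#include <stdio.h>
#include <stdlib.h>
unsigned char* floatToHex(float val){     /* as defined above */
    unsigned char* hexVals = malloc(sizeof(float));
    for(size_t i = 0; i < sizeof(float); i++){
        hexVals[i] = ((unsigned char*)&val)[i];
    }
    return hexVals;
}
int main(void){
    unsigned char* bytes = floatToHex(123.1443f);
    for(size_t i = 0; i < sizeof(float); i++){
        printf("%02X ", bytes[i]);        /* memory order, so endianness-dependent */
    }
    printf("\n");
    free(bytes);                          /* caller owns the malloc'd buffer */
    return 0;
}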
How about this?
#include <stdio.h>
int main(void){
    float f = 28834.38282;
    char *x = (char *)&f;
    int i;
    printf("%f = ", f);
    for(i = 0; i < sizeof(float); i++){
        printf("%02X ", *x++ & 0x0000FF);
    }
    printf("\n");
    return 0;
}
https://github.com/aliemresk/ConvertD2H/blob/master/main.c
Convert Hex to Double / Convert Double to Hex.
This code works with the IEEE 754 floating-point format.
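Not the linked code, just a minimal sketch of the same double-to-hex round trip using a fixed-width integer (it assumes double is the 64-bit IEEE 754 format):
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>
int main(void)
{
    double d = 234.314636;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);          /* double -> raw 64-bit pattern */
    printf("hex: %016" PRIX64 "\n", bits);
    double d2;                               /* ...and back again: bit pattern -> double */
    memcpy(&d2, &bits, sizeof d2);
    printf("double: %f\n", d2);              /* prints 234.314636 again */
    return 0;
}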
What finally worked for me (convoluted as it seems):
#include <stdio.h>
int main(int argc, char** argv)
{
    float flt = 1234.56789;
    FILE *fout;
    fout = fopen("outFileName.txt","w");
    /* assumes unsigned long is 32 bits, i.e. the same size as float */
    fprintf(fout, "%08lx\n", *((unsigned long *)&flt));
    /* or */
    printf("%08lx\n", *((unsigned long *)&flt));
    fclose(fout);
    return 0;
}
Source: https://stackoverflow.com/questions/2941095/convert-ieee-754-float-to-hex-with-c-printf