Convert Sign-Bit, Exponent and Mantissa to float?

Submitted by 孤街浪徒 on 2021-01-04 05:55:17

Question


I have the sign bit, exponent, and mantissa (as shown in the code below). I'm trying to combine these values into a float. The goal is to get 59.98 (it will read as 59.9799995).

uint32_t FullBinaryValue = (Converted[0] << 24) | (Converted[1] << 16) |
                            (Converted[2] << 8) | (Converted[3]);

unsigned int sign_bit = (FullBinaryValue & 0x80000000);
unsigned int exponent = (FullBinaryValue & 0x7F800000) >> 23;
unsigned int mantissa = (FullBinaryValue & 0x7FFFFF);

What I originally tried doing is just placing them bit by bit, where they should be as so:

float number = (sign_bit << 32) | (exponent << 24) | (mantissa);

But this gives me 2.22192742e+009.

I was then going to use the formula 1.mantissa × 2^(exponent − 127), but you can't put a binary point in an integer value.

Then I tried grabbing each individual value for (exponent, characteristic, post mantissa) and I got

Characteristic: 0x3B (Decimal: 59)
Mantissa: 0x6FEB85 (Decimal: 7334789)
Exponent: 0x5 (Decimal: 5) — this is after subtracting the bias of 127

I was then going to take these numbers and plug them into a printf. But I don't know how to convert the mantissa from hex into the fractional value it represents (each bit weighted by a negative power of two).

Any ideas on how to convert these three variables (sign bit, exponent, and mantissa) into a floating number?

EDIT FOR PAUL R Here is the code in Minimal, Complete and Verifiable format. I added the uint8_t Converted[4] just because it is the value I end up with, and it makes the example runnable.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    uint8_t Converted[4];
    Converted[0] = 0x42;
    Converted[1] = 0x6f;
    Converted[2] = 0xEB;
    Converted[3] = 0x85;

    uint32_t FullBinaryValue = (Converted[0] << 24) | (Converted[1] << 16) |
                                (Converted[2] << 8) | (Converted[3]);

    unsigned int sign_bit = (FullBinaryValue & 0x80000000);
    unsigned int exponent = (FullBinaryValue & 0x7F800000) >> 23;
    unsigned int mantissa = (FullBinaryValue & 0x7FFFFF);

    float number = (sign_bit) | (exponent << 23) | (mantissa);

    return 0;
}

Answer 1:


The problem is that the expression float number = (sign_bit << 32) | (exponent << 24) | (mantissa); first computes an unsigned int and then converts that value to float. Conversions between arithmetic types preserve the value, not the memory representation. What you are trying to do is reinterpret the memory representation as a different type. In C++ you can use reinterpret_cast (or, more portably, memcpy).

Try this instead :

uint32_t FullBinaryValue = (Converted[0] << 24) | (Converted[1] << 16) |
                           (Converted[2] << 8) | (Converted[3]);


float number = reinterpret_cast<float&>(FullBinaryValue);


Source: https://stackoverflow.com/questions/45512768/convert-sign-bit-exponent-and-mantissa-to-float
