Denormalized Numbers - IEEE 754 Floating Point
So I'm trying to learn more about denormalized numbers as defined in the IEEE 754 standard for floating-point numbers. I've already read several articles via Google search results, and I've gone through several Stack Overflow posts. However, I still have some unanswered questions.

First off, just to review my understanding of what a denormalized float is:

> Numbers which have fewer bits of precision, and are smaller (in magnitude) than normalized numbers

Essentially, denormalized floats have the ability to represent the SMALLEST (in magnitude) numbers that can be represented with a given floating-point format.
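To make that concrete, here is a small sketch (using Python, whose `float` is an IEEE 754 double) showing the smallest positive *normalized* double versus the smallest positive *denormalized* (subnormal) double, 2^-1074, and how halving the latter underflows to zero:

```python
import sys

# Smallest positive NORMALIZED double: 2**-1022
min_normal = sys.float_info.min
print(min_normal)        # 2.2250738585072014e-308

# Smallest positive DENORMALIZED (subnormal) double: 2**-1074.
# It is nonzero, but far smaller than min_normal.
min_subnormal = 2.0 ** -1074
print(min_subnormal)     # 5e-324
print(min_subnormal > 0.0)          # True
print(min_subnormal < min_normal)   # True

# Going any smaller underflows to zero: there is nothing
# between 0 and 2**-1074 in this format.
print(min_subnormal / 2.0)          # 0.0
```

The trade-off, as noted above, is precision: a subnormal this small is represented with only a single significant bit, whereas normalized doubles carry 53 bits of significand.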