I'm optimizing a macro in VBA which had none of the data types declared, so everything was clumsily treated by the compiler as a Variant. I'm dealing with scientific measurements…
The best way is to declare the variable as a Single or a Double, depending on the precision you need. Single uses 4 bytes, holds roughly 7 significant digits, and covers magnitudes from about 1.401298E-45 up to 3.402823E38 (positive or negative). Double uses 8 bytes, holds roughly 15 significant digits, and covers magnitudes up to about 1.79769313486232E308.
You can declare as follows:
Dim decAsdf As Single
or
Dim decAsdf As Double
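To see the difference in precision, here is a minimal sketch (the procedure and variable names are my own) that stores the same quotient in a Single and in a Double and prints both to the Immediate window:
Sub singleVsDoubleExample()
    Dim s As Single
    Dim d As Double

    ' The same calculation stored at two different precisions
    s = 1 / 3
    d = 1 / 3

    Debug.Print "Single: " & s    ' about 7 significant digits: 0.3333333
    Debug.Print "Double: " & d    ' about 15 significant digits: 0.333333333333333
End Sub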
Here is an example which displays a message box with the value of the variable after calculation. All you have to do is put it in a module and run it.
Sub doubleDataTypeExample()
    Dim doubleTest As Double

    ' Multiply three small values; the result (2.25E-10) fits comfortably in a Double
    doubleTest = 0.0000045 * 0.005 * 0.01

    MsgBox "doubleTest = " & doubleTest
End Sub
You can't declare a variable as Decimal - you have to use a Variant (you can use CDec to populate it with the Decimal subtype, though). To work with a Decimal, first declare the variable as a Variant and then convert it with CDec; the type shows up as Variant/Decimal in the watch window.
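As a minimal sketch (the procedure and variable names are my own), the following populates a Variant with CDec and confirms that the underlying subtype really is Decimal:
Sub decimalViaVariantExample()
    Dim price As Variant

    ' CDec converts the literal to the Decimal subtype held inside the Variant
    price = CDec(0.1)

    Debug.Print TypeName(price)              ' prints "Decimal"
    Debug.Print VarType(price) = vbDecimal   ' prints True (vbDecimal = 14)
End Sub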
Floating point arithmetic does not behave like the exact arithmetic taught in maths class, so it is worth avoiding the common pitfalls by converting to Decimal whenever exact decimal values matter. In the example below, the expression 0.1 + 0.11 = 0.21 evaluates to either True or False, depending on whether the operands (0.1, 0.11) are held as Double or as Decimal:
Public Sub TestMe()
    Dim preciseA As Variant: preciseA = CDec(0.1)
    Dim preciseB As Variant: preciseB = CDec(0.11)
    Dim notPreciseA As Double: notPreciseA = 0.1
    Dim notPreciseB As Double: notPreciseB = 0.11

    Debug.Print preciseA + preciseB                 ' 0.21
    Debug.Print preciseA + preciseB = 0.21          ' True
    Debug.Print notPreciseA + notPreciseB           ' displays 0.21, but the stored sum is not exactly 0.21
    Debug.Print notPreciseA + notPreciseB = 0.21    ' False
End Sub