R 3.1.0 is out, and one of the new features is the following:

> `type.convert()` (and hence by default `read.table()`) returns a character vector or factor when representing a numeric input as a double would lose accuracy.
In version 3.1.1, there is this change listed in the News file:

> `type.convert()`, `read.table()` and similar `read.*()` functions get a new `numerals` argument, specifying how numeric input is converted when its conversion to double precision loses accuracy. The default `numerals = "allow.loss"` allows accuracy loss, as in R versions before 3.1.0.
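A minimal sketch of what the `numerals` argument controls, assuming R >= 3.1.1 (the string of digits below is just an illustrative value with more precision than a double can hold):

```r
## A numeric string whose decimal digits exceed double precision
x <- "0.1234567890123456789"

## Default: accuracy loss is allowed, as in R versions before 3.1.0
num <- type.convert(x, numerals = "allow.loss", as.is = TRUE)
is.numeric(num)  # TRUE: converted to double, extra digits dropped

## "no.loss": the input is kept as character (or factor when as.is = FALSE)
## rather than silently losing digits
chr <- type.convert(x, numerals = "no.loss", as.is = TRUE)
is.character(chr)  # TRUE: left unconverted
```

There is also `numerals = "warn.loss"`, which converts to double but emits a warning when accuracy is lost.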
Much of the post-release discussion about the original change, including the decision to revert the default behavior with an additional warning, can be found in a thread on the developers' mailing list.
Under version 3.1.0, code has to be modified to get the old behavior back; switching to 3.1.1 is another option.
The mention of this change for version 3.1.0 (from the same News file) says:

> `type.convert()` (and hence by default `read.table()`) returns a character vector or factor when representing a numeric input as a double would lose accuracy. Similarly for complex inputs.
>
> If a file contains numeric data with unrepresentable numbers of decimal places that are intended to be read as numeric, specify `colClasses` in `read.table()` to be `"numeric"`.
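The `colClasses` workaround can be sketched as follows; the file contents and column names here are made up for illustration:

```r
## A small CSV whose "value" column has more decimal places than a
## double can represent exactly; under R 3.1.0's default it would be
## read back as a factor rather than numeric.
tmp <- tempfile(fileext = ".csv")
writeLines(c("id,value",
             "1,0.1234567890123456789",
             "2,0.9876543210987654321"), tmp)

## Forcing the column class makes the conversion (and the accepted
## precision loss) explicit, on any R version.
dat <- read.csv(tmp, colClasses = c("integer", "numeric"))
is.numeric(dat$value)  # TRUE
```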
*Note: the original answer was written when the applicable version with the fix was 3.1.0 patched. The answer has been updated now that 3.1.1 has been released.*