How to load big double numbers in a PySpark DataFrame and persist it back without changing the numeric format to scientific notation or precision?
Question

I have a CSV like this:

```
COL,VAL
TEST,100000000.12345679
TEST2,200000000.1234
TEST3,9999.1234679123
```

I want to load it with the column VAL as a numeric type (due to other requirements of the project) and then persist it back to another CSV with the structure below:

```
+-----+------------------+
|  COL|               VAL|
+-----+------------------+
| TEST|100000000.12345679|
|TEST2|    200000000.1234|
|TEST3|   9999.1234679123|
+-----+------------------+
```

The problem I'm facing is that whenever I load it, the numbers come back in scientific notation or with lost precision.
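A short sketch of why this happens, runnable in plain Python without Spark: Spark renders `DoubleType` columns through Java's `Double.toString`, which switches to scientific notation for magnitudes of 1e7 and above, so a value like the first row displays as `1.0000000012345679E8`. Parsing the same strings as decimals keeps the exact digits in plain notation, which is what Spark's `DecimalType` provides. The list of sample values below is taken from the CSV above.

```python
from decimal import Decimal

# The VAL column from the sample CSV, as raw strings.
raw = ["100000000.12345679", "200000000.1234", "9999.1234679123"]

# Decimal stores the exact digits given, so converting back to a string
# reproduces the original text with no scientific notation and no rounding.
parsed = [Decimal(s) for s in raw]
print([str(d) for d in parsed])  # round-trips the original text exactly
```

In PySpark the equivalent fix is to read the CSV with an explicit schema that types VAL as `DecimalType` (for example `DecimalType(20, 10)`, precision and scale chosen here to cover the sample values) via `spark.read.csv(path, header=True, schema=schema)`, rather than letting the reader infer a double.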