I tried to concat() two parquet files with pandas in Python.
It works, but when I try to write and save the DataFrame to a parquet file, it displays the error:
I experienced a related order-of-magnitude problem when writing dask DataFrames with datetime64[ns] columns to AWS S3 and crawling them into Athena tables.
The problem was that subsequent Athena queries showed the datetime fields as year >57000 instead of 2020. The following fix worked for me:
df.to_parquet(path, engine="fastparquet", times="int96")
This forwards the keyword argument times="int96" into fastparquet.writer.write(). Note that the times kwarg is only honored by the fastparquet engine, not by pyarrow.
I checked the resulting parquet file with the parquet-tools package, and it indeed shows the datetime columns stored as INT96. Athena (which is based on Presto) supports the INT96 format well and does not exhibit the order-of-magnitude problem.
Reference: https://github.com/dask/fastparquet/blob/master/fastparquet/writer.py, function write(), kwarg times.
(dask 2.30.0; fastparquet 0.4.1; pandas 1.1.4)