I'm trying to create an external table on CSV files with AWS Athena using the code below, but the line TBLPROPERTIES ("skip.header.line.count"="1") doesn't work: the header line is still not skipped.
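For context, the DDL in question has roughly this shape (a sketch only; the table, column, and bucket names are placeholders, not the original post's code):

CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
  column_1 string,
  column_2 string,
  column_3 string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
LOCATION 's3://my-bucket/csv-data/'
TBLPROPERTIES ("skip.header.line.count"="1");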
When this question was asked there was no support for skipping headers, and when it was later introduced it was only for the OpenCSVSerDe, not for LazySimpleSerDe, which is what you get when you specify ROW FORMAT DELIMITED FIELDS …. I think this is what has caused some of the confusion in the answers to this question about whether or not it works.
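For comparison, a minimal sketch of a definition where the property is honoured, using the OpenCSVSerDe explicitly (table and bucket names are invented):

CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
  column_1 string,
  column_2 string,
  column_3 string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = ','
)
LOCATION 's3://my-bucket/csv-data/'
TBLPROPERTIES ('skip.header.line.count' = '1');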
On the AWS console you can specify it as a SerDe parameter (a key-value pair).
If you manage your infrastructure as code with Terraform, you can set it in the ser_de_info parameters: "skip.header.line.count" = "1". Example below:
resource "aws_glue_catalog_table" "banana_datalake_table" {
name = "mapping"
database_name = "banana_datalake"
table_type = "EXTERNAL_TABLE"
owner = "owner"
storage_descriptor {
location = "s3://banana_bucket/"
input_format = "org.apache.hadoop.mapred.TextInputFormat"
output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
compressed = "false"
number_of_buckets = -1
ser_de_info {
name = "SerDeCsv"
serialization_library = "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"
parameters {
"field.delim" = ","
"skip.header.line.count" = 1 # Skip file headers
}
}
columns {
name = "column_1"
type = "string"
}
columns {
name = "column_2"
type = "string"
}
columns {
name = "column_3"
type = "string"
}
}
}
Just tried "skip.header.line.count"="1" and it seems to be working fine now.
I recently tried:
TBLPROPERTIES ('skip.header.line.count'='1')
And it works fine now. This issue arose when I had the column header as a string ("timestamp") and the records were actual timestamps. My queries would bomb, since the scan would hit a string where a timestamp was expected.
Something like this:
ts
2015-06-14 14:45:19.537
2015-06-14 14:50:20.546
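For illustration, a table over data shaped like that might be declared as follows (the table and bucket names are made up, and this assumes a SerDe/engine combination where the property is honoured); with the header skipped, ts can be typed as timestamp instead of string:

CREATE EXTERNAL TABLE IF NOT EXISTS events (
  ts timestamp
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
LOCATION 's3://my-bucket/events/'
TBLPROPERTIES ('skip.header.line.count' = '1');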
This is a feature that has not yet been implemented. See Abhishek@AWS' response here:
"We are working on it and will report back as soon as we have an outcome. Sorry for this again. This ended up taking longer than what we anticipated."
My workaround has been to preprocess the data before creating the table:
sed -e 1d -e 's/\"//g' file.csv > file-2.csv