Load data into Hive with custom delimiter

Posted by 隐身守侯 on 2019-11-27 04:53:58

Question


I'm trying to create an internal (managed) table in Hive that can store my incremental log data. The table goes like this:

CREATE TABLE logs (foo INT, bar STRING, created_date TIMESTAMP)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '<=>'
STORED AS TEXTFILE;

I need to load data into this table periodically.

LOAD DATA INPATH '/user/foo/data/logs' INTO TABLE logs;

But the data is not being inserted into the table properly. There seems to be a problem with the delimiter, and I can't find why.

Example log line:

120<=>abcdefg<=>2016-01-01 12:14:11

Running select * from logs; I get:

120  =>abcdefg  NULL

The first attribute is fine; the second contains part of the delimiter, but since the column is a string it still gets inserted; the third is NULL because the column expects a timestamp.

Can anyone please help with how to provide a custom delimiter and load the data successfully?


Answer 1:


By default, Hive only allows a single character as the field delimiter; of a longer delimiter string, only the first character is used. Although RegexSerDe can express a multi-character delimiter, it can be daunting to use, especially for newcomers.
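That single-character limitation explains the output in the question: Hive splits each line on '<' alone, so the leftover '=>' ends up inside the following fields. A minimal Python sketch of that behavior (an illustration, not Hive code):

```python
# Hive's default SerDe uses only the FIRST character of the supplied
# delimiter, so '<=>' effectively becomes '<'.
line = "120<=>abcdefg<=>2016-01-01 12:14:11"
fields = line.split("<")
print(fields)  # ['120', '=>abcdefg', '=>2016-01-01 12:14:11']
```

The first field parses as the INT 120, the second keeps the stray '=>' prefix, and the third ('=>2016-01-01 12:14:11') fails the TIMESTAMP conversion and becomes NULL, matching the output shown in the question.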

The patch (HIVE-5871) adds a new SerDe named MultiDelimitSerDe. With MultiDelimitSerDe, users can specify a multi-character field delimiter when creating tables, in a way very similar to a typical table creation.

hive> CREATE TABLE logs (foo INT, bar STRING, created_date TIMESTAMP)
    > ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe' 
    > WITH SERDEPROPERTIES ("field.delim"="<=>")
    > STORED AS TEXTFILE;

hive> dfs -put /home/user1/multi_char.txt /user/hive/warehouse/logs/. ;

hive> select * from logs;
OK
120 abcdefg 2016-01-01 12:14:11
Time taken: 1.657 seconds, Fetched: 1 row(s)
hive> 



Answer 2:


CREATE TABLE logs (foo INT, bar STRING, created_date TIMESTAMP)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES (
    "field.delim"="<=>",
    "collection.delim"=":",
    "mapkey.delim"="@"
);

Load the data into the table:

load data local inpath '/home/kishore/Data/input.txt' overwrite into table logs;
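The extra collection.delim and mapkey.delim properties only matter for complex column types (ARRAY, MAP, STRUCT); they are inert for the scalar columns in this table. As a hypothetical illustration, with these settings a MAP<STRING,INT> field serialized as 'a@1:b@2' would split into entries on ':' and into key/value pairs on '@'. A quick Python sketch of that parsing logic (not Hive code):

```python
# Hypothetical map field with collection.delim=':' and mapkey.delim='@'
raw = "a@1:b@2"
entries = (pair.split("@", 1) for pair in raw.split(":"))
parsed = {key: int(value) for key, value in entries}
print(parsed)  # {'a': 1, 'b': 2}
```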



Answer 3:


I suggest you go with the MultiDelimitSerDe answers above rather than mine. You can also try RegexSerDe, but you need an additional step to parse the fields into your datatypes, since RegexSerDe accepts only strings by default.

RegexSerDe comes in handy for log files where the data is not uniformly arranged around a single delimiter.

CREATE TABLE logs_tmp  (foo STRING,bar STRING, created_date STRING) 
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' 
WITH SERDEPROPERTIES (
 "input.regex" = "(\\d{3})<=>(\\w+)<=>(\\d{4}-\\d{2}-\\d{2}\\s\\d{2}:\\d{2}:\\d{2})"
) 
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'logs.txt' overwrite into table logs_tmp;

CREATE TABLE logs  (foo INT,bar STRING, created_date TIMESTAMP) ;

INSERT INTO TABLE logs SELECT cast(foo as int) as foo, bar, cast(created_date as TIMESTAMP) as created_date FROM logs_tmp;

output:

    OK
    Time taken: 0.213 seconds
    hive> select * from logs;
    120     abcdefg 2016-01-01 12:14:11
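As a sanity check, the regex used above can be tested against the sample line before creating the table. A quick Python sketch (not part of the Hive workflow):

```python
import re

# Same pattern as in the input.regex SERDEPROPERTIES value above.
pattern = re.compile(
    r"(\d{3})<=>(\w+)<=>(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})"
)
match = pattern.match("120<=>abcdefg<=>2016-01-01 12:14:11")
print(match.groups())  # ('120', 'abcdefg', '2016-01-01 12:14:11')
```

Each capture group becomes one string column of logs_tmp; a line that does not match the pattern yields NULLs for every column.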


Source: https://stackoverflow.com/questions/38825285/load-data-into-hive-with-custom-delimiter
