Flume HDFS Sink generates lots of tiny files on HDFS

北恋 2021-01-24 18:14

I have a toy setup sending log4j messages to HDFS using Flume. I'm not able to configure the HDFS sink to avoid many small files. I thought I could configure the HDFS sink to

3 Answers
  • 2021-01-24 18:20

    HDFS Sink has a property hdfs.batchSize (default 100), described as the "number of events written to file before it is flushed to HDFS". I think that's your problem here.

    Consider also checking the other properties in the HDFS Sink documentation.
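
    A minimal sketch of the roll-related sink properties, assuming the agent and sink names a1 and i1 (the same hypothetical names used in the config further down); the values are illustrative, not recommendations:

    #flush to HDFS after 1000 events instead of the default 100
    a1.sinks.i1.hdfs.batchSize=1000
    #roll on size only (here 128MB); disable time- and count-based rolling
    a1.sinks.i1.hdfs.rollSize=134217728
    a1.sinks.i1.hdfs.rollInterval=0
    a1.sinks.i1.hdfs.rollCount=0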

  • 2021-01-24 18:30

    This can possibly happen because of the memory channel and its capacity. I guess it dumps data to HDFS as soon as the channel fills up. Did you try using a file channel instead of a memory channel?
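
    For example, a minimal file channel sketch, assuming an agent a1, a source r1, a sink i1, and a channel c1 (all hypothetical names), with local directories that must exist and be writable by the Flume agent:

    #durable file channel instead of the in-memory channel
    a1.channels.c1.type=file
    a1.channels.c1.checkpointDir=/var/flume/checkpoint
    a1.channels.c1.dataDirs=/var/flume/data
    #wire the source and sink to the channel
    a1.sources.r1.channels=c1
    a1.sinks.i1.channel=c1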

  • It is a typo in your conf.

    #sink config
    a1.sinks.i1.type=hdfs
    a1.sinks.i1.hdfs.path=hdfs://localhost:8020/user/myName/flume/events
    #never roll based on time
    a1.sinks.i1.hdfs.rollInterval=0
    #10MB=10485760
    a1.sinks.il.hdfs.rollSize=10485760
    #never roll based on number of events
    a1.sinks.il.hdfs.rollCount=0
    

    In the rollSize and rollCount lines you wrote il (lowercase L) where it should be i1. Run the agent with DEBUG logging and you will find a line like:

    [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.shouldRotate:465)  - rolling: rollSize: 1024, bytes: 1024
    

    Because of the il typo, those two settings never take effect, so the default rollSize of 1024 bytes is used.
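
    The corrected lines, a sketch of the fix described above:

    a1.sinks.i1.hdfs.rollSize=10485760
    a1.sinks.i1.hdfs.rollCount=0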
