How can I force Flume-NG to process the backlog of events after a sink failed?


I'm trying to set up Flume-NG to collect various kinds of logs from a bunch of servers (mostly running Tomcat instances and Apache Httpd) and dump them into HDFS on a 5-node Hadoop cluster.

1 Answer

    There are a few things I see in your configuration that can cause issues:

    1. Your first agent seems to have an avro sink with a batch size of 1. You should bump this up to at least 100. With a batch size of 1, the avro source on the second agent commits to its channel one event at a time, and each commit causes an fsync, which makes file channel performance poor. The batch size on the exec source is also 1, which slows that channel down as well. Increase both batch sizes (or use the Spool Directory Source - more on that later); see the config sketch after this list.

    2. You can have multiple HDFS sinks reading from the same channel to improve performance. Just make sure each sink writes to a different directory or has a different "hdfs.filePrefix", so the sinks never try to write to the same files.

    3. Your batch size for the HDFS sink is 20000, which is quite high, while your "hdfs.callTimeout" is left at the default of 10 seconds. If you want to keep such a huge batch size, increase "hdfs.callTimeout". I'd recommend reducing the batch size to about 1000 and setting the timeout to 15-20 seconds. (Note that at the current batch size each file holds only 2 batches, so reduce the batch size, increase the rollInterval, and raise the timeout.)
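
    To make points 1-3 concrete, here is a minimal sketch of how the relevant properties might look across the two agents. All agent, source, channel, and sink names, the hostname/port, the log path, and the HDFS path are hypothetical placeholders; tune the values to your load:

    ```
    # Agent 1 (log server)
    agent1.sources = execSource
    agent1.channels = fileChannel1
    agent1.sinks = avroSink

    agent1.channels.fileChannel1.type = file

    # Point 1: batch up events instead of committing one at a time
    agent1.sources.execSource.type = exec
    agent1.sources.execSource.channels = fileChannel1
    agent1.sources.execSource.command = tail -F /var/log/tomcat/catalina.out
    agent1.sources.execSource.batchSize = 100

    agent1.sinks.avroSink.type = avro
    agent1.sinks.avroSink.channel = fileChannel1
    agent1.sinks.avroSink.hostname = collector.example.com
    agent1.sinks.avroSink.port = 4545
    agent1.sinks.avroSink.batch-size = 100

    # Agent 2 (collector)
    agent2.sources = avroSource
    agent2.channels = fileChannel2
    agent2.sinks = hdfsSink1 hdfsSink2

    agent2.channels.fileChannel2.type = file

    agent2.sources.avroSource.type = avro
    agent2.sources.avroSource.channels = fileChannel2
    agent2.sources.avroSource.bind = 0.0.0.0
    agent2.sources.avroSource.port = 4545

    # Point 2: two HDFS sinks drain the same channel; distinct filePrefix
    # values keep them from writing to the same files.
    # Point 3: batchSize reduced to 1000, callTimeout raised to 20 seconds.
    agent2.sinks.hdfsSink1.type = hdfs
    agent2.sinks.hdfsSink1.channel = fileChannel2
    agent2.sinks.hdfsSink1.hdfs.path = hdfs://namenode/flume/logs
    agent2.sinks.hdfsSink1.hdfs.filePrefix = sink1
    agent2.sinks.hdfsSink1.hdfs.batchSize = 1000
    agent2.sinks.hdfsSink1.hdfs.callTimeout = 20000
    agent2.sinks.hdfsSink1.hdfs.rollInterval = 300

    agent2.sinks.hdfsSink2.type = hdfs
    agent2.sinks.hdfsSink2.channel = fileChannel2
    agent2.sinks.hdfsSink2.hdfs.path = hdfs://namenode/flume/logs
    agent2.sinks.hdfsSink2.hdfs.filePrefix = sink2
    agent2.sinks.hdfsSink2.hdfs.batchSize = 1000
    agent2.sinks.hdfsSink2.hdfs.callTimeout = 20000
    agent2.sinks.hdfsSink2.hdfs.rollInterval = 300
    ```

    Note that "hdfs.callTimeout" is in milliseconds and "hdfs.rollInterval" is in seconds, and that leaving the two HDFS sinks out of a sink group is deliberate: each sink gets its own runner thread, which is what lets them drain the channel in parallel.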

    If you are using tail -F, I'd recommend trying out the new Spool Directory Source. To use it, rotate your log files out into a directory that the Spool Directory Source watches. The source only processes files that are immutable, which is why the rotation is required. Using tail -F with the exec source has known issues, as documented in the Flume User Guide. A minimal example follows.
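
    Here is a minimal sketch of swapping the exec source on the first agent for a Spooling Directory Source, assuming rotated logs land in /var/log/flume-spool (the directory and the source name are placeholders):

    ```
    # Replace the exec source with a spooling directory source.
    # Only fully rotated (closed, immutable) files may be placed in spoolDir;
    # files that are still being written must never land here.
    agent1.sources = spoolSource
    agent1.sources.spoolSource.type = spooldir
    agent1.sources.spoolSource.channels = fileChannel1
    agent1.sources.spoolSource.spoolDir = /var/log/flume-spool
    agent1.sources.spoolSource.batchSize = 100
    ```

    By default the source renames each fully ingested file with a .COMPLETED suffix, so the same file is never processed twice.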
