ELK Stack
- Elasticsearch: a distributed search and analytics engine that is highly scalable, highly reliable, and easy to manage. Built on Apache Lucene, it stores, searches, and analyzes large volumes of data in near real time, and is commonly used as the underlying search engine that gives applications advanced search capabilities.
- Logstash: a data collection engine. It dynamically gathers data from a variety of sources, then filters, parses, enriches, and normalizes it before shipping it to a destination of your choice.
- Kibana: a data analysis and visualization platform. Typically paired with Elasticsearch, it lets you search and analyze the indexed data and present it as charts and dashboards.
- Filebeat: a newer member of the ELK stack, a lightweight open-source log file shipper built from the Logstash-Forwarder codebase as its replacement. Install Filebeat on each server whose logs you want to collect and point it at the log directories or files; it then reads the data and forwards it either to Logstash for parsing or directly to Elasticsearch for centralized storage and analysis.
A mature architecture at large scale (hundreds of millions of events): Filebeat * n + Redis + Logstash + Elasticsearch + Kibana. For small and medium deployments (what this article covers): Filebeat * n + Logstash + Elasticsearch + Kibana.
Deploying Filebeat with Docker
docker-compose.yml
version: '3'
services:
  filebeat:
    build:
      context: .
      args:
        ELK_VERSION: 7.1.1
    user: root
    container_name: 'filebeat'
    volumes:
      - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro   # config file, mounted read-only
      - /var/lib/docker/containers:/var/lib/docker/containers:ro    # Docker container log files to collect
      - /var/run/docker.sock:/var/run/docker.sock:ro
Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/beats/filebeat:${ELK_VERSION}
config/filebeat.yml
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

filebeat.inputs:
  - type: docker          # Filebeat has no "json-file" input type; the docker input reads /var/lib/docker/containers/<id>/*.log
    containers.ids:
      - '*'

output.logstash:
  hosts: ["192.168.31.45:5000"]   # change this to the address Logstash listens on
But here we hit a problem: the json-file logs are named by container ID, so you cannot tell which image a given log file's container was created from. To fix this, add `labels` to each container's docker-compose file.
version: "3"
services:
  nginx:
    image: nginx
    container_name: nginx
    labels:
      service: nginx
    ports:
      - 80:80
    logging:
      driver: json-file
      options:
        labels: "service"
The log output now looks like this; the label lets you tell the containers apart and create a separate Elasticsearch index per service:
{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","attrs":{"service":"nginx"},"time":"2019-07-05T06:33:55.973727477Z"}
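To make the structure concrete, here is a minimal Python sketch (the variable names `raw` and `event` are mine, and the log line is copied from the output above) showing how the `service` label set via docker-compose can be recovered from a json-file log entry; this is the same field a Logstash filter can later use to pick an index:

```python
import json

# One raw line from /var/lib/docker/containers/<id>/<id>-json.log,
# produced by the json-file driver with `labels: "service"` set above.
raw = ('{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] \\"GET / HTTP/1.1\\" 200 612 '
       '\\"-\\" \\"curl/7.29.0\\" \\"-\\"\\n","stream":"stdout",'
       '"attrs":{"service":"nginx"},"time":"2019-07-05T06:33:55.973727477Z"}')

event = json.loads(raw)
service = event["attrs"]["service"]   # the label added in docker-compose
print(service)                        # -> nginx
print(event["stream"])                # -> stdout
```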
Deploying ELK with Docker
Honestly, pasting config after config gets tedious to read, and there is no way to explain every option; for any config item or docker-compose concept you don't understand, you'll need to read up on your own. Let's start with the directory layout:
├── docker-compose.yml
├── elasticsearch
│ ├── config
│ │ └── elasticsearch.yml
│ └── Dockerfile
├── kibana
│ ├── config
│ │ └── kibana.yml
│ └── Dockerfile
└── logstash
├── config
│ └── logstash.yml
├── Dockerfile
└── pipeline
└── logstash.conf
docker-compose.yml
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:
elasticsearch
elasticsearch/config/elasticsearch.yml
cluster.name: docker-cluster
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45   # this is my LAN IP
cluster.initial_master_nodes:
  - master
http.cors.enabled: true
http.cors.allow-origin: "*"
elasticsearch/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
kibana
kibana/config/kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
kibana/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
logstash
logstash/config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
logstash/pipeline/logstash.conf
input {
  beats {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
logstash/Dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
That completes the deployment. The parts you need to adapt are the LAN IP in elasticsearch.yml and any filters you add to logstash.conf.
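As an illustration of such a filter, the sketch below (my own assumption, not part of the original deployment) routes events into one index per service per day using the `attrs.service` label shown in the log output earlier; adjust the field path to match what your Filebeat input actually produces:

```conf
input {
  beats {
    port => 5000
  }
}

filter {
  # Copy the docker label into a top-level field, with a
  # fallback for events that carry no label.
  if [attrs][service] {
    mutate { add_field => { "[service]" => "%{[attrs][service]}" } }
  } else {
    mutate { add_field => { "[service]" => "unknown" } }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[service]}-%{+YYYY.MM.dd}"   # e.g. nginx-2019.07.05
  }
}
```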
Source: oschina
Link: https://my.oschina.net/u/4396177/blog/3475775