Installing and Using ELK 6.0
1. Preparing to install Elasticsearch
Prepare three machines so the distributed-cluster setup can be carried out (more machines are even better):
- 192.168.1.17 (master-node)
- 192.168.1.10 (lb-node1)
- 192.168.1.11 (lb-node2)
Role assignment:
- Install JDK 1.8 on all three machines, since Elasticsearch is written in Java
- Install Elasticsearch (abbreviated "ES" below) on all three
- 192.168.1.17 acts as the master node
- 192.168.1.10 and 192.168.1.11 act as data nodes
- Kibana is installed on the master node
- Logstash is installed on 192.168.1.10 (lb-node1)
ELK version info:
- Elasticsearch-6.0.0
- logstash-6.0.0
- kibana-6.0.0
- filebeat-6.0.0
Then disable the firewall (or flush its rules) on all three machines.
Configure the hosts file on all three machines as follows:
$ vim /etc/hosts
192.168.1.17 master-node
192.168.1.10 lb-node1
192.168.1.11 lb-node2
Install ES on all three hosts
## The Tsinghua University mirror is faster to download from
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm
Configure ES
[root@master-node ~]# ll /etc/elasticsearch/
total 16
-rw-rw---- 1 root elasticsearch 2870 Nov 11 2017 elasticsearch.yml
-rw-rw---- 1 root elasticsearch 2678 Nov 11 2017 jvm.options
-rw-rw---- 1 root elasticsearch 5091 Nov 11 2017 log4j2.properties
[root@master-node ~]# ll /etc/sysconfig/elasticsearch
-rw-rw---- 1 root elasticsearch 1593 Nov 11 2017 /etc/sysconfig/elasticsearch
[root@master-node ~]#
elasticsearch.yml configures the cluster and node settings, while the /etc/sysconfig/elasticsearch file configures the service itself, such as file paths and Java-related paths.
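As a side note, jvm.options is where the ES heap is sized, which is usually the first setting to adjust on a new node. A sketch of the relevant excerpt (the 1g values here are illustrative; the usual advice is to keep -Xms equal to -Xmx and no more than about half of physical RAM):

```
# /etc/elasticsearch/jvm.options (excerpt)
-Xms1g
-Xmx1g
```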
[root@master-node ~]# grep '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: master-node # name of the cluster
node.name: master # this node's name
node.master: true # this node is master-eligible
node.data: false # this node does not hold data
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0 # listen on all IPs; in production, bind to a specific safe IP
http.port: 9200 # the ES HTTP port
bootstrap.memory_lock: true # lock memory so it is never swapped out (this setting replaced bootstrap.mlockall in 5.x)
discovery.zen.ping.unicast.hosts: ["192.168.1.17", "192.168.1.10", "192.168.1.11"] # unicast node discovery
[root@master-node ~]#
Then copy the config file to the other two machines and adjust the settings below:
[root@master-node ~]# scp -pr /etc/elasticsearch/elasticsearch.yml lb-node1:/etc/elasticsearch/elasticsearch.yml
[root@master-node ~]# scp -pr /etc/elasticsearch/elasticsearch.yml lb-node2:/etc/elasticsearch/elasticsearch.yml
[root@lb-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
node.name: lb-node2
node.master: false
node.data: true
path.data: /data/es-data # data directory
Once the data nodes are configured, return to the master node and start the ES service. Port 9300 is used for inter-node cluster communication, while 9200 serves client (HTTP) traffic:
[root@master-node ~]# systemctl start elasticsearch.service
[root@master-node ~]# netstat -lntp|grep java
tcp6 0 0 :::9200 :::* LISTEN 50034/java
tcp6 0 0 :::9300 :::* LISTEN 50034/java
[root@master-node ~]#
## If startup fails, check the logs
[root@master-node ~]# ls /var/log/elasticsearch/
[root@master-node ~]# tail -n50 /var/log/messages
After the master node is up, start the ES service on the other nodes.
Checking the cluster with curl
[root@master-node ~]# curl '192.168.1.17:9200/_cluster/health?pretty'
{
"cluster_name" : "master-node",
"status" : "green", # green means healthy; yellow or red indicates a problem
"timed_out" : false, # whether the request timed out
"number_of_nodes" : 3, # number of nodes in the cluster
"number_of_data_nodes" : 2, # number of data nodes in the cluster
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
[root@master-node ~]#
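The status field above can also be checked from a script. A minimal sketch, assuming the JSON shape shown above, that pulls status out of a saved health response with sed (in a real check you would fill the variable from `curl -s '192.168.1.17:9200/_cluster/health'`):

```shell
# Sample response in the shape returned by /_cluster/health?pretty
health='{
  "cluster_name" : "master-node",
  "status" : "green",
  "number_of_nodes" : 3
}'

# Extract the value of the "status" field
status=$(printf '%s\n' "$health" | sed -n 's/.*"status" : "\([a-z]*\)".*/\1/p')
echo "cluster status: $status"   # prints: cluster status: green
```

Exiting non-zero when $status is not green turns this into a simple cron or monitoring probe.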
View detailed cluster state:
[root@master-node ~]# curl '192.168.1.17:9200/_cluster/state?pretty'
{
"cluster_name" : "master-node",
"compressed_size_in_bytes" : 346,
"version" : 6,
"state_uuid" : "xgMwKKfxTWmpXWC-0RCYLw",
"master_node" : "ojBw2Bu7SQqfZ4GjSQ8z1A",
"blocks" : { },
"nodes" : {
"yuNZNzj5SPu9UOAf3xcapg" : {
"name" : "lb-node2",
"ephemeral_id" : "W7tc0A-BRfONEuY5VGrlFQ",
"transport_address" : "192.168.1.11:9300",
"attributes" : { }
},
....
....
[root@master-node ~]#
If everything checks out, the ES cluster is up.
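Node roles can be double-checked with the _cat/nodes API. The sketch below counts data nodes from a captured response; the sample table is made up to match this cluster, and in practice you would pipe `curl -s '192.168.1.17:9200/_cat/nodes?v'` into the same awk filter:

```shell
# Made-up _cat/nodes output for this cluster; node.role is column 8
# (m = master-eligible, d = data, i = ingest)
nodes='ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.17 12           68          1   0.00    0.01    0.05     mi        *      master
192.168.1.10 10           70          1   0.00    0.02    0.05     di        -      lb-node1
192.168.1.11 11           69          1   0.01    0.02    0.05     di        -      lb-node2'

# Count rows whose node.role column contains "d"
data_nodes=$(printf '%s\n' "$nodes" | awk 'NR > 1 && $8 ~ /d/ {n++} END {print n}')
echo "data nodes: $data_nodes"   # prints: data nodes: 2
```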
Startup error:
tail -n100 /var/log/messages
- main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
Fix it by giving the elasticsearch user ownership of the config directory:
chown -R elasticsearch:elasticsearch /etc/elasticsearch
2. Setting up the Kibana and Logstash servers
Elasticsearch only returns strings of JSON; to present this information graphically, we need to install Kibana.
Install Kibana on the master
[root@master-node ~]# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/kibana-6.0.0-x86_64.rpm
[root@master-node ~]# rpm -ivh kibana-6.0.0-x86_64.rpm
After installation, configure Kibana:
[root@master-node ~]# ll /etc/kibana/
total 8
-rw-r--r-- 1 root root 4649 Nov 11 2017 kibana.yml
[root@master-node ~]#
[root@master-node ~]# grep '^[a-Z]' /etc/kibana/kibana.yml
server.port: 5601 # kibana's port
server.host: 192.168.1.17 # listen IP
elasticsearch.url: "http://192.168.1.17:9200" # the ES server's IP; for a cluster, use the master node's IP
logging.dest: /var/log/kibana/kibana.log # kibana's log file path; otherwise it logs to messages by default
[root@master-node ~]#
# Create the log file
[root@master-node ~]# mkdir -p /var/log/kibana
[root@master-node ~]# touch /var/log/kibana/kibana.log
[root@master-node ~]# chmod 777 /var/log/kibana/kibana.log
# Start Kibana
[root@master-node ~]# systemctl restart kibana
[root@master-node ~]# netstat -lntp|grep 5601
tcp 0 0 192.168.1.17:5601 0.0.0.0:* LISTEN 51762/node
[root@master-node ~]#
Note: Kibana is built on Node.js, so the process name is node.
Now test access from a browser at http://192.168.1.17:5601/. Since we have not installed X-Pack, there is no username or password and the page can be opened directly.
Kibana is now installed.
Installing Logstash on a data node and collecting system logs (rsyslog in practice)
Note: Logstash does not currently support JDK 9.
- Install logstash
[root@lb-node1 ~]# wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/logstash-6.0.0.rpm
[root@lb-node1 ~]# rpm -ivh logstash-6.0.0.rpm
Create a user and group and grant ownership so the service can start:
[root@lb-node1 ~]# groupadd elsearch
[root@lb-node1 ~]# useradd elsearch -g elsearch -p elsearch
[root@lb-node1 ~]# chown -R elsearch:elsearch /etc/elasticsearch
- Don't start the service yet; first configure Logstash to collect syslog messages:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
stdout {
codec => rubydebug
}
}
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK # OK means the config file is valid
[root@lb-node1 /usr/share/logstash/bin]#
Flag descriptions:
- --path.settings specifies the directory holding Logstash's own settings
- -f specifies the pipeline config file to validate
- --config.test_and_exit validates the config and exits instead of starting the pipeline
Fixing the GC-thread warning: in the VM's settings, raise the number of cores per processor to 2 and rerun the command; if it still fails, raise the processor count to 2 as well.
- Point rsyslog at the Logstash server's IP and the listening port configured above:
[root@lb-node1 ~]# vim /etc/rsyslog.conf
#### RULES ####
*.* @@192.168.1.10:10514
- Restart rsyslog
[root@lb-node1 ~]# systemctl restart rsyslog.service
Start Logstash with the config file specified:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
# The terminal blocks here because the config file sends output to the current terminal
- In a new terminal, check whether port 10514 is being listened on:
[root@lb-node1 ~]# netstat -lntp |grep 10514
tcp6 0 0 :::10514 :::* LISTEN 10234/java
[root@lb-node1 ~]#
Then ssh into this machine from another host and see whether log output appears:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
"severity" => 6,
"pid" => "10460",
"program" => "sshd",
"message" => "Accepted password for root from 192.168.1.11 port 42848 ssh2\n",
"type" => "system-syslog",
"priority" => 86,
"logsource" => "lb-node1",
"@timestamp" => 2019-09-06T15:42:50.000Z,
"@version" => "1",
"host" => "192.168.1.10",
"facility" => 10,
"severity_label" => "Informational",
"timestamp" => "Sep 6 11:42:50",
"facility_label" => "security/authorization"
}
.......
As shown, the collected logs are printed to the terminal in JSON format, so the test succeeded.
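A note on the numeric fields in that event: syslog packs the priority as facility × 8 + severity, so the priority of 86 above decomposes into facility 10 (security/authorization) and severity 6 (informational), exactly the labels Logstash printed. The arithmetic in shell:

```shell
priority=86
facility=$((priority / 8))   # 86 / 8 = 10  -> security/authorization
severity=$((priority % 8))   # 86 % 8 = 6   -> informational
echo "facility=$facility severity=$severity"
```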
- Reconfigure logstash
Next, change the config file so that collected logs are shipped to the ES server instead of the current terminal:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
syslog {
type => "system-syslog"
port => 10514
}
}
output {
elasticsearch {
hosts => ["192.168.1.17:9200"]
index => "system-syslog-%{+YYYY.MM}"
}
}
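The %{+YYYY.MM} part of the index name is a Logstash date pattern expanded from each event's @timestamp, so events are bucketed into one index per month. For the current month, the generated name is the shell equivalent of:

```shell
# %{+YYYY.MM} expands to the 4-digit year and 2-digit month of the event
index="system-syslog-$(date +%Y.%m)"
echo "$index"   # e.g. system-syslog-2019.09
```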
## As before, validate the config file:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
- If the config is OK, start the logstash service, then check the process and listening ports:
[root@lb-node1 ~]# systemctl start logstash.service
[root@lb-node1 ~]# systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2019-09-06 11:55:01 EDT; 4s ago
Main PID: 11104 (java)
CGroup: /system.slice/logstash.service
└─11104 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancy...
Sep 06 11:55:01 lb-node1 systemd[1]: Started logstash.
Sep 06 11:55:01 lb-node1 systemd[1]: Starting logstash...
[root@lb-node1 ~]# ps aux|grep logstash
# The process is running, but ports 9600 and 10514 are not being listened on
Troubleshooting: check Logstash's own log for errors first; nothing had been written there, so fall back to tail -n50 /var/log/messages, which showed the error.
It turned out to be a permissions problem, so fix the permissions:
[root@lb-node1 ~]# chown logstash /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]# ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 10505 Sep 6 11:47 /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]# ll !$
ll /var/log/logstash/logstash-plain.log
-rw-r--r-- 1 logstash root 11865 Sep 6 12:03 /var/log/logstash/logstash-plain.log
[root@lb-node1 ~]#
After fixing the permissions and restarting the service, the ports were still not listening. logstash-plain.log now complained that a path "must be a writable directory. It is not writable."
Still a permissions issue: we previously started Logstash from the terminal as root, so the files it created are owned by root:
[root@lb-node1 ~]# chown -R logstash /var/lib/logstash
[root@lb-node1 ~]# ll !$
ll /var/lib/logstash
total 4
drwxr-xr-x 2 logstash root 6 Sep 6 11:02 dead_letter_queue
drwxr-xr-x 2 logstash root 6 Sep 6 11:02 queue
-rw-r--r-- 1 logstash root 36 Sep 6 11:35 uuid
## The ports are now being listened on, so the logstash service started successfully
[root@lb-node1 ~]# netstat -lntp|grep 9600
tcp6 0 0 127.0.0.1:9600 :::* LISTEN 15414/java
[root@lb-node1 ~]# netstat -lntp|grep 10514
tcp6 0 0 :::10514 :::* LISTEN 15414/java
[root@lb-node1 ~]#
## However, Logstash is listening on the local IP 127.0.0.1, which cannot be reached remotely, so configure the listen IP:
[root@lb-node1 ~]# vim /etc/logstash/logstash.yml
...
http.host: "192.168.1.10"
[root@lb-node1 ~]# systemctl restart logstash
[root@lb-node1 ~]# netstat -lntp|grep 9600
tcp6 0 0 192.168.1.10:9600 :::* LISTEN 15414/java
[root@lb-node1 ~]#
Viewing the logs in Kibana
Back on the Kibana server, fetch the index information:
[root@master-node ~]# curl '192.168.1.17:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2019.09 U4F6hzzYRLuzGQkq15jmIA 5 1 28 0 429.5kb 214.7kb
green open .kibana ol9lN_JkQaiNSI11KWUkAA 1 1 1 0 7.3kb 3.6kb
[root@master-node ~]#
## The system-syslog index defined in the logstash config file was created, so the config works and logstash is communicating with ES.
Get the details of a specific index:
[root@master-node ~]# curl -XGET '192.168.1.17:9200/system-syslog-2019.09?pretty'
To delete an index later, use:
curl -XDELETE 'localhost:9200/system-syslog-2019.09'
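Monthly indices make retention simple: old months are deleted whole. A sketch that filters index names older than a cutoff out of _cat/indices output (the list below is inlined for illustration; in practice pipe `curl -s 'localhost:9200/_cat/indices?h=index'` through the same awk, then pass each surviving name to curl -XDELETE):

```shell
# Illustrative index list, as returned by _cat/indices?h=index
indices='system-syslog-2019.07
system-syslog-2019.08
system-syslog-2019.09'

cutoff="2019.09"   # keep this month and anything newer

# The date suffix is the last "-"-separated field; compare it to the cutoff
old=$(printf '%s\n' "$indices" | awk -F- -v c="$cutoff" '$NF < c')
echo "$old"   # prints the 2019.07 and 2019.08 indices
```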
Once ES and logstash communicate properly, configure Kibana: browse to 192.168.1.17:5601 and add the index pattern on the Kibana page:
- Wildcards can also be used to match several indices at once:
- If the ES server returns data but the "Discover" page still cannot find any logs, delete the index pattern in Settings:
- Then re-add the index, this time without selecting @timestamp; with this approach you can see the data, but there is no time-based histogram:
That is how to collect system logs with logstash, ship them to the ES server, and view them in Kibana.
Collecting nginx logs with logstash in practice
As with syslog, first edit a pipeline config file; this step is done on the logstash server:
[root@lb-node1 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
file {
path => "/tmp/elk_access.log"
start_position => "beginning" # "beginning" makes logstash read from the start of the file, somewhat like cat, except that on reaching the last line it keeps following new lines like tail -F
type => "nginx"
}
}
filter {
grok {
match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
}
geoip {
source => "clientip"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ["192.168.1.17:9200"]
index => "nginx-test-%{+YYYY.MM.dd}"
}
}
Check the config file for errors:
[root@lb-node1 /usr/share/logstash/bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[root@lb-node1 /usr/share/logstash/bin]#
Go into the directory holding nginx's virtual-host configs and create a new vhost file:
[root@master-node conf.d]# vim elk.conf
server {
listen 80;
server_name 192.168.1.17;
location / {
proxy_pass http://192.168.1.17:5601;
#proxy_set_header Host $host;
#proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /tmp/elk_access.log main2;
}
## In nginx's main config file, add the following below the existing log_format line, since the vhost above references a custom log format:
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$upstream_addr" $request_time';
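To see what the grok pattern in the logstash config has to match, it helps to write out one hypothetical line in this main2 format (every value below is made up) and note the field order:

```shell
# One made-up access-log line in the main2 format:
# $http_host $remote_addr - $remote_user [$time_local] "$request"
#   $status $body_bytes_sent "$http_referer" "$http_user_agent" "$upstream_addr" $request_time
line='192.168.1.17 192.168.1.11 - - [06/Sep/2019:11:42:50 -0400] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "192.168.1.17:5601" 0.003'

# For this simple request the status code lands in the 10th whitespace-separated
# token (a naive split; grok handles quoting and brackets properly)
status=$(printf '%s\n' "$line" | awk '{print $10}')
echo "status: $status"   # prints: status: 200
```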
## Restart nginx; the access log gets generated
[root@master-node conf.d]# ll /tmp/elk_access.log
-rw-r--r-- 1 root root 51095 Sep 7 01:46 /tmp/elk_access.log
[root@master-node conf.d]#
Restart the logstash service to generate the nginx log index:
systemctl restart logstash
[root@master-node ~]# curl '192.168.1.17:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2019.09 U4F6hzzYRLuzGQkq15jmIA 5 1 5262 0 2.6mb 1.1mb
green open nginx-test-2019.09.06 -jVW9nG3RQC3RmMS0nYc6g 5 1 47 0 611.3kb 313.5kb
green open .kibana ol9lN_JkQaiNSI11KWUkAA 1 1 4 0 43.4kb 23.3kb
[root@master-node ~]#
Now that index can be configured in Kibana.
Collecting logs with Beats
Beats is a newer addition to the ELK family: a set of lightweight log shippers. The logstash agent used above is comparatively heavy on resources, so the official recommendation is to use Beats for log collection. Beats is also extensible and supports custom builds.
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.0.0/filebeat-6.0.0-x86_64.rpm
rpm -ivh filebeat-6.0.0-x86_64.rpm
After installation, edit the config file:
[root@lb-node2 /]# vim /etc/filebeat/filebeat.yml
- type: log
  #enabled: false # comment this line out
  paths:
    - /var/log/messages # path(s) of the log files to collect
#output.elasticsearch: # comment these lines out for now
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
output.console: # print collected events to the terminal (to test that filebeat can collect logs)
  enabled: true
Once collection is confirmed to work, edit the config file again so filebeat runs as a service:
[root@lb-node2 /]# vim /etc/filebeat/filebeat.yml
#output.console: # disable console output
#  enabled: true
# i.e. comment the two lines above back out
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.17:9200"] # and point filebeat at the ES server
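Putting the two edits together, the minimal working filebeat.yml at this point looks roughly like this (a sketch; the ES address assumes the master node used elsewhere in this setup, and all other settings keep their defaults):

```yaml
filebeat.prospectors:
- type: log
  paths:
    - /var/log/messages   # files to ship

output.elasticsearch:
  hosts: ["192.168.1.17:9200"]   # the ES master's HTTP endpoint
```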
- Start the filebeat service
[root@lb-node2 /]# systemctl start filebeat.service
[root@lb-node2 /]# ps -ef |grep filebeat|grep -v grep
root 10654 1 0 21:04 ? 00:00:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
[root@lb-node2 /]#
- After a successful start, a new index whose name begins with filebeat-6.0.0 appears in Elasticsearch, which means filebeat and ES are communicating properly
- Configure that index in Kibana
Source: CSDN
Author: 李在奋斗
Link: https://blog.csdn.net/qq_31725371/article/details/103569507