Architecture Overview

app-server(filebeat) -> kafka -> logstash -> elasticsearch -> kibana

Server Roles

Base system environment:

# cat /etc/redhat-release 
CentOS release 6.5 (Final)
# uname -r
2.6.32-431.el6.x86_64

192.168.162.51    logstash01
192.168.162.53    logstash02
192.168.162.55    logstash03
192.168.162.56    logstash04
192.168.162.57    logstash05
192.168.162.58    elasticsearch01
192.168.162.61    elasticsearch02
192.168.162.62    elasticsearch03
192.168.162.63    elasticsearch04
192.168.162.64    elasticsearch05
192.168.162.66    kibana
192.168.128.144   kafka01
192.168.128.145   kafka02
192.168.128.146   kafka03
192.168.138.75    filebeat,weblogic

Download the Required Packages

All Elastic Stack components are the 6.0.0-beta2 release:

elasticsearch-6.0.0-beta2.rpm
filebeat-6.0.0-beta2-x86_64.rpm
grafana-4.4.3-1.x86_64.rpm
heartbeat-6.0.0-beta2-x86_64.rpm
influxdb-1.3.5.x86_64.rpm
jdk-8u144-linux-x64.rpm
kafka_2.12-0.11.0.0.tgz
kibana-6.0.0-beta2-x86_64.rpm
logstash-6.0.0-beta2.rpm

Installing and Deploying Filebeat

Install Filebeat

Install Filebeat on the application server:

# yum localinstall filebeat-6.0.0-beta2-x86_64.rpm -y

After installation, the RPM places Filebeat under:

# ls /usr/share/filebeat/
bin  kibana  module  NOTICE  README.md  scripts

The configuration file is:

/etc/filebeat/filebeat.yml

Configure Filebeat

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data1/logs/apphfpay_8086_domain/apphfpay.yiguanjinrong.yg.*
  multiline.pattern: '^(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
#----------------------------- Kafka output ---------------------------------
output.kafka:
  hosts: ['192.168.128.144:9092','192.168.128.145:9092','192.168.128.146:9092']
  topic: 'credit'
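Before starting the service, the configuration syntax can be checked. This is a quick sketch using the test subcommand shipped with the 6.x Filebeat packages (the RPM installs a /usr/bin/filebeat wrapper); it only parses the YAML and does not contact Kafka:

# filebeat test config -c /etc/filebeat/filebeat.yml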

Start Filebeat and check its log for errors:

# /etc/init.d/filebeat start

Log file:

/var/log/filebeat/filebeat

Installing and Deploying Kafka and ZooKeeper

Set the hostname on each of the three Kafka servers:

# host=kafka01  && hostname $host && echo "192.168.128.144" $host >>/etc/hosts
# host=kafka02  && hostname $host && echo "192.168.128.145" $host >>/etc/hosts
# host=kafka03  && hostname $host && echo "192.168.128.146" $host >>/etc/hosts

Install Java:

# yum localinstall jdk-8u144-linux-x64.rpm -y

Extract the Kafka tarball and move it to /usr/local/kafka:

# tar fzx kafka_2.12-0.11.0.0.tgz
# mv kafka_2.12-0.11.0.0 /usr/local/kafka

Configure Kafka and ZooKeeper

# pwd
/usr/local/kafka/config
# ls
connect-console-sink.properties    connect-log4j.properties       server.properties
connect-console-source.properties  connect-standalone.properties  tools-log4j.properties
connect-distributed.properties     consumer.properties            zookeeper.properties
connect-file-sink.properties       log4j.properties
connect-file-source.properties     producer.properties

Edit the configuration files:

# grep -Ev "^$|^#" server.properties broker.id=1delete.topic.enable=truelisteners=PLAINTEXT://192.168.128.144:9092num.network.threads=3num.io.threads=8socket.send.buffer.bytes=102400socket.receive.buffer.bytes=102400socket.request.max.bytes=104857600log.dirs=/data1/kafka-logsnum.partitions=12num.recovery.threads.per.data.dir=1log.retention.hours=168log.segment.bytes=1073741824log.retention.check.interval.ms=300000zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181zookeeper.connection.timeout.ms=6000# grep -Ev "^$|^#" consumer.propertieszookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181zookeeper.connection.timeout.ms=6000group.id=test-consumer-group# grep -Ev "^$|^#" producer.properties bootstrap.servers=192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092compression.type=none

Start ZooKeeper and Kafka

First check that the configuration is valid.

Start ZooKeeper in the foreground and watch for errors:
# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties

Start Kafka in the foreground and watch for errors:
# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

If neither reports errors, start ZooKeeper first and then Kafka in the background (a startup script also works):
# nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &
# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

Check that both services are running; the default ports are 2181 (ZooKeeper) and 9092 (Kafka).
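A quick way to confirm both daemons are listening, sketched with netstat (available on CentOS 6):

# netstat -lntp | grep -E '2181|9092'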

Create a topic:

# bin/kafka-topics.sh --create --zookeeper zk01.yiguanjinrong.yg:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".

List the topics:

# bin/kafka-topics.sh --list --zookeeper zk01.yiguanjinrong.yg:2181
test

Simulate a client producing messages:

# bin/kafka-console-producer.sh --broker-list 192.168.128.144:9092 --topic test
At the prompt, type a few lines of text and press Enter after each one.

Simulate a client consuming messages (if the messages come through, the Kafka deployment is working):

# bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic test --from-beginning

Delete the test topic:

# bin/kafka-topics.sh --delete --zookeeper zk01.yiguanjinrong.yg:2181 --topic test
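The pipeline itself uses the 'credit' topic referenced in the Filebeat and Logstash configurations. With the default settings Kafka auto-creates topics on first use, but it can also be created explicitly; a sketch, assuming 12 partitions (matching num.partitions above) and a replication factor of 3 across the three brokers:

# bin/kafka-topics.sh --create --zookeeper zk01.yiguanjinrong.yg:2181 --replication-factor 3 --partitions 12 --topic credit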

Installing and Deploying Logstash

Install Logstash

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall logstash-6.0.0-beta2.rpm -y

The Logstash installation directory and the configuration directory (empty by default) are:

# /usr/share/logstash/    (after installation, the bin directory is not added to the PATH)
# /etc/logstash/conf.d/

Logstash configuration file:

# cat /etc/logstash/conf.d/logstash.conf 
input {
  kafka {
    bootstrap_servers => "192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092"
    topics => ["credit"]
    group_id => "test-consumer-group"
    codec => "plain"
    consumer_threads => 1
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.162.58:9200","192.168.162.61:9200","192.168.162.62:9200","192.168.162.63:9200","192.168.162.64:9200"]
    index => "logs-%{+YYYY.MM.dd}"
    workers => 1
  }
}
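For troubleshooting, a stdout output with the rubydebug codec can be added temporarily alongside the elasticsearch output so that every event read from Kafka is printed to the console; a sketch of the output block with that addition (remove it once the pipeline is confirmed working):

output {
  stdout { codec => rubydebug }   # temporary: print each event to the console
  elasticsearch {
    hosts => ["192.168.162.58:9200","192.168.162.61:9200","192.168.162.62:9200","192.168.162.63:9200","192.168.162.64:9200"]
    index => "logs-%{+YYYY.MM.dd}"
    workers => 1
  }
}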

Check that the configuration file is valid:

# /usr/share/logstash/bin/logstash -t --path.settings /etc/logstash/ --verbose
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Logstash does not ship with an init script by default, but it provides a tool to generate one.

View the tool's usage help:

# bin/system-install --help
Usage: system-install [OPTIONSFILE] [STARTUPTYPE] [VERSION]

NOTE: These arguments are ordered, and co-dependent

OPTIONSFILE: Full path to a startup.options file
OPTIONSFILE is required if STARTUPTYPE is specified, but otherwise looks first
in /usr/share/logstash/config/startup.options and then /etc/logstash/startup.options
Last match wins

STARTUPTYPE: e.g. sysv, upstart, systemd, etc.
OPTIONSFILE is required to specify a STARTUPTYPE.

VERSION: The specified version of STARTUPTYPE to use.  The default is usually
preferred here, so it can safely be omitted.
Both OPTIONSFILE & STARTUPTYPE are required to specify a VERSION.

# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

This generates /etc/init.d/logstash. Adjust the log directory in the generated script; keeping the logs under /var/log/logstash is recommended:

# mkdir -p /var/log/logstash && chown logstash.logstash -R /var/log/logstash

The part of the script that needs to be modified is shown below:

start() {
  # Ensure the log directory is setup correctly.
  if [ ! -d "/var/log/logstash" ]; then
    mkdir "/var/log/logstash"
    chown "$user":"$group" -R "/var/log/logstash"
    chmod 755 "/var/log/logstash"
  fi

  # Setup any environmental stuff beforehand
  ulimit -n ${limit_open_files}

  # Run the program!
  nice -n "$nice" \
  chroot --userspec "$user":"$group" "$chroot" sh -c "
    ulimit -n ${limit_open_files}
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/logstash/logstash-stdout.log 2>> /var/log/logstash/logstash-stderr.log &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"

  return 0
}

Start Logstash and check its log for errors:

# /etc/init.d/logstash start
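To have Logstash start automatically at boot on CentOS 6, the generated SysV script can be registered with chkconfig if it is not already; a sketch:

# chkconfig --add logstash
# chkconfig logstash on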

Installing and Deploying the Elasticsearch Cluster

Install Elasticsearch

# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall elasticsearch-6.0.0-beta2.rpm -y

Configure Elasticsearch

Installation path:
# /usr/share/elasticsearch/
Configuration file:
# /etc/elasticsearch/elasticsearch.yml

Elasticsearch configuration file:

# cat elasticsearch.yml | grep -Ev "^$|^#"
cluster.name: elasticsearch
node.name: es01  # use the corresponding node name on the other nodes
path.data: /data1/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.162.58  # use the corresponding address on the other nodes
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.162.58", "192.168.162.61", "192.168.162.62", "192.168.162.63", "192.168.162.64"]
discovery.zen.minimum_master_nodes: 3
node.master: true
node.data: true
transport.tcp.compress: true
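path.data points at /data1/elasticsearch, which the RPM does not create. Assuming that directory does not exist yet, create it and hand it to the elasticsearch user before starting, the same way the log directory is handled below:

# mkdir -p /data1/elasticsearch && chown elasticsearch.elasticsearch -R /data1/elasticsearch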

Start Elasticsearch

# mkdir -p /var/log/elasticsearch && chown elasticsearch.elasticsearch -R /var/log/elasticsearch
# /etc/init.d/elasticsearch start
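Once all five nodes are started, the cluster state can be checked from any node with the _cluster/health API; the response should report five nodes and, after shard allocation finishes, a green status. A sketch:

# curl 'http://192.168.162.58:9200/_cluster/health?pretty'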

Installing and Deploying Kibana

Install Kibana

# yum localinstall kibana-6.0.0-beta2-x86_64.rpm -y

Kibana configuration file:

# cat /etc/kibana/kibana.yml | grep -Ev "^$|^#"
server.port: 5601
server.host: "192.168.162.66"
elasticsearch.url: "http://192.168.162.58:9200"  # the Elasticsearch cluster; any single es node's address will do
kibana.index: ".kibana"
pid.file: /var/run/kibana/kibana.pid

Start Kibana

First create the PID directory referenced in the configuration:

# mkdir -p /var/run/kibana
# chown kibana.kibana -R /var/run/kibana

Then modify the start() function of the Kibana startup script as follows:

start() {
  # Ensure the log directory is setup correctly.
  [ ! -d "/var/log/kibana/" ] && mkdir "/var/log/kibana/"
  chown "$user":"$group"  "/var/log/kibana/"
  chmod 755 "/var/log/kibana/"

  # Setup any environmental stuff beforehand

  # Run the program!
  chroot --userspec "$user":"$group" "$chroot" sh -c "
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/kibana/kibana.stdout 2>> /var/log/kibana/kibana.stderr &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"

  return 0
}

Start Kibana:

# /etc/init.d/kibana start

Kibana is now reachable at http://192.168.162.66:5601
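For a quick reachability check from the command line, Kibana exposes a status endpoint that returns JSON describing the server state; a sketch, assuming the /api/status endpoint of the 6.x releases:

# curl http://192.168.162.66:5601/api/status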