【centos】Installing topbeat for ELK
Published: 2019-06-13


Deployment environment (a quick check of these components is sketched after the list):

  • centos 6.X
  • jdk 1.7
  • elasticsearch 2.3.1 https://www.elastic.co/downloads/elasticsearch
  • logstash 2.3.1 https://www.elastic.co/downloads/logstash
  • Kibana 4.5.0 https://www.elastic.co/downloads/kibana
  • topbeat 1.2.1 https://www.elastic.co/downloads/beats/topbeat
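Before setting up topbeat, it helps to confirm the rest of the stack is reachable. A minimal sanity check, assuming the Elasticsearch host used later in this post (192.168.87.8:9200):

# check the JDK used by Elasticsearch/Logstash (topbeat itself is a standalone Go binary)
java -version

# confirm Elasticsearch answers on the host/port that topbeat will ship to
curl http://192.168.87.8:9200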

 

Download resources:

  topbeat: https://www.elastic.co/downloads/beats/topbeat. The tar.gz package is used in this post as an example; depending on your preference, the rpm or zip packages work just as well.
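A rough sketch of fetching and unpacking the tar.gz into /usr/local/topbeat (the directory used later in this post). The exact download URL and file name below are assumptions, so copy the real link for your platform from the downloads page above:

cd /usr/local
# URL below is illustrative; take the actual 64-bit Linux link from the topbeat download page
wget https://download.elastic.co/beats/topbeat/topbeat-1.2.1-x86_64.tar.gz
tar -zxvf topbeat-1.2.1-x86_64.tar.gz
mv topbeat-1.2.1-x86_64 topbeat
cd topbeat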

Configure the topbeat.yml file:

################### Topbeat Configuration Example #############################

############################# Input ###########################################
input:
  # In seconds, defines how often to read server statistics
  period: 10

  # Regular expression to match the processes that are monitored
  # By default, all the processes are monitored
  procs: [".*"]

  # Statistics to collect (all enabled by default)
  stats:
    # per system statistics, by default is true
    system: true

    # per process statistics, by default is true
    process: true

    # file system information, by default is true
    filesystem: true

    # cpu usage per core, by default is false
    cpu_per_core: false

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify and additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    # hosts: ["localhost:9200"]
    hosts: ["192.168.87.8:9200"]

    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "admin"
    #password: "s3cr3t"

    # Number of workers per Elasticsearch host.
    #worker: 1

    # Optional index name. The default is "topbeat" and generates
    # [topbeat-]YYYY.MM.DD keys.
    #index: "topbeat"

    # A template is used to set the mapping in Elasticsearch
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones
    #template:

      # Template name. By default the template name is topbeat.
      #name: "topbeat"

      # Path to template file
      #path: "topbeat.template.json"

      # Overwrite existing template
      #overwrite: false

    # Optional HTTP Path
    #path: "/elasticsearch"

    # Proxy server url
    #proxy_url: http://proxy:3128

    # The number of times a particular Elasticsearch index operation is attempted. If
    # the indexing operation doesn't succeed after this many retries, the events are
    # dropped. The default is 3.
    #max_retries: 3

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    # The default is 50.
    #bulk_max_size: 50

    # Configure http request timeout before failing an request to Elasticsearch.
    #timeout: 90

    # The number of seconds to wait for new events between two bulk API index requests.
    # If `bulk_max_size` is reached before this interval expires, addition bulk index
    # requests are made.
    #flush_interval: 1

    # Boolean that sets if the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    #save_topology: false

    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15

    # tls configuration. By default is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

      # Configure minimum TLS version allowed for connection to logstash
      #min_version: 1.0

      # Configure maximum TLS version allowed for connection to logstash
      #max_version: 1.2

  ### Logstash as output
  #logstash:
    # The Logstash hosts
    #hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    #worker: 1

    # Set gzip compression level.
    #compression_level: 3

    # Optional load balance the events between the Logstash hosts
    #loadbalance: true

    # Optional index name. The default index name depends on the each beat.
    # For Packetbeat, the default is set to packetbeat, for Topbeat
    # top topbeat and for Filebeat to filebeat.
    #index: topbeat

    # Optional TLS. By default is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/topbeat"

    # Name of the generated files. The default is `topbeat` and it generates files: `topbeat`, `topbeat.1`, `topbeat.2`, etc.
    #filename: topbeat

    # Maximum size in kilobytes of each file. When this size is reached, the files are
    # rotated. The default value is 10 MB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7

  ### Console output
  # console:
    # Pretty print json event
    #pretty: false

############################# Shipper #########################################
shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this options is not defined, the hostname is used.
  #name:

  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Uncomment the following if you want to ignore transactions created
  # by the server on which the shipper is installed. This option is useful
  # to remove duplicates if shippers are installed on multiple servers.
  #ignore_outgoing: true

  # How often (in seconds) shippers are publishing their IPs to the topology map.
  # The default is 10 seconds.
  #refresh_topology_freq: 10

  # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
  # All the IPs will be deleted afterwards. Note, that the value must be higher than
  # refresh_topology_freq. The default is 15 seconds.
  #topology_expire: 15

  # Internal queue size for single events in processing pipeline
  #queue_size: 1000

  # Configure local GeoIP database support.
  # If no paths are not configured geoip is disabled.
  #geoip:
    #paths:
    #  - "/usr/share/GeoIP/GeoLiteCity.dat"
    #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"

############################# Logging #########################################
# There are three options for the log ouput: syslog, file, stderr.
# Under Windos systems, the log files are per default sent to the file output,
# under all other system per default to syslog.
logging:

  # Send all logging output to syslog. On Windows default is false, otherwise
  # default is true.
  #to_syslog: true

  # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
  # limit is reached.
  #to_files: false

  # To enable logging to files, to_files option has to be set to true
  files:
    # The directory where the log files will written to.
    #path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure log file size limit. If limit is reached, log file will be
    # automatically rotated
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"]
  # Other available selectors are beat, publish, service
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error

The part that matters most is the input section:

  

input:
  # In seconds, defines how often to read server statistics
  # i.e. how many seconds between samples of server statistics
  period: 10

  # Regular expression to match the processes that are monitored
  # By default, all the processes are monitored
  # use this to filter out processes you don't need; ".*" collects everything
  procs: [".*"]

  # Statistics to collect (all enabled by default)
  stats:
    # per system statistics, by default is true
    system: true

    # per process statistics, by default is true
    process: true

    # file system information, by default is true
    filesystem: true

    # cpu usage per core, by default is false
    # per-core CPU usage; not collected by default
    cpu_per_core: false

 

If you modify topbeat.yml, you need to restart the topbeat process for the change to take effect:

[root@td-cigtest-1 topbeat]# ps -ef|grep topbeat
root     55861 52838  0 10:14 pts/1    00:00:52 ./topbeat -c topbeat.yml
root     61753 52838  0 12:05 pts/1    00:00:00 grep topbeat
[root@td-cigtest-1 topbeat]# kill 55861
[root@td-cigtest-1 topbeat]# vim topbeat.yml
[root@td-cigtest-1 topbeat]# ./topbeat -c topbeat.yml &
[1] 61878
[root@td-cigtest-1 topbeat]# ps -ef|grep topbeat
root     61878 52838  0 12:07 pts/1    00:00:00 ./topbeat -c topbeat.yml
root     61894 52838  0 12:07 pts/1    00:00:00 grep topbeat
[root@td-cigtest-1 topbeat]#

After that, the new configuration takes effect.

 

Upload the Elasticsearch index template (topbeat.template.json here is the path to the file in your own topbeat directory; the official docs use /etc/..... ):

curl -XPUT 'http://192.168.87.8:9200/_template/topbeat' -d@/usr/local/topbeat/topbeat.template.json
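To double-check that the template was actually registered, you can read it back from the same host (a quick optional check):

# should return the topbeat template body; an empty {} means it was not loaded
curl -XGET 'http://192.168.87.8:9200/_template/topbeat?pretty'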

Addendum (2016-04-15):

  There is more than one way to set the template. The simplest is to POST it to Elasticsearch just like ordinary data. The longer-term approach is to write it as a JSON file and put it in the configuration path: the default config lives under /etc/elasticsearch/, and other configs go under /etc/elasticsearch/templates/.

In my own setup I put it under /usr/local/elasticsearch/templates/; restart the ES service and it takes effect.

Q: My Elasticsearch is not installed as a service. From what I found, there is a plugin that can install it as one, but I have not managed to get it working yet; the plugin has problems and no longer supports ES 2.x.
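Since ES here is not running as a service, restarting it for the template change means stopping and starting the process by hand. A minimal sketch, assuming ES lives under /usr/local/elasticsearch as implied by the templates path above:

# find the running Elasticsearch process (the PID will differ on your machine)
ps -ef | grep elasticsearch
kill <ES_PID>

# start it again in the background (-d daemonizes the process)
/usr/local/elasticsearch/bin/elasticsearch -d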

 

Start the topbeat service:

[root@td-cigtest-1 topbeat]# ls
topbeat  topbeat.template.json  topbeat.yml
[root@td-cigtest-1 topbeat]# pwd
/usr/local/topbeat
[root@td-cigtest-1 topbeat]# curl -XPUT 'http://192.168.87.8:9200/_template/topbeat' -d@/usr/local/topbeat/topbeat.template.json
{"acknowledged":true}
[root@td-cigtest-1 topbeat]# ./topbeat -c topbeat.yml&
[2] 9298
[root@td-cigtest-1 topbeat]#

Test whether it is working:

curl -XGET 'http://192.168.87.8:9200/topbeat-*/_search?pretty'
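If that search returns hits, topbeat is indexing data. Another quick optional check is to list the daily indices it creates:

# one index per day, e.g. topbeat-2016.04.15
curl -XGET 'http://192.168.87.8:9200/_cat/indices/topbeat-*?v'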

 

With this setup, the data is pushed directly to Elasticsearch. If you want to send it to Logstash first, see https://www.elastic.co/guide/en/beats/topbeat/current/config-topbeat-logstash.html

 

References:

https://www.elastic.co/guide/en/beats/topbeat/current/topbeat-template.html

Reposted from: https://www.cnblogs.com/hager/p/5392205.html
