Installing Logstash 7.3.0 with Docker and Mounting the Corresponding Host Directories
Posted by daiyuenlong110
1. Pull and start the image
docker run -d --name=logstash logstash:7.3.0
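Optionally, confirm the throwaway container is running before copying files out of it:
docker ps --filter name=logstash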
2. Create the files and directories to be mounted
# Create the pipeline config directory
mkdir -p /opt/docker/logstash/config/conf.d
# Copy the default files out of the running container to the host, for re-mounting after a restart
docker cp logstash:/usr/share/logstash/config /opt/docker/logstash/
docker cp logstash:/usr/share/logstash/data /opt/docker/logstash/
docker cp logstash:/usr/share/logstash/pipeline /opt/docker/logstash/
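A quick check that the copy produced the expected layout on the host (names follow the commands above):
ls /opt/docker/logstash
# expected: config  data  pipeline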
3. Edit logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
#path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs
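Because the config directory will be mounted, logstash.yml can be edited directly in the host copy made in step 2. Note that inside the container 127.0.0.1 refers to the container itself, so the monitoring hosts entry should point at the real Elasticsearch address:
# Edit the mounted copy on the host; changes take effect after the container (re)starts
vi /opt/docker/logstash/config/logstash.yml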
4. Grant permissions on the directories
chmod -R 777 /opt/docker/logstash
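chmod -R 777 is the simplest option; a narrower alternative, assuming the official image runs Logstash as uid/gid 1000 (worth verifying for your image), is to hand ownership to that user instead:
# Alternative to 777: give the tree to the container user (assumed uid/gid 1000)
chown -R 1000:1000 /opt/docker/logstash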
5. Start from the command line
# Remove the throwaway container from step 1 first, so the name can be reused
docker rm -f logstash

docker run \
--name logstash \
--restart=always \
-p 5044:5044 \
-p 9600:9600 \
-e ES_JAVA_OPTS="-Duser.timezone=Asia/Shanghai" \
-v /opt/docker/logstash/config:/usr/share/logstash/config \
-v /opt/docker/logstash/data:/usr/share/logstash/data \
-v /opt/docker/logstash/pipeline:/usr/share/logstash/pipeline \
-d logstash:7.3.0
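Once the container is up, follow its logs to confirm the mounted configuration was picked up and there are no permission errors:
docker logs -f logstash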
Explanation of the options used above:
- --name logstash: name the container logstash
- --restart=always: restart the container automatically
- -p 5044:5044: map container port 5044 (the Logstash input port, e.g. for Beats) to host port 5044
- -p 9600:9600: map container port 9600 (the monitoring API port) to host port 9600
- -e ES_JAVA_OPTS="-Duser.timezone=Asia/Shanghai": set the JVM time zone
- -v /opt/docker/logstash/config:/usr/share/logstash/config: mount the config directory from the host
- -v /opt/docker/logstash/data:/usr/share/logstash/data: mount the data directory from the host
- -v /opt/docker/logstash/pipeline:/usr/share/logstash/pipeline: mount the pipeline directory from the host
- -d logstash:7.3.0: run the container in the background and print the container ID
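With the container running, the monitoring API on port 9600 also serves as a quick health check; it returns node information as JSON:
curl http://localhost:9600/?pretty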
Create a jdbc.conf file under /opt/docker/logstash/config/conf.d:
input {
  jdbc {
    # Database connection
    jdbc_connection_string => "jdbc:mysql://xxx:3306/dianpingdb?serverTimezone=Asia/Shanghai&characterEncoding=utf8&useSSL=false"
    jdbc_user => "root"
    jdbc_password => "root"
    # JDBC driver jar used for the connection
    jdbc_driver_library => "/usr/share/logstash/config/jars/mysql-connector-java-5.1.34.jar"
    # Driver class matching the 5.1.x connector above (com.mysql.cj.jdbc.Driver is the Connector/J 8.x class)
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    codec => plain { charset => "UTF-8" }
    # Incremental tracking
    # Column to track
    #tracking_column => "update_time"
    # Where to store the last-run metadata
    #last_run_metadata_path => "/usr/share/logstash/config/lastrun/logstash_jdbc_last_run"
    # Time zone
    jdbc_default_timezone => "Asia/Shanghai"
    # Path to a SQL file
    # statement_filepath => ""
    # SQL statement
    statement => "select a.id,a.name,a.tags,concat(a.latitude,',',a.longitude) as location,a.remark_score,a.price_per_man,a.category_id,b.name as category_name,a.seller_id,c.remark_score as seller_remark_score,c.disabled_flag as seller_disabled_flag from shop a inner join category b on a.category_id = b.id inner join seller c on c.id = a.seller_id"
    # Whether to clear the last_run_metadata_path record; if true, every run re-imports all rows from scratch
    #clean_run => false
    # Cron-style schedule for repeating the import; the first field is the minute, default is once a minute
    schedule => "* * * * *"
  }
}
output {
  elasticsearch {
    # Elasticsearch hosts to import into
    hosts => ["xxx:9200","xxx:9200"]
    # Target index name
    index => "shop"
    # Type name (similar to a table name; deprecated in 7.x)
    document_type => "_doc"
    # Document id (similar to the table's primary key)
    document_id => "%{id}"
  }
  stdout {
    # Output events as JSON
    codec => json_lines
  }
}
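The pipeline references the MySQL connector jar under the mounted config directory, so it has to be placed there on the host. A minimal sketch, assuming the 5.1.34 jar from Maven Central is suitable:
mkdir -p /opt/docker/logstash/config/jars
# Fetch Connector/J 5.1.34 from Maven Central into the mounted directory
wget -O /opt/docker/logstash/config/jars/mysql-connector-java-5.1.34.jar \
  https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar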
- Then uncomment the following line in logstash.yml:
path.config: /usr/share/logstash/config/conf.d/*.conf
Restart the container.
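For example (the index check assumes the xxx Elasticsearch host from the output block is reachable):
docker restart logstash
# After the next scheduled run, check that documents landed in the shop index
curl "http://xxx:9200/shop/_count?pretty"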
Common issues
An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely :error_message=>"\"\xC2\" from ASCII-8BIT to UTF-8", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logsta
Cause: a character-encoding error. Fix it by setting the character set explicitly for each string column. In the jdbc block of jdbc.conf, add:
columns_charset => {
  "id" => "UTF-8"
  "nickname" => "UTF-8"
  "avatarName" => "UTF-8"
  "desc" => "UTF-8"
  "contacts" => "UTF-8"
  "province" => "UTF-8"
  "city" => "UTF-8"
  "district" => "UTF-8"
  "address" => "UTF-8"
}