InfluxDB 1.7.x high CPU usage on Raspberry Pi 3+
Posted: 2019-11-17 10:26:41

I have set up my Raspberry Pi 3+ to run Grafana (with InfluxDB and Telegraf) to collect network statistics for my home network. I read data from my SonicWall, a "smart managed" HP switch, and two Cisco switches. There are some metrics for ping times and packet loss, and the machine also hosts my UniFi access point controller.
This had been running for about six months. In the last few days, InfluxDB got sick: Grafana started showing 501 errors when it tried to query InfluxDB. I rebooted the Pi and it came back... but about 12 hours later I was stuck at 501 again.
I can see that InfluxDB is pegging the CPU. CPU usage was never high before, but now it sits at around 200% to 250% all the time. This is puzzling, because (as far as I know) there is no reason for the query load on the database to have changed.
I think it got worse when I upgraded to InfluxDB 1.7.7, but I don't know what version I was on before. It is also hard to gather any information from InfluxDB itself, because it pins the CPU immediately after starting and the host becomes unresponsive.
How can I diagnose InfluxDB's high CPU usage?
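For reference, InfluxDB 1.x ships a few introspection points that are worth trying whenever influxd is responsive enough to answer — a minimal sketch, nothing here is specific to this setup:

influx -execute 'SHOW DIAGNOSTICS'          # build info, uptime, runtime settings
influx -execute 'SHOW STATS'                # per-subsystem counters, including the TSM engine
curl -s http://localhost:8086/debug/vars    # Go runtime and engine metrics (heap in use, series, compactions)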
Here is htop output showing influxd using more than 350% of the CPU:
-------------------------------------------------------------------------------
2019-07-07 13:25:02
-------------------------------------------------------------------------------
1 [|||||||||||||||||||||||| 25.5%] Tasks: 36, 147 thr; 6 running
2 [|||||||||||||||||||||||||||| 29.5%] Load average: 3.43 3.84 3.78
3 [|||||||||||||||||||||||| 25.6%] Uptime: 00:47:19
4 [|||||||||||||||||||||||||||||||||||||||||||||||||| 54.7%]
Mem[|||||||||||||||||||||||||||||||||| 136M/926M]
Swp[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||99.9M/100.0M]
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
4306 influxdb 20 0 1019M 48344 30068 R 121. 5.1 0:07.04 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4310 influxdb 20 0 1019M 48344 30068 S 16.4 5.1 0:00.43 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4309 influxdb 20 0 1019M 48344 30068 S 11.8 5.1 0:00.34 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4311 influxdb 20 0 1019M 48344 30068 S 7.2 5.1 0:00.37 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
559 telegraf 20 0 832M 18420 7440 S 2.6 1.9 3:08.39 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
4270 pi 20 0 6372 3060 2072 R 2.6 0.3 0:01.06 htop
116 root 20 0 29168 3012 2780 S 2.6 0.3 0:41.88 /lib/systemd/systemd-journald
4307 influxdb 20 0 1019M 48344 30068 S 2.0 5.1 0:00.04 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4312 influxdb 20 0 1019M 48344 30068 S 1.3 5.1 0:00.24 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
1066 telegraf 20 0 832M 18420 7440 R 1.3 1.9 0:09.25 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1057 telegraf 20 0 832M 18420 7440 S 0.7 1.9 0:11.60 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
340 mongodb 20 0 232M 2492 1760 S 0.7 0.3 0:35.16 /usr/bin/mongod --config /etc/mongodb.conf
1234 telegraf 20 0 832M 18420 7440 S 0.7 1.9 0:07.61 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1239 telegraf 20 0 832M 18420 7440 S 0.7 1.9 0:08.03 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
451 mongodb 20 0 232M 2492 1760 S 0.7 0.3 0:14.52 /usr/bin/mongod --config /etc/mongodb.conf
345 root 20 0 23756 1036 556 S 0.7 0.1 0:11.47 /usr/sbin/rsyslogd -n
381 root 20 0 23756 1036 556 S 0.7 0.1 0:05.25 /usr/sbin/rsyslogd -n
659 unifi 20 0 1112M 20080 1832 S 0.7 2.1 0:15.78 unifi -cwd /usr/lib/unifi -home /usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ac
445 mongodb 20 0 232M 2492 1760 S 0.7 0.3 0:05.27 /usr/bin/mongod --config /etc/mongodb.conf
721 www-data 20 0 224M 384 332 S 0.7 0.0 0:01.90 /usr/sbin/apache2 -k start
684 www-data 20 0 224M 384 332 S 0.7 0.0 0:01.90 /usr/sbin/apache2 -k start
756 unifi 20 0 1112M 20080 1832 S 0.7 2.1 0:02.29 unifi -cwd /usr/lib/unifi -home /usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ac
765 grafana 20 0 924M 13820 3420 S 0.7 1.5 0:00.45 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid cfg:default.paths.logs=/var/log/
671 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:11.24 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
3627 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:01.78 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
740 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:07.68 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
663 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:20.88 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1081 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:14.85 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1248 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:12.42 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
666 root 20 0 916M 5004 1464 S 0.0 0.5 0:00.35 /usr/bin/containerd
4181 grafana 20 0 924M 13820 3420 S 0.0 1.5 0:00.03 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid cfg:default.paths.logs=/var/log/
1241 telegraf 20 0 832M 18420 7440 S 0.0 1.9 0:07.99 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
667 root 20 0 916M 5004 1464 S 0.0 0.5 0:00.43 /usr/bin/containerd
F1Help F2Setup F3SearchF4FilterF5Tree F6SortByF7Nice -F8Nice +F9Kill F10Quit
-------------------------------------------------------------------------------
2019-07-07 13:25:02
-------------------------------------------------------------------------------
1 [|||||||||||||||||||||||||| 28.0%] Tasks: 36, 147 thr; 3 running
2 [|||||||||||||||||||||||||||||||||||| 39.5%] Load average: 3.57 3.85 3.79
3 [|||||||||||||||||||||||||||||||||||||||||||||||||| 53.9%] Uptime: 00:47:45
4 [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 76.3%]
Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 310M/926M]
Swp[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||99.4M/100.0M]
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
4306 influxdb 20 0 1972M 314M 123M S 189. 34.0 1:08.90 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4316 influxdb 20 0 1972M 314M 123M R 99.5 34.0 0:14.78 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4313 influxdb 20 0 1972M 314M 123M S 35.6 34.0 0:09.87 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
4314 influxdb 20 0 1972M 314M 123M S 27.7 34.0 0:10.05 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
559 telegraf 20 0 832M 19016 7712 S 4.0 2.0 3:10.10 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
740 telegraf 20 0 832M 19016 7712 S 3.3 2.0 0:07.75 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
4270 pi 20 0 6372 3060 2072 R 2.0 0.3 0:01.62 htop
340 mongodb 20 0 232M 3192 2460 S 1.3 0.3 0:35.51 /usr/bin/mongod --config /etc/mongodb.conf
663 telegraf 20 0 832M 19016 7712 S 0.7 2.0 0:21.13 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
451 mongodb 20 0 232M 3192 2460 S 0.7 0.3 0:14.66 /usr/bin/mongod --config /etc/mongodb.conf
4307 influxdb 20 0 1972M 314M 123M S 0.7 34.0 0:00.20 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
445 mongodb 20 0 232M 3192 2460 S 0.7 0.3 0:05.32 /usr/bin/mongod --config /etc/mongodb.conf
1248 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:12.55 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1250 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:12.64 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
664 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:09.70 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1241 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:08.22 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
542 root 20 0 929M 7600 1052 S 0.0 0.8 0:04.49 /usr/bin/dockerd -H unix://
3131 pi 20 0 11664 920 644 S 0.0 0.1 0:00.30 sshd: pi@pts/0
764 unifi 20 0 1112M 20212 1832 R 0.0 2.1 0:04.79 unifi -cwd /usr/lib/unifi -home /usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ac
2910 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:04.93 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1057 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:11.69 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1234 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:07.79 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1236 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:13.93 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
671 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:11.35 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
1066 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:09.36 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
116 root 20 0 29168 3012 2780 S 0.0 0.3 0:42.06 /lib/systemd/systemd-journald
1239 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:08.07 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
3627 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:01.80 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
676 root 20 0 929M 7600 1052 S 0.0 0.8 0:00.47 /usr/bin/dockerd -H unix://
659 unifi 20 0 1112M 20212 1832 S 0.0 2.1 0:15.84 unifi -cwd /usr/lib/unifi -home /usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre -cp /usr/share/java/commons-daemon.jar:/usr/lib/unifi/lib/ac
1081 telegraf 20 0 832M 19016 7712 S 0.0 2.0 0:14.87 /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d
345 root 20 0 23756 1036 556 S 0.0 0.1 0:11.52 /usr/sbin/rsyslogd -n
543 grafana 20 0 924M 13820 3420 S 0.0 1.5 0:06.82 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid cfg:default.paths.logs=/var/log/
In this state I can't even run the influx CLI:
$ influx
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection refused
Please check your connection settings and ensure 'influxd' is running.
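A quick sanity check at that point — a sketch; the unit name and port assume the stock Raspbian package and default config:

sudo systemctl status influxdb --no-pager   # is the service up, or crash-looping?
sudo ss -tlnp | grep 8086                   # is anything actually listening on the HTTP port?
free -m                                     # how much RAM and swap is left?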
I found that influxdb now uses journald, so the logs are read with sudo journalctl -u influxdb.service. I have updated the question with what I've found so far.
It turns out influxdb doesn't write a log file; it logs to the journal.
Dumping the journal shows the service starting up, beginning some compactions, and then running out of memory. When that happens it shuts down... and then starts again.
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.096464Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0000 op_name=ts
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.096497Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0000 op_name=ts
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.096198Z lvl=info msg="TSM compaction (start)" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.097520Z lvl=info msg="Beginning compaction" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_na
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.097611Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_name=ts
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.097652Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_name=ts
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.097691Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_name=ts
Jul 14 02:31:43 twang influxd[4139]: ts=2019-07-14T01:31:43.097726Z lvl=info msg="Compacting file" log_id=0GcXWe5l000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcXZGU0001 op_name=ts
:
:
:
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.256884Z lvl=info msg="TSM compaction (start)" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.288481Z lvl=info msg="Beginning compaction" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_na
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.290445Z lvl=info msg="Compacting file" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_name=ts
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.292220Z lvl=info msg="Compacting file" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_name=ts
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.293889Z lvl=info msg="Compacting file" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_name=ts
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.295738Z lvl=info msg="Compacting file" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_name=ts
Jul 14 01:55:08 twang influxd[1756]: ts=2019-07-14T00:55:08.297635Z lvl=info msg="Compacting file" log_id=0GcVQfaG000 engine=tsm1 tsm1_strategy=full tsm1_optimize=false trace_id=0GcVTIt0000 op_name=ts
Jul 14 01:55:11 twang influxd[1756]: [httpd] ::1 - username [14/Jul/2019:01:55:10 +0100] "POST /write?consistency=any&db=telegraf HTTP/1.1" 204 0 "-" "telegraf" 07902d7a-a5d2-11e9-8001-b827eb6b4e27 11
Jul 14 01:55:11 twang influxd[1756]: [httpd] ::1 - - [14/Jul/2019:01:55:10 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.11.1" 079a21ac-a5d2-11e9-8002-b827eb6b4e27 1683504
Jul 14 01:55:12 twang influxd[1756]: [httpd] ::1 - username [14/Jul/2019:01:55:11 +0100] "POST /write?consistency=any&db=telegraf HTTP/1.1" 204 0 "-" "telegraf" 08451343-a5d2-11e9-8003-b827eb6b4e27 17
Jul 14 01:55:12 twang influxd[1756]: [httpd] ::1 - - [14/Jul/2019:01:55:11 +0100] "POST /write?db=telegraf HTTP/1.1" 204 0 "-" "Telegraf/1.11.1" 089bdbca-a5d2-11e9-8004-b827eb6b4e27 1182542
Jul 14 01:55:17 twang influxd[1756]: runtime: out of memory: cannot allocate 8192-byte block (540016640 in use)
Jul 14 01:55:17 twang influxd[1756]: fatal error: out of memory
Jul 14 01:55:17 twang influxd[1756]: runtime: out of memory: cannot allocate 8192-byte block (540016640 in use)
Jul 14 01:55:17 twang influxd[1756]: fatal error: out of memory
Now I have to figure out how to get myself out of this mess. Any guesses?
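Since the crash is an out-of-memory during a full TSM compaction on a 1 GB Pi with 100 MB of swap, the knobs most likely to matter live in the [data] section of /etc/influxdb/influxdb.conf. A sketch only — these options exist in 1.7, but the values below are guesses for this hardware, not tested recommendations:

[data]
  # default 0 = half the CPU cores; a single compaction at a time is gentler on RAM
  max-concurrent-compactions = 1
  # shrink the write cache so it snapshots to disk before memory runs out
  cache-max-memory-size = "200m"
  cache-snapshot-memory-size = "25m"
  # keep the series index on disk instead of in RAM
  # (existing shards need a rebuild with influx_inspect buildtsi to benefit)
  index-version = "tsi1"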
Comments:

How often do you write to influxdb? You may have to eliminate the possible problems one at a time. Maybe start by backing up the database and starting over with a fresh one; if it is still slow, perhaps your program is hammering it with too many writes? Another possibility is that it is being overwhelmed by too many reads. Check the influxdb logs to see what is driving the load.

@SamuelToh, nothing is reading except Grafana when I look at a dashboard; Telegraf is reading status from my network devices. It is configured to run once a minute against 5 devices, with maybe 40 data points per device. That is only about 200 data points per minute.

You may have a problem with high-cardinality data (= all the tag values no longer fit in memory). Check the cardinality of each tag you use.
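The cardinality suggestion in the last comment can be checked straight from the influx shell — a sketch, where "telegraf" is the database name seen in the write log above and "host" is just an example tag key:

influx -database telegraf
> SHOW SERIES CARDINALITY
> SHOW MEASUREMENT CARDINALITY
> SHOW TAG VALUES CARDINALITY WITH KEY = "host"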
Answer 1:

I did the same as mentioned in djsomi's reply, but for InfluxDB v1.8 I ended up:

- lowering influxdb's maximum CPU usage (see the docs)
- increasing the health-check sleep
I found the exec script that the service uses with
sudo systemctl status influxdb
then edited the script
sudo vim /usr/lib/influxdb/scripts/influxd-systemd-start.sh
adding the lines marked below:
GOMAXPROCS=1 /usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS & ## <- added GOMAXPROCS=1
...
while [ "$result" != "200" ]; do
sleep 6 ## <- increased here
result=$(curl -k -s -o /dev/null $url -w %{http_code})
max_attempts=$(($max_attempts-1))
if [ $max_attempts -le 0 ]; then
echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
exit 1
fi
done
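Editing the packaged start script works, but a package upgrade can overwrite it. An alternative sketch of the same GOMAXPROCS idea as a systemd drop-in (this assumes the unit passes its environment through to influxd, which the stock wrapper script does):

sudo systemctl edit influxdb
# in the editor that opens, add:
[Service]
Environment=GOMAXPROCS=1
# then apply and restart:
sudo systemctl daemon-reload
sudo systemctl restart influxdb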
Answer 2:

For some strange reason, my solution was to stop the service, start influxd manually, let it run for a while, and then the high CPU load just went away. :D
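To try the same thing by hand — a sketch; running the daemon in the foreground as the influxdb user keeps the ownership of the data files correct:

sudo systemctl stop influxdb
sudo -u influxdb influxd -config /etc/influxdb/influxdb.conf
# let it finish its compactions, then Ctrl-C and hand control back to systemd:
sudo systemctl start influxdb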