Loki Distributed Deployment in Practice (7): Installing Promtail
Promtail is Loki's log-collection agent. It uses a Prometheus-style service-discovery mechanism, which arguably makes it the most cloud-native log agent around. Generate the promtail configuration file:
Configuration reference: https://grafana.com/docs/loki/latest/clients/promtail/configuration/
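A minimal sketch of a promtail configuration is shown below; the Loki push URL, the positions path, and the port are assumptions to adapt to the actual deployment, not the article's original file:
server:
  http_listen_port: 9080
positions:
  filename: /run/promtail/positions.yaml
clients:
  # Push endpoint of the Loki distributor / gateway; adjust the service name to your deployment
  - url: http://loki-distributor:3100/loki/api/v1/push
scrape_configs: []   # filled in by the kubernetes_sd_config examples further below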
====================================================================================================================================
Notes:
The docker stage matches and parses log lines in this format:
It automatically extracts the time into the log timestamp, puts the stream into a label, and places the log field into the output. Because Docker wraps your application logs this way, this stage unwraps them so that only the log content flows through the rest of the pipeline.
The docker stage is just a wrapper around the following definition:
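According to the Loki documentation, the docker stage behaves roughly like the following pipeline (reproduced here from memory, so treat it as a sketch rather than the authoritative definition):
pipeline_stages:
  - json:
      expressions:
        output: log
        stream: stream
        timestamp: time
  - labels:
      stream:
  - timestamp:
      source: timestamp
      format: RFC3339Nano
  - output:
      source: output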
kubernetes_sd_config:
The kubernetes_sd_config discovery rules are the same as in Prometheus, with the same set of roles (node, service, pod, endpoints, ingress):
For all targets discovered directly from the endpoints list (those not additionally inferred from the underlying pods), the following labels are attached:
__meta_kubernetes_endpoint_hostname: Hostname of the endpoint.
__meta_kubernetes_endpoint_node_name: Name of the node hosting the endpoint.
__meta_kubernetes_endpoint_ready: Set to true or false for the endpoint’s ready state.
__meta_kubernetes_endpoint_port_name: Name of the endpoint port.
__meta_kubernetes_endpoint_port_protocol: Protocol of the endpoint port.
__meta_kubernetes_endpoint_address_target_kind: Kind of the endpoint address target.
__meta_kubernetes_endpoint_address_target_name: Name of the endpoint address target.
Note: if an endpoint belongs to a service, all labels from the role: service discovery are attached as well.
Note: for all targets backed by a pod, all labels from the role: pod discovery are attached as well.
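As an illustration (not the article's original config), a typical role: pod scrape job that turns these discovered __meta_* labels into Loki labels and a file path to tail might look like this; the label names on the right-hand side are choices, not requirements:
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
      - source_labels: [__meta_kubernetes_pod_container_name]
        target_label: container
      # Build the path to the container log files on the node from the pod UID and container name
      - source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
        separator: /
        target_label: __path__
        replacement: /var/log/pods/*$1/*.log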
Change to hostNetwork:
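A minimal sketch of that change in the promtail DaemonSet pod spec (the dnsPolicy line is the usual companion setting so in-cluster DNS still resolves; adjust to your own manifest):
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet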
Promtail exposes several URLs that can be used to understand how its service discovery works:
View the default scrape jobs:
View the configuration details:
Dry-run Promtail:
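For example (assuming promtail's http_listen_port is 9080 and the config lives at /etc/promtail/promtail.yaml; both are assumptions), the /targets and /service-discovery pages show the active targets and the discovered labels before and after relabeling, and --dry-run prints what would be pushed to Loki instead of sending it:
# curl http://127.0.0.1:9080/targets
# curl http://127.0.0.1:9080/service-discovery
# promtail --dry-run --config.file=/etc/promtail/promtail.yaml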
Note: additional labels can also be added, as shown in the sketch below.
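For instance (a hypothetical static_configs job; the env label and the path are only illustrative), extra labels can be attached directly in the scrape config:
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          env: prod            # any extra static label
          __path__: /var/log/*.log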
In practice promtail performed better than filebeat; promtail also collected all of the .log files, whereas filebeat only collected a subset:
Loki Distributed Deployment in Practice: Cassandra
Series articles
1. Overview
Loki supports filesystem, object storage, and NoSQL backends. Since object storage generally means a public cloud, we use Cassandra as the store for now; in the current implementation it can hold both the index and the chunks.
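For context, a hedged sketch of the Loki storage settings that point at Cassandra is shown below; the addresses value matches the nodes Service created later in this article, while the keyspace name, credentials, schema version, and dates are assumptions to adapt:
storage_config:
  cassandra:
    addresses: cassandra-cassandra-dc1-dc1-nodes
    port: 9042
    keyspace: loki
    auth: true
    username: cassandra
    password: cassandra
schema_config:
  configs:
    - from: 2020-11-01
      store: cassandra
      object_store: cassandra
      schema: v11
      index:
        prefix: index_
        period: 168h
      chunks:
        prefix: chunk_
        period: 168h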
2. Background
Cassandra is an open-source, distributed, masterless, elastically scalable, highly available, fault-tolerant, tunably consistent, column-oriented NoSQL database.
2.1 Architecture hierarchy
Cluster - Data center(s) - Rack(s) - Server(s) - Node (more accurately, a vnode)
- Node: a single running Cassandra instance
- Rack: a collection of nodes
- DataCenter: a collection of racks
- Cluster: the set of all nodes that together own a complete token ring
2.2 Cassandra consistency
Apache Cassandra[1] borrows a number of techniques from Amazon's Dynamo distributed key-value store. Every node in a Dynamo-style system has three main components:
- Request coordination over a partitioned dataset: when a client connects to a node and issues a read or write, that node acts as the bridge between the client application and the nodes that own the data (the coordinator), using the cluster configuration to determine which nodes in the ring should handle the request
- Ring membership and failure detection
- A local persistence (storage) engine
Replication strategies
Note: all production deployments should use NetworkTopologyStrategy; SimpleStrategy is only useful for test clusters where the data-center layout is not yet known.
- NetworkTopologyStrategy
- SimpleStrategy
Consistency levels
- ONE: a single replica must respond
- TWO: two replicas must respond
- THREE: three replicas must respond
- QUORUM: a majority of replicas (n / 2 + 1) must respond
- ALL: all replicas must respond
- LOCAL_QUORUM: a majority of replicas in the local data center (whichever data center the coordinator is in) must respond
- EACH_QUORUM: a majority of replicas in each data center must respond
- LOCAL_ONE: a single replica must respond; in a multi-data-center cluster this also guarantees that read requests are not sent to replicas in a remote data center
- ANY: a single replica may respond, or the coordinator may store a hint. If a hint is stored, the coordinator will later try to replay it and deliver the mutation to the replicas. Only writes accept this consistency level.
Replication factor and consistency level
Cassandra offers tunable consistency, letting us choose the balance we want between consistency and availability, because the client controls how many replicas must be reached before an operation unblocks. The replication factor is the knob on the storage side: it is the number of nodes an update (any insert, delete, or update) is propagated to across the cluster, so it determines how much performance you are willing to trade for redundancy.
On top of that, every client operation carries a consistency level, which decides how many replicas must acknowledge a write for it to count as successful, or how many replicas must be read for a read to count as successful. You can set the consistency level equal to the replication factor for a high level of consistency, at the cost of synchronous blocking: an update only returns once every replica has been written. In practice Cassandra is rarely used that way; with a consistency level lower than the replication factor, writes can still succeed even when some nodes are down.
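A small worked example (the keyspace name is hypothetical and the numbers are only illustrative): with replication_factor = 3, QUORUM is 2, so reading and writing at QUORUM guarantees the read set and write set overlap on at least one replica (2 + 2 > 3); in cqlsh this could look like:
# cqlsh> CREATE KEYSPACE demo WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 3 };
# cqlsh> CONSISTENCY QUORUM;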
3. Installation
3.1 Download the package
# wget -O cassandra-operator-v7.1.0.tar.gz https://github.com/instaclustr/cassandra-operator/archive/v7.1.0.tar.gz
# tar -zxf cassandra-operator-v7.1.0.tar.gz
# cd cassandra-operator-7.1.0/
- Deploy the Cassandra CRDs [2]
# kubectl apply -f deploy/crds.yaml
customresourcedefinition.apiextensions.k8s.io/cassandrabackups.cassandraoperator.instaclustr.com created
customresourcedefinition.apiextensions.k8s.io/cassandraclusters.cassandraoperator.instaclustr.com created
customresourcedefinition.apiextensions.k8s.io/cassandradatacenters.cassandraoperator.instaclustr.com created
3.2 Deploy the Cassandra Operator
# vi deploy/bundle.yaml
# Add namespace: grafana to every resource in the file, since it has to run alongside Loki here. The PodSecurityPolicy is cluster-scoped and does not need one.
containers:
- name: cassandra-operator
#image: "gcr.io/cassandra-operator/cassandra-operator:latest"
image: "ops-harbor.hupu.io/k8s/cassandra-operator:v7.1.0"
# kubectl apply -f deploy/bundle.yaml
serviceaccount/cassandra created
role.rbac.authorization.k8s.io/cassandra created
rolebinding.rbac.authorization.k8s.io/cassandra created
podsecuritypolicy.policy/cassandra created
serviceaccount/cassandra-performance created
role.rbac.authorization.k8s.io/cassandra-performance created
rolebinding.rbac.authorization.k8s.io/cassandra-performance created
podsecuritypolicy.policy/cassandra-performance created
configmap/cassandra-operator-default-config created
deployment.apps/cassandra-operator created
podsecuritypolicy.policy/cassandra-operator created
rolebinding.rbac.authorization.k8s.io/cassandra-operator created
role.rbac.authorization.k8s.io/cassandra-operator created
serviceaccount/cassandra-operator created
# kubectl get pod -n grafana -l name=cassandra-operator
NAME READY STATUS RESTARTS AGE
cassandra-operator-6f685694c5-l7m27 1/1 Running 0 40s
View the logs:
# kubectl logs -f -n grafana $(kubectl get pod -n grafana -l name=cassandra-operator -o name)
3.3 Deploy the Cassandra Cluster
Cassandra Cluster reference 1 [3]
Cassandra Cluster reference 2 [4]
The Cassandra operator supports mounting a custom ConfigMap into the cassandra container via a ConfigMapVolumeSource:
Note: all Cassandra and JVM configuration lives under /etc/cassandra inside the container
$ ls -l /etc/cassandra/
total 48
-rw-r--r-- 1 cassandra cassandra 19 Nov 9 13:29 cassandra-env.sh
drwxr-xr-x 2 cassandra cassandra 4096 Nov 28 05:39 cassandra-env.sh.d
-rw-r--r-- 1 cassandra cassandra 19 Nov 9 13:29 cassandra-exporter.conf
-rw-r--r-- 1 cassandra cassandra 70 Nov 28 05:39 cassandra-rackdc.properties
drwxr-xr-x 2 cassandra cassandra 4096 Nov 28 05:39 cassandra.yaml.d
-rw-r--r-- 1 cassandra cassandra 82 Nov 9 13:29 jvm-jmx.options
-rw-r--r-- 1 cassandra cassandra 143 Nov 9 13:29 jvm-operator.options
-rw-r--r-- 1 cassandra cassandra 600 Nov 9 13:29 jvm.options
drwxr-xr-x 2 cassandra cassandra 4096 Nov 28 05:39 jvm.options.d
-rw-r--r-- 1 cassandra cassandra 1239 Nov 9 13:29 logback-tools.xml
-rw-r--r-- 1 cassandra cassandra 538 Nov 9 13:29 logback.xml
drwxr-xr-x 2 cassandra cassandra 4096 Nov 9 13:31 logback.xml.d
$ ls -l /etc/cassandra/cassandra-env.sh.d
total 4
-rw-r--r-- 1 cassandra cassandra 130 Nov 28 05:39 001-cassandra-exporter.sh
$ ls -l /etc/cassandra/cassandra.yaml.d
total 16
-rw-r--r-- 1 cassandra cassandra 187 Nov 9 13:29 001-directories.yaml
-rw-r--r-- 1 cassandra cassandra 404 Nov 28 05:39 001-operator-overrides.yaml
-rw-r--r-- 1 cassandra cassandra 59 Nov 28 05:39 004-broadcast_rpc_address.yaml
-rw-r--r-- 1 cassandra cassandra 29 Nov 28 05:39 cassandra-config.yaml
$ ls -l /etc/cassandra/jvm.options.d
total 4
-rw-r--r-- 1 cassandra cassandra 416 Nov 28 05:39 001-jvm-memory-gc.options
$ ls -l /etc/cassandra/logback.xml.d
total 0
3.4 Customize the Cassandra configuration
# vi cassandra-config.yaml
# Idle connection timeout; disabled by default
#native_transport_idle_timeout_in_ms: 60000
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 30000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 30000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 30000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 30000
# How long the coordinator should keep retrying a CAS operation that contends with other proposals for the same row
cas_contention_timeout_in_ms: 30000
# How long the coordinator should wait for truncates to complete (this can be longer, because unless auto_snapshot is disabled we need to flush first so a snapshot can be taken before the data is removed)
truncate_request_timeout_in_ms: 60000
# Default timeout for other, miscellaneous operations
request_timeout_in_ms: 30000
# Threshold above which queries are logged as slow
slow_query_log_timeout_in_ms: 5000
# Commented out by default. Enables exchanging operation timeout information between nodes so request timeouts are measured accurately. If disabled, replicas assume the coordinator forwarded the request to them instantly, which under overload means a lot of extra time is wasted processing requests that have already timed out.
# Warning: it is generally assumed that users run NTP on their cluster and that clocks are reasonably in sync, since this is required for the overall correctness of last-write-wins.
#cross_node_timeout: true
# Commented out by default.
#internode_application_send_queue_capacity_in_bytes: 4194304
# Maximum memory for the sstable chunk cache and buffer pool. 32MB of this is reserved for the buffer pool; the rest is used as a cache for uncompressed sstable chunks. Defaults to the smaller of 1/4 of the heap or 512MB. The pool is allocated off-heap, in addition to the memory allocated for the heap. The cache also has an on-heap overhead of roughly 128 bytes per chunk (about 0.2% of the reserved size with the default 64k chunk size). Memory is only allocated when needed.
file_cache_size_in_mb: 2048
# kubectl create configmap cassandra-new --from-file=cassandra-config.yaml -n grafana
- Method 1:
# vi cassandra-operator-7.1.0/examples/example-datacenter.yaml
spec:
userConfigMapVolumeSource:
# the name of the ConfigMap
name: cassandra-new
# ConfigMap keys -> file paths (relative to /etc/cassandra)
items:
- key: cassandra-config.yaml
path: cassandra.yaml.d/cassandra-config.yaml
- Method 2:
# kubectl edit CassandraDataCenter -n grafana cassandra-dc1
spec:
userConfigMapVolumeSource:
# the name of the ConfigMap
name: cassandra-new
# ConfigMap keys -> file paths (relative to /etc/cassandra)
items:
- key: cassandra-config.yaml
path: cassandra.yaml.d/cassandra-config.yaml
Note: JVM options can also be customized by providing an options file under jvm.options.d/gc.options. The default JVM options:
# kubectl get cm -n grafana cassandra-cassandra-dc1-dc1-operator-config -o yaml
data:
jvm_options_d_001_jvm_memory_gc_options: |
-Xms1073741824
-Xmx1073741824
-Xmn4194304
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSWaitDuration=10000
-XX:+CMSParallelInitialMarkEnabled
-XX:+CMSEdenChunksRecordAlways
-XX:+CMSClassUnloadingEnabled
-XX:+HeapDumpOnOutOfMemoryError
-XX:+CrashOnOutOfMemoryError
Note: this replaces the defaults globally rather than adding to them
# vi gc.options
-XX:+UseG1GC
-XX:ParallelGCThreads=8
-XX:MaxGCPauseMillis=200
#-Xms8g
#-Xmx8g
#-Xmn4g
-XX:+UseContainerSupport
#-XX:InitialRAMPercentage=15.0
#-XX:MinRAMPercentage=15.0
#-XX:MaxRAMPercentage=75.0
# kubectl delete configmap cassandra-new -n grafana
# kubectl create configmap cassandra-new --from-file=gc.options --from-file=cassandra-config.yaml -n grafana
# vi cassandra-operator-7.1.0/examples/example-datacenter.yaml
spec:
userConfigMapVolumeSource:
# the name of the ConfigMap
name: cassandra-new
# ConfigMap keys -> file paths (relative to /etc/cassandra)
items:
- key: cassandra-config.yaml
path: cassandra.yaml.d/cassandra-config.yaml
- key: gc.options
path: jvm.options.d/gc.options
Modify the CassandraDataCenter example:
# vi examples/example-datacenter.yaml
apiVersion: cassandraoperator.instaclustr.com/v1alpha1
kind: CassandraDataCenter
metadata:
name: cassandra-dc1
namespace: grafana
labels:
app: cassandra
datacenter: dc1
cluster: cassandra-dc1
spec:
initImage: ops-harbor.hupu.io/base/alpine:v3.10
# Metrics are exposed via the cassandra-operator-metrics ServiceMonitor; cassandra-exporter is built into the image
prometheusSupport: true
optimizeKernelParams: true
serviceAccountName: cassandra-performance
nodes: 3
racks:
- name: rack1
# Effectively acts as a nodeSelector
labels:
failure-domain.beta.kubernetes.io/zone: cn-hangzhou-g
# Tolerations
tolerations:
- key: "app"
operator: "Equal"
value: "cassandra"
effect: "NoSchedule"
# Affinity
affinity:
# Pod anti-affinity
podAntiAffinity:
# Required (hard) pod anti-affinity
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cassandra
topologyKey: "kubernetes.io/hostname"
# Preferred (soft) pod anti-affinity
#preferredDuringSchedulingIgnoredDuringExecution:
#- podAffinityTerm:
# labelSelector:
# matchExpressions:
# - key: app
# operator: In
# values:
# - cassandra
# topologyKey: kubernetes.io/hostname
# weight: 100
# Node affinity
#nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: system
# operator: NotIn
# values:
# - management
# - key: app
# operator: In
# values:
# - cassandra
# #preferredDuringSchedulingIgnoredDuringExecution:
# #- weight: 60
# # preference:
# # matchExpressions:
# # - {key: zone, operator: In, values: ["shanghai2", "shanghai3", "shanghai4"]}
# #- weight: 40
# # preference:
# # matchFields:
# # - {key: ssd, operator: Exists, values: ["sanxing", "dongzhi"]}
#racks:
# - name: "west1-b"
# labels:
# failure-domain.beta.kubernetes.io/zone: europe-west1-b
# - name: "west1-c"
# labels:
# failure-domain.beta.kubernetes.io/zone: europe-west1-c
# - name: "west1-a"
# labels:
# failure-domain.beta.kubernetes.io/zone: europe-west1-a
#cassandraImage: "gcr.io/cassandra-operator/cassandra-3.11.6:latest"
cassandraImage: "ops-harbor.hupu.io/k8s/cassandra-3.11.9:latest"
#sidecarImage: "gcr.io/cassandra-operator/instaclustr-icarus:latest"
sidecarImage: "ops-harbor.hupu.io/k8s/instaclustr-icarus:latest"
imagePullPolicy: Always
imagePullSecrets:
- name: regcred
# Has no effect; upstream has commented this field out: https://github.com/liwang0513/cassandra-operator/commit/b4f8b596013e5cbeaf222957b5aaa9b52a91efd7
podManagementPolicy: Parallel
# When used to store Loki logs, memory regularly exceeded 2GB and caused OOMs; average usage is around 4GB of memory and roughly 1.5 CPU cores
readinessProbe:
exec:
command:
- /usr/bin/cql-readiness-probe
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: 32Gi
cpu: "16"
requests:
memory: 4Gi
cpu: "2"
sidecarResources:
limits:
memory: 512Mi
requests:
memory: 512Mi
dataVolumeClaimSpec:
storageClassName: alicloud-disk-efficiency-cn-hangzhou-g
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2048Gi
# Mount a custom ConfigMap into the cassandra container via a ConfigMapVolumeSource in order to override cassandra.yaml parameters
userConfigMapVolumeSource:
# the name of the ConfigMap
name: cassandra-new
type: array
# ConfigMap keys -> file paths (relative to /etc/cassandra)
items:
- key: cassandra-config.yaml
path: cassandra.yaml.d/cassandra-config.yaml
- key: gc.options
path: jvm.options.d/gc.options
# userSecretVolumeSource:
# secretName: test-cassandra-dc-ssl
#
# sidecarSecretVolumeSource:
# secretName: test-cassandra-dc-ssl-sidecar
cassandraAuth:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
roleManager: CassandraRoleManager
# operatorLabels:
# prometheusService:
# cassandratestdclabel: testdc
# nodesService:
# mynodesservicelabel: labelvalue1
# statefulSet:
# mystatefullabel: labelvalue2
# podTemplate:
# mypodlabel: label1
# myanotherpod: label2
#
# operatorAnnotations:
# prometheusService:
# p1 : pv1
# nodesService:
# n1: nv1
# n2: nv2
# statefulSet:
# s1: sv1
# s2: sv2
# podTemplate:
# pt1: ptv1
# pt2: ptv2
# Needed to run on AKS
fsGroup: 999
# kubectl apply -f examples/example-datacenter.yaml
cassandradatacenter.cassandraoperator.instaclustr.com/cassandra-dc1 created
# kubectl get pod -n grafana -l cassandra-operator.instaclustr.com/cluster=cassandra-dc1
NAME READY STATUS RESTARTS AGE
cassandra-cassandra-dc1-dc1-rack1-0 2/2 Running 0 16m
cassandra-cassandra-dc1-dc1-rack1-1 2/2 Running 0 14m
cassandra-cassandra-dc1-dc1-rack1-2 2/2 Running 0 12m
# kubectl get svc -n grafana |grep cassandra
cassandra-cassandra-dc1-dc1-nodes ClusterIP None <none> 9042/TCP,7199/TCP 176m
cassandra-cassandra-dc1-dc1-prometheus ClusterIP None <none> 9500/TCP 176m
cassandra-cassandra-dc1-dc1-seeds ClusterIP None <none> 7000/TCP 176m
cassandra-operator-metrics ClusterIP 172.21.5.56 <none> 8383/TCP,8686/TCP 4h5m
3.5 Verify cluster health
# kubectl exec cassandra-cassandra-dc1-dc1-rack1-0 -c cassandra -n grafana -- nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.41.180.90 85.66 KiB 256 67.9% 0c59d821-e2ad-4040-98e5-363f7029f3ca rack1
UN 10.41.182.254 81.05 KiB 256 66.3% 3fb8f3f7-2305-4829-99bc-bc62ce30af56 rack1
UN 10.41.190.194 94.99 KiB 256 65.7% 4f70b0e2-6edb-4b51-8472-fe497fa22b88 rack1
3.6 Query some test data
# kubectl exec cassandra-cassandra-dc1-dc1-rack1-0 -c cassandra -n grafana -- cqlsh -e "SELECT now() FROM system.local;" cassandra-cassandra-dc1-dc1-nodes -ucassandra -pcassandra
system.now()
--------------------------------------
c05823f0-2e08-11eb-a1bc-4ddca19f6dc1
(1 rows)
3.7 Common Cassandra operations
- View cluster information
# describe cluster;
Cluster: cassandra-dc1
Partitioner: Murmur3Partitioner
- List all keyspaces
# describe keyspaces;
system_schema system_auth system loki system_distributed system_traces
- View a keyspace definition
# describe keyspace loki;
- Create a keyspace
# CREATE KEYSPACE loki WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
- Replication factor: the number of nodes a new piece of data is copied to. An odd number is common; for example, our project uses replication_factor=3.
- Replica placement strategy: the replication strategy. The default is SimpleStrategy; for a single-rack, single-data-center setup SimpleStrategy is fine. For a multi-data-center strategy:
# CREATE KEYSPACE loki WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 1 };
- Drop a keyspace
# DROP KEYSPACE loki;
- View details:
# SELECT * FROM system_schema.keyspaces;
keyspace_name | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
system_auth | True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
system_schema | True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
system_distributed | True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system | True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
loki | True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
system_traces | True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
(6 rows)
- Change the replication strategy (note: the command below is what triggers error 2 in section 4.2, because replication_factor is not a valid option for NetworkTopologyStrategy)
# ALTER KEYSPACE loki WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'replication_factor': '3'};
- Use a keyspace
# use loki;
- List all tables
# describe tables;
# desc tables;
- View a table's schema
# describe columnfamily abc;
# desc table stocks
- Create a table
# create table abc ( id int primary key, name varchar, age int );
- Drop a table
#drop table user;
4. Troubleshooting
4.1 Error 1
# kubectl logs -f cassandra-cassandra-dc1-dc1-rack1-0 -c cassandra
INFO [main] Server.java:159 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO [main] CassandraDaemon.java:564 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO [main] CassandraDaemon.java:650 Startup complete
WARN [OptionalTasks:1] CassandraRoleManager.java:377 CassandraRoleManager skipped default role setup: some nodes were not ready
INFO [OptionalTasks:1] CassandraRoleManager.java:416 Setup task failed with error, rescheduling
WARN [OptionalTasks:1] CassandraRoleManager.java:377 CassandraRoleManager skipped default role setup: some nodes were not ready
INFO [OptionalTasks:1] CassandraRoleManager.java:416 Setup task failed with error, rescheduling
Cause: when redeploying, the nodes cannot establish cluster membership, because the StatefulSet creates pods one at a time instead of in parallel
# kubectl get pod -n grafana
NAME READY STATUS RESTARTS AGE
cassandra-cassandra-dc1-dc1-rack1-0 1/2 Running 0 6m12s
cassandra-operator-6f685694c5-l7m27 1/1 Running 0 4d7h
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-cassandra-cassandra-dc1-dc1-rack1-0 Bound disk-29d3cfdd-dc5a-457e-bae1-6b72dcc34c37 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h12m
data-volume-cassandra-cassandra-dc1-dc1-rack1-1 Bound disk-9a8621f6-3f8b-428e-b69d-72cde007c7cf 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h6m
data-volume-cassandra-cassandra-dc1-dc1-rack1-2 Bound disk-1971e0c4-fdf5-4adf-85fa-c1e9e53b7658 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h5m
data-volume-cassandra-cassandra-dc1-dc1-rack1-3 Bound disk-5be7e523-a3cc-4b32-9149-6a3ab5e44ed2 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h3m
data-volume-cassandra-cassandra-dc1-dc1-rack1-4 Bound disk-4a4d235b-871f-45ff-be57-c4ed7c9b4ad2 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h2m
data-volume-cassandra-cassandra-dc1-dc1-rack1-5 Bound disk-b9c45b99-f169-413b-b8dc-65b97d205264 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 4h
data-volume-cassandra-cassandra-dc1-dc1-rack1-6 Bound disk-c2bf3596-a986-4099-b746-316ddaf36c8f 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 72m
data-volume-cassandra-cassandra-dc1-dc1-rack1-7 Bound disk-89fae7ec-9f5a-4b2f-9191-631f66ac71b8 2Ti RWO alicloud-disk-efficiency-cn-hangzhou-g 57m
Looking at the upstream source, it appears the podManagementPolicy: Parallel behavior has been commented out:
@@ -242,7 +241,7 @@ private V1beta2StatefulSet generateStatefulSet(DataCenterKey dataCenterKey, V1Co
)
.spec(new V1beta2StatefulSetSpec()
.serviceName("cassandra")
.podManagementPolicy("Parallel")
//.podManagementPolicy("Parallel")
.replicas(dataCenter.getSpec().getReplicas().intValue())
.selector(new V1LabelSelector().putMatchLabelsItem("cassandra-datacenter", dataCenterKey.name))
.template(new V1PodTemplateSpec()
For now the only workaround is to delete the PVCs and recreate the cassandradatacenter (a sketch follows below). An issue has also been filed: https://github.com/instaclustr/cassandra-operator/issues/397
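A sketch of that workaround, using the resource and PVC names from the listings above (verify the names before deleting anything):
# kubectl delete cassandradatacenter -n grafana cassandra-dc1
# kubectl delete pvc -n grafana data-volume-cassandra-cassandra-dc1-dc1-rack1-0 data-volume-cassandra-cassandra-dc1-dc1-rack1-1 data-volume-cassandra-cassandra-dc1-dc1-rack1-2
# kubectl apply -f examples/example-datacenter.yaml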
4.2 Error 2
# kubectl exec cassandra-cassandra-dc1-dc1-rack1-0 -c cassandra -n grafana -- cqlsh -e "ALTER KEYSPACE loki WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'replication_factor': '3'};" cassandra-cassandra-dc1-dc1-nodes -ucassandra -pcassandra
<stdin>:1:ConfigurationException: replication_factor is an option for SimpleStrategy, not NetworkTopologyStrategy
Solution: replication_factor is not a valid option for NetworkTopologyStrategy, which instead lets you define the replication per data center. The correct query is:
# CREATE KEYSPACE NTSkeyspace WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 };
Unable to specify the initContainer image:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned grafana/cassandra-cassandra-dc1-dc1-rack1-0 to cn-hangzhou.10.41.128.145
Normal Pulling 45s kubelet, cn-hangzhou.10.41.128.145 Pulling image "busybox:latest"
# egrep -r busybox cassandra-operator-7.1.0/
cassandra-operator-7.1.0/pkg/controller/cassandradatacenter/statefulset.go: var image = "busybox:latest"
Solution:
cassandra-operator#379[5]
added initImage field into spec for the init container; up to now it was always busybox:latest. It defaults to this image if that field is empty.
# vi cassandra-operator-7.1.0/examples/example-datacenter.yaml
spec:
initImage: ops-harbor.hupu.io/base/alpine:v3.10
References
[1] Apache Cassandra: https://cassandra.apache.org/doc/latest/architecture/dynamo.htm
[2] Cassandra CRD deployment: https://github.com/instaclustr/cassandra-operator/wiki/Installation-and-deployment
[3] Cassandra Cluster reference 1: https://github.com/instaclustr/cassandra-operator/wiki/Custom-configuration
[4] Cassandra Cluster reference 2: https://cassandra.apache.org/doc/latest/configuration/index.html
[5] cassandra-operator#379: https://github.com/instaclustr/cassandra-operator/issues/379