Integrating docker, containerd, and cri-o with kata-containers


kata-containers environment requirements

# Virtualization must be enabled. If this host is itself a VM running under KVM, upgrade its kernel to 4.x or later and enable nested virtualization.
# First check whether the kvm_intel module on the KVM host (the physical machine) has nested virtualization enabled (it is on by default):
[root@ceph-2-52 ~]# modinfo kvm_intel | grep nested
parm:           nested_early_check:bool
parm:           nested:bool
[root@ceph-2-52 ~]#  cat /sys/module/kvm_intel/parameters/nested
Y
# If the output above is not Y, enable nested virtualization:
[root@ceph-2-52 ~]# modprobe -r kvm-intel
[root@ceph-2-52 ~]#  modprobe kvm-intel nested=1
[root@ceph-2-52 ~]# cat /sys/module/kvm_intel/parameters/nested
Y
# Then create the virtual machine.
# When starting a guest with qemu, add "-cpu host" or "-cpu qemu64,+vmx" to the command line.
# By default the guest will not support nested virtualization otherwise.
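# For example, a minimal qemu invocation that exposes VMX to the guest could look like
# the following sketch (the disk image path is only a placeholder):
qemu-system-x86_64 -enable-kvm -cpu host -m 2048 -smp 2 \
    -drive file=/var/lib/libvirt/images/guest.qcow2,format=qcow2 \
    -nographic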

# Check whether the current system supports nested virtualization
systool -m kvm_intel -v  | grep -i nested
nested              = "N"
# Or check it this way
cat /sys/module/kvm_intel/parameters/nested
N

# Step 1: upgrade the kernel. Use a 4.x or newer kernel for testing. Upgrading is simple: install a pre-built kernel RPM from the elrepo repository below, then set the new kernel as the default boot entry in grub.
yum -y update
yum -y install yum-plugin-fastestmirror
yum install -y epel-release
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-ml
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
# If the nested option is still not enabled after booting the new kernel, go to step 2
vim  /boot/grub2/grub.cfg
linux16 /vmlinuz-5.6.12-1.el7.elrepo.x86_64 root=UUID=f870e0a7-5edc-45a4-942c-3224020ac5b7 ro crashkernel=auto nodmraid biosdevname=0 net.ifnames=0 rhgb quiet kvm-intel.nested=1
# Step 2: adding the boot parameter is just as simple; append "kvm-intel.nested=1" to the end of the kernel (linux16) line as shown above
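# On CentOS 7 the parameter can also be added persistently with grubby instead of editing
# grub.cfg by hand (a sketch, assuming grubby is installed):
grubby --update-kernel=ALL --args="kvm-intel.nested=1"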
# Reboot and check
[root@ceph-2-52 ~]# cat /sys/module/kvm_intel/parameters/nested
Y
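# The module option itself can also be persisted through modprobe configuration, so it
# survives reboots without relying on the kernel command line (file name is an example):
echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-nested.conf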
# Modify the XML of any existing KVM virtual machines
[root@ceph-2-52 ~]# virsh list --all
setlocale: No such file or directory
 Id    Name                           State
----------------------------------------------------
 4     ubuntu                         running
 -     devops-k8s-06                  shut off
 -     rel8                           shut off
 # Shut down the KVM virtual machine to be modified, then edit its definition
  virsh edit ubuntu
    # Delete the old CPU configuration and replace it with the following
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='forbid'>core2duo</model>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='sse4.1'/>
  </cpu>
# Start the virtual machine again
virsh start ubuntu
# Check whether the virtual machine now supports virtualization
root@ubuntu-18:~#  lsmod  | grep kvm
kvm_intel             217088  3
kvm                   610304  1 kvm_intel
irqbypass              16384  6 kvm
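# Inside the guest you can also confirm that the vmx CPU flag and /dev/kvm are present:
grep -c vmx /proc/cpuinfo
ls -l /dev/kvm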

Installing kata-containers

# Project: https://github.com/kata-containers
# Install Kata Containers on Ubuntu: 16.04 and 18.04 are supported; 19.x is not supported yet, but you can build from source
 ARCH=$(arch)
 BRANCH="${BRANCH:-master}"
 sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/ /' > /etc/apt/sources.list.d/kata-containers.list"
 curl -sL  http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -
 sudo -E apt-get update
 sudo -E apt-get -y install kata-runtime kata-proxy kata-shim
#  Install Kata Containers on CentOS: version 7 is supported
 source /etc/os-release
 sudo yum -y install yum-utils
 ARCH=$(arch)
 BRANCH="${BRANCH:-master}"
 sudo -E yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/CentOS_${VERSION_ID}/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"
 sudo -E yum -y install kata-runtime kata-proxy kata-shim
# Check whether this host/KVM setup can run Kata Containers
  kata-runtime kata-check
root@ubuntu-18:#  kata-runtime kata-check
System is capable of running Kata Containers
System can currently create Kata Containers
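# kata-runtime can also dump its view of the host and its configuration, which helps when kata-check fails:
kata-runtime kata-env
kata-runtime --version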

Integrating docker with kata-containers

# Create docker configuration folder
 mkdir -p /etc/docker

# Add the following definitions to /etc/docker/daemon.json
{
  "default-runtime": "kata-runtime",
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
# Restart the Docker systemd service with the following commands
 sudo systemctl daemon-reload
 sudo systemctl restart docker
 # Run Kata Containers
 docker run busybox uname -a
# By default each Kata container gets 1 vCPU and 2 GB of memory
# If the default "default-runtime": "kata-runtime" entry is removed from daemon.json, select the runtime explicitly:
 docker run --runtime=kata-runtime -ti busybox /bin/sh
 # Specify the CPU and memory size
 docker run -tid --cpus 4 --memory 4096Mb busybox /bin/sh
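# To confirm a container is really backed by a Kata VM, check which runtime Docker recorded
# and compare the kernel seen inside the container with the host kernel (the container name
# "kata-test" is only an example):
docker run -d --runtime=kata-runtime --name kata-test busybox sleep 3600
docker inspect -f '{{.HostConfig.Runtime}}' kata-test
docker exec kata-test uname -r   # should differ from `uname -r` on the host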
# Unfortunately, this Docker setup fails when integrated with K8s; as long as host networking is not used, however, there is no problem.

Integrating containerd with kata-containers

# Download the required packages
 https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz
 https://github.com/containerd/containerd/releases/download/v1.3.4/containerd-1.3.4.linux-amd64.tar.gz
 https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
# Install the CNI binaries under /apps/cni/bin
mkdir -p /apps/cni/bin 
cd /apps/cni/bin 
wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz
rm -f cni-plugins-linux-amd64-v0.8.5.tgz
cd /apps
# Install containerd under /apps/containerd/bin
mkdir -p  /apps/containerd
wget https://github.com/containerd/containerd/releases/download/v1.3.4/containerd-1.3.4.linux-amd64.tar.gz
tar -xvf containerd-1.3.4.linux-amd64.tar.gz
# Install the crictl binary
cd /apps
wget  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz
tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/crictl
# Create the directory for configuration files
mkdir -p /apps/containerd/conf
cd /apps/containerd/conf
# Create the configuration file
vim config.toml
[plugins.opt]
path = "/apps/containerd/bin/containerd"    # location of the binaries
[plugins.cri]
stream_server_address = "127.0.0.1"
stream_server_port = "10010"
sandbox_image = "docker.io/juestnow/pause-amd64:3.2" # change the pause image to your own
max_concurrent_downloads = 20 # maximum number of concurrent image downloads
  [plugins.cri.containerd]
    snapshotter = "overlayfs" # container snapshotter
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = ""
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]  # selected via the K8s pod annotation io.kubernetes.cri.untrusted-workload: "true" (example at the end of this section)
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/bin/kata-runtime"
      runtime_root = ""
    [plugins.cri.containerd.runtimes.kata-runtime] # selected via the K8s RuntimeClass
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/bin/kata-runtime"
      runtime_root = ""
  [plugins.cri.cni]
    bin_dir = "/apps/cni/bin"    # CNI binary directory
    conf_dir = "/etc/cni/net.d" # CNI config directory; only used when containerd runs standalone, not when integrated with K8s
[plugins."io.containerd.runtime.v1.linux"]
  shim = "containerd-shim"
  runtime = "runc"
  runtime_root = ""
  no_shim = false
  shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
  platforms = ["linux/amd64"]
# Prepare the crictl configuration file
vim /etc/crictl.yaml
------------------------------------------------------------------
  runtime-endpoint: unix:///run/containerd/containerd.sock
  image-endpoint: unix:///run/containerd/containerd.sock
  timeout: 10
  debug: false
# Create the containerd systemd unit; if Docker is already installed, give this unit a different file name
vim /lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network-online.target

[Service]
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
# Adjust the directory below to your own needs
ExecStartPre=-/bin/mkdir -p /run/k8s/containerd
ExecStart=/apps/containerd/bin/containerd \
          -c /apps/containerd/conf/config.toml \
          -a /run/containerd/containerd.sock \
          --state /apps/k8s/run/containerd \
          --root /apps/k8s/containerd

KillMode=process
Delegate=yes
OOMScoreAdjust=-999
# Open-file limit for the containers
LimitNOFILE=65535
LimitNPROC=65535
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
# Reload systemd
systemctl daemon-reload
# Start containerd
systemctl start containerd.service
# Enable containerd at boot
systemctl enable containerd.service
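# Before pointing the kubelet at containerd, it is worth confirming that the CRI socket
# answers and that the CRI plugin is loaded (socket path taken from the unit file above):
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
ctr --address /run/containerd/containerd.sock plugins ls | grep cri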
# Integrating with K8s: modify the kubelet
# For a binary kubelet deployment, add the following flags:
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
              --containerd=unix:///run/containerd/containerd.sock
# For a kubeadm deployment, modify:
vim /lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock  --containerd=unix:///run/containerd/containerd.sock"
# Modify the kubelet.service unit file and add the following under [Service]
vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=containerd.service   
Requires=containerd.service
[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice
# Restart kubelet
systemctl daemon-reload 
systemctl restart kubelet.service
# Check whether the containers are running
crictl ps
root@ubuntu-18:/etc# crictl ps
CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                ATTEMPT             POD ID
7a281bb924de8       docker.io/juestnow/net-tools@sha256:3ef2a9ac571f35fe0d785b9f289e301a5fd668aa72ba0c580f0c7ac2b6f86d6d   About an hour ago   Running             test-ip             0                   a7f5647a94288
8eb3bfbfd2da1       3efc460414d9c653856724597620c005190df0c42472981fbd88612647a1d2de                                       About an hour ago   Running             calico-node         0                   0597017eadf7b
c7c9358bbfbbc       docker.io/juestnow/net-tools@sha256:3ef2a9ac571f35fe0d785b9f289e301a5fd668aa72ba0c580f0c7ac2b6f86d6d   About an hour ago   Running             net-tools           0                   2507ba7a65237
3060c327d1979       3d0acfd4b50041a38c624a3ee2fca2b609675b18b142237032d892f3247a2bca                                       About an hour ago   Running             ingress-system      0                   31df0b46c9f1a
c2454403e21ba       67659abde8d565e10ebc2ea58c6a6062a3ed23f991b7af1dbe84d6c0542d82d7                                       About an hour ago   Running             k8s-ha-master       0                   1bfb59ad2df10
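# With the untrusted_workload_runtime entry from config.toml above, a pod can also be sent to
# kata-runtime by annotation alone, without a RuntimeClass (a sketch; the image is just an example):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-test
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: untrusted-test
    image: busybox
    command: ["sleep", "3600"]
EOF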

Integrating cri-o with kata-containers

# Download the required packages
 https://github.com/cri-o/cri-o/releases/download/v1.18.0/crio-v1.18.0.tar.gz
# Install the crio binaries under /apps/crio
wget  https://github.com/cri-o/cri-o/releases/download/v1.18.0/crio-v1.18.0.tar.gz
tar -xvf crio-v1.18.0.tar.gz
mv crio-v1.18.0 crio
# CNI setup
mkdir -p /apps/cni
cd crio 
mv  cni-plugins  /apps/cni/bin
# Install crictl
mv ./bin/crictl /usr/local/bin/crictl
# Configure crio
# Provide policy.json and registries.conf whenever possible, otherwise you may run into pitfalls
mkdir -p /etc/containers/
vim  /etc/containers/policy.json
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports":
        {
            "docker-daemon":
                {
                    "": [{"type":"insecureAcceptAnything"}]
                }
        }
}
vim  /etc/containers/registries.conf
# This is a system-wide configuration file used to
# keep track of registries for various container backends.
# It adheres to TOML format and does not support recursive
# lists of registries.

# The default location for this configuration file is /etc/containers/registries.conf.

# The only valid categories are: 'registries.search', 'registries.insecure',
# and ‘registries.block‘.

[registries.search]
registries = ['registry.access.redhat.com', 'docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.centos.org']

# If you need to access insecure registries, add the registry's fully-qualified name.
# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
[registries.insecure]
registries = []

# If you need to block pull access from a registry, uncomment the section below
# and add the registry's fully-qualified name.
#
# Docker only
[registries.block]
registries = []
# Edit crio.conf, compare against the settings below and adjust; be sure to change the paths to your own
cd /apps/crio/etc/
vim crio.conf
[crio]
root = "/apps/crio/lib/containers/storage"
runroot = "/apps/crio/run/containers/storage"
log_dir = "/var/log/crio/pods"
version_file = "/var/run/crio/version"
[crio.api]
listen = "/var/run/crio/crio.sock"
stream_address = "127.0.0.1"
stream_port = "0"
stream_enable_tls = false
stream_tls_cert = ""
stream_tls_key = ""
stream_tls_ca = ""
grpc_max_send_msg_size = 16777216
grpc_max_recv_msg_size = 16777216
[crio.runtime]
default_ulimits = [
  "nofile=65535:65535",
  "nproc=65535:65535",
  "core=-1:-1"
]
default_runtime = "runc"
no_pivot = false
decryption_keys_path = "/apps/crio/keys/"
conmon = "/apps/crio/bin/conmon"
conmon_cgroup = "system.slice"
conmon_env = [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/apps/crio/bin", 
]
default_env = [
]
selinux = false
seccomp_profile = ""
apparmor_profile = "crio-default"
cgroup_manager = "cgroupfs"
default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FSETID",
        "FOWNER",
        "SETGID",
        "SETUID",
        "SETPCAP",
        "NET_BIND_SERVICE",
        "KILL",
]
default_sysctls = [
]
additional_devices = [
]
hooks_dir = [
        "/apps/crio/containers/oci/hooks.d",
]
default_mounts = [
]
pids_limit = 65535
log_size_max = -1
log_to_journald = false
container_exits_dir = "/var/run/crio/exits"
container_attach_socket_dir = "/var/run/crio"
bind_mount_prefix = ""
read_only = false
log_level = "info"
log_filter = ""
uid_mappings = ""
gid_mappings = ""
ctr_stop_timeout = 30
manage_ns_lifecycle = true
namespaces_dir = "/var/run"
pinns_path = "/apps/crio/bin/pinns"
[crio.runtime.runtimes.runc]
runtime_path = "/apps/crio/bin/runc"
runtime_type = "oci"
runtime_root = "/run/runc"
[crio.runtime.runtimes.kata-runtime]      # RuntimeClass
  runtime_path = "/usr/bin/kata-runtime"
  runtime_type = "oci"
  runtime_root = ""
[crio.image]
default_transport = "docker://"
global_auth_file = ""
pause_image = "docker.io/juestnow/pause-amd64:3.2"
pause_image_auth_file = ""
pause_command = "/pause"
signature_policy = ""
image_volumes = "mkdir"
[crio.network]
network_dir = "/apps/cni/etc/net.d/"
plugin_dirs = [
        "/apps/cni/bin/",
]
[crio.metrics]
enable_metrics = false
metrics_port = 9090
# Create /apps/crio/containers/oci/hooks.d, otherwise crio may fail to start
mkdir -p /apps/crio/containers/oci/hooks.d
# Prepare the crictl configuration file
vim /etc/crictl.yaml
------------------------------------------------------------------
runtime-endpoint: unix:///var/run/crio/crio.sock
# crio systemd unit file
vim /lib/systemd/system/crio.service
[Unit]
Description=OCI-based implementation of Kubernetes Container Runtime Interface
Documentation=https://github.com/cri-o/cri-o

[Service]
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/apps/crio/bin/crio-static --config /apps/crio/etc/crio.conf --log-level info
Restart=on-failure
RestartSec=5
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
LimitMEMLOCK=infinity
KillMode=process
[Install]
WantedBy=multi-user.target
# Reload systemd
systemctl daemon-reload
# Start crio
systemctl start crio.service
# Enable crio at boot
systemctl enable crio.service
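# As with containerd, verify that crio answers on its socket before reconfiguring the kubelet:
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info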
# Integrating with K8s: modify the kubelet
# For a binary kubelet deployment, add the following flags:
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
              --containerd=unix:///var/run/crio/crio.sock
# For a kubeadm deployment, modify:
vim /lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///var/run/crio/crio.sock --containerd=unix:///var/run/crio/crio.sock"
# Modify the kubelet.service unit file and add the following under [Service]
vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=crio.service
Requires=crio.service
[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice
# Restart kubelet
systemctl daemon-reload 
systemctl restart kubelet.service

Testing kata-containers with K8s

# Label the node(s) where kata-containers is deployed
 kubectl label nodes ubuntu-18  kata-runtime=yes
# Create the RuntimeClass
cat << EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-runtime  
handler: kata-runtime
EOF
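# Confirm the RuntimeClass exists before referencing it in a pod spec:
kubectl get runtimeclass kata-runtime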
# Create a pod
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ip
  labels:
    k8s-app: test-ip
spec:
  selector:
    matchLabels:
      k8s-app: test-ip
  template:
    metadata:
      labels:
        k8s-app: test-ip
    spec:
      runtimeClassName: kata-runtime
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      containers:
      - name: test-ip
        image: juestnow/net-tools
        command:
          - /bin/sh
          - '-c'
          - set -e -x; tail -f /dev/null
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 20Mi
      dnsConfig:
        options:
          - name: single-request-reopen
      nodeSelector:
        kata-runtime: "yes"
EOF
root@Qist:/mnt/g/work/ipv6/1# kubectl get pod | grep test-ip
test-ip-6c78cb4f6b-jvnlc   1/1     Running   0          117m
root@ubuntu-18:/apps/crio/etc# crictl ps | grep test-ip
7a281bb924de8       docker.io/juestnow/net-tools@sha256:3ef2a9ac571f35fe0d785b9f289e301a5fd668aa72ba0c580f0c7ac2b6f86d6d   2 hours ago         Running             test-ip             0                   a7f5647a94288
root@ubuntu-18:/apps/crio/etc# kata-runtime list
ID                                                                 PID         STATUS      BUNDLE
           CREATED                          OWNER
a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08   14468       running     /apps/crio/run/containers/storage/overlay-containers/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/userdata   2020-05-13T02:19:41.806235623Z   #0
7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309   14862       running     /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata   2020-05-13T02:19:51.104187752Z   #0
root     14264     1  0 10:19 ?        00:00:00 /apps/crio/bin/conmon --syslog -c a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08 -n k8s_POD_test-ip-6c78cb4f6b-jvnlc_default_58d6c692-f76e-40fe-9b1b-c3c7194ff098_0 -u a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08 -r /usr/bin/kata-runtime -b /apps/crio/run/containers/storage/overlay-containers/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/userdata --persist-dir /apps/crio/lib/containers/storage/overlay-containers/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/userdata -p /apps/crio/run/containers/storage/overlay-containers/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/userdata/pidfile -P /apps/crio/run/containers/storage/overlay-containers/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/userdata/conmon-pidfile -l /var/log/pods/default_test-ip-6c78cb4f6b-jvnlc_58d6c692-f76e-40fe-9b1b-c3c7194ff098/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08.log --exit-dir /var/run/crio/exits --socket-dir-path /var/run/crio --log-level info --runtime-arg --root=/apps/crio/run/kata-runtime
root     14809     1  0 10:19 ?        00:00:00 /apps/crio/bin/conmon --syslog -c 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309 -n k8s_test-ip_test-ip-6c78cb4f6b-jvnlc_default_58d6c692-f76e-40fe-9b1b-c3c7194ff098_0 -u 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309 -r /usr/bin/kata-runtime -b /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata --persist-dir /apps/crio/lib/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata -p /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata/pidfile -P /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata/conmon-pidfile -l /var/log/pods/default_test-ip-6c78cb4f6b-jvnlc_58d6c692-f76e-40fe-9b1b-c3c7194ff098/test-ip/0.log --exit-dir /var/run/crio/exits --socket-dir-path /var/run/crio --log-level info --runtime-arg --root=/apps/crio/run/kata-runtime
root     14306 14264  0 10:19 ?        00:00:06 /usr/libexec/kata-containers/kata-proxy -listen-socket unix:///run/vc/sbs/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/proxy.sock -mux-socket /run/vc/vm/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/kata.sock -sandbox a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08
root     14468 14264  0 10:19 ?        00:00:00 /usr/libexec/kata-containers/kata-shim -agent unix:///run/vc/sbs/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/proxy.sock -container a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08 -exec-id a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08
root     14809     1  0 10:19 ?        00:00:00 /apps/crio/bin/conmon --syslog -c 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309 -n k8s_test-ip_test-ip-6c78cb4f6b-jvnlc_default_58d6c692-f76e-40fe-9b1b-c3c7194ff098_0 -u 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309 -r /usr/bin/kata-runtime -b /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata --persist-dir /apps/crio/lib/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata -p /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata/pidfile -P /apps/crio/run/containers/storage/overlay-containers/7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309/userdata/conmon-pidfile -l /var/log/pods/default_test-ip-6c78cb4f6b-jvnlc_58d6c692-f76e-40fe-9b1b-c3c7194ff098/test-ip/0.log --exit-dir /var/run/crio/exits --socket-dir-path /var/run/crio --log-level info --runtime-arg --root=/apps/crio/run/kata-runtime
root     14862 14809  0 10:19 ?        00:00:00 /usr/libexec/kata-containers/kata-shim -agent unix:///run/vc/sbs/a7f5647a942882746cae01f5f8da02d7e366dcd4b85d59ca3463884e34297e08/proxy.sock -container 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309 -exec-id 7a281bb924de8fc2d75208c67206ad0576907ed3dce1ad54f13df9bb6b215309
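# A final check that the pod really runs on a separate guest kernel: the kernel version reported
# inside the pod should differ from the node's own `uname -r` (pod name taken from the output above):
kubectl exec -it test-ip-6c78cb4f6b-jvnlc -- uname -r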
