Built a Kubernetes cluster on Ubuntu 22.04
Notes from setting up a Kubernetes cluster with kubeadm on three Ubuntu 22.04 machines.
Environment
- OS: Ubuntu 22.04.2 LTS
- CRI: containerd 1.6.12
- Kubernetes: 1.27.3
Background
On Ubuntu 22.04, installing Kubernetes with the same steps I had used on Ubuntu 20.04 did not work.
Specifically, kubeadm init failed partway through with errors like the following, and the cluster could not be created.
[init] Using Kubernetes version: v1.27.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0716 11:55:37.959602 2738 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
error execution phase addon/coredns: unable to create ConfigMap: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
To see the stack trace of this error execute with --v=5 or higher
# Another attempt
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
error execution phase addon/kube-proxy: unable to create ConfigMap: Post "https://xxxxx.notr.app:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
Passing --skip-phases=addon/kube-proxy to kubeadm init did let me create a cluster for the time being, but the API server and other components kept crashing, so it was not really usable.
After some digging, this apparently comes down to Ubuntu 22.04 having switched from cgroup v1 to cgroup v2 compared to Ubuntu 20.04.
Since v1.22, kubeadm appears to default to systemd (≈ v2?) as the cgroup driver when nothing is specified, so Ubuntu 22.04 itself should be fine; however, containerd installed from the official Ubuntu repository uses cgroupfs (≈ v1?) as its cgroup driver by default, and that mismatch seems to be why things break.
So, applying the containerd/runc settings described below on each host when building the cluster made everything work properly.
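Concretely, the key change is runc's cgroup driver in /etc/containerd/config.toml. For reference, in containerd 1.6's generated default config the relevant section looks roughly like this (an excerpt for illustration; the default ships with SystemdCgroup = false):
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # switch runc from the cgroupfs driver to the systemd cgroup driver
    SystemdCgroup = true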
Procedure
Disable swap
sudo swapoff -a
sudo sed -i '/\/swap.img/d' /etc/fstab
sudo rm -rf /swap.img
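To double-check that swap is actually off:
swapon --show   # prints nothing if no swap is active
free -h         # the Swap line should show 0B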
Kernel settings
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
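To confirm the modules are loaded and the sysctl values took effect:
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward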
Install the required packages
sudo apt-get update
sudo apt-get install -y containerd
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
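A quick sanity check that everything is installed and pinned:
kubeadm version -o short
kubelet --version
kubectl version --client
apt-mark showhold   # should list kubelet, kubeadm, kubectl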
Configure containerd (the key point this time)
sudo mkdir -p /etc/containerd
# Generate containerd's default config
containerd config default | sudo tee /etc/containerd/config.toml
# Use the systemd cgroup driver for runc
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Align the sandbox image with the recommended version
sudo sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.k8s.io/pause:3.9"#g' /etc/containerd/config.toml
sudo systemctl restart containerd
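To make sure the edits are actually in the config containerd restarted with:
grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = true
grep sandbox_image /etc/containerd/config.toml   # expect: registry.k8s.io/pause:3.9
systemctl is-active containerd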
Create the cluster with kubeadm
- On the first machine, run init:
sudo kubeadm init \
--control-plane-endpoint=xxxxx.notr.app \
--pod-network-cidr=10.30.0.0/16 \
--upload-certs
- On the second machine, join using the command printed after the cluster is created on the first machine:
sudo kubeadm join .....
- On every machine, run the following so kubectl can also be used as a regular user (a quick check follows after this list):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
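With the kubeconfig in place, kubectl should respond; once every machine has joined, the nodes are listed (they stay NotReady until the CNI is installed in the next step):
kubectl get nodes -o wide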
Install the CNI
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
sed -i 's#"Network": "10.244.0.0/16"#"Network": "10.30.0.0/16"#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
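After a little while the flannel pods should be Running and the nodes should turn Ready (the namespace below assumes a recent kube-flannel.yml, which deploys into kube-flannel; older manifests used kube-system):
kubectl get pods -n kube-flannel -o wide
kubectl get nodes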
Allow pods to run on the control plane as well
kubectl taint nodes ${NODE_NAME} node-role.kubernetes.io/control-plane:NoSchedule-
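To confirm the taint is gone (the Taints line should now show <none>):
kubectl describe node ${NODE_NAME} | grep Taints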
(Done.)
Digression
Since kubeadm defaults to the systemd cgroup driver (≈ v2?), I was curious why things work on Ubuntu 20.04, which uses cgroup v1, without doing anything special, so I looked into it. It seems I still need a lot more background knowledge, though.
As a preliminary check, Ubuntu 20.04 is indeed on cgroup v1 and Ubuntu 22.04 is on cgroup v2.
# Ubuntu 20.04
$ stat -fc %T /sys/fs/cgroup/
tmpfs
# Ubuntu 22.04
$ stat -fc %T /sys/fs/cgroup/
cgroup2fs
First, the kubelet logs. The following message appeared only on Ubuntu 20.04.
I0717 04:43:00.341325 1910 server.go:634] "Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-1000.slice/session-1.scope"
Next, the cgroup tree as shown by systemctl status. The structure is completely different.
- Ubuntu 20.04
CGroup: /
├─802 bpfilter_umh
├─kubepods-burstable-pod5274c761_8930_4974_8e63_39249bafa2a6.slice:cri-containerd:01d50b82>
│ └─1433 /pause
├─kubepods-burstable-pod0dbafa2598f87622047fe2f6a5c200b2.slice:cri-containerd:4a2dd8c92ae6>
│ └─990 /pause
├─kubepods-besteffort-pod5b8dfc75_34c9_43bc_bb9f_8e58aac71951.slice:cri-containerd:657b3f9>
│ └─1494 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-ove>
├─kubepods-burstable-pod0ccf6f798cd6a1e8c01b8c5475a7779a.slice:cri-containerd:86ade32d465b>
│ └─973 /pause
├─kubepods-besteffort-pod5b8dfc75_34c9_43bc_bb9f_8e58aac71951.slice:cri-containerd:2f7de43>
│ └─1422 /pause
├─kubepods-burstable-pod0dbafa2598f87622047fe2f6a5c200b2.slice:cri-containerd:209119c536a0>
│ └─1218 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --autho>
├─kubepods-burstable-podbfffa85becc7ba3fbe6430f533c2a0d7.slice:cri-containerd:791fa34d8398>
│ └─1194 etcd --advertise-client-urls=https://xxx.xxx.xxx.xxx:2379 --cert-file=/etc/kubern>
├─user.slice
│ └─user-1000.slice
│ ├─[email protected]
│ │ └─init.scope
│ │ ├─1735 /lib/systemd/systemd --user
│ │ └─1736 (sd-pam)
│ └─session-1.scope
│ ├─1519 sshd: suuei [priv]
│ ├─1832 sshd: suuei@pts/0
│ ├─1833 -bash
│ ├─4438 systemctl status
│ └─4439 pager
├─kubepods-burstable-pod37eb2721c8a883a24c9565177cb3e592.slice:cri-containerd:1d7be6169b1a>
│ └─1200 kube-apiserver --advertise-address=xxx.xxx.xxx.xxx --allow-privileged=true --auth>
├─init.scope
│ └─1 /sbin/init
├─kubepods-burstable-pod0ccf6f798cd6a1e8c01b8c5475a7779a.slice:cri-containerd:36c2fe5213d9>
│ └─1216 kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-ma>
├─system.slice
│ ├─irqbalance.service
│ │ └─683 /usr/sbin/irqbalance --foreground
│ ├─containerd.service
│ │ ├─ 705 /usr/bin/containerd
│ │ ├─ 881 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 86ade32d465ba3072e25c8ab>
│ │ ├─ 882 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4a4d0fa663e555d972c1414f>
│ │ ├─ 883 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4a2dd8c92ae649a8127e30af>
│ │ ├─ 884 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id f82578921bcf720aefd06f5b>
│ │ ├─1324 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2f7de43e53b6d13874f8aa5d>
│ │ └─1385 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 01d50b82919b151c2d1eb97c>
│ ├─systemd-networkd.service
- Ubuntu 22.04
CGroup: /
├─user.slice
│ └─user-1000.slice
│ ├─[email protected]
│ │ └─init.scope
│ │ ├─1306 /lib/systemd/systemd --user
│ │ └─1307 (sd-pam)
│ └─session-1.scope
│ ├─1299 sshd: suuei [priv]
│ ├─1388 sshd: suuei@pts/0
│ ├─1389 -bash
│ ├─2365 systemctl status
│ └─2366 pager
├─init.scope
│ └─1 /sbin/init
├─system.slice
│ ├─irqbalance.service
│ │ └─675 /usr/sbin/irqbalance --foreground
│ ├─containerd.service
│ │ ├─ 689 /usr/bin/containerd
│ │ ├─ 989 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 927aa7813dcb84d8152691af>
│ │ ├─ 990 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8fa62c38a7d70ff13c7ffbec>
│ │ ├─1011 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 972b24b097524f8e8f9b696c>
│ │ ├─1028 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 134c20cc6bf87a0059dc9ece>
│ │ ├─1831 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id da0cadbc8af4b58bcb4d5ef2>
│ │ └─1860 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5967d4c8554ff4fda26c70e2>
│ ├─systemd-networkd.service
└─kubepods.slice
├─kubepods-burstable.slice
│ ├─kubepods-burstable-podd50cd1eec3843135d69a00ea588eb029.slice
│ │ ├─cri-containerd-8fa62c38a7d70ff13c7ffbec66df9b9a7b9383eb533f5a22a4d250f91ac97cf4.sc>
│ │ │ └─1076 /pause
│ │ └─cri-containerd-f3406862de907aacd9c2997f471047ee7c5f4e4aa208a61ddd0a24fd1b53d7d4.sc>
│ │ └─1227 kube-apiserver --advertise-address=xxx.xxx.xxx.xxx --allow-privileged=true >
│ ├─kubepods-burstable-pode5b04fc299b64d18ee398fc7678e87ee.slice
│ │ ├─cri-containerd-33ea0953a054157192f1eef91f95cbfdb06f69111bcf1ca02a98ea3a0c7a8d6f.sc>
│ │ │ └─1230 kube-controller-manager --allocate-node-cidrs=true --authentication-kubecon>
│ │ └─cri-containerd-927aa7813dcb84d8152691af0e5ee0e40fc8400420cc1b67ede08a71828943eb.sc>
│ │ └─1094 /pause
│ ├─kubepods-burstable-pode38326ebb7a95e9f8b9aacb25bc0bf0d.slice
│ │ ├─cri-containerd-017fea223cefa2aa4ecca6fca031fa08776c32626433c5a543f8cb0f9d218110.sc>
│ │ │ └─1219 etcd --advertise-client-urls=https://xxx.xxx.xxx.xxx:2379 --cert-file=/etc/>
│ │ └─cri-containerd-134c20cc6bf87a0059dc9ece879d1da159533bc63874e26ce3665d4d3ff327ec.sc>
│ │ └─1084 /pause
│ ├─kubepods-burstable-pod1d8c5cf2_39f0_4c44_97ca_28c97b13bd9d.slice
│ │ ├─cri-containerd-c0b525600f6452e07b6ea2d48e6d0a1fedee24972cdcf44c58a2d2b7aafb3fbf.sc>
│ │ │ └─2204 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
│ │ └─cri-containerd-da0cadbc8af4b58bcb4d5ef2980ce37743a093b6d35acda39d422ef67933b144.sc>
│ │ └─1878 /pause
│ └─kubepods-burstable-podf035104f862bfa573c7acec5a3816f5a.slice
│ ├─cri-containerd-972b24b097524f8e8f9b696cdee51a3c944e5c24680d56997d7570fd3c9dda2f.sc>
│ │ └─1085 /pause
│ └─cri-containerd-8e64d3eebf95812960ee68b2f37c52056f29b4b881b3c75e42d8485cdb682c52.sc>
│ └─1228 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf ->
└─kubepods-besteffort.slice
└─kubepods-besteffort-podb1f8cfe5_9406_491b_a37a_5d8f150d125e.slice
├─cri-containerd-5967d4c8554ff4fda26c70e253e9caa6767bba53d36fddc52c79403caeef366f.sc>
│ └─1890 /pause
└─cri-containerd-b7f53884795bb34dbf550aff5e68b8a19a013cdfe20e1303aa112b92528dfd97.sc>
└─1944 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostna>
So if systemd itself is running on cgroup v1, does the kubelet just end up operating on cgroup v1 even with the systemd cgroup driver?
Let's poke around on Ubuntu 22.04 while changing containerd's settings.
I thought about configuring crictl to talk to containerd and running containers that way, but it looked like a bit of a hassle.
The config, for now:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
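With that in place, crictl can talk to containerd directly, for example:
sudo crictl info | head   # runtime info
sudo crictl ps -a         # containers
sudo crictl pods          # pod sandboxes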
Looking around, I found a tool (nerdctl) that lets you work with containerd the same way as Docker, so I gave it a try. Very handy.
mkdir nerdctl
cd nerdctl
wget https://github.com/containerd/nerdctl/releases/download/v1.4.0/nerdctl-1.4.0-linux-amd64.tar.gz
tar xvf nerdctl-1.4.0-linux-amd64.tar.gz
sudo ./nerdctl ps -a
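As an aside, a loop like the following (using -d to detach instead of &) is an easy way to start several containers at once:
for i in $(seq 10); do
  # start a detached nginx container via containerd
  sudo ./nerdctl run -d nginx
done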
First, leaving containerd's config as installed (i.e. without switching runc's cgroup driver to systemd), I ran nerdctl run nginx & ten times to start ten nginx containers. It looks like this:
└─system.slice
├─nerdctl-7133fed8d159231ac5148f47630de29c4d2c31f0e768b8fdb19b6cc2793f4af9.scope
│ ├─3056 nginx: master process nginx -g daemon off;
│ ├─3162 nginx: worker process
│ ├─3163 nginx: worker process
│ ├─3164 nginx: worker process
│ └─3165 nginx: worker process
├─irqbalance.service
│ └─665 /usr/sbin/irqbalance --foreground
├─containerd.service
│ ├─ 682 /usr/bin/containerd
│ ├─1576 /usr/bin/containerd-shim-runc-v2 -namespace default -id c763f3513967a5bb9a5adea9cb0dd6de91>
│ ├─1913 /usr/bin/containerd-shim-runc-v2 -namespace default -id d75f4c8f62ae6c85f91346da459aa80da2>
│ ├─2114 /usr/bin/containerd-shim-runc-v2 -namespace default -id d5509bddda9ef7525b54375c6d378fefea>
│ ├─2289 /usr/bin/containerd-shim-runc-v2 -namespace default -id d761213e763a836c954d57eca1214fa2ff>
│ ├─2437 /usr/bin/containerd-shim-runc-v2 -namespace default -id 775d9f6c013c93e0b65b439531d7b876ac>
│ ├─2585 /usr/bin/containerd-shim-runc-v2 -namespace default -id d8f8f56f3e18bb420b2cde34744da868d6>
│ ├─2750 /usr/bin/containerd-shim-runc-v2 -namespace default -id 60a587a8b5211965cf105793b873d2f267>
│ ├─2893 /usr/bin/containerd-shim-runc-v2 -namespace default -id 2e7912795d2eb1e22c94bd67d337f749b4>
│ ├─3037 /usr/bin/containerd-shim-runc-v2 -namespace default -id 7133fed8d159231ac5148f47630de29c4d>
│ ├─3182 /usr/bin/containerd-shim-runc-v2 -namespace default -id 667e89b1d16f0089d8a4a03a24f0ffd2ef>
│ ├─3328 /usr/bin/containerd-shim-runc-v2 -namespace default -id e30c2bef437299dc7bc12e7f201f03fbac>
│ ├─3471 /usr/bin/containerd-shim-runc-v2 -namespace default -id d50275fed4584b006a10fc3ff04750d57b>
│ └─3617 /usr/bin/containerd-shim-runc-v2 -namespace default -id 2e03071d1009477160989d7939a8fab9e5>
├─systemd-networkd.service
│ └─643 /lib/systemd/systemd-networkd
├─nerdctl-d50275fed4584b006a10fc3ff04750d57bec52c9bc05e9310a618af5781ef4a1.scope
│ ├─3491 nginx: master process nginx -g daemon off;
│ ├─3597 nginx: worker process
│ ├─3598 nginx: worker process
│ ├─3599 nginx: worker process
│ └─3600 nginx: worker process
├─systemd-udevd.service
│ └─430 /lib/systemd/systemd-udevd
├─cron.service
│ └─658 /usr/sbin/cron -f -P
├─polkit.service
│ └─668 /usr/libexec/polkitd --no-debug
├─networkd-dispatcher.service
│ └─667 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
├─multipathd.service
│ └─427 /sbin/multipathd -d -s
├─nerdctl-2e7912795d2eb1e22c94bd67d337f749b404662867871b5824867666cc4bfcae.scope
│ ├─2913 nginx: master process nginx -g daemon off;
│ ├─3016 nginx: worker process
│ ├─3017 nginx: worker process
│ ├─3018 nginx: worker process
│ └─3019 nginx: worker process
├─ModemManager.service
With containerd's config changed (i.e. runc's cgroup driver set to systemd), nerdctl run nginx looks like this:
└─system.slice
├─irqbalance.service
│ └─665 /usr/sbin/irqbalance --foreground
├─nerdctl-cdfba8805fe730a611afa38e1598dc8e834fa200186a38ff641f30622a81d197.scope
│ ├─5335 nginx: master process nginx -g daemon off;
│ ├─5448 nginx: worker process
│ ├─5449 nginx: worker process
│ ├─5450 nginx: worker process
│ └─5451 nginx: worker process
├─containerd.service
│ ├─5290 /usr/bin/containerd
│ └─5318 /usr/bin/containerd-shim-runc-v2 -namespace default -id cdfba8805fe730a611afa38e1598dc8e83>
├─systemd-networkd.service
│ └─643 /lib/systemd/systemd-networkd
├─systemd-udevd.service
│ └─430 /lib/systemd/systemd-udevd
Either way, the containers just run normally.
Next, I reverted containerd's config and ran kubeadm init. It fails, but let's look at the state as-is anyway.
└─system.slice
├─irqbalance.service
│ └─665 /usr/sbin/irqbalance --foreground
├─containerd.service
│ ├─5660 /usr/bin/containerd
│ ├─5949 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id d4a82818a4a7514a0657c091a3dbbba78c5>
│ ├─6489 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a4b2e013180ed03bf113559b004f8fe391>
│ ├─6519 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id b21c27c5b0fe82a3b08eb5841b04d5f51c3>
│ └─6520 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id dcf018e7ea649d5f2e1ff498ef627fffb75>
├─systemd-networkd.service
│ └─643 /lib/systemd/systemd-networkd
├─systemd-udevd.service
│ └─430 /lib/systemd/systemd-udevd
├─cron.service
│ └─658 /usr/sbin/cron -f -P
├─polkit.service
│ └─668 /usr/libexec/polkitd --no-debug
├─networkd-dispatcher.service
│ └─667 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
├─kubepods-burstable-pod226208d2c5888c202462be30cfe898e9.slice:cri-containerd:b21c27c5b0fe82a3b08eb>
│ └─6574 /pause
├─multipathd.service
│ └─427 /sbin/multipathd -d -s
├─kubelet.service
│ └─6253 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfi>
├─ModemManager.service
│ └─694 /usr/sbin/ModemManager
├─kubepods-burstable-pod97cac5fc8f5122c2700c3881fff283a1.slice:cri-containerd:7cbc133a225174f84d3d5>
│ └─6653 kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.con>
├─systemd-journald.service
│ └─385 /lib/systemd/systemd-journald
├─unattended-upgrades.service
│ └─711 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-sign>
├─ssh.service
│ └─722 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
├─kubepods-burstable-podc2aeff821355ff3ba3eebbc228d4cfa8.slice:cri-containerd:dcf018e7ea649d5f2e1ff>
│ └─6572 /pause
├─kubepods-burstable-pod97cac5fc8f5122c2700c3881fff283a1.slice:cri-containerd:1a4b2e013180ed03bf113>
$ find /sys/fs/cgroup -name kube*
/sys/fs/cgroup/system.slice/kubepods-burstable-podc2aeff821355ff3ba3eebbc228d4cfa8.slice:cri-containerd:18a889e7da584f0dc71f2c21a5dc8651be253c91ff68ade001801138cfb2b03e
/sys/fs/cgroup/system.slice/kubepods-burstable-pod226208d2c5888c202462be30cfe898e9.slice:cri-containerd:b21c27c5b0fe82a3b08eb5841b04d5f51c3a886d288e6d3330cfc1bdda7a3e12
/sys/fs/cgroup/system.slice/kubelet.service
/sys/fs/cgroup/system.slice/kubepods-burstable-pod97cac5fc8f5122c2700c3881fff283a1.slice:cri-containerd:7cbc133a225174f84d3d5cdd32594052c6c0e6c361f2e92378900ad390204f7a
/sys/fs/cgroup/system.slice/kubepods-burstable-pod97cac5fc8f5122c2700c3881fff283a1.slice:cri-containerd:1a4b2e013180ed03bf113559b004f8fe39195e0ac27d416f811abba4f6024ce6
/sys/fs/cgroup/system.slice/kubepods-burstable-pod74089e695dcc76637fa5860eb768fd95.slice:cri-containerd:dce6ca0b4f9b8b1353cfc75fd584eb122762f75aab56e4c7a9960f22ea88a6db
/sys/fs/cgroup/system.slice/kubepods-burstable-podc2aeff821355ff3ba3eebbc228d4cfa8.slice:cri-containerd:5eb04dc4b3942b0e2fa86c714e2dc5b75ee36125e805784b4f669a93c67f1ca0
/sys/fs/cgroup/system.slice/kubepods-burstable-pod226208d2c5888c202462be30cfe898e9.slice:cri-containerd:7c71b5a032f31f12166f7136bd019198483333d75aed5ddee610d6480bd5f4ea
/sys/fs/cgroup/system.slice/kubepods-burstable-pod74089e695dcc76637fa5860eb768fd95.slice:cri-containerd:d09cb9e9a846c1dc0dfdefbd3c906211e1a4d660b0905e573b46ff17ca63e67d
/sys/fs/cgroup/kubepods.slice
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod226208d2c5888c202462be30cfe898e9.slice
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2aeff821355ff3ba3eebbc228d4cfa8.slice
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97cac5fc8f5122c2700c3881fff283a1.slice
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74089e695dcc76637fa5860eb768fd95.slice
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice
The kubelet creates kubepods.slice and applies its cgroup settings there, while containerd keeps putting containers under system.slice just as it did with cgroup v1, so this mismatch seems to be where things go wrong?
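One quick way to line up what each side is configured to use (cgroupDriver lives in the kubelet config that kubeadm writes to /var/lib/kubelet/config.yaml; SystemdCgroup is the containerd setting changed earlier):
stat -fc %T /sys/fs/cgroup/                          # cgroup2fs = cgroup v2
sudo grep cgroupDriver /var/lib/kubelet/config.yaml  # systemd by default since kubeadm 1.22
grep SystemdCgroup /etc/containerd/config.toml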
I probably won't figure it out without building up more knowledge and digging further.
This was my first real look at how cgroups work; it seems pretty interesting, so I'd like to keep exploring it.