Junki
Published on 2025-04-15

Kubernetes Beginner's Installation Guide (k8s 1.31, containerd runtime)

This article walks through a simple installation of Kubernetes v1.31 on Linux.

1. Host Configuration Changes

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Disable swap:

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
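These sysctls only take effect if the br_netfilter kernel module is loaded. The steps above skip that, so here is a minimal sketch following the standard kubeadm prerequisites:

```shell
# Load the required modules now and on every boot (standard kubeadm prerequisite).
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```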

Set the hostname:

hostnamectl set-hostname master

In a multi-node environment, repeat this on each worker node with its own hostname.

Map each IP to its hostname:

vi /etc/hosts 

Add:

172.29.245.219   master

In a multi-node environment, every node's hosts file should list the IPs of all nodes.
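For larger clusters, the hosts entries can be generated from a single node list instead of editing each file by hand. A sketch; the extra IPs and hostnames below are made-up examples:

```shell
# Example node list; replace the IPs/names with your real nodes.
cat > /tmp/nodes.txt << 'EOF'
172.29.245.219 master
172.29.245.220 node1
172.29.245.221 node2
EOF

# Append the list to the hosts file on each node
# (written to a demo file here for illustration; use /etc/hosts for real).
cat /tmp/nodes.txt >> /tmp/hosts.example
cat /tmp/hosts.example
```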

Enable IP forwarding:

vi  /etc/sysctl.conf

Add:

net.ipv4.ip_forward = 1

Then apply it:

sysctl -p

2. Install containerd

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd

3. Adjust the containerd Configuration

containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml

Change:

# originally:
sandbox_image = "registry.k8s.io/pause:3.6"
# change to:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"

Then start containerd and enable it on boot:

systemctl enable --now containerd
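One more setting in the same file is worth checking: recent kubelet releases default to the systemd cgroup driver, and containerd should match it. A sketch, assuming config.toml was generated by `containerd config default` as above:

```shell
# Switch containerd's runc runtime to the systemd cgroup driver to match kubelet.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```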

4. Install the k8s Packages

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
yum install -y kubelet kubeadm kubectl 
systemctl enable kubelet
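A quick sanity check that the expected 1.31 binaries were installed (assuming the install above succeeded):

```shell
kubeadm version -o short
kubelet --version
kubectl version --client
```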

5. Point crictl at containerd

vi  /etc/crictl.yaml

Add:

runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
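To confirm crictl can actually reach containerd through that socket (assuming containerd is running):

```shell
crictl info > /dev/null && echo "crictl can talk to containerd"
```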

6. Pull the k8s Images

List the image versions k8s needs:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers

Run these pulls on every node:

crictl pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.7             
crictl pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.7   
crictl pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.7             
crictl pull registry.aliyuncs.com/google_containers/kube-proxy:v1.31.7    
crictl pull registry.aliyuncs.com/google_containers/coredns:v1.11.3   
crictl pull registry.aliyuncs.com/google_containers/pause:3.10    
crictl pull registry.aliyuncs.com/google_containers/etcd:3.5.15-0
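Alternatively, the pulls can be generated straight from the list command so the version numbers never have to be copied by hand. A sketch, assuming kubeadm and crictl are on the PATH:

```shell
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers \
  | xargs -n1 crictl pull
```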

7. Initialize k8s

Run on the master node:

kubeadm init \
  --apiserver-advertise-address=172.29.245.219 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.31.7 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
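If init fails partway (wrong CIDR, image pull errors, etc.), the host can be wiped back to a clean state before retrying. Note that this removes all kubeadm state on the node:

```shell
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config
```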

8. Configure kubectl Access

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
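kubectl should now reach the API server, although the node will report NotReady until a CNI plugin (Calico, below) is installed:

```shell
kubectl cluster-info
kubectl get nodes
```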

9. Install Calico

Pull all the required images and re-tag them:

ctr -n k8s.io image pull docker.m.daocloud.io/calico/pod2daemon-flexvol:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/typha:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/kube-controllers:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/apiserver:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/csi:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/cni:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/node:v3.29.0
ctr -n k8s.io image pull docker.m.daocloud.io/calico/node-driver-registrar:v3.29.0


ctr -n k8s.io image tag docker.m.daocloud.io/calico/pod2daemon-flexvol:v3.29.0    docker.io/calico/pod2daemon-flexvol:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/typha:v3.29.0                 docker.io/calico/typha:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/kube-controllers:v3.29.0      docker.io/calico/kube-controllers:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/apiserver:v3.29.0             docker.io/calico/apiserver:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/csi:v3.29.0                   docker.io/calico/csi:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/cni:v3.29.0                   docker.io/calico/cni:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/node:v3.29.0                  docker.io/calico/node:v3.29.0
ctr -n k8s.io image tag docker.m.daocloud.io/calico/node-driver-registrar:v3.29.0 docker.io/calico/node-driver-registrar:v3.29.0

Download the install manifests:

mkdir -p /data/install-calico
cd /data/install-calico

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml

Modify the configuration

custom-resources.yaml has one setting that must be changed to the pod CIDR passed to kubeadm init:

cidr: 10.244.0.0/16
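The change can also be scripted. A sketch, assuming the manifest still carries Calico's default CIDR of 192.168.0.0/16:

```shell
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml
grep cidr: custom-resources.yaml
```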

Run the install:

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

10. Verify Status

Verify the nodes

Run:

kubectl get nodes

Output:

NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   89m   v1.31.7

Verify the pods

Run:

kubectl get pods --all-namespaces -o wide

Output:

NAMESPACE          NAME                                     READY   STATUS    RESTARTS      AGE   IP               NODE     NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-7fc6dd764d-8qr52        1/1     Running   0             77m   10.244.219.71    master   <none>           <none>
calico-apiserver   calico-apiserver-7fc6dd764d-92nfr        1/1     Running   0             77m   10.244.219.67    master   <none>           <none>
calico-system      calico-kube-controllers-765dfdcc-474gr   1/1     Running   0             77m   10.244.219.68    master   <none>           <none>
calico-system      calico-node-7bvdb                        1/1     Running   0             77m   172.29.245.219   master   <none>           <none>
calico-system      calico-typha-759b5b7fb-8jcs7             1/1     Running   0             77m   172.29.245.219   master   <none>           <none>
calico-system      csi-node-driver-b4rt9                    2/2     Running   0             77m   10.244.219.69    master   <none>           <none>
kube-system        coredns-855c4dd65d-d97sr                 1/1     Running   0             89m   10.244.219.66    master   <none>           <none>
kube-system        coredns-855c4dd65d-s2d8x                 1/1     Running   0             89m   10.244.219.65    master   <none>           <none>
kube-system        etcd-master                              1/1     Running   1             90m   172.29.245.219   master   <none>           <none>
kube-system        kube-apiserver-master                    1/1     Running   1             90m   172.29.245.219   master   <none>           <none>
kube-system        kube-controller-manager-master           1/1     Running   1             90m   172.29.245.219   master   <none>           <none>
kube-system        kube-proxy-p6lrc                         1/1     Running   0             89m   172.29.245.219   master   <none>           <none>
kube-system        kube-scheduler-master                    1/1     Running   1             90m   172.29.245.219   master   <none>           <none>
tigera-operator    tigera-operator-f8bc97d4c-hbdv8          1/1     Running   0             77m   172.29.245.219   master   <none>           <none>

11. Adding Nodes Later

On the new node, install containerd, kubeadm, kubelet, and kubectl following the steps above.

List the existing tokens:

kubeadm token list

Generate the join command:

kubeadm token create --print-join-command
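The printed command has the following shape (every value here is a placeholder; use the exact output from your cluster) and is run as root on the new node:

```shell
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```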

12. Recommended Tutorials

《Kubernetes 常用命令解析》 (Kubernetes Common Commands Explained)

《DevOps-Doc》

