1 System preparation before installing Kubernetes
Disable the firewall
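The exact command depends on the distribution; a sketch, assuming ufw on Ubuntu/Debian or firewalld on RHEL-family systems (adjust for whichever is installed):

```shell
# Ubuntu/Debian (ufw)
sudo ufw disable

# RHEL/CentOS (firewalld)
sudo systemctl disable --now firewalld
```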
Disable swap

```shell
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```
Pass bridged IPv4 traffic to iptables chains (set this on all nodes):
```shell
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
```
Install containerd
```shell
apt install -y containerd
mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
vi /etc/containerd/config.toml   # set SystemdCgroup = true (see the note near the end of this doc)
systemctl restart containerd
systemctl enable containerd
```
| Runtime | Unix domain socket |
| --- | --- |
| containerd | unix:///var/run/containerd/containerd.sock |
| CRI-O | unix:///var/run/crio/crio.sock |
| Docker Engine (with cri-dockerd) | unix:///var/run/cri-dockerd.sock |
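If more than one runtime is installed, kubeadm cannot pick one automatically and the socket has to be passed explicitly. A sketch using the containerd socket from the table above:

```shell
# Tell kubeadm which CRI socket to use (containerd here)
kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock
# Inspect containers through the same socket with crictl
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps
```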
Edit /etc/hosts

Map hostnames to IP addresses so that a future IP change does not require reconfiguring large parts of the k8s cluster:
```shell
10.211.55.7 baihl-node
10.211.55.6 baihl-master
```
2 Install kubeadm, kubelet and kubectl

Unless a step explicitly names a specific node, run every step below on all nodes.

Update the apt package index and install the packages needed to use the Kubernetes apt repository:
```shell
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
Download the public signing key for the Kubernetes package repositories. All repositories use the same signing key, so you can ignore the version in the URL:
```shell
# If the directory `/etc/apt/keyrings` does not exist, create it before the curl command:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
Add the appropriate Kubernetes apt repository. Note that this repository only contains packages for Kubernetes 1.30; for other Kubernetes minor versions, change the minor version in the URL to match the one you want (and check that the documentation you are reading matches the Kubernetes version you plan to install).
```shell
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
```shell
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
(Optional) Enable the kubelet service before running kubeadm:

```shell
sudo systemctl enable --now kubelet
```
3 Create the cluster with kubeadm

3.1 Sample init configuration

On the master node, generate the default configuration and adjust its parameters:

```shell
kubeadm config print init-defaults > kubeadm-config.yaml
```
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  certSANs:
  - lb.k8s.domain
  - <vip/lb_ip>
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /data/etcd
imageRepository: k8s.gcr.io
controlPlaneEndpoint: lb.k8s.domain
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```
You can then run the init step with this configuration via `kubeadm init --config kubeadm-config.yaml`.
3.2 Install the control plane with command-line flags

kubeadm init brings up the control-plane components on the master node. It also accepts many flags for tuning the cluster configuration (use -h to see the full list). Common deployment flags:
--control-plane-endpoint: IP address or DNS name of the control plane (kube-apiserver).
--apiserver-advertise-address: the IP address the kube-apiserver advertises.
--pod-network-cidr: the pod network range; the control plane automatically assigns a CIDR from it to each node.
--service-cidr: the Service IP range, default "10.96.0.0/12".
--kubernetes-version: the Kubernetes version to install.
--image-repository: the image registry to pull Kubernetes images from.
--upload-certs: upload the certificates shared among all control-plane instances to the cluster.
--node-name: overrides the hostname used as the node name.
```shell
kubeadm init \
  --apiserver-advertise-address=10.211.55.4 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --control-plane-endpoint=baihl-master \
  --kubernetes-version=1.30.3
```
When it finishes, the output includes the commands for joining additional master and worker nodes:
```shell
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate
authorities and service account keys on each node and then running the
following as root:

  kubeadm join baihl-master:6443 --token 498rk2.b01jixhgg46lr7k4 \
    --discovery-token-ca-cert-hash sha256:ae9c570dc73f7164d73a016f3bcfc338dc9d01baed8a4b12858e16cd6dbc0d2a \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join baihl-master:6443 --token 498rk2.b01jixhgg46lr7k4 \
    --discovery-token-ca-cert-hash sha256:ae9c570dc73f7164d73a016f3bcfc338dc9d01baed8a4b12858e16cd6dbc0d2a
```
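The init output also tells you how to make kubectl usable for a regular user on the master; the standard steps kubeadm prints are:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Without this (or `export KUBECONFIG=/etc/kubernetes/admin.conf` as root), kubectl cannot reach the new cluster.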
You can follow the kubelet logs with `journalctl -f -u kubelet.service`.
4 Install the Calico network plugin
Install the Tigera Calico operator and custom resource definitions:

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
```
Install Calico by creating the necessary custom resources. For more information about the configuration options available in this manifest, see the installation reference:

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
```
Confirm that all pods are running with the following command, and wait until every pod's STATUS is Running:

```shell
watch kubectl get pods -n calico-system
```
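Once the calico-system pods are all Running, the nodes should move to Ready as well; a quick check (output depends on your cluster):

```shell
kubectl get nodes -o wide
kubectl get pods -n kube-system
```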
5 Install the Dashboard

1. Install the Dashboard service. Here we use helm; see the official page: https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard
```shell
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
```
2. Create a user:

```shell
$ wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml
$ kubectl apply -f dashboard-user.yaml
```
3. Create a login token:

```shell
$ kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IkQ1aFlySU9EYzBORlFiZkxLUU5KN0hFRlJZMXNjcUtSeUZoVHFnMW1UU00ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjc2NjI0ODM3LCJpYXQiOjE2NzY2MjEyMzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZjk0MjU5MGItZjgzNC00ZDVkLTlhZGItNmI0NzY0MjAyNmUzIn19LCJuYmYiOjE2NzY2MjEyMzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.eWxD-pVzY9S-QcS4r-YpY7MAzZMg0jgP_Dj0i64aH8z2_NU25IJuNYHWB-3A7H6oEMEAofSbIYui-uE8a2oroLylwSPPP_IjcKmGZ2AUiFOfSD_R2QXzl2AC5-BsXBK068KzSYBfieesB-oWQjS8hKd4AOHjLKWWZlp9gJd_qdc8BbQWrKlKpmdmczQvXpeufj371W_taJIH_xxogmUVMgJOVxwawNsD5YGt0O7-_Y70s8AL9DQs3fAAU4YXGG8TmOI3yvOQCqNgfZuiVg2uE5dc4SGzk_FfBOf3QNCpcL1tvjKe6mH5GWlCNEYbJ4eu9flny9a4iRR2gGpt30AA5Q
```
6 Miscellaneous
Set SystemdCgroup = true in /etc/containerd/config.toml; otherwise pods will keep restarting.

Reference: troubleshooting and fixing the etcd container restarting repeatedly during k8s cluster deployment (k8s集群部署时etcd容器不停重启问题及处理)
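A minimal sketch of flipping the flag non-interactively; the sed pattern is an assumption about the layout produced by `containerd config default`, and the demo runs against a temporary copy of the relevant fragment (on a real node, target /etc/containerd/config.toml and then `systemctl restart containerd`):

```shell
# Work on a throwaway copy of the config fragment we care about
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Flip the cgroup driver flag in place
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"

# Show the result
grep SystemdCgroup "$cfg"
```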
Inspecting logs

The API server log file is usually located at /var/log/kube-apiserver.log or /var/log/pods/kube-apiserver-pod-id/kube-apiserver.log.
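On a kubeadm cluster the API server runs as a static pod in kube-system, so its logs can also be read through kubectl once the cluster is reachable (the pod name below assumes the master host from this guide):

```shell
# Static control-plane pods carry the label component=kube-apiserver
kubectl logs -n kube-system -l component=kube-apiserver --tail=50

# Or by pod name; kubeadm names it kube-apiserver-<node-name>
kubectl logs -n kube-system kube-apiserver-baihl-master
```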
If you need to run kubeadm init again after a previous run, run kubeadm reset first.
Common commands:

```shell
kubectl get all -o wide -A
kubectl api-resources --verbs=list --namespaced -o name
```