Deploying Kubernetes 1.23 with kubeadm

kubeadm is the official Kubernetes tool for quickly installing and deploying Kubernetes clusters. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest upstream best practices for cluster configuration.


1. Preparation

1.1 System configuration

Before installing, complete the following preparation. The three CentOS 7.9 hosts are:

cat /etc/hosts
192.168.96.151 node1
192.168.96.152 node2
192.168.96.153 node3

Complete the following system configuration on each host.

If a firewall is enabled on the hosts, open the ports required by the Kubernetes components (see the "Check required ports" section of Installing kubeadm), or disable the host firewall.
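For example, if firewalld is kept running, the ports could be opened roughly as follows; this is only a sketch based on the port list in the kubeadm documentation, so adjust it to your environment:

# Control-plane node (node1)
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
# Worker nodes (node2, node3) would instead open 10250/tcp and 30000-32767/tcp (NodePort Services)
firewall-cmd --reload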

Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled
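To make this change non-interactively, a one-line alternative (assuming the file still contains the default SELINUX=enforcing entry) is:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config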

Create the /etc/modules-load.d/containerd.conf configuration file:

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to apply the configuration:

modprobe overlay
modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the configuration:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

1.2 Prerequisites for enabling IPVS

Since IPVS has been merged into the mainline kernel, enabling IPVS mode for kube-proxy requires loading the following kernel modules first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every server node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Run

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

to check whether the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is also a good idea to install the management tool ipvsadm.

yum install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if IPVS mode is configured.
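Later, once the cluster is up and kube-proxy is running in IPVS mode, ipvsadm can be used to confirm that IPVS virtual servers were actually created, for example:

# List IPVS virtual servers and their backends; the kubernetes Service
# (10.96.0.1:443) is expected to appear here once kube-proxy runs in IPVS mode.
ipvsadm -Ln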

1.3 Deploying the container runtime containerd

Install the containerd container runtime on every server node.

Download the containerd binary release:

wget https://github.com/containerd/containerd/releases/download/v1.5.8/cri-containerd-cni-1.5.8-linux-amd64.tar.gz

The cri-containerd-cni-1.5.8-linux-amd64.tar.gz archive is already laid out according to the directory structure recommended for the official binary deployment; it contains the systemd unit file and the deployment files for containerd and the CNI plugins. Extract it into the system root directory /:

tar -zxvf cri-containerd-cni-1.5.8-linux-amd64.tar.gz -C /
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/critest
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
usr/local/bin/ctr
usr/local/bin/crictl
......
opt/cni/
opt/cni/bin/
opt/cni/bin/bridge
......

Note: testing showed that the runc bundled in cri-containerd-cni-1.5.8-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7, so download runc separately from the runc GitHub releases and replace the runc installed by the containerd archive above:

wget https://github.com/opencontainers/runc/releases/download/v1.1.0-rc.1/runc.amd64
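The tarball installed runc to /usr/local/sbin/runc (see the extraction listing above), so the downloaded binary can simply replace it, for example:

# Replace the bundled runc with the statically built release binary
install -m 755 runc.amd64 /usr/local/sbin/runc
runc -v    # should print the runc version without linker errors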

Next, generate the containerd configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the [Container runtimes](https://kubernetes.io/docs/setup/production-environment/container-runtimes/) documentation, for Linux distributions that use systemd as the init system, using systemd as the container cgroup driver makes the node more stable under resource pressure. Therefore, configure containerd on every node to use systemd as its cgroup driver.

Edit the previously generated configuration file /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Then modify the following section of /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.5"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
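If you prefer to script these two edits instead of editing the file by hand, something like the following should work against the freshly generated config.toml (the default values being replaced are assumptions based on containerd 1.5.8; verify the result afterwards):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.5"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml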

Enable containerd to start on boot and start it now:

systemctl enable containerd --now

Test with crictl to make sure version information is printed and no errors are reported:

crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.5.8
RuntimeApiVersion:  v1alpha2

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on every node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install kubelet kubeadm kubectl
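This installs the latest packages available in the repository (1.23.1 at the time of writing). If you want to pin a specific version instead, yum accepts an explicit version suffix, e.g.:

yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1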

Running kubelet --help shows that most of kubelet's command-line flags have been DEPRECATED. The official recommendation is to pass a configuration file with --config and set those options there; see "Set Kubelet parameters via a config file" for details. Kubernetes did this to support Dynamic Kubelet Configuration; see "Reconfigure a Node's Kubelet in a Live Cluster".

The kubelet configuration file must be in JSON or YAML format; see the documentation for details.

Since Kubernetes 1.8, swap must be disabled on the system; otherwise kubelet will not start under the default configuration. Disable swap as follows:

swapoff -a

Edit /etc/fstab and comment out the swap automount entry, then use free -m to confirm that swap is disabled.
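A quick way to comment out the swap entry is shown below; this is only a sketch, so check /etc/fstab afterwards:

# Comment out any fstab line that mounts a swap device or swap file
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
free -m    # the Swap line should now show 0 total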

Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.

2.2 Initializing the cluster with kubeadm init

Enable the kubelet service on boot on every node:

systemctl enable kubelet.service

Running

kubeadm config print init-defaults --component-configs KubeletConfiguration

prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

As the defaults show, imageRepository controls where the images required by the cluster are pulled from during initialization. Based on the defaults, create the configuration file kubeadm.yaml used to initialize this cluster:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.96.151
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Alibaba Cloud registry so that images can still be pulled even though gcr.io is blocked, criSocket selects containerd as the container runtime, the kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.

Before initializing the cluster, you can pre-pull the container images Kubernetes needs on each server node with kubeadm config images pull --config kubeadm.yaml.

kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:

kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.96.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.96.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.003580 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: o7d0h6.i9taufdl7u1un4va
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.96.151:6443 --token o7d0h6.i9taufdl7u1un4va \
    --discovery-token-ca-cert-hash sha256:6c55b14e9d71ef098ad0e8f249d85004c41b48063dbcd7692997930f9637f22b

The full initialization output is recorded above; from it you can see the key steps needed to manually initialize and install a Kubernetes cluster. The key items are:

[certs] generates the various certificates.

[kubeconfig] generates the kubeconfig files.

[kubelet-start] generates the kubelet configuration file /var/lib/kubelet/config.yaml.

[control-plane] creates the static pods for kube-apiserver, kube-controller-manager, and kube-scheduler from the YAML files in the /etc/kubernetes/manifests directory.

[bootstraptoken] generates the token; record it, since it is needed later when adding nodes to the cluster with kubeadm join.

The following commands configure kubectl access to the cluster for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, it prints the command for joining nodes to the cluster:

kubeadm join 192.168.96.151:6443 --token o7d0h6.i9taufdl7u1un4va \
    --discovery-token-ca-cert-hash sha256:6c55b14e9d71ef098ad0e8f249d85004c41b48063dbcd7692997930f9637f22b

Check the cluster status to confirm that all components are healthy; here an error shows up:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

controller-manager and scheduler are reported as unhealthy. Edit the static pod manifests kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/, delete the - --port=0 line from the command options in both files, and restart kubelet; checking again afterwards, everything is healthy. One way to script this is shown below.
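For example, the two manifests can be patched and kubelet restarted like this (back up the files first; kubelet recreates the static pods automatically):

sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
systemctl restart kubelet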

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

If you run into problems during cluster initialization, you can clean up with the kubeadm reset command.

2.3 Installing the package manager Helm 3

Helm is the package manager for Kubernetes, and the following steps will also use Helm to install common Kubernetes components. First install Helm on the master node node1.

wget https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz
tar -zxvf helm-v3.7.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/

Run helm list to confirm there is no error output.

2.4 Deploying the pod network component Calico

Calico is chosen as the Kubernetes pod network component; install it into the cluster with Helm as follows.

Download the tigera-operator Helm chart:

wget https://github.com/projectcalico/calico/releases/download/v3.21.2/tigera-operator-v3.21.2-1.tgz

View the customizable values of this chart:

helm show values tigera-operator-v3.21.2-1.tgz
imagePullSecrets: {}
installation:
  enabled: true
  kubernetesProvider: ""
apiServer:
  enabled: true
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
# Configuration for the tigera operator
tigeraOperator:
  image: tigera/operator
  version: v1.23.3
  registry: quay.io
calicoctl:
  image: quay.io/docker.io/calico/ctl
  tag: v3.21.2

The customized values.yaml is as follows:

# The values above can be customized here, for example to pull the Calico images from a private registry.
# Since this is just a local test of the new Kubernetes version, values.yaml is left empty.

Install Calico with Helm:

helm install calico tigera-operator-v3.21.2-1.tgz -f values.yaml

Wait for and confirm that all pods are in the Running state:

watch kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7f58dbcbbd-kdnlg   1/1     Running   0          2m34s
calico-node-nv794                          1/1     Running   0          2m34s
calico-typha-65f579bc5d-4pbfz              1/1     Running   0          2m34s

View the API resources that Calico adds to Kubernetes:

kubectl api-resources | grep calico
bgpconfigurations               crd.projectcalico.org/v1   false   BGPConfiguration
bgppeers                        crd.projectcalico.org/v1   false   BGPPeer
blockaffinities                 crd.projectcalico.org/v1   false   BlockAffinity
caliconodestatuses              crd.projectcalico.org/v1   false   CalicoNodeStatus
clusterinformations             crd.projectcalico.org/v1   false   ClusterInformation
felixconfigurations             crd.projectcalico.org/v1   false   FelixConfiguration
globalnetworkpolicies           crd.projectcalico.org/v1   false   GlobalNetworkPolicy
globalnetworksets               crd.projectcalico.org/v1   false   GlobalNetworkSet
hostendpoints                   crd.projectcalico.org/v1   false   HostEndpoint
ipamblocks                      crd.projectcalico.org/v1   false   IPAMBlock
ipamconfigs                     crd.projectcalico.org/v1   false   IPAMConfig
ipamhandles                     crd.projectcalico.org/v1   false   IPAMHandle
ippools                         crd.projectcalico.org/v1   false   IPPool
ipreservations                  crd.projectcalico.org/v1   false   IPReservation
kubecontrollersconfigurations   crd.projectcalico.org/v1   false   KubeControllersConfiguration
networkpolicies                 crd.projectcalico.org/v1   true    NetworkPolicy
networksets                     crd.projectcalico.org/v1   true    NetworkSet

These API resources belong to Calico, so it is not recommended to manage them with kubectl; use calicoctl instead. Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -o kubectl-calico -O -L "https://github.com/projectcalico/calicoctl/releases/download/v3.21.2/calicoctl"
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h
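As a further check, the plugin can be used to list Calico resources, for example the IP pool in use; the expectation that it covers the pod CIDR 10.244.0.0/16 configured in kubeadm.yaml is an assumption about the operator's defaults:

# Explicitly point calicoctl at the Kubernetes datastore via the local kubeconfig
DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config kubectl calico get ippools -o wide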

2.5 Verifying that cluster DNS works

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Once inside, run nslookup kubernetes.default and confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Adding nodes to the Kubernetes cluster

Now add node2 and node3 to the Kubernetes cluster. Run the following on node2 and node3 respectively:

kubeadm join 192.168.96.151:6443 --token o7d0h6.i9taufdl7u1un4va \
    --discovery-token-ca-cert-hash sha256:6c55b14e9d71ef098ad0e8f249d85004c41b48063dbcd7692997930f9637f22b

node2 and node3 join the cluster without issues. On the master node, list the nodes in the cluster:

kubectl get node
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   29m     v1.23.1
node2   Ready    <none>                 5m28s   v1.23.1
node3   Ready    <none>                 5m4s    v1.23.1

3. Deploying common Kubernetes components

3.1 Deploying ingress-nginx with Helm

To expose services inside the cluster to the outside, Ingress is needed. Next, deploy ingress-nginx to Kubernetes with Helm. The Nginx Ingress Controller is deployed on the edge nodes of the cluster.

Here node1 (192.168.96.151) is used as the edge node; label it:

kubectl label node node1 node-role.kubernetes.io/edge=

Download the ingress-nginx Helm chart:

wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.0.13/ingress-nginx-4.0.13.tgz

View the customizable values of the ingress-nginx-4.0.13.tgz chart:

helm show values ingress-nginx-4.0.13.tgz

Customize values.yaml as follows:

controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: k8s.gcr.io
    # image: ingress-nginx/controller
    # tag: "v1.1.0"
    registry: docker.io
    image: unreachableg/k8s.gcr.io_ingress-nginx_controller
    tag: "v1.1.0"
    digest: sha256:4f5df867e9367f76acfc39a0f85487dc63526e27735fa82fc57d6a652bafbbf6
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

The nginx ingress controller replicaCount is 1, so it will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller Service; instead, hostNetwork: true makes the controller use the host network. Because k8s.gcr.io is blocked, the image is replaced with unreachableg/k8s.gcr.io_ingress-nginx_controller; pull it in advance:

crictl pull unreachableg/k8s.gcr.io_ingress-nginx_controller:v1.1.0

helm install ingress-nginx ingress-nginx-4.0.13.tgz --create-namespace -n ingress-nginx -f values.yaml

kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7f574989bc-xwbf4   1/1     Running   0          117s

Test by visiting http://192.168.96.151; if it returns the default nginx 404 page, the deployment is complete.
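For example, from any machine that can reach node1:

curl -i http://192.168.96.151
# Expect an HTTP 404 response served by nginx, e.g. "HTTP/1.1 404 Not Found"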

3.2 Deploying the dashboard with Helm

First deploy metrics-server:

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.2/components.yaml

Change the image in components.yaml to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2, and add --kubelet-insecure-tls to the container's startup arguments, for example as sketched below.
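One way to script part of this is the following; the original image string k8s.gcr.io/metrics-server/metrics-server:v0.5.2 is an assumption about the v0.5.2 manifest, so double-check before applying:

# Point the image at the mirror on Docker Hub
sed -i 's#k8s.gcr.io/metrics-server/metrics-server:v0.5.2#docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.5.2#' components.yaml
# Then add "- --kubelet-insecure-tls" to the metrics-server container args by hand,
# and confirm both edits:
grep -nE 'image:|kubelet-insecure-tls' components.yaml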

kubectl apply -f components.yaml

Once the metrics-server pod has started, wait a little while and you can use kubectl top to view metrics for the cluster and pods:

kubectl top node --use-protocol-buffers=true
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   219m         5%     3013Mi          39%
node2   102m         2%     1576Mi          20%
node3   110m         2%     1696Mi          21%

kubectl top pod -n kube-system --use-protocol-buffers=true
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-59d64cd4d4-9mclj                4m           17Mi
coredns-59d64cd4d4-fj7xr                4m           17Mi
etcd-node1                              25m          154Mi
kube-apiserver-node1                    80m          465Mi
kube-controller-manager-node1           17m          61Mi
kube-proxy-hhlhc                        1m           21Mi
kube-proxy-nrhq7                        1m           19Mi
kube-proxy-phmrw                        1m           17Mi
kube-scheduler-node1                    4m           24Mi
kubernetes-dashboard-5cb95fd47f-6lfnm   3m           36Mi
metrics-server-9ddcc8ddf-jvlzs          5m           21Mi

Next, deploy the Kubernetes dashboard with Helm. Add the chart repo:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update

View the customizable values of the chart:

helm show values kubernetes-dashboard/kubernetes-dashboard

Customize values.yaml as follows:

image:
  repository: kubernetesui/dashboard
  tag: v2.4.0
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
  - k8s.example.com
  tls:
  - secretName: example-com-tls-secret
    hosts:
    - k8s.example.com
metricsScraper:
  enabled: true

First create the secret that holds the TLS certificate for k8s.example.com:

kubectl create secret tls example-com-tls-secret \
  --cert=cert.pem \
  --key=key.pem \
  -n kube-system
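The cert.pem and key.pem above are assumed to already exist. For a purely local test, a self-signed certificate for k8s.example.com could be generated first, e.g.:

# Self-signed certificate for testing only (valid for 10 years)
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=k8s.example.com"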

Deploy the dashboard with Helm:

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  -n kube-system \
  -f values.yaml

Confirm that the command above deploys successfully.

Create an administrator ServiceAccount:

kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system
kubectl create clusterrolebinding kube-dashboard-admin-sa \
  --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa

Get the token that the cluster administrator needs to log in to the dashboard:

kubectl -n kube-system get secret | grep kube-dashboard-admin-sa-token
kube-dashboard-admin-sa-token-rcwlb   kubernetes.io/service-account-token   3   68s

kubectl describe -n kube-system secret/kube-dashboard-admin-sa-token-rcwlb
Name:         kube-dashboard-admin-sa-token-rcwlb
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kube-dashboard-admin-sa
              kubernetes.io/service-account.uid: fcdf27f6-f6f9-4f76-b64e-edc91fb1479b

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkYxWTd5aDdzYWsyeWJVMFliUUhJMXI4YWtMZFd4dGFDT1N4eEZoam9HLUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYS10b2tlbi1yY3dsYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImZjZGYyN2Y2LWY2ZjktNGY3Ni1iNjRlLWVkYzkxZmIxNDc5YiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlLWRhc2hib2FyZC1hZG1pbi1zYSJ9.R3l19_Nal4B2EktKFSJ7CgOqAngG_MTgzHRRjWdREN7dLALyfiRXYIgZQ90hxM-a9z2sPXBzfJno4OGP4fPX33D8h_4fgxfpVLjKqjdlZ_HAks_6sV9PBzDNXb_loNW8ECfsleDgn6CZin8Vx1w7sgkoEIKq0H-iZ8V9pRV0fTuOZcB-70pV_JX6H6WBEOgRIAZswhAoyUMvH1qNl47J5xBNwKRgcqP57NCIODo6FiClxfY3MWo2vz44R5wYCuBJJ70p6aBWixjDSxnp5u9mUP0zMF_igICl_OfgKuPyaeuIL83U8dS5ovEwPPGzX5mHUgaPH7JLZmKRNXJqLhTweA
ca.crt:     1066 bytes

Use the token above to log in to the Kubernetes dashboard.


References

Installing kubeadm

Creating a cluster with kubeadm

https://github.com/containerd/containerd

https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2

https://docs.projectcalico.org/