Deploying Kubernetes 1.18 on CentOS 7 with kubeadm

0. Installation plan

172.21.203.132  master01

172.21.203.133  node01

172.21.203.134  node02

Note: steps 1 through 9 below must be performed on every node.

1. Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

systemctl status firewalld
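Disabling the firewall outright is the simplest option for a lab cluster. If you would rather keep firewalld running, a hedged alternative is to open the ports Kubernetes needs on the master (workers additionally need 10250/tcp and the NodePort range 30000-32767/tcp):

firewall-cmd --permanent --add-port=6443/tcp

firewall-cmd --permanent --add-port=2379-2380/tcp

firewall-cmd --permanent --add-port=10250-10252/tcp

firewall-cmd --reload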

2. Disable swap

Edit /etc/fstab and comment out the swap line.
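A minimal command-line sketch of the same change (assuming a standard fstab layout; verify the result before relying on it):

# turn swap off for the running system
swapoff -a

# comment out any swap entry so it stays off after reboot
sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab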

3. Disable SELinux

Edit the file /etc/selinux/config,

set SELINUX=disabled, then reboot.
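If you want the change to take effect without rebooting, a hedged equivalent is:

# switch to permissive mode for the current session
setenforce 0

# persist the change across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config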

4. Add the Aliyun yum repository

rm -rfv /etc/yum.repos.d/*

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
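Optionally, rebuild the yum metadata cache so the new repository is picked up cleanly:

yum clean all

yum makecache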

5. Set the hostnames

Following the plan in step 0, run the matching command on each node:

On 172.21.203.132:

hostnamectl set-hostname master01

On 172.21.203.133:

hostnamectl set-hostname node01

On 172.21.203.134:

hostnamectl set-hostname node02
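The kubeadm preflight checks later in this guide warn that these hostnames cannot be resolved ("lookup master01 ... no such host"). To avoid that, you can optionally map them in /etc/hosts on every node, using the addresses from step 0:

cat >> /etc/hosts <<EOF
172.21.203.132 master01
172.21.203.133 node01
172.21.203.134 node02
EOF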

6. Configure kernel parameters

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
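These sysctls only exist once the br_netfilter module is loaded; to load it and apply the new file without a reboot (a standard follow-up, assumed rather than shown in the original):

modprobe br_netfilter

sysctl --system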

7. Install common packages

yum install vim bash-completion net-tools gcc -y

8. Install docker-ce

Run the following to install the prerequisites and docker-ce:

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce

After Docker is installed, add the Aliyun registry mirror:

mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
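The kubeadm preflight check further down warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. Optionally, you can set the driver in the same daemon.json (a hedged variant combining both settings):

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF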

systemctl daemon-reload

systemctl restart docker
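The preflight output below also warns that the docker service is not enabled; enabling it now avoids that warning:

systemctl enable docker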

9. Install kubectl, kubelet, and kubeadm

Run the following to configure the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubectl, kubelet, and kubeadm:

yum -y install kubectl-1.18.0 kubelet-1.18.0 kubeadm-1.18.0

systemctl enable kubelet

10. Initialize the Kubernetes cluster (master node)

Note:

From this step on, the commands differ between the master and worker nodes; everything above is identical on all nodes.

Run the following to initialize the cluster, where --apiserver-advertise-address is the master node's IP:

kubeadm init --kubernetes-version=1.18.0 \
  --apiserver-advertise-address=172.21.203.132 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

On success, initialization returns output like the following:

W0319 15:06:42.121644 41707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[WARNING Hostname]: hostname "master01" could not be reached
[WARNING Hostname]: hostname "master01": lookup master01 on 114.114.114.114:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 172.21.203.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [172.21.203.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [172.21.203.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0319 15:10:23.400590 41707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0319 15:10:23.402605 41707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.002973 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[kubelet-check] Initial timeout of 40s passed.
[bootstrap-token] Using token: g50g2a.rdqkyxt1max6df3k
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.21.203.132:6443 --token g50g2a.rdqkyxt1max6df3k \
    --discovery-token-ca-cert-hash sha256:826f88a5cd9cd028be49f27edd4fe6c1c4d164d60837038adf40759d6b908d2e

Set up kubectl:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to confirm the cluster membership:

[root@master01 ~]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   8m31s   v1.18.0

The master reports NotReady because no pod network has been deployed yet; the next step takes care of that.

11. Install the Calico network (master node)

Run the following to install Calico:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Run kubectl get pod --all-namespaces and wait until every pod's STATUS is Running, then run kubectl get node again:

[root@master01 kubernetes]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-65d7476764-sdtr5   1/1     Running   0          16m
kube-system   calico-node-rxgxz                          1/1     Running   0          16m
kube-system   coredns-7ff77c879f-5x88w                   1/1     Running   0          36m
kube-system   coredns-7ff77c879f-bqmbl                   1/1     Running   0          36m
kube-system   etcd-master01                              1/1     Running   0          36m
kube-system   kube-apiserver-master01                    1/1     Running   0          36m
kube-system   kube-controller-manager-master01           1/1     Running   0          36m
kube-system   kube-proxy-lpjhk                           1/1     Running   0          36m
kube-system   kube-scheduler-master01                    1/1     Running   0          36m

[root@master01 kubernetes]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   37m   v1.18.0

12. Join the nodes to the cluster (worker nodes)

Run the following on both worker nodes to join them to the cluster:

kubeadm join 172.21.203.132:6443 --token g50g2a.rdqkyxt1max6df3k \
    --discovery-token-ca-cert-hash sha256:826f88a5cd9cd028be49f27edd4fe6c1c4d164d60837038adf40759d6b908d2e

W0319 15:50:28.617785 43008 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[WARNING Hostname]: hostname "node01" could not be reached
[WARNING Hostname]: hostname "node01": lookup node01 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
Certificate signing request was sent to apiserver and a response was received.
The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
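Bootstrap tokens expire after 24 hours by default, so the token printed by kubeadm init may no longer work on a node joined later. If so, generate a fresh join command on the master:

kubeadm token create --print-join-command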

Set up kubectl on the node:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to view the cluster once the nodes have joined:

kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   82m     v1.18.0
node01     Ready    <none>   43m     v1.18.0
node02     Ready    <none>   2m57s   v1.18.0

13. Install kubernetes-dashboard (master node)

Run the following to download the dashboard yaml file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml

Edit the yaml file and add NodePort settings to the kubernetes-dashboard Service:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added (1)
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added (2)
  selector:
    k8s-app: kubernetes-dashboard

Run the following to install the dashboard:

kubectl create -f recommended.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Run the following to confirm the dashboard is up:

[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.222.217   <none>        8000/TCP        2m27s
kubernetes-dashboard        NodePort    10.10.190.83    <none>        443:30000/TCP   2m27s

Open https://<node-ip>:30000 in a browser to reach the login page.

[Screenshot: Kubernetes dashboard authentication page]

14. Get a token

Run the following to list the secrets:

kubectl -n kubernetes-dashboard get secret
NAME                               TYPE                                  DATA   AGE
default-token-tlmjr                kubernetes.io/service-account-token   3      39m
kubernetes-dashboard-certs         Opaque                                0      39m
kubernetes-dashboard-csrf          Opaque                                1      39m
kubernetes-dashboard-key-holder    Opaque                                2      39m
kubernetes-dashboard-token-vpjkw   kubernetes.io/service-account-token   3      39m

Run the following to extract the token; the secret name (kubernetes-dashboard-token-vpjkw) is the service-account-token secret from the listing above:

kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-vpjkw | grep token | awk 'NR==3{print $2}'

eyJhbGciOiJSUzI1NiIsImtpZCI6IlA3YzRaOWxCenlpMXM2dllNWXpvMXJjQ3ZqRmZNLVJ5U2VncHUxZDhOUjQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi12cGprdyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImUwOTU4NWUzLTllZjctNGM5OC1iMTc1LWEzMDZjNTQ1YjQ2MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.bemA1DYy21uq2HUwxMI4tzu3zPV1NAZgWut6p_Em5UIFzlkV_CK-4OH2btdm_QtAZtTtk-rL8kRGP2g1Alvuj-4jiMJVbpuDbTUE4wwCwKY3KrQxHXGrwf9wZXC97XQY-eQs5I4vIzmUS_sMB_TCHv_tzsuFymeB44x3HG1CxrNB0E_o2X-O8Yn4l_9XNI1nw6VIURqSKRHqxT-rJMVX6Ga6otPu_7ZOBuAVQC8uHrTawVHvKCnmH4--o41zoeU5c71V_UWGvWEGXD-8EbfWY51jFrvwuHvaW3sIhOsdezu4jctuKEVarbKIbzC3htEEnClQEugq3jYCjsZaEAubTQ
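The random suffix of the token secret (vpjkw here) differs per cluster. A hedged one-liner that looks the name up instead of hard-coding it:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | awk '/kubernetes-dashboard-token/{print $1}') | awk '/^token/{print $2}'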

15. Log in with the token

Paste the token into the token field on the login page and click Sign in.

[Screenshot: Kubernetes dashboard login page]