Article reposted from: https://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247483891&idx=1&sn=17dcd7cd0645df509c8e49059a2f00d7&chksm=e9fdd407de8a5d119d439b70dc2c381ec2eceddb63ed43767c2e1b7cffefe077e41955568cb5&cur_album_id=1341273083637989377&scene=189#wechat_redirect

Environment Preparation

Architecture diagram

IP address plan

Alibaba Cloud servers with the CentOS 7.7 image are used; the default kernel version is 3.10.0-1062.9.1.el7.x86_64.

Note: Alibaba Cloud servers cannot use a VIP, so a three-node VIP with keepalived + Nginx is not possible here. Instead, the kubeadm init configuration file simply points at the IP of the master01 node.

If your environment can use a VIP, refer to: 第五篇 安装keepalived与Nginx (Part 5: installing keepalived and Nginx).

Server Initialization

Initialize the servers (only the master and node machines need this). The script below has been verified on the Alibaba Cloud VMs.

It does the following: installs the required dependency packages, disables IPv6, stops NetworkManager, enables time synchronization, loads the ipvs modules, tunes kernel parameters, disables swap, and turns off the firewall. You may also want to set the hostname (see the sketch after the script).

#!/bin/bash

# 1. install common tools, these commands are not required.
source /etc/profile
yum -y install chrony bridge-utils ipvsadm ipset sysstat conntrack libseccomp wget tcpdump screen vim nfs-utils bind-utils socat telnet sshpass net-tools lrzsz yum-utils device-mapper-persistent-data lvm2 tree nc lsof strace nmon iptraf iftop rpcbind mlocate

# 2. disable IPv6
if [ $(cat /etc/default/grub |grep 'ipv6.disable=1' |grep GRUB_CMDLINE_LINUX|wc -l) -eq 0 ];then
    sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="ipv6.disable=1 /' /etc/default/grub
    /usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
fi

# 3. disable NetworkManager and enable time synchronization with chronyd
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable chronyd.service
systemctl start chronyd.service

# 4. load br_netfilter and ipvs modules; notice: you may need to run '/usr/sbin/modprobe br_netfilter' again after reboot.
cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x \$file ] && \$file
done
EOF

cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

# 5. add route forwarding, bridge-nf-call and other kernel parameters
[ $(cat /etc/sysctl.conf | grep "net.ipv4.ip_forward=1" |wc -l) -eq 0 ] && echo "net.ipv4.ip_forward=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.bridge.bridge-nf-call-iptables=1" |wc -l) -eq 0 ] && echo "net.bridge.bridge-nf-call-iptables=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.bridge.bridge-nf-call-ip6tables=1" |wc -l) -eq 0 ] && echo "net.bridge.bridge-nf-call-ip6tables=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.may_detach_mounts=1" |wc -l) -eq 0 ] && echo "fs.may_detach_mounts=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.overcommit_memory=1" |wc -l) -eq 0 ] && echo "vm.overcommit_memory=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.panic_on_oom=0" |wc -l) -eq 0 ] && echo "vm.panic_on_oom=0" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.swappiness=0" |wc -l) -eq 0 ] && echo "vm.swappiness=0" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.inotify.max_user_watches=89100" |wc -l) -eq 0 ] && echo "fs.inotify.max_user_watches=89100" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.file-max=52706963" |wc -l) -eq 0 ] && echo "fs.file-max=52706963" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.nr_open=52706963" |wc -l) -eq 0 ] && echo "fs.nr_open=52706963" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.netfilter.nf_conntrack_max=2310720" |wc -l) -eq 0 ] && echo "net.netfilter.nf_conntrack_max=2310720" >>/etc/sysctl.conf
/usr/sbin/sysctl -p

# 6. modify limit file
[ $(cat /etc/security/limits.conf|grep '* soft nproc 10240000'|wc -l) -eq 0 ]&&echo '* soft nproc 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* hard nproc 10240000'|wc -l) -eq 0 ]&&echo '* hard nproc 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* soft nofile 10240000'|wc -l) -eq 0 ]&&echo '* soft nofile 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* hard nofile 10240000'|wc -l) -eq 0 ]&&echo '* hard nofile 10240000' >>/etc/security/limits.conf

# 7. disable selinux
sed -i '/SELINUX=/s/enforcing/disabled/' /etc/selinux/config

# 8. close the swap partition
/usr/sbin/swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# 9. disable firewalld
systemctl stop firewalld
systemctl disable firewalld

# 10. reset iptables
yum install -y iptables-services
/usr/sbin/iptables -P FORWARD ACCEPT
/usr/sbin/iptables -X
/usr/sbin/iptables -F -t nat
/usr/sbin/iptables -X -t nat

reboot
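
Setting the hostname is not part of the script above; a one-line sketch (the name here is an assumption taken from the IP plan):

# assumed example: run on 172.17.173.19 so the node name matches the plan
hostnamectl set-hostname master01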

Installing etcd

Issuing the root CA certificate

Because etcd is accessed with TLS authentication, a CA certificate and private key are needed to sign etcd's certificates. For the principles behind certificate issuance, see: 第三篇 PKI基础概念、cfssl工具介绍及kubernetes中证书 (Part 3: PKI basics, the cfssl tool, and certificates in Kubernetes).

#!/bin/bash

# 1. download cfssl related files.
while true;
do
    echo "Download cfssl, please wait a moment." &&\
    curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
    curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
    curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    if [ $? -eq 0 ];then
        echo "cfssl download success."
        break
    else
        echo "cfssl download failed."
        break
    fi
done

# 2. create a binary directory to store kubernetes related files.
if [ ! -d /usr/kubernetes/bin/ ];then
    mkdir -p /usr/kubernetes/bin/
fi

# 3. copy the binaries into the directory created above.
mv cfssl_linux-amd64 /usr/kubernetes/bin/cfssl
mv cfssljson_linux-amd64 /usr/kubernetes/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/kubernetes/bin/cfssl-certinfo
chmod +x /usr/kubernetes/bin/{cfssl,cfssljson,cfssl-certinfo}

# 4. add environment variables
[ $(cat /etc/profile|grep 'PATH=/usr/kubernetes/bin'|wc -l ) -eq 0 ] && echo 'PATH=/usr/kubernetes/bin:$PATH' >>/etc/profile && source /etc/profile || source /etc/profile

# 5. create the CA certificate directory and enter it
CA_SSL=/etc/kubernetes/ssl/ca
[ ! -d ${CA_SSL} ] && mkdir -p ${CA_SSL}
cd $CA_SSL

## cfssl print-defaults config > config.json
## cfssl print-defaults csr > csr.json
# We do not generate the files with the two commands above.
# Multiple profiles can be defined, each with its own expiry time, usage scenario and other
# parameters; a specific profile is chosen later when signing certificates.
# signing: this certificate can be used to sign other certificates (CA=TRUE in the generated ca.pem);
# server auth: clients may use this CA to verify certificates presented by servers;
# client auth: servers may use this CA to verify certificates presented by clients.
cat > ${CA_SSL}/ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting
#     user name (User Name); browsers use it to check whether a site is legitimate.
# O:  Organization; kube-apiserver extracts this field as the group (Group) the user belongs to.
cat > ${CA_SSL}/ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# 6. generate ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[ $? -eq 0 ] && echo "CA certificate and private key generated successfully." || echo "CA certificate and private key generation failed."

Issuing a certificate and key for etcd with the private CA

#!/bin/bash

# 1. create the csr file.
source /etc/profile
ETCD_SSL="/etc/kubernetes/ssl/etcd/"
[ ! -d ${ETCD_SSL} ] && mkdir ${ETCD_SSL}
cat > $ETCD_SSL/etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.17.173.15",
    "172.17.173.16",
    "172.17.173.17"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# 2. determine whether the required CA files exist.
[ ! -f /etc/kubernetes/ssl/ca/ca.pem ] && echo "no ca.pem file." && exit 0
[ ! -f /etc/kubernetes/ssl/ca/ca-key.pem ] && echo "no ca-key.pem file" && exit 0
[ ! -f /etc/kubernetes/ssl/ca/ca-config.json ] && echo "no ca-config.json file" && exit 0

# 3. generate the etcd private key and certificate.
cd $ETCD_SSL
cfssl gencert -ca=/etc/kubernetes/ssl/ca/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[ $? -eq 0 ] && echo "Etcd certificate and private key generated successfully." || echo "Etcd certificate and private key generation failed."

Copy the private CA and the etcd certificate and key to the etcd servers:

Note: the ssh-copy-id step is omitted here (a sketch of it follows).
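
A minimal sketch of that omitted key-distribution step, assuming root SSH access and the ip2 host list shown in the transcript below:

# assumed sketch: push the local SSH public key to every host listed in ip2
for i in $(gawk '{print $1}' ip2)
do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i
done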
[root@ops ~]# cat ip2
172.17.173.15 etcd01
172.17.173.16 etcd02
172.17.173.17 etcd03
172.17.173.18 node02
172.17.173.19 master01
172.17.173.20 master02
172.17.173.21 master03
172.17.173.22 node01
[root@ops ~]#
[root@ops ~]# for i in `cat ip2|grep etcd|gawk '{print $1}'`
> do
> scp -r /etc/kubernetes $i:/etc/
> done
[root@ops ~]#

etcd installation

Run the following script on each of the three etcd machines; it covers the entire etcd installation. Note that downloading the etcd tarball can be very slow from inside China, so consider fetching it in advance.

[root@ops ~]# cat 3.sh
#!/bin/bash

# 1. env info
source /etc/profile
declare -A dict
dict=(['etcd01']=172.17.173.15 ['etcd02']=172.17.173.16 ['etcd03']=172.17.173.17)
#IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v 172.17|gawk -F/ '{print $1}'|gawk '{print $NF}'`
IP=`ip a |grep inet|grep -v 127.0.0.1|gawk -F/ '{print $1}'|gawk '{print $NF}'`
for key in $(echo ${!dict[*]})
do
    if [[ "$IP" == "${dict[$key]}" ]];then
        LOCALIP=$IP
        LOCAL_ETCD_NAME=$key
    fi
done
if [[ "$LOCALIP" == "" || "$LOCAL_ETCD_NAME" == "" ]];then
    echo "Get localhost IP failed." && exit 1
fi

# 2. download etcd source code and decompress.
CURRENT_DIR=`pwd`
cd $CURRENT_DIR
curl -L -C - -O https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
#( [ $? -eq 0 ] && echo "etcd source code download success." ) || ( echo "etcd source code download failed." && exit 1 )
/usr/bin/tar -zxf etcd-v3.3.18-linux-amd64.tar.gz
cp etcd-v3.3.18-linux-amd64/etc* /usr/local/bin/
#rm -rf etcd-v3.3.18-linux-amd64*

# 3. deploy etcd config and enable etcd.service.
ETCD_SSL="/etc/kubernetes/ssl/etcd/"
ETCD_CONF=/etc/etcd/etcd.conf
ETCD_SERVICE=/usr/lib/systemd/system/etcd.service
[ ! -d /data/etcd/ ] && mkdir -p /data/etcd/
[ ! -d /etc/etcd/ ] && mkdir -p /etc/etcd/

# 3.1 create /etc/etcd/etcd.conf configure file.
cat > $ETCD_CONF << EOF
#[Member]
ETCD_NAME="${LOCAL_ETCD_NAME}"
ETCD_DATA_DIR="/data/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${LOCALIP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${LOCALIP}:2379"
ETCD_LISTEN_CLIENT_URLS2="http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${LOCALIP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${LOCALIP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${dict['etcd01']}:2380,etcd02=https://${dict['etcd02']}:2380,etcd03=https://${dict['etcd03']}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# 3.2 create etcd.service
cat > $ETCD_SERVICE << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=$ETCD_CONF
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},\${ETCD_LISTEN_CLIENT_URLS2} \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# 4. enable etcd.service and start
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service
[root@ops ~]#

Verifying the etcd installation

#!/bin/bash
declare -A dict
dict=(['etcd01']=172.17.173.15 ['etcd02']=172.17.173.16 ['etcd03']=172.17.173.17)
cd /usr/local/bin
etcdctl --ca-file=/etc/kubernetes/ssl/ca/ca.pem \
  --cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
  --endpoints="https://${dict['etcd01']}:2379,https://${dict['etcd02']}:2379,https://${dict['etcd03']}:2379" cluster-health
etcdctl --ca-file=/etc/kubernetes/ssl/ca/ca.pem \
  --cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
  --endpoints="https://${dict['etcd01']}:2379,https://${dict['etcd02']}:2379,https://${dict['etcd03']}:2379" member list

The output is as follows:

member 1ad1e168a6f672a1 is healthy: got healthy result from https://172.17.173.16:2379
member 68b047a9be8ab72e is healthy: got healthy result from https://172.17.173.15:2379
member 85e6e69d2915ec95 is healthy: got healthy result from https://172.17.173.17:2379
cluster is healthy
1ad1e168a6f672a1: name=etcd02 peerURLs=https://172.17.173.16:2380 clientURLs=https://172.17.173.16:2379 isLeader=false
68b047a9be8ab72e: name=etcd01 peerURLs=https://172.17.173.15:2380 clientURLs=https://172.17.173.15:2379 isLeader=true
85e6e69d2915ec95: name=etcd03 peerURLs=https://172.17.173.17:2380 clientURLs=https://172.17.173.17:2379 isLeader=false
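
Kubernetes v1.17 talks to etcd over the v3 API, so it can be worth checking that path too. A hedged extra check (not in the original) using the same certificates; note the v3 flag names differ from the v2 ones above:

# assumed extra check: health via the etcd v3 API
ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --cacert=/etc/kubernetes/ssl/ca/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd/etcd-key.pem \
  --endpoints="https://172.17.173.15:2379,https://172.17.173.16:2379,https://172.17.173.17:2379" \
  endpoint health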

That completes the etcd installation. For the details of the configuration parameters, see the earlier article: 第四篇 Etcd存储组件高可用部署 (Part 4: highly available deployment of the etcd storage component).

Installing the Docker Engine on All Nodes

Installing the Docker runtime

[root@ops ~]# for i in `cat ip2|grep -v etcd|gawk '{print $1}'`
> do
> ssh $i "yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo &&yum makecache && yum -y install docker-ce"
> done
[root@ops ~]#

Starting the runtime on all nodes

systemctl daemon-reload
systemctl enable docker.service
systemctl start docker.service
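
The kubeadm join output later warns that Docker is using the cgroupfs driver while systemd is recommended. An optional, hedged tweak (not part of the original steps) is to switch the driver before starting Docker:

# assumed optional step: switch Docker to the systemd cgroup driver recommended by kubeadm
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker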

Initializing Master01

Configuring the package repository

Distribute the repo file below to all nodes so that kubeadm, kubelet and kubectl can be installed from the Aliyun mirror; the distribution step itself is omitted.

[root@ops ~]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@ops ~]#

Installation

# the master nodes install three components
yum -y install kubelet kubeadm kubectl

# the node (worker) nodes install two components
yum -y install kubelet kubeadm
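
The commands above install whatever version is currently latest in the repo. If you want to pin the v1.17.3 release used in this article (an assumption, not shown in the original), yum can install exact versions:

# assumed alternative: pin the package versions to match the article
yum -y install kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3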

Configure kubelet and enable it at boot

There is no need to start kubelet yet; it is started automatically when kubeadm initializes the cluster or joins a node.

# on every node, adjust this configuration file (swap related)
cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

# enable kubelet at boot
systemctl enable kubelet.service

Creating the init configuration file

Generate the default init configuration file and modify it from there:

# print the default init configuration
kubeadm config print init-defaults

# the defaults can also be printed per component
kubeadm config print init-defaults --component-configs KubeProxyConfiguration

Our configuration file is shown below. It uses an external etcd cluster; also note that the pod CIDR and the kube-proxy mode were changed. Copy this file to the master01 node before running it, and remember that the etcd certificates must also be copied to the master nodes (a sketch of that copy follows).
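
The original does not show the certificate copy step. A minimal sketch, assuming the CA and etcd certificates still sit under /etc/kubernetes/ssl on the ops host and that the masters are 172.17.173.19-21 from the ip2 list:

# assumed sketch: ship the CA and etcd client certificates to every master
for i in 172.17.173.19 172.17.173.20 172.17.173.21
do
    ssh $i "mkdir -p /etc/kubernetes/ssl"
    scp -r /etc/kubernetes/ssl/ca /etc/kubernetes/ssl/etcd $i:/etc/kubernetes/ssl/
done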

[root@ops ~]# cat kube-adm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.173.19
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: "172.17.173.19:6443"
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.17.173.15:2379
    - https://172.17.173.16:2379
    - https://172.17.173.17:2379
    caFile: /etc/kubernetes/ssl/ca/ca.pem
    certFile: /etc/kubernetes/ssl/etcd/etcd.pem
    keyFile: /etc/kubernetes/ssl/etcd/etcd-key.pem
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.224.0/24"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
[root@ops ~]# mv kube-adm.yaml kubeadm-config.yaml
[root@ops ~]#

Running kubeadm init

Note: pulling the control-plane images from k8s.gcr.io is very slow (or blocked) from inside China; you need to work around this yourself. One common workaround is sketched below.
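
A hedged sketch of that workaround (an assumption, not part of the original): pull the images from the Aliyun mirror registry and re-tag them so that kubeadm finds them locally under k8s.gcr.io:

# assumed workaround: pre-pull control-plane images via registry.aliyuncs.com/google_containers
for img in $(kubeadm config images list --config kubeadm-config.yaml 2>/dev/null)
do
    mirror=${img/k8s.gcr.io/registry.aliyuncs.com/google_containers}
    docker pull $mirror
    docker tag $mirror $img
done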

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml

.......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78

[root@master01 ~]#

Creating the kubeconfig file

[root@master01 ~]# mkdir .kube
[root@master01 ~]# cd .kube/
[root@master01 .kube]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 .kube]# ls
config
[root@master01 .kube]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 NotReady master 2m12s v1.17.3
[root@master01 .kube]#

Copying certificates

The certificates generated on master01 need to be copied to the other master nodes:

#!/bin/bash
ssh 172.17.173.20 "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/admin.conf 172.17.173.20:/etc/kubernetes/

ssh 172.17.173.21 "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/admin.conf 172.17.173.21:/etc/kubernetes/

Joining the other masters to the control plane

Run the following on both master02 and master03. Note that this join command was generated by master01's init output; don't just copy-paste the one shown here.

[root@master02 ~]# kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 --control-plane --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at ......
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 ~]#

Joining worker nodes to the cluster

Do the same on node01 and node02. Again, the command comes from master01's init output; don't just copy-paste the one shown here.

[root@node02 ~]# kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 --ignore-preflight-errors=Swap
W0220 18:12:44.122557 12912 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node02 ~]#

Verification on master01

At this point none of the nodes are Ready because the CNI network plugin has not been deployed yet, and the coredns pods are Pending for the same reason. This is expected; once the CNI plugin is deployed they switch to Running automatically.

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-01 NotReady master 27m v1.17.3
master02 NotReady master 6m15s v1.17.3
master03 NotReady master 10s v1.17.3
node01 NotReady <none> 57s v1.17.3
node02 NotReady <none> 3m12s v1.17.3
[root@master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-9lwgk 0/1 Pending 0 27m
coredns-6955765f44-rhsps 0/1 Pending 0 27m
kube-apiserver-k8s-master-01 1/1 Running 0 27m
kube-apiserver-master02 1/1 Running 0 6m23s
kube-apiserver-master03 1/1 Running 0 17s
kube-controller-manager-k8s-master-01 1/1 Running 0 27m
kube-controller-manager-master02 1/1 Running 0 6m23s
kube-controller-manager-master03 1/1 Running 0 18s
kube-proxy-2hlgz 1/1 Running 0 6m23s
kube-proxy-8tptz 1/1 Running 0 3m20s
kube-proxy-cj55k 1/1 Running 0 18s
kube-proxy-f2lfv 1/1 Running 0 27m
kube-proxy-jg4sp 1/1 Running 0 65s
kube-scheduler-k8s-master-01 1/1 Running 0 27m
kube-scheduler-master02 1/1 Running 0 6m23s
kube-scheduler-master03 1/1 Running 0 17s
[root@master01 ~]#

Deploying the Calico CNI Plugin

We use the Calico network plugin here; the download link is below. The images referenced in this yaml are also slow to pull from inside China, so you will need to work around that as well.

wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml

You can change CALICO_IPV4POOL_CIDR to match your custom pod CIDR (the default is 192.168.0.0/16; a sketch of the change follows). As for the datastore, Calico could also be pointed at the etcd cluster we created earlier. No special changes were made here, and the file is too long to paste in full.
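
A minimal sketch of that CIDR change (an assumption; the original keeps the default), aligning Calico with the podSubnet from kubeadm-config.yaml:

# assumed sketch: align CALICO_IPV4POOL_CIDR with the podSubnet used above
sed -i 's#192.168.0.0/16#192.168.224.0/24#g' calico.yaml
grep -n "CALICO_IPV4POOL_CIDR" -A 1 calico.yaml   # verify the change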

[root@master01 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@master01 ~]#

Verifying Basic Cluster Functionality

Cluster status

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready master 37m v1.17.3
master02 Ready master 16m v1.17.3
master03 Ready master 10m v1.17.3
node01 Ready <none> 11m v1.17.3
node02 Ready <none> 13m v1.17.3
[root@master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5b644bc49c-pvz58 1/1 Running 0 3m5s
calico-node-9bg8w 1/1 Running 0 3m5s
calico-node-d2xnr 1/1 Running 0 3m5s
calico-node-fjn7x 1/1 Running 0 3m5s
calico-node-gs7zt 1/1 Running 0 3m5s
calico-node-pt46g 1/1 Running 0 3m5s
coredns-6955765f44-9lwgk 1/1 Running 0 37m
coredns-6955765f44-rhsps 1/1 Running 0 37m
kube-apiserver-k8s-master-01 1/1 Running 0 37m
kube-apiserver-master02 1/1 Running 0 16m
kube-apiserver-master03 1/1 Running 0 10m
kube-controller-manager-k8s-master-01 1/1 Running 0 37m
kube-controller-manager-master02 1/1 Running 0 16m
kube-controller-manager-master03 1/1 Running 0 10m
kube-proxy-2hlgz 1/1 Running 0 16m
kube-proxy-8tptz 1/1 Running 0 13m
kube-proxy-cj55k 1/1 Running 0 10m
kube-proxy-f2lfv 1/1 Running 0 37m
kube-proxy-jg4sp 1/1 Running 0 11m
kube-scheduler-k8s-master-01 1/1 Running 0 37m
kube-scheduler-master02 1/1 Running 0 16m
kube-scheduler-master03 1/1 Running 0 10m
[root@master01 ~]#

Creating a demo

[root@master01 ~]# cat demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-deployment-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: default-deployment-nginx
  template:
    metadata:
      labels:
        run: default-deployment-nginx
    spec:
      containers:
      - name: default-deployment-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-svc-nginx
  namespace: default
spec:
  selector:
    run: default-deployment-nginx
  type: ClusterIP
  ports:
  - name: nginx-port
    port: 80
    targetPort: 80
[root@master01 ~]#

Access

[root@master01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
default-deployment-nginx-54bbbcf9f5-4rq7f 1/1 Running 0 18m
[root@master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-svc-nginx ClusterIP 10.96.15.216 <none> 80/TCP 18m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 59m
[root@master01 ~]# ping 10.96.15.216
PING 10.96.15.216 (10.96.15.216) 56(84) bytes of data.
64 bytes from 10.96.15.216: icmp_seq=1 ttl=64 time=0.136 ms
64 bytes from 10.96.15.216: icmp_seq=2 ttl=64 time=0.064 ms
^C
--- 10.96.15.216 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.064/0.100/0.136/0.036 ms
[root@master01 ~]# curl 10.96.15.216
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]#

Creating another pod to verify DNS
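
The manifests used in the test namespace are not included in the original; a minimal sketch consistent with the names shown in the output below might look like this (the image and the page it serves are assumptions):

# assumed sketch: a second deployment/service pair in the "test" namespace
kubectl create namespace test
cat << 'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3-deployment-nginx
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test3-deployment-nginx
  template:
    metadata:
      labels:
        run: test3-deployment-nginx
    spec:
      containers:
      - name: test3-deployment-nginx
        image: nginx:1.7.9    # assumed image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test3-svc-nginx
  namespace: test
spec:
  selector:
    run: test3-deployment-nginx
  type: ClusterIP
  ports:
  - name: nginx-port
    port: 80
    targetPort: 80
EOF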

[root@master01 ~]# kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
test3-deployment-nginx-8ddffb97b-w576p 1/1 Running 0 22m
[root@master01 ~]# kubectl get svc -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test3-svc-nginx ClusterIP 10.96.77.86 <none> 80/TCP 22m
[root@master01 ~]# kubectl exec -it test3-deployment-nginx-8ddffb97b-w576p -n test /bin/bash
[root@test3-deployment-nginx-8ddffb97b-w576p /]#
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl test3-svc-nginx
AAAAAAAAAAAAAAAAA[root@test3-deployment-nginx-8ddffb97b-w576p /]#
[root@test3-deployment-nginx-8ddffb97b-w576p /]# ping default-svc-nginx.default
PING default-svc-nginx.default.svc.cluster.local (10.96.15.216) 56(84) bytes of data.
64 bytes from default-svc-nginx.default.svc.cluster.local (10.96.15.216): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from default-svc-nginx.default.svc.cluster.local (10.96.15.216): icmp_seq=2 ttl=64 time=0.082 ms
^C
--- default-svc-nginx.default.svc.cluster.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.040/0.061/0.082/0.021 ms
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl default-svc-nginx
curl: (6) Could not resolve host: default-svc-nginx
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl default-svc-nginx.default
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@test3-deployment-nginx-8ddffb97b-w576p /]#

As the output above shows, a pod can reach services in its own namespace by service name, and services in other namespaces via serviceName.NAMESPACE.

Summary

Installing Kubernetes v1.17.3 with kubeadm is fairly straightforward. One key point for a highly available cluster: in the configuration file passed to kubeadm init, set controlPlaneEndpoint: "172.17.173.19:6443"; only then does the init output include the kubeadm join parameters for adding control-plane masters as well as those for worker nodes. Another point to note: after master01 has been initialized, copy the certificates and keys under pki to the other master nodes before running kubeadm join on them, otherwise the join will fail.
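
One more practical note (an assumption, not covered by the original): the bootstrap token in the config above has a 24h TTL, so join commands stop working once it expires; kubeadm can print a fresh one from any master.

# assumed follow-up: regenerate a worker join command once the original token has expired
kubeadm token create --print-join-command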
