1. Kubernetes IP Addresses

Node IP: the IP address of the node itself — the real IP of the physical machine or virtual machine that hosts the containers.

Pod IP: the IP address of a Pod, allocated from the network segment of the docker0 bridge.

Cluster IP: the IP of a Service. It is a virtual IP that applies only to Service objects and is managed and allocated by Kubernetes. It must be combined with a Service port to be usable; the IP alone has no communication function, and access from outside the cluster requires additional configuration.

Within the cluster, traffic between Node IPs, Pod IPs, and Cluster IPs follows routing rules programmed by Kubernetes, not ordinary IP routing.

[root@linux-node1 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 3h
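To see the three address types side by side on a running cluster, these read-only commands can be used (column contents will of course vary per cluster):

```shell
# Node IP: shown in the INTERNAL-IP column
kubectl get nodes -o wide
# Pod IP: shown in the IP column (requires at least one running pod)
kubectl get pods -o wide --all-namespaces
# Cluster IP: shown in the CLUSTER-IP column
kubectl get service --all-namespaces
```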

2. Flannel Network Deployment

(1) Create a certificate signing request for Flannel

[root@linux-node1 ssl]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

(2) Generate the certificate

[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
> -ca-key=/opt/kubernetes/ssl/ca-key.pem \
> -config=/opt/kubernetes/ssl/ca-config.json \
> -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@linux-node1 ssl]# ll flannel*
-rw-r--r-- root root flanneld.csr
-rw-r--r-- root root flanneld-csr.json
-rw------- root root flanneld-key.pem
-rw-r--r-- root root flanneld.pem
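Before distributing the files, it can be worth sanity-checking the generated certificate (assuming openssl is available on the node):

```shell
# Inspect the subject, issuer, and validity window of the new certificate
openssl x509 -in flanneld.pem -noout -subject -issuer -dates
```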

(3) Distribute the certificates

[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.120:/opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.130:/opt/kubernetes/ssl/

(4) Download the Flannel package

[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy them to linux-node2 and linux-node3:
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.130:/opt/kubernetes/bin/
Also copy the corresponding script into the /opt/kubernetes/bin directory:
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.130:/opt/kubernetes/bin/

(5) Configure Flannel

[root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
Copy the configuration to the other nodes:
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.120:/opt/kubernetes/cfg/
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.130:/opt/kubernetes/cfg/

(6) Create the Flannel systemd service

[root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Copy the unit file to the other nodes:
# scp /usr/lib/systemd/system/flannel.service 192.168.56.120:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/flannel.service 192.168.56.130:/usr/lib/systemd/system/
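As a quick syntax check before enabling the service, systemd can lint the unit file; this only validates the file and does not start anything:

```shell
systemd-analyze verify /usr/lib/systemd/system/flannel.service
```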

3. Flannel CNI Integration

(1) Download the CNI plugins

https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
[root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node2 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node3 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node1 src]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.120:/opt/kubernetes/bin/cni/
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.130:/opt/kubernetes/bin/cni/

(2) Create the etcd key

This step creates the Pod network segment and stores it in etcd; Flannel then reads it from etcd and allocates per-node subnets from it.

[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
--no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' > /dev/null 2>&1
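To confirm the key was written, the same TLS flags can be reused with etcdctl's get command (etcdctl v2 syntax, as above):

```shell
/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem \
  --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
  get /kubernetes/network/config
```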

(3) Start Flannel

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable flannel
[root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node1 ~]# systemctl start flannel
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable flannel
[root@linux-node2 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node2 ~]# systemctl start flannel
[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable flannel
[root@linux-node3 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node3 ~]# systemctl start flannel

Each node now has an additional flannel.1 interface, and each node is on a different subnet.

[root@linux-node1 ~]# ifconfig flannel.1
flannel.1: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 10.2.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
        (output trimmed)
[root@linux-node2 ~]# ifconfig flannel.1
flannel.1: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 10.2.87.0  netmask 255.255.255.255  broadcast 0.0.0.0
        (output trimmed)
[root@linux-node3 ~]# ifconfig flannel.1
flannel.1: flags=<UP,BROADCAST,RUNNING,MULTICAST>
        inet 10.2.33.0  netmask 255.255.255.255  broadcast 0.0.0.0
        (output trimmed)
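The per-node subnets can be checked against the 10.2.0.0/16 Network that was written to etcd without touching the cluster; a plain-shell sketch using the three flannel.1 addresses seen above:

```shell
# Each node's flannel.1 address should fall inside 10.2.0.0/16,
# i.e. its first two octets should be 10.2.
prefix="10.2."
ok=0
for addr in 10.2.46.0 10.2.87.0 10.2.33.0; do
  case "$addr" in
    "$prefix"*) echo "$addr: inside 10.2.0.0/16"; ok=$((ok + 1)) ;;
    *)          echo "$addr: OUTSIDE 10.2.0.0/16" ;;
  esac
done
echo "$ok/3 subnets match"
```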

(4) Problem encountered: Flannel fails to start

Check whether ETCD_LISTEN_CLIENT_URLS in /opt/kubernetes/cfg/etcd.conf is configured to listen on 127.0.0.1:2379. In my case Flannel still failed to start after that; retyping the configuration fixed it, and I found no other cause. Why etcdctl could not fetch the key remains to be investigated.

4. Configure Docker to Use Flannel

[root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
# In [Unit], modify After and add Requires so docker starts after the flannel network is up
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
Type=notify
# Load the environment file written by mk-docker-opts.sh so docker0 gets an IP from the flannel-allocated subnet
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl restart docker
[root@linux-node1 ~]# ifconfig docker0
docker0: flags=<UP,BROADCAST,MULTICAST>
        inet 10.2.46.1  netmask 255.255.255.0  broadcast 0.0.0.0
        (output trimmed)
[root@linux-node2 ~]# ifconfig docker0
docker0: flags=<UP,BROADCAST,MULTICAST>
        inet 10.2.87.1  netmask 255.255.255.0  broadcast 0.0.0.0
        (output trimmed)
[root@linux-node3 ~]# ifconfig docker0
docker0: flags=<UP,BROADCAST,MULTICAST>
        inet 10.2.33.1  netmask 255.255.255.0  broadcast 0.0.0.0
        (output trimmed)
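For an end-to-end check that containers on different nodes can reach each other over the overlay, a sketch along these lines can be used (assumes docker can pull busybox; the 10.2.46.2 target address below is hypothetical — use whatever address the first container actually reports):

```shell
# On linux-node1: start a container and print its IP (expect a 10.2.46.x address)
docker run --rm busybox ip addr show eth0
# On linux-node2: ping the address reported above
docker run --rm busybox ping -c 3 10.2.46.2
```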

Summary

When you run kubectl get node, each node should show status Ready. If a node shows NotReady, check whether kubelet is running on it and start it if necessary. If kubelet will not start, run systemctl status kubelet and journalctl -xe to see why. One case I encountered was a dependency on docker: docker itself would not start, so the next step was to track down why docker failed.
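The troubleshooting sequence described above, as concrete commands to run on the NotReady node:

```shell
# Is kubelet running, and if not, why?
systemctl status kubelet
journalctl -u kubelet -xe
# kubelet depends on docker; check it next
systemctl status docker
journalctl -u docker -xe
# Once the root cause is fixed, bring both back up
systemctl start docker && systemctl start kubelet
```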
