Installing Kubernetes on the master with kubeadm

Add the Aliyun apt source (a mirror reachable from mainland China), then install kubeadm:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

apt-get update && apt-get install kubeadm
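For completeness: on a fresh machine the repo's signing key has to be imported first, or apt-get update will reject the source. A minimal sketch, assuming the mirror keeps the key at the standard apt/doc/apt-key.gpg path:

  # Import the repo key before running apt-get update (path is an
  # assumption based on the mirror's standard layout).
  curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -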

Create a kubeadm.yaml file, then run the install:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12.2"

kubeadm init --config kubeadm.yaml
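Pulling the control-plane images is the slow step (and the source of the ImagePull error below), so it can also be done ahead of kubeadm init, using the subcommand that the preflight output later mentions:

  # Pre-pull all images required by this config.
  kubeadm config images pull --config kubeadm.yaml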

Problems that came up during installation:

 [ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: missing cgroups: memory
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.2]

Solutions:

1. The error message is explicit: disable the swap partition. That said, running swapoff -a alone is not enough:
from the session record, kubeadm init would start after swapoff -a, but it kept failing with:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
The kubelet logs show that swap was in fact still the problem:
➜ kubernetes journalctl -xefu kubelet
Nov 05 debian kubelet[]: F1105 ::28.609272 server.go:] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority /dev/sda9 partition -]
➜ kubernetes cat /proc/swaps
Filename Type Size Used Priority
/dev/sda9 partition -
➜ kubernetes
The install succeeded after also commenting out the swap mount in /etc/fstab; a sketch of the full fix follows below.
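A minimal sketch of both steps together (the sed expression is my own; check what it matched in /etc/fstab before trusting the result):

  # Turn swap off for the running system...
  swapoff -a
  # ...and comment out the swap entry in /etc/fstab so it stays off
  # across reboots (a backup is written to /etc/fstab.bak).
  sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab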

2. For the missing memory cgroup, enable it on the kernel command line and reboot:
echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory\" >> /etc/default/grub && update-grub && reboot
3. From a normal network in mainland China, images cannot be pulled from k8s.gcr.io, so pull them from docker.io instead and re-tag them with the names kubeadm expects:
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.
docker pull coredns/coredns:1.2.

docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.2. k8s.gcr.io/etcd:3.2.
docker tag docker.io/coredns/coredns:1.2. k8s.gcr.io/coredns:1.2.

docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.
docker rmi coredns/coredns:1.2.

Alternatively, you can configure a registry-mirror accelerator (in my test, the 163 mirror was actually slower than pulling directly); add it as follows, then restart the docker service:
➜ kubernetes cat /etc/docker/daemon.json
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
➜ kubernetes
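After editing daemon.json, docker has to be restarted for the mirror to take effect. And since the pull/tag/rmi sequence above is mechanical, it can be scripted; a minimal sketch, assuming systemd. Note that coredns lives under coredns/coredns rather than mirrorgooglecontainers and is left out of the loop, and the truncated etcd/coredns tags above need the exact versions, which 'kubeadm config images list' prints:

  systemctl restart docker

  # Pull each image from the docker.io mirror repo, re-tag it under
  # k8s.gcr.io as kubeadm expects, then drop the mirror tag.
  for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
             kube-scheduler:v1.12.2 kube-proxy:v1.12.2 pause:3.1; do
    docker pull "mirrorgooglecontainers/${img}"
    docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
    docker rmi  "mirrorgooglecontainers/${img}"
  done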

Transcript of a successful install:

➜  kubernetes  kubeadm init --config kubeadm.yaml
I1205 23:08:15.852917 5188 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.12.2.txt": Get https://dl.k8s.io/release/stable-1.12.2.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 23:08:15.853144 5188 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [debian localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [debian localhost] and IPs [192.168.2.118 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [debian kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 48.078220 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node debian as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node debian as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian" as an annotation
[bootstraptoken] using token: x4p0vz.tdp1xxxx7uyerrrs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9

➜ kubernetes

Deploying a network plugin

After the install succeeds, check the node with kubectl get nodes (kubectl must run as kubernetes-admin, so the admin config file has to be copied, or pointed at via an environment variable, before kubectl get nodes will work):

➜  kubernetes  kubectl get nodes
The connection to the server localhost: was refused - did you specify the right host or port?
➜ kubernetes echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
➜ kubernetes source ~/.bashrc
➜ kubernetes kubectl get nodes
NAME STATUS ROLES AGE VERSION
debian NotReady master 21m v1.12.2
➜ kubernetes

The node shows NotReady because no network plugin has been deployed yet:

➜  kubernetes  kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Pending 24m
coredns-576cbf47c7-xzjk7 / Pending 24m
etcd-debian / Running 23m
kube-apiserver-debian / Running 23m
kube-controller-manager-debian / Running 23m
kube-proxy-5wb6k / Running 24m
kube-scheduler-debian / Running 23m
➜ kubernetes
➜ kubernetes kubectl describe node debian
Name: debian
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=debian
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl:
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, Dec :: +
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Wed, Dec :: + Wed, Dec :: + KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, Dec :: + Wed, Dec :: + KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, Dec :: + Wed, Dec :: + KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, Dec :: + Wed, Dec :: + KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, Dec :: + Wed, Dec :: + KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported
Addresses:
InternalIP: 192.168.2.118
Hostname: debian
Capacity:
cpu:
ephemeral-storage: 4673664Ki
hugepages-2Mi:
memory: 5716924Ki
pods:
Allocatable:
cpu:
ephemeral-storage:
hugepages-2Mi:
memory: 5614524Ki
pods:
System Info:
Machine ID: 4341bb45c5c84ad2827c173480039b5c
System UUID: 05F887C4-A455-122E-8B14-8C736EA3DBDB
Boot ID: ff68f27b-fba0--a1cf-796dd013e025
Kernel Version: 3.16.--amd64
OS Image: Debian GNU/Linux (jessie)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-debian (%) (%) (%) (%)
kube-system kube-apiserver-debian 250m (%) (%) (%) (%)
kube-system kube-controller-manager-debian 200m (%) (%) (%) (%)
kube-system kube-proxy-5wb6k (%) (%) (%) (%)
kube-system kube-scheduler-debian 100m (%) (%) (%) (%)
Allocated resources:
(Total limits may be over percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (%) (%)
memory (%) (%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 22m kubelet, debian Starting kubelet.
Normal NodeAllocatableEnforced 22m kubelet, debian Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 22m (x6 over 22m) kubelet, debian Node debian status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 22m (x6 over 22m) kubelet, debian Node debian status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 22m (x6 over 22m) kubelet, debian Node debian status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 22m (x5 over 22m) kubelet, debian Node debian status is now: NodeHasSufficientPID
Normal Starting 21m kube-proxy, debian Starting kube-proxy.
➜ kubernetes

After the plugin is deployed, all pods eventually reach Running (the plugin takes a few minutes to come up; along the way you will see intermediate states such as ContainerCreating/CrashLoopBackOff):
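The plugin used in the transcripts below is Weave Net. The deploy command itself was not captured; at the time, the standard one-liner from the Weave Net docs looked like this (an assumption here; consult the current Weave Net install page before using it):

  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"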

➜  kubernetes  kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Pending 25m
coredns-576cbf47c7-xzjk7 / Pending 25m
etcd-debian / Running 25m
kube-apiserver-debian / Running 25m
kube-controller-manager-debian / Running 25m
kube-proxy-5wb6k / Running 25m
kube-scheduler-debian / Running 25m
weave-net-nj7bk / ContainerCreating 21s
➜ kubernetes kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / CrashLoopBackOff 27m
coredns-576cbf47c7-xzjk7 / CrashLoopBackOff 27m
etcd-debian / Running 27m
kube-apiserver-debian / Running 27m
kube-controller-manager-debian / Running 27m
kube-proxy-5wb6k / Running 27m
kube-scheduler-debian / Running 27m
weave-net-nj7bk / Running 2m32s
➜ kubernetes kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-4vjhf / Running 27m
coredns-576cbf47c7-xzjk7 / Running 27m
etcd-debian / Running 27m
kube-apiserver-debian / Running 27m
kube-controller-manager-debian / Running 27m
kube-proxy-5wb6k / Running 27m
kube-scheduler-debian / Running 27m
weave-net-nj7bk / Running 2m42s
➜ kubernetes
➜  kubernetes  kubectl get nodes
NAME STATUS ROLES AGE VERSION
debian Ready master 38m v1.12.2
➜ kubernetes

Allowing the master to run Pods

By default, Kubernetes uses the Taint/Toleration mechanism to mark a node with a "taint":

➜  kubernetes  kubectl describe node debian | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
➜ kubernetes

No Pod will then be scheduled onto the tainted node by default, unless:

1. The Pod explicitly declares that it can run on such nodes, by adding a tolerations field to the spec section of its YAML (see the sketch after this list).
2. For a k8s test setup of just a few machines, the best option is simply to delete the Taint:
  ➜ kubernetes kubectl taint nodes --all node-role.kubernetes.io/master-
  node/debian untainted
  ➜ kubernetes kubectl describe node debian | grep Taints
  Taints: <none>
  ➜ kubernetes
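For option 1, the tolerations stanza would look roughly like this in a Pod manifest (a minimal sketch; the pod name and image are placeholders):

  apiVersion: v1
  kind: Pod
  metadata:
    name: demo                 # hypothetical name
  spec:
    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
    containers:
    - name: demo
      image: nginx             # placeholder image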

Adding a node

kubeadm/kubelet on the master are v1.12.2, but a plain apt-get install on the worker node pulled in v1.13 by default, so joining the cluster failed; the mismatched packages had to be removed and the matching version installed:
root@debian-vm:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
root@debian-vm:~#
root@debian-vm:~# kubelet --version
Kubernetes v1.13.0
root@debian-vm:~# apt-get --purge remove kubeadm kubelet
root@debian-vm:~# apt-cache policy kubeadm
kubeadm:
Installed: (none)
Candidate: 1.13.0-00
Version table:
1.13.0-00 0
500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
1.12.3-00 0
500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
1.12.2-00 0
500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
root@debian-vm:~# apt-get install kubeadm=1.12.2-00 kubelet=1.12.2-00
root@debian-vm:~# kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.2.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.118:6443"
[discovery] Requesting info from "https://192.168.2.118:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.118:6443"
[discovery] Successfully established connection with API Server "192.168.2.118:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian-vm" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@debian-vm:~#

The node joined the cluster successfully.
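To keep apt from later upgrading the worker back to a mismatched version, the packages can be pinned after installing; a minimal sketch using standard Debian tooling:

  # Install the versions matching the master, then hold them so a
  # routine apt-get upgrade won't pull in v1.13 again.
  apt-get install -y kubeadm=1.12.2-00 kubelet=1.12.2-00
  apt-mark hold kubeadm kubelet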

References:

https://github.com/kubernetes/kubernetes/issues/54914
https://github.com/kubernetes/kubeadm/issues/610
https://blog.csdn.net/acxlm/article/details/79069468
