Kubernetes Install

Version information

Name              Info
System            CentOS Linux 3.10.0-514.el7.x86_64
Disk              200G
Internal network  fully interconnected
External network  unrestricted

hostname  IP             role
ktest01   10.10.9.89     master
ktest02   10.20.161.159  node
ktest03   10.20.161.155  node
  • Minimum kernel requirement: CentOS 7 with kernel 3.10 or later (a quick check follows)
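A quick way to confirm the running kernel version on each server:

uname -r
# should report 3.10.0 or later, e.g. 3.10.0-514.el7.x86_64 on these test machines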

Basic environment

  • Configure all 3 servers at the same time with ansible (the specific ansible setup is omitted here)
  • If you are not familiar with ansible, that's fine: just run the equivalent commands from the playbook by hand on each server. To apply the kubeadm base environment configuration:
    ansible-playbook kubeadm_base.yml
    kubeadm_base.yml is configured as follows
#Initialize the kubeadm base environment
- hosts: k8s_test    #the k8s server group, selected via hosts
  remote_user: root    #run as root
  vars_files:
    - /etc/ansible/playbook/kubeadm_variable.yml    #variable file, can be ignored for now
  tasks:
    - name: disable selinux         #disable selinux
      selinux: state=disabled
    - name: stop firewalld        #stop firewalld, the default CentOS 7 firewall
      command: systemctl stop firewalld
    - name: disable firewalld       #disable firewalld, the default CentOS 7 firewall
      command: systemctl disable firewalld
    - name: add br_netfilter        #load the br_netfilter module
      command: modprobe br_netfilter
    - name: add modules            #add the file that loads the required modules
      template: src=/etc/ansible/playbook/j2/kubeadm_ipvs_modules.j2 dest=/etc/sysconfig/modules/ipvs.modules mode=0755
    - name: run ipvs.modules       #run the script to load the modules
      shell: sh /etc/sysconfig/modules/ipvs.modules
    - name: kubeadm_sysctl       #add kernel tuning parameters and enable forwarding
      template: src=/etc/ansible/playbook/j2/kubeadm_sysctl.j2 dest=/etc/sysctl.conf
    - name: sysctl save            #apply the kernel tuning parameters
      command: sysctl -p
    - name: install ipset        #install ipset so kube-proxy can use ipvs mode
      yum: name=ipset state=installed
    - name: install ipvsadm     #install ipvsadm, the ipvs management tool
      yum: name=ipvsadm state=installed
    - name: add docker repo        #add the docker-ce yum repository
      command: wget -P /etc/yum.repos.d/ https://download.docker.com/linux/centos/docker-ce.repo
    - name: install docker-ce-18.06.3.ce-3.el7    #install docker-ce-18.06.3.ce-3.el7 (ce is the open-source edition of docker)
      yum: name=docker-ce-18.06.3.ce-3.el7 state=installed

kubeadm_ipvs_modules.j2 is configured as follows

#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
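
An optional check that the script actually loaded these modules after the playbook has run:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4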

kubeadm_sysctl.j2 is configured as follows

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
vm.swappiness=0

docker_centos7_repo.j2 is configured as follows (kept for reference; the playbook above fetches docker-ce.repo directly with wget)

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

The minimum Docker version supported by Kubernetes 1.13 is 1.11.1; here the fairly recent 18.06.3 is installed.
Start docker on all 3 servers and enable it at boot:

ansible k8s_test -m shell -a "systemctl start docker"
ansible k8s_test -m shell -a "systemctl enable docker"
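
An optional check that Docker was installed at the expected version on all three servers, reusing the same ansible host group:

ansible k8s_test -m shell -a "docker --version"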

Kubernetes install

Install kubeadm and kubelet on all nodes

Version information

rpm name   version
kubelet    1.13.4
kubeadm    1.13.4
kubectl    1.13.4

The following is added to the kubeadm_base.yml configuration:

- name: add Kubernetes repo
  template: src=/etc/ansible/playbook/j2/kubernetes_centos7_repo.j2 dest=/etc/yum.repos.d/kubernetes.repo
- name: kubelet install
  yum: name=kubelet-1.13.4 state=installed
- name: kubeadm install
  yum: name=kubeadm-1.13.4 state=installed
- name: kubectl install
  yum: name=kubectl-1.13.4 state=installed

kubernetes_centos7_repo.j2 is configured as follows

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

This time the Alibaba Cloud mirror is used, so there is no need to work around network restrictions to reach the official source.
The only difference from the official repository is how quickly it is updated; if you need the latest packages, the official repository can be added with the command below:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • Disable SWAP. Since Kubernetes 1.8, swap must be turned off on the system; if it is not, kubelet will fail to start with the default configuration.
    Disable swap as follows:
swapoff -a

Edit /etc/fstab and comment out the automatic swap mount, then confirm that swap is off with free -m.
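One possible way to comment out the swap entry and verify, as a sketch (check /etc/fstab afterwards before relying on it):

sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m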
To adjust the swappiness parameter, add the following line to /etc/sysctl.conf:

vm.swappiness=0

Since my kubeadm_sysctl.j2 configuration above already includes vm.swappiness=0, this does not need to be set again on each server.
If swap cannot be turned off because of other business or services on the server, you can instead remove the swap check as described below:
add the following line to /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS=--fail-swap-on=false

kubeadm init: initialize the cluster

Enable the kubelet service on every node:
ansible k8s_test -m command -a "systemctl enable kubelet.service"
Download the images required by kubeadm config images.
The default registry is k8s.gcr.io, which is not reachable without a proxy; this can be worked around by pulling the corresponding versions from the Alibaba Cloud registry and re-tagging them.
Script: images_download.sh (run it on every server)

#!/bin/bash
# list the images kubeadm needs and strip the k8s.gcr.io/ prefix
images_name=`kubeadm config images list|awk -F '/' '{print $2}'|xargs`
for i in $images_name
do
    # pull from the Alibaba Cloud mirror, re-tag as k8s.gcr.io, then remove the mirror tag
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$i
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$i k8s.gcr.io/$i
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$i
done

Check that the image downloads are complete.
Command: docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.4 fadcc5d2b066 5 days ago 80.3MB
k8s.gcr.io/kube-apiserver v1.13.4 fc3801f0fc54 5 days ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.4 40a817357014 5 days ago 146MB
k8s.gcr.io/kube-scheduler v1.13.4 dd862b749309 5 days ago 79.6MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 4 months ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 5 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 742kB

Enable kubelet.service at boot

#enable the service
ansible k8s_test -m command -a "systemctl enable kubelet.service"
#check that the service is enabled
ansible k8s_test -m shell -a "systemctl list-unit-files|grep kubelet"

Initialize the cluster

  • Run on the master
# --kubernetes-version: pin the Kubernetes version
# --pod-network-cidr: Pod network CIDR (flannel is used as the Pod network plugin)
# --apiserver-advertise-address: the API server IP
# --token-ttl 0: make the token never expire
kubeadm init \
  --kubernetes-version=v1.13.4 \
  --pod-network-cidr=10.60.0.0/16 \
  --apiserver-advertise-address=10.10.9.89 \
  --token-ttl 0
  • Output
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "ktest01" could not be reached
 [WARNING Hostname]: hostname "ktest01": lookup ktest01 on 10.10.9.98:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ktest01 localhost] and IPs [10.10.9.89 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ktest01 localhost] and IPs [10.10.9.89 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ktest01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.9.89]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502598 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ktest01" as an annotation
[mark-control-plane] Marking the node ktest01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ktest01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: iubzz7.7qmxaohjhk553dpx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.9.89:6443 --token iubzz7.7qmxaohjhk553dpx --discovery-token-ca-cert-hash sha256:7121557de50b8d40c34b98ad3eb0b34f11444434354af494c0e63e57aafde631
  • Notes
    [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
    [certs] generates the various certificates
    [kubeconfig] generates the kubeconfig files
    [bootstrap-token] generates the token; record it, as it will be needed later when adding nodes to the cluster with kubeadm join

  • Configure a regular user to access the cluster with kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check the cluster status
kubectl get cs
NAME   STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}

Confirm that each component is in the Healthy state.

Install the Pod Network

mkdir -p /etc/kubernetes/addons/
cd /etc/kubernetes/addons/
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
  • Output
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Currently you need to use the --iface argument in kube-flannel.yml to specify the name of the host's internal network interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.

vim /etc/kubernetes/addons/kube-flannel.yml
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0            #add - --iface=eth0
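
After editing the file, re-apply the manifest so the change takes effect (using the path from the steps above):

kubectl apply -f /etc/kubernetes/addons/kube-flannel.yml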

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-86c58d9df4-btdh2 1/1 Running 0 65m 10.60.0.2 ktest01 <none> <none>
kube-system coredns-86c58d9df4-clwk5 1/1 Running 0 65m 10.60.0.3 ktest01 <none> <none>
kube-system etcd-ktest01 1/1 Running 0 64m 10.10.9.89 ktest01 <none> <none>
kube-system kube-apiserver-ktest01 1/1 Running 0 64m 10.10.9.89 ktest01 <none> <none>
kube-system kube-controller-manager-ktest01 1/1 Running 0 64m 10.10.9.89 ktest01 <none> <none>
kube-system kube-flannel-ds-amd64-v884z 1/1 Running 0 11m 10.10.9.89 ktest01 <none> <none>
kube-system kube-proxy-m2w8p 1/1 Running 0 65m 10.10.9.89 ktest01 <none> <none>
kube-system kube-scheduler-ktest01 1/1 Running 0 64m 10.10.9.89 ktest01 <none> <none>

In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons, i.e. the master does not take workload. This is because the master node ktest01 carries the node-role.kubernetes.io/master:NoSchedule taint:

kubectl describe node ktest01 |grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule

If you want the master to take workload as well, you can use the following command:

kubectl taint nodes ktest01 node-role.kubernetes.io/master-
node "ktest01" untainted

Since I have 3 test servers, the master was not made to take workload this time.
  • Run the join command on each of the other nodes to add them to the cluster

kubeadm join 10.10.9.89:6443 --token iubzz7.7qmxaohjhk553dpx --discovery-token-ca-cert-hash sha256:7121557de50b8d40c34b98ad3eb0b34f11444434354af494c0e63e57aafde631

Test DNS

  • On all nodes (master and workers), add NAT (MASQUERADE) for the Pod network 10.60.0.0/16
/sbin/iptables -t nat -I POSTROUTING -s 10.60.0.0/16 -j MASQUERADE
echo "/sbin/iptables -t nat -I POSTROUTING -s 10.60.0.0/16 -j MASQUERADE" >> /etc/rc.local
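Note that on CentOS 7 the commands in /etc/rc.local only run at boot if /etc/rc.d/rc.local is executable, so you may also need:

chmod +x /etc/rc.d/rc.local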
  • The following command creates a pod and attaches to it
kubectl run curl --image=radial/busyboxplus:curl -it
  • If you run into the situation below, use these commands to check the pod's status and attach to it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
Error from server (AlreadyExists): deployments.apps "curl" already exists
kubectl get pod
kubectl describe pod curl-66959f6557-4m9p5
kubectl exec -it curl-66959f6557-4m9p5 /bin/sh
  • Test DNS: nslookup kubernetes.default
  • The returned information looks like this
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
  • Check the cluster status: kubectl get nodes
NAME STATUS ROLES AGE VERSION
ktest01 Ready master 5d23h v1.13.4
ktest02 Ready <none> 4d22h v1.13.4
ktest03 Ready <none> 4d22h v1.13.4

Node removal

  • Check the pods
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-66959f6557-4m9p5 1/1 Running 0 5d22h 10.60.1.2 ktest02 <none> <none>
  • On the master, cordon the node and drain the pods running on it
kubectl drain ktest02 --delete-local-data --force --ignore-daemonsets
node/ktest02 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-lk42j, kube-proxy-tswl9
pod/curl-66959f6557-4m9p5 evicted
node/ktest02 evicted
  • Check whether the node's pods have been moved
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl-66959f6557-wqqkj 1/1 Running 0 58s 10.60.2.2 ktest03 <none> <none>
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ktest01 Ready master 5d23h v1.13.4
ktest02 Ready,SchedulingDisabled <none> 4d22h v1.13.4
ktest03 Ready <none> 4d22h v1.13.4

You can see that the pod on ktest02 has been moved to ktest03, and that ktest02 is now SchedulingDisabled, pending deletion.
  • Delete the ktest02 node (on the master)

kubectl delete node ktest02
node "ktest02" deleted
  • On ktest02, reset kubeadm and delete the leftover network interfaces; this can be skipped if the node is going to be reinstalled
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Rejoining the node

If you have forgotten the token from before, it can be retrieved with the following command:

kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
iubzz7.7qmxaohjhk553dpx <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
  • A token is valid for 24 hours by default (here --token-ttl 0 was used, so this one never expires); if the token has expired, generate a new one with:
kubeadm token create
  • If you cannot find the value of --discovery-token-ca-cert-hash, it can be generated with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • Rejoin
kubeadm join 10.10.9.89:6443 --token iubzz7.7qmxaohjhk553dpx --discovery-token-ca-cert-hash sha256:7121557de50b8d40c34b98ad3eb0b34f11444434354af494c0e63e57aafde631
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ktest01 Ready master 6d v1.13.4
ktest02 Ready <none> 82s v1.13.4
ktest03 Ready <none> 4d22h v1.13.4

Enable ipvs in kube-proxy

  • Modify the kube-proxy configuration and set mode: "ipvs": kubectl edit configmap kube-proxy -n kube-system
  • Restart the kube-proxy pod on each node
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-m2w8p 1/1 Running 0 6d
kube-proxy-r4d6v 1/1 Running 0 4d23h
kube-proxy-w5gh9 1/1 Running 0 16m
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-m2w8p" deleted
pod "kube-proxy-r4d6v" deleted
pod "kube-proxy-w5gh9" deleted
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-ldxnn 1/1 Running 0 17s
kube-proxy-vswpn 1/1 Running 0 4s
kube-proxy-z4tcz 1/1 Running 0 7s
  • Check whether kube-proxy reports any errors
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-6rcl7 1/1 Running 0 16s
kube-proxy-vswpn 1/1 Running 0 19m
kube-proxy-z4tcz 1/1 Running 0 19m
kubectl logs kube-proxy-6rcl7 -n kube-system
I0402 07:37:40.248158 1 server_others.go:189] Using ipvs Proxier.
W0402 07:37:40.248901 1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0402 07:37:40.249114 1 server_others.go:216] Tearing down inactive rules.
I0402 07:37:40.311080 1 server.go:483] Version: v1.13.4
I0402 07:37:40.342010 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0402 07:37:40.343034 1 config.go:202] Starting service config controller
I0402 07:37:40.343048 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0402 07:37:40.343080 1 config.go:102] Starting endpoints config controller
I0402 07:37:40.343090 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0402 07:37:40.443207 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0402 07:37:40.443207 1 controller_utils.go:1034] Caches are synced for service config controller

If "Using ipvs Proxier" appears in the log, the switch was successful.
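
Since ipvsadm was installed by the base playbook, you can also inspect the IPVS virtual servers directly as a further check; you should see entries for the cluster services:

ipvsadm -Ln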

NodePort test

  • Create the Deployment: leon_nginx_app_test.yaml
apiVersion: apps/v1        #API version
kind: Deployment        #resource kind
metadata:
  name: nginx-test    #name of the deployment
  labels:
    app: nginx       #define the app: nginx label
spec:
  replicas: 3        #3 replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.12        #image version
        ports:
        - name: http
          containerPort: 80
  • Run kubectl create -f leon_nginx_app_test.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
curl-66959f6557-wqqkj 1/1 Running 0 13d
nginx-test-6456659578-8mvfz 1/1 Running 0 4d2h
nginx-test-6456659578-k8cjz 1/1 Running 0 4d2h
nginx-test-6456659578-qr4ts 1/1 Running 0 4d2h
  • Expose the port with a NodePort Service: leon_nodeport_test.yaml
apiVersion: v1        #API version
kind: Service            #resource kind
metadata:
 name: nginx-service        #name of the Service
spec:
 type: NodePort            #NodePort mode
 ports:
 - port: 80
   targetPort: 80
   nodePort: 30001
 selector:
  app: nginx                #selects the pods above, see leon_nginx_app_test.yaml
  • Apply leon_nodeport_test.yaml, as shown below
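A minimal way to apply the Service and confirm the assigned NodePort (nginx-service is the name defined above):

kubectl create -f leon_nodeport_test.yaml
# confirm the Service and the assigned NodePort (30001)
kubectl get svc nginx-service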

Test access

Since the NodePort was defined as 30001 above, the nginx service can be reached by accessing that port on each of the two worker nodes:

###ktest02 node
curl http://10.20.161.159:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

###ktest03 node
curl http://10.20.161.155:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

At this point the k8s cluster is up and running.
