Setting Up a Simple Kubernetes Cluster on CentOS with Yum

Environment Overview

Name            Version
VirtualBox.app  6.0.10
Vagrant         2.2.3
CentOS          7.6
Etcd            3.3.11
Kubernetes      v1.5.2

The virtual machine configuration is given in the Vagrantfile below.

Environment Preparation

Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 1.6.0"

boxes = [
  {
    :name => "k8s-master",
    :eth1 => "192.168.11.21",
    :mem => "1024",
    :cpu => "1"
  },
  {
    :name => "k8s-node1",
    :eth1 => "192.168.11.22",
    :mem => "1024",
    :cpu => "1"
  },
  {
    :name => "k8s-node2",
    :eth1 => "192.168.11.23",
    :mem => "1024",
    :cpu => "1"
  }
]

Vagrant.configure(2) do |config|

  config.vm.box = "centos-7.6"

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end

      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end

      config.vm.network :private_network, ip: opts[:eth1]
    end
  end
end
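
With the Vagrantfile in place, bring the three machines up and log in to the master (this assumes the centos-7.6 box has already been added locally):

vagrant up              # create and boot k8s-master, k8s-node1, k8s-node2
vagrant ssh k8s-master  # open a shell on the master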

Once the machines are created, update the yum and EPEL repositories on all three of them:

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
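
To confirm the new mirrors are in effect, you can list the enabled repositories:

yum repolist enabled  # the Aliyun base and epel repositories should appear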

Then edit /etc/hosts on each of the three machines and add the following entries, so the hosts can reach each other by name:

192.168.11.21 k8s-master
192.168.11.22 k8s-node1
192.168.11.23 k8s-node2

After saving, verify that the hosts can reach one another.
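
For example, from the master:

[root@k8s-master ~]# ping -c 3 k8s-node1
[root@k8s-master ~]# ping -c 3 k8s-node2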

Building the Cluster

Install and Configure etcd on the Master Node

Install etcd with yum:

[root@k8s-master ~]# yum install etcd -y  # install etcd

Edit the etcd configuration file:

[root@k8s-master ~]# vim /etc/etcd/etcd.conf  # edit the configuration file
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" # line 6
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.21:2379" # line 21

Start etcd and enable it at boot:

[root@k8s-master ~]# systemctl start etcd.service  # start etcd
[root@k8s-master ~]# systemctl enable etcd.service # enable at boot

Check that the etcd service is healthy:

[root@k8s-master ~]# etcdctl -C http://192.168.11.21:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.11.21:2379
cluster is healthy
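
As an additional sanity check, write and read back a test key (using etcdctl's v2 commands, which this package version uses by default):

[root@k8s-master ~]# etcdctl -C http://192.168.11.21:2379 set /test "hello"
hello
[root@k8s-master ~]# etcdctl -C http://192.168.11.21:2379 get /test
hello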

Install Kubernetes on the Master Node

Install the Kubernetes master components with yum:

[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y

Edit the kube-apiserver configuration file:

[root@k8s-master ~]# vim /etc/kubernetes/apiserver  # :set nu shows line numbers in vim

# line 8: change the default to 0.0.0.0
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# line 11: uncomment
KUBE_API_PORT="--port=8080"
# line 14: uncomment
KUBELET_PORT="--kubelet-port=10250"
# line 17: change the default to the master node's IP
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.11.21:2379"
# line 23: remove ServiceAccount from the list (it requires service-account keys that this minimal setup does not configure)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

Edit the shared Kubernetes configuration file:

[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.11.21:8080" # line 22

Enable and start the Kubernetes master services:

[root@k8s-master ~]# systemctl enable kube-apiserver.service  
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service
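
Before moving on, you can confirm the API server is answering on the insecure port:

[root@k8s-master ~]# curl -s http://192.168.11.21:8080/version  # should return a JSON blob reporting version 1.5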

Check that the components are healthy:

[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok

Install Kubernetes on the Node Machines

Run the following commands on both k8s-node1 and k8s-node2:

yum install kubernetes-node.x86_64 -y  # install the Kubernetes node components

vim /etc/kubernetes/config  # edit the shared configuration file (:set nu shows line numbers in vim)
KUBE_MASTER="--master=http://192.168.11.21:8080"

vim /etc/kubernetes/kubelet  # edit the kubelet configuration file

# line 5: change the default to 0.0.0.0
KUBELET_ADDRESS="--address=0.0.0.0"
# line 8: uncomment
KUBELET_PORT="--port=10250"
# line 11: set to this node's own IP (192.168.11.23 on k8s-node2)
KUBELET_HOSTNAME="--hostname-override=192.168.11.22"
# line 14: set to the master node's address
KUBELET_API_SERVER="--api-servers=http://192.168.11.21:8080"

After updating the configuration, enable and restart the services on each node:

systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service
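
Both services should now be running on each node:

systemctl is-active kubelet.service kube-proxy.service  # both lines should print "active"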

Configure the Flannel Network on All Nodes

Install Flannel on every node, then point its configuration at the master's etcd:

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://192.168.11.21:2379#g' /etc/sysconfig/flanneld

On the master node:

etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'  # store flannel's network range in etcd
yum install docker -y
systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl enable docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
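
After the restarts, flannel0 and docker0 should both hold addresses inside the 172.18.0.0/16 range written to etcd (interface names assume this package's default UDP backend):

ip -4 addr show flannel0      # e.g. 172.18.x.0/16
ip -4 addr show docker0       # e.g. 172.18.x.1/24, in the same subnet as flannel0
cat /run/flannel/subnet.env   # the subnet flannel leased for this host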

On the node machines:

systemctl enable flanneld.service 
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

vim /usr/lib/systemd/system/docker.service
# add the following line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload
systemctl restart docker
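
Newer Docker packages set the FORWARD chain policy to DROP, which blocks cross-node pod traffic; the ExecStartPost line above resets it on every Docker start. Verify with:

iptables -S FORWARD | head -1  # should print "-P FORWARD ACCEPT"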

Configure the Master Node as an Image Registry

# on every node: trust the master's registry (and use a domestic mirror)
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.11.21:5000"]
}

systemctl restart docker

# on the master node only: run the registry container
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
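
To confirm the registry works end to end, push a small image to it from any node (busybox here is just an arbitrary example image):

docker pull busybox
docker tag busybox 192.168.11.21:5000/busybox:latest
docker push 192.168.11.21:5000/busybox:latest
curl http://192.168.11.21:5000/v2/_catalog  # the repository list should include "busybox"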

At this point a simple Kubernetes cluster is up and running.

Testing the Environment

On the master node, check that the nodes have registered with the cluster:

[root@k8s-master ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.11.22   Ready     29s
192.168.11.23   Ready     27s

If you see output like the above, all the cluster nodes are running normally.
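
As a final smoke test, schedule a workload and watch it start (on Kubernetes 1.5, kubectl run creates a Deployment; note that on this version the nodes must also be able to pull the pod-infrastructure pause image, so the first start may take a while or fail if that image is unreachable):

[root@k8s-master ~]# kubectl run nginx --image=nginx --replicas=2
[root@k8s-master ~]# kubectl get pods -o wide  # pods should reach Running, spread across the two nodes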