The previous article, "Preparing the Nodes for a Binary Kubernetes Installation," covered the groundwork needed before installing Kubernetes. In this article we walk through installing the Master node.
The Master node plays a central role in the Kubernetes architecture: it is the hub through which the Kubernetes components communicate.
A Master node consists mainly of four components: the API Server, the Scheduler, the Controller Manager, and etcd.
APIServer: The API Server exposes the RESTful Kubernetes API and is the single entry point for management commands; every create, read, update, or delete of a resource goes through the API Server before the result is persisted to etcd. kubectl (the client tool shipped with Kubernetes, itself a wrapper around the Kubernetes API) talks directly to the API Server.
Scheduler: The Scheduler assigns Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with built-in scheduling algorithms but also exposes an interface, so users can plug in scheduling algorithms tailored to their own needs.
Controller Manager: If the API Server does the front-office work, the Controller Manager takes care of the back office. Every resource type has a corresponding controller, and the Controller Manager is responsible for running them. For example, when we create a Pod through the API Server, the API Server's job is done once the Pod object has been created; from then on it is the controllers that watch the Pod and drive it toward its desired state.
Etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what makes the RESTful API possible.
The following steps describe the Master node installation in detail.
Step 1: Prepare for the installation. Run the following commands on the Master node:
- Create the directories that will hold the Kubernetes binaries, configuration, and certificates
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
- Create the log directories
mkdir /var/log/{kubernetes,etcd,flanneld} -p
- Install CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Step 2: Create the JSON files for the Kubernetes CA and certificates
vi ca-config.json
vi ca-csr.json
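The contents of these two files are not shown above. A minimal sketch follows; the ten-year expiry, the "kubernetes" profile name (it must match the -profile=kubernetes argument used later), and the subject fields are illustrative values to adapt:

```shell
# ca-config.json: CA signing policy; the "kubernetes" profile is referenced
# later by the -profile=kubernetes argument of cfssl gencert.
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

# ca-csr.json: certificate signing request for the self-signed CA itself.
cat > ca-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Shaanxi", "ST": "Shaanxi", "O": "k8s", "OU": "System" }
  ]
}
EOF
```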
/usr/local/bin/cfssl gencert -initca ca-csr.json | /usr/local/bin/cfssljson -bare ca -
vi server-csr.json
{
"CN": "kubernetes",
"hosts": [
"master",
"master01",
"master02",
"master03",
"master04",
"master05",
"master06",
"master07",
"master08",
"master09",
"master10",
"master11",
"master12",
"master13",
"master14",
"master15",
"master16",
"master17",
"master18",
"master19",
"master20",
"node01",
"node02",
"node03",
"node04",
"node05",
"node06",
"node07",
"node08",
"node09",
"node10",
"node11",
"node12",
"node13",
"node14",
"node15",
"node16",
"node17",
"node18",
"node19",
"node20",
"10.0.0.10",
"10.0.0.11",
"10.0.0.12",
"10.0.0.13",
"10.0.0.14",
"10.0.0.15",
"10.0.0.16",
"10.0.0.17",
"10.0.0.18",
"10.0.0.19",
"10.0.0.20",
"10.0.0.21",
"10.0.0.22",
"10.0.0.23",
"10.0.0.24",
"10.0.0.25",
"10.0.0.26",
"10.0.0.27",
"10.0.0.28",
"10.0.0.29",
"10.0.0.30",
"127.0.0.1",
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shannxi",
"ST": "Shannxi",
"O": "k8s",
"OU": "System"
}
]
}
Here the hosts list deliberately includes spare hostnames and IP addresses. They are reserved for future cluster expansion, so that adding a new Node later does not require regenerating and redistributing the certificate.
/usr/local/bin/cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | /usr/local/bin/cfssljson -bare server
vi kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [
"master",
"master01",
"master02",
"master03",
"master04",
"master05",
"master06",
"master07",
"master08",
"master09",
"master10",
"master11",
"master12",
"master13",
"master14",
"master15",
"master16",
"master17",
"master18",
"master19",
"master20",
"node01",
"node02",
"node03",
"node04",
"node05",
"node06",
"node07",
"node08",
"node09",
"node10",
"node11",
"node12",
"node13",
"node14",
"node15",
"node16",
"node17",
"node18",
"node19",
"node20",
"10.0.0.10",
"10.0.0.11",
"10.0.0.12",
"10.0.0.13",
"10.0.0.14",
"10.0.0.15",
"10.0.0.16",
"10.0.0.17",
"10.0.0.18",
"10.0.0.19",
"10.0.0.20",
"10.0.0.21",
"10.0.0.22",
"10.0.0.23",
"10.0.0.24",
"10.0.0.25",
"10.0.0.26",
"10.0.0.27",
"10.0.0.28",
"10.0.0.29",
"10.0.0.30",
"127.0.0.1",
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shannxi",
"ST": "Shannxi",
"O": "k8s",
"OU": "System"
}
]
}
As above, the hosts list reserves spare hostnames and IP addresses for future cluster expansion.
/usr/local/bin/cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | /usr/local/bin/cfssljson -bare kube-proxy
Copy the generated certificate files into the installation directory:
cp *.pem /k8s/kubernetes/ssl/
Step 3: Deploy etcd
- Unpack the etcd binary package
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
- Create the etcd configuration file
vi /k8s/etcd/cfg/etcd
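The tutorial leaves the file contents to the reader. For a single etcd member, a sketch of the environment file might look like this; the member name etcd01 and the 10.0.0.10 addresses are placeholders for the Master's own name and IP:

```shell
# /k8s/etcd/cfg/etcd -- environment file read by the etcd systemd unit
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.10:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.10:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.0.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

For a multi-member cluster, ETCD_INITIAL_CLUSTER lists every member as comma-separated name=peer-url pairs.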
- Create the etcd systemd unit so etcd starts on boot
vi /usr/lib/systemd/system/etcd.service
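The unit file is likewise left to the reader. A sketch that loads the environment file above and points etcd at the certificates copied to /k8s/kubernetes/ssl/ might look like:

```ini
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
  --cert-file=/k8s/kubernetes/ssl/server.pem \
  --key-file=/k8s/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/k8s/kubernetes/ssl/server.pem \
  --peer-key-file=/k8s/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/k8s/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/k8s/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```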
- Start the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
- Verify that etcd is running
ps auwx | grep etcd
Step 4: Deploy the Flannel network
- Write the cluster Pod network configuration into etcd
/k8s/etcd/bin/etcdctl set /coreos.com/network/config '{"Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
- Unpack the Flannel binary package
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
- Create the Flannel configuration file
vi /k8s/kubernetes/cfg/flanneld
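A sketch of this options file, assuming etcd serves clients at https://10.0.0.10:2379 (a placeholder address) and uses the certificates generated in step 2:

```shell
# /k8s/kubernetes/cfg/flanneld -- options passed to flanneld by its unit file
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.10:2379 \
  -etcd-cafile=/k8s/kubernetes/ssl/ca.pem \
  -etcd-certfile=/k8s/kubernetes/ssl/server.pem \
  -etcd-keyfile=/k8s/kubernetes/ssl/server-key.pem"
```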
- Create the Flannel systemd unit so it starts on boot
vi /usr/lib/systemd/system/flanneld.service
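A sketch of the unit file; the mk-docker-opts.sh helper writes the subnet Flannel obtained into /run/flannel/subnet.env so Docker can pick it up later:

```ini
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```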
- Start the Flannel service
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
- Verify that Flannel is running
ps auwx | grep flanneld
Step 5: Deploy the Kubernetes API Server
- Unpack the Kubernetes server binary package
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
- Copy the binaries required for the installation into the installation directory
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
- Generate a bootstrap token; the hex string below is example output, and whatever value you generate must go into token.csv
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f381fd5bcdb9b752298fbde19da95ee6
vi /k8s/kubernetes/cfg/token.csv
f381fd5bcdb9b752298fbde19da95ee6,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
- Create the API Server configuration file
vi /k8s/kubernetes/cfg/kube-apiserver
- --logtostderr: whether to log to stderr instead of to log files;
- --v: log verbosity level;
- --etcd-servers: host names and ports of the etcd cluster; since we install etcd as a cluster, list every etcd node here;
- --bind-address: IP address to bind the HTTPS listener to; 0.0.0.0 listens on all addresses;
- --secure-port: HTTPS port, 6443 by default;
- --insecure-bind-address: IP address to bind the HTTP listener to;
- --insecure-port: HTTP port, 8080 by default;
- --advertise-address: the IP address on which to advertise the API Server to other members of the cluster; it must be reachable by the other nodes;
- --allow-privileged: whether privileged containers may run;
- --service-cluster-ip-range: CIDR range from which Kubernetes Service cluster IPs are allocated;
- --enable-admission-plugins: admission control plugins to enable;
- --authorization-mode: authorization mode; one or more of AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node (default: AlwaysAllow);
- --enable-bootstrap-token-auth: whether to enable bootstrap token authentication;
- --token-auth-file: path to the CSV-format token authentication file;
- --service-node-port-range: port range reserved for NodePort Services;
- --log-dir: log directory;
- --log-file: full path of the log file;
- --tls-cert-file, --tls-private-key-file, --client-ca-file, --service-account-key-file: certificate and key file locations;
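Put together, the flag descriptions above correspond to an options file along these lines. Every address, port, CIDR, and plugin list is illustrative and must be adapted; in particular, replace the token file contents with the token generated earlier and list one --etcd-servers entry per etcd node:

```shell
# /k8s/kubernetes/cfg/kube-apiserver -- options read by the kube-apiserver unit
KUBE_APISERVER_OPTS="--logtostderr=false \
  --v=4 \
  --log-dir=/var/log/kubernetes \
  --etcd-servers=https://10.0.0.10:2379 \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --advertise-address=10.0.0.10 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/k8s/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/k8s/kubernetes/ssl/server.pem \
  --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
  --client-ca-file=/k8s/kubernetes/ssl/ca.pem \
  --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/k8s/kubernetes/ssl/ca.pem \
  --etcd-certfile=/k8s/kubernetes/ssl/server.pem \
  --etcd-keyfile=/k8s/kubernetes/ssl/server-key.pem"
```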
- Create the API Server systemd unit
vi /usr/lib/systemd/system/kube-apiserver.service
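A sketch of the unit file, reading the options file created in the previous step:

```ini
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```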
- Start the API Server service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
- Verify that the API Server is running
ps auwx | grep kube-apiserver
Step 6: Deploy the Kubernetes Scheduler
- Create the kube-scheduler configuration file
vi /k8s/kubernetes/cfg/kube-scheduler
- --logtostderr: whether to log to stderr instead of to log files;
- --v: log verbosity level;
- --master: address and port of the API Server to connect to, here its local insecure port 127.0.0.1:8080; kube-scheduler itself serves http /metrics on port 10251 and does not yet accept HTTPS requests;
- --leader-elect: whether to enable leader election for clustered operation; the node elected leader does the work while the other replicas stand by;
- --log-dir: log directory;
- --log-file: log file location;
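A sketch of this configuration file, assuming the API Server's local insecure port 8080:

```shell
# /k8s/kubernetes/cfg/kube-scheduler -- options read by the kube-scheduler unit
KUBE_SCHEDULER_OPTS="--logtostderr=true \
  --v=4 \
  --master=127.0.0.1:8080 \
  --leader-elect=true"
```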
- Create the scheduler systemd unit
vi /usr/lib/systemd/system/kube-scheduler.service
- Start the scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
- Verify that the scheduler is running
ps auwx | grep kube-scheduler
Step 7: Deploy kube-controller-manager
- Create the kube-controller-manager configuration file
vi /k8s/kubernetes/cfg/kube-controller-manager
- --logtostderr: whether to log to stderr instead of to log files;
- --v: log verbosity level;
- --master: IP address and port of the API Server;
- --leader-elect: whether to enable leader election for clustered operation;
- --address: IP address to listen on;
- --service-cluster-ip-range: CIDR range from which Kubernetes Service cluster IPs are allocated; it must match the API Server's setting;
- --cluster-name: name of the Kubernetes cluster instance;
- --log-dir: log directory;
- --log-file: full path of the log file;
- --cluster-signing-cert-file, --cluster-signing-key-file, --root-ca-file, --service-account-private-key-file: certificate and key file locations;
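A sketch of this configuration file; the Service CIDR is illustrative and should match the one passed to the API Server:

```shell
# /k8s/kubernetes/cfg/kube-controller-manager -- options read by its unit file
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
  --v=4 \
  --master=127.0.0.1:8080 \
  --leader-elect=true \
  --address=127.0.0.1 \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/k8s/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
```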
- Create the kube-controller-manager systemd unit so it starts on boot
vi /usr/lib/systemd/system/kube-controller-manager.service
- Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
- Verify that kube-controller-manager is running
ps auwx | grep kube-controller-manager
Step 8: Add the Kubernetes binary directory /k8s/kubernetes/bin to the PATH environment variable
vi /etc/profile.d/k8s.sh
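The profile script only needs a single export; adding the etcd directory as well makes etcdctl available too:

```shell
# /etc/profile.d/k8s.sh -- sourced by login shells
# Append the Kubernetes and etcd binary directories to PATH.
export PATH=$PATH:/k8s/kubernetes/bin:/k8s/etcd/bin
```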
source /etc/profile
Step 9: Verify the Master installation
- Check the component status of the Master
kubectl get cs
- Inspect the Kubernetes logs
ls -rlt /var/log/kubernetes/
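On a healthy Master, the component status check typically resembles the following (the exact columns and wording vary with the Kubernetes version):

```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```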
This completes the Kubernetes Master installation. Through the steps above we installed etcd, Flannel, the Kubernetes API Server, the Kubernetes Scheduler, and the Kubernetes Controller Manager on the Master node. The Node installation will be covered in detail in the next article; stay tuned!
If anything in this article is inaccurate, corrections are welcome. Thank you!