Kubernetes
1. Architecture Overview and Cluster Planning
- master api server
K8S architecture diagram (figure not included)

Master node
API Server: provides the k8s API; it mainly handles REST operations and updates the corresponding objects in etcd. It is the single entry point for creating, deleting, updating, and querying all resources.
Scheduler: resource scheduling; responsible for scheduling Pods onto Nodes.
Controller Manager: all other cluster-level functions are currently carried out by the Controller Manager; it is the automation control center for resource objects.
Etcd: a persistent store; the central store for all persisted cluster state.
Node components
Kubelet
Manages Pods as well as containers, images, and Volumes, implementing node-level management for the cluster.
Kube-proxy
Provides network proxying and load balancing (LVS/IPVS), implementing communication with Services.
Docker Engine
Responsible for container management on the node.
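Once the cluster built in the rest of these notes is up, a quick way to see all of these components from any host with a working kubectl (a sketch; exact output depends on your cluster):
kubectl get componentstatuses   # health of scheduler, controller-manager and etcd
kubectl get nodes -o wide       # the nodes whose kubelet has registered
kubectl -n kube-system get pods # system add-ons such as DNS and the dashboard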
Manually Deploying K8S
Docker registry mirror (image pull acceleration)
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://yourself.mirror.aliyuncs.com"]
}
EOF
# apply for an Aliyun account to get your own mirror accelerator URL and substitute it above
vim /etc/default/docker
DOCKER_OPTS="--registry-mirror=http://aad0405c.m.daocloud.io"
service docker restart
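To confirm the mirror is actually picked up (the exact wording of the output varies by Docker version):
docker info | grep -A1 "Registry Mirrors"
# should list the mirror URL configured above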
System initialization
1. Install Docker
Official Docker install docs: https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce-1
Older Docker releases were called docker or docker-engine. If any are installed, remove them first:
apt-get remove docker docker-engine docker.io
apt-get update
apt-get install \
linux-image-extra-$(uname -r) \
linux-image-extra-virtual
apt-get update
apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install docker-ce
# list the installable versions
apt-cache madison docker-ce
# install a specific docker version
apt-get install docker-ce=<VERSION> # docker-ce=18.03.0~ce-0~ubuntu
docker --version
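A quick smoke test of the installation (assumes the host can pull the hello-world image from the mirror or Docker Hub):
service docker status
docker run --rm hello-world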
Prepare the deployment directories
# run on all nodes
mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
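Optional but convenient, since later steps call binaries in /opt/kubernetes/bin/ directly: put that directory on PATH on every node (this extra step is an assumption, not part of the original package layout):
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile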
Prepare the software packages
Upload the provided packages to the server, place them under /usr/local/src, and extract them:
tar xf kubernetes.tar.gz
tar xf kubernetes-client-linux-amd64.tar.gz
tar xf kubernetes-node-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
Manually creating the CA certificate
1. The k8s components use TLS certificates to encrypt their communication.
CA certificate management tools:
- easyrsa
- openssl
- cfssl
We use cfssl here. The binaries are already included in the provided packages (see the cfssl download page otherwise).
Initialize cfssl
root@k8s-01:/usr/local/src# ll /usr/local/src/cfssl*
-rw-r--r-- 1 root root 6595195 Mar 30 2016 /usr/local/src/cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root 10376657 Mar 30 2016 /usr/local/src/cfssl_linux-amd64
-rw-r--r-- 1 root root 2277873 Mar 30 2016 /usr/local/src/cfssljson_linux-amd64
root@k8s-01:/usr/local/src# chmod +x /usr/local/src/cfssl*
root@k8s-01:/usr/local/src# ll /usr/local/src/cfssl*
-rwxr-xr-x 1 root root 6595195 Mar 30 2016 /usr/local/src/cfssl-certinfo_linux-amd64*
-rwxr-xr-x 1 root root 10376657 Mar 30 2016 /usr/local/src/cfssl_linux-amd64*
-rwxr-xr-x 1 root root 2277873 Mar 30 2016 /usr/local/src/cfssljson_linux-amd64*
root@k8s-01:/usr/local/src# mv /usr/local/src/cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
root@k8s-01:/usr/local/src# mv /usr/local/src/cfssl_linux-amd64* /opt/kubernetes/bin/cfssl
root@k8s-01:/usr/local/src# mv /usr/local/src/cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
root@k8s-01:/opt/kubernetes/bin# scp * node2:/opt/kubernetes/bin/
cfssl 100% 10MB 9.9MB/s 00:00
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
cfssljson 100% 2224KB 2.2MB/s 00:00
root@k8s-01:/opt/kubernetes/bin# scp * node3:/opt/kubernetes/bin/
cfssl 100% 10MB 9.9MB/s 00:00
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
cfssljson
Create the JSON config file used to generate the CA certificate (available from the official docs)
root@k8s-01:~# cd /usr/local/src/
root@k8s-01:/usr/local/src# mkdir ssl && cd ssl
vim ca-config.json
#######
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
Create the JSON config file used to generate the CA certificate signing request (CSR) (available from the official docs)
vim ca-csr.json
#######
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the CA certificate (ca.pem) and private key (ca-key.pem)
# run in the current directory
cfssl gencert -initca ca-csr.json |cfssljson -bare ca
# list the generated files
root@k8s-01:/usr/local/src/ssl# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
Distribute the certificates (required on every node)
cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl/
scp ca.csr ca.pem ca-key.pem ca-config.json node2:/opt/kubernetes/ssl/
scp ca.csr ca.pem ca-key.pem ca-config.json node3:/opt/kubernetes/ssl/
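Optionally, inspect the generated CA certificate to confirm its subject and validity period:
/opt/kubernetes/bin/cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem
# or, with openssl:
openssl x509 -in /opt/kubernetes/ssl/ca.pem -noout -subject -dates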
ETCD cluster deployment
About etcd
A distributed key-value store similar to ZooKeeper. See the official ETCD site.
Download and install etcd
The package is already included in the provided installers; just extract it:
tar xf etcd-v3.2.18-linux-amd64.tar.gz
cd etcd-v3.2.18-linux-amd64/
scp etcd etcdctl node2:/opt/kubernetes/bin/
scp etcd etcdctl node3:/opt/kubernetes/bin/
Create the etcd certificate signing request:
cd /usr/local/src/ssl/
vim etcd-csr.json
##########
# Note: the IPs below are the node IPs. You could list only the local node's IP and generate a per-node cert,
# but here we include all node IPs so one shared certificate can be used on every node.
##########
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"10.80.81.246",
"10.80.80.61",
"10.80.81.144"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the certificate (make sure the previous step completed without errors)
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Distribute the certificates
cp etcd*.pem /opt/kubernetes/ssl/
scp etcd*.pem node2:/opt/kubernetes/ssl/
scp etcd*.pem node3:/opt/kubernetes/ssl/
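Optionally, verify that every node IP made it into the certificate's Subject Alternative Name list:
openssl x509 -in /opt/kubernetes/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"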
Prepare the etcd data directory (create on every node)
mkdir /data/etcd/
Create the etcd config file (usable with systemd startup; for Ubuntu/upstart see below)
vim /opt/kubernetes/cfg/etcd.conf
#[member]
## ETCD_NAME must be different on every node
ETCD_NAME="etcd-node1"
## data directory (matches the directory created above)
ETCD_DATA_DIR="/data/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
## listen URLs: 2379 for clients, 2380 for peer (cluster) communication
ETCD_LISTEN_PEER_URLS="https://10.80.81.246:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.80.81.246:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.80.81.246:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
## initial cluster members; note the name=url format
ETCD_INITIAL_CLUSTER="etcd-node1=https://10.80.81.246:2380,etcd-node2=https://10.80.80.61:2380,etcd-node3=https://10.80.81.144:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.80.81.246:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
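On systemd-based systems, a minimal unit that consumes the file above could look like the sketch below (not part of the original notes; etcd only reads variables with the ETCD_ prefix, so CLIENT_CERT_AUTH / PEER_CLIENT_CERT_AUTH are passed here as explicit flags instead):

# /etc/systemd/system/etcd.service (sketch)
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/data/etcd/
EnvironmentFile=/opt/kubernetes/cfg/etcd.conf
ExecStart=/opt/kubernetes/bin/etcd --client-cert-auth --peer-client-cert-auth
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Then reload and start:
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd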
Ubuntu etcd upstart job and config (override); see the upstart override usage docs and the override example git repository
cat /etc/init/etcd.conf
# the description stanza documents this upstart job; the job file name is the name of the managed service
description "Etcd service"
author "@wangpenghong"
# start on / stop on set the conditions for starting/stopping the service, depending on the service and its environment;
# here it starts once the network devices are up, local filesystems are mounted, and the runlevel is 2-5
start on (net-device-up
and local-filesystems
and runlevel [2345])
# stop the service when the runlevel is 0, 1 or 6
stop on runlevel [016]
# if the service exits abnormally, keep restarting it until it exits normally
respawn
# Syntax: respawn limit COUNT INTERVAL | unlimited; allow at most COUNT restarts within INTERVAL seconds
#respawn limit 10 5
# after start, the service's stdout and stderr are redirected to /var/log/upstart/{service_name}.log
console log
# set max open files limit; caps the resources used by this upstart job
limit nofile 2048 4096
# # Syntax: pre-start exec | script; prepares the runtime environment before the service starts
# pre-start script
# # see also https://github.com/jainvipin/kubernetes-ubuntu-start
# ETCD=/opt/kubernetes/bin/$UPSTART_JOB
# if [ -f /etc/default/$UPSTART_JOB ]; then
# . /etc/default/$UPSTART_JOB
# fi
# if [ -f $ETCD ]; then
# exit 0
# fi
# echo "$ETCD binary not found, exiting"
# exit 22
#
# # end script marks the end of the pre-start script block
# end script
# the shell block to execute; terminated by the "end script" marker
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
#exec "$ETCD" $ETCD_OPTS
chdir /data/etcd
exec /opt/kubernetes/bin/etcd >> /var/log/etcd.log 2>&1
#exec /opt/kubernetes/bin/etcd --ETCD_DATA_DIR="/data/etcd/default.etcd"
# end of the script block
end script
The override file supplies the environment for the upstart etcd.conf job
cat /etc/init/etcd.override
#[member]
env ETCD_NAME="etcd-node1"
#ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
env ETCD_DATA_DIR="/data/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
env ETCD_LISTEN_PEER_URLS="https://10.80.81.246:2380"
env ETCD_LISTEN_CLIENT_URLS="https://10.80.81.246:2379,https://127.0.0.1:2379"
#env ETCD_MAX_SNAPSHOTS="5"
#env ETCD_MAX_WALS="5"
#env ETCD_CORS=""
#env [cluster]
env ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.80.81.246:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
env ETCD_INITIAL_CLUSTER="etcd-node1=https://10.80.81.246:2380,etcd-node2=https://10.80.80.61:2380,etcd-node3=https://10.80.81.144:2380"
env ETCD_INITIAL_CLUSTER_STATE="new"
env ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
env ETCD_ADVERTISE_CLIENT_URLS="https://10.80.81.246:2379"
#[security]
env CLIENT_CERT_AUTH="true"
env ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
env ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
env ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
env PEER_CLIENT_CERT_AUTH="true"
env ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
env ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
env ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
Start the etcd service on all nodes
start etcd
Check the cluster health
etcdctl --endpoints=https://10.80.81.246:2379 \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
# on the CentOS cluster
etcdctl --endpoints=https://10.31.145.179:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
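Besides cluster-health, member list prints each member's peer and client URLs (same TLS flags):
etcdctl --endpoints=https://10.80.81.246:2379 \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem member list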
Master node deployment
- APIServer provides the REST API for cluster management, including authentication/authorization, data validation, and cluster state changes
- only the api server operates on etcd directly
- all other components query or modify data through the api server
- it is the hub for data exchange and communication between the other components
- scheduler is responsible for scheduling Pods onto node machines in the cluster
- it watches kube-apiserver for Pods that have not yet been assigned a Node
- and assigns nodes to those Pods according to the scheduling policy
- controller-manager consists of a set of controllers; it drives the state of the whole cluster through the apiserver and keeps the cluster in its desired working state
Deploy the binaries (already included in the package; only needed on node1)
cd /usr/local/src/kubernetes
cp server/bin/kube-apiserver /opt/kubernetes/bin/
cp server/bin/kube-controller-manager /opt/kubernetes/bin/
cp server/bin/kube-scheduler /opt/kubernetes/bin/
1. Create the JSON config file for generating the CSR
cd /usr/local/src/ssl
vim kubernetes-csr.json
####### config file body below; remove the inline comments before use to avoid parse errors ############
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.80.81.246", # 本机IP地址 Master IP地址
"10.1.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
2. Generate the kubernetes certificate and private key, then distribute them
# generate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# distribute
cp kubernetes*pem /opt/kubernetes/ssl/
scp kubernetes*pem node2:/opt/kubernetes/ssl/
scp kubernetes*pem node3:/opt/kubernetes/ssl/
3. Create the client token file used by kube-apiserver
What is it for? It backs kubelet TLS bootstrapping: a new kubelet first authenticates to the apiserver with this shared token (as the kubelet-bootstrap user) so that it can request its client certificate.
Generate a token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# output:
7ba00db78a9c84ab23fad54c51abfc61
vim /opt/kubernetes/ssl/bootstrap-token.csv
##### file contents (format: token,user,uid,"group") ####
7ba00db78a9c84ab23fad54c51abfc61,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
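The two steps above can also be combined so the token never has to be copied by hand (same commands, just captured in a variable):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/ssl/bootstrap-token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF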
4. Create the basic username/password authentication config
vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2
5. Deploy the Kubernetes API Server: configuration and startup
cat /etc/init/kube-apiserver.conf
# the description stanza documents this upstart job; the job file name is the name of the managed service
description "Kube-apiserver service"
author "@wangpenghong"
# https://github.com/GoogleCloudPlatform/kubernetes
# start on / stop on set the conditions for starting/stopping the service;
# here it starts once the network devices are up, local filesystems are mounted, and the runlevel is 2-5
start on (net-device-up
and local-filesystems
and runlevel [2345])
# stop the service when the runlevel is 0, 1 or 6
stop on runlevel [016]
# if the service exits abnormally, keep restarting it until it exits normally
respawn
# Syntax: respawn limit COUNT INTERVAL | unlimited; allow at most COUNT restarts within INTERVAL seconds
#respawn limit 10 5
# after start, the service's stdout and stderr are redirected to /var/log/upstart/{service_name}.log
console log
# set max open files limit; caps the resources used by this upstart job
limit nofile 2048 4096
# the shell block to execute; terminated by the "end script" marker
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
#exec /opt/kubernetes/bin/kube-apiserver --enable-bootstrap-token-auth
exec /opt/kubernetes/bin/kube-apiserver \
--bind-address="10.80.81.246" \
--insecure-bind-address="127.0.0.1" \
--authorization-mode="Node,RBAC" \
--runtime-config="rbac.authorization.k8s.io/v1" \
--kubelet-https="true" \
--anonymous-auth="false" \
--basic-auth-file="/opt/kubernetes/ssl/basic-auth.csv" \
--enable-bootstrap-token-auth \
--token-auth-file="/opt/kubernetes/ssl/bootstrap-token.csv" \
--service-cluster-ip-range="10.1.0.0/16" \
--service-node-port-range="20000-40000" \
--tls-cert-file="/opt/kubernetes/ssl/kubernetes.pem" \
--tls-private-key-file="/opt/kubernetes/ssl/kubernetes-key.pem" \
--client-ca-file="/opt/kubernetes/ssl/ca.pem" \
--service-account-key-file="/opt/kubernetes/ssl/ca-key.pem" \
--etcd-cafile="/opt/kubernetes/ssl/ca.pem" \
--etcd-certfile="/opt/kubernetes/ssl/kubernetes.pem" \
--etcd-keyfile="/opt/kubernetes/ssl/kubernetes-key.pem" \
--etcd-servers="https://10.80.81.246:2379,https://10.80.80.61:2379,https://10.80.81.144:2379" \
--enable-swagger-ui="true" \
--allow-privileged="true" \
--audit-log-maxage="30" \
--audit-log-maxbackup="3" \
--audit-log-maxsize="100" \
--audit-log-path="/opt/kubernetes/log/api-audit.log" \
--event-ttl="1h" \
--v="2" \
--logtostderr="false" \
--log-dir="/opt/kubernetes/log"
# end of the script block
end script
Verify startup
start kube-apiserver
netstat -ntlp
###
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.80.81.246:6443 0.0.0.0:* LISTEN 4972/kube-apiserver
###
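Because --insecure-bind-address=127.0.0.1 is set, the unauthenticated health endpoint is also reachable locally (on the default insecure port 8080, unless --insecure-port was changed):
curl http://127.0.0.1:8080/healthz
# expected output: ok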
Deploy the Controller Manager service: configuration and startup
cat /etc/init/kube-controller-manager.conf
# the description stanza documents this upstart job; the job file name is the name of the managed service
description "Kube-controller-manager service"
author "@wangpenghong"
start on (net-device-up
and local-filesystems
and runlevel [2345])
stop on runlevel [016]
respawn
console log
limit nofile 2048 4096
script
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
# Note: --master=http://127.0.0.1:8080 did not work here because the kube-apiserver
# listener on 127.0.0.1 was not usable when kube-apiserver started, so the internal
# address is used instead.
exec /opt/kubernetes/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://10.80.81.246:6443 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.1.0.0/16 \
--cluster-cidr=10.2.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
# end of the script block
end script
Verify startup:
start kube-controller-manager
netstat -lntp
# listening on local port 10252
########
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 14445/kube-controll
#######
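Port 10252 also serves an unauthenticated health endpoint, which makes a handy check:
curl http://127.0.0.1:10252/healthz
# expected output: ok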
#####################
kubectl deployment
Configure the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://10.31.145.179:6443
4. Set the cluster parameters
[root@linux-node1 src]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.56.11:6443
Cluster "kubernetes" set.
5. Set the client authentication parameters
(admin.pem / admin-key.pem are an admin client certificate signed by the same CA; generate them with cfssl in the same way as the other certificates if they do not exist yet)
[root@linux-node1 src]# kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.
6. Set the context parameters
[root@linux-node1 src]# kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
Context "kubernetes" created.
7. Set the default context
[root@linux-node1 src]# kubectl config use-context kubernetes
Switched to context "kubernetes".
The steps above generate the following file:
ll ~/.kube/config    # to use kubectl on another host, copy this file to that host
[root@k8s01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
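To run kubectl from another host, as mentioned above, copy the generated config over (a sketch; assumes kubectl is already installed there, and node2 is just an example hostname):
ssh node2 "mkdir -p ~/.kube"
scp ~/.kube/config node2:~/.kube/config
ssh node2 "kubectl get cs"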
Node deployment
1. Prepare the binaries: copy them from linux-node1 to the other nodes.
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
[root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.12:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.13:/opt/kubernetes/bin/
2. Create the role binding
[root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
3. Create the kubelet bootstrapping kubeconfig file. Set the cluster parameters:
[root@linux-node1 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://10.31.145.179:6443 \
--kubeconfig=bootstrap.kubeconfig
Set the client authentication parameters (the token must match the one in /opt/kubernetes/ssl/bootstrap-token.csv):
[root@linux-node1 ~]# kubectl config set-credentials kubelet-bootstrap \
--token=fd244edaebd19f903109667cd00ab139 \
--kubeconfig=bootstrap.kubeconfig
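The remaining bootstrap.kubeconfig steps follow the same pattern (a sketch of the usual flow, not copied from the original notes), after which the file is distributed to the nodes:
[root@linux-node1 ~]# kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
[root@linux-node1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
[root@linux-node1 ~]# cp bootstrap.kubeconfig /opt/kubernetes/cfg/
[root@linux-node1 ~]# scp bootstrap.kubeconfig 192.168.56.12:/opt/kubernetes/cfg/
[root@linux-node1 ~]# scp bootstrap.kubeconfig 192.168.56.13:/opt/kubernetes/cfg/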
Create the kubelet directories (on each node):
[root@linux-node2 ~]# mkdir /var/lib/kubelet
[root@linux-node2 ~]# mkdir /data/kubelet
Create the kube-proxy kubeconfig. Set the cluster parameters:
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://10.31.145.179:6443 \
--kubeconfig=kube-proxy.kubeconfig
Create the flannel network key in etcd
/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
--no-sync -C https://10.31.145.179:2379,https://10.66.205.206:2379,https://10.30.139.215:2379 \
mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
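Read the key back to confirm it was written (same certificates and endpoints):
/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
--no-sync -C https://10.31.145.179:2379,https://10.66.205.206:2379,https://10.30.139.215:2379 \
get /kubernetes/network/config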
Flannel node network deployment
Related IP concepts
Node IP: the IP of the node machine (physical host, VM, or ECS instance).
Pod IP: the Pod's IP address, allocated from the docker0 network segment.
Cluster IP: the Service IP, a virtual IP; by itself it has no connectivity, and access from outside the cluster requires additional configuration...
service # a logical component: each service corresponds to one VIP, and the service running in its Pods can be reached through that VIP
Inside the K8S cluster, communication between Node IP, Pod IP, and Cluster IP follows routing rules programmed by K8S, not ordinary IP routing.
Deployment operations
#1. create the deployment (example nginx-deployment.yaml / nginx-service.yml manifests are sketched after this list)
kubectl create -f nginx-deployment.yaml
#2. view the deployment
kubectl get deployment
#3. view the pods
kubectl get pod -o wide
#4. test access to a pod
curl --head http://10.2.x.x
#5. update the deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.12.2 --record
#6. view the updated deployment
kubectl get deployment -o wide
#7. view the rollout history
kubectl rollout history deployment/nginx-deployment
#8. view the rollout history of a specific revision
kubectl rollout history deployment/nginx-deployment --revision=1
#9. quickly roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
#10. scale out
kubectl scale deployment nginx-deployment --replicas 5
#11. create the service
kubectl create -f nginx-service.yml
#12. view the service
kubectl get service
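The nginx-deployment.yaml and nginx-service.yml used above are not included in these notes; a minimal sketch of what they might look like (the names, labels, image tag, and NodePort type are assumptions chosen to match the commands above):

# nginx-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx          # the container name referenced by "kubectl set image ... nginx=..."
        image: nginx:1.12.0
        ports:
        - containerPort: 80

# nginx-service.yml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort            # exposed on a port from the 20000-40000 range configured on the apiserver
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80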
K8S DNS service: CoreDNS
dashboard
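The token below belongs to an admin-user ServiceAccount in kube-system. A sketch of how such an account and its login token are typically created (the account name and cluster-admin binding are assumptions):
kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')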
Login token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtkcHhnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkY2E2MmU2Ni1iNmFmLTExZTgtYjdjYi0wMDFjNDJiZWQ2YmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.J-YckrharlUpGIUaA904LjOMHxK4p8maRFJtyeE3h2QExrwf9pQ4Veam91pmJNIjP2pk67PwmbDv5ruiNaQMSkBEmbjYLAllhxdL9fu_YO3OEdmoFOh8_M1Qc2CsYaBLjmsbgIx4xaQbWAjqrukT0UW4qg43avxUBH2CtfCyCNzKj1n0QhNe4mpKwuBkOA66AMBi60s9BIzk9JShjXnPaaqLJEwAkUsEQIyDeOVh_7aJyMwhpcAKiG10x8lYWzIrACpVTzOoMusgHARhXoL1xDazdL3l6_8rza8P--3RmF_fzETBpLDABTCaiCGlIu5Z_S_6a0sb_Hg-2Jixp8V_cA