Installing a Highly Available K8s Cluster with RKE
This installation demo was done on local virtual machines; replace all IP addresses below with the ones configured on your own machines.
If you use cloud servers, make sure they all sit on the same internal network, replace the addresses below with your servers' internal addresses,
and configure the inbound/outbound rules for the ports RKE requires (see: https://rancher2.docs.rancher.cn/docs/installation/requirements/ports/_index).
We prepared four servers: k8s-nginx (load balancer), k8s-node01 (node 1), k8s-node02 (node 2), and k8s-node03 (node 3).
Each of the three node machines will act as both a master and a worker; if you have the resources, you can of course run three dedicated master nodes and three dedicated worker nodes instead.
# Run the matching command on each server:
hostnamectl set-hostname k8s-nginx
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03

# Verify:
hostname

# Add the host entries on every server:
vi /etc/hosts
192.168.68.14 k8s-nginx
192.168.68.10 k8s-node01
192.168.68.11 k8s-node02
192.168.68.12 k8s-node03
systemctl stop firewalld && systemctl disable firewalld
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
<span class="token function">mv</span> /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup<span class="token function">mv</span> /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backupmv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
<span class="token function">wget</span> -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo<span class="token function">wget</span> -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repowget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
<span class="token function">cat</span> <span class="token operator">>></span> /etc/sysctl.conf<span class="token operator"><<</span><span class="token string">EOF net.ipv4.ip_forward=1 net.bridge.bridge-nf-call-iptables=1 net.ipv4.neigh.default.gc_thresh1=4096 net.ipv4.neigh.default.gc_thresh2=6144 net.ipv4.neigh.default.gc_thresh3=8192 EOF</span><span class="token function">cat</span> <span class="token operator">>></span> /etc/sysctl.conf<span class="token operator"><<</span><span class="token string">EOF net.ipv4.ip_forward=1 net.bridge.bridge-nf-call-iptables=1 net.ipv4.neigh.default.gc_thresh1=4096 net.ipv4.neigh.default.gc_thresh2=6144 net.ipv4.neigh.default.gc_thresh3=8192 EOF</span>cat >> /etc/sysctl.conf<<EOF net.ipv4.ip_forward=1 net.bridge.bridge-nf-call-iptables=1 net.ipv4.neigh.default.gc_thresh1=4096 net.ipv4.neigh.default.gc_thresh2=6144 net.ipv4.neigh.default.gc_thresh3=8192 EOF
sysctl -psysctl -psysctl -p
<span class="token function">cat</span> <span class="token operator"><<</span>EOF <span class="token operator">></span> /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables <span class="token operator">=</span> 1 net.bridge.bridge-nf-call-iptables <span class="token operator">=</span> 1 vm.swappiness<span class="token operator">=</span>0 EOF sysctl --system<span class="token function">cat</span> <span class="token operator"><<</span>EOF <span class="token operator">></span> /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables <span class="token operator">=</span> 1 net.bridge.bridge-nf-call-iptables <span class="token operator">=</span> 1 vm.swappiness<span class="token operator">=</span>0 EOF sysctl --systemcat <<EOF > /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 vm.swappiness=0 EOF sysctl --system
<span class="token function">cat</span> <span class="token operator"><<</span>EOF <span class="token operator">></span> /etc/yum.repos.d/kubernetes.repo <span class="token punctuation">[</span>kubernetes<span class="token punctuation">]</span> name<span class="token operator">=</span>Kubernetes baseurl<span class="token operator">=</span>http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled<span class="token operator">=</span>1 gpgcheck<span class="token operator">=</span>0 repo_gpgcheck<span class="token operator">=</span>0 gpgkey<span class="token operator">=</span>http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF<span class="token function">cat</span> <span class="token operator"><<</span>EOF <span class="token operator">></span> /etc/yum.repos.d/kubernetes.repo <span class="token punctuation">[</span>kubernetes<span class="token punctuation">]</span> name<span class="token operator">=</span>Kubernetes baseurl<span class="token operator">=</span>http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled<span class="token operator">=</span>1 gpgcheck<span class="token operator">=</span>0 repo_gpgcheck<span class="token operator">=</span>0 gpgkey<span class="token operator">=</span>http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOFcat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=0 repo_gpgcheck=0 gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
# Create the /etc/docker directory
mkdir /etc/docker

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "registry-mirrors": ["https://<your Id>.mirror.aliyuncs.com", "https://mirror.ccs.tencentyun.com/"]
}
EOF
# Note: pay close attention to the file encoding; if Docker fails to start, run journalctl -amu docker to see the error.
# Replace <your Id> with your own ID; look up your Aliyun registry mirror address in the Container Registry console: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
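Before restarting Docker it can be worth checking that daemon.json is valid JSON; a minimal sketch, assuming Python is installed on the host (any JSON linter works just as well):
python -m json.tool /etc/docker/daemon.json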
<span class="token function">mkdir</span> -p /etc/systemd/system/docker.service.d<span class="token function">mkdir</span> -p /etc/systemd/system/docker.service.dmkdir -p /etc/systemd/system/docker.service.d
#重启docker服务
systemctl daemon-reload <span class="token operator">&&</span> systemctl restart docker <span class="token operator">&&</span> systemctl <span class="token function">enable</span> dockersystemctl daemon-reload <span class="token operator">&&</span> systemctl restart docker <span class="token operator">&&</span> systemctl <span class="token function">enable</span> dockersystemctl daemon-reload && systemctl restart docker && systemctl enable docker
<span class="token function">useradd</span> ubuntu <span class="token function">passwd</span> ubuntu <span class="token function">usermod</span> -aG docker ubuntu<span class="token function">useradd</span> ubuntu <span class="token function">passwd</span> ubuntu <span class="token function">usermod</span> -aG docker ubuntuuseradd ubuntu passwd ubuntu usermod -aG docker ubuntu
vi /etc/sudoers
Find the line "root ALL=(ALL) ALL" and add the following below it:
ubuntu ALL=(ALL) ALL
# 1. SSH configuration
su - ubuntu
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@192.168.68.10
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@192.168.68.11
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@192.168.68.12
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@192.168.68.14
# Test passwordless login
ssh <username>@<host IP>    # e.g. ssh ubuntu@192.168.68.10
Create /etc/nginx/nginx.conf:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.68.10:80 max_fails=3 fail_timeout=5s;
        server 192.168.68.11:80 max_fails=3 fail_timeout=5s;
        server 192.168.68.12:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 192.168.68.10:443 max_fails=3 fail_timeout=5s;
        server 192.168.68.11:443 max_fails=3 fail_timeout=5s;
        server 192.168.68.12:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
# 3. Pull the nginx image
docker pull nginx:1.15

# 4. Run the nginx container
docker run -d --restart=always -p 80:80 -p 443:443 \
    --name nginx \
    -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf \
    nginx:1.15
# 1. Download RKE:
https://github.com/rancher/rke/releases/download/v1.0.10/rke_linux-amd64
# GitHub downloads can be slow; the package is also available on Baidu Netdisk:
(Link: https://pan.baidu.com/s/1RU_XoTtrsnK-nQTggVU_AQ  extraction code: caqk)
2. Create the rancher-cluster.yml file
nodes:
  - address: 192.168.68.10
    internal_address: 192.168.68.10
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.68.11
    internal_address: 192.168.68.11
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 192.168.68.12
    internal_address: 192.168.68.12
    user: ubuntu
    role: [controlplane, worker, etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

# Required when using external TLS termination with ingress-nginx v0.22 or above.
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
# 3. Run the installation
mv rke_linux-amd64 rke
chmod +x rke
./rke up --config ./rancher-cluster.yml
Wait until you see:
Finished building Kubernetes cluster successfully
The installation is complete.
Tip: if the installation fails or image pulls fail, the Docker registry mirror is probably not configured correctly; re-edit /etc/docker/daemon.json, restart Docker,
and then run rke up --config ./rancher-cluster.yml again
(it can be re-run any number of times without harm) until it succeeds.
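To confirm the registry mirrors were actually picked up after restarting Docker, one quick check (exact output layout varies by Docker version):
docker info | grep -A 3 "Registry Mirrors"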
# Set the KUBECONFIG environment variable (this step can be skipped)
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
$(pwd): replace with the directory that contains the kube_config_rancher-cluster.yml file
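For example, if RKE generated the file in /home/ubuntu (a hypothetical path, just for illustration), the export would look like this:
export KUBECONFIG=/home/ubuntu/kube_config_rancher-cluster.yml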
# Copy kube_config_rancher-cluster.yml to $HOME/.kube/config
mkdir -p ~/.kube
touch ~/.kube/config
cp -a ./kube_config_rancher-cluster.yml ~/.kube/config

# Install kubectl
yum install -y kubectl-1.18.0
# Use kubectl to test connectivity and check that all nodes are in the Ready state:
kubectl get nodes
# Output
NAME            STATUS   ROLES                      AGE     VERSION
192.168.68.10   Ready    controlplane,etcd,worker   2d17h   v1.17.6
192.168.68.11   Ready    controlplane,etcd,worker   2d17h   v1.17.6
192.168.68.12   Ready    controlplane,etcd,worker   2d17h   v1.17.6
# Check the health of the cluster Pods
Check that all required Pods and containers are healthy before moving on:
Pods should be in the Running or Completed state.
For Pods with STATUS Running, READY should show all containers running (for example, 3/3).
Pods with STATUS Completed are run-once jobs; for these Pods, READY should be 0/1.
Run:
kubectl get pods --all-namespaces
Output:
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-jpprk     1/1     Running     2          2d17h
ingress-nginx   nginx-ingress-controller-8dkq5            1/1     Running     2          2d17h
ingress-nginx   nginx-ingress-controller-8qs9n            1/1     Running     1          2d17h
ingress-nginx   nginx-ingress-controller-w7wq9            1/1     Running     1          2d17h
kube-system     canal-bngcs                               2/2     Running     2          2d17h
kube-system     canal-tdn87                               2/2     Running     2          2d17h
kube-system     canal-z9zh7                               2/2     Running     2          2d17h
kube-system     coredns-7c5566588d-95926                  1/1     Running     1          2d17h
kube-system     coredns-7c5566588d-qsgm4                  1/1     Running     1          2d17h
kube-system     coredns-autoscaler-65bfc8d47d-8m9bg       1/1     Running     1          2d17h
kube-system     metrics-server-6b55c64f86-6mn7c           1/1     Running     1          2d17h
kube-system     rke-coredns-addon-deploy-job-kw86f        0/1     Completed   0          2d17h
kube-system     rke-ingress-controller-deploy-job-qk6pw   0/1     Completed   0          2d17h
kube-system     rke-metrics-addon-deploy-job-rpr48        0/1     Completed   0          2d17h
kube-system     rke-network-plugin-deploy-job-79nc9       0/1     Completed   0          2d17h
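If anything looks unhealthy, one quick way to list only the Pods that are not Running or Completed (Completed Pods report the phase Succeeded) is a field selector, sketched here:
kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded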
This confirms that you have successfully installed a Kubernetes cluster capable of running Rancher Server.
<span class="token function">wget</span> https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz<span class="token function">wget</span> https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gzwget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
或者百度网盘下载:
(链接:https://pan.baidu.com/s/1Ma7aNznrnaEaTe1Ke4cheg 提取码:5oul)(链接:https://pan.baidu.com/s/1Ma7aNznrnaEaTe1Ke4cheg 提取码:5oul)(链接:https://pan.baidu.com/s/1Ma7aNznrnaEaTe1Ke4cheg 提取码:5oul)
<span class="token function">tar</span> -xf helm-v3.2.4-linux-amd64.tar.gz <span class="token function">cd</span> helm-v3.2.4-linux-amd64/ <span class="token function">cp</span> helm /usr/local/bin/<span class="token function">tar</span> -xf helm-v3.2.4-linux-amd64.tar.gz <span class="token function">cd</span> helm-v3.2.4-linux-amd64/ <span class="token function">cp</span> helm /usr/local/bin/tar -xf helm-v3.2.4-linux-amd64.tar.gz cd helm-v3.2.4-linux-amd64/ cp helm /usr/local/bin/
# 2. Add the Helm chart repository
# International address
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
# China mirror (recommended)
helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
# 3. Create a namespace for Rancher
kubectl create namespace cattle-system

# 4. Choose your SSL option
Install cert-manager
There are three recommended options for the source of the certificate that will be used to terminate TLS at Rancher Server:
Rancher-generated self-signed certificate: in this case you need to install cert-manager in the cluster. Rancher uses cert-manager to issue and maintain its certificate. Rancher generates its own CA certificate and signs the serving certificate with that CA; cert-manager then takes care of managing that certificate.
Let's Encrypt: the Let's Encrypt option also uses cert-manager, but combined with a special Issuer; cert-manager performs everything required (including the request and the validation) to obtain a certificate issued by Let's Encrypt. This configuration uses HTTP validation (HTTP-01), so the load balancer must have a public DNS record reachable from the internet.
Bring your own certificate: this option lets you use a certificate signed by your own trusted CA or by a self-signed CA. Rancher uses the certificate to secure WebSocket and HTTPS traffic. You must upload the PEM-format certificate and key, named tls.crt and tls.key respectively. If you use a private CA you must also upload the CA certificate, because your nodes may not trust it; Rancher takes the CA certificate and generates a checksum from it, which the various Rancher components use to validate their connections to Rancher.
Here we choose the Rancher-generated certificate:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0
After installing cert-manager, you can verify it is deployed correctly by checking the Pods running in the cert-manager namespace:
kubectl get pods --namespace cert-manager

cert-manager-766d5c494b-6gtfk              1/1   Running   0   2m58s
cert-manager-cainjector-6649bbb695-7fbp4   1/1   Running   0   2m58s
cert-manager-webhook-68d464c8b-s4hrj       1/1   Running   0   2m58s

The cert-manager-webhook image is fairly large, so the pull can take a while; check again after a moment.
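Instead of re-running the command by hand, you can also watch the namespace until all three Pods are Running (press Ctrl+C to stop):
kubectl get pods --namespace cert-manager -w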
# 5. Install Rancher with Helm according to the SSL option you chose
# Rancher needs a domain name at install time, so we edit the hosts file on the local machine (your own computer, not the VMs) to map a custom domain to the Rancher host:
Edit the hosts file:
1. On Windows the full path of the hosts file is "C:\Windows\System32\drivers\etc\hosts". If you cannot see it, the path is hidden; tick "Hidden items" under the "View" menu.
2. Change the permissions: in Properties > Security, grant yourself "Full control" so that the edited hosts file can be saved.
3. Open it with Notepad, add the following line at the end, and save; the domain name can be anything you like:
192.168.68.14 www.rancher.k8s.com
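You can confirm the mapping works from your local machine (assuming ICMP to the load balancer is allowed):
ping www.rancher.k8s.com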
Start installing Rancher.
Here we install Rancher using the Rancher-generated self-signed certificate:
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=www.rancher.k8s.com
Replace <CHART_REPO> with the stable channel we chose above (see the concrete command below); www.rancher.k8s.com is the DNS name we set up for the load balancer at the start.
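With the rancher-stable repo added earlier, the concrete command becomes:
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=www.rancher.k8s.com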
The output includes:
Browse to https://www.rancher.k8s.com
That is the address we will use to access Rancher.
# Check whether Rancher is installed and running:
Run:
kubectl -n cattle-system rollout status deploy/rancher
Output:
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Once you see:
deployment "rancher" successfully rolled out
Rancher is running successfully.
If you see the error: error: deployment "rancher" exceeded its progress deadline,
you can check the status of the deployment by running:
kubectl -n cattle-system get deploy rancher
Note (important): because we resolved Rancher's domain name locally through the hosts file, the two patches below must be applied; in a production environment where you have a real public domain name, they are not needed.
# cattle-cluster-agent configuration
kubectl -n cattle-system \
  patch deployments cattle-cluster-agent --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames": ["www.rancher.k8s.com"],
                        "ip": "192.168.68.14"
                    }
                ]
            }
        }
    }
}'
#cattle-node-agent pod
kubectl -n cattle-system \
  patch daemonsets cattle-node-agent --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames": ["www.rancher.k8s.com"],
                        "ip": "192.168.68.14"
                    }
                ]
            }
        }
    }
}'
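After patching, you can wait for both agents to roll out with the new hostAliases:
kubectl -n cattle-system rollout status deploy/cattle-cluster-agent
kubectl -n cattle-system rollout status ds/cattle-node-agent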
That completes the highly available Rancher/Kubernetes installation; further usage tutorials will follow in later updates!
Original article: https://blog.csdn.net/qq_28408457/article/details/118358493