Deploying a Kubernetes Cluster

Source: 博客园 (cnblogs)

Installation Methods

  • kubernetes binary installation (the most tedious to configure, no easier than installing OpenStack)

  • kubeadm installation (an automated installer from Google; it needs network access to pull images)

  • minikube installation (only for trying out k8s)



  • yum installation (the simplest, though the packaged version is fairly old; this is the method recommended for learning)

  • building from Go source (the hardest)

Basic Environment

IP: 192.168.115.149    hostname: node1
IP: 192.168.115.151    hostname: node2
IP: 192.168.115.152    hostname: node3

Preparation

Note: all three machines in the k8s cluster need the preparation steps below.

1. Check IP and UUID: make sure the MAC address and product_uuid are unique on every node.

2. Allow iptables to see bridged traffic: make sure the br_netfilter module is loaded, that iptables can correctly see bridged traffic, and set the required sysctl parameters.

3. Disable SELinux, the firewall, and swap.

4. Set the hostnames and add entries to /etc/hosts.

5. Install Docker: mind the version compatibility between Docker and k8s, and set Docker's cgroup driver to systemd, otherwise the later kubeadm init will print warnings about it (see the sketch after the commands below).

##### Check IP and UUID
ifconfig -a        # or: ip a
cat /sys/class/dmi/id/product_uuid

##### Allow iptables to see bridged traffic
# 1. Make sure the br_netfilter module is loaded
lsmod | grep br_netfilter      # list loaded kernel modules
modprobe br_netfilter          # load the module if it is missing

# 2. Make sure iptables can correctly see bridged traffic:
#    net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration
sysctl -a | grep net.bridge.bridge-nf-call-iptables

# 3. Set the sysctl parameters (standard settings from the kubeadm docs)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

##### Disable SELinux
setenforce 0                   # temporary, lost after a reboot
# permanent: edit the config file, then reboot the server
vim /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled

##### Disable the firewall
systemctl status firewalld
systemctl stop firewalld

##### Disable swap
swapoff -a                     # temporary
# permanent:
vim /etc/fstab                 # comment out the swap mount line
vim /etc/sysctl.d/k8s.conf     # optional, add the line below
# vm.swappiness=0
sysctl -p /etc/sysctl.d/k8s.conf

##### Set hostnames and add hosts entries
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
vim /etc/hosts
# 192.168.115.149  k8s-master
# 192.168.115.151  k8s-node1
# 192.168.115.152  k8s-node2
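
Step 5 above mentions setting Docker's cgroup driver to systemd but does not show how. A minimal sketch of one common way, assuming Docker is configured through /etc/docker/daemon.json:

# Point Docker at the systemd cgroup driver (assumed approach, not shown in the original)
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info | grep -i "cgroup driver"      # should report: Cgroup Driver: systemd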

Installing and Deploying with yum

Reference: "k8s搭建部署(超详细)", Anime777's CSDN blog.

Install kubeadm, kubelet, and kubectl

Note: run these steps on all three machines in the k8s cluster.

# Add the Alibaba Cloud YUM repository for Kubernetes
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Install kubeadm, kubelet and kubectl; make sure the versions match your Docker version
yum install -y kubelet-1.21.1 kubeadm-1.21.1 kubectl-1.21.1

# Start and enable kubelet (note: on the master node)
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
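
A quick sanity check that the pinned 1.21.1 versions were actually installed (not part of the original walkthrough):

kubeadm version -o short
kubelet --version
kubectl version --client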

Cluster Deployment

# On the master node: initialize the control plane
kubeadm init \
  --apiserver-advertise-address=192.168.115.149 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.1 \
  --service-cidr=10.140.0.0/16 \
  --pod-network-cidr=10.240.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the worker nodes: run the join command that a successful kubeadm init prints out

kubeadm join 192.168.115.149:6443 --token swshsb.7yu37gx1929902tl \
    --discovery-token-ca-cert-hash sha256:626728b1a039991528a031995ed6ec8069382b489c8ae1e61286f96fcd9a3bfc

# After the nodes have joined, check them from the master node
kubectl get nodes

At this point the nodes will not yet show as Ready; installing a network plugin is the final step in creating the k8s cluster.
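
If the join command was not saved or the bootstrap token has expired, a fresh one can be generated on the master node (a standard kubeadm command, not shown in the original):

kubeadm token create --print-join-command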

Installing a Network Plugin

Note: install this on the master node. Either the flannel or the calico plugin can be used; flannel is installed here.

vim kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  allowedCapabilities: ["NET_ADMIN", "NET_RAW"]
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  seLinux:
    rule: "RunAsAny"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: ["psp.flannel.unprivileged"]
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Change the network in net-conf.json to the pod-network-cidr used when initializing the master:

sed -i "s/10.244.0.0/10.240.0.0/" kube-flannel.yml

# Apply the manifest
kubectl apply -f kube-flannel.yml
# Check the installation status
kubectl get pods --all-namespaces
# Check whether the cluster nodes are now Ready
kubectl get nodes

=== Appendix: uninstalling flannel ===

1. On the master node, locate the flannel manifest and delete flannel:

kubectl delete -f kube-flannel.yml

2. On each node, clean up the files left behind by the flannel network:

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*

After running the commands above, restart kubelet.
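
For completeness, the kubelet restart mentioned above, assuming kubelet is managed by systemd as in the yum installation earlier:

systemctl restart kubelet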

Testing the Kubernetes Cluster

Note: create a pod and expose a port for external access. A random NodePort will be mapped; since no namespace is specified, the deployment is created in the default namespace.

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
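
A quick way to verify the test deployment (a sketch: the IP below is the master address used earlier, and the NodePort is whatever kubectl reports):

kubectl get pods -o wide                   # wait until the nginx pod is Running
kubectl get svc nginx                      # note the randomly assigned NodePort, e.g. 80:3xxxx/TCP
curl http://192.168.115.149:<NodePort>     # should return the nginx welcome page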

Troubleshooting

kubelet fails to start on the master node

Seeing errors when checking the kubelet status at this stage is normal; simply proceed with the master initialization.
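
The kubelet state can be inspected with the usual systemd tools, for example (generic commands, not taken from the original):

systemctl status kubelet -l              # current unit status plus the last few log lines
journalctl -xeu kubelet | tail -n 20     # recent kubelet journal entries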

Handling master initialization problems

Running:

kubeadm init \
  --apiserver-advertise-address=192.168.115.149 \
  --kubernetes-version v1.21.1 \
  --service-cidr=10.140.0.0/16 \
  --pod-network-cidr=10.240.0.0/16

This fails with an error.

Cause: because of network restrictions in mainland China, kubeadm init hangs for a long time and then fails with this error. Since no image repository was set, kubeadm init tries to pull the default Docker images from k8s.gcr.io, but https://k8s.gcr.io/v2/ cannot be reached from inside China.

Fix: pass an image repository to kubeadm init:

kubeadm init \
  --apiserver-advertise-address=192.168.115.149 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.1 \
  --service-cidr=10.140.0.0/16 \
  --pod-network-cidr=10.240.0.0/16

This still fails with an error.

Cause: pulling the image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 failed.

Fix: list the images that need to be downloaded, pull them manually, and fix the tag.

# List the images kubeadm needs to download
kubeadm config images list

# List the images already present locally
docker images

The coredns:v1.8.0 image is already present locally, just under a different tag, so re-tag it:

docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
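
If the image were missing locally altogether, it could first be pulled from the mirror and then re-tagged the same way (a sketch, not part of the original steps):

docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0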

Run kubeadm init again:

kubeadm init \
  --apiserver-advertise-address=192.168.115.149 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.1 \
  --service-cidr=10.140.0.0/16 \
  --pod-network-cidr=10.240.0.0/16

Success!

Record of a successful master initialization

[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.115.149 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.1 --service-cidr=10.140.0.0/16 --pod-network-cidr=10.240.0.0/16
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "k8s-master" could not be reached
    [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.115.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using "kubeadm config images pull"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.140.0.1 192.168.115.149]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.115.149 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.115.149 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.005303 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: swshsb.7yu37gx1929902tl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.115.149:6443 --token swshsb.7yu37gx1929902tl \
    --discovery-token-ca-cert-hash sha256:626728b1a039991528a031995ed6ec8069382b489c8ae1e61286f96fcd9a3bfc

kernel:NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [ksoftirqd/1:14]

A large number of heavily loaded processes can cause a CPU soft lockup. A soft lockup is a kernel-level soft deadlock: the system does not hang completely, but some processes (or kernel threads) get stuck in a particular state, usually inside the kernel, and in many cases the root cause lies in how kernel locks are used.
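
One possible mitigation, offered here as an assumption rather than something taken from the linked posts, is to raise the soft-lockup watchdog threshold so that short stalls no longer trigger the message:

# Raise the soft-lockup detection threshold from the default 10 seconds (hypothetical tuning)
echo 30 > /proc/sys/kernel/watchdog_thresh
# Or persist it across reboots:
echo "kernel.watchdog_thresh = 30" >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf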

https://blog.csdn.net/qq_44710568/article/details/104843432

https://blog.csdn.net/JAVA_LuZiMaKei/article/details/120140987
