VM Cluster

Create three VMs on the PVE host. Running k3s inside an LXC container (CT) is somewhat problematic, so to avoid unnecessary trouble as a beginner, stick with VMs. The first thing you need is a VM template; reinstalling the OS for every machine would be far too tedious.

Cloud-Init Support

During the process, the disk IDs may differ slightly from what you see here.

The Cloud Image needed here can be downloaded from each distribution's official site, for example:

Cloud-Init Support

k3s

According to the official site, installation is a one-liner:

curl -sfL https://get.k3s.io | sh -

But it's better not to follow the sales pitch; follow the documentation instead. Although k3s can use other container runtimes, to keep things simple we'll install and use Docker.

Installing Docker

Install Docker Engine on Ubuntu

Installing k3s

Next, install k3s itself:

# server
curl -sfL https://get.k3s.io | sh -s - --docker
cat /var/lib/rancher/k3s/server/node-token
# agent: replace myserver with the server's address and mynodetoken with the token printed above
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -s - --docker

K3s Server Configuration Reference

K3s Agent Configuration Reference

At this point, kubectl on the server works normally. The documentation goes on to install the dashboard, but the dashboard is of no use to a beginner; skip it for now.

Client access

Using kubectl on your own machine is obviously more convenient than SSHing into the server every time. Install kubectl locally, then copy the config file from the k3s server down to ~/.kube/config.

A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it

After that, kubectl on your local machine works as well.
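A sketch of that copy step, assuming the server at 192.168.2.233 from this setup and SSH access as root (adjust to your environment). k3s writes the kubeconfig pointing at 127.0.0.1, so the server address has to be rewritten after copying:

```shell
# Copy the kubeconfig from the k3s server to the local default location
scp root@192.168.2.233:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# The file points kubectl at https://127.0.0.1:6443; rewrite it to the server's real IP
# (-i.bak keeps a backup and works with both GNU and BSD sed)
sed -i.bak 's/127.0.0.1/192.168.2.233/' ~/.kube/config
# Verify the connection
kubectl get nodes
```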

The first application

Let's install httpbin. Create a resource YAML for it. Normally this is where you would reach for Helm, but as complete beginners we'll start with plain k8s YAML.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment # think of it as a managed set of container instances
metadata:
  name: httpbin
  labels: # labels on the deployment itself
    app: httpbin
spec:
  replicas: 1 # number of replicas to deploy
  selector: # selects the pods this deployment manages
    matchLabels:
      app: httpbin
  template: # the pod template
    metadata:
      labels: # labels on the pods
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin:latest
          resources: # resource limits
            limits:
              memory: "1Gi" # mixing up Mi and Gi can cost you hours of debugging
              cpu: "1000m"
            requests:
              memory: "256Mi" # note Mi, not m: "256m" would request 0.256 bytes
              cpu: "500m"
          ports:
            - containerPort: 80 # the port the container listens on

First, create a namespace, simply so the demo stays consistent and reproducible.

kubectl create namespace dev
# namespace/dev created

Apply the deployment:

kubectl apply -f deployment.yaml -n dev
# deployment.apps/httpbin created
kubectl get pods -n dev
# NAME                       READY   STATUS    RESTARTS   AGE
# httpbin-849b556cbc-cb2rr   1/1     Running   0          28s

As you can see, it's running, but we have no way to access it yet, because nothing is exposed. Let's create a service to expose it.

# service.yaml
apiVersion: v1
kind: Service # think of it as the exposed endpoint of a workload
metadata:
  name: httpbin
spec:
  selector: # the label selector ties this service to the deployment above
    app: httpbin
  type: NodePort # exposure type: the port below on every node in the cluster is forwarded to the deployment
  ports:
    - port: 8800 # the service's own port
      nodePort: 31000 # the node port; restricted to a range (30000-32767 by default)
      targetPort: 80 # the deployment's port, i.e. the container's port

Apply the service:

kubectl apply -f service.yaml -n dev
# service/httpbin created
kubectl get services -n dev
# NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
# httpbin   NodePort   10.43.177.189   <none>        8800:31000/TCP   50s
curl http://192.168.2.233:31000/ip # replace with the IP of any machine in the k3s cluster
#{
#  "origin": "10.42.2.0"
#}
# 1. Data came back, so it's reachable. 2. The origin isn't the caller's IP, so there was a forwarding hop in between.

Accessing the service by IP and port is clearly not what we want, so let's give it a domain name.

# ingress.yaml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: httpbin
spec:
  rules:
    - host: httpbin.h.test4x.com # the domain; make sure it resolves to any machine in the cluster
      http:
        paths:
          - pathType: Prefix # path match mode; Prefix with / forwards everything
            path: /
            backend:
              service:
                name: httpbin # the service name
                port:
                  number: 8800 # the service port

Test again:

kubectl apply -f ingress.yaml -n dev
# ingress.networking.k8s.io/httpbin created
curl http://httpbin.h.test4x.com/ip
# {
#   "origin": "10.42.0.1"
# } 

Configuring Let's Encrypt

We can now reach services on k8s over http, but many scenarios are awkward without https; for example, a self-hosted Docker registry requires extra configuration when it isn't served over https. To attach certificates to k8s ingresses, we need to add cert-manager.

Kubectl apply

Install it the most basic way, with a plain kubectl apply.
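For reference, that install is a single apply of the release manifest; the version below is illustrative, so substitute the current release from the cert-manager releases page:

```shell
# Install cert-manager (CRDs, controller, webhook) from the static manifest
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
```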

Once the installation finishes there will be an extra cert-manager namespace, which you can leave alone. What remains is to configure cert-manager properly.

We need to create a ClusterIssuer, which, compared with a plain Issuer, applies cluster-wide instead of to a single namespace.

# acme-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cf-dns-issuer
spec:
  acme:
    email: xuguofan@live.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef: # an ACME account key is registered automatically and stored here
      name: issuer-account-key # this is one of the simpler ways to set it up
    solvers:
      - dns01:
          cloudflare:
            email: imxgfan@live.com
            apiTokenSecretRef: # this secret has to be created beforehand
              name: cloudflare-api-key-secret
              key: api-key
        selector:
          dnsZones:
            - 'test4x.com' # DNS zones certificates can be issued for

We have to set up the Cloudflare credentials first.

Cloudflare
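The ClusterIssuer above references a secret named cloudflare-api-key-secret with the key api-key. For a ClusterIssuer, cert-manager resolves that secret in its own namespace (cert-manager by default), so create it there; the token value below is a placeholder for your Cloudflare API token:

```shell
# Create the secret that apiTokenSecretRef points at, in the cert-manager namespace
kubectl create secret generic cloudflare-api-key-secret \
  --from-literal=api-key='<your-cloudflare-api-token>' \
  -n cert-manager
```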

Next, to actually use this issuer, modify the httpbin ingress.yaml from above:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: httpbin
  annotations:
    kubernetes.io/ingress.class: traefik # new: the ingress class
    cert-manager.io/cluster-issuer: cf-dns-issuer # new: the issuer to use
spec:
  tls:
    - secretName: httpbing-h-test4x-com-tls # new: the issued certificate is stored in this secret
      hosts:
        - httpbin.h.test4x.com # new: domains on the certificate
  rules: # nothing below needs to change
    - host: httpbin.h.test4x.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: httpbin
                port:
                  number: 8800

Apply again and test:

kubectl apply -f ingress-tls.yaml -n dev
# ingress.networking.k8s.io/httpbin configured
kubectl get certs -n dev
# NAME                        READY   SECRET                      AGE
# httpbing-h-test4x-com-tls   False   httpbing-h-test4x-com-tls   12s
kubectl get order -n dev
# NAME                                         STATE     AGE
# httpbing-h-test4x-com-tls-6fsjs-3826221782   pending   21s

# The certificate isn't ready yet and the order is still pending; query again after a while.

kubectl get order -n dev
# NAME                                         STATE   AGE
# httpbing-h-test4x-com-tls-6fsjs-3826221782   valid   93s
kubectl get certs -n dev
# NAME                        READY   SECRET                      AGE
# httpbing-h-test4x-com-tls   True    httpbing-h-test4x-com-tls   100s

curl https://httpbin.h.test4x.com/ip
# {
#  "origin": "10.42.0.1"
# }

And https works!

Setting up Nexus Repository Manager

I picked Nexus Repository Manager as the example here for a few reasons:

  1. It needs persistent storage
  2. It may need multiple ports
  3. It needs https

Volume

Volumes

Since I also have a NAS, and this service shouldn't be IO-intensive, I went with an NFS mount. To use NFS volumes, every machine in the cluster must have the NFS client components installed:

sudo apt install nfs-common
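Before pointing any pod spec at the NAS, it's worth a quick check from one of the nodes that the export (server IP and path are the ones from this setup) is actually reachable and mountable:

```shell
# List the exports the NAS offers
showmount -e 192.168.2.220
# Try a manual mount, look around, then clean up
sudo mount -t nfs 192.168.2.220:/homelab/nexus /mnt
ls /mnt
sudo umount /mnt
```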

Multiple ports

About the multiple ports mentioned above: Docker expects a registry without a path prefix, but Nexus OSS serves repositories under paths, so a dedicated port has to be opened on each OSS Docker repository for Docker to use. On top of that, the free OSS edition doesn't support pushing to a grouped repository, so I may need two extra ports: one for a self-hosted repository holding my own artifacts, and one for a proxy repository to solve image downloads for CI builds later on.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - name: nexus
          image: sonatype/nexus3:latest
          resources:
            limits:
              memory: "4Gi"
              cpu: "1500m"
            requests:
              memory: "2Gi"
              cpu: "1000m"
          ports:
            - containerPort: 8081
            - containerPort: 8888 # self-hosted docker registry
            - containerPort: 8889 # docker proxy registry
          volumeMounts:
            - name: nexus-data
              mountPath: /nexus-data
      volumes:
        - name: nexus-data
          nfs:
            path: /homelab/nexus
            server: 192.168.2.220

---
apiVersion: v1
kind: Service
metadata:
  name: nexus
# Nexus is fronted by an ingress, so there is no need for NodePort,
# which would claim a port on every machine in the cluster.
# The default type ClusterIP is used here.
spec:
  selector:
    app: nexus
  ports:
    - port: 8081 # with ClusterIP there is no node port to allocate; the ports just need to be exposed
      name: common-registry-port
    - port: 8888
      name: docker-registry-port
    - port: 8889
      name: docker-proxy-registry-port
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: nexus
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: cf-dns-issuer
spec:
  tls:
    - secretName: registry-test4x-com-tls
      hosts:
        - registry.test4x.com # for the common registry
    - secretName: docker-test4x-com-tls
      hosts:
        - docker.test4x.com # for the self-hosted docker registry
    - secretName: docker-proxy-test4x-com-tls
      hosts:
        - docker-proxy.test4x.com # for the docker proxy registry
  rules:
    - host: registry.test4x.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nexus
                port:
                  number: 8081
    - host: docker.test4x.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nexus
                port:
                  number: 8888
    - host: docker-proxy.test4x.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nexus
                port:
                  number: 8889

Extra OSS configuration

Configuring Docker Bearer Token


Configuring k8s to use the self-hosted Docker registry

First you need a Docker config.json, which you would normally get via docker login your-registry.com. On a Mac, though, the needed config.json doesn't seem to be obtainable that way (the credentials end up in the keychain instead), so here's a cruder but simpler approach: write it by hand.

{
    "auths": {
        "docker.test4x.com": {
            "auth": "YWRtaW46cGFzc3dvcmQK"
        }
    }
}

The auth field is simply the base64 encoding of username:password, so you can see my credentials are:

echo 'YWRtaW46cGFzc3dvcmQK' | base64 -d
# admin:password
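To produce the auth value for your own credentials, base64-encode username:password yourself. Note that plain echo appends a newline (that's the trailing K in the value above); printf or echo -n avoids it:

```shell
# base64 of "admin:password" without a trailing newline
printf '%s' 'admin:password' | base64
# YWRtaW46cGFzc3dvcmQ=
```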

Now turn this json file into a k8s secret:

kubectl create secret generic regcred --from-file=.dockerconfigjson=config.json  --type=kubernetes.io/dockerconfigjson -n dev

When using it, we only need to specify imagePullSecrets:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: saio
          image: docker.test4x.com/xgfan/saio:latest # image from the private registry
      imagePullSecrets:
        - name: regcred # the pull secret to use

Drone CI

Server

Skipping Jenkins is purely for a change of taste. Pick your git provider, then get the Drone container running. Drone needs a handful of parameters; I chose a ConfigMap to manage them.

# drone-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-config
data:
  DRONE_GITHUB_CLIENT_ID: "your client id"
  DRONE_GITHUB_CLIENT_SECRET: "your client secret"
  DRONE_RPC_SECRET: "a random string"
  DRONE_SERVER_HOST: "ci.test4x.com"
  DRONE_SERVER_PROTO: "https"
  DRONE_GIT_ALWAYS_AUTH: "true"
  DRONE_RPC_HOST: "ci.test4x.com"
  DRONE_RPC_PROTO: "https"
  DRONE_DEBUG: "true"
  DRONE_UI_DISABLE: "false"
  DRONE_UI_USERNAME: "root"
  DRONE_UI_PASSWORD: "root"
  DRONE_USER_CREATE: "username:XGFan,admin:true" # sets the initial admin user
  DRONE_RUNNER_ENVIRON: PLUGIN_MTU:1450 # works around a network (MTU) issue
  DRONE_REGISTRY_PLUGIN_ENDPOINT: "https://docker-proxy.test4x.com" # speeds up docker image pulls via the Nexus proxy set up earlier

Deployment manifests

apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone
  labels:
    app: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone
  template:
    metadata:
      labels:
        app: drone
    spec:
      containers:
        - name: drone
          image: drone/drone:2
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "256Mi"
              cpu: "500m"
          envFrom:
            - configMapRef:
                name: drone-config
          ports:
            - containerPort: 80
          volumeMounts:
            - name: drone-data
              mountPath: /data
      volumes:
        - name: drone-data
          nfs:
            path: /homelab/drone
            server: 192.168.2.220

---
apiVersion: v1
kind: Service
metadata:
  name: drone
spec:
  selector:
    app: drone
  ports:
    - port: 80
      name: drone-port

---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: drone
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: cf-dns-issuer
spec:
  tls:
    - secretName: ci-test4x-com-tls
      hosts:
        - ci.test4x.com
  rules:
    - host: ci.test4x.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: drone
                port:
                  number: 80

Runner

Next, deploy the runner. We need to create a role and the corresponding binding for it.

Installation

apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner
  labels:
    app.kubernetes.io/name: drone-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: drone-runner
  template:
    metadata:
      labels:
        app.kubernetes.io/name: drone-runner
    spec:
      containers:
        - name: drone-runner
          image: drone/drone-runner-kube:latest
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "256Mi"
              cpu: "500m"
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: drone-config
          volumeMounts:
            - mountPath: /root/.docker/config.json
              name: config-json
              readOnly: true
      volumes:
        - name: config-json
          secret:
            secretName: regcred

DNS settings

Normally, editing the coredns ConfigMap would be enough, but since I'm running k3s, which overwrites that ConfigMap on every startup, I ended up editing the k3s configuration file directly.

Advanced Options and Configuration

Customizing DNS Service

Reference

Get a Shell to a Running Container

kubectl rollout restart deployment <name>
kubectl port-forward deployment/traefik 9000:9000 -n kube-system
# http://localhost:9000/dashboard/#/