Deploying Charmed Kubernetes with OpenStack Integrator

Introduction

Charmed Kubernetes is Canonical's distribution of Kubernetes; with juju it can be deployed to a wide range of environments.

This post walks through deploying Charmed Kubernetes on OpenStack and using the OpenStack Integrator so that OpenStack provides Persistent Volumes and Load Balancers to the cluster.

Set up and deploy the juju OpenStack Cloud Controller

juju add-cloud --client openstack

Enter the following information when prompted (the same cloud can also be defined non-interactively; see the sketch after this list):

  • cloud type: openstack
  • endpoint:
  • cert path: none
  • auth type: userpass
  • region: RegionOne (should be the default)
  • API endpoint url for the region: skip this; the endpoint above will be used directly
  • Enter another region? (y/N): N
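If you prefer not to answer the prompts interactively, the same cloud can be described in a clouds.yaml file and passed to juju add-cloud. This is a minimal sketch with placeholder values, assuming a standard Keystone v3 endpoint:

# clouds.yaml -- substitute your own Keystone endpoint
clouds:
  openstack:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: https://<KEYSTONE_ENDPOINT>:5000/v3

juju add-cloud --client openstack ./clouds.yaml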

Add OpenStack credentials

juju autoload-credentials
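juju autoload-credentials picks the credential up from the usual OS_* environment variables, so source the project's openrc file first. Alternatively, the credential can be declared explicitly in a YAML file and imported with juju add-credential; this is a sketch with placeholder values, using the attribute names of juju's userpass auth type for OpenStack:

# credentials.yaml -- placeholder values
credentials:
  openstack:
    user1:
      auth-type: userpass
      username: <OS_USERNAME>
      password: <OS_PASSWORD>
      tenant-name: <OS_PROJECT_NAME>
      user-domain-name: <OS_USER_DOMAIN_NAME>
      project-domain-name: <OS_PROJECT_DOMAIN_NAME>
      version: "3"

juju add-credential --client openstack -f ./credentials.yaml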

Upload images

juju deploy glance-simplestreams-sync --to 0 --channel 2023.2/stable --config use_swift=false
juju integrate glance-simplestreams-sync:identity-service keystone:identity-service
juju integrate glance-simplestreams-sync:certificates vault:certificates
juju run glance-simplestreams-sync/leader sync-images

The image ID can then be obtained with:

openstack image list
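For scripting, the ID can also be captured straight into the variable used in the next step instead of being pasted manually. A sketch that assumes the synced Ubuntu 22.04 image has "jammy" in its name; adjust the filter to your naming:

# take the first image whose name mentions jammy
export IMAGE=$(openstack image list -f value -c ID -c Name | awk '/jammy/ {print $1; exit}')
echo $IMAGE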

Configure the image metadata

mkdir ~/simplestreams
export IMAGE=<IMAGE_ID>
juju metadata generate-image -d ~/simplestreams -i $IMAGE -s jammy -r RegionOne -u <OPENSTACK_API_ENDPOINT>
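The generated simplestreams tree can be inspected before bootstrapping; juju only needs the index and product JSON files under this directory:

find ~/simplestreams -type f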

Set up a private network

openstack network create --internal user1_net

openstack subnet create --network user1_net --dns-nameserver 8.8.8.8 \
   --subnet-range 192.168.0.0/24 \
   --allocation-pool start=192.168.0.10,end=192.168.0.99 \
   user1_subnet
openstack router create user1_router
openstack router add subnet user1_router user1_subnet
openstack router set user1_router --external-gateway ext_net
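A quick sanity check that the subnet and router gateway are in place (read-only openstack CLI calls):

openstack network list
openstack router show user1_router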

Create the juju controller on OpenStack

juju bootstrap --debug \
   --config network=user1_net \
   --config external-network=<external_network_id> \
   --bootstrap-constraints allocate-public-ip=true \
   --bootstrap-constraints instance-type=m1.small \
   --bootstrap-series jammy \
   --metadata-source $HOME/simplestreams/ \
   openstack openstack

At the same time, in a separate terminal, attach a floating IP to the bootstrap instance so that the client node can reach the juju controller:

FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip <server_id> $FLOATING_IP
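The server ID of the bootstrap instance can be looked up while juju is still provisioning it (a sketch that relies on juju's default "juju-" instance name prefix):

openstack server list -f value -c ID -c Name | grep juju

Once the bootstrap finishes, confirm the controller is registered:

juju controllers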

Deploy Charmed Kubernetes

Add a new juju model

juju add-model --config default-series=jammy k8s openstack
juju switch openstack:k8s

Create openstack-overlay.yaml

description: Charmed Kubernetes overlay to add native OpenStack support.
applications:
  kubeapi-load-balancer: null
  openstack-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: openstack-integrator
    num_units: 1
    constraints: "cores=1 mem=1G root-disk=15G"
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-control-plane:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']
  - ['openstack-integrator', 'kubernetes-control-plane:loadbalancer']

Create cilium-overlay.yaml

description: Charmed Kubernetes overlay to add Cilium CNI.
applications:
  calico: null
  cilium:
    charm: cilium
  kubernetes-control-plane:
    options:
      allow-privileged: "true"
      sysctl: &sysctl "{net.ipv4.conf.all.forwarding: 1, net.ipv4.conf.all.rp_filter: 0, net.ipv4.neigh.default.gc_thresh1: 128, net.ipv4.neigh.default.gc_thresh2: 28672, net.ipv4.neigh.default.gc_thresh3: 32768, net.ipv6.neigh.default.gc_thresh1: 128, net.ipv6.neigh.default.gc_thresh2: 28672, net.ipv6.neigh.default.gc_thresh3: 32768, fs.inotify.max_user_instances: 8192, fs.inotify.max_user_watches: 1048576, kernel.panic: 10, kernel.panic_on_oops: 1, vm.overcommit_memory: 1}"
  kubernetes-worker:
    options:
      sysctl: *sysctl
relations:
- [cilium:cni, kubernetes-control-plane:cni]
- [cilium:cni, kubernetes-worker:cni]

Deploy Kubernetes

juju deploy charmed-kubernetes --channel=1.28/stable --overlay openstack-overlay.yaml --trust --overlay cilium-overlay.yaml

If resources are limited, the smaller kubernetes-core bundle can be used for testing:

juju deploy kubernetes-core --channel=1.28/stable --overlay openstack-overlay.yaml --trust --overlay cilium-overlay.yaml
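The deployment takes a while to settle; progress can be followed with the standard watch utility (nothing juju-specific here):

watch -n 10 juju status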

Note that Charmed Kubernetes sets default instance constraints for some of its applications, so matching flavors must exist on the OpenStack side.

These constraints can be overridden with an additional overlay.

For example:

applications:
  "kubernetes-worker":
    num_units: 1
    constraints: cores=2 mem=4G root-disk=20G
  "kubernetes-control-plane":
    num_units: 1
    constraints: cores=2 mem=4G root-disk=20G
  "etcd":
    num_units: 1
    constraints: "cores=1 mem=2G root-disk=20G"
  "easyrsa":
    num_units: 1
    constraints: "cores=1 mem=1G root-disk=15G"

Once the deployment completes, the juju status output will look something like this (kubernetes-core shown here):

Model  Controller  Cloud/Region         Version  SLA          Timestamp
k8s    openstack   openstack/RegionOne  3.1.6    unsupported  00:46:56Z

App                       Version        Status  Scale  Charm                     Channel      Rev  Exposed  Message
cilium                    1.12.5,1.12.5  active      2  cilium                    stable        24  no       Ready
containerd                1.6.8          active      2  containerd                1.28/stable   73  no       Container runtime available
easyrsa                   3.0.1          active      1  easyrsa                   1.28/stable   48  no       Certificate Authority connected.
etcd                      3.4.22         active      1  etcd                      1.28/stable  748  no       Healthy with 1 known peer
kubernetes-control-plane  1.28.4         active      1  kubernetes-control-plane  1.28/stable  321  yes      Kubernetes control-plane running.
kubernetes-worker         1.28.4         active      1  kubernetes-worker         1.28/stable  134  yes      Kubernetes worker running.
openstack-integrator      yoga           active      1  openstack-integrator      stable        69  no       Ready

Unit                         Workload  Agent  Machine  Public address  Ports       Message
easyrsa/0*                   active    idle   0/lxd/0  252.82.3.157                Certificate Authority connected.
etcd/0*                      active    idle   0        192.168.0.82    2379/tcp    Healthy with 1 known peer
kubernetes-control-plane/0*  active    idle   0        192.168.0.82    6443/tcp    Kubernetes control-plane running.
  cilium/1*                  active    idle            192.168.0.82                Ready
  containerd/1*              active    idle            192.168.0.82                Container runtime available
kubernetes-worker/0*         active    idle   1        192.168.0.68    80,443/tcp  Kubernetes worker running.
  cilium/0                   active    idle            192.168.0.68                Ready
  containerd/0               active    idle            192.168.0.68                Container runtime available
openstack-integrator/1*      active    idle   3        192.168.0.52                Ready

Machine  State    Address       Inst id                               Base          AZ    Message
0        started  192.168.0.82  91545e2c-0bbc-475d-9528-fd4742efa0b3  ubuntu@22.04  nova  ACTIVE
0/lxd/0  started  252.82.3.157  juju-572a8e-0-lxd-0                   ubuntu@22.04  nova  Container started
1        started  192.168.0.68  4c3aaf88-05fc-4de2-95fb-d7abaf75535d  ubuntu@22.04  nova  ACTIVE
3        started  192.168.0.52  386403bf-ed3d-4efd-8206-4d77693a7e29  ubuntu@22.04  nova  ACTIVE

Retrieve the kubeconfig

juju ssh kubernetes-control-plane/leader -- cat config > ~/.kube/config
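A quick read-only check confirms the kubeconfig is usable:

kubectl cluster-info
kubectl get nodes -o wide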

At this point, the output of kubectl get pods -A should include these pods:

ubuntu@juju-572a8e-k8s-0:~$ kubectl get pods -A
NAMESPACE                         NAME                                                      READY   STATUS      RESTARTS   AGE
ingress-nginx-kubernetes-worker   default-http-backend-kubernetes-worker-5c79cc75ff-cvqw7   1/1     Running     0          14m
ingress-nginx-kubernetes-worker   nginx-ingress-controller-kubernetes-worker-bc7zc          1/1     Running     0          12m
kube-system                       cilium-7ndz7                                              1/1     Running     0          14m
kube-system                       cilium-operator-577bfbbd5b-5fmvj                          1/1     Running     0          14m
kube-system                       cilium-operator-577bfbbd5b-8d4m4                          1/1     Running     0          14m
kube-system                       cilium-zb7dp                                              1/1     Running     0          14m
kube-system                       coredns-59cfb5bf46-6tpcg                                  1/1     Running     0          16m
kube-system                       csi-cinder-controllerplugin-684cfb8c48-6qcxp              6/6     Running     0          16m
kube-system                       csi-cinder-nodeplugin-7pxjl                               3/3     Running     0          14m
kube-system                       csi-cinder-nodeplugin-wsp9z                               3/3     Running     0          15m
kube-system                       hubble-generate-certs-394f790584-t7j48                    0/1     Completed   0          16m
kube-system                       kube-state-metrics-78c475f58b-8cjvv                       1/1     Running     0          16m
kube-system                       metrics-server-v0.6.3-69d7fbfdf8-xc2xv                    2/2     Running     0          16m
kube-system                       openstack-cloud-controller-manager-gdgng                  1/1     Running     0          2m24s
kubernetes-dashboard              dashboard-metrics-scraper-5dd7cb5fc-bjq29                 1/1     Running     0          16m
kubernetes-dashboard              kubernetes-dashboard-7b899cb9d9-kxmmt                     1/1     Running     0          16m

Test the OpenStack Integrator

Finally, verify that the OpenStack Integrator works as expected.

Storage Integration

Create a PVC

kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: cdk-cinder
EOY

kubectl get pv should show that a PV was created

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-d302df77-7cbc-4a7b-af7f-5373f91abbd3   1Gi        RWO            Delete           Bound    default/testclaim   cdk-cinder              15s

openstack volume list shows that Cinder created the backing volume

+--------------------------------------+------------------------------------------+-----------+------+-------------+
| ID                                   | Name                                     | Status    | Size | Attached to |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
| 37734a31-5786-48c2-9757-f4782e6cdfd6 | pvc-d302df77-7cbc-4a7b-af7f-5373f91abbd3 | available |    1 |             |
+--------------------------------------+------------------------------------------+-----------+------+-------------+
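To exercise volume attach end to end, a throwaway pod can mount the claim. This is a sketch; the pod name and the busybox image are arbitrary choices, not part of the original test:

kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: testclaim-pod
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: testvol
  volumes:
    - name: testvol
      persistentVolumeClaim:
        claimName: testclaim
EOY

Once the pod is Running, openstack volume list should report the volume as in-use and attached to the worker node.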

Load Balancer Integration

Create test pods and expose them through a load balancer

kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=5
kubectl expose deployment hello-world --type=LoadBalancer --name=hello --port=8080

A load balancer is then created; the external IP can be checked with kubectl get svc hello -o wide

NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE     SELECTOR
hello   LoadBalancer   10.152.183.41   192.168.99.144   8080:30777/TCP   6m16s   app=hello-world

The service is reachable through the external IP

curl 192.168.99.144:8080
Hello Kubernetes!

openstack loadbalancer list also shows that a load balancer was created

openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| id                                   | name                                                                   | project_id                       | vip_address  | provisioning_status | operating_status | provider |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
| 4cb1c8da-3c71-4fcf-9b13-23f6f21e0336 | openstack-integrator-5a087e572a8e-kubernetes-control-plane             | 4badc745662a485b8957de81ae403ee2 | 192.168.0.78 | ACTIVE              | ONLINE           | ovn      |
| 5cc3a0ce-b798-4b38-a1aa-33f637327560 | kube_service_kubernetes-df70v6ftc5r56zmdyd68zps0cwdmizal_default_hello | 4badc745662a485b8957de81ae403ee2 | 192.168.0.46 | ACTIVE              | ONLINE           | ovn      |
+--------------------------------------+------------------------------------------------------------------------+----------------------------------+--------------+---------------------+------------------+----------+
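Cleaning up the test resources also shows that the integrator removes the corresponding OpenStack objects. The following only deletes what the two tests above created (the test pod is included in case it was created):

kubectl delete service hello
kubectl delete deployment hello-world
kubectl delete pod testclaim-pod --ignore-not-found
kubectl delete pvc testclaim

After a short while, the load balancer should disappear from openstack loadbalancer list and the volume from openstack volume list.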

Summary

Deploying Charmed Kubernetes is not particularly difficult, and it installs some useful add-ons by default, such as ingress-nginx; that said, I find its configuration less flexible than Kops.

To learn how to deploy Kubernetes with Kops, see this article.
