
HI GIO Kubernetes

Information

This comprehensive resource is designed to help you navigate the features and functionality of the managed Kubernetes service running on the HI GIO Cloud infrastructure. This guide provides step-by-step instructions, best practices, and practical examples to enhance your Kubernetes experience.

With HI GIO Kubernetes, you can efficiently manage your containerized applications, scale seamlessly, and ensure high availability across your infrastructure.

Overview

HI GIO Kubernetes - Container management & orchestration service

HI GIO Kubernetes is a managed service based on the open-source Kubernetes system, running on the HI GIO Cloud infrastructure (VMware-based technology). By automating software deployment and simplifying the management of containerized applications, it lets software developers focus on application development and operation in the cloud.

Seamless Scaling, Simplified Management.

Guideline

This guide covers the following topics:

1. Steps To Create Kubernetes Cluster on HI GIO Portal
2. How to resize Kubernetes Cluster on HI GIO portal
3. Extending disk size for nodes in Kubernetes Cluster on HI GIO Portal
4. How to upgrade Kubernetes Cluster in HI GIO Portal
05. Deploy demo app with persistent volume and publish app via ingress controller
06. How to configure HI GIO Kubernetes cluster autoscale

1. Steps To Create Kubernetes Cluster on HI GIO Portal

Overview

This document explains how to create a Kubernetes cluster on HI GIO, including selecting configurations, deploying nodes, and initializing the control plane.

    Procedure

    Pre-requisites:
    • Create a network for the cluster with available Static IP Pools.

    • Create firewall and SNAT rules to ensure VMs in the cluster can access the internet.

    • Make sure HI GIO Load Balancing is enabled.

    • Make sure there is at least one available public IP.

    Step 1: Log in to the HI GIO portal with tenant account > Click More > Kubernetes Container Clusters

    Step 2: Click NEW and follow the wizard steps below to create a new HI GIO Kubernetes cluster.

    • Click NEXT

    • Enter the name of the cluster and select a Kubernetes version > NEXT.

    • Click NEXT in step 3. Attaching clusters to Tanzu Mission Control is currently not supported.

    • Select oVDC and Network for nodes > NEXT.

    • In the Control Plane window, select the number of nodes and disk size, and optionally select a sizing policy, a placement policy, and a storage profile, and click NEXT.

    Number of Nodes: 1 for a non-HA control plane, or 3 for HA.

    Disk Size (GB): the minimum allowed is 20 GB.

    Sizing Policy: TKG medium if the number of worker nodes is less than or equal to 10; TKG large if it exceeds 10 nodes.

    Placement Policy: leave blank. We do not apply a placement policy for the HI GIO Kubernetes cluster.

    Storage Policy: select an available storage policy.

    • Configure worker pools setting > NEXT

    Name: enter the worker pool name.

    Number of Nodes: enter the number of nodes in the worker pool.

    Disk Size (GB): the minimum allowed is 20 GB.

    Sizing Policy:

    • TKG small: Small VM sizing policy for a Kubernetes cluster node (2 CPU, 4 GB memory)

    • TKG medium: Medium VM sizing policy for a Kubernetes cluster node (2 CPU, 8 GB memory)

    • TKG large: Large VM sizing policy for a Kubernetes cluster node (4 CPU, 16 GB memory)

    • TKG extra-large: Extra-large VM sizing policy for a Kubernetes cluster node (8 CPU, 32 GB memory)

    Placement Policy: leave blank. We do not apply a placement policy for the HI GIO Kubernetes cluster.

    Storage Policy: select an available storage policy.

    (Optional) To create additional worker node pools, click Add New Worker Node Pool and configure worker node pool settings.

    • Configure storage class > NEXT

    Select a Storage Profile: select one of the available storage profiles.

    Storage Class Name: the name of the default Kubernetes storage class. This field can be any user-specified name with the following constraints based on Kubernetes requirements:

    • Contain a maximum of 63 characters

    • Contain only lowercase alphanumeric characters or hyphens

    • Start with an alphabetic character

    • End with an alphanumeric character

    Reclaim Policy:

    • Delete policy: deletes the PersistentVolume object when the PersistentVolumeClaim is deleted.

    • Retain policy: does not delete the volume when the PersistentVolumeClaim is deleted; the volume can be reclaimed manually.

    Filesystem:

    • xfs

    • ext4: the default filesystem used for the storage class.

    • Configure Kubernetes network > NEXT

    Pods CIDR: specifies a range of IP addresses to use for Kubernetes pods. The default value is 100.96.0.0/11. The pod subnet size must be equal to or larger than /24.

    Services CIDR: specifies a range of IP addresses to use for Kubernetes services. The default value is 100.64.0.0/13.

    Control Plane IP: you can specify your own IP address as the control plane endpoint. You can use an external IP from the gateway or an internal IP from a subnet different from the routed IP range.

    Virtual IP Subnet: you can specify a subnet CIDR from which one unused IP address is assigned as the control plane endpoint. The subnet must represent a set of addresses in the gateway. The same CIDR is also propagated as the subnet CIDR for the ingress services on the cluster.

    You should enter an available public IP address in the Control Plane IP field.

    • Enable Auto Repair on Errors and Node Health Check > NEXT

    Auto Repair on Errors: If errors occur before this cluster becomes available, the CSE Server will automatically attempt to repair the cluster.

    Node Health Check: Unhealthy nodes will be remediated after this cluster becomes available according to unhealthy node conditions and remediation rules.

    • Review all cluster information and click FINISH to create the cluster.

    Step 3: Wait until the cluster status is Available, then click DOWNLOAD KUBE CONFIG to download the kubeconfig file

    Please configure the VPC firewall to allow access to the Control Plane IP using port 6443.
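
    Once the kubeconfig is downloaded and the firewall rule is in place, you can confirm access from your workstation (a minimal sketch; the kubeconfig file name is an example):

    export KUBECONFIG=./kubeconfig-cluster.txt   # path to the downloaded kubeconfig (example name)
    kubectl get nodes -o wide                    # control plane and worker nodes should report STATUS Ready
    kubectl cluster-info                         # confirms the API server endpoint (Control Plane IP, port 6443)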

    05. Deploy demo app with persistent volume and publish app via ingress controller

    Overview

    Step-by-step guide on how to deploy a demo application on HI GIO Kubernetes

    • Install the nginx ingress controller in your Kubernetes cluster. Installing the nginx ingress controller will auto-create 2 virtual services (80, 443) in the HI GIO LB.

    • Deploy the demo app with a persistent volume into the Kubernetes cluster and publish the app via ingress nginx.

    Procedure

    1. Pre-requisites:

    • Helm (v3 or higher)

    • Make sure there is at least 1 available public IP

    • Have a default Storage Class

    • Permission for access to your Kubernetes cluster
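
    A quick way to check these prerequisites from your workstation (a sketch; assumes kubectl and helm are already installed and your kubeconfig points at the cluster):

    helm version                            # should report v3.x
    kubectl get storageclass                # one storage class should be marked (default)
    kubectl auth can-i create deployments   # should return "yes" if you have sufficient permissions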

    2. Procedure:


    Step 1: Install nginx ingress controller to your Kubernetes cluster

    • Verify pod status is Running and service ingress-nginx-controller successfully obtained an EXTERNAL IP
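
    For example (a sketch; output columns can differ slightly between versions):

    kubectl get pods -n ingress-nginx
    kubectl get svc -n ingress-nginx ingress-nginx-controller
    # Print only the assigned EXTERNAL-IP:
    kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'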

    The CNI driver on Kubernetes automatically creates the virtual services on the HI GIO LB and 2 DNAT rules (80, 443) on vCD.

    Please modify the VPC firewall to allow access to the ingress virtual services. This provides access to your application published via the nginx ingress.


    Step 2: Deploy the demo app with a persistent volume into the Kubernetes cluster and publish the app via ingress nginx

    • Demo app folder structure

    • Create file 01-demoapp-namespace.yaml to create demoapp namespace

    3. Extending disk size for nodes in Kubernetes Cluster on HI GIO Portal

    Overview

    • Step-by-step guide on how to increase the disk capacity of each node in the cluster to meet growing storage needs.

    • No downtime.

    Procedure

    I. Pre-requisites:

    • Make sure oVDC has an available storage quota.

    • Make sure you can access the node (ssh, node-shell…)

    II. Procedure:

    Step 1: Log in to the HI GIO portal > Virtual Machines > Select the node whose disk size you want to extend > Hard Disks > EDIT

    Step 2: Enter the new disk size > SAVE


    Step 3: SSH to the node and run the commands below to expand the disk on the OS.

    • Check root mount point:

    eg. /dev/sda4
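
    A couple of ways to identify the root device (a sketch; /dev/sda4 is only an example, your device name may differ):

    df -h /    # shows the device backing the root (/) filesystem
    lsblk      # shows the disk and partition layout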

    • Run the commands below to extend the capacity for /dev/sda4.

    • Run a command to verify the disk size after the expansion.

    4. How to upgrade Kubernetes Cluster in HI GIO Portal

    Overview

    This document describes how to upgrade a HI GIO Kubernetes cluster.

    Steps for performing the upgrade:

    #Add repo ingress-nginx
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx 
    helm repo update ingress-nginx
    #Install ingress nginx
    helm install ingress-nginx ingress-nginx/ingress-nginx \
    		--namespace ingress-nginx \
    		--set controller.service.appProtocol=false \
    		--create-namespace
    kubectl get all -n ingress-nginx

    • Create file 02-demoapp-pvc.yaml to create a Persistent Volume Claim

    • Create file 03-demoapp-deployment.yaml to create the demoapp deployment

    • Create file 04-demoapp-service.yaml to create demoapp service

    • Create file 05-demoapp-ingress.yaml to create demoapp ingress

    • Apply all manifests

    • Create a DNS record for demoapp

    • If all the configuration is correct, you can access your app with the domain http://<ingress-host>
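
    If the DNS record has not propagated yet, you can still test against the ingress controller directly (a sketch; <EXTERNAL-IP> and <ingress-host> are placeholders):

    curl -H "Host: <ingress-host>" http://<EXTERNAL-IP>/
    # Once the DNS record resolves:
    curl http://<ingress-host>/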



    • Upgrade Kubernetes Cluster.

  • Validate Kubernetes Cluster after the upgrade.

  • The versions of the software are as follows

    • Kubernetes Version: 1.23.17+vmware.1

    • TKG Product Version: 2.2.0

    • Service running on the cluster: Voting web app

    • Component running the service:

    Procedure

    Step 1: From the vCD portal, choose More → Kubernetes Container Clusters

    Step 2: Choose the cluster you want to upgrade.

    Step 3: Choose UPGRADE.

    Step 4: Verify that the current Kubernetes and TKG Product versions are correct.

    Step 5: Choose one of the available upgrade options → UPGRADE.

    Step 6: Wait for the cluster to complete the upgrade.

    Validate the cluster after the upgrade:

    Step 1: After completion, the cluster version will change to the version you chose above.

    Step 2: Test the components after the upgrade and ensure that everything is running normally.

    Step 3: Test the service after the upgrade and confirm it is still running normally.

    The upgrade is completed.
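
    A minimal post-upgrade check from kubectl (a sketch; adjust the namespace to wherever your workloads run):

    kubectl get nodes     # the VERSION column should show the new Kubernetes version on every node
    kubectl get pods -A   # all system and application pods should be Running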

    2. How to resize Kubernetes Cluster on HI GIO portal

    Overview

    Use this document when you want to resize the nodes of your cluster on HI GIO Cloud (increase or decrease the number of nodes).

    Procedure

    I. Pre-requisites:

    • You already have a Kubernetes cluster on HI GIO Cloud.

    • Sufficient resources are available if you increase the number of nodes.

    • You have permission to control the Kubernetes cluster.

    II. Procedure:


    Step 1: Log in to HI GIO Cloud with IAM account --> Open vCD Portal --> More --> Kubernetes Container Clusters


    Step 2: Select your cluster Node Pool

    demoapp 
    ├── 01-demoapp-namespace.yaml
    ├── 02-demoapp-pvc.yaml
    ├── 03-demoapp-deployment.yaml
    ├── 04-demoapp-service.yaml
    └── 05-demoapp-ingress.yaml
    # 01-demoapp-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demoapp
    # 02-demoapp-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demoapp-pvc
      namespace: demoapp
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: default-storage-class-1 #adjust to use your storage class
    # 03-demoapp-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demoapp
      namespace: demoapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demoapp
      template:
        metadata:
          labels:
            app: demoapp
        spec:
          containers:
          - name: demoapp
            image: paulbouwer/hello-kubernetes:1.8
            ports:
            - containerPort: 8080
            volumeMounts:
            - mountPath: /data
              name: demoapp-storage
          volumes:
          - name: demoapp-storage
            persistentVolumeClaim:
              claimName: demoapp-pvc
    # 04-demoapp-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: demoapp
      namespace: demoapp
    spec:
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: demoapp
    # 05-demoapp-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demoapp-ingress
      namespace: demoapp
    spec:
      ingressClassName: nginx
      rules:
      - host: demoapp.cloud.net.vn #adjust to use your domain
        http:
          paths:
          - backend:
              service:
                name: demoapp
                port:
                  number: 80
            path: /
            pathType: Prefix
    cd demoapp
    kubectl apply -f .
    Name: <ingress-host>
    Address: 42.113.xx.xx (EXTERNAL-IP of ingress nginx)
    df -h                                    # check the root mount point and current filesystem size
    echo 1 > /sys/block/sda/device/rescan    # rescan the disk so the OS detects the new size
    growpart /dev/sda 4                      # grow partition 4 into the newly added space
    resize2fs /dev/sda4                      # expand the ext4 filesystem to fill the partition
    fdisk -l                                 # verify the new disk and partition size
    It shows information about node type, node count, and sizing policy, …

    Step 3: Select the symbol to the left of the node pool you want to resize --> Resize

    Step 4: Fill in the number of nodes you want to resize to --> SUBMIT

    Use caution when resizing node pools to 0 worker nodes. The cluster must always have at least 1 running worker node, or else the cluster will enter an unrecoverable error state.

    Step 5: Wait for the resize to complete.

    Step 6: On the Monitor tab, you can see the tasks that update your cluster.

    Step 7: When completed, the node count will show the available nodes / desired nodes.


    Step 8: You can verify with a command inside your cluster.
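
    For example (a minimal sketch):

    kubectl get nodes   # the number of worker nodes should match the new desired node count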

    06. How to configure HI GIO Kubernetes cluster autoscale

    Overview

    Step-by-step guide on how to configure HI GIO Kubernetes cluster autoscale

    • Install tanzu-cli

    • Create cluster-autoscaler deployment from tanzu package using tanzu-cli

    • Enable cluster autoscale for your cluster

    • Test cluster autoscale

    • Delete cluster-autoscaler deployment and clean up test resource

    Procedure

    Pre-requisites:

    • An Ubuntu bastion host that can connect to your Kubernetes cluster

    • Permission for access to your Kubernetes cluster


    Step 1: Install tanzu-cli

    To install tanzu-cli in other environments, please refer to the documentation below:

    (Optional) If you want to configure tanzu completion, please run the command below and follow the instructions in the output:

    tanzu completion --help


    Step 2: Create cluster-autoscaler deployment from tanzu package using tanzu-cli

    • Switch to your Kubernetes context

    • List the available cluster-autoscaler versions in the tanzu package repository and note the version name

    • Create a kubeconfig secret named cluster-autoscaler-mgmt-config-secret in the cluster's kube-system namespace

    Please do not change the secret name (cluster-autoscaler-mgmt-config-secret) and namespace (kube-system)

    • Create cluster-autoscaler-values.yaml file

    Required values:

    • clusterName: your cluster name

    • clusterNamespace: your cluster namespace

    • Install cluster-autoscaler

    The cluster-autoscaler will deploy into the kube-system namespace.

    Run the command below to verify cluster-autoscaler deployment:

    kubectl get deployments.apps -n kube-system cluster-autoscaler

    • Configure the minimum and maximum number of nodes in your cluster

      • Get machinedeployments name and namespace

    • Set the cluster-api-autoscaler-node-group-min-size and cluster-api-autoscaler-node-group-max-size annotations (see the verification sketch below)

    • Enable cluster autoscale for your cluster

    Because this step requires provider permission to perform, please notify the cloud provider to perform this step.
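
    Before autoscaling is enabled, you can confirm the min/max annotations were applied (a sketch; the machinedeployment name and namespace are placeholders):

    # The annotations should include cluster-api-autoscaler-node-group-min-size and -max-size
    kubectl get machinedeployments.cluster.x-k8s.io <machinedeployment name> -n <machinedeployment namespace> -o yaml | grep autoscaler-node-group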


    Step 3: Test cluster autoscale

    • Get the current number of nodes

    kubectl get nodes

    There is currently only one worker node.

    • Create test-autoscale.yaml file

    • Apply the test-autoscale.yaml file to deploy 2 replicas of the nginx pod in the default namespace (this will trigger the creation of a new worker node)

    • Get nginx deployment

    You can see there is a new nginx pod with a status of Pending, and the events show FailedScheduling and TriggeredScaleUp:

    • Wait for a new node to be provisioned; you will then see that a new worker node has been added and the new nginx pod status is Running.
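
    You can watch the scale-up as it happens (a sketch):

    kubectl get nodes -w   # a new node appears and eventually reports STATUS Ready
    kubectl get pods -w    # the Pending nginx pod moves to Running once the new node joins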

    • Clean up test resource

    After deleting the test nginx deployment, the cluster waits a few minutes before deleting the unneeded node (see the scaleDownUnneededTime value in the cluster-autoscaler-values.yaml file).

    • Delete cluster-autoscaler deployment (Optional)

    If you no longer want your cluster to autoscale, you can delete the cluster-autoscaler deployment using tanzu-cli:

    Installing and Using VMware Tanzu CLI v1.5.x
    kubectl config use-context <your context name>
    tanzu package available list cluster-autoscaler.tanzu.vmware.com
    #Install tanzu-cli to ubuntu
    sudo apt update
    sudo apt install -y ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://storage.googleapis.com/tanzu-cli-installer-packages/keys/TANZU-PACKAGING-GPG-RSA-KEY.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/tanzu-archive-keyring.gpg
    echo "deb [signed-by=/etc/apt/keyrings/tanzu-archive-keyring.gpg] https://storage.googleapis.com/tanzu-cli-installer-packages/apt tanzu-cli-jessie main" | sudo tee /etc/apt/sources.list.d/tanzu.list
    sudo apt update
    sudo apt install -y tanzu-cli
    #Verify tanzu-cli installation
    tanzu version
    kubectl create secret generic cluster-autoscaler-mgmt-config-secret \
    --from-file=value=<path to your kubeconfig file> \
    -n kube-system
    arguments:
      ignoreDaemonsetsUtilization: true
      maxNodeProvisionTime: 15m
      maxNodesTotal: 0 #Leave this value as 0. We will define the max and min number of nodes later.
      metricsPort: 8085
      scaleDownDelayAfterAdd: 10m
      scaleDownDelayAfterDelete: 10s
      scaleDownDelayAfterFailure: 3m
      scaleDownUnneededTime: 10m
    clusterConfig:
      clusterName: "demo-autoscale-tkg" #adjust here
      clusterNamespace: "demo-autoscale-tkg-ns" #adjust here
    paused: false
    # For --version, use a version listed above that matches your Kubernetes version.
    # Keep --namespace tkg-system; this is the default namespace for the tanzu package.
    tanzu package install cluster-autoscaler \
    --package cluster-autoscaler.tanzu.vmware.com \
    --version <version available> \
    --values-file 'cluster-autoscaler-values.yaml' \
    --namespace tkg-system
    kubectl get machinedeployments.cluster.x-k8s.io -A
    kubectl annotate machinedeployment <machinedeployment name> cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=<number min> -n <machinedeployment namespace>
    kubectl annotate machinedeployment <machinedeployment name> cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=<number max> -n <machinedeployment namespace>
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
          topologySpreadConstraints: #Spreads pods across different nodes (ensures no node has more pods than others)
          - maxSkew: 1 
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: nginx
    kubectl apply -f test-autoscale.yaml
    kubectl get pods
    kubectl describe pod nginx-589656b9b5-mcm5j | grep -A 10 Events
    Warning  FailedScheduling  2m53s  default-scheduler   0/2 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
    Normal   TriggeredScaleUp  2m43s  cluster-autoscaler  pod triggered scale-up: [{MachineDeployment/demo-autoscale-tkg-ns/demo-autoscale-tkg-worker-node-pool-1 1->2 (max: 5)}]
    kubectl delete -f test-autoscale.yaml
    tanzu package installed delete cluster-autoscaler -n tkg-system -y