This comprehensive resource is designed to help you navigate the powerful features and functionality of the managed Kubernetes service running on the HI GIO Cloud infrastructure. This guide provides step-by-step instructions, best practices, and practical examples to enhance your Kubernetes experience.
With HI GIO Kubernetes, you can efficiently manage your containerized applications, scale seamlessly, and ensure high availability across your infrastructure.

Create a network for the cluster with available Static IP Pools.
Create firewall and SNAT rules to ensure VMs in the cluster can access the internet.
Make sure HI GIO Load Balancing is enabled.
Make sure there is at least one available public IP.
Step 1: Log in to the HI GIO portal with a tenant account > Click More > Kubernetes Container Clusters
Step 2: Click NEW and follow the creation wizard to create a new HI GIO Kubernetes cluster.
Click NEXT
Enter the name of the cluster and select a Kubernetes version > NEXT.
Click NEXT in step 3.
Select oVDC and Network for nodes > NEXT.
In the Control Plane window, select the number of nodes and disk size, and optionally select a sizing policy, a placement policy, and a storage profile, and click NEXT.
| Configuration field | Description |
| --- | --- |
| Number of Nodes | Non-HA: 1; HA: 3 |
| Disk Size (GB) | The minimum allowed is 20 GB. |
| Sizing Policy | TKG medium: if the number of worker nodes is less than or equal to 10. TKG large: if the number of worker nodes exceeds 10. |
| Placement Policy | Leave blank. We do not apply a placement policy for the HI GIO Kubernetes cluster. |
| Storage Policy | Select an available storage policy. |
Configure the worker pool settings > NEXT
| Configuration field | Description |
| --- | --- |
| Name | Enter the worker pool name. |
| Number of Nodes | Enter the number of nodes in the worker pool. |
| Disk Size (GB) | The minimum allowed is 20 GB. |
| Sizing Policy | TKG small: small VM sizing policy for a Kubernetes cluster node (2 CPU, 4 GB memory). TKG medium: medium VM sizing policy (2 CPU, 8 GB memory). TKG large: large VM sizing policy (4 CPU, 16 GB memory). TKG extra-large. |
| Placement Policy | Leave blank. We do not apply a placement policy for the HI GIO Kubernetes cluster. |
| Storage Policy | Select an available storage policy. |
Configure the storage class > NEXT
| Configuration field | Description |
| --- | --- |
| Select a Storage Profile | Select one of the available storage profiles. |
| Storage Class Name | The name of the default Kubernetes storage class. This can be any user-specified name with the following constraints, based on Kubernetes requirements: a maximum of 63 characters; only lowercase alphanumeric characters or hyphens; must start with an alphabetic character. |
| Reclaim Policy | Delete: deletes the PersistentVolume object when the PersistentVolumeClaim is deleted. Retain: does not delete the volume when the PersistentVolumeClaim is deleted; the volume can be reclaimed manually. |
| Filesystem | xfs or ext4 (ext4 is the default filesystem used for the storage class). |
Configure the Kubernetes network > NEXT
| Option | Description |
| --- | --- |
| Pods CIDR | Specifies a range of IP addresses to use for Kubernetes pods. The default value is 100.96.0.0/11. The pod subnet size must be equal to or larger than /24. |
| Services CIDR | Specifies a range of IP addresses to use for Kubernetes services. The default value is 100.64.0.0/13. |
| Control Plane IP | You can specify your own IP address as the control plane endpoint. You can use an external IP from the gateway or an internal IP from a subnet different from the routed IP range. |
| Virtual IP Subnet | You can specify a subnet CIDR from which one unused IP address is assigned as the control plane endpoint. The subnet must represent a set of addresses in the gateway. The same CIDR is also propagated as the subnet CIDR for the ingress services on the cluster. |
Enable Auto Repair on Errors and Node Health Check > NEXT
Review all cluster information and click FINISH to create the cluster.
Step 3: Wait until the cluster status is Available, then click DOWNLOAD KUBE CONFIG to download the kubeconfig file.
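To quickly confirm that the downloaded kubeconfig works, here is a minimal sketch using standard kubectl commands (the file path is a placeholder):
# Point kubectl at the new cluster using the downloaded file
export KUBECONFIG=<path to your kubeconfig file>
# All control plane and worker nodes should report a Ready status
kubectl get nodes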

Deploy a demo app with a persistent volume into the Kubernetes cluster and publish the app via ingress-nginx
1. Pre-requisites:
Helm (v3 or higher)
Make sure there is at least 1 available public IP
Have a default Storage Class (a quick check is shown after this list)
Permission for access to your Kubernetes cluster
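A hedged way to check for a default storage class using standard kubectl commands (the class name is a placeholder; replace it with one of your own):
# The default storage class is marked "(default)" in the output
kubectl get storageclass
# If no class is marked as default, you can set one
kubectl patch storageclass <storage class name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'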
2. Procedure:
Step 1: Install the NGINX ingress controller in your Kubernetes cluster using the Helm commands shown below.
Verify that the pod status is Running and that the ingress-nginx-controller service has successfully obtained an EXTERNAL-IP.
The CNI driver on Kubernetes automatically creates virtual services on the HI GIO LB and two DNAT rules (ports 80 and 443) on vCD.
Please modify the VPC firewall to allow access to the ingress virtual services. This provides access to your application published via the NGINX ingress.
Step 2: Deploy the demo app with a persistent volume into the Kubernetes cluster and publish the app via ingress-nginx
Demo app folder structure:
Create file 01-demoapp-namespace.yaml to create the demoapp namespace
I. Pre-requisites:
Make sure oVDC has an available storage quota.
Make sure you can access the node (e.g., via SSH or node-shell…)
II. Procedure:
Step 1: Log in to the HI GIO portal > Virtual Machines > Select the node whose disk you want to extend > Hard Disks > EDIT
Step 2: Enter the new disk size > SAVE
Step 3: SSH to the node and run the commands below to expand the disk on the OS.
Check the root mount point:
e.g. /dev/sda4
Run the commands below to extend the capacity of /dev/sda4
#Add repo ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update ingress-nginx

#Install ingress nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.appProtocol=false \
  --create-namespace

#Verify the installation
kubectl get all -n ingress-nginx

Create file 02-demoapp-pvc.yaml to create a Persistent Volume Claim
Create file 03-demoapp-deployment.yaml to create the demoapp deployment
Create file 04-demoapp-service.yaml to create the demoapp service
Create file 05-demoapp-ingress.yaml to create the demoapp ingress
Apply all manifests
Create a DNS record for demoapp
If all the configuration is correct, you can access your app at http://<ingress-host>
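As an optional sanity check, you can verify the DNS record and test the published app from outside the cluster. A minimal sketch using nslookup and curl with the same <ingress-host> placeholder:
# Confirm the DNS record resolves to the EXTERNAL-IP of ingress nginx
nslookup <ingress-host>
# Request the app through the NGINX ingress; an HTTP 200 response is expected
curl -I http://<ingress-host>/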



Run the command below to verify the disk size after expansion




Upgrade Kubernetes Cluster.
Validate Kubernetes Cluster after the upgrade.
Step 1: From the vCD portal, choose More → Kubernetes Container Clusters
Step 2: Choose the cluster you want to upgrade.
Step 3: Choose UPGRADE.
Step 4: Verify that the current Kubernetes and TKG product versions are correct.
Step 5: Choose one of the available upgrade options → UPGRADE.
Step 6: Wait for the cluster to complete the upgrade.
Step 1: After completion, the cluster version will change to the version you chose above.
Step 2: Test the components after the upgrade and ensure that everything is running normally.
Step 3: Test the services after the upgrade and confirm they are still running normally.
The upgrade is complete.
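As an optional check, here is a minimal sketch of post-upgrade validation with plain kubectl (adapt it to your own workloads):
# Every node should report the upgraded Kubernetes version and a Ready status
kubectl get nodes -o wide
# List any pods that are not Running (Completed jobs may also appear here)
kubectl get pods -A | grep -v Running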
Use this documentation when you want to resize your cluster node pools on HI GIO Cloud (increase or decrease the number of nodes).
I. Pre-requisites:
You already have a Kubernetes cluster on HI GIO Cloud.
Sufficient resources are available if you are increasing the number of nodes.
You have permission to control the Kubernetes cluster.
II. Procedure:
Step 1: Log in to HI GIO Cloud with an IAM account --> Open the vCD Portal --> More --> Kubernetes Container Clusters
Step 2: Select your cluster's Node Pool
demoapp
├── 01-demoapp-namespace.yaml
├── 02-demoapp-pvc.yaml
├── 03-demoapp-deployment.yaml
├── 04-demoapp-service.yaml
└── 05-demoapp-ingress.yaml

# 01-demoapp-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demoapp

# 02-demoapp-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demoapp-pvc
  namespace: demoapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: default-storage-class-1 #adjust to use your storage class

# 03-demoapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp
  namespace: demoapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoapp
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      containers:
        - name: demoapp
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /data
              name: demoapp-storage
      volumes:
        - name: demoapp-storage
          persistentVolumeClaim:
            claimName: demoapp-pvc

# 04-demoapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  namespace: demoapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demoapp

# 05-demoapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp-ingress
  namespace: demoapp
spec:
  ingressClassName: nginx
  rules:
    - host: demoapp.cloud.net.vn #adjust to use your domain
      http:
        paths:
          - backend:
              service:
                name: demoapp
                port:
                  number: 80
            path: /
            pathType: Prefix

# Apply all manifests
cd demoapp
kubectl apply -f .

# Verify the DNS record for demoapp (nslookup output)
Name: <ingress-host>
Address: 42.113.xx.xx (EXTERNAL-IP of ingress nginx)

# Check the root mount point (e.g. /dev/sda4)
df -h

# Extend the capacity for /dev/sda4
echo 1 > /sys/block/sda/device/rescan
growpart /dev/sda 4
resize2fs /dev/sda4

# Verify the disk size after expansion
fdisk -l
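Note: the resize2fs command above assumes an ext4 root filesystem. If the node's root filesystem is xfs (you can check with df -T /), a hedged alternative is:
# Only if the root filesystem is xfs; grows the filesystem mounted at /
xfs_growfs /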



Step 3: Select the symbol to the left of the node pool you want to resize --> Resize
Step 4: Fill in the number of nodes you want --> SUBMIT
Use caution when resizing node pools to 0 worker nodes. The cluster must always have at least 1 running worker node; otherwise, the cluster will enter an unrecoverable error state.
Step 5: Wait for the node resize to complete.
Step 6: On the Monitor tab, you will see many tasks being updated on your cluster.
Step 7: When completed, the node pool will show the available/desired node count.
Step 8: You can verify with a command inside your cluster, as shown below.
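For example, a minimal check with kubectl:
# The number of Ready worker nodes should match the desired node count set above
kubectl get nodes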

Step-by-step guide on how to configure HI GIO Kubernetes cluster autoscale
Install tanzu-cli
Create cluster-autoscaler deployment from tanzu package using tanzu-cli
Enable cluster autoscale for your cluster
Test cluster autoscale
Delete cluster-autoscaler deployment and clean up test resource
Pre-requisites:
An Ubuntu bastion host that can connect to your Kubernetes cluster
Permission for access to your Kubernetes cluster
Step 1: Install tanzu-cli
To install tanzu-cli in other environments, please refer to the documentation below:
(Optional) If you want to configure tanzu completion, run the command below and follow the instructions in the output
tanzu completion --help
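For example, for bash the flow typically looks like the sketch below (an assumption; follow the exact instructions printed by the help command for your shell):
# Generate the bash completion script and load it in new shells
tanzu completion bash > ~/.tanzu-completion.bash
echo 'source ~/.tanzu-completion.bash' >> ~/.bashrc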
Step 2: Create cluster-autoscaler deployment from tanzu package using tanzu-cli
Switch to your Kubernetes context
List the available cluster-autoscaler versions in the tanzu package and note the version name
Create a kubeconfig secret named cluster-autoscaler-mgmt-config-secret in the cluster's kube-system namespace
Please do not change the secret name (cluster-autoscaler-mgmt-config-secret) and namespace (kube-system)
Create cluster-autoscaler-values.yaml file
Required values:
clusterName: your cluster name
clusterNamespace: your cluster namespace
Install cluster-autoscaler
The cluster-autoscaler will be deployed into the kube-system namespace.
Run the command below to verify cluster-autoscaler deployment:
kubectl get deployments.apps -n kube-system cluster-autoscaler
Configure the minimum and maximum number of nodes in your cluster
Get the machinedeployment name and namespace
Set cluster-api-autoscaler-node-group-min-size and cluster-api-autoscaler-node-group-max-size
Enable cluster autoscale for your cluster
Because this step requires provider permissions, please notify the cloud provider to perform it.
Get the current number of nodes
kubectl get nodes
There is currently only one worker node.
Create test-autoscale.yaml file
Apply the test-autoscale.yaml file to deploy 2 replicas of the nginx pod in the default namespace (this will trigger the creation of a new worker node)
Get nginx deployment
You can see there is a new nginx pod with a status of Pending, and the events show FailedScheduling and TriggeredScaleUp:
Wait for a new node to be provisioned; you will then see a new worker node and the new nginx pod status changes to Running.
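If you want to follow the scale-up live, here is a hedged sketch using standard kubectl watches:
# Watch new nodes appear as the autoscaler provisions them
kubectl get nodes -w
# In another terminal, watch the Pending nginx pod change to Running
kubectl get pods -w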
Clean up test resource
After deleting the test nginx deployment, the cluster waits a few minutes before deleting the unneeded node (see the scaleDownUnneededTime value in the cluster-autoscaler-values.yaml file).
Delete cluster-autoscaler deployment (Optional)

#Install tanzu-cli on Ubuntu
sudo apt update
sudo apt install -y ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://storage.googleapis.com/tanzu-cli-installer-packages/keys/TANZU-PACKAGING-GPG-RSA-KEY.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/tanzu-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/tanzu-archive-keyring.gpg] https://storage.googleapis.com/tanzu-cli-installer-packages/apt tanzu-cli-jessie main" | sudo tee /etc/apt/sources.list.d/tanzu.list
sudo apt update
sudo apt install -y tanzu-cli

#Verify tanzu-cli installation
tanzu version

#Switch to your Kubernetes context
kubectl config use-context <your context name>

#List the available cluster-autoscaler package versions
tanzu package available list cluster-autoscaler.tanzu.vmware.com

#Create the kubeconfig secret in the kube-system namespace
kubectl create secret generic cluster-autoscaler-mgmt-config-secret \
  --from-file=value=<path to your kubeconfig file> \
  -n kube-system

# cluster-autoscaler-values.yaml
arguments:
  ignoreDaemonsetsUtilization: true
  maxNodeProvisionTime: 15m
  maxNodesTotal: 0 #Leave this value as 0. We will define the max and min number of nodes later.
  metricsPort: 8085
  scaleDownDelayAfterAdd: 10m
  scaleDownDelayAfterDelete: 10s
  scaleDownDelayAfterFailure: 3m
  scaleDownUnneededTime: 10m
clusterConfig:
  clusterName: "demo-autoscale-tkg" #adjust here
  clusterNamespace: "demo-autoscale-tkg-ns" #adjust here
paused: false

#Install cluster-autoscaler
#--version: adjust to the version listed above that matches your Kubernetes version
#--namespace: please do not change; tkg-system is the default namespace for tanzu packages
tanzu package install cluster-autoscaler \
  --package cluster-autoscaler.tanzu.vmware.com \
  --version <version available> \
  --values-file 'cluster-autoscaler-values.yaml' \
  --namespace tkg-system

#Get the machinedeployment name and namespace
kubectl get machinedeployments.cluster.x-k8s.io -A

#Set the minimum and maximum node-group sizes
kubectl annotate machinedeployment <machinedeployment name> cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=<number min> -n <machinedeployment namespace>
kubectl annotate machinedeployment <machinedeployment name> cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=<number max> -n <machinedeployment namespace>

# test-autoscale.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
      topologySpreadConstraints: #Spreads pods across different nodes (ensures no node has more pods than others)
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx

#Apply the test deployment and check the pods
kubectl apply -f test-autoscale.yaml
kubectl get pods

#Inspect the Pending pod events
kubectl describe pod nginx-589656b9b5-mcm5j | grep -A 10 Events
Warning  FailedScheduling  2m53s  default-scheduler   0/2 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Normal   TriggeredScaleUp  2m43s  cluster-autoscaler  pod triggered scale-up: [{MachineDeployment/demo-autoscale-tkg-ns/demo-autoscale-tkg-worker-node-pool-1 1->2 (max: 5)}]

#Clean up the test resource
kubectl delete -f test-autoscale.yaml

#Delete the cluster-autoscaler deployment (optional)
tanzu package installed delete cluster-autoscaler -n tkg-system -y






