November 28, 2022

ClusterClass Intro and Demo

A look into ClusterClass, with a demo installing vanilla ClusterClass on AWS on top of a Tanzu Kubernetes Grid management cluster.

What is ClusterClass?

ClusterClass is a new feature in the Kubernetes Cluster API project. At its core, ClusterClass is a collection of templates that creates and manages the Cluster API objects that make up each cluster. This lets users build multiple clusters easily without losing track of cluster and configuration differences because, ultimately, the ClusterClass defines the clusters. ClusterClass also streamlines the division of responsibilities between the platform administrator and the end user by abstracting the infrastructure away from the user. Users only need to understand the details specific to their own cluster, which helps them succeed in creating and maintaining it.
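
To make that concrete, below is an abbreviated sketch of what a ClusterClass object can look like. The field layout follows the Cluster API v1beta1 API, but the template names are placeholders for illustration, not values taken from any shipped sample.

apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: myfirst
spec:
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: myfirst-control-plane            # placeholder template name
    machineInfrastructure:
      ref:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: myfirst-control-plane-machine
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSClusterTemplate
      name: myfirst-cluster-infra
  workers:
    machineDeployments:
      - class: default-worker                # referenced by name from each Cluster
        template:
          bootstrap:
            ref:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              name: myfirst-worker-bootstrap
          infrastructure:
            ref:
              apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
              kind: AWSMachineTemplate
              name: myfirst-worker-machine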

How could ClusterClass help VMware Tanzu Kubernetes Grid?

Creating clusters with ClusterClass removes repetitive boilerplate and leaves just a handful of YAML lines for the end user to write. It also hides much of the complexity of a Kubernetes cluster, making clusters easier to create and maintain. Because ClusterClass is developed upstream, each release is tested before being included in any product, which can shorten the time it takes for new features to become available. Being open source and upstream also brings full transparency into the inner workings and consistent behavior across every platform running ClusterClass.
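
From the end user's side, the payoff is that a whole cluster can be described by a short Cluster manifest that simply points at a class. The sketch below uses the Cluster API v1beta1 topology fields; the class name, Kubernetes version, and replica counts are placeholders for illustration.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: myfirst-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # must match the CNI configuration
  topology:
    class: myfirst                     # the ClusterClass to build from
    version: v1.23.10                  # desired Kubernetes version (placeholder)
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: default-worker        # worker class defined in the ClusterClass
          name: md-0
          replicas: 2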

Getting a look at ClusterClass

ClusterClass is now available upstream on Vanilla Kubernetes and on vSphere 8 for anyone who wants to get a look. A quick way to get started is to use the management cluster from Tanzu Kubernetes Grid. The following steps use Amazon Web Services (AWS), since some may not have a vSphere sandbox available. The video demo is also available.

Disclaimer: Some of the steps in this demonstration, such as using clusterctl to modify the Cluster API installation, are not supported. It is not recommended to try these steps in a production environment; a sandbox environment that can be deleted afterward is ideal.

  1. Download and install the following command line interface (CLI) tools to a local machine. If on Windows, using Windows Subsystem for Linux (WSL2) works for these purposes. You will need jq, AWS CLI, Tanzu CLI, Kubernetes Cluster API Provider AWS Management Utility (clusterawsadm), and the Cluster API Management Utility CLI (clusterctl). Below are the commands to get these. See the respective documentation for each one for more information.
tanzu cli:
  From a browser navigate to https://www.vmware.com/go/get-tkg for the tanzu cli
jq and aws cli:
  sudo apt-get install jq -y
  curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  unzip awscliv2.zip
  sudo ./aws/install
clusterawsadm cli:
  curl -o clusterawsadm -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v1.5.0/clusterawsadm-linux-amd64
  chmod +x ./clusterawsadm
  sudo mv ./clusterawsadm /usr/local/bin/clusterawsadm
clusterctl cli:
  curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.2/clusterctl-linux-amd64 -o clusterctl
  chmod +x ./clusterctl
  sudo mv ./clusterctl /usr/local/bin/clusterctl
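As an optional sanity check (not part of the original steps), each tool should now report its version from the shell:
  tanzu version
  aws --version
  clusterawsadm version
  clusterctl version
  jq --version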
  2. Next, an AWS CLI profile needs to be set up. The Tanzu installer uses this profile to deploy the management cluster.

aws configure --profile tceuser
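Optionally, confirm the profile works by asking AWS for the identity behind the stored credentials (this assumes the keys entered above are valid):

aws sts get-caller-identity --profile tceuser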

  3. Environment variables need to be set for the AWS CLI and clusterawsadm CLI. Create an environment file as shown in the example below or download it from GitHub. Fill out the missing information based on your AWS credentials, then source the file.
  export AWS_REGION=us-west-2
  export AWS_ACCESS_KEY_ID=
  export AWS_SECRET_ACCESS_KEY=
  export AWS_SSH_KEY_NAME=default
  export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
  export AWS_NODE_MACHINE_TYPE=t3.large
  export EXP_CLUSTER_RESOURCE_SET=true
  export CLUSTER_TOPOLOGY=true
  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
source ~/environment.env
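Optionally, list which AWS_* variables are now exported (names only, to avoid echoing the secret key to the terminal):
env | grep '^AWS_' | cut -d= -f1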
  4. Now use the clusterawsadm CLI to create a CloudFormation stack, setting up the roles and permissions for Cluster API by running the bootstrap.
clusterawsadm bootstrap iam create-cloudformation-stack
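Optionally, check the stack from the AWS CLI. The stack name below is the clusterawsadm default; adjust it if you customized the bootstrap configuration:
aws cloudformation describe-stacks --stack-name cluster-api-provider-aws-sigs-k8s-io --profile tceuser --region $AWS_REGION --query 'Stacks[0].StackStatus'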
  5. Record the local IP and use it to bootstrap the Tanzu Kubernetes Grid management cluster with the Tanzu CLI.
ip addr show
tanzu management-cluster create --ui --bind 172.29.240.57:5555 --browser none
  6. Connect to the installer using a browser, and continue the install as described in the documentation Deploy a Management Cluster to AWS. The t3.large instance type will work, but choose the instance type you prefer. You can find more information at Amazon EC2 Instance Types.
  7. Change the kubectl CLI context to the newly created management cluster. Typically, the management cluster context will have the cluster's name in the format "<MGMT-CLUSTER>-admin@<MGMT-CLUSTER>" in the context list from "kubectl config get-contexts". Otherwise, refer to View Management Cluster Details With Tanzu CLI and kubectl for detailed steps.
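For example, assuming the management cluster was named clusterclassmgmt during the install (the name used when deleting it at the end of this demo), the switch looks like this:
kubectl config get-contexts
kubectl config use-context clusterclassmgmt-admin@clusterclassmgmt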
  8. The feature gates need to be checked and set to "true" for ClusterClass to work. Do this by editing the capi-controller-manager deployment. Detailed steps can be found in the documentation Enabling Experimental Features on Existing Management Clusters. After saving the edit, the controller restarts and applies the changes; if you need to restart the controller again, this can be done with a rollout restart: "kubectl rollout restart -n capi-system deployment.apps/capi-controller-manager".
kubectl edit -n capi-system deployment.apps/capi-controller-manager

Screenshot of the CAPI-CONTROLLER-MANAGER configuration showing the feature gates
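To check the gates without opening the editor, the controller's arguments can also be printed directly; the value to look for is ClusterTopology=true inside the --feature-gates flag (this assumes the manager is the first container in the pod spec, which is the default layout):
kubectl get deployment -n capi-system capi-controller-manager -o jsonpath='{.spec.template.spec.containers[0].args}'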

  9. To use ClusterClass, the Cluster API version on the existing controller needs to be upgraded. This is best done with the clusterctl CLI upgrade option. More upgrade information can be found in the Cluster API documentation clusterctl upgrade.

Note that once the Cluster API is upgraded, using the Tanzu CLI to make any changes to the deployment may lead to errors that will need to be corrected with manual steps in the AWS console.

clusterctl upgrade plan
clusterctl upgrade apply --contract v1beta1

Screenshot of a Linux machine running clusterctl upgrade apply --contract v1beta1

  10. The management cluster is now ready for creating a ClusterClass. This demo uses the sample ClusterClass and accompanying templates for AWS from GitHub. You can also write your own class with different options (see the documentation Writing a ClusterClass for more information).
kubectl apply -f https://raw.githubusercontent.com/dbarkelew/Playground/clusterclassaws/Kubernetes/ClusterClass/AWS/templates/myfirst-clusterclass.yml
kubectl apply -f https://raw.githubusercontent.com/dbarkelew/Playground/clusterclassaws/Kubernetes/ClusterClass/AWS/templates/myfirst-templates.yml
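If the apply succeeded, the new class is listed as a ClusterClass resource in the management cluster:
kubectl get clusterclass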
  11. Now create a cluster by referencing the ClusterClass as shown in this example.
kubectl apply -f https://raw.githubusercontent.com/dbarkelew/Playground/clusterclassaws/Kubernetes/ClusterClass/AWS/templates/myfirst-cluster.yml
  12. We can monitor the cluster creation using the kubectl CLI or the clusterctl CLI.
kubectl get cluster
clusterctl describe cluster 'myfirst-cluster'
watch "kubectl get cluster myfirst-cluster -o json | jq '.status.conditions'"
  13. Once the control plane node is up and the instances are running in the AWS console, the container network interface (CNI) needs to be installed to complete the workload cluster. Pull the kubeconfig file for the workload cluster using the clusterctl CLI (additional details on getting the kubeconfig are at clusterctl get kubeconfig). For this demo we use the Calico CNI. The workload cluster will show the Ready state after the CNI finishes deploying.
clusterctl get kubeconfig 'myfirst-cluster' > 'myfirst-cluster.kubeconfig'
kubectl --kubeconfig=/home/ubuntu/myfirst-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml
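Once Calico finishes deploying, the workload cluster's nodes should move to the Ready state (same kubeconfig path as above):
kubectl --kubeconfig=/home/ubuntu/myfirst-cluster.kubeconfig get nodes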
  14. Once done testing out ClusterClass, the following steps detail how to delete the ClusterClass and put the management cluster back into a configuration that the Tanzu CLI can manage again (including deletion of the management cluster). First, delete the workload clusters and the ClusterClass.
kubectl delete cluster 'myfirst-cluster'
kubectl delete cc myfirst
  15. Next, downgrade the Cluster API to the original version.
clusterctl upgrade apply --core capi-system/cluster-api:v1.1.5 --bootstrap capi-kubeadm-bootstrap-system/kubeadm:v1.1.5 --control-plane capi-kubeadm-control-plane-system/kubeadm:v1.1.5 --infrastructure capa-system/aws:v1.2.0
  16. Now the Tanzu CLI can delete the Tanzu Kubernetes Grid management cluster and clean up the deployment from AWS.
tanzu management-cluster delete clusterclassmgmt

Learn more

Visit the GitHub page for additional documentation on ClusterClass. Additionally, you can watch more demos at the VMware Tanzu Kubernetes Grid TechZone any time. 
