February 28, 2023

Cluster configuration as code: Reviewing a real-world repository and the benefits of Tanzu Mission Control's GitOps integration

I review a real-world repository in use by one of our highly technical partners to manage their Kubernetes clusters in production. Then I take a deep dive into an example repository I built so that others can understand how to begin utilizing Flux with their clusters managed by Tanzu Mission Control.

Intro

With the recent availability of FluxCD configuration for cluster groups within Tanzu Mission Control, I wanted to review a real-world repository that is currently being used by one of our highly technical partners to manage their Kubernetes clusters in production. Then I will take a deep dive into an example repository I built so that others can understand how to begin utilizing Flux with their clusters managed by Tanzu Mission Control.

*It is important to note that my repository is intended to be used with Tanzu Mission Control and has been configured in such a way that you can still deploy packages from the Catalog while also utilizing automation with FluxCD. This is achieved by utilizing the kapp-controller that is automatically deployed by Tanzu Mission Control.

A new day in the life of a Platform Engineer

Let's see how we can cut the process of installing software packages on Kubernetes clusters down to a fraction of its previous steps.

A customer was looking for a solution to help them install common services such as container security, data protection, and monitoring across hundreds of clusters while avoiding any significant investment in developing new scripts or pipelines. The initial release of FluxCD support in Tanzu Mission Control gave Operators the ability to configure Flux at the cluster level, which was a good first step toward achieving operational efficiencies. The next step expands the continuous delivery feature from clusters to cluster groups, eliminating the need to configure each cluster individually. This can potentially save thousands of skilled labor hours spent deploying and continually maintaining multiple Kubernetes clusters and the packages on them.

While the cluster provisioning process is roughly four steps (depending on how many kustomizations are added), it still creates a lot of additional toil as cluster count increases. Let's review the current process:

  1. Create a new cluster through Tanzu Mission Control
  2. Deploy Flux onto the cluster
  3. Add Git repository
  4. Add kustomizations

With the addition of FluxCD support for cluster groups, the new process is a fraction of the steps:

  1. Create a new cluster through Tanzu Mission Control and place it in a cluster group that has the continuous delivery feature enabled and configured
  2. After the initial configuration, cluster-specific overrides may be applied at the cluster level, either by the Platform Operator or App Operator.

Putting yourself in this engineer's shoes, you would be able to attach a Git repository to Tanzu Mission Control and leverage your existing GitOps toolchain to achieve operational efficiencies when configuring clusters or cluster groups. YAML artifacts from your repository are automatically synchronized and package installation on clusters is streamlined, which improves configuration consistency and speeds up the delivery of ready-to-use clusters for teams that need them.

Looking at a real customer repository

Today, we are going to explore how a partner is able to automatically deploy mission-critical packages to any existing and new Kubernetes clusters in their fleet. Let's take a moment to review what is happening within this production repository to understand how they are achieving their operational and security goals. Seven packages are installed automatically and continually updated as application configurations, versions, and other settings change in Git.

The packages getting installed are:

  • Carbon Black for container security
  • Kasten for data protection
  • Bitnami sealed-secrets for storing sensitive secrets in Git
  • SMB driver for shared folder access from pods
  • Contour Tanzu Package for application ingress
  • Cert-manager for service certificate management
  • Fluent-bit for exporting logs and metrics

Here is how the repository is structured:

├── carbon-black
│  ├── cbcontainers-access-token_sealedsecret.yaml
│  ├── cbcontainers-agent_CBContainersAgent.yaml
│  ├── cbcontainers-operator.yaml
│  └── kustomization.yaml
├── flux-system
│  ├── flux-components.yaml
│  ├── flux-sync.yaml
│  └── kustomization.yaml
├── infra-apps
│  ├── kasten-io
│  │  ├── k10_helmrelease.yaml
│  │  ├── kasten_helmrepository.yaml
│  │  └── kustomization.yaml
│  ├── sealed-secrets
│  │  ├── helmrelease.yaml
│  │  └── helmrepository.yaml
│  ├── smb-csi-driver
│  │  ├── helmrelease.yaml
│  │  └── helmrepository.yaml
│  └── tanzu-system-ingress
│       ├── tlscertificatedelegation.yaml
│       └── wildcard-services-prodsite-com_sealedsecret.yaml
├── public-sealed-secrets.pem
└── tanzu-packages
    ├── cert-manager
    │  ├── cert-manager-package-install.yaml
    │  └── kustomization.yaml
    ├── contour
    │  ├── contour-data-values.yaml
    │  ├── contour-package-install.yaml
    │  └── kustomization.yaml
    ├── fluent-bit
    │  ├── fluent-bit-data-values.yaml
    │  ├── fluent-bit-package-install.yaml
    │  └── kustomization.yaml
    └── pre-reqs
        ├── cert-manager-sa.yaml
        ├── contour-sa.yaml
        ├── fluent-bit-sa.yaml
        └── packages-ns.yaml

You will notice that many of the folders contain a kustomization.yaml that defines what YAML components should be ingested by kustomize. Folders used for Helm deployments contain a helmrelease.yaml which defines the Helm chart deployment settings, and a helmrepository.yaml which defines a repository that a Helm chart should be retrieved from and deployed.
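For reference, here is a minimal sketch of what such a Helm pair might look like for the sealed-secrets folder. The repository URL, chart name, namespace, and intervals are assumptions for illustration, not contents of the partner's actual files:

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: sealed-secrets
  namespace: flux-system
spec:
  interval: 10m0s
  # Assumed upstream chart repository for sealed-secrets
  url: https://bitnami-labs.github.io/sealed-secrets
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sealed-secrets
  namespace: flux-system
spec:
  interval: 10m0s
  chart:
    spec:
      # Chart name within the repository above
      chart: sealed-secrets
      sourceRef:
        kind: HelmRepository
        name: sealed-secrets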

Separating each package into folders that contain all of the resources needed for installation makes organizing and updating, adding, or replacing YAMLs for individual packages easy. By automatically deploying configurations and packages to clusters with Flux, significant operational efficiencies trickle down to multiple teams.

For example, the training barrier for junior team members or those with little Kubernetes experience is significantly lowered, allowing that expanded group to participate in cluster operations. This means the primary platform engineers are freed up to look at optimizations and improvements instead of being continually busy with cluster operations.

An example repository using the advantages of Tanzu Mission Control's integrations

Now, let's take a moment to review how my example Git repository is structured and how the packages are configured, to make it easier to understand what is happening and how these configurations can be repurposed for your own environment.

Because my repository is intended to be used with Tanzu Mission Control, I want to point out the native open-source integrations we are taking advantage of, which contribute to a simplified repository:

  • Velero provides data protection for workloads and is managed by Tanzu Mission Control, so those components are not present in my repository.
  • Tanzu Service Mesh provides container security and is deployed by Tanzu Mission Control, so those components are not present in my repository.
  • FluxCD provides Git-repository-synced cluster configurations as well as automated package operations and is deployed and managed by Tanzu Mission Control, so those components are not present in my repository.

The packages that get deployed automatically from this repository are:

  • Cert-manager for service certificate management
  • Contour Tanzu Package for application ingress
  • Fluent-bit for exporting pod logs and metrics
  • Grafana for data visualization
  • Harbor for providing a trusted repository for images or packages
  • Prometheus for app monitoring and metrics

Now we will review what each folder contains and how they relate to the automation process:

  • pre-reqs folder - Kustomization files that point back to the namespaces and service-accounts folders for the resources needed to deploy the Tanzu packages
  • flux-config folder - Contains kustomization files for each package we are deploying that point to the package installer folders in the repository root. If you do not make any modifications, here is the order in which the packages will be deployed automatically:
     cert-manager -> fluent-bit -> contour -> Harbor -> Prometheus -> Grafana
  • namespaces folder - Namespace definition used for all service accounts and PackageInstall CRDs. I am using a namespace named packages
  • service-accounts folder - Service account definitions needed to deploy Tanzu packages
  • <Package name> folder - Carvel PackageInstall, data values, and kustomization YAMLs
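To make that last item concrete, here is a rough sketch of what a Carvel PackageInstall for cert-manager might look like. The package refName, version constraint, and service account name are assumptions based on common Tanzu package conventions, so compare against the actual files in the repository:

---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: cert-manager
  namespace: packages                        # the shared namespace described above
spec:
  serviceAccountName: cert-manager-sa        # defined in the service-accounts folder
  packageRef:
    refName: cert-manager.tanzu.vmware.com   # assumed Tanzu package name
    versionSelection:
      constraints: ">=1.0.0"                 # hypothetical constraint; pin to a version your catalog offers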

Each stage has been configured not to progress unless the pre-reqs for the package have been met. Let’s examine how this is utilized in the example contour kustomization file below:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: tanzu-packages-contour
  namespace: tanzu-continuousdelivery-resources
spec:
  dependsOn:
    - name: tanzu-packages-cert-manager
  interval: 1m0s
  path: ./contour
  prune: true
  sourceRef:
    kind: GitRepository
    name: fluxcd-poc
    namespace: tanzu-continuousdelivery-resources
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: contour
      namespace: tanzu-system-ingress

Using the dependsOn property, this kustomization file states that it cannot start reconciling until the tanzu-packages-cert-manager kustomization has reconciled. Utilizing the healthChecks property, we instruct the reconcile process to proceed only once the contour deployment in the tanzu-system-ingress namespace is ready. If you only define a dependsOn without a health check, you could end up with kustomizations whose workloads never become healthy yet still allow the process to proceed to the next kustomization step, causing all subsequent kustomizations to fail.
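If you want to watch this ordering play out from a terminal, the Flux Kustomization objects can be inspected directly with kubectl; this assumes you have kubectl access to the cluster and that your namespace matches the example above:

kubectl get kustomizations -n tanzu-continuousdelivery-resources
kubectl describe kustomization tanzu-packages-contour -n tanzu-continuousdelivery-resources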

To customize which packages get installed, modify the kustomization.yaml located in the flux-config folder to add or remove packages. Individual package values can be modified by editing the <package name>-data-values.yaml present in each package folder. If you do not create a kustomization.yaml for each directory, Flux will automatically generate one for you. Utilizing a kustomization.yaml is recommended, as it helps you control execution order and allows you to use other kustomize functionality, such as the secretGenerator field.
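As a rough sketch of such a kustomization.yaml, assuming a package folder containing a single PackageInstall manifest plus a hypothetical generated secret for illustration:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Listed order controls the order in which kustomize emits the manifests
  - cert-manager-package-install.yaml
secretGenerator:
  # Hypothetical secret, shown only to illustrate the generator syntax
  - name: example-credentials
    literals:
      - username=admin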

Configure FluxCD for Cluster Groups with Tanzu Mission Control

In the Tanzu Mission Control console, click on the Cluster Groups view and select your desired cluster group. Then in the top right corner of the view click Actions, then click Enable continuous delivery:

Fig 1. Enabling Continuous Delivery

Depending on cluster resources, this process can take a few minutes. If you plan on utilizing Helm charts, you will also need to manually install the Helm Controller from the Catalog. You can click the Add-ons tab to see the status of the Flux components as they deploy.

Note for vSphere with Tanzu clusters: If you are enabling continuous delivery on a vSphere with Tanzu workload cluster, you will need to create a policy that disables PodSecurityPolicies (PSPs) on the Flux source-controller namespace:

  1. In the Tanzu Mission Control console, click Policies, then Assignments.
  2. For this example, I am going to create a policy and apply it to the cluster group that contains all of my vSphere with Tanzu clusters, so I select the corresponding group and click Create Security Policy.
  3. Next, select Baseline as the security template and provide a descriptive name for the policy.
  4. Scroll down, toggle Disable native pod security policies, and leave Enforcement action set to deny so there is still a security policy in place.
  5. Click Create policy.

    Fig 2. Disabling native PodSecurityPolicies

Adding the repository and kustomizations

Once Flux has been successfully enabled, refresh the page, click the Continuous Delivery tab, then click Git repositories.

  1. Click Add Git repository and create a repository named fluxcd-poc with a URL of https://github.com/coreydinkens/tmc-flux-poc
    1. Be sure to name the repository exactly as spelled above or your packages may not deploy.
    2. If you would like to select a specific branch, tag, semver, commit, or a specific Git implementation, click Advanced and customize the fields as desired:

      Fig 3. Selecting Git Advanced Options

  2. Next, click Kustomizations on the left, then click Add.
    1. For the name, enter tanzu-service-accounts, select the fluxcd-poc repository you just added, and enter pre-reqs as the folder path. If you would like objects created by this repository to be removed when the kustomization is removed, toggle the prune option.
  3. Wait a few moments, verify the first kustomization completed successfully, then click Add again.
    1. For the name, enter tanzu-packages, select the fluxcd-poc repository, and enter flux-config as the folder path, then toggle the prune option if you would like objects created by this repository to be removed when the kustomization is removed.

 

While viewing a cluster in the Tanzu Mission Control console, you can verify the status of the kustomizations by clicking the Add-ons tab, then clicking Installed kustomizations:

Fig 4. Checking the status of kustomizations

You can verify the status of the Tanzu packages by clicking on Installed Tanzu Packages:

Fig 5. Checking the status of Tanzu Packages

You can currently only see the status of Carvel packages in this view, as Tanzu Mission Control can only ingest the status of the PackageInstall object. We will need to use kubectl to get additional information about a package deployment, such as the LoadBalancer/external IP.

To find the external IPs of deployed services, connect via terminal to the cluster where the packages were just deployed and execute the following commands:

Grafana:

kubectl get svc -n tanzu-system-dashboards

Harbor / Envoy:

kubectl get svc -n tanzu-system-ingress

To access the Harbor web UI successfully, you will need to add a DNS entry for the Harbor hostname that points to the LoadBalancer IP address, either in your local hosts file or your DNS resolver of choice. Then use the Harbor hostname in the URL, e.g., https://harbor.tanzu.system.
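For example, a hosts-file entry might look like the line below, assuming the Envoy LoadBalancer was assigned 10.0.0.50 (substitute the external IP returned by the command above):

10.0.0.50  harbor.tanzu.system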

If you want to automatically remove the packages that were just added, make sure the prune option has been enabled, then remove the kustomizations in the following order:

  1. tanzu-packages
  2. tanzu-service-accounts

If you do not follow this order, you will end up with packages that require manual removal because the service accounts needed to manage them are gone.

Automatically manage Helm deployments with Tanzu Mission Control

Prior to adding the Helm directories, the Helm controller needs to be deployed on the cluster. Click on the Clusters view, then click on the name of the cluster you want to enable Helm on.

  1. Click the Actions button
  2. Then click Enable helm service


Fig 6. Installing Flux Helm Controller

The primary difference between deploying Carvel (Tanzu) packages and Helm charts utilizing Flux is that Flux uses a Helm controller to monitor HelmReleases and handle the chart CRUD (create, read, update, delete) process, whereas the Carvel (Tanzu) packages are deployed through a native Kubernetes object, the Carvel PackageInstall CRD.

I have included two example Helm deployments that can be used as a basis for adding your own deployments. Modify the values.yaml as needed, then add these folders as kustomizations:

  1. kube-prometheus-stack-helm folder - Community Prometheus + Grafana Helm chart
  2. wordpress-helm folder - Bitnami Wordpress Helm chart that includes MariaDB
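If you prefer to define these kustomizations declaratively rather than through the console, the equivalent object mirrors the contour example from earlier; this sketch assumes the same GitRepository name and namespace used above:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: wordpress-helm
  namespace: tanzu-continuousdelivery-resources
spec:
  interval: 1m0s
  path: ./wordpress-helm        # folder from the example repository
  prune: true                   # remove created objects when the kustomization is deleted
  sourceRef:
    kind: GitRepository
    name: fluxcd-poc
    namespace: tanzu-continuousdelivery-resources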

Closing remarks

A recurring theme we kept hearing from customers at VMware Explore 2022 in San Francisco was that they want their Ops teams to be more efficient and to have all needed tooling at their disposal so that cluster handoff is more complete. They do not want devs concerned with cluster config, VLANs, or adding the packages and tooling to clusters that would be needed for further software deployments. Tanzu Mission Control can be, or connect to, your source of truth for cluster configurations and help streamline operations.

I hope this short guide has been useful and helps you down a path to efficient production. By utilizing the GitOps approach outlined above, Ops and Dev teams can begin to eliminate the toil involved with deploying new clusters and preparing them for consumption by the desired audience, with all the packages needed.
