Running Testcontainers on Tanzu Application Platform

Intro

The Tanzu Application Platform allows developers to build, package, and deploy applications through Supply Chains. The Out-of-the-Box Testing Supply Chain leverages Tekton to run tests before the build step. Tests run as a Tekton task, inside the cluster.

Testcontainers is a popular technology choice for running unit tests against real, containerized dependencies, such as databases and message brokers. It works on developer machines by spinning up containers using the local Docker daemon. However, it does not work out of the box on a Tanzu Application Platform cluster, as you need to provide a Docker instance. Additionally, that instance must run in “privileged” mode, which might not be desirable in your build clusters.

To run Testcontainers-managed containers outside of a Tanzu Application Platform cluster, you can provide your own Docker host with network access. If you prefer a hosted service, Testcontainers Cloud is a great option too.

Using Testcontainers Cloud

Testcontainers Cloud is the official managed service for Testcontainers. Log in to Testcontainers Cloud, then navigate to the Account > Service Accounts page. Generate a token and copy the generated value.

Set up the namespace by loading the service account token

First, create a Kubernetes Secret containing the token. The key name must match the secretKeyRef used in the pipeline below, for example:

kubectl create secret generic \
  tc-cloud-secret \
  --namespace apps \
  --from-literal=TC_CLOUD_TOKEN=<YOUR-TESTCONTAINERS-TOKEN>
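
To double-check the secret, you can decode the token back out (a quick sanity check; the key TC_CLOUD_TOKEN is what the pipeline below expects):

kubectl get secret tc-cloud-secret \
  --namespace apps \
  -o jsonpath='{.data.TC_CLOUD_TOKEN}' | base64 -d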

Consume the secret in the test pipeline

Create a new pipeline that consumes the secret. In the example below, we use a Java-based project with a Gradle task in our test pipeline:

---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: testcontainers-cloud-pipeline
  namespace: apps
  labels:
    apps.tanzu.vmware.com/pipeline: testcontainers-cloud-pipeline
spec:
  params:
    - name: source-url
    - name: source-revision
    - name: tc-cloud-secret-name
  tasks:
    - name: test
      params:
        - name: source-url
          value: $(params.source-url)
        - name: source-revision
          value: $(params.source-revision)
        #! ℹ️  This passes the secret name from the Workload as a Tekton task param
        - name: tc-cloud-secret-name
          value: $(params.tc-cloud-secret-name)
      taskSpec:
        params:
          - name: source-url
          - name: source-revision
          #! ℹ️  Use the parameter in the spec
          - name: tc-cloud-secret-name
        stepTemplate:
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 1000
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: "RuntimeDefault"
            runAsNonRoot: true
        steps:
          - name: test
            image: gradle
            #! ℹ️  Change the existing script to download and run the Testcontainers agent, which
            #! connects to Testcontainers Cloud. Note: this could be baked into the base image.
            script: |-
              cd `mktemp -d`
              # ℹ️  Download the Testcontainers Cloud agent and start it
              sh -c "$(curl -fsSL https://get.testcontainers.cloud/bash)"
              wget -qO- $(params.source-url) | tar xvz -m
              ./gradlew test --info
            env:
              #! ℹ️  Load the token into the job, from the secret
              - name: TC_CLOUD_TOKEN
                valueFrom:
                  secretKeyRef:
                    key: TC_CLOUD_TOKEN
                    name: $(params.tc-cloud-secret-name)

(Note that this script fetches and installs the latest version of the Testcontainers agent before running the tests. Alternatively, you can bake the agent into your base CI image.)

Then reference the secret containing the Testcontainers Cloud token in your Workload, in Workload.spec.params. Here is an example using a Java project:

---
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  labels:
    app.kubernetes.io/part-of: testcontainers-app
    apps.tanzu.vmware.com/workload-type: web
    apps.tanzu.vmware.com/has-tests: "true"
  name: testcontainers-app
  namespace: apps
spec:
  build:
    env:
      - name: BP_JVM_VERSION
        value: "17"
  params:
    #! ℹ️  Selects the pipeline, name may vary on your setup
    - name: testing_pipeline_matching_labels
      value:
        apps.tanzu.vmware.com/pipeline: testcontainers-cloud-pipeline
    #! ℹ️  Pass the name of your secret to the pipeline params
    - name: testing_pipeline_params
      value:
        tc-cloud-secret-name: tc-cloud-secret
    - name: live-update
      value: "true"
  #! ℹ️  Required to run the app; you can omit it if you just want to try the build phase
  serviceClaims:
    - name: appsso-claim
      ref:
        apiVersion: services.apps.tanzu.vmware.com/v1alpha1
        kind: ClassClaim
        name: appsso-claim
  source:
    git:
      url: https://github.com/Kehrlann/tap-workloads-testcontainers
      ref:
        branch: main

This Java project is a simple client application that lets the user log in with an AppSSO AuthServer. You can run it on your local machine with the dev profile, or simply with ./gradlew bootRun.
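
For example, assuming this is a standard Spring Boot project and dev is a Spring profile (both assumptions based on the description above), you could run:

./gradlew bootRun --args='--spring.profiles.active=dev'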

When deploying to TAP, you can inspect the Workload by using the tanzu CLI:

tanzu apps workload get testcontainers-app --namespace apps

This showcases the state of your Workload, including the state of the different stages of the supply chain, the running pods, and so on. You can follow the logs of the “test” part of the pipeline:

tanzu apps workload tail testcontainers-app \
  --component test \
  --namespace apps \
  --since 24h

After the tests finish running, the end of the test output should mention that four tasks were successfully executed:

testcontainers-trvm8-test-pod[step-test] Gradle Test Executor 1 finished executing tests.
testcontainers-trvm8-test-pod[step-test]
testcontainers-trvm8-test-pod[step-test] > Task :test
testcontainers-trvm8-test-pod[step-test] Finished generating test XML results (0.041 secs) into: /tmp/tmp.yZzBPjeoSi/build/test-results/test
testcontainers-trvm8-test-pod[step-test] Generating HTML test report...
testcontainers-trvm8-test-pod[step-test] Finished generating test html results (0.093 secs) into: /tmp/tmp.yZzBPjeoSi/build/reports/tests/test
testcontainers-trvm8-test-pod[step-test]
testcontainers-trvm8-test-pod[step-test] Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
testcontainers-trvm8-test-pod[step-test]
testcontainers-trvm8-test-pod[step-test] You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
testcontainers-trvm8-test-pod[step-test]
testcontainers-trvm8-test-pod[step-test] For more on this, please refer to https://docs.gradle.org/8.5/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.
testcontainers-trvm8-test-pod[step-test]
testcontainers-trvm8-test-pod[step-test] BUILD SUCCESSFUL in 3m 6s
testcontainers-trvm8-test-pod[step-test] 4 actionable tasks: 4 executed

Note: For this Workload to successfully deploy and result in a running application on TAP, you need AuthServer credentials. You can find instructions in the AppSSO docs (create a simple AuthServer and claim credentials for it), but this is beyond the scope of this tutorial. We are only interested in the test results.

Using an external Docker VM

If you prefer to operate your own Docker VM rather than use the Testcontainers Cloud managed service, you can configure a remote Docker VM to run your Testcontainers tests. Most of the instructions can be found in the official Docker docs.

Testcontainers for Java leverages the docker-java project, which does not support Docker over SSH. This means we are going to make the Docker API accessible over HTTPS, secured with mutual TLS (mTLS).

Before you start

This is an example deployment. It does not scale to thousands of tests executed in parallel, nor does it support multiple teams.

This guide does not explain how to properly secure the remote Docker VM, e.g. with appropriate firewall rules. Lastly, it uses the example domain and port docker.example.com:2376; do not forget to use your own domain and your port of choice instead.

Set up the remote VM and firewall access

Before starting, you need to set up the VM and obtain its IP address and, if available, the DNS name at which it will be reachable. These are required to generate certificates for TLS support.

This guide assumes your Docker daemon will listen on port 2376, so you will need to open that port in your firewall. You also need to open all the ports that Testcontainers may end up using: these are the ephemeral ports. Find out the range of possible ports:

sysctl net.ipv4.ip_local_port_range
# 32768 60999
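
For example, if your VM uses ufw (an assumption; adapt this to your firewall or your cloud provider's security groups), you could open the daemon port and the ephemeral range like this:

sudo ufw allow 2376/tcp
sudo ufw allow 32768:60999/tcp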

This guide assumes you will install Docker on your own. For Ubuntu 22.04, at the time of this writing, the following commands install and configure Docker as per the official Docker instructions:

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
 "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
sudo addgroup --system docker
sudo usermod -a -G docker $(whoami)

Then fully log out of the machine and log back in, to make sure you have the correct groups assigned. The commands docker ps and docker run --rm hello-world should now work.

Generate server and client certificates

In this section, we generate the certificates used for mTLS between the host and the client(s). Note that this is more of a "dev mode" guide, which generates single-purpose certificates.

💡 Certificate generation can be done either on your local machine or on the remote machine

🚨 Creating a one-off CA just to generate your keys and certificates is not recommended in production. There, please provision your certificates with a proper Public Key Infrastructure in place, using a trusted, centralized CA.

The following is taken straight from the official Docker documentation.

Set the HOST and PUBLIC_IP env vars, then run the following interactive Bash script. When prompted for a passphrase (for the CA key), you must enter a value:
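
For example (docker.example.com and 203.0.113.10 are placeholders; use your VM's actual DNS name and public IP):

export HOST=docker.example.com
export PUBLIC_IP=203.0.113.10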

#!/usr/bin/env bash
# This script generates:
# 1. A CA private key and certificate, used to sign both the client and server certificates
# 2. A server private key and certificate, to expose over the internet over HTTPs
# 3. A client private key and certificate, to protect the docker daemon from unauthorized connections
set -euo pipefail
: "${HOST?"Please set the HOST env var to the hostname of your docker VM"}"
: "${PUBLIC_IP?"Please set the PUBLIC_IP env var to the public IP of your docker VM"}"
#~~~~~~~~~~~
# Set up initial directories
mkdir ca
mkdir server
mkdir client
#~~~~~~~~~~~
# Root CA key & cert
openssl genrsa -aes256 -out ca/key.pem 4096
openssl req -new -x509 -days 365 -key ca/key.pem -sha256 -out ca/ca.pem
#~~~~~~~~~~~
# Server CSR (Certificate signing request)
openssl genrsa -out server/key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server/key.pem -out server/server.csr
# Server Extfile
echo subjectAltName = DNS:$HOST,IP:127.0.0.1,IP:$PUBLIC_IP >> server/extfile.cnf
echo extendedKeyUsage = serverAuth >> server/extfile.cnf
# Generate server certs, signed by the CA
# Here we make the certificates valid for 10 years 🤷
openssl x509 -req -days 3650 -sha256 -in server/server.csr -CA ca/ca.pem -CAkey ca/key.pem \
 -CAcreateserial -out server/cert.pem -extfile server/extfile.cnf
#~~~~~~~~~~~
# Client auth: CSR, extfile, certs
openssl genrsa -out client/key.pem 4096
openssl req -subj '/CN=client' -new -key client/key.pem -out client/client.csr
echo extendedKeyUsage = clientAuth > client/extfile-client.cnf
openssl x509 -req -days 3650 -sha256 -in client/client.csr -CA ca/ca.pem -CAkey ca/key.pem \
 -CAcreateserial -out client/cert.pem -extfile client/extfile-client.cnf
#~~~~~~~~~~~
# Clean up signing requests
rm -v client/client.csr server/server.csr server/extfile.cnf client/extfile-client.cnf
# ~~~~~~~~~~~~
# Protect the keys so that they cannot be changed accidentally
chmod -v 0400 ca/key.pem client/key.pem server/key.pem
chmod -v 0444 ca/ca.pem client/cert.pem server/cert.pem
cp ca/ca.pem client
cp ca/ca.pem server

This generates a client and a server directory, each containing keys and certificates. Copy the server directory to your remote machine.
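
For example, with scp (ubuntu@docker.example.com is a placeholder for your own user and host):

scp -r server ubuntu@docker.example.com:~/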

The ca directory is there in case you want to generate more certificates with the same CA, e.g. to add more clients.

The client directory is used when running Testcontainers tests. It is used below to verify that the setup works, and also in the "Using the remote Host with TAP" instructions. Copy its contents to your local machine.
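
Before moving on, you can optionally check that both certificates chain back to the CA:

openssl verify -CAfile ca/ca.pem server/cert.pem client/cert.pem
# server/cert.pem: OK
# client/cert.pem: OK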

Running Docker on the remote VM

Store the server keys and certificates on your Docker VM, e.g. in /var/docker/server/. You could configure your Docker daemon with command-line flags, e.g. --tlscacert and friends, but it is highly recommended to keep the configuration in one place. Edit the /etc/docker/daemon.json file to configure the daemon, and set it to:

{
 "tlsverify": true,
 "tlscacert": "/var/docker/server/ca.pem",
 "tlscert": "/var/docker/server/cert.pem",
 "tlskey": "/var/docker/server/key.pem",
 "hosts": ["unix:///var/run/docker.sock", "0.0.0.0:2376"]
}
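
Recent Docker Engine releases (23.0 and later) can validate this file without starting the daemon, which is handy before a restart:

sudo dockerd --validate --config-file /etc/docker/daemon.json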

Then, if you want to run the Docker daemon manually, you can launch it with sudo dockerd.

However, on most systems, it runs as a service or a proper daemon. On Ubuntu, it's a systemd service. Again, the instructions are lifted from the official docs and adapted to Ubuntu 22.04.

If you restart the daemon with the config as-is, it will fail, because the systemd unit that starts the daemon also specifies a host, which conflicts with the hosts entry in daemon.json. The fix is described in the official docs:

  1. Run sudo systemctl edit docker.service
  2. Update the file with:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock

  3. 🚨 Be careful when editing the file: your changes must be typed above the comment line stating that everything below it will be discarded.
  4. Run sudo systemctl daemon-reload
  5. Run sudo systemctl restart docker.service

Then verify that you can talk to the daemon on the server machine directly: docker run --rm hello-world

This is very important, because Testcontainers relies on local (UNIX socket) communication with the daemon.
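
You can also make sure the daemon listens on the TCP port (ss ships with Ubuntu; the exact output format may vary):

sudo ss -ltnp | grep 2376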

Using the remote VM from your workstation

Finally, verify that you can talk to the daemon over TCP, from your workstation.

Copy the client/ directory with the certificates to your machine, e.g. into $HOME/docker/client.

Then use them to talk to the remote host. Set the value of the DOCKER_HOST env var below to point to your remote host, e.g. tcp://docker.example.com:2376. The scheme MUST be tcp and the port MUST be present:

DOCKER_HOST=<your-docker-host> \
DOCKER_CERT_PATH=$HOME/docker/client \
DOCKER_TLS_VERIFY=1 \
   docker run --rm hello-world

Make sure the remote host is usable with Testcontainers

Find a project that uses Testcontainers. For example, you could use https://github.com/Kehrlann/testcontainers-dex, but any project with Testcontainers will do.

Clone it on your machine

Run the project's tests against your local environment

Provided you have Docker installed on your machine, you should be able to run Testcontainers tests with your own local daemon, and make sure the tests pass locally before trying the remote host.

With the project above, provided you have Java 17 installed, run: ./gradlew cleanTest check --info

This should work by using the local Docker socket, the same as when you run docker ps.

Notes:

  • cleanTest ensures the tests are re-run every time
  • --info gives more comprehensive STDOUT output

Run with remote daemon

Finally, you want to run against the remote Docker daemon. Usually, setting DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH is enough to run your Testcontainers tests on the remote host (tested with Java and Go).

With the project above, and Java 17 installed, run:

DOCKER_HOST=<your-docker-host> \
DOCKER_CERT_PATH=$HOME/docker/client \
DOCKER_TLS_VERIFY=1 \
   ./gradlew clean check --info

Notes:

Rather than env vars, we could also use ~/.testcontainers.properties, but the env-var-based approach fits well with the Kubernetes mindset of injecting data into a "resource" (Pod, Workload, etc.).
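
For reference, the equivalent ~/.testcontainers.properties could look like this (docker.host, docker.tls.verify, and docker.cert.path are the configuration keys documented by Testcontainers; the values are placeholders):

docker.host=tcp://docker.example.com:2376
docker.tls.verify=1
docker.cert.path=/home/<your-user>/docker/client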

Debugging failures in Testcontainers

If you are using the project above for your smoke tests, you can change the log levels in logback-test.xml.

Using the remote Host with TAP

Set up the namespace by loading the certificates in a Secret

First, create a Kubernetes Secret containing the three certificate and key files from the client directory generated in the previous steps, as well as the Docker host of the remote machine (we will use tcp://my-docker-host.example.com:2376 in the following example; be sure to replace it with your own value). The keys MUST be named:

  • ca.pem
  • cert.pem
  • key.pem
  • docker_host

For example:

kubectl create secret generic \
   docker-secret \
   --namespace apps \
   --from-literal=docker_host=tcp://my-docker-host.example.com:2376 \
   --from-file=ca.pem=$HOME/docker/client/ca.pem \
   --from-file=cert.pem=$HOME/docker/client/cert.pem \
   --from-file=key.pem=$HOME/docker/client/key.pem
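
To check that all four keys made it into the Secret (a quick sanity check), list them with:

kubectl describe secret docker-secret --namespace apps
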
Consume the secret in the test pipeline

Create a new pipeline, and mount the secret. In the example below, we use a Java-based project with a Gradle task in our test pipeline:

---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
 name: testcontainers-pipeline
 namespace: apps
 labels:
   apps.tanzu.vmware.com/pipeline: testcontainers-pipeline
spec:
 params:
   - name: source-url
   - name: source-revision
   - name: docker-secret-name
 tasks:
   - name: test
     params:
       - name: source-url
         value: $(params.source-url)
       - name: source-revision
         value: $(params.source-revision)
       #! ℹ️  This passes the secret name from the Workload as a Tekton task param
       - name: docker-secret-name
         value: $(params.docker-secret-name)
     taskSpec:
       params:
         - name: source-url
         - name: source-revision
         #! ℹ️  Use the parameter in the spec
         - name: docker-secret-name
       stepTemplate:
         securityContext:
           allowPrivilegeEscalation: false
           runAsUser: 1000
           capabilities:
             drop:
               - ALL
           seccompProfile:
             type: "RuntimeDefault"
           runAsNonRoot: true
       volumes:
         #! ℹ️  Use the docker secret as a volume
         - name: docker-cert-path
           secret:
             secretName: $(params.docker-secret-name)
       steps:
         - name: test
           image: gradle
           #! ℹ️  No need to change the existing script
           script: |-
             cd `mktemp -d`
             wget -qO- $(params.source-url) | tar xvz -m
             ./gradlew test --info
           env:
           #! ℹ️  Use the "docker_host" key as DOCKER_HOST env var
           - name: DOCKER_HOST
             valueFrom:
               secretKeyRef:
                 key: docker_host
                 name: $(params.docker-secret-name)
           #! ℹ️  Enforce mutual TLS
           - name: DOCKER_TLS_VERIFY
             value: "1"
           #! ℹ️  Point the docker driver to where the certs are, see volumeMounts
           - name: DOCKER_CERT_PATH
             value: "/var/docker/certs"
           #! ℹ️  Mount the secret volume at a path; it does not need to be /var/docker/certs
           #! but the DOCKER_CERT_PATH env must match the value
           volumeMounts:
             - name: docker-cert-path
               mountPath: "/var/docker/certs"

Then reference the secret containing the Docker credentials in your Workload, in Workload.spec.params. Here is an example using a Java project:

---
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
 labels:
   app.kubernetes.io/part-of: testcontainers-app
   apps.tanzu.vmware.com/workload-type: web
   apps.tanzu.vmware.com/has-tests: "true"
 name: testcontainers-app
 namespace: apps
spec:
 build:
   env:
     - name: BP_JVM_VERSION
       value: "17"
 params:
   #! ℹ️  Selects the pipeline, name may vary on your setup
   - name: testing_pipeline_matching_labels
     value:
       apps.tanzu.vmware.com/pipeline: testcontainers-pipeline
   #! ℹ️  Pass the name of your secret to the pipeline params
   - name: testing_pipeline_params
     value:
       docker-secret-name: docker-secret
   - name: live-update
     value: "true"
 #! ℹ️  Required to run the app; you can omit it if you just want to try the build phase
 serviceClaims:
   - name: appsso-claim
     ref:
       apiVersion: services.apps.tanzu.vmware.com/v1alpha1
       kind: ClassClaim
       name: appsso-claim
 source:
   git:
     url: https://github.com/Kehrlann/tap-workloads-testcontainers
     ref:
       branch: main

This is the same Java project as in the Testcontainers Cloud example above. As before, you can inspect the Workload with the tanzu CLI and follow the logs of the “test” part of the pipeline:

tanzu apps workload get testcontainers-app --namespace apps

tanzu apps workload tail testcontainers-app \
  --component test \
  --namespace apps \
  --since 24h

After the tests finish running, the output should end the same way as in the Testcontainers Cloud example above: BUILD SUCCESSFUL, with four tasks executed. The same note about AppSSO AuthServer credentials applies; here, we are only interested in the test results.

Learn More about Tanzu Application Platform

Tanzu Application Platform is a composable, enterprise-ready internal developer platform that facilitates collaboration between dev and ops teams, with the ultimate goal of establishing faster and more secure paths to production.

Learn more about it on the website, explore its capabilities in the documentation, and educate yourself on the Tech Zone page.
