Creating CI/CD Workflows with Tekton

21 April, 2023

Brett Milford

CI/CD stands for Continuous Integration and Continuous Delivery, and is a set of software engineering practices that aims to improve the speed and quality of software delivery.

Continuous Integration is the process of automatically building and testing code changes whenever they are committed to a shared repository. This ensures that all changes are integrated and tested continuously, preventing the introduction of issues and reducing the time and effort required for manual testing.

Continuous Delivery is the process of automating the deployment of code changes to production environments in a frequent and reliable manner. This allows teams to deliver new features, updates, and fixes to users more quickly and with greater confidence.

In practice, CI/CD involves using various tools and technologies to automate the entire software development lifecycle, from code creation to deployment. This includes source code management systems, build servers, testing frameworks, and deployment pipelines. By using these tools, developers can quickly and easily make changes to their code, and have those changes automatically built, tested, and deployed to production environments.

CI/CD is widely used in modern software development, particularly in agile and DevOps environments, as it helps teams release software faster and with higher quality while reducing the risk of errors and downtime.

Put simply, CI/CD is automation that:

Takes a set of inputs -> Does one or more things -> Produces an artefact.

With these basic constructs in place, we can program a wide range of workflows, be they CI or CD, and achieve a variety of useful outcomes.

## Implementations

There are a number of CI/CD tools to choose from, and they all tend to build upon similar ideas and concepts.

Ultimately, choosing an option will likely come down to what fits best with the platforms and tools already in place and the support available for you or your team’s development preferences.

For instance, GitLab’s and GitHub’s offerings pair well with their repository hosting; Argo and Tekton work well with Kubernetes; and Jenkins and Zuul include feature sets that might benefit certain development projects.

This article will explore Tekton, as it pairs well with Kubernetes infrastructure and provides a powerful and flexible feature set for automating workflows on the platform.

### Tekton

Tekton is an open-source framework for building and deploying cloud-native applications. It provides a set of Kubernetes-native building blocks for creating Continuous Integration and Continuous Delivery (CI/CD) pipelines that are scalable, portable, and easily extendable. Tekton is designed to be modular and composable, allowing teams to assemble pipelines from reusable, declarative components. This makes it easy to create pipelines that can be shared and reused across different projects and environments. Additionally, Tekton integrates with a wide range of development tools and platforms, allowing teams to use the tools they prefer while still taking advantage of the benefits of a unified CI/CD pipeline.

The key building blocks of Tekton are:

  1. Tasks: Tasks are the smallest unit of work in Tekton, representing a single step in a CI/CD pipeline. They can be created using any tool or language, and can be run in any container environment.

  2. Pipelines: Pipelines are collections of tasks and resources that define a CI/CD workflow. They can be defined using a simple declarative YAML syntax, and can be versioned and shared across different projects and environments.

  3. Resources: Resources are inputs and outputs to tasks and pipelines. They can be files, Git repositories, or other Kubernetes resources, and can be dynamically provisioned and managed by Tekton.

  4. Workspaces: Workspaces are used to share files between tasks in a pipeline. They allow tasks to read and write to a shared directory, enabling complex workflows that require data persistence and collaboration.

  5. Triggers: Triggers are used to automate pipelines based on events such as code changes or external events. They can be configured to listen for events from a wide range of sources, including Git repositories, container registries, and messaging platforms.

Together, these building blocks provide a powerful and flexible platform for implementing modern CI/CD pipelines, enabling teams to build, test, and deploy applications with speed, efficiency, and reliability.
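
As a minimal sketch of the first of these, a Task that runs a single step (the names and the busybox image here are illustrative, not part of this article’s pipeline):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  params:
    - name: message
      type: string
      default: "hello"
  steps:
    - name: echo
      image: busybox
      script: |
        #!/bin/sh
        # steps run as containers; params are substituted before execution
        echo "$(params.message)"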

In Tekton, Tasks and Pipelines define the steps and workflows that make up a CI/CD pipeline, while TaskRuns and PipelineRuns are the instances of those steps and workflows actually being executed. Both Tasks and Pipelines are designed to be reusable and can be executed multiple times. A TaskRun or PipelineRun represents a distinct, specific instance of execution of a Task or Pipeline; they are created dynamically and executed based on the configuration of a Task or Pipeline. In essence, Tasks and Pipelines define the structure and logic of a CI/CD pipeline, while TaskRuns and PipelineRuns instantiate those objects on demand, incorporating parameters specific to that instance of execution.
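
To make the distinction concrete, here is a sketch of a TaskRun that executes the illustrative Task above with parameters specific to this run:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  # generateName yields a unique name per execution
  generateName: echo-hello-run-
spec:
  taskRef:
    name: echo-hello
  params:
    - name: message
      value: "hello from this particular run"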

Tekton also has the concept of a Resolver, a component that defines how to resolve the inputs and outputs of Tasks and Pipelines. Resolvers provide a flexible and extensible way to dynamically resolve input and output parameters at runtime, based on the context of the execution environment. For instance, we will later see the hub resolver, which pulls a taskSpec from Tekton Hub.
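
As a taste of the syntax (the Pipeline later in this article uses this exact pattern), a taskRef that asks the hub resolver to fetch the git-clone task from Tekton Hub:

taskRef:
  resolver: hub
  params:
    - name: kind
      value: task
    - name: name
      value: git-clone
    - name: version
      value: "0.7"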

### Tekton Workflows

In Tekton, a Workflow is a collection of Tasks and Pipelines that defines a complete end-to-end CI/CD pipeline or workflow. Workflows allow teams to define and automate complex workflows that involve multiple stages, tools, and environments. Here are some of the key features of Tekton Workflows:

  • Declarative: Workflows are defined using YAML files that specify the sequence of Tasks and Pipelines, along with their inputs, outputs, and parameters. This declarative approach makes it easy to version, manage, and reuse Workflows across different teams and projects.

  • Reusable: Workflows can be composed of reusable Tasks and Pipelines, allowing teams to build complex workflows using pre-existing components. This can help reduce duplication and make it easier to maintain and evolve workflows over time.

  • Event-driven: Workflows can be triggered automatically based on events such as code changes, pull requests, or manual approvals. This event-driven approach makes it easy to integrate Workflows into a broader CI/CD pipeline and automate workflows based on changes in the codebase or other external events.

Overall, Tekton Workflows provide a powerful and flexible platform for creating complex CI/CD workflows that can automate everything from code testing and deployment to release management and production monitoring.

The goals and use-cases 1 of this experimental feature include emulating .gitlab-ci.yml and GitHub Actions.

## CI/CD Use Case: Static Website Generation

I’m going to use the example of building and deploying a static site from source.

We will use Hugo to generate the website, and the process will look like this:

  • We push a commit to our repository, which will be hosted by our local Gitea instance.
  • A Gitea webhook will call Tekton’s Event Listener on commits to our main branch.
  • The Tekton Workflow we’ve defined will be triggered and resolve the Pipeline spec from the repository.
  • The Pipeline will run the following Tasks:
    • Clone the repository into a Workspace.
    • Build a container with the Hugo binary and push it to a local registry which is an instance of Trow.
    • Run hugo against the repository in the Workspace.
    • Run hugo deploy to push the generated site and assets into an Object Bucket.

Additionally, we will have configured an ObjectBucket and an Ingress to host our assets and configure our front-end routing and termination, including:

  • Setting up DNS (external-dns), TLS (cert-manager) and tunnelling (cloudflare argo tunnel).
  • Proxy connections to Rados Gateway’s static website URL.

## Setting up the components of a CI/CD workflow

### Base components

There are a number of components in our use case that need to be in place before we can implement our workflows. Most of them are optional in this example; however, there are fantastic open-source options that can easily be self-hosted on our Kubernetes cluster alongside Tekton for a brilliant self-contained solution.

  1. Source code repository hosting. Tekton has built-in support for GitHub, GitLab and Bitbucket, and some support for Gitea. Support varies from component to component; for instance, Pipeline resolver support appears to vary slightly from git resolution in workflows for authenticated repositories. Gitea can be installed via its Helm chart 2. I also customised ALLOWED_HOST_LIST in the webhook configuration 3 to ‘private’, which covers all cluster-local addresses. After setting up a repository for hosting the site contents, we will need to configure a webhook 4 that triggers on commits to the main branch and sends events to the EventListener in the tekton-workflows namespace, as sketched below.
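
    A minimal sketch of the corresponding Helm chart values, assuming the chart’s gitea.config passthrough to app.ini:

    gitea:
      config:
        webhook:
          # 'private' allows webhook targets on cluster-local / RFC 1918 addresses
          ALLOWED_HOST_LIST: private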

  2. Container Registry. Container registries are fairly simple to run; however, TLS endpoints will generally make life easier.

    Public registries such as Docker Hub should work fine; however, for this, I use Trow. It can be installed via the charts in its source repository 5.

    Registries quickly become cumbersome to use when they don’t have TLS enabled; as such, I use Trow with an Ingress to configure TLS and an external DNS entry.

  3. An Ingress controller. MicroK8s doesn’t come with an Ingress controller by default, but ingress-nginx can be installed with:

    $ microk8s enable ingress
    

    Any ingress that supports proxying (proxy_pass) and TLS termination will generally make life easy for this project.

  4. Object storage with static website hosting capabilities. If you followed the article on deploying Rook Ceph on MicroK8s 6, you will have a capable object storage solution at hand which, with a few tweaks, can serve this purpose. Alternatively, any S3-compatible storage with DNS-addressed buckets and static website hosting should slot in here nicely, or we could opt to deliver the static assets with our custom-built image.

  5. External DNS configuration. This step serves two purposes:

    1. It fixes an issue with resolving the Trow ingress from inside the cluster, and
    2. Allows us to address buckets via DNS names (a requirement for static websites).

    To resolve the Trow virtual host from inside the cluster, we will need to configure CoreDNS to forward queries to our DNS server 7.

    With MicroK8s we can do this by enabling the DNS addon 8.

    $ microk8s enable dns:192.168.1.1
    

    Note that if this addon is enabled without specifying a server, all queries are forwarded to Google DNS servers (8.8.8.8) by default.
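
    For reference, the addon ultimately sets the forward target in the coredns ConfigMap. A trimmed sketch of the result (MicroK8s manages this for you, and the exact plugin list may differ):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            # answer cluster.local queries from the cluster API
            kubernetes cluster.local in-addr.arpa ip6.arpa
            # everything else goes to our local DNS server
            forward . 192.168.1.1
            cache 30
            loop
        }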

    On MicroK8s and Ubuntu 20.04, I initially tried to configure CoreDNS as per 9 with forward . /etc/resolv.conf. This resulted in CoreDNS failing to start with these errors:

    .:53
    [INFO] plugin/reload: Running configuration SHA512 = deb3871c00828b25727978460b261c74de0519acfb0c61c7813cc2cea8e445ddeb98d0e8f8c7bf945861f2a8b73a581c56067b8fe22b9acd076af72a94958de2
    CoreDNS-1.9.3
    linux/amd64, go1.18.2, 45b0a11
    [FATAL] plugin/loop: Loop (127.0.0.1:59387 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 6528390204194825117.6665545166410031413."
    

    Ubuntu 20.04 uses systemd-resolved for DNS resolution and points /etc/resolv.conf to a local stub resolver.

    Without delving deeper, I suspect the root of this error lies there: forwarding CoreDNS queries to the node’s local stub resolver creates the resolution loop the loop plugin detects.

    With name resolution now forwarded to our local DNS server, we can register a canonical name for our Trow instance and point it at the load-balanced IP of our Ingress. We can also create CNAMEs for our Rados Gateway DNS names that resolve to the cluster Service set up by Rook.

### Setting up Tekton

There is official documentation on how to install Tekton 10; we can grab all the components we need by utilising a simple kustomization.yaml file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
  - https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml
  - https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml
  - https://storage.googleapis.com/tekton-releases-nightly/workflows/latest/release.yaml

Simply run kubectl apply -k . 11 to pull these components in and apply them to the cluster.

Additionally, I add configuration for:

  • A ServiceAccount and ClusterRoleBinding.
  • A build account secret 12.
  • A secret containing our webhook token from the webhook setup step.
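
A hedged sketch of those additions (the ClusterRole name is an assumption based on the roles shipped with Tekton Triggers; the ServiceAccount and Secret names match the Workflow spec later in this article):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-builder
  namespace: tekton-workflows
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-builder
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # assumed: aggregated role shipped with Tekton Triggers
  name: tekton-triggers-eventlistener-clusterroles
subjects:
  - kind: ServiceAccount
    name: tekton-builder
    namespace: tekton-workflows
---
apiVersion: v1
kind: Secret
metadata:
  name: webhook-secret
  namespace: tekton-workflows
type: Opaque
stringData:
  token: replace-with-the-gitea-webhook-token  # from the webhook setup step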

### Static Website Infrastructure

  1. Setting Up Ceph Rados Gateway Static Site

    If you’re working with a Rook Ceph installation that includes a Rados Gateway front end, we can make use of it for hosting static web assets. Rook doesn’t yet support directly enabling DNS-style buckets or the s3website API 13, but we can easily enable this ourselves.

    1. Enable s3website API

    Include the following in your Rook Ceph configOverride:

    rgw_enable_static_website = true
    rgw_enable_apis = s3, s3website
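
    In a Rook deployment, this override typically lives in the rook-config-override ConfigMap. A sketch, assuming the settings sit in the [global] section (a client.rgw section scoped to your gateway would also work):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-config-override
      namespace: rook-ceph
    data:
      config: |
        [global]
        rgw_enable_static_website = true
        rgw_enable_apis = s3, s3website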
    

    Without this, you’ll get the following error message when trying to use the PutBucketWebsite API.

    An error occurred (MethodNotAllowed) when calling the PutBucketWebsite operation: Unknown
    
    2. DNS Name Setup

    The Rados Gateway DNS name can be set with the following configuration items:

    rgw_dns_name = rgw.example.com
    rgw_dns_s3website_name = rgw-website.example.com
    

    However, this may cause issues with the Rook Ceph operator, which will no longer be able to address the front end via its Kubernetes Service name.

    Another option, which allows multiple names, is to use the Zone Group configuration instead. From the Ceph Toolbox, perform the following:

    $ radosgw-admin zonegroup get > zonegroup.json
    $ vi zonegroup.json
    ...
        "hostnames": ["<SERVICE_NAME>.rook-ceph.svc","rgw.example.com"],
        "hostnames_s3website": ["rgw-website.example.com"],
    ...
    $ radosgw-admin zonegroup set --infile zonegroup.json
    $ radosgw-admin period update --commit
    
    3. Similar to Trow, we will utilise our upstream DNS setting to resolve the rgw-website hostname within the cluster.

    This gist provides fairly comprehensive coverage of configuring Rados Gateway static site capabilities, especially in complex scenarios.

  2. Bucket creation and configuration

    Firstly, we will need to create a bucket to store our website’s assets.

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: www-ceph-bucket
      namespace: www
    spec:
      bucketName: www
      storageClassName: ceph-bucket
    

    From here on out, we will be working with S3 APIs to configure our bucket 14 15. For this, I create a pod for temporary use, which:

    1. Contains the aws cli tools and
    2. Has the ID and KEY secrets injected as environment variables to access our bucket.
    apiVersion: v1
    kind: Pod
    metadata:
      name: tools
    spec:
      containers:
      - name: tools
        image: trow.int.cirriform.au/deployment-tools:latest
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        envFrom:
          - configMapRef:
              name: www-ceph-bucket
        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: www-ceph-bucket
                key: AWS_ACCESS_KEY_ID
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: www-ceph-bucket
                key: AWS_SECRET_ACCESS_KEY
    

    Then we need to do two things:

    1. Configure PutBucketWebsite.
    $ cat website-config.json
    {
      "IndexDocument": {
        "Suffix": "index.html"
      },
      "ErrorDocument": {
        "Key": "404.html"
      }
    }
    $ aws s3api put-bucket-website --bucket www --endpoint-url "http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc" --website-configuration file://website-config.json
    
    2. Add a bucket policy that allows public access.
    $ cat bucket-policy.json
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "BucketAllow",
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": [
                "s3:ListBucket",
                "s3:GetObject"
        ],
        "Resource": [
          "arn:aws:s3:::www",
          "arn:aws:s3:::www/*"
        ]
      }]
    }
    $ s3cmd --host rook-ceph-rgw-ceph-objectstore.rook-ceph.svc:80 --host-bucket rook-ceph-rgw-ceph-objectstore.rook-ceph.svc:80 --no-ssl setpolicy bucket-policy.json s3://www
    

### Ingress

With our Object Bucket and website configuration in place, we simply need an Ingress to expose the service. In addition to exposing the service, we can make use of cert-manager and Let’s Encrypt for TLS management, and external-dns to create DNS records.

---
apiVersion: v1
kind: Service
metadata:
  name: rgw-website
  namespace: www
spec:
  type: ExternalName
  externalName: www.rgw-website.example.com
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/target: "..."
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "true"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/upstream-vhost: "www.rgw-website.example.com"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
  name: www
  namespace: www
spec:
  ingressClassName: nginx
  rules:
  - host: &host www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rgw-website
            port:
              number: 80
  tls:
  - hosts:
    - *host
    secretName: www-tls

The key aspects of this configuration that route traffic to our Rados Gateway endpoint are:

  1. We create an ExternalName Service for our DNS bucket.
  2. In addition to proxying requests to this service, we set the annotation for upstream-vhost to the DNS bucket name.

Note: *.rgw-website… will be resolvable by a record in our upstream DNS, which CoreDNS has been configured to forward requests to.

  1. Uploading static website assets to a Rados Gateway Bucket

    In experimenting with this setup, I initially uploaded Hugo-generated assets with s5cmd.

    $ s5cmd --endpoint-url http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc cp --acl "public-read" public/ s3://www
    

    However, when loading the website, certain non-HTML assets were being returned wrapped in HTML, which broke CSS rendering and Javascript loading. Researching this issue, I found Hugo’s own guide on deploying to S3 16, which includes a matcher for css, js and a few other formats that sets cacheControl = "no-transform". I configured Hugo’s deployment section to treat Rados Gateway as S3-compatible storage 17, along with the matchers section provided in the Hugo example.

    [deployment]
    [[deployment.targets]]
    name = "rgw"
    URL = "s3://www?region=us-east-1&endpoint=rook-ceph-rgw-ceph-objectstore.rook-ceph.svc&disableSSL=true&s3ForcePathStyle=true"
    
    [[deployment.matchers]]
    # Cache static assets for 1 year.
    pattern = "^.+\\.(js|css|svg|ttf)$"
    cacheControl = "max-age=31536000, no-transform, public"
    gzip = true
    
    [[deployment.matchers]]
    pattern = "^.+\\.(png|jpg)$"
    cacheControl = "max-age=31536000, no-transform, public"
    gzip = false
    
    [[deployment.matchers]]
    # Set custom content type for /sitemap.xml
    pattern = "^sitemap\\.xml$"
    contentType = "application/xml"
    gzip = true
    
    [[deployment.matchers]]
    pattern = "^.+\\.(html|xml|json)$"
    gzip = true
    

    With this, hugo deploy will upload the website assets to the Rados Gateway bucket. I had no issues rendering the whole website after uploading assets this way.
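
    For reference, the local build-and-deploy pair reduces to the following, where --target selects the named deployment target from the config above:

    $ hugo
    $ hugo deploy --target rgw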

  2. Emulated static site on Ceph Rados Gateway

    While developing this system, I read a number of guides 18, documentation 19 and bug reports 20 21 on configuring various proxies to serve resources from object storage. Without a website configuration, Rados Gateway (and most object storage endpoints) behaves as if it is talking to object storage clients. This means behaviours we take for granted with a normal web server don’t happen, like serving index files for requests to root (’/’) and redirecting to an error page when returning an HTTP 404.

    Kubernetes ingress-nginx configures nginx to route and proxy connections to back-end services. As such, it doesn’t perform the typical try_files 22 behaviour of a normal web server. Additionally, try_files only works against the local filesystem, and won’t query files through a proxy_pass directive.

    There are a number of annotations we can use to modify the behaviour of ingress-nginx, and simulate much of the same behaviour as a regular web server.

    nginx.ingress.kubernetes.io/rewrite-target: /www/$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/$      /www/index.html break;
      rewrite ^/(.*)/$ /www/$1/index.html break;
      rewrite ^/(.*)$  /www/$1 break;  
    

    Here we rewrite requests to the URL-style bucket path (so DNS bucket names are not required in this instance) and proxy requests for directories (’/’) to the relevant index file by injecting a configuration snippet that uses rewrite directives directly 23.

    With this, we achieve a mostly functional design without requiring any special configuration on our Rados Gateway apart from making the bucket publicly available. However, some redirect behaviours like error pages and 404s still won’t work with this setup.

    Ideally, if the PutBucketWebsite API is available for your Object Storage service, you should just use that for the best results.

## Creating a Tekton Workflow

As described in the Tekton Workflows section above, a Tekton Workflow is a collection of Tasks and Pipelines that describes a generic workflow 24. For instance, you might have a workflow that always triggers on a commit to the ‘main’ branch. You might also like to store the Pipeline spec for this project alongside the source code in a designated directory.

This is precisely what this spec achieves:

apiVersion: workflows.tekton.dev/v1alpha1
kind: Workflow
metadata:
  name: main
  namespace: tekton-workflows
spec:
  serviceAccountName: tekton-builder
  triggers:
    - event:
        type: push
        secret:
          secretName: webhook-secret
          secretKey: token
      filters:
        gitRef:
          regex: '^main$'
      bindings:
        - name: git_url
          value: $(body.repository.clone_url)
        - name: git_revision
          value: $(body.after)
  params:
    - name: git_url
      default: https://git.cirriform.au/foo/bar # must be stubbed out
    - name: git_revision
      default: main
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: $(tt.params.git_url)
      - name: revision
        value: $(tt.params.git_revision)
      - name: pathInRepo
        value: .ci/main.yaml
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          storageClassName: ceph-block
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 128Mi

We specify the webhook token to use, filter events to those involving the ‘main’ branch, and bind the other useful parameters. We then resolve the Pipeline spec by providing parameters to ‘pipelineRef’. Finally, we set up a volumeClaimTemplate for our Pipeline’s workspace.

### Pipelines

Now that we have the generic workflow framework set up, we can get a little more concrete about what we want our static website Pipeline to do.

If we reflect back on the process we charted, the Pipeline will run the following Tasks:

  • Clone the repository into a Workspace.
  • Build a container with the Hugo binary and push it to our local registry.
  • Run `hugo` against our repository.
  • Run `hugo deploy` to push the generated site and assets into a Rados Gateway bucket.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: main
spec:
  params:
    - name: git_url
      type: string
    - name: git_revision
      type: string
  workspaces:
    - name: shared-data
  tasks:
    - name: clone
      taskRef:
        resolver: hub
        params:
          - name: kind
            value: task
          - name: name
            value: git-clone
          - name: version
            value: "0.7"
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.git_url)
        - name: revision
          value: $(params.git_revision)
    - name: tools
      runAfter:
        - clone
      taskRef:
        resolver: hub
        params:
          - name: kind
            value: task
          - name: name
            value: kaniko
          - name: version
            value: "0.6"
      params:
        - name: DOCKERFILE
          value: ./Dockerfile
        - name: IMAGE
          value: &toolsImage trow.int.cirriform.au/deployment-tools:$(params.git_revision)
        - name: EXTRA_ARGS
          value:
            - --cache=true
            - --destination=trow.int.cirriform.au/deployment-tools:latest
      workspaces:
        - name: source
          workspace: shared-data
    - name: build
      runAfter:
        - clone
        - tools
      workspaces:
        - name: source
          workspace: shared-data
      taskSpec:
        workspaces:
          - name: source
        stepTemplate:
          image: *toolsImage
          workingDir: $(workspaces.source.path)
        steps:
          - name: pre-commit
            command:
              - nix-shell
              - --command
            args:
              - hugo
    - name: put
      runAfter:
        - clone
        - tools
        - build
      workspaces:
        - name: source
          workspace: shared-data
      taskSpec:
        workspaces:
          - name: source
        stepTemplate:
          image: *toolsImage
          workingDir: $(workspaces.source.path)
        steps:
          - name: pre-commit
            command:
              - nix-shell
              - --command
            args:
              - hugo deploy
            env:
              - name: AWS_ACCESS_KEY_ID
                valueFrom:
                  secretKeyRef:
                    name: www-ceph-bucket
                    key: AWS_ACCESS_KEY_ID
              - name: AWS_SECRET_ACCESS_KEY
                valueFrom:
                  secretKeyRef:
                    name: www-ceph-bucket
                    key: AWS_SECRET_ACCESS_KEY

This Pipeline has four tasks, “clone”, “tools”, “build” and “put”, which correspond directly to the steps we outlined above.

  • “clone” git clones the repository into the workspace with the parameters provided to it from the workflow.
  • “tools” uses Kaniko to create an image with the Dockerfile in the repository.
  • “build” pulls this new image and runs ‘hugo’ to build the static assets.
  • “put” uploads these assets to object storage with `hugo deploy`; another tool such as s3cmd could also be used here. Note that we inject the secrets for bucket access at this step.

Tekton provides a number of layers of abstraction that enable reusable components. For instance, in “clone” we use a ‘taskRef’, which allows us to pull a taskSpec from another source; here we use the ‘hub’ resolver, which refers to Tekton Hub, to fetch the git-clone task. We use a similar technique for the kaniko task. Our “build” and “put” tasks specify their ‘taskSpec’ directly, setting an image to use, a command to execute, and environment variables. Finally, Tekton runs all tasks in a Pipeline in parallel unless told otherwise; where tasks depend on each other, we can express this with the ‘runAfter’ field, which I use to serialise the tasks in this Pipeline.

### Triggering and Troubleshooting a Workflow run

Tekton provides a CLI tool, tkn, to extract and format Tekton events and information nicely. However, being a Kubernetes-native application, most of the relevant information is surfaced directly in the usual places.
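
For example, with tkn installed, recent runs and their logs can be inspected along these lines:

$ tkn pipelinerun list -n tekton-workflows
$ tkn pipelinerun logs --last -f -n tekton-workflows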

For instance, to test workflows and pipelines end-to-end, I triggered the workflow by testing the webhook from the Gitea interface. I then followed events in the tekton-workflows namespace to get an idea of what was happening:

$ kubectl get events -n tekton-workflows -w

Quite often, this would provide enough information to determine why a workflow failed. On the odd occasion it didn’t, the logs from the latest failed pod would offer further insight.

Occasionally, it can be useful to sanity-check some of the images being produced. With Trow, we can list repositories and tags by curling the service endpoint 25:

$ curl https://trow.int.cirriform.au/v2/deployment-tools/tags/list

We’ve covered a fairly extensive Tekton Workflow and Pipeline, all with components that can be self-hosted on the same cluster. Going further, we could use the same techniques to build and roll out changes to Kubernetes objects themselves, on the same or on remote clusters. That’s a topic I’ll explore in a later post.


  1. https://github.com/tektoncd/community/blob/main/teps/0098-workflows.md ↩︎

  2. https://docs.gitea.io/en-us/install-on-kubernetes/ ↩︎

  3. https://docs.gitea.io/en-us/config-cheat-sheet/#webhook-webhook ↩︎

  4. https://docs.gitea.io/en-us/webhooks/ ↩︎

  5. https://github.com/ContainerSolutions/trow/tree/main/charts/trow ↩︎

  6. https://www.cirriform.au/articles/rook-ceph-microk8s/ ↩︎

  7. https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ ↩︎

  8. https://microk8s.io/docs/addon-dns ↩︎

  9. https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ ↩︎

  10. https://tekton.dev/docs/installation/pipelines/ ↩︎

  11. https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/ ↩︎

  12. https://tekton.dev/docs/pipelines/auth/ ↩︎

  13. https://github.com/rook/rook/issues/4780 ↩︎

  14. https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html ↩︎

  15. https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-website.html ↩︎

  16. https://gohugo.io/hosting-and-deployment/hugo-deploy/ ↩︎

  17. https://gocloud.dev/howto/blob/#s3-compatible ↩︎

  18. https://xahteiwi.eu/resources/hints-and-kinks/hosting-website-radosgw/ ↩︎

  19. https://kubernetes.github.io/ingress-nginx/examples/rewrite/ ↩︎

  20. https://github.com/kubernetes/ingress-nginx/issues/1809 ↩︎

  21. https://github.com/kubernetes/ingress-nginx/issues/3122 ↩︎

  22. https://nginx.org/en/docs/http/ngx_http_core_module.html#try_files ↩︎

  23. https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite ↩︎

  24. https://github.com/tektoncd/experimental/tree/main/workflows ↩︎

  25. https://github.com/ContainerSolutions/trow/blob/main/docs/USER_GUIDE.md#listing-repositories-and-tags ↩︎