Spinnaker by Example: Part 3

John Tucker · Published in codeburst · Nov 10, 2020


Wrapping up our step-by-step walk-through by deploying an application to a GKE cluster.

This is part of a series (starting with Spinnaker by Example: Part 1) providing a step-by-step walk-through for installing and using Spinnaker to deploy applications to a Google Kubernetes Engine (GKE) cluster. The final set of configuration files provided throughout this series of articles is available for download. So far our focus has been on the installation and configuration aspects of Spinnaker. In this article, we wrap up this series with a simple example of deploying an application to a GKE cluster.

The bad news is that using Spinnaker is surprisingly confusing; I believe this is related to how flexible and powerful it is. The good news, however, is that Spinnaker provides a number of Codelabs that walk one through particular scenarios. The better news is that there is a Codelab, Kubernetes Source To Prod, that walks through our particular scenario of deploying an application to a Kubernetes cluster. This article is closely aligned with that Codelab; it just provides a bit more detail and is updated for the latest version of Spinnaker (1.23.1).

Prerequisites

This article introduces several new prerequisites. If you are looking to follow along, you will need:

  • GitHub account; for storing source code
  • Docker Hub account; for storing container images

Configure GitHub

The first step is to prepare a GitHub repository that houses the source code for our application and to which we can commit changes. The easiest solution here is to fork the lwander/spin-kub-v2-demo repository.
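If you happen to use the GitHub CLI, the fork (and a local clone) can be created from the command line; a minimal sketch, assuming the gh CLI is installed and authenticated:

$ gh repo fork lwander/spin-kub-v2-demo --clone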

Things to observe:

  • The application source code is a simple Go web server listening on port 8000; it consists of the file main.go and the content folder
  • A file, Dockerfile, provides instructions to create a container image running the application
  • A file, manifests/demo.yml, provides instructions to run the application on a Kubernetes cluster
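Putting these observations together, the relevant repository layout is roughly:

spin-kub-v2-demo/
├── Dockerfile
├── main.go
├── content/
└── manifests/
    └── demo.yml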

Configure Docker Hub

Now that we have the source code repository in place, we need to create a repository to hold our container images and set up automated builds. These instructions are roughly aligned with the Docker Hub documentation Set Up Automated Builds.

First, we need to link our Docker Hub account to our GitHub account using the menus: Account Settings > Linked Accounts > GitHub.

We then create a public Docker Hub repository. For consistency with this article, we name it spin-kub-v2-demo.

For this repository, we set up automated builds using the menus: Builds > Automated Builds > GitHub.

We select our forked GitHub repository. Also, we need to change the build rules as follows.

Things to observe about this configuration:

  • Docker Hub will only trigger image builds when tags are created in the GitHub repository; this choice will become more apparent later
  • The build uses the Dockerfile in the root of the GitHub repository; something that we observed earlier
  • The built image will be tagged with the GitHub tag
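As a rough sketch (the exact values appear in the screenshot above; {sourceref} is Docker Hub's placeholder for the name of the triggering Git tag), the build rule looks something like:

Source Type: Tag
Source: /^.*$/
Docker Tag: {sourceref}
Dockerfile location: Dockerfile
Build Context: /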

Here we need to trigger an automated build by creating a tag, specifically latest, in our GitHub repository. Docker Hub will generate an initial container image to start us off.
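From a local clone of the forked repository, this amounts to:

$ git tag latest
$ git push origin latest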

Configure Kubernetes

While the Kubernetes Source to Prod Codelab involves setting up two Kubernetes clusters, in this article we will simplify it to only setting up one. The good news here is that we have already done this in an earlier article.

Configure Spinnaker

While we have already configured and installed Spinnaker in an earlier article, we need to update its configuration to support this particular example.

Configure GitHub Artifact Credentials

In our example, we need to enable Spinnaker to read the contents of our GitHub repository for the Kubernetes manifest, manifests/demo.yml. Here we roughly follow the instructions in Configure a GitHub artifact account.

We start by creating a GitHub Personal Access Token with the repo scope.

We then log in to the halyard GCE instance.

$ gcloud compute ssh halyard

We write the GitHub personal access token to a file.

$ echo [REPLACE] > my-github-artifact-account

We enable GitHub support with:

$ hal config artifact github enable

And configure Spinnaker with our GitHub account:

$ hal config artifact github account add my-github-artifact-account \
    --token-file my-github-artifact-account

Please note: We hold off applying the configuration change until the next step.

Configure Docker Registry Account

We also need to enable Spinnaker to pull container images from our Docker Hub repository.

From the halyard GCE instance, we enable Docker registry support by executing:

$ hal config provider docker-registry enable

Next, we configure Spinnaker with a Docker Hub account; we do not need to provide credentials as our repository is public. We do, however, need to specifically list our repository as part of the configuration.

Dockerhub hosts a mix of public and private repositories, but does not expose a catalog endpoint to programmatically list them. Therefore you need to explicitly list which Docker repositories you want to index and deploy. For example, if you wanted to deploy the public NGINX image, alongside your private app image, your list of repositories would look like:
REPOSITORIES=library/nginx yourusername/app

— Spinnaker — Docker Registry

The specific command is as follows; providing the Docker Hub repository, e.g., sckmkny/spin-kub-v2-demo.

$ hal config provider docker-registry account add my-docker-registry \
--address index.docker.io \
--repositories [REPLACE]

Please note: For other repository providers, e.g., Google Container Registry or AWS Elastic Container Registry, we thankfully do not need to explicitly list individual repositories.

We then apply our changes by executing:

$ hal deploy apply

Configure Webhooks

The next step is to configure both GitHub and Docker Hub to notify Spinnaker of changes using webhooks. The good news is that in a previous article we publicly exposed the spin-gate API. Also, by not implementing either authentication or authorization, we have made this step a simple one.

Please note: In a production system, one would configure authentication and authorization to protect the spin-gate API endpoints, including the webhook endpoints.

GitHub

Here we closely follow the Configuring GitHub Webhooks documentation; this essentially amounts to supplying GitHub with a URL using a specific path on the spin-gate API endpoint.

GitHub webhooks do support supplying a shared secret, e.g., my-secret, that we will use later.
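Putting this together, the webhook settings in GitHub look roughly as follows; the /webhooks/git/github path comes from the Spinnaker documentation, and [SPIN-GATE-ADDRESS] stands in for the publicly exposed spin-gate endpoint:

Payload URL: http://[SPIN-GATE-ADDRESS]/webhooks/git/github
Content type: application/json
Secret: my-secret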

Docker Hub

In a similar fashion, we follow the Docker Hub Webhooks documentation; this essentially amounts to supplying Docker Hub with a URL using a specific path on the spin-gate API endpoint. In this case, the path is:

/webhooks/webhook/dockerhub
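The full webhook URL supplied to Docker Hub thus looks something like:

http://[SPIN-GATE-ADDRESS]/webhooks/webhook/dockerhub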

Create a Spinnaker Application

We now begin to use the Spinnaker UI to create resources; starting with an application.

An application represents the service which you are going to deploy using Spinnaker, all configuration for that service, and all the infrastructure on which it will run.

— Spinnaker — Concepts

We create a new application from the Applications tab; supplying only a Name and Owner Email as shown.

Create a “Deploy to Staging” Pipeline

Next, we create a pipeline.

The pipeline is the key deployment management construct in Spinnaker. It consists of a sequence of actions, known as stages. You can pass parameters from stage to stage along the pipeline.

— Spinnaker — Concepts

From the application’s Pipelines screen we create a new pipeline.

Pipeline Automated Trigger

Now that we have a pipeline, we set up an automated trigger for it. In this case, it is triggered by merges into any branch in our GitHub repository as shown below.

Please note: I was tempted to set the branch to master as this is the only branch that we would want to trigger the pipeline from. But, to be consistent with the Codelab, I left it blank.

Before saving the trigger, we create an artifact as shown below:

Things to observe:

  • This configuration uses the GitHub artifact credentials we created earlier to access the files in the GitHub repository
  • We supply the path to the Kubernetes manifest in the repository, manifests/demo.yml. This is the artifact that will be passed on to the next stage in the pipeline
  • The selected Use prior execution option will fall back to the artifact from a previous pipeline execution if the artifact is missing from the trigger; why this option is required will make more sense later

Pipeline Stage

Next, we add a first (and only) stage to our pipeline.

A Stage in Spinnaker is a collection of sequential Tasks and composed Stages that describe a higher-level action the Pipeline will perform either linearly or in parallel. You can sequence stages in a Pipeline in any order, though some stage sequences may be more common than others. Spinnaker provides a number of stages such as Deploy, Resize, Disable, Manual Judgment, and many more.

— Spinnaker — Concepts

From the pipeline screen, we add a stage of type Deploy manifest and update its configuration as shown.

Things to observe:

  • This stage takes the artifact we created earlier, i.e., the manifest file manifests/demo.yml, and deploys it to our GKE cluster using the specified account

Deploy Manifests to Staging

Now that we have configured the pipeline, we can commit a change to our GitHub repository to trigger it. In particular, we need to make the following changes (sketched as commands after the list):

  • In the file, manifests/demo.yml, change the apiVersion of the deployment from apps/v1beta2 to apps/v1. The pipeline will fail without this change.
  • Likewise, we will need to change the container image from index.docker.io/lwander/spin-kub-v2-demo to the container image in our Docker Hub repository, e.g., index.docker.io/sckmkny/spin-kub-v2-demo
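A minimal sketch of these edits from a local clone, assuming GNU sed (on macOS, use sed -i ''), that your fork's default branch is master, and that your Docker Hub username is sckmkny:

$ sed -i 's|apps/v1beta2|apps/v1|' manifests/demo.yml
$ sed -i 's|lwander/spin-kub-v2-demo|sckmkny/spin-kub-v2-demo|' manifests/demo.yml
$ git commit -am "update manifest apiVersion and container image"
$ git push origin master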

From the application’s Pipeline screen, one will see a summary of the pipeline’s execution.

Things to observe:

  • Pipeline executions, presented as white cards, are grouped by pipeline (gray and blue banners)
  • Pipeline execution cards provide the type of trigger for the execution, e.g., GIT

From the application’s Clusters tab, we see a deployment with three replicas in the default namespace.

By inspecting the GKE cluster we confirm the deployment.

$ kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/spinnaker-demo-599b4bcf5f-5gk4x   1/1     Running   0          5m52s
pod/spinnaker-demo-599b4bcf5f-bdvbw   1/1     Running   0          5m52s
pod/spinnaker-demo-599b4bcf5f-dkpgl   1/1     Running   0          5m52s

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes       ClusterIP   192.168.0.1      <none>        443/TCP   10h
service/spinnaker-demo   ClusterIP   192.168.11.159   <none>        80/TCP    163m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/spinnaker-demo   3/3     3            3           5m52s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/spinnaker-demo-599b4bcf5f   3         3         3       5m52s

Configure Docker Triggers

So far, our pipeline is only triggered by merges into branches in our GitHub repository (and it only uses the Kubernetes manifest, manifests/demo.yml). We now create a new trigger for this same pipeline that detects new Docker Hub container images; these, in turn, are built when new tags are created in the GitHub repository.

Before saving the trigger, we create an artifact as shown below:

Things to observe:

  • This configuration uses the Docker Hub artifact credentials we created earlier to access the container images in the Docker Hub repository
  • We supply our Docker Hub repository. The tagged container image in this repository is the artifact that will be passed on to the next stage in the pipeline
  • The selected Use prior execution option will fall back to the artifact from a previous pipeline execution if the artifact is missing from the trigger; why this option is required will make more sense later

From the pipeline screen, we update the Deploy manifest stage by adding the spin-kub-v2-demo docker artifact to the Required Artifacts to Bind field.

Things to observe:

  • While this is not obvious, this configuration will use the tagged container image artifact to replace the matching untagged container image in the Kubernetes manifest

Deploy Docker to Staging

We now create a new tag, e.g., 0.1.0, in our GitHub repository and observe a new pipeline execution (WEBHOOK).
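Again from a local clone:

$ git tag 0.1.0
$ git push origin 0.1.0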

Things to observe:

  • The execution has two artifacts: one is the newly created Docker Hub tagged container image and the other is the Kubernetes manifest from the previous execution (this is where the Use prior execution option on the spin-kub-v2-demo github artifact is important)

Please note: The Use prior execution option on the spin-kub-v2-demo docker artifact is used going forward when we trigger executions by changing the Kubernetes manifest.

We indeed observe that the cluster is running the updated container image, sckmkny/spin-kub-v2-demo:0.1.0.
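One way to confirm this from the command line (a sketch; the deployment name comes from the kubectl output earlier) is to extract the image from the deployment, which should print the tagged image above:

$ kubectl get deployment spinnaker-demo \
    -o jsonpath='{.spec.template.spec.containers[0].image}'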

Wrap Up

Whew, this was a lot more work than I originally imagined. Hope you found it useful.
