codeburst

Bursts of code to power through your day. Web Development articles, tutorials, and news.

Deployment Pipeline: set it up in minutes not months

Nowadays, a vast number of tech companies follow the continuous integration/continuous delivery (CI/CD) process, and within that philosophy the deployment pipeline is a crucial element. A DevOps team would typically spend months setting up the pipeline, and maintenance and scalability become major pain points afterwards.

Let’s try a different approach: let’s set up the pipeline quickly and effectively while complying with the “Four A” standards below:

  1. Autonomous: upon merging of source code into the integration (master) branch, the system is automatically triggered to build and deploy.
  2. Atomic: multiple builds should not contaminate one another in terms of environment variables or runtime conditions.
  3. Available: the pipeline should be able to run whenever needed, with minimal outage (say, at least 99.99% availability).
  4. Anticipatable: the system’s behaviour today should be the same as yesterday’s if no configuration change was made. The only varying parameter should be the input source code being built.

Prerequisite: basic AWS concepts regarding Elastic Container Service (ECS), Elastic Container Registry (ECR), and Docker are required.

Without much introduction, AWS is a great choice for building our pipeline module. To clarify: AWS also has a service called CodePipeline, and although it is an excellent tool for deployment, I personally want more control over the deployment system, so we will not be pursuing that service here.

Instead, AWS CodeBuild is the “weapon of choice”. At a high level, you can see it as a managed, containerized, step-by-step build machine that executes your commands (in series) line by line until completion. Because AWS manages this module, we do not need to provision or clean up the system for each build, which satisfies acceptance criterion 3; and because the system is containerized, each build runs in isolation, which satisfies criterion 2.

Furthermore, for use cases with inter-dependent builds, CodeBuild can be integrated with Lambda functions to trigger downstream builds if needed.

Having an architectural diagram would help illustrate the concepts quickly!

Flowing through the entire pipeline, the steps are:

  1. The process starts when a branch is merged into the integration (master) branch in source control, which sends a webhook to an API Gateway endpoint and triggers the Lambda function (acceptance criterion 1). Whether Bitbucket or GitHub is used as source control, a webhook can be configured to fire on merge or commit; triggering on pull-request merge is recommended to maximize efficiency. The webhook sends a request containing basic repository information to a URL, which can be an AWS API Gateway endpoint linked to the Lambda function.
  2. The function parses the payload (extracting the branch name and mapping it to a CodeBuild project name with a simple switch statement) and triggers the respective CodeBuild project via the AWS SDK.
  3. CodeBuild runs the pipeline script: build the Docker image, run it, run the unit tests inside the container, push the image to AWS ECR (a Docker repository; Docker Hub works too), and point AWS Elastic Container Service (ECS) at the new image. (More details about the configuration are shown below.)
  4. Lambda functions are triggered during each stage of the pipeline to send notifications to Slack. Simply use “curl” during each step of the CodeBuild script to deliver the relevant information.
  5. Engineers can also use Slack (via slash commands) to run Lambda functions that in turn trigger CodeBuild deployments. Note that for simplicity’s sake, two Lambda functions are used in this architecture: one for triggering, one for notifications. The trigger Lambda accepts requests from either the source-control webhook or Slack; the notification Lambda parses the curl request from the build script and formats the message nicely (with colours and icons) for Slack.
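
The trigger Lambda from steps 1 and 2 could be sketched as follows. This is a minimal sketch in Python: the branch-to-project mapping, the project names, and the GitHub-style “ref” payload field are assumptions, and error handling and webhook signature verification are omitted.

```python
import json

# Hypothetical mapping from integration branches to CodeBuild project names.
BRANCH_TO_PROJECT = {
    "master": "my-app-master-build",
    "staging": "my-app-staging-build",
}

def project_for_ref(ref):
    """Map a webhook ref like 'refs/heads/master' to a CodeBuild project name."""
    branch = ref.rsplit("/", 1)[-1]
    return BRANCH_TO_PROJECT.get(branch)

def handler(event, context):
    """Triggered by API Gateway from a source-control webhook (or a Slack slash command)."""
    import boto3  # imported lazily so the mapping logic is testable without AWS

    body = json.loads(event.get("body") or "{}")
    project = project_for_ref(body.get("ref", ""))
    if project is None:
        return {"statusCode": 200, "body": "no build configured for this branch"}
    # Kick off the matching CodeBuild project via the AWS SDK.
    build = boto3.client("codebuild").start_build(projectName=project)
    return {"statusCode": 200, "body": "started " + build["build"]["id"]}
```

A Slack slash-command request would carry the branch in a different field, so the same handler would need a small branch to parse that payload shape as well.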

The CodeBuild setup is as follows:

Setup 1

In the AWS console, search for CodeBuild and create a new project. Enter the project name and source-provider information. You will need to log in so that AWS gets access to the source code to build.

Setup 2

We are building Docker images and deploying them to ECS, so the build environment runtime should be Docker, with the latest version available in the setting. It is also feasible to use other runtime environments, such as custom ECR images or the default runtimes (Java, Node, Golang, Ruby), but it is recommended that each source-code repository be containerized (with its own Dockerfile) to satisfy acceptance criterion 4.
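
As an illustration, a minimal Dockerfile for a hypothetical Node.js service might look like the following; the base image, file names, and entry point are assumptions. Pinning the base-image version keeps builds reproducible, in line with criterion 4.

```dockerfile
# Hypothetical minimal Dockerfile for a Node.js service.
# Pinning the base image keeps builds reproducible (criterion 4).
FROM node:10-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
```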

CodeBuild build commands

The build commands are organized into four phases: install, pre_build, build, and post_build. Each phase is described below:

install: any required Ubuntu package installation should occur in this step. For ECS, the “jq” and “curl” packages may be needed: jq to parse task definitions and register new ones, and curl to notify Slack.

pre_build: a curl request can be sent to the Lambda function (through API Gateway) to notify Slack that the build has started for the project, along with the merge information.

build: since the source has been pulled automatically (see the Setup 1 image above), docker build commands are issued in this section to build a local image. The image is then run, producing a container inside which unit or integration tests can be run via shell commands. If all tests pass, the image is pushed to ECR; if any test fails, the build stops. Following the success path, the ECS task definition is downloaded and updated with the new image information. Upon registering the new definition, the ECS service refreshes its containers based on the newly pushed image.

post_build: Slack notifications are sent via curl to inform interested parties of the build status (success or failure). The CodeBuild environment variable “CODEBUILD_BUILD_SUCCEEDING” is 1 if the current build is succeeding and 0 if it has failed. Note that this variable can be used throughout the build process to report the build status.
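
Putting the four phases together, a condensed buildspec sketch might look like this. All names here are hypothetical (my-app, the cluster name, the ECR registry address, and the SLACK_NOTIFY_URL variable pointing at the notification Lambda’s API Gateway URL), and the task-definition JSON is trimmed to two fields for brevity; a real pipeline would carry over the full definition.

```yaml
version: 0.2

phases:
  install:
    commands:
      # jq to manipulate task definitions, curl to notify Slack.
      - apt-get update && apt-get install -y jq curl
  pre_build:
    commands:
      # Notify Slack (via the hypothetical $SLACK_NOTIFY_URL API Gateway endpoint).
      - curl -s -X POST -d '{"status":"started","project":"my-app"}' "$SLACK_NOTIFY_URL"
      - $(aws ecr get-login --no-include-email)
  build:
    commands:
      - docker build -t my-app:latest .
      # A failing test exits non-zero, which stops the build here.
      - docker run --rm my-app:latest npm test
      - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
      # Download the current task definition, register it anew, and refresh the service.
      - aws ecs describe-task-definition --task-definition my-app | jq '.taskDefinition | {family, containerDefinitions}' > taskdef.json
      - aws ecs register-task-definition --cli-input-json file://taskdef.json
      - aws ecs update-service --cluster my-cluster --service my-app --task-definition my-app
  post_build:
    commands:
      # CODEBUILD_BUILD_SUCCEEDING is 1 on success, 0 on failure.
      - curl -s -X POST -d "{\"status\":\"$CODEBUILD_BUILD_SUCCEEDING\",\"project\":\"my-app\"}" "$SLACK_NOTIFY_URL"
```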

The strength of this deployment pipeline is the great amount of control the engineer has over build content, notifications, and Lambda interactions. Furthermore, if needed, the Lambda functions can trigger downstream CodeBuild projects. This simple, modularized architecture is easy to set up and is designed for high scalability and availability.
