
Designing a Regression Testing Solution for My Organization
A year back, in July ’19, my manager gave me a challenge. Our move from a monolithic application to a microservices solution had led to major growth in the number of REST APIs in the application. This created the need for a platform that would let us easily create functional tests covering those APIs and run them within regression test cycles. My challenge was to design and implement a solution to this requirement.
Gaining an understanding of the task at hand
To begin designing such a solution, there were first some questions to answer about how it would be used and by whom. For example, who will be creating and maintaining the tests? This entails managing the test repository and keeping track of what is covered and what is not. In this case, the group in question is the QA engineers. As the ones who perform all the manual and repetitive tests we want to automate, they can track coverage and manage the test repository. They are also the ones who would be relieved of performing these manual tasks.
Next, I had to figure out who would operate the platform, which entails executing cycles and monitoring results. I admit I had some utopian vision where everyone could do everything and be responsible enough that it would all work out great. In reality, the group to take on this responsibility would have to be our R&D Operation team (PMO). This team sees to it that the release content has passed all checks, and while the tool is there for any party to use to validate that nothing in the application has broken, at the end of the day it is the R&D Operation team that gives the stamp of approval that a candidate build is ready to become a release.
Coming up with a design
Answering these questions made it clear that the difficulty of writing tests should match the technical level of the average QA engineer in the organization. It also showed that the platform should run tests simultaneously, finish within a reasonable time frame, and generate a report that the PMO could go over with the relevant party (be that the developer, the QA engineer, or whichever team is responsible for the cause of a regression).
In the solution, a typical test consists of multiple steps that check a single piece of functionality within our application. Each test sits under a specific scenario; a scenario is a group of tests that share a common service or feature. A plan is a collection of several (or all) scenarios that are executed together using our CI tool (we use Jenkins), per feature branch or per release branch. All scenarios and tests are maintained and versioned in version control (we use Git).
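To make this hierarchy concrete, the repository might be laid out roughly as follows. The scenario names here are made up for illustration; the structure matches what the compose file and runner configuration shown later expect.

scenarios/
  user_management/              <- a scenario: tests sharing a common service or feature
    runner.yml                  <- the BZT configuration for this scenario
    tests/
      regression_test_0001.jmx  <- a test: multiple steps checking a single functionality
      regression_test_0002.jmx
  billing/
    runner.yml
    tests/
      ...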
The test editor I chose was JMeter. Although it is better known in the industry for stress testing than for functional testing, it gives the user a simple enough interface to build a flow of steps that invoke an API, acquire data from it, and assert expected results against the actual data. The test runner I chose to run each scenario was BlazeMeter Taurus (BZT). To save QA engineers the hardship of installing and running BZT on their Windows machines, I built a Docker image based on BZT’s official image. Accompanied by a compose file, the container has a volume mapped to the scenario folder and invokes BZT using a configuration .yml placed inside that scenario folder.
In more detail, the compose file below starts the scenario runner container, which I describe next. The container receives two environment variables: the IP address of the environment whose APIs are to be tested, and the name of the scenario folder, which is mapped as a volume. There is another volume into which the results are dumped.
## docker-compose.yml
version: '2'
services:
  scenario_runner:
    image: ${REPO}/automation/opr_scenario_runner:7
    environment:
      SCENARIO: ${SCENARIO}
      OPEN_PLATFORM_IP: ${OPEN_PLATFORM_IP}
    volumes:
      - ./results:/app/results
      - ./scenarios/${SCENARIO}:/app/scenario
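For illustration, assuming the runner image has already been built and pushed to the registry referenced by ${REPO}, a QA engineer could launch a single scenario from the repository root with something along these lines (the values are examples only):

# Example invocation of the scenario runner; REPO, SCENARIO and OPEN_PLATFORM_IP
# are the variables the compose file expects, filled with made-up values.
export REPO=registry.example.local
export SCENARIO=user_management
export OPEN_PLATFORM_IP=10.1.2.3

# Run the scenario and propagate the container's exit code to the shell.
docker-compose up --exit-code-from scenario_runner

When the run finishes, the JUnit XML results end up in the local ./results folder that the compose file maps as a volume.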
The Dockerfile uses BZT’s official image as its base. It prepares a few folders and copies in the entry point script. You may also notice that it copies JAR files from a folder outside the image into JMeter’s installation folder inside the image, so that the JMeter test scripts can make use of external libraries.
## Dockerfile
FROM blazemeter/taurus:1.14.2
ENV TIME_ZONE 'Asia/Jerusalem'
ENV OPEN_PLATFORM_IP $OPEN_PLATFORM_IP
# dos2unix normalizes the entry point script's line endings (it may be edited on Windows)
RUN apt-get update && apt-get install -y dos2unix
# Folder preparation; results and scenario are mounted as volumes at run time
RUN mkdir /app
RUN mkdir /app/results
RUN mkdir /app/scenario
ADD scenarios/entrypoint.sh /app
# External libraries used by the JMeter test scripts
ADD scenarios/utils/jars/*.jar /root/.bzt/jmeter-taurus/5.2.1/lib/ext/
WORKDIR /app
RUN dos2unix entrypoint.sh
ENTRYPOINT ["/bin/bash", "entrypoint.sh"]
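The image itself would be built from the repository root, so that the scenarios folder is part of the build context, and pushed to the internal registry roughly like this (the tag mirrors the one referenced in the compose file):

# Build and publish the scenario runner image; REPO is the same registry
# variable used in the compose file.
docker build -t ${REPO}/automation/opr_scenario_runner:7 .
docker push ${REPO}/automation/opr_scenario_runner:7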
The entry point script is executed by the scenario runner container. It invokes BZT inside the scenario volume, where the configuration file (runner.yml) is located. In the invocation command we pass the target IP address to be used in the JMeter scripts and the artifacts folder into which the runner results are dumped; note that this folder points at the results volume.
## entrypoint.sh
#!/bin/bash
set -x
cd /app/scenario
bzt runner.yml -o modules.jmeter.properties.IP=${OPEN_PLATFORM_IP} \
    -o settings.artifacts-dir=/app/results/%Y-%m-%d_%H-%M-%S.%f
The BZT configuration file lists all the JMeter test files for this scenario. The reporting module is JUnit XML; the reason for using this format is that it is supported by Jenkins’ JUnit plugin.
## runner.yml
execution:
- write-xml-jtl: full
  scenario:
    script: ./tests/regression_test_0001.jmx
- write-xml-jtl: full
  scenario:
    script: ./tests/regression_test_0002.jmx
reporting:
- module: junit-xml
  filename: ${TAURUS_ARTIFACTS_DIR}/TEST-results.xml
  data-source: sample-labels
modules:
  local:
    sequential: true
Finally, in order to run a regression cycle (a plan) in Jenkins, I’ve written two pipelines. The first is the scenario runner: given a scenario name, it pulls the tests from version control and runs the single corresponding scenario by executing docker-compose. The second is the plan runner, which also pulls the tests from version control but instead iterates over each scenario in the scenarios folder, filters out ignored scenarios, and executes the first pipeline for each one.
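The heart of the plan runner is a simple loop over the scenarios folder. The sketch below is not the actual pipeline code, only a rough bash equivalent of its logic; the .ignore marker file used here to flag ignored scenarios is an assumption for illustration, and the real pipeline triggers the scenario-runner pipeline rather than calling docker-compose directly.

#!/bin/bash
# Rough sketch of the plan runner's logic (assumes REPO and OPEN_PLATFORM_IP are already exported).
failed=0
for scenario_dir in scenarios/*/; do
    scenario=$(basename "${scenario_dir}")
    # Skip scenarios flagged as ignored (hypothetical .ignore marker file).
    if [[ -f "${scenario_dir}/.ignore" ]]; then
        echo "Skipping ignored scenario: ${scenario}"
        continue
    fi
    echo "Running scenario: ${scenario}"
    SCENARIO="${scenario}" docker-compose up --exit-code-from scenario_runner || failed=1
done
exit ${failed}

The || failed=1 pattern lets the loop run every scenario even when one of them fails, so a single regression does not hide the results of the remaining scenarios.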
Conclusion
Designing this platform was a big challenge, but it was an experience that taught me a lot. Since the platform was adopted and the test repository’s coverage grew, other developers have taken on assignments to improve the solution. These improvements include parallel scenario executions, environment cleanups, ramp-up scripts, and more. I hope this article has given you good insight into the process I went through and that it can help you with your own projects!