
A year back, somewhere around July ’19, my manager gave me a challenge that led me to design a regression testing solution for my organization. I wrote about it in another story, so if you haven’t read it yet, you should definitely stop right here and go read it.
Good, you read it! — or entirely disregarded my earlier suggestion — either way, carry on.
So everything was going well with the Open Platform Regression solution, which is what we ended up calling it (OPR for short). Tests were being written for it, its coverage grew, and we added some cool features to it too. But the more I worked with it, the more certain things that I knew could have been done better began to gnaw at me.
Working with JMeter made me realize, more and more, that its user experience is found wanting. An advanced assertion usually requires writing a short Groovy script, and the more complex the script gets, the harder it becomes to troubleshoot and get it to do what you want. Modern IDEs have raised our expectations of any tool that lets us write code inside it: auto-completion and syntax validation let us move fast and are considered basic features today, yet they are missing from JMeter’s script editor.
Another slight pain with JMeter is that in the OPR solution we specify the IP address of the server under test via an environment variable, so anyone who wants to edit a test and also run it has to start JMeter from the command line, because only there can the desired IP address be provided. Switching between environments means closing one window and starting another; a strange procedure when you simply wish you could change the environment setup from within the application.
On the CI side of the solution, we were tolerating a discomfort with the way BlazeMeter Taurus generates its JUnit XML reports. Jenkins’ JUnit plugin reads the XMLs that BZT generates and uses the classname attribute as the title of each scenario in the final report. Unfortunately, BZT’s value for the classname attribute is the string “bzt” concatenated with some random number, which makes it impossible to tell which scenario is which in the overview of the final report on Jenkins, at least not without browsing into each scenario.
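For illustration only (the exact attributes vary between versions), a test case in the generated XML ends up looking roughly like this, and it is that classname that Jenkins displays in the report overview:
<testcase classname="bzt_1594639055" name="status code is 200" time="0.42"/>
A readable overview would need something meaningful there, such as the scenario’s name, instead of a generated value.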
Rediscovering Postman
It was clear that some aspects could be improved or done better, and I was looking for ways to improve the solution, though not very actively. While going over BZT’s documentation I noticed that Postman (Newman) is listed as a supported executor. The reason for it being listed, I thought, was that it offers some kind of assertion capability. We had some ideas about using Postman thrown around in the R&D corridors (sometimes corridor ideas are the best ideas), but I had always thought of Postman as a simple tool for executing REST API calls and hadn’t invested much time in getting to know it. I knew it offers a great UI/UX, and I knew you could create and manage an entire collection of requests, but I didn’t know about any kind of scripting it offered. Somehow, it had slipped past me; I felt like a newbie, but I had discovered something new and promising.
Further googling led me to some wonderful articles at learning.postman.com. There I learnt that Postman uses Chai’s BDD assertion style for its scripting, and working with a familiar, concrete assertion library is always an advantage. I quickly began to experiment and even to transition some of our existing JMeter scripts to Postman scripts. It was clear to me that even though Postman’s script editor cannot compete with the JavaScript IDEs I normally use, it does provide those basic features I felt were missing from JMeter.
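To give a taste of what this scripting looks like, here is a minimal sketch of a Postman test script (the request and field names are made up, not taken from OPR):
// Runs in Postman's sandbox after the response arrives.
pm.test("status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("response body has the expected shape", function () {
    const body = pm.response.json();
    // Chai's BDD-style assertions are exposed through pm.expect.
    pm.expect(body).to.have.property("devices");
    pm.expect(body.devices).to.be.an("array").that.is.not.empty;
});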
Moreover, being able to load multiple collections into a workspace, to manage multiple workspaces, and, finally, to switch between different environment setups from within the application answered almost every item on the list of improvements I was aiming for.
The last thing I was hoping to take care of was the random naming in the reports on Jenkins, and using BZT to execute Postman tests does not change that. However, Postman has its own runner application, Newman. Through the options of this command-line collection runner I was able to generate JUnit XMLs that are compatible with Jenkins’ JUnit plugin and do not suffer from BZT’s naming issue; instead, Newman yields a camel-case concatenation of the collection’s name.
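For example (the collection name here is made up), a run with the JUnit reporter enabled looks roughly like this, and the classname in the resulting XML is derived from the collection’s name:
newman run LoginScenario.postman_collection.json \
    -r cli,junit \
    --reporter-junit-export results/login_scenario.xml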
Putting it all together
Making use of Postman and its advantages in the solution meant that the Postman collections had to be run using Newman instead of BZT, yet the approach would stay similar to the BZT one: map the test folder into a Docker container that internally executes the desired runner application. During plan execution in Jenkins, as the pipeline iterates over the scenario folders, it distinguishes between a BZT/JMeter based scenario and a Newman/Postman based scenario and starts a runner container from the relevant Docker image.
To implement what I had in mind, I first wrote a new Dockerfile for the new runner image. Based on Newman’s official Docker image, it follows similar build steps to the BZT based runner from the previous article and also runs a dedicated shell script as its entry point.
FROM postman/newman:5-alpine
ENV TIME_ZONE 'Asia/Jerusalem'
ENV OPEN_PLATFORM_IP $OPEN_PLATFORM_IP
RUN apk update && apk add dos2unix
RUN mkdir /app
RUN mkdir /app/results
RUN mkdir /app/scenario
WORKDIR /app
ADD scenarios/newman.entrypoint.sh /app/entrypoint.sh
RUN dos2unix entrypoint.sh
ENTRYPOINT ["/bin/sh", "entrypoint.sh"]
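Building and pushing the image follows the same routine as the BZT based runner; a hedged example (the Dockerfile name is an assumption, the tag matches the Compose file further down):
docker build -f newman_runner.Dockerfile -t ${REPO}/automation/opr_newman_runner:2 .
docker push ${REPO}/automation/opr_newman_runner:2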
The entry point script goes over each Postman collection inside the scenario folder’s tests folder and checks whether a matching JSON exists in a neighboring resources folder; if one does, it is used as the collection’s data source. This allows us to create data-driven tests like the ones we have been creating with JMeter over the past year (an example data file is shown after the script).
#!/bin/sh
set -x
cd /app/scenario
ls -l
for TEST in ./tests/*.json; do
    RESOURCE=./resources/$(basename $TEST)
    echo $RESOURCE
    if [ -f "$RESOURCE" ]; then
        echo "Running test $TEST with resource file $RESOURCE."
        newman run "$TEST" \
            -r junit,cli \
            -d "$RESOURCE" \
            --reporter-junit-export /app/results \
            --env-var OPEN_PLATFORM_IP=$OPEN_PLATFORM_IP
    else
        echo "Running test $TEST without resource."
        newman run "$TEST" \
            -r junit,cli \
            --reporter-junit-export /app/results \
            --env-var OPEN_PLATFORM_IP=$OPEN_PLATFORM_IP
    fi
done
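For illustration only, a data file under resources (the field names are hypothetical) could look like the following; each object in the array drives one iteration of the collection, and its values are available in requests as {{device_id}} or in scripts via pm.iterationData.get('device_id'):
[
    { "device_id": "thermostat-01", "expected_status": 200 },
    { "device_id": "unknown-device", "expected_status": 404 }
]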
Next I had to modify the Docker Compose file to include a second service, so that there would be one for each runner type.
version: '2'
services:
  scenario_runner:
    image: ${REPO}/automation/opr_scenario_runner:7
    environment:
      SCENARIO: ${SCENARIO}
      OPEN_PLATFORM_IP: ${OPEN_PLATFORM_IP}
    volumes:
      - ./results:/app/results
      - ./scenarios/${SCENARIO}:/app/scenario
  newman_runner:
    image: ${REPO}/automation/opr_newman_runner:2
    environment:
      SCENARIO: ${SCENARIO}
      OPEN_PLATFORM_IP: ${OPEN_PLATFORM_IP}
    volumes:
      - ./results:/app/results
      - ./scenarios/${SCENARIO}:/app/scenario
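To sanity-check a single scenario outside of Jenkins, the relevant service can also be brought up by hand, roughly like this (the values are illustrative):
REPO=registry.example.com SCENARIO=newman_login OPEN_PLATFORM_IP=10.0.0.12 \
    docker-compose up --build newman_runner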
Finally, I modified the Jenkins pipeline step in charge of running a specific scenario so that it can work with either scenario type.
void testScenario() {
    def runner = this.scenarioName.startsWith("newman_") ? "newman_runner" : "scenario_runner"
    script.sh "docker-compose up -d --build ${runner}"
    script.sh "docker-compose logs -f ${runner}"
}
One or two known limitations
After all, not everything can be perfect. While going over what Postman can and cannot do, and judging whether it could indeed replace JMeter in the solution, I found only one, or maybe two, things I had to work around. When asserting equality between two JSON objects I had expected the assertion error to spell out the difference between the two objects. Scouring the web led me to this post, which meant that if I really wanted this descriptive output, I would have to implement the capability on my own and somehow import it into Postman.
Importing an external library into Postman was another question for which I wasn’t able to find an elegant solution. Again I scoured the web, only to discover that the least inelegant approach is to have Postman fetch the library from a URL, as you would from a CDN, and run JavaScript’s eval(…) on the response. So I ended up writing the following script and saving it as a snippet on our organization’s GitLab, where it can be publicly read.
(function () {
    if (pm.cw) {
        return;
    }
    pm.cw = {};

    /**
     * Traverses expected and checks each nested key and value is matched within actual.
     * The function will find and print all differences inside the assertion message.
     * @param expected The golden object to traverse.
     * @param actual The object to be matched.
     * @param ignored Keys and nested keys to be ignored from compare.
     * @param reason A leading description to the assertion message.
     */
    pm.cw.assertDeepEquals = function (expected, actual, ignored = [], reason = '') {
        if (typeof (ignored) === 'string') {
            reason = ignored;
            ignored = [];
        }
        ignored = new Set(ignored);
        const finalUnmatched = findDifferences(expected, actual);
        for (const mismatch of finalUnmatched) {
            reason += `
Expected '${mismatch.key}' to be '${mismatch.expected}' but got '${mismatch.got}'.`;
        }
        pm.expect(finalUnmatched, reason).to.be.empty;

        function findDifferences(expected, actual, prefix) {
            const unmatched = [];
            const expectedEntries = Object.entries(expected);
            for (const entry of expectedEntries) {
                const key = entry[0];
                const expected = entry[1];
                const got = actual[key];
                const prefixedKey = prefix ? `${prefix}.${key}` : key;
                if (ignored.has(prefixedKey)) {
                    continue;
                }
                const expectedType = getType(expected);
                const gotType = getType(got);
                if (expectedType !== gotType) {
                    unmatched.push({key: prefixedKey, expected, got});
                } else {
                    if (expectedType === 'object') {
                        unmatched.push(...findDifferences(expected, got, prefixedKey));
                    } else if (expected !== got) {
                        unmatched.push({key: prefixedKey, expected, got});
                    }
                }
            }
            return unmatched;
        }

        function getType(object) {
            return object ? typeof (object) : null;
        }
    };
})();
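Using the snippet from within a collection means fetching it and evaluating it before asserting. Here is a minimal sketch of how that can look inside a test script, assuming a hypothetical raw-snippet URL and made-up field names:
// Fetch the shared library (hypothetical URL) and eval it into the sandbox.
pm.sendRequest('https://gitlab.example.com/-/snippets/42/raw', function (err, res) {
    if (err) {
        throw err;
    }
    eval(res.text()); // defines pm.cw.assertDeepEquals

    const expected = { device: { id: 'thermostat-01', online: true } };
    const actual = pm.response.json();
    // Ignore a volatile field and prefix the assertion message.
    pm.cw.assertDeepEquals(expected, actual, ['device.lastSeen'], 'Device payload mismatch:');
});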
Conclusion
In any working solution there will be places to improve; however, it is important to consider whether the effort is worth the gain. Here, the organizational gain potentially pays for the effort: it replaces one tool with a more user-friendly one that will hopefully help its users write better tests, more easily. In the acceptance plan for the redesigned solution I decided to bring the developers into the regression-test writing process as well. The idea is that by involving the developers, the two groups can learn from each other: developers can improve the code quality of the tests, and QA engineers can keep them objective.
I hope this article has provided you with the drive to seek out your own fun and useful projects.