
Postman Scripting Tricks
Tips for using Postman for regression testing.
In my last two stories, I wrote about designing a regression testing solution for my organization. After running with it for a year, I learned about scripting in Postman and decided to integrate Postman into the solution. In this article, I’ll share some of the things I discovered and learned during the process.
Flow Control
Some tests in our repository perform dry or static checks. They send one request, receive a response, and match it against a golden result. These tests are independent of each other. They may be bundled together under one collection because they share the same API, but a failed assertion in one request doesn’t mean the others can’t still run. On the other hand, some tests are written as a flow made of multiple steps (requests), where data from the response of one request is used by the next request. Each step in such a collection depends on the previous one, so when one step fails, the runner mustn’t continue to the next.
pm.test(`${pm.info.requestName} - Check response.`, function () {
    // Assume failure: if any assertion below throws, the run stops here.
    postman.setNextRequest(null);
    pm.expect(pm.response).to.have.property('code', 200);
    const body = pm.response.json();
    // Environment values are stored as strings, so serialize the object.
    pm.environment.set('data_for_next_request', JSON.stringify(body));
    // All assertions passed: clear the override and continue in collection order.
    postman.setNextRequest();
});
To bail on the first failure in a flow, I ordinarily use the pattern shown in the code snippet above. The request that follows this one relies on data taken from the current response, so that data is saved to the environment. At the beginning of the function the next request is set to null, which ensures that the run stops there if anything goes wrong. If the function reaches its last statement without any assertion error or exception, however, the next request is restored to the next request in order inside the collection; this lets us bail on the first failure yet carry on when things go as they should.
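For completeness, here is a minimal sketch of how the following request in the flow might consume that saved value in its pre-request script. The field name used here is hypothetical; the point is only that the data round-trips through the environment:
// Pre-request script of the next request in the flow (sketch).
// Read back the data stored by the previous test script and expose the
// piece that the request body or URL needs as a local variable.
const previous = JSON.parse(pm.environment.get('data_for_next_request'));
pm.variables.set('procedureId', previous.id); // 'id' is a hypothetical field name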
Waiting in Loops
In some of our APIs, when a request is sent, a tracking number or link is returned so that a client application can follow the progress of its request. Testing this behavior in Postman requires initiating the first request, saving the tracking information, and then using that information to poll the request’s status at regular intervals until it completes.
pm.test(`${pm.info.requestName} - initiate request.`, function () {
    // Assume failure: if any assertion below throws, the run stops here.
    postman.setNextRequest(null);
    pm.expect(pm.response).to.have.property('code', 201);
    const body = pm.response.json();
    pm.expect(body.status).to.eql('PENDING');
    // Save the tracking link and initialize the polling counter for the next request.
    pm.environment.set('statusCheckLink', body.tracking.href);
    pm.environment.set('statusCheckCounter', 0);
    postman.setNextRequest();
});
In the snippet above, I extract the tracking link from the response body and save it to the environment, where the status-check request can reference it (for example, as {{statusCheckLink}} in its URL). I also initialize a counter that the next request uses to keep track of the number of status checks already performed.
// Environment values may come back as strings, so coerce the counter to a number.
const statusCheckCounter = Number(pm.environment.get('statusCheckCounter'));

pm.test(`${pm.info.requestName} - procedure status check #${String(statusCheckCounter).padStart(2, '0')}`, function () {
    // Assume failure: if any assertion below throws, the run stops here.
    postman.setNextRequest(null);
    pm.expect(pm.response).to.have.property('code', 200);
    const body = pm.response.json();
    pm.expect(body.status).to.not.eql('FAILED');
    if (body.status === 'COMPLETED') {
        // Completion checks...
        pm.environment.unset('statusCheckCounter');
        // The procedure finished; resume the normal request order.
        postman.setNextRequest();
    } else {
        pm.expect(statusCheckCounter).to.be.below(60, 'The procedure did not finish within a reasonable time.');
        pm.environment.set('statusCheckCounter', statusCheckCounter + 1);
        // Run this same request again on the next loop iteration.
        postman.setNextRequest(pm.info.requestName);
        // Keep the script busy for 10 seconds before the runner moves on.
        setTimeout(function () {}, 10000);
    }
});
In this snippet, I check the response received by calling the status check link. If the status is non-final, I increment the status check counter, set the next request to the current request, and wait for ten seconds; the empty setTimeout keeps the script busy, so the runner does not fire the next request until the timeout elapses. This logic repeats itself until the status is final (completed or failed) or the status check counter reaches its threshold.
Data-Driven Testing
To accept Postman into the regression testing solution, I had to ensure that every feature and capability we were using in our JMeter tests had an equivalent or alternative in Postman. One of these is data-driven testing. In JMeter, we would either use a CSV Data Set Config element or read the file (usually a JSON) directly inside the Groovy script.
Postman lets you specify a CSV or a JSON file to be used as a data file by the collection runner, or by Newman via a CLI option (a Newman invocation is sketched at the end of this section). I chose to use the JSON format exclusively when preparing a data file, as it allows for a greater diversity of data types than CSV does and feels more appropriate given that the scripts are written in JavaScript.
[
    {
        "caseName": "Case #1",
        "resourceId": "2040025_0001_0001_01",
        "response_self": {...},
        "response_parent": {...},
        "response_children": [{...}, {...}]
    },
    {
        "caseName": "Case #2",
        "resourceId": "2030014_0002_0001_01",
        "response_self": {...},
        "response_parent": {...},
        "response_children": [{...}, {...}, {...}]
    }
]
The required format for the JSON data file, as demonstrated above, is an array of objects. The Postman runner will iterate over the array and execute the collection once for each object inside it.
// Extract the current iteration's data and copy it to the environment.
const caseName = pm.iterationData.get('caseName');
const resourceId = pm.iterationData.get('resourceId');
const expectedResponses = pm.iterationData.toObject();

pm.environment.set('caseName', caseName);
pm.environment.set('resourceId', resourceId);
// Environment values are stored as strings, so serialize the object.
pm.environment.set('expectedResponses', JSON.stringify(expectedResponses));
The iteration data is accessible via pm.iterationData, through functions like get(…) and toObject(). I usually extract the iteration data and save it to the environment right away, in the pre-request script of the first request in the collection.
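To tie this together with the CLI option mentioned earlier, here is a minimal sketch of kicking off such a data-driven run through Newman’s Node API. The file names are hypothetical, and the commented line shows the equivalent CLI call:
// Equivalent CLI call: newman run my-collection.json -e my-environment.json -d data.json
const newman = require('newman');

newman.run({
    collection: require('./my-collection.json'),   // exported Postman collection
    environment: require('./my-environment.json'), // exported environment
    iterationData: './data.json',                   // the array of cases shown above
    reporters: 'cli'
}, function (err, summary) {
    if (err) { throw err; }
    console.log(`Run finished with ${summary.run.failures.length} failure(s).`);
});
Newman executes one iteration of the collection per object in the data file, exactly as the in-app collection runner does.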
Code Reuse
Once I was sure that all the requirements were met and that Postman could be integrated into our regression testing solution, I set up a meeting with all the stakeholders to talk about the process of adopting the new technology, its benefits, and the ways to use it. An important comment made during the meeting concerned code reuse. In the code examples I was showing, there was a pattern I had not bothered to extract and generify, and frankly I had not put much emphasis on the matter. I guess I was thinking that because each collection was independent of the others, there was no reason for the collections to share a library. As soon as the meeting ended, I began to google furiously! Every post I read about importing external libraries into Postman led to the same answer; I did not like it at first, but I came to terms with it. The only way, it seems, to import an external library is by fetching it via a request, much like you would with a CDN link.
// Pre-request script: fetch the shared utility code and cache it in the environment.
pm.sendRequest('http://{{GITLAB_IP}}/snippets/3/raw', (error, response) => {
    if (error) {
        throw error;
    } else {
        pm.environment.set('utilsRaw', response.text());
    }
});
So what I ended up doing was placing the above code in the pre-request script of the first request of any collection in which I wanted to use our utility code. The utility code itself can be stored anywhere that is reachable within the organization’s network. In this example, I enlisted GitLab snippets for the job.
eval(pm.environment.get('utilsRaw'));
In the use case I was facing, I decided to add additional functions to the pm object. To do so, I run this eval expression at the start of any script in which I want to use those additional functions.
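To make the idea concrete, here is a minimal sketch of what such a utility snippet might contain; the helper names are hypothetical and not the ones from my actual snippet:
// Hypothetical content of the GitLab snippet fetched into 'utilsRaw'.
// After eval(), these helpers hang off the pm object for the rest of the script.
pm.expectStatus = function (expectedCode) {
    // Shared bail-out assertion used at the top of every test in a flow.
    postman.setNextRequest(null);
    pm.expect(pm.response).to.have.property('code', expectedCode);
};

pm.saveJson = function (key, value) {
    // Environment values are strings, so serialize objects before storing them.
    pm.environment.set(key, JSON.stringify(value));
};
A test script that has already run the eval line can then call pm.expectStatus(200) instead of repeating the same boilerplate in every test.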
Conclusion
There are many more features that I did not mention or even research, such as team workspaces, forking, and mock servers, for the simple reason that I have not yet needed them. However, the further I get with the revised testing solution, the more I see that I may have to dig deeper and discover new things that will have me return and update this story. Until then, I hope you found the tips and tricks I did write about useful.