Execute JMeter Scripts Behind the VPN

     One of my colleagues had issues running JMeter scripts behind a VPN or proxy, and I proposed four solutions. I would like to share those four solutions here: two of them live within the JMeter UI itself, another is applied when launching JMeter, and the final one is a static configuration in the system.properties file available under JMeter/bin. We will discuss each solution in more detail below.

Solution 1: Configure the Proxy Server into each HTTP Request

     In the HTTP Request element, the Proxy Server section can be found under the Advanced tab. There we can add the proxy server name or IP address, port number, username, and password. In some cases, the username and password are optional. The screenshot below shows the details.

Solution 2: Configure the Proxy Server into HTTP Request Defaults

     Suppose we have 50 or more HTTP Requests; it would be tedious to configure the Proxy Server details in each HTTP Request as described in Solution 1. Also, if your proxy settings change in the future, we would have to update all 50-plus requests. So the idea is to configure the Proxy Server in HTTP Request Defaults instead. In this case, we do not enter anything into the individual HTTP Requests; we just open HTTP Request Defaults, find the Proxy Server section under the Advanced tab, and configure the details there. You can add HTTP Request Defaults under the Test Plan, like a global declaration, or at the Thread Group level. The screenshot below shows the details.

Solution 3: Launch JMeter from the command line with the following parameters

-H [proxy server hostname or IP address]
-P [proxy server port]
-N [nonproxy hosts] (e.g. *.apache.org|localhost)
-u [username for proxy authentication, if required]
-a [password for proxy authentication, if required]

Following are some examples in the Windows system:

jmeter -H [proxyAddress] -P [portNumber]

Or you can use the IP address instead of the server name:
jmeter -H [IPAddress] -P [portNumber]

If your proxy server requires a username and password, use the command below:
jmeter -H [proxyAddress] -P [portNumber] -u [username] -a [password]

If a non-proxy host list is provided, use this command:
jmeter -H [proxyAddress] -P [portNumber] -u [username] -a [password] -N [nonProxyHosts]

With this method, there is no need to worry about proxy server configuration at the JMeter UI level.

Solution 4: Set up the proxy properties in the system.properties file

     Open system.properties in edit mode; this file is located under the \apache-jmeter-5.1.1\bin directory (I am using JMeter 5.1.1). Add the following properties to the end of the file:

http.proxyHost
http.proxyPort
https.proxyHost
https.proxyPort

For example:

http.proxyHost=localhost
http.proxyPort=8887
https.proxyHost=localhost
https.proxyPort=8887

     If a non-proxy host list is needed, set the following system properties in the same way:

http.nonProxyHosts
https.nonProxyHosts
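
Like the JMeter -N option, these properties take a pipe-separated list of hosts. A minimal sketch for system.properties (the internal host names are hypothetical examples):

```properties
# Hosts that should bypass the proxy (pipe-separated, * wildcard allowed)
http.nonProxyHosts=localhost|*.internal.example.com
https.nonProxyHosts=localhost|*.internal.example.com
```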

     We can use any of the above solutions to run JMeter scripts behind a VPN or proxy. The first method is best avoided if you have many HTTP Requests to execute. In some situations, we may need to run the scripts without the VPN or proxy; in that case, we can go with Solution 2 (just disable the HTTP Request Defaults component) or Solution 3.

     Please try any of the above solutions whenever you find yourself running JMeter scripts behind a VPN or proxy.

make it perfect!

Test Automation Code Review Factors

     A code review is a quality assurance activity that ensures check-ins are reviewed by someone other than the author. Practice this activity often, as it is an excellent way to catch errors early in the process. Test code should be treated with the same care as feature code, and therefore it should undergo code reviews as well. Test code can be reviewed by other automation engineers or by developers who are familiar with the project and codebase.

     In this article, we will discuss what exactly we should look for when reviewing automation test code. I would like to share eight specific factors here.

Does your test case verify what’s needed?

     When verifying something manually, a lot of hidden validations are being made, so if anything is incorrect we are likely to notice it. Our automated tests are not as good at this: they will only fail if the conditions we explicitly specify are not met. During automation scripting, we usually add a minimal number of medium- to high-level checkpoints. We should add as many low-level checkpoints as possible at each step to maximize test coverage; this will help increase the quality of your software. The test automation code review helps identify missing checkpoints.

Does the test case focus on one specific thing?

     Each test case should focus on one specific thing. This may sound confusing, since a single test often contains a bunch of assertions; however, all of those assertions should work together to verify a single thing. If the test case also verified the company's logo or some other feature besides the one actually being automated, that would be outside the scope of the test. The automation code review helps identify and flag out-of-scope items implemented in the test scripts.

Can the test cases run independently?

     Each test should be independent, which means it should not rely on other tests at all. This makes it much easier to track down failures and their root causes, and also enables the team to run the tests in parallel to speed up execution if needed. Sometimes automation engineers fall into a trap while isolating test cases, because they use related test runs as setup for other tests. For example, for a test that deletes an item from a list, they first run the case that adds an item to the list and only then execute the one that deletes it. In such cases, we can recommend that the test create and delete whatever it needs itself, and if possible do this outside of the GUI (for example via API calls or database calls).

How is the test data managed for test case executions?

     How test data is handled can make the difference between a stable and a flaky test suite. Since each test case should be developed to run independently, and all test cases should be able to run in parallel at the exact same time, each test should be responsible for its own test data. When test cases run in parallel with different expectations of the test data's state, they end up failing even though there is no real issue with the application. It is recommended that the automation engineer create whatever data is needed within the test itself, or keep it in external files and load that data into the specific test cases.
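
One simple way to keep parallel runs from colliding is to have each test generate its own unique data. Below is a minimal sketch in plain Java (the factory name and the username format are hypothetical):

```java
import java.util.UUID;

public class TestDataFactory {
    // Each call returns a fresh, collision-free username such as "user-<uuid>",
    // so two tests running at the same time never fight over the same record.
    public static String uniqueUsername() {
        return "user-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        // Two "tests" asking for data get two distinct users.
        String first = uniqueUsername();
        String second = uniqueUsername();
        System.out.println(first.equals(second)); // prints "false"
    }
}
```

The same idea applies to any shared resource: generate it, use it, and clean it up inside the test that owns it.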

Is there proper separation of concerns?

     You should treat your test code with the same care as feature code. That means clean coding practices, such as separation of concerns, should be followed. The test method should focus only on putting the application into the desired state and verifying that state. The implementation that manipulates the application's state should not live within the test itself. For example, if you have a test case to submit an application, the method submitApplication() should be called as a test step in the test class, while the actual implementation of submitApplication() should live in a separate helper class. Also, non-test methods should not make any assertions. Their responsibility is to manipulate the state of the application; the test's responsibility is to verify that state. Adding assertions within non-test methods decreases their reusability.
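
To make the split concrete, here is a minimal sketch in plain Java (class and method names are hypothetical): the helper owns the state change and makes no assertions, while the test only triggers it and verifies the resulting state.

```java
// Helper class: manipulates application state, asserts nothing.
class ApplicationHelper {
    private String state = "DRAFT";

    public String submitApplication() {
        state = "SUBMITTED"; // the "how" lives here, away from the test
        return state;
    }
}

public class SubmitApplicationTest {
    public static void main(String[] args) {
        ApplicationHelper app = new ApplicationHelper();
        // Test step: put the application in the desired state...
        String state = app.submitApplication();
        // ...and verify that state in the test itself.
        if (!"SUBMITTED".equals(state)) {
            throw new AssertionError("expected SUBMITTED, got " + state);
        }
        System.out.println("submit verified: " + state);
    }
}
```

Because the helper stays assertion-free, any other test can reuse submitApplication() and apply its own verifications.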

Is there anything in the test that should be a utility for reusability?

     Separation of concerns already addresses this in most cases, but double-check that the test isn't implementing something that could be reused by other tests. Sometimes this is a common series of verification steps that certainly is the test's responsibility, yet would be duplicated across multiple tests. In these cases, it's a good idea to recommend that the automation engineer move this block of code into a utility method that can be reused by multiple tests.

Are the object locators reliable?

     In the case of graphical user interface and mobile tests, the automation engineer needs selectors to locate and interact with the web elements. The first thing to ensure is that these locators are not embedded within the tests themselves (think separation of concerns again). Also, make sure the selectors can stand the test of time. For example, selectors that depend on the DOM structure of the page (e.g. an index) will only work until the structure of the page changes. Encourage the automation engineer to use unique IDs to locate any elements they need to interact with, even if that means adding the ID to the element as part of this check-in, or to create a robust custom XPath to locate and interact with the elements.

Are they using a stable wait strategy?

     Mark any hard-coded waits. Automated tests run faster than customers would actually interact with the product, and this can lead to issues such as trying to interact with or verify something that is not yet in the desired state. To solve this, the code needs to wait until the application is in the expected state before proceeding. However, hard-coding a wait is not suitable: it leads to all kinds of problems, such as lengthening the execution duration, or still failing because it didn't wait long enough. Instead, you can recommend that the automation engineer use conditional waits that pause execution only for the minimum amount of time needed.

     Now you are ready to start the automation test code review. Test code review is a highly recommended process and should be added to your automation project's life cycle. It will improve the quality of your automation test scripts, which in turn helps improve your application's quality, so your customers get a quality product that adds more value to their businesses. I hope you got a clear idea of the different factors that need to be considered during a test code review; try to apply them in your automation project life cycle.


Automation Exceptions and Solutions

     We know that an exception is an event which occurs during the execution of a program and disrupts the normal flow of the program's instructions. When an error occurs within a method, the method creates an exception object and hands it off to the runtime system; the block of code that catches and processes it is called an exception handler.

     In real-world Selenium WebDriver automation, an automation engineer will definitely come across various exceptions that disrupt test case execution. The scripts you work with sometimes run properly and sometimes simply don't. For any script you develop, make sure you deliver the best quality code, with proper exception handling techniques implemented.

     In this article, I would like to share the common exceptions seen during Selenium automation execution, along with solutions for handling them. The following are the different types of exceptions in Selenium.

WebDriverException

     WebDriverException arises when your code tries to perform an action on a non-existing browser, for example using the driver after closing the driver session:

WebDriver driver = new ChromeDriver();
driver.get("https://journeyofquality.com/");
driver.close();
driver.findElement(By.id("bu345")).click();

Solution: You can avoid this WebDriverException by calling driver.close() only after all tests have completed, for example in an @AfterSuite method if you are using TestNG, instead of calling it after each test case.

SessionNotFoundException

     SessionNotFoundException occurs when the driver tries to perform operations on the web application after the browser has been quit via driver.quit().

@Test
public void openJourneyOfQuality() {
    driver.get("https://journeyofquality.com/");
    Assert.assertEquals(driver.getTitle(), "Journey");
    driver.quit();
    Assert.assertEquals(driver.getCurrentUrl(), "https://journeyofquality.com/");
}

Solution: This exception can be handled by calling driver.quit() after all tests have completed instead of after each test case. Quitting after each test can lead to issues where the driver instance becomes null and subsequent test cases try to use it without re-initializing. We can kill the driver instance once all test execution has completed, and hence we can add it to a TestNG @AfterSuite method.

StaleElementReferenceException

     You will come across StaleElementReferenceException mainly when Selenium navigates to a different page, comes back to the same old page, and performs operations on an element of the old page that is no longer available. Technically, it occurs when the element referenced in the script is no longer in the cache and the script still tries to locate it. When we inspect and locate an element on a page using Selenium, it is stored in a cache which is discarded when the driver navigates to another page during execution. When the user navigates back to the old page and then tries to access the now-removed element, we get a StaleElementReferenceException during script execution.

Solutions:

  • Refresh the webpage and perform the action on that web element again.
  • Wrap the interaction in a try-catch block inside a for loop to re-locate the element and perform the action. Once the action succeeds, break out of the loop.

for (int value = 0; value <= 2; value++) {
    try {
        driver.findElement(By.xpath("webElement")).click();
        break;
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
}

  • Wait for the element to become available by using ExpectedConditions:

wait.until(ExpectedConditions.presenceOfElementLocated(By.id("webElement")));

NoSuchElementException

     NoSuchElementException occurs when the element locator we provided in the Selenium script is unable to find that web element on the web page. This typically happens in two ways:

  • We provided an incorrect locator and are trying to find the web element.
  • We provided a correct locator, but the web element is not yet available on the web page; that is, the action was performed before the element loaded.

@Test
public void testJourney() {
    driver.get("https://journeyofquality.com/");
    try {
        driver.findElement(By.id("invalidelement")).click();
    } catch (NoSuchElementException e) {
        System.out.println("No Such Element exceptional case");
    }
}

     The above code tries to locate an element with the id invalidelement on the website. When the element is not found, the driver throws NoSuchElementException.

Solution: Make sure the locator (XPath/id) provided is correct, or use an explicit wait for the presence of the element before performing the action on it. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.presenceOfElementLocated(By.id("invalidelement")));

ElementNotVisibleException

     ElementNotVisibleException is thrown when WebDriver tries to perform an action on an invisible web element, which cannot be interacted with since the element is in a hidden state. For example, if a button is not displayed on the web page but its HTML code is present, trying to click that particular button via locators in the automation script results in ElementNotVisibleException. This exception can also occur when the page has not loaded completely and the user tries to interact with an element.

Solution: We have to wait until that particular element becomes visible on the web page. To tackle this, you can use the explicit wait methods in Selenium. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("invisibleElement")));

NoSuchFrameException

     On a webpage, the user may have to deal with HTML documents embedded inside another HTML document; these are called iframes. In order to work with the web elements inside any iframe, we first have to switch to that particular iframe in Selenium, using the frame name or frame ID, and then inspect and locate the respective web elements inside it. NoSuchFrameException occurs when the driver in the Selenium script is unable to find the frame it is asked to switch to, which happens when the driver switches to an invalid or non-existing iframe.

Solution: We have to wait for that particular frame to become available on the web page. To tackle this, you can use the explicit wait methods in Selenium. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(frame_id));

NoAlertPresentException

     An alert is a pop-up that provides important information to users or asks them to perform certain operations, such as reading the message on the alert, accepting it by pressing the OK button, or dismissing it by pressing the CANCEL button. In order to work with alert pop-ups, we first have to switch to the alert and then perform operations on the alert window. NoAlertPresentException occurs when the driver is unable to find the alert it is asked to switch to, i.e. when switching to an invalid or non-existing alert pop-up. Sometimes this exception is thrown even when the alert simply has not finished loading.

Solution: To handle NoAlertPresentException, place the script inside a try-catch block and provide an explicit wait for the alert to be present, as shown below.

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.alertIsPresent());

NoSuchWindowException

     We know that Selenium automates web applications only, and in real time we may have to deal with browser windows that open when you click on buttons or links. In order to interact with the web elements in any browser pop-up window, we have to handle all the opened windows and then locate the respective web elements inside the pop-up window. NoSuchWindowException occurs when the driver is unable to find the pop-up window it is asked to switch to, i.e. when the window handle doesn't exist or is not available to switch to.

Solution: Iterate through all the active window handles and perform the desired actions. For example:

for (String handle : driver.getWindowHandles()) {
    try {
        driver.switchTo().window(handle);
    } catch (NoSuchWindowException e) {
        System.out.println("No such window exceptional case");
    }
}

TimeoutException

     Waits are mainly used in WebDriver to avoid exceptions such as ElementNotVisibleException, which occurs when trying to click a button before the page has completely loaded. TimeoutException occurs when a command does not complete within the configured wait time. If the components still don't load even after the mentioned wait time, org.openqa.selenium.TimeoutException is thrown.

Solution: To avoid this exception, add wait commands: implicit, explicit, or fluent waits.

Implicit Wait:

     The implicit wait in Selenium tells the web driver to wait for a certain amount of time before throwing a NoSuchElementException. The default setting is 0. Once we set the time, the web driver will wait that long for an element before throwing the exception. For example:

driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
driver.get("https://journeyofquality.com/");

     In the above code, an implicit wait of 15 seconds is added. Every subsequent findElement call will poll the page for up to 15 seconds before giving up; note that the implicit wait applies to element lookups, while page-load time is governed by the separate page load timeout.

Explicit Wait:

     The explicit wait in Selenium tells the web driver to wait until certain conditions (ExpectedConditions) are met, or until the maximum time is exceeded, before throwing a TimeoutException. It is an intelligent kind of wait, but it can be applied only to specified elements. We already discussed a few explicit wait examples above, such as:

wait.until(ExpectedConditions.presenceOfElementLocated(By.id("webElement")));
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(frame_id));
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("invisibleElement")));

Fluent Wait:

     The fluent wait in Selenium defines the maximum time the web driver should wait for a condition, as well as the frequency with which to check the condition, before throwing an exception. It checks for the web element at regular intervals until the object is found or a timeout happens; the polling frequency sets up a repeat cycle that re-checks the condition at regular intervals. Below is a sample implementation of a fluent wait:

Wait<WebDriver> wait = new FluentWait<>(driver)
    .withTimeout(Duration.ofSeconds(SECONDS))
    .pollingEvery(Duration.ofSeconds(SECONDS))
    .ignoring(Exception.class);

Overall Exception Handling Solutions

     In order to handle the above types of Selenium exceptions, we can use the following exception handling mechanisms.

Throw: The throw keyword is used to throw an exception up to the caller to handle.

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (Exception e) {
        // Do whatever you wish to do here
        // Now throw the exception back to the caller
        throw e;
    }
}

Multiple Catch Blocks: You can use multiple catch blocks to catch different types of exceptions. The syntax for multiple catch blocks looks like the following:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (ExceptionType1 e1) {
        // Code for handling exception 1
    } catch (ExceptionType2 e2) {
        // Code for handling exception 2
    }
}

Try/Catch: A try block encloses code that can potentially throw an exception. A catch() block contains exception-handling logic for exceptions thrown in a try block. Code within a try/catch block is referred to as protected code, and the syntax for using try/catch looks like the following:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (Exception e) {
        // Code for handling the exception
    }
}

Finally: A finally block contains code that must be executed whether an exception is thrown or not:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (ExceptionType1 e1) {
        // Code for handling exception 1
    } catch (ExceptionType2 e2) {
        // Code for handling exception 2
    } finally {
        // The finally block always executes.
    }
}

     I hope you got an idea of the various common Selenium exceptions, their solutions, and the general way to handle them. Try to implement these exception handling mechanisms in your automation scripts to handle these runtime anomalies.


Automating the iOS Springboard with Appium

     Sometimes we want to automate an iOS device but don't want to automate any app in particular: we may want to start from the home screen as part of a multi-app flow, or simply automate a set of built-in apps the way a user would. In this case, it's actually possible to start an Appium iOS session without a specific application. To do this we make use of the iOS Springboard, which is essentially another word for the home screen. The Springboard is itself an application, though one that can't be terminated. As an "app", it has its own bundle ID: com.apple.springboard. So we can use this to start an Appium session without referring to any real application in particular:

capabilities.setCapability("app", "com.apple.springboard");

     On its own, however, this isn’t going to work, because Appium will try to launch this app, and deep in the XCUITest code related to app launching is some logic that makes sure the app is terminated before launch. As mentioned earlier, the Springboard can’t be terminated, so trying to start an Appium session this way will lead to a hanging server. What we can do is include another capability, autoLaunch, and set it to false, which tells Appium not to bother with initializing and launching the app, but just to start a session and give back control immediately:

capabilities.setCapability("autoLaunch", false);

     At this point, starting an Appium session this way will drop you at the Springboard, though not necessarily at any particular page of it. If you are an iOS user, you will know that the home screen is really a whole set of screens, depending on how many apps you have and how you have organized them. One of the main things you would want to do from the home screen is find and interact with the icon for a given app. Let's see how we can do this.

     Let's imagine that below is our test method implementation:

@Test
public void testSpringboard() {
    wait.until(AppIconPresent("FaceTime")).click();
    pressHome();
    wait.until(AppIconPresent("Camera")).click();
    pressHome();
}

     Here we have created a custom expected condition called AppIconPresent, which takes the app icon text and attempts to find that icon, navigating through the different pages of the Springboard if the icon is not already present. This is conceptually a bit tricky because of how the Springboard app is implemented. No matter how many pages your Springboard has, all pages show up within the current UI hierarchy. This means it is easy to find an icon for an app even if it's not on the currently displayed page. However, if you try to tap that icon, it will not work, because the icon is not actually visible. So we need some way of moving to the correct page before tapping. Let's see the implementation of AppIconPresent below:

protected ExpectedCondition<WebElement> AppIconPresent(final String appName) {
    pressHome();
    currPage = 1;
    return new ExpectedCondition<WebElement>() {
        @Override
        public WebElement apply(WebDriver driver) {
            try {
                return driver.findElement(By.xpath(
                    "//*[@name='Home screen icons']" +
                    "//XCUIElementTypeIcon[" + currPage + "]" +
                    "/XCUIElementTypeIcon[@name='" + appName + "']"
                ));
            } catch (NoSuchElementException err) {
                swipeToNextScreen();
                currPage += 1;
                throw err;
            }
        }
    };
}

     The first thing we do is call our pressHome helper method which is just another method implemented in the current class. Below is the implementation:

protected void pressHome() {
    driver.executeScript("mobile: pressButton", ImmutableMap.of("name", "home"));
}

     What calling pressHome here does is ensure that we are always on the first page of the Springboard. Then, we set a class field to define what page we are on. We initialize it to 1, because after pressing the home button, we know we are on the first page. Then, in our actual condition check implementation, we try to find an icon that has the name we have been given to find.

     Here is the tricky part, we don’t want to just find any icon that has the correct name because then we would find the icon even if it’s not on the current page. We only want to find an icon on the current page and then swipe to the next page if we can’t find it. To do that, we take advantage of a fact about Springboard’s UI hierarchy, which is that each page is actually coded up as an XCUIElementTypeIcon, which contains the actual app icons as children. So we can write an XPath query that restricts our search to the XML nodes corresponding to the current page. If we are unable to find an icon on the current page, we call another helper method, swipeToNextScreen. Below is the simple implementation that just performs a swipe from an area near the right edge of the screen over to the left:

protected void swipeToNextScreen() {
    swipe(0.9, 0.5, 0.1, 0.5, Duration.ofMillis(750));
}

     Once we have swiped to the next screen, we increment our page counter because we have now moved to the next screen. We are relying on the assumption that we will eventually find the app by the time we reach the last page, because we don’t have any logic to detect whether our swipeToNextScreen was actually successful. In general, AppIconPresent is a great example of a useful custom expected condition that has a side effect. We build it into an expected condition so we can use it flexibly with the WebDriverWait interface, and so we don’t need to write any of the looping or retry logic ourselves.

     This is all about automating the iOS Springboard with Appium. I hope you really enjoyed reading about and learning this automation workflow. Please try to utilize this reusable utility in your iOS automation flows wherever you need it.

Reference: Appium Pro


API Test Reporting

     We all know that Postman is a great tool for dissecting RESTful APIs made by others, or for testing ones you have made yourself. It offers a sleek user interface with which to make HTTP requests, without the hassle of writing a bunch of code just to test an API's functionality. The tool supports the common HTTP request methods such as GET, POST, PUT, PATCH, and DELETE, and it also lets you write test cases to validate the response code, response body data, headers, etc.

     In this article, I would like to share the different requests and assertions that I have created using Postman, and also discuss the important part: test execution and reporting. I have created API requests with the GET, POST, PUT, PATCH, and DELETE methods inside a collection in Postman. In each request, I added validations to check the response code and response body. The following are some sample test validations that I created:

pm.test("Verify status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Verify user updated", function () {
    pm.expect(pm.response.text()).to.include("morpheus");
    pm.expect(pm.response.text()).to.include("zion resident");
});

Execute Collections from Postman

     Once you have completed the HTTP requests and scripting under a particular collection, you can execute the collection. Here, I created a TestPOC collection and added all the requests under it.

     You can click the |-> icon to the right of your collection name and select Run to open the Collection Runner window, or click the Runner button at the top of the Postman window, just below the menu.

     In the Collection Runner window, you can see the selected collection and the selected requests in their run order. In this window, you can set the number of iterations and the delay between requests. Once all is set, click the Run <your_collection_name> button (here, Run TestPOC) to start the execution; after a few seconds you will get the results in the Run Results section, as below.

     In the Run Results, you can see the total passes and failures along with the details of each request and assertion. You can also export these results in JSON format. In this case, I intentionally failed one case so it shows up in the results. This method of executing collections from Postman is pretty straightforward. But what if we could effortlessly run and test a Postman collection directly from the command line, and also generate a good HTML report with the test execution details? We will see how to achieve command-line execution of Postman collections and HTML report generation below.

Execute Collections from Command-line

     Command-line execution is possible with the help of the newman package. Newman is a command-line collection runner for Postman. It allows you to effortlessly run and test a Postman collection directly from the command line. It is built with extensibility in mind, so you can easily integrate it with your continuous integration servers and build systems. Before using this package, you need to install it via npm; use the below command in your command prompt to install newman:

npm install -g newman

     Once newman is installed, you can export your collection from Postman; it will be in JSON format (I got the file with the name TestPOC.postman_collection.json). Save the JSON file in a location you can access with your terminal and navigate to that path. Once you are in the directory, execute the below command, replacing collection_name with the name you used to save the collection:

newman run collection_name.json

You will get the results like below,

     In the command-line output, you can see the API request details, the assertion statuses, and a table view of the executed and failed counts. You can also iterate the run a given number of times using the below command:

newman run collection_name.json -n <count>

     Next we will see how newman can generate HTML reports. Newman provides execution reports of collections in different formats (such as HTML and JSON), but the clearer way to present execution status to a customer is HTML rather than JSON. To do this, add the HTML reporter for newman using the following command:

npm install -g newman-reporter-html

     Once newman-reporter-html is installed, you can run the collection and generate the HTML report using the following command:

newman run postman_collection.json -r html

     Once the above command completes, the HTML report is generated in the newman directory next to your collection JSON, with a file name like newman-run-report-<date_time>. In this report, you can see the details of iterations, requests, total scripts, failures, and execution duration, with failure details. Below is an excerpt of the report.

     To make the report even more user-friendly, there is another powerful tool called the HTML Extra reporter, which generates beautiful reports. It also makes it possible to have an overview of all test runs, and the description feature is available for collections, folders, and requests. To use it, add the HTML Extra reporter for newman using the following command:

npm install -g newman-reporter-htmlextra

     Once newman-reporter-htmlextra is installed, you can run the collection and generate the user-friendly HTML report using the following command:

newman run postman_collection.json -r htmlextra

     Once the above command completes, the HTML report is generated in the newman directory where your collection JSON exists, with a file name like <your_collection_name-date_time>. Below is an excerpt of the report.

     There is also global information about the requests, responses, test passes, and test failures.

     I hope you really enjoyed reading this article and got some idea of Postman collection execution, both from the Postman tool and from the command line with the help of newman. You should also have some understanding of HTML report generation for API collections. Try to implement these concepts in your Postman API testing activities for better reporting.
