Automate and Validate QR Code Details

     I recently had a scenario where I needed to automate the validation of QR code data. Contact details were encoded within the QR code, and I had to validate those details against an expected result. I found a solution in Google's ZXing library, implemented it in my script, and validated the QR code data. I achieved this with the help of the following two libraries from Google:

  • Google zxing – core
  • Google zxing – javase

Following are the Maven dependencies I used to achieve this scenario:

<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>core</artifactId>
    <version>3.4.1</version>
</dependency>

<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>javase</artifactId>
    <version>3.4.1</version>
</dependency>
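If your project uses Gradle rather than Maven, the same two libraries can be declared with the coordinates taken from the snippets above (the implementation configuration is an assumption about your build setup):

```groovy
dependencies {
    // Same group:artifact:version coordinates as the Maven dependencies above
    implementation 'com.google.zxing:core:3.4.1'
    implementation 'com.google.zxing:javase:3.4.1'
}
```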

We can use this capability in different ways. Here, I would like to share the following:

  • Validate QR code from the web application directly.
  • Validate QR code from the local system.

Validate QR code from the web application directly

     In this case, you first have to get the QR code image from your application. Create a URL object from the image's src attribute, then read the URL into a BufferedImage. Once the image is ready, process it using ZXing and convert it to a binary bitmap. Next, decode the QR code using ZXing's decoding method and store the output in a Result object. Finally, validate the decoded data against the expected result using a TestNG assertion. Below is the logic:

String qrCodeImage = "//img[@src='https://assets.outlook.com/qrprod/-1603550372.png']";
String urlOfQRCode = driver.findElement(By.xpath(qrCodeImage)).getAttribute("src");
// Create an object of the URL class
URL url = new URL(urlOfQRCode);
// Read the image from the URL
BufferedImage bufferedImage = ImageIO.read(url);
// Process the image
LuminanceSource luminanceSource = new BufferedImageLuminanceSource(bufferedImage);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(luminanceSource));
// Capture the details of the QR code
Result result = new MultiFormatReader().decode(binaryBitmap);
// TestNG's assertEquals takes (actual, expected)
Assert.assertEquals(result.getText(), "Expected_Result");

Validate QR code from the local system

     In this case, you should have the QR code image available at a path on your local system. Read the image file into a BufferedImage, then process it using ZXing and convert it to a binary bitmap. Next, decode the QR code using ZXing's decoding method and store the output in a Result object. Finally, validate the decoded data against the expected result using a TestNG assertion. Below is the logic:

// Read the QR code image from the local file path
BufferedImage bufferedImage = ImageIO.read(new File("your_QR_code_image_path"));
// Process the image
LuminanceSource luminanceSource = new BufferedImageLuminanceSource(bufferedImage);
BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(luminanceSource));
// Capture the details of the QR code
Result result = new MultiFormatReader().decode(binaryBitmap);
// TestNG's assertEquals takes (actual, expected)
Assert.assertEquals(result.getText(), "Expected_Result");

     Try the above logic in your code whenever you need to validate QR code details during automation execution.

make it perfect!

Progressive Web Application Testing

     Progressive Web Application (PWA) technology is exploding in the IT market due to its relative simplicity in development combined with the fast user interaction it offers. But testing a PWA requires a different approach, even while the testing methods remain the same. In this article, I would like to share what a PWA is, why testing PWAs is different, the features of PWA applications, automation tools to test them, and best practices.

What is Progressive Web Application (PWA)?

     A Progressive Web Application is a web application that combines the functionality of native apps with the scalability and reach of web pages. It suits businesses that want to get fresh content and features to their users without the overhead of native applications, and end users who want a fast, seamless, personalized experience. PWAs work in the following fashion:

  • Always Update:
    • Integrate new features immediately.
    • Work on any device or technology.
  • Looks Like an App:
    • They get their own shortcut.
    • Desktop icon.
    • No browser tab.
  • The Tech:
    • Service Workers
    • Cache API
    • Indexed DB
    • Fetch API
    • HTTPS
  • Direct to Customer:
    • Doesn’t have to go through an App Store.
    • Never has to be updated.
    • Always Personalized.

Why is testing a Progressive Web Application (PWA) different?

     Just like any other web application, it is imperative to test PWAs. However, testing a PWA differs considerably from testing a traditional web application. In a traditional web application, one would test the features the application provides; newer types of testing add checks that pages render appropriately on various mobile devices and that behavior is predictable and consistent. With progressive web applications, however, even the methodology for cross browser testing changes. Let's look into the process of testing a progressive web app.

Testing for reliability

     PWA reliability depends on the ability to serve pages over HTTPS. A tool such as Lighthouse will allow you to verify that all web pages are served over HTTPS. Lighthouse can be used for more than just testing the reliability of the website.

Manual testing for native-like behavior

     This type of testing requires manual checks on various browsers and devices. One of the primary behaviors is the capability to add the PWA to the home screen like any other app. Once added to the home screen, the web app should start to behave like an app and not like a website. This includes the way it launches, and its ability to show some content even when there is no network connection.

Software-based testing for native-like behaviour

     Manual testing is recommended for the most common browsers on some of the most common devices. However, since it's impossible to test on all browsers and all devices, you can use Lighthouse to evaluate some of the native-like features. For example, Lighthouse can test for the presence of offline functionality and verify that pages load in offline mode. It examines these features by looking for the metadata that lets browsers know what to do when the PWA is launched offline.

Making the best use of URLs

     As we have seen, a progressive web application can look and behave like an app. However, one must remember that it is still a website. For this reason, every page must have a URL that is shareable on social media. Having a URL for every page also ensures that search engine crawlers index the entire website correctly. Some content may end up with very similar URLs; such cases often occur in a PWA, so add a canonical tag in the head of those pages.

Cross browser testing

     As we have seen earlier, a PWA relies heavily on the features of modern browsers. This makes it extremely important to ensure that the site loads and behaves as expected on all the different types of browsers, such as Firefox, Chrome, Safari, etc. Some older browsers, such as Internet Explorer 8.0 and earlier, do not support PWAs. It is thus crucial to check how the application behaves when loaded on such older browsers.

     Cross browser testing has been prevalent for a long time. However, its prominence has significantly grown due to the lack of standards and uniformity in browsers and devices across the world. Considering the heavy dependency of PWA on modern browsers, cross browser testing of PWA apps holds a major importance in today’s mobile first world.

Features of Progressive Web Applications
  • Responsiveness and browser compatibility: The progressive web design strategy is to provide basic functionality and content to everyone, regardless of browser and connection quality. A PWA is therefore compatible with all browsers, screen sizes, and other device specifications.
  • Offline support: PWAs work offline and on low-quality networks.
  • Push notifications: Push notifications play an important role in customer engagement if used wisely. Like a native mobile app, a PWA supports push notifications.
  • Regular updates: Like any other app, a PWA can self-update.
  • A native-application-like interface: These apps mimic the interactions and navigation of native apps.
  • Discoverability: PWA applications are shared through URLs, so they can be easily found. A user simply visits the site and adds it to the home screen.

Important points to keep in mind while testing PWA
  • Validate the PWA manifest: A manifest file is a must for a PWA. A tester should look for the following in the file:
    • It has a name or short_name property.
    • It has a start_url property.
    • Its icons property includes a 192px and a 512px icon.
    • Its display property is set to standalone, fullscreen, or minimal-ui.
  • Validate the service worker: Verify that a service worker is registered with a fetch event handler.
  • The website should be served fully via HTTPS: Safety is a major concern in the world of PWAs, and a tester should always make sure the site is served via HTTPS. To test this, you can use the Lighthouse tool.
  • Web pages are responsive: Make sure that your web application behaves responsively across all mobile and desktop devices.
  • Offline loading: All of the web pages, or at least the critical ones, should work offline. As a tester, you need to make sure that your web app responds with a 200 when it is offline. You can use Lighthouse or WireMock to test this.
  • Metadata for 'Add to Homescreen': Test whether the web app provides metadata for 'Add to Homescreen'. You can use Lighthouse to check this.
  • Page rendering and transitions: Transitions should be smooth, not choppy, even on slow networks. Test this manually on a slow network: when a user clicks any button, the page should render immediately, without delay.
  • Each page must have a URL: Every page in your web app must have a unique URL, and each URL should open directly in a new browser window.
  • Push notifications: Test that push notifications are not overly aggressive and that they ask the user for permission.
  • Functionality: This is the most essential part of any testing. Functional testing covers the aspects of the application with respect to the functionality defined in the requirement document. We can do it both manually and through automation.
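     As a quick reference for the manifest checks above, a minimal manifest.json that would satisfy them might look like the following (the app name, start URL, and icon paths are illustrative assumptions, not taken from a real project):

```json
{
  "name": "Journey of Quality",
  "short_name": "Journey",
  "start_url": "/index.html",
  "display": "standalone",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

A tester can open DevTools (Application tab) or a Lighthouse report to confirm the deployed manifest contains these properties.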
Automation tools to test PWA

     PWAs can be automated much like any other web or mobile app. Here we discuss the following tools at a high level:

  • CloudQA.
  • Appium.
  • Lighthouse.
CloudQA

     CloudQA provides codeless test automation tools through which a user can record functional test cases and execute them. It also comes with capabilities to add assertions, manage test case execution, and report results. Because it is a powerful codeless automation tool, a tester without any coding knowledge can easily use it to automate test cases, and it supports automation of PWA applications. A user can save a set of functional test cases and later execute them at regression time. There are also options to get test execution reports, create and manage test suites, and execute a test suite and get its report.

     This is a good starting point for manual testers because it requires little coding knowledge and is quite interactive and easy to use. It also does not compromise on the capabilities one can add with automation.

Appium

     Appium is quite well suited to testing PWAs. You will find that, when running on a device, there is not a whole lot that makes a PWA special: it is just a web page running in a special web browser (called a 'context' in the Appium world) that is wrapped by the native app. Appium is simply a connector between your test scripts and the device that runs the app, so the details will depend on which test scripting technology you choose. The only difference when using Appium with devices is that the first step of the test needs to switch contexts, so that the automated script commands are sent to the in-app browser the PWA is running in.

Lighthouse

     Lighthouse is a tool provided by Google that tests an app for PWA features. It is an open-source tool and checks the app against PWA guidelines such as:

  • The application works offline or on a flaky network.
  • It is served from a secure origin, i.e. HTTPS.
  • It is relatively fast.
  • It uses certain accessibility best practices.

     Lighthouse is available as a Chrome extension and as a command-line tool. To run Lighthouse as a Chrome extension, download it from the Chrome Web Store; once installed, it adds a shortcut to the toolbar. Then, with the app open in a browser tab, run Lighthouse on your application and choose Generate Report. Here I generated the report against one PWA application, Starbucks (https://app.starbucks.com/).

     Lighthouse is also available as a Node module, which can be installed and run from the command line. To install it, run this command:

npm install -g lighthouse

You can check Lighthouse flags and options with the following command:

lighthouse --help

     I executed Lighthouse from the command line for the Starbucks application using the following command:

lighthouse https://app.starbucks.com/ --view

     This prints status output to the command line, and you can see the application open in the browser. Once execution completes, the browser closes, the report is generated as HTML, and it opens automatically in another browser tab. The HTML report looks the same as the one generated with the Chrome Lighthouse extension.

     Progressive Web Application (PWA) technology is exploding in the IT market due to its relative simplicity in development combined with the fast user interaction it offers. Because a PWA relies heavily on modern browser features, it is extremely important to ensure that the site loads and behaves as expected on all the different types of browsers. I hope this article helps manual and automation engineers learn about PWA applications, their features, the key points to verify while testing, and the available testing tools.


Execute JMeter Scripts Behind the VPN

     One of my colleagues had an issue while running JMeter scripts behind a VPN or proxy. I proposed four solutions and would like to share them here: two of them sit within the JMeter UI itself, another is applied when launching JMeter, and the final one is a static configuration in the system.properties file available under JMeter/bin. We will discuss each solution below.

Solution 1: Configure the Proxy Server into each HTTP Request

     In the HTTP Request element, the Proxy Server section can be found on the Advanced tab. There we can add the proxy server name or IP address, port number, username, and password; in some cases, the username and password are optional.

Solution 2: Configure the Proxy Server into HTTP Request Defaults

     Suppose we have 50 or more HTTP Requests; it would be difficult to configure the proxy server details in each HTTP Request as discussed in Solution 1, and if the proxy settings change in the future, we would have to update all of those requests. The idea, then, is to configure the Proxy Server in HTTP Request Defaults. In this case, we do not input anything into the individual HTTP Requests; just open HTTP Request Defaults, find the Proxy Server section on the Advanced tab, and configure the details there. You can add HTTP Request Defaults under the Test Plan or at the Thread Group level, like a global declaration.

Solution 3: Launch JMeter from the command line with the following parameters

-H [proxy server hostname or IP address]
-P [proxy server port]
-N [nonproxy hosts] (e.g. *.apache.org|localhost)
-u [username for proxy authentication - if required]
-a [password for proxy authentication - if required]

Following are some examples on a Windows system:

jmeter -H [proxyAddress] -P [portNumber]

Or you can use the IP address instead of the server name:
jmeter -H [IPAddress] -P [portNumber]

If your proxy server requires a username and password, use the command below:
jmeter -H [proxyAddress] -P [portNumber] -u [username] -a [password]

If a non-proxy host list is provided, use this command:
jmeter -H [proxyAddress] -P [portNumber] -u [username] -a [password] -N [nonProxyHosts]

With the above method, there is no need to worry about proxy server configuration at the JMeter UI level.

Solution 4: Setup the proxy properties into the system properties file

     Open the system.properties file in edit mode; it is located under the \apache-jmeter-5.1.1\bin directory (I am using JMeter 5.1.1). Add the following properties to the end of the file:

http.proxyHost
http.proxyPort
https.proxyHost
https.proxyPort

For example:

http.proxyHost=localhost
http.proxyPort=8887
https.proxyHost=localhost
https.proxyPort=8887

If a non-proxy host list is provided, JMeter also sets the following system properties:

http.nonProxyHosts
https.nonProxyHosts

     We can use any of the above solutions to run JMeter scripts behind a VPN or proxy. The first solution is best avoided when you have many HTTP Requests to execute. In some situations, we may need to run the scripts without the VPN or proxy; in that case, we can go with Solution 2 (simply disable the HTTP Request Defaults component) or Solution 3.

     Please try any of the above solutions if you find yourself running JMeter scripts behind a VPN or proxy.


Test Automation Code Review Factors

     A code review is a quality assurance activity that ensures check-ins are reviewed by someone other than the author. It is worth practicing regularly, as it is an excellent way to catch errors early in the process. Test code should be treated with the same care as feature code, and therefore it should undergo code reviews as well. Test code can be reviewed by other automation engineers or by developers who are familiar with the project and codebase.

     In this article, we will discuss what exactly to look for when reviewing automation test code. I would like to share eight specific factors:

Does your test case verify what’s needed?

     When verifying something manually, there are a lot of hidden validations being made, so if anything were incorrect, we would likely notice. Our automated tests are not as good at this: in fact, they will fail only if the conditions we explicitly specify are not met. During automation scripting, we usually add a minimal number of checkpoints, at a medium to high level. We should add as many low-level checkpoints as possible at each step to get maximum test coverage; this helps increase the quality of your software. A test automation code review will help identify the missing checkpoints.

Does the test case focus on one specific thing?

     Each test case should focus on one specific thing. This may seem confusing, since a bunch of things may be asserted; however, all of those assertions should work together to verify that single thing. If the test case also verified, say, the company's logo or some other feature besides the one actually being automated, that would be outside the scope of the test. The automation code review helps identify and flag such out-of-scope items implemented in the test scripts.

Can the test cases run independently?

     Each test should be independent, which means it should not rely on other tests at all. This makes it much easier to track down failures and their root causes, and it also enables the team to run the tests in parallel to speed up execution if needed. Sometimes automation engineers fall into a trap while isolating test cases, because they use related test runs as setup for other tests. For example, for a test that deletes an item from a list, they first run the case that adds an item to the list and only then execute the deletion. Instead, we can recommend that the test create and delete whatever it needs itself, and if possible do this outside of the GUI (for example, via API calls or database calls).

How is the test data managed for test case executions?

     The way test cases deal with test data can make the difference between stable and flaky test suites. Since each test case should be developed to run independently, and all test cases should be able to run in parallel at the exact same time, each test should be responsible for its own test data. When test cases run in parallel with different expectations of the test data's state, they end up failing even when there is no real issue with the application. It is recommended that the automation engineer create whatever data is needed within the test itself, or keep it in external files and load that data into the specific test cases.
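     The "external files" approach above can be sketched in plain Java with a standard .properties file (the file name, keys, and class name here are illustrative assumptions, not from the article):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Minimal sketch: each test loads its own data set from an external
// properties file instead of sharing hard-coded state with other tests.
public class TestDataExample {

    // Load a data set from a .properties file owned by one specific test.
    static Properties loadTestData(Path file) throws IOException {
        Properties data = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            data.load(in);
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        // Create a throwaway data file so the sketch is self-contained;
        // in a real suite this would live under src/test/resources.
        Path file = Files.createTempFile("signup-test", ".properties");
        Files.write(file, "username=alice\nitemCount=3\n".getBytes());

        Properties data = loadTestData(file);
        System.out.println(data.getProperty("username"));
        System.out.println(data.getProperty("itemCount"));
    }
}
```

Because every test owns its own file (or generates its data inline), parallel runs never compete for the same records.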

Whether any separation of concerns?

     You should treat your test code with the same care as feature code. That means clean coding practices, such as separation of concerns, should be followed. The test method should only focus on putting the application in the desired state and verifying that state. The implementation that manipulates the application's state should not live within the test itself. For example, if you have a test case to submit an application, the method submitApplication() should appear as a test step in the test class, while the actual implementation of submitApplication() should live in a separate helper class. Also, non-test methods should not make any assertions: their responsibility is to manipulate the state of the application, while the test's responsibility is to verify that state. Adding assertions within non-test methods decreases their reusability.
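     A framework-free sketch of this separation might look like the following (ApplicationPage and its status field are invented for illustration; in a real suite the helper would drive Selenium and the test would use TestNG assertions):

```java
// Minimal sketch of separating test logic from page/helper logic.
public class SeparationExample {

    // Helper class: knows HOW to drive the application and makes no assertions.
    static class ApplicationPage {
        private String status = "DRAFT";

        void submitApplication() {   // manipulates state only
            status = "SUBMITTED";
        }

        String getStatus() {         // exposes state for the test to verify
            return status;
        }
    }

    // "Test": puts the app in the desired state, then verifies that state.
    static boolean submitApplicationTest() {
        ApplicationPage page = new ApplicationPage();
        page.submitApplication();
        return "SUBMITTED".equals(page.getStatus());
    }

    public static void main(String[] args) {
        System.out.println(submitApplicationTest() ? "PASS" : "FAIL");
    }
}
```

Because all assertions stay in the test, submitApplication() remains reusable by any other test that needs an application in the submitted state.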

Is there anything in the test that should be a utility for reusability?

     Separation of concerns already addresses this in most cases, but double-check that the test is not implementing something that could be reused by other tests. Sometimes this is a common series of verification steps that certainly is the test's responsibility but would be duplicated across multiple tests. In such cases, it is a good idea to recommend that the automation engineer move this block of code into a utility method that can be reused by multiple tests.

Are the object locators reliable?

     In the case of graphical user interface and mobile tests, the automation engineer needs selectors to locate and interact with the web elements. The first thing to ensure is that these locators are not inside the tests themselves; again, think separation of concerns. Also make sure the selectors can stand the test of time. For example, selectors that depend on the DOM structure of the page (e.g. an index) will only work until the structure of the page changes. Encourage the automation engineer to use unique IDs to locate any elements they need to interact with, even if that means adding the ID to the element as part of the check-in, or to create a robust custom XPath to locate and interact with the elements.

Are they using a stable wait strategy?

     Flag any hard-coded waits. Automated tests run faster than customers would actually interact with the product, and this can lead to issues such as trying to interact with or verify something that is not yet in the desired state. To solve this, the code needs to wait until the application is in the expected state before proceeding. However, a hard-coded wait is not suitable: it leads to all kinds of problems, such as lengthening the execution time or still failing because it did not wait long enough. Instead, recommend that the automation engineer use conditional waits that pause execution only for the least amount of time needed.
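     The difference between a hard-coded sleep and a conditional wait can be sketched in plain Java (this is an illustrative polling loop, not Selenium's WebDriverWait API; the 50 ms poll interval is an arbitrary choice):

```java
import java.util.function.BooleanSupplier;

// Illustrative conditional wait: retry a condition at a short poll
// interval, bounded by an overall timeout, instead of a fixed sleep.
public class ConditionalWait {

    static boolean waitUntil(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;          // stop as soon as the state is reached
            }
            Thread.sleep(50);         // poll interval, far shorter than the timeout
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition becomes true after ~200 ms, so the wait returns
        // early instead of burning the whole 2-second timeout.
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 200, 2000);
        System.out.println(ok);
    }
}
```

Selenium's WebDriverWait with ExpectedConditions implements the same idea; during review, suggest it wherever a Thread.sleep() appears.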

     Now you are ready to start reviewing automation test code. Test code review is a highly recommended practice and should be added to your automation project's life cycle. It will improve the quality of your automation test scripts, which in turn helps improve your application's quality so that your customers get a quality product that adds more value to their businesses. I hope you now have a clear idea of the different factors to consider during a test code review; try to apply them in your automation project life cycle.


Automation Exceptions and Solutions

     We know that an exception is an event that occurs during the execution of a program and disrupts the normal flow of the program's instructions. When an error occurs within a method, the method creates an exception object and hands it off to the runtime system, which then searches the call stack for a block of code that can handle it. This block of code is called an exception handler.

     In real-world scenarios, while automating with Selenium WebDriver, an automation engineer will definitely come across various exceptions that disrupt test case execution. The automation script you work with sometimes works properly and sometimes simply doesn't. For any script you develop, make sure that you deliver the best quality code with proper exception handling techniques implemented.

     In this article, I would like to share common exceptions raised during Selenium automation execution, along with solutions for handling them. The following are the different types of exceptions in Selenium.

WebDriverException

     WebDriverException arises when your code tries to perform an action on a non-existing browser session, for example, trying to use the driver after closing the driver session.

WebDriver driver = new ChromeDriver();
driver.get("https://journeyofquality.com/");
driver.close();
driver.findElement(By.id("bu345")).click();

Solution: You can handle this WebDriverException by calling driver.close() only after all tests have completed, for example in a method annotated with @AfterSuite if you are using TestNG, instead of calling it after each test case.

SessionNotFoundException

     SessionNotFoundException occurs when the driver tries to perform operations on the web application after the browser has been quit via driver.quit().

@Test
public void openJourneyOfQuality() {
    driver.get("https://journeyofquality.com/");
    Assert.assertEquals(driver.getTitle(), "Journey");
    driver.quit();
    Assert.assertEquals(driver.getCurrentUrl(), "https://journeyofquality.com/");
}

Solution: This exception can be handled by calling driver.quit() after all tests have completed instead of after each test case. Quitting early leads to issues where the driver instance becomes null and subsequent test cases try to use it without reinitializing. We can kill the driver instance once all test execution has completed, so we can add driver.quit() in a TestNG @AfterSuite method.

StaleElementReferenceException

     You will come across StaleElementReferenceException mainly when Selenium navigates to a different page, comes back to the same old page, and performs operations on an element of the old page that is no longer available. Technically, it occurs when the element referenced in the script is no longer in the cache and the script tries to locate it again. When we inspect and locate an element on a page using Selenium, it is stored in a cache that gets cleared when the driver navigates to another page during execution. When the user navigates back to the old page and tries to access the now-evicted element, we get a StaleElementReferenceException during script execution.

Solutions:

  • Refresh the webpage and perform the action on that web element again.
  • Wrap the interaction in a try-catch block inside a for loop to locate the element and perform the action; once the action succeeds, break out of the loop.

for (int value = 0; value <= 2; value++) {
    try {
        driver.findElement(By.xpath("webElement")).click();
        break;
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
}

  • Wait for the element to be available by using ExpectedConditions:

wait.until(ExpectedConditions.presenceOfElementLocated(By.id("webElement")));

NoSuchElementException

     NoSuchElementException occurs when the element locator provided in the Selenium script is unable to find that web element on the web page. This typically happens in two ways:

  • We provided an incorrect locator and tried to find the web element.
  • We provided a correct locator, but the web element is not yet available on the web page; that is, the action was performed before the element loaded.

@Test
public void testJourney() {
    driver.get("https://journeyofquality.com/");
    try {
        driver.findElement(By.id("invalidelement")).click();
    } catch (NoSuchElementException e) {
        System.out.println("No Such Element exceptional case");
    }
}

     The above code tries to locate an element with the id invalidelement on the website's page. When the element is not found, the application throws a NoSuchElementException.

Solution: Make sure that the locator (XPath/id) provided is correct, or use explicit waits for the presence of the element before performing the action on it. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.presenceOfElementLocated(By.id("invalidelement")));

ElementNotVisibleException

     ElementNotVisibleException is thrown when WebDriver tries to perform an action on an invisible web element, which cannot be interacted with because the element is in a hidden state. For example, if a button is not displayed on the web page but its HTML code is present, trying to click that particular button using locators in the automation script will raise an ElementNotVisibleException. This exception can also happen when the page has not loaded completely and the user tries to interact with an element.

Solution: We have to wait until that particular element becomes visible on the web page. To tackle this, you can use Selenium's explicit wait methods. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("invisibleElement")));

NoSuchFrameException

     On a webpage, a user may have to deal with HTML documents embedded inside another HTML document; these are called iframes. In order to work with the web elements inside any iframe, we first have to switch to that particular iframe in Selenium, using the frame name or frame ID, and then inspect and locate the respective web elements inside it. NoSuchFrameException occurs when the driver in the Selenium script is unable to find the frame to switch to, which happens when the driver switches to an invalid or non-existent iframe.

Solution: We have to wait for that particular frame to be available on the web page. To tackle this, you can use Selenium's explicit wait methods. For example:

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(frame_id));

NoAlertPresentException

     An alert is a pop-up which provides important information to users or asks them to perform a certain operation, such as reading the message on the alert, accepting the alert by pressing the OK button, or dismissing it by pressing the CANCEL button. In order to work with alert pop-ups, we have to first switch to the alert and then perform operations on the alert window. NoAlertPresentException occurs when the driver is unable to find the alert to switch to, that is, when it switches to an invalid or non-existing alert pop-up. Sometimes this exception is thrown simply because the alert has not loaded completely.

Solution: To handle NoAlertPresentException, include the script inside a try-catch block and provide an explicit wait for the alert to be available, as shown below.

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.alertIsPresent());
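Once the wait succeeds, the script still has to act on the alert. A minimal sketch of that follow-up (note that alertIsPresent() conveniently returns the Alert object on success, so it can be reused directly; this cannot be run without a live browser session):

```java
// Wait for the alert, then interact with it; alertIsPresent() returns
// the Alert object when it succeeds, so no separate switchTo() is needed.
Alert alert = wait.until(ExpectedConditions.alertIsPresent());
System.out.println(alert.getText()); // read the message shown on the alert
alert.accept();                      // press OK; use alert.dismiss() for CANCEL
```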

NoSuchWindowException

     We know that Selenium automates web applications only, and in real time we may have to deal with browser windows that open when you click on buttons or links. In order to interact with the web elements on any browser pop-up window, we have to handle all the opened windows and then locate the respective web elements inside the pop-up window. NoSuchWindowException occurs when the driver in the script is unable to find the window to switch to, that is, when the window handle doesn't exist or is not available.

Solution: We would iterate through all the active windows handles and perform the desired actions. For example,

for (String handle : driver.getWindowHandles()) {
    try {
        driver.switchTo().window(handle);
    } catch (NoSuchWindowException e) {
        System.out.println("No such window exceptional case");
    }
}

TimeoutException

     Waits are mainly used in WebDriver to avoid exceptions like ElementNotVisibleException, which occur when trying to click on a button before the page has completely loaded. TimeoutException occurs when a command does not complete within the wait time. If the components still don't load after the mentioned wait time, org.openqa.selenium.TimeoutException is thrown.

Solution: To avoid this exception, wait commands are added as either an implicit, explicit, or fluent wait.

Implicit Wait:

     The Implicit Wait in Selenium is used to tell the web driver to wait for a certain amount of time before it throws a NoSuchElementException. The default setting is 0. Once we set the time, the web driver will wait for the element for that time before throwing an exception. For example,

driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
driver.get("https://journeyofquality.com/");

     In the above code, an implicit wait of 15 seconds is added. The implicit wait applies to element lookups: for every findElement call on https://journeyofquality.com/, WebDriver will poll for up to 15 seconds before throwing NoSuchElementException.

Explicit Wait:

     The Explicit Wait in Selenium is used to tell the WebDriver to wait until certain conditions (ExpectedConditions) are met or the maximum time is exceeded before throwing an exception. It is an intelligent kind of wait, but it can be applied only to specified elements. We already discussed a few explicit-wait examples above, such as:

wait.until(ExpectedConditions.presenceOfElementLocated(By.id("webElement")));
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt("frame_id"));
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("invisibleElement")));

Fluent Wait:

     The Fluent Wait in Selenium is used to define the maximum time for the web driver to wait for a condition, as well as the frequency with which we want to check the condition before throwing an exception. It checks for the web element at regular intervals until the object is found or the timeout happens. The polling frequency sets up a repeat cycle so the condition is checked at regular intervals of time. Below is a sample implementation of fluent wait:

Wait<WebDriver> wait = new FluentWait<>(driver)
    .withTimeout(Duration.ofSeconds(SECONDS))
    .pollingEvery(Duration.ofSeconds(SECONDS))
    .ignoring(Exception.class);
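The snippet above only configures the wait; to actually use it, pass a condition to until(...). A minimal sketch, assuming a hypothetical dynamicElement id on the page under test (this cannot run without a live browser session):

```java
// Poll for the element at the configured frequency until it appears,
// or let a TimeoutException propagate once the timeout elapses.
WebElement element = wait.until(
        d -> d.findElement(By.id("dynamicElement")));
element.click();
```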

Overall Exception Handling Solutions

     In order to handle the above types of Selenium exceptions, we will use the following exception handling mechanisms:

Throw: The throw keyword is used to throw an exception back to the runtime to handle it.

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (Exception e) {
        // Do whatever you wish to do here
        // Now throw the exception back to the system
        throw e;
    }
}

Multiple Catch Blocks: You can use multiple catch blocks to catch different types of exceptions. The syntax for multiple catch blocks looks like the following:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (ExceptionType1 e1) {
        // Code for Handling the Exception 1
    } catch (ExceptionType2 e2) {
        // Code for Handling the Exception 2
    }
}
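As a concrete, runnable illustration of the template above (the class and method names here are hypothetical), the following maps two different failure modes of a simple parse-and-divide task to two different catch blocks:

```java
public class MultiCatchDemo {

    // Parses the text as an integer and divides 100 by it; each
    // failure mode is handled by its own catch block.
    public static String divideHundredBy(String text) {
        try {
            int divisor = Integer.parseInt(text);
            return "result=" + (100 / divisor);
        } catch (NumberFormatException e1) {
            return "not a number";      // handling for exception type 1
        } catch (ArithmeticException e2) {
            return "division by zero";  // handling for exception type 2
        }
    }

    public static void main(String[] args) {
        System.out.println(divideHundredBy("4"));   // result=25
        System.out.println(divideHundredBy("abc")); // not a number
        System.out.println(divideHundredBy("0"));   // division by zero
    }
}
```

Only the first catch block whose exception type matches the thrown exception executes; the others are skipped.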

Try/Catch: A try block encloses code that can potentially throw an exception. A catch() block contains exception-handling logic for exceptions thrown in a try block. Code within a try/catch block is referred to as protected code, and the syntax for using try/catch looks like the following:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (Exception e) {
        // Code for Handling the Exception
    }
}

Finally: A finally block contains code that must be executed whether an exception is thrown or not:

public static void anyMethod() throws Exception {
    try {
        // write your code here
    } catch (ExceptionType1 e1) {
        // Code for Handling the Exception 1
    } catch (ExceptionType2 e2) {
        // Code for Handling the Exception 2
    } finally {
        // The finally block always executes.
    }
}
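A small runnable example (with hypothetical names) showing that the finally block executes on both the normal and the exceptional path, which is why cleanup such as driver.quit() in a Selenium script belongs there:

```java
public class FinallyDemo {

    // Divides 10 by the given divisor and records which blocks ran.
    public static String run(int divisor) {
        StringBuilder log = new StringBuilder();
        try {
            int quotient = 10 / divisor; // throws ArithmeticException when divisor is 0
            log.append("try succeeded, quotient=").append(quotient);
        } catch (ArithmeticException e) {
            log.append("caught ").append(e.getClass().getSimpleName());
        } finally {
            log.append(", finally ran"); // executes on both paths
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(5)); // try succeeded, quotient=2, finally ran
        System.out.println(run(0)); // caught ArithmeticException, finally ran
    }
}
```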

     I hope you got an idea of the various common Selenium exceptions, their solutions, and the general way to handle them. Try to implement these exception-handling mechanisms in your automation scripts and handle these runtime anomalies.

make it perfect!