Test Automation Code Review Factors

     A code review is a quality assurance activity that ensures check-ins are reviewed by someone other than the author. Teams often practice this activity because it’s an excellent way to catch errors early in the process. Test code should be treated with the same care as feature code, and therefore it should undergo code reviews as well. Test code can be reviewed by other automation engineers or by developers who are familiar with the project and codebase.

     In this article, we will discuss what exactly to look for when reviewing automation test code. I would like to share eight specific factors here.

Does your test case verify what’s needed?

     When verifying something manually, we make a lot of hidden validations along the way, so if anything is incorrect we are likely to notice it. Our automated tests are not as good at this; in fact, they will only fail if the conditions that we explicitly specify are not met. During automation scripting, we usually add only a minimal number of medium- to high-level checkpoints. We should instead add as many low-level checkpoints as possible at each step to get maximum test coverage; this will help you to increase the quality of your software. The test automation code review will help to identify the missing checkpoints.
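
     To make this concrete, here is a minimal sketch of a test with explicit low-level checkpoints, assuming a Java/Selenium/TestNG stack; the URL, element IDs, and credentials are hypothetical.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginCheckpointsTest {

    @Test
    public void loginShowsDashboard() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demo-user");
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.id("login-button")).click();

            // Low-level checkpoints: verify several observable details of the
            // resulting state, not just that "a page loaded".
            Assert.assertEquals(driver.getTitle(), "Dashboard");
            Assert.assertTrue(driver.findElement(By.id("welcome-banner")).isDisplayed());
            Assert.assertEquals(
                    driver.findElement(By.id("logged-in-user")).getText(), "demo-user");
        } finally {
            driver.quit();
        }
    }
}
```

     Each extra assertion is a validation a manual tester would have made implicitly; a reviewer should ask whether any such checkpoint is missing.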

Does the test case focus on one specific thing?

     Each test case should focus on one specific thing. This may seem confusing when a test asserts a whole bunch of things; however, all of those assertions should work together to verify that single thing. If, for example, the test case also verified the company’s logo or some other feature besides the one actually being automated, that would be outside of the scope of this test. The automation code review helps to identify and flag such out-of-scope items implemented in the test scripts.
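
     As an illustration (the cart model and test are hypothetical), the review comment below flags an assertion that belongs in a separate test:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class CartScopeTest {

    // Hypothetical minimal model, just enough to make the test runnable.
    static class ShoppingCart {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        void addItem(String sku) { items.add(sku); }
        int getItemCount() { return items.size(); }
        boolean containsItem(String sku) { return items.contains(sku); }
    }

    @Test
    public void addItemToCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("SKU-123");

        // In scope: both assertions work together to verify the one behavior
        // under test -- adding an item to the cart.
        Assert.assertEquals(cart.getItemCount(), 1);
        Assert.assertTrue(cart.containsItem("SKU-123"));

        // Out of scope: verifying the company logo (or any unrelated feature)
        // belongs in its own test, so a reviewer would flag a line like:
        // Assert.assertTrue(header.isLogoDisplayed());
    }
}
```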

Can the test cases run independently?

     Each test should be independent, which means it should not rely on other tests at all. This makes it much easier to track down failures and their root causes, and it also enables the team to run the tests in parallel to speed up execution if needed. Sometimes automation engineers fall into a trap while isolating test cases, because they use earlier test runs as the setup for later tests. For example, consider a test that deletes an item from a list: they first run the case that adds an item to the list, and only after that do they execute the case that deletes the item. In this situation, we can recommend that the test create and delete whatever it needs itself, and, if possible, do this outside of the GUI (like via API calls or database calls).
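
     A sketch of that recommendation, assuming a TestNG suite where the precondition is arranged through a hypothetical REST endpoint rather than through another test:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class DeleteItemTest {

    private final HttpClient http = HttpClient.newHttpClient();
    private WebDriver driver;
    private String itemId;

    @BeforeMethod
    public void setUp() throws Exception {
        // Arrange the precondition through the API, not through another test.
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/items")) // hypothetical endpoint
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"temp-item\"}"))
                .build();
        // Assumption: the API responds with the new item's id in the body.
        itemId = http.send(create, HttpResponse.BodyHandlers.ofString()).body();
        driver = new ChromeDriver();
    }

    @Test
    public void deleteItemFromList() {
        // Only the behavior under test is driven through the GUI.
        driver.get("https://example.com/items"); // hypothetical page
        driver.findElement(By.id("delete-" + itemId)).click();
        Assert.assertTrue(driver.findElements(By.id("row-" + itemId)).isEmpty());
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
```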

How is the test data managed for test case executions?

     The way test cases deal with test data can make the difference between a stable and a flaky test suite. Since each test case should be developed to run independently, and all the test cases should be able to run in parallel at the exact same time, each test should be responsible for its own test data. When tests running in parallel have different expectations of the test data’s state, they end up failing when there’s no real issue with the application. It is recommended that the automation engineer create whatever data is needed within the test itself, or keep the data in external files and load it into the specific test cases.
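
     One way to apply this with TestNG is a data provider that loads each test’s rows from an external file, so every invocation owns its data; the file path and format here are hypothetical.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Loads one row per test invocation from an external file (hypothetical
    // path and format: "username,password,expectedMessage" per line).
    @DataProvider(name = "loginData")
    public Object[][] loginData() throws Exception {
        List<String> lines =
                Files.readAllLines(Paths.get("src/test/resources/login-data.csv"));
        return lines.stream()
                .map(line -> (Object[]) line.split(","))
                .toArray(Object[][]::new);
    }

    @Test(dataProvider = "loginData")
    public void loginShowsExpectedMessage(String username, String password, String expected) {
        // The test owns its row of data; no other test mutates it, so the
        // suite stays safe to run in parallel.
        Assert.assertNotNull(username);
        Assert.assertNotNull(password);
        Assert.assertNotNull(expected);
        // ...drive the login through the page object here (omitted).
    }
}
```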

Is there any separation of concerns?

     You should treat your test code with the same care as feature code. That means that clean coding practices, such as separation of concerns, should be followed. The test method should only focus on putting the application in the desired state and verifying that state. The implementation of manipulating the application’s state should not live within the test itself. For example, if you have a test case to submit an application, the method submitApplication() should be added as a test step in the test class, while the actual implementation of submitApplication() should live in a separate helper class. Also, the non-test methods should not make any assertions. Their responsibility is to manipulate the state of the application, and the test’s responsibility is to verify that state. Adding assertions within non-test methods decreases the reusability of those methods.
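
     A minimal sketch of this split, keeping the article’s submitApplication() example; the URL, element IDs, and confirmation text are hypothetical.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// Helper class: manipulates application state and makes no assertions,
// so it stays reusable across tests.
class ApplicationHelper {
    private final WebDriver driver;

    ApplicationHelper(WebDriver driver) {
        this.driver = driver;
    }

    void submitApplication(String applicantName) {
        driver.findElement(By.id("applicant-name")).sendKeys(applicantName);
        driver.findElement(By.id("submit-button")).click();
    }

    String getConfirmationText() {
        return driver.findElement(By.id("confirmation")).getText();
    }
}

public class SubmitApplicationTest {

    @Test
    public void submittedApplicationIsConfirmed() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/apply"); // hypothetical URL
            ApplicationHelper helper = new ApplicationHelper(driver);

            // The test step reads as intent; the implementation lives elsewhere.
            helper.submitApplication("Jane Doe");

            // Verification stays in the test method, not in the helper.
            Assert.assertEquals(helper.getConfirmationText(), "Application submitted");
        } finally {
            driver.quit();
        }
    }
}
```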

Is there anything in the test that should be a utility for reusability?

     Separating concerns already addresses this in most cases, but double-check that the test isn’t implementing something that can be reused by other tests. Sometimes this may be a common series of verification steps that certainly is the test’s responsibility but would be duplicated across multiple tests. In these cases, it’s a good idea to recommend that the automation engineer move this block of code into a utility method that can be reused by multiple tests.
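
     For example, a repeated series of checks can become a shared verification utility; the class and element IDs below are hypothetical.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;

// A verification utility: a reusable block of assertions that several tests
// would otherwise duplicate.
public final class OrderAssertions {

    private OrderAssertions() { }

    public static void assertOrderSummary(WebDriver driver, String item, String total) {
        Assert.assertEquals(driver.findElement(By.id("summary-item")).getText(), item);
        Assert.assertEquals(driver.findElement(By.id("summary-total")).getText(), total);
        Assert.assertTrue(driver.findElement(By.id("summary-date")).isDisplayed());
    }
}
```

     Any checkout, reorder, or refund test can now call OrderAssertions.assertOrderSummary(...) instead of repeating the same three checks.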

Are the object locators reliable?

     In the case of graphical user interface and mobile tests, the automation engineer will need to use selectors to locate and interact with the web elements. The first thing to ensure is that these locators are not within the tests themselves; think of separation of concerns again. Also, make sure the selectors can stand the test of time. For example, selectors that depend on the DOM structure of the page (e.g. an index) will only work until the structure of the page changes. Encourage the automation engineer to use unique IDs to locate any elements they need to interact with, even if that means adding the ID to the element as part of this check-in, or to create a robust custom XPath to locate and interact with the elements.
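
     The contrast, in Selenium terms (the element and attribute names are hypothetical):

```java
import org.openqa.selenium.By;

public class CheckoutLocators {

    // Brittle: breaks as soon as another div or button is added to the page.
    static final By SUBMIT_BRITTLE = By.xpath("/html/body/div[2]/form/button[1]");

    // Preferred: a unique ID, added to the element if necessary.
    static final By SUBMIT_BY_ID = By.id("checkout-submit");

    // Acceptable fallback: a custom XPath anchored on stable attributes
    // rather than on the page structure.
    static final By SUBMIT_BY_ATTRIBUTE = By.xpath("//button[@data-test='checkout-submit']");
}
```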

Are they using a stable wait strategy?

     Mark any hard-coded waits. Automated tests run faster than customers would actually interact with the product, and this can lead to issues such as trying to interact with or verify something that is not yet in the desired state. To solve this, the code needs to wait until the application is in the expected state before proceeding. However, hard-coding a wait is not suitable: it leads to all kinds of problems, such as lengthening the duration of execution, or still failing because it didn’t wait long enough. Instead, you can recommend that the automation engineer use conditional waits that pause execution only for the least amount of time needed.
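
     A sketch of the review comment in code, using Selenium 4’s WebDriverWait; the page and element ID are hypothetical.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitStrategyExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/orders"); // hypothetical page

        // Flagged in review: a hard-coded wait always costs 5 seconds and
        // still fails if the page takes 6.
        // Thread.sleep(5000);

        // Preferred: a conditional wait that returns as soon as the element
        // is ready, up to a 10-second ceiling.
        WebElement status = new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("order-status")));
        System.out.println(status.getText());

        driver.quit();
    }
}
```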

     Now you are ready to start the automation test code review. Test code review is a highly recommended process, and it should be added to your automation project’s life cycle. It will improve the quality of your automation test scripts, and in that way it helps to improve your application’s quality, so your customers get a quality product that adds more value to their businesses. I hope you now have a clear idea of the different factors that need to be considered during a test code review; try to apply them in your automation project life cycle.

Make it perfect!
