In the last post on AI’s effect on testing (if you missed that post, check it out), we talked about how the adoption of AI, and the need for testers to learn how to test AI, will change testing. In this post, I’m going to add my thoughts on smarter test automation.

One of the main ways testing is already changing is the use of AI to help determine how an automated test should find and interact with elements. One of the primary causes of automation failure is incorrect or invalid locators.

A number of years ago I built a framework that used the “name” of each element rather than a specific locator. The first time a test ran, it would find the element based on its name and then record several ways of finding that element. If one way broke, the framework could fall back to another, then check that the element it found still had the same name. If none of the recorded methods worked, the framework would re-find the element from scratch. This made the tests more robust, but a lot slower.
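The fallback idea above can be sketched in a few lines. This is a hypothetical illustration, not the original framework: `FakeDriver`, the `(strategy, value)` locator pairs, and the element dictionaries are all stand-ins for a real WebDriver-style API.

```python
class FakeDriver:
    """Toy page model: maps (strategy, value) locator pairs to element dicts."""
    def __init__(self, elements):
        self.elements = elements

    def find_element(self, strategy, value):
        return self.elements.get((strategy, value))


class ResilientLocator:
    """Remembers several ways to find one named element; tries each in
    turn and confirms the match by the element's recorded name."""
    def __init__(self, name, locators):
        self.name = name          # logical name, e.g. "login-button"
        self.locators = locators  # list of (strategy, value) pairs

    def find(self, driver):
        for strategy, value in self.locators:
            element = driver.find_element(strategy, value)
            # Verify the element we found is still the one we recorded.
            if element is not None and element.get("name") == self.name:
                return element
        return None  # every recorded way broke: re-discover and re-record


# The id locator has gone stale, but the CSS locator still works.
page = FakeDriver({("css", ".login"): {"name": "login-button", "text": "Log in"}})
locator = ResilientLocator("login-button", [("id", "btn-login"), ("css", ".login")])
found = locator.find(page)
```

The extra lookups and the name check are exactly where the robustness-versus-speed trade-off mentioned above comes from.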

This new breed of smarter automated tests uses AI to power element finding. Tests are usually recorded, and during recording the selected elements and all of their attributes are stored. When a running test needs to interact with an element, it looks at the page source and tries to determine which element on the page best matches the one previously recorded.
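At its simplest, that matching step can be thought of as scoring every element on the current page against the recorded attributes and picking the closest match. The scoring function below is a deliberately naive sketch of my own; real tools use far richer signals than equal-weighted attribute equality.

```python
def similarity(recorded, candidate):
    """Fraction of the recorded attributes the candidate still matches."""
    if not recorded:
        return 0.0
    matches = sum(1 for k, v in recorded.items() if candidate.get(k) == v)
    return matches / len(recorded)


def best_match(recorded, page_elements, threshold=0.5):
    """Return the page element most similar to the recorded one, or None
    if nothing scores above the threshold."""
    scored = [(similarity(recorded, el), el) for el in page_elements]
    score, element = max(scored, key=lambda pair: pair[0])
    return element if score >= threshold else None


# The button's id changed between recording and this run, but the tag and
# visible text still match, so it is recovered anyway.
recorded = {"tag": "button", "id": "submit", "text": "Submit"}
page = [
    {"tag": "a", "id": "home", "text": "Home"},
    {"tag": "button", "id": "submit-btn", "text": "Submit"},
]
match = best_match(recorded, page)
```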

Because of the way they find elements, these smarter tests are more robust than conventional automated tests and should therefore require less maintenance. I haven’t used the new tools enough to fully determine whether there are pitfalls to their use, but I will update this with full reviews of the tools later. At Appsurify we are working on adding AI to my old framework and open-sourcing it in the future.

The next stage of utilizing AI within test automation may be to simply describe what the test should do, similar to BDD, and then have the automation discover how to do it itself. If so, the automation would need to be supplemented with additional testing.
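To make the describe-and-discover idea concrete, here is a speculative sketch in the BDD spirit: plain-English steps are matched against registered patterns and dispatched to handlers. Everything here, the step patterns, the handlers, and the returned action log, is hypothetical; a real AI-driven tool would infer the actions rather than rely on hand-written regexes.

```python
import re

ACTIONS = []


def step(pattern):
    """Register a handler for steps matching a regex pattern."""
    def register(fn):
        ACTIONS.append((re.compile(pattern), fn))
        return fn
    return register


@step(r'type "(.+)" into (\w+)')
def type_into(text, field):
    return f"typed {text!r} into {field}"


@step(r"click (\w+)")
def click(target):
    return f"clicked {target}"


def run(description):
    """Execute each described step via the first matching handler."""
    results = []
    for line in description.strip().splitlines():
        for pattern, fn in ACTIONS:
            match = pattern.fullmatch(line.strip())
            if match:
                results.append(fn(*match.groups()))
                break
    return results


log = run('''
type "alice" into username
click login
''')
```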

Finally, we can add additional smarts to the bot itself, perhaps using AI-powered screenshot comparison like Applitools. I’m sure more of this type of functionality is coming in the near future.
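For contrast, here is the naive pixel-diff baseline that AI-powered visual tools improve on: flag a change when the fraction of differing pixels exceeds a tolerance. The flat-list grayscale representation and the 1% tolerance are illustrative assumptions, not how any particular tool works.

```python
def diff_ratio(baseline, current):
    """Fraction of pixels that differ between two same-sized grayscale
    screenshots, each given as a flat list of 0-255 values."""
    if len(baseline) != len(current):
        raise ValueError("screenshots must be the same size")
    changed = sum(1 for a, b in zip(baseline, current) if a != b)
    return changed / len(baseline)


def screens_match(baseline, current, tolerance=0.01):
    """True if at most `tolerance` of the pixels have changed."""
    return diff_ratio(baseline, current) <= tolerance


# One pixel changed out of 100: exactly at the 1% tolerance, so it passes.
baseline = [255] * 100
current = [255] * 99 + [0]
ok = screens_match(baseline, current, tolerance=0.01)
```

A naive diff like this fails on anti-aliasing, dynamic content, and layout shifts, which is precisely the gap AI-powered comparison aims to close.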

Again, what I would like to see is this new smarter testing combined with targeted, prioritized tests, at least until we have more experience with, or have discovered any shortfalls of, AI-powered test automation. In this case, you would still have a smaller number of “less smart” standard automation tests to maintain and triage, but this should lessen the burden of those test cases.

If you are interested in implementing AI within your organization, we can help. Reach out to us:

Contact Appsurify