Open-source automated test suites built with frameworks such as Selenium, Appium, or Serenity can take a long time to complete. No wonder development teams are frequently researching methods for faster automated testing. Parallel testing and machine learning for smart software testing both present valid approaches worth exploring. What if you combined the two?
Whether your test suite consists of unit tests, integration tests, or functional (UI) tests, any time the team spends waiting on test results hurts output. It’s no surprise that automated test suites are a common culprit behind slower cycle times, delayed releases, laborious rework, and, ultimately, team frustration.
Drawing on 15 years in the software testing space, we will dive into two options we’ve implemented in the past to improve the timeliness of feedback from testing. We will also address combining them for even greater results.
1. Parallel Testing or Multi-threaded Testing:
Parallel testing is a method of executing several test scripts concurrently, with each test consuming different resources. The objective is to ease the time constraint by distributing tests across available resources: in theory, the more threads you have, the more tests you can run simultaneously. Speed, better coverage, and optimization of your CI/CD processes can all be achieved when parallelism is meticulously executed. This method works especially well with open-source test frameworks such as Selenium, Appium, and Serenity.
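A minimal sketch of the idea, using only Python’s standard library. The test functions here are hypothetical placeholders; in a real suite each would drive its own Selenium or Appium session against separate resources.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def test_login():
    time.sleep(0.1)  # stand-in for real browser work
    return ("test_login", "passed")

def test_search():
    time.sleep(0.1)
    return ("test_search", "passed")

def test_checkout():
    time.sleep(0.1)
    return ("test_checkout", "passed")

def run_parallel(tests, max_workers=4):
    """Run independent tests concurrently and collect their results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each thread executes one test; results arrive as (name, outcome) pairs.
        return dict(pool.map(lambda t: t(), tests))

results = run_parallel([test_login, test_search, test_checkout])
```

With enough workers, the three tests above finish in roughly the time of the slowest one rather than the sum of all three, which is the entire appeal of parallel execution.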
Many testers find it difficult to run tests in parallel because of dependencies between test cases. If tests depend on one another, they need to run in a particular order – that is, not in parallel – or things begin to break. Opening more threads does little good when many tests rely on earlier ones passing, so there is a cap on how much you can parallelize, and beyond that point additional threads make no difference to the time saved.
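One common workaround is to group dependent tests into chains: each chain runs serially in order, while independent chains run in parallel. A toy sketch of that scheduling idea (all test names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain):
    """Run one dependency chain in order; stop at the first failure."""
    results = {}
    for name, test in chain:
        outcome = test()
        results[name] = outcome
        if outcome != "passed":
            break  # later tests in this chain depend on this one
    return results

def run_suite(chains, max_workers=4):
    """Chains run in parallel with each other; tests within a chain run serially."""
    merged = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for chain_result in pool.map(run_chain, chains):
            merged.update(chain_result)
    return merged

chains = [
    [("create_user", lambda: "passed"), ("login", lambda: "passed")],
    [("load_catalog", lambda: "failed"), ("add_to_cart", lambda: "passed")],
]
results = run_suite(chains)
# add_to_cart never runs because load_catalog, which it depends on, failed
```

Note the ceiling this imposes: the suite can never finish faster than its longest chain, no matter how many threads you add.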
The more threads you have running, the greater the resource demand on your test infrastructure. To get faster feedback by running more processes simultaneously, something’s gotta give – and in this case it’s very high CPU and memory utilization. If you are testing locally, allocate additional resources and make sure systems are up to date. If you push up to the cloud, expect high utilization and a sobering monthly bill. At the end of the day, you have to calculate whether the time saved via parallel testing is worth the additional cost. The law of diminishing returns very much applies to parallel testing used on its own.
However, what if you added Intelligence?
2. Machine Learning to Prioritize your Automated Tests:
There have been vast improvements in leveraging machine learning in the QA software testing space over the last 18–24 months. One area where ML adds significant value is building a classification model that learns from prior test failures. By analyzing those failures, ML can build a surprisingly accurate risk-based model to predict the likelihood of future failures. After any change to the code, the model can predict and automatically run just the subset of tests associated with the changed area. Essentially: why run 500 Serenity tests when only 50–60 were affected?
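To make the idea concrete, here is a deliberately simple, hedged illustration. Real tools such as TestBrain train proper classifiers on rich failure history; this toy version only scores each test by its historical failure rate, boosted when the test touches the code area a commit changed. All names and data are hypothetical.

```python
def prioritize(history, changed_area, top_fraction=0.1):
    """Return the top fraction of tests most likely to fail for this change."""
    scores = {}
    for test, record in history.items():
        runs = record["passes"] + record["failures"]
        fail_rate = record["failures"] / runs if runs else 0.0
        # Boost tests that exercise the area the commit changed.
        boost = 2.0 if changed_area in record["areas"] else 1.0
        scores[test] = fail_rate * boost
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return ranked[:k]

history = {
    "test_checkout": {"passes": 40, "failures": 10, "areas": {"payments"}},
    "test_login":    {"passes": 50, "failures": 0,  "areas": {"auth"}},
    "test_refund":   {"passes": 30, "failures": 5,  "areas": {"payments"}},
    "test_search":   {"passes": 48, "failures": 2,  "areas": {"catalog"}},
    "test_profile":  {"passes": 45, "failures": 1,  "areas": {"auth"}},
}
selected = prioritize(history, changed_area="payments", top_fraction=0.4)
# -> ["test_checkout", "test_refund"]: the historically flaky payments tests
```

A production model would use far richer signals (code coverage, commit diffs, test age), but the shape of the output is the same: a small, ranked subset of the suite.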
Teams can then run only a small portion of the full test suite after each developer commit for faster feedback. For example, by running only 10% of their Selenium test suite after each change, teams can expect results back in roughly 90% less time than the whole suite would take to complete.
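The arithmetic behind that claim is simple, assuming each test contributes roughly equally to total runtime. Using the 3-hour, 500-test suite from later in this article as an assumed baseline:

```python
# Back-of-the-envelope check: a 10% subset takes ~10% of the serial time.
full_suite_minutes = 180            # assumed: 500 tests, ~3 hours serial
subset_fraction = 0.10              # ML selects 10% of the tests
subset_minutes = full_suite_minutes * subset_fraction
time_saved = 1 - subset_minutes / full_suite_minutes
print(subset_minutes, time_saved)   # 18.0 minutes, 0.9 (i.e. 90% less time)
```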
Software testers will likely be concerned that the ML model will miss defects.
ML tools are meant to help QA get higher-quality feedback to the team rapidly after each commit – saving precious time otherwise spent waiting for results and alleviating the bottleneck test suites create. The tradeoff lies in the confidence level you select for the ML tool to run at. TestBrain, for example, is able to catch 98.5% of bugs while running only 10% of the test suite.
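That confidence knob can be pictured as a threshold on the model’s per-test failure probabilities: lower the threshold and more tests run (safer, slower); raise it and fewer run (faster, riskier). A hedged sketch with hypothetical probabilities:

```python
def select_tests(failure_probs, threshold):
    """Keep tests whose predicted failure probability meets the threshold."""
    return sorted(t for t, p in failure_probs.items() if p >= threshold)

# Hypothetical model output: predicted probability each test fails.
probs = {"test_a": 0.92, "test_b": 0.15, "test_c": 0.55, "test_d": 0.02}

aggressive = select_tests(probs, threshold=0.5)   # fastest feedback, more risk
cautious   = select_tests(probs, threshold=0.1)   # slower, catches more
# aggressive -> ["test_a", "test_c"]
# cautious   -> ["test_a", "test_b", "test_c"]
```

Choosing the threshold is exactly the confidence-versus-speed tradeoff described above.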
A good ML testing tool is meant to enhance existing test practices so that QA can test more frequently and accurately throughout the day. Best practice has teams running the prioritized tests throughout the day and running the whole test suite only once a week, once a month, or before a major release. This is how tech titans such as Google and Facebook conduct their testing.
3. Both Parallel Testing and Machine Learning Combined:
ML adds significant value to pre-existing test strategies such as parallel testing. Certain ML tools currently available, such as TestBrain, are completely agnostic to current test practices. With both ML and parallel testing at work, only the subset of important tests is pushed through the multi-threaded environment – achieving greater results than either could have achieved alone.
For example, with 500 UI tests that take 3 hours to complete, parallel testing may only cut this time by 30%, down to roughly 2.1 hours. However, if you additionally plug in automatic prioritized testing, you can cut the time by over 97%, because only 40–50 tests are now being run through the multi-threaded environment instead of all 500. Each strategy helps cut completion time on its own, but combined they provide a synergistic approach for faster results: from a 3-hour completion time down to roughly 5 minutes.
Various strategies exist to enhance current test practices and speed up results from open-source test frameworks. Some are more mature, realistic, and easier to adopt than others. After years of experience in QA, we’ve found parallel testing and machine learning to be the most effective. And a team that already has parallel testing in place can achieve far greater results by introducing machine learning on top of it.
With so many issues arising from long test suites, adopting one of these strategies can only help your productivity. Delayed feedback, context switching, and flaky failures can all be alleviated through enhanced software testing practices. ML has the potential to address the bulk of these issues in an economical and easily implemented fashion. That’s why we developed TestBrain: to give time and resources back to the software development community.