How does Appsurify’s Model determine Test Selection? #
Simple answer: recent developer changes and the trained model!
When Appsurify connects to the Repository and begins receiving incoming Commit data, it first analyzes the Repository’s basic structure.
Once the Automated Tests are connected, Appsurify leverages its patent-pending, proprietary AI Risk-Based Testing technology to train each company’s unique AI Model from incoming Commits and the corresponding Test results. Over a period of 2-3 weeks, Appsurify trains a robust AI Model that links Code to Tests: it learns which Commits impacted which Tests, forming an association between the two.
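As a rough illustration only (the real model is proprietary and far richer than this), the sketch below builds a simple co-occurrence table linking changed files to the Tests that failed on the same Commits. All file paths, test names, and data here are hypothetical:

```python
# Minimal, hypothetical sketch: learn an association between changed files
# and the Tests that failed on the same Commits. Not Appsurify's real model.
from collections import defaultdict

def train_association(history):
    """history: list of (changed_files, failed_tests) pairs, one per Commit."""
    counts = defaultdict(lambda: defaultdict(int))
    for changed_files, failed_tests in history:
        for path in changed_files:
            for test in failed_tests:
                # Count how often a change to this file coincided with this failure.
                counts[path][test] += 1
    return counts

# Invented training data: two Commits and the Tests that failed on each.
history = [
    ({"billing/invoice.py"}, {"test_invoice_totals", "test_invoice_pdf"}),
    ({"auth/login.py", "billing/invoice.py"}, {"test_login", "test_invoice_totals"}),
]
model = train_association(history)
```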
Once the AI Model is trained, each new Commit is matched against it: Appsurify knows which Tests are associated with that Commit, and automatically selects and executes the relevant Tests in order of priority in the CI/CD, based on the parameters set by the team, as sketched below.
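Continuing the hypothetical sketch above (and reusing the `model` table it produced), selection for a new Commit might amount to scoring each Test by how strongly it is associated with the changed files and keeping the highest-scoring ones up to a team-set budget:

```python
from collections import defaultdict

# `model` is the co-occurrence table from the previous sketch.
def select_tests(model, changed_files, budget=10):
    """Rank Tests by association with the changed files; keep the top `budget`."""
    scores = defaultdict(int)
    for path in changed_files:
        for test, count in model.get(path, {}).items():
            scores[test] += count
    # Highest-scoring Tests first; `budget` stands in for the team-set parameters.
    return sorted(scores, key=scores.get, reverse=True)[:budget]

print(select_tests(model, {"billing/invoice.py"}))
# ['test_invoice_totals', 'test_invoice_pdf']
```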
The AI Model is recalibrated on a rolling basis to ensure it is always up to date. For more information on our Test Selection, please see our Test Selection Whitepaper on the Appsurify website.
Will Appsurify catch every Failed Test? #
It isn’t designed to.
The AI Model is designed to catch as many defects as possible while running as few Tests as possible.
Appsurify is built to find bugs efficiently within its Smart Subset, while leaving room to catch other potential defects that cause other Test failures.
For example, if one bug causes 10 Tests to fail, the AI Model only needs to pick up one of those 10 failing Tests to surface the underlying defect. That leaves room in the Smart Subset selection to find other underlying bugs efficiently. This is explained in more detail in the following example:
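A minimal, hypothetical sketch of that arithmetic (bug names and test IDs are invented, and this is not Appsurify’s actual selection logic): picking one representative Test per predicted failure cluster surfaces every defect while running only a fraction of the failing Tests.

```python
# Hypothetical sketch of the "one test per defect" arithmetic; this is not
# Appsurify's actual selection logic. Suppose each bug makes a whole cluster
# of Tests fail:
predicted_clusters = {
    "bug_A": ["t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8", "t9", "t10"],
    "bug_B": ["t11", "t12"],
    "bug_C": ["t13"],
}

# One representative Test per predicted cluster is enough to surface
# every underlying defect.
subset = [tests[0] for tests in predicted_clusters.values()]
print(subset)       # ['t1', 't11', 't13']
print(len(subset))  # 3 Tests run instead of 13, yet all 3 defects surface
```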