Developers are in a tricky situation. The pressure cooker of customer demands requires teams to ship new iterations at an ever-faster pace. Top performers are already releasing more than four deploys each day, and nearly 50 percent of teams believe they will need to release even faster to meet future demands. Let’s increase the pressure: 22 billion records were exposed in 2020, and that number creeps up day by day. You don’t just need to release faster. You need to release faster and safer, and measuring your testing is how you get there.
Amid this proverbial ocean of apps and solutions, the increasingly dense cybersecurity landscape, and the immense pressure to release faster, smarter, and with more purpose lies a tangible threat: a lack of app quality. Nearly a quarter of DevOps teams believe they are under too much pressure to release quality code, and over a quarter believe that the time they spend detecting code quality issues cuts into their innovation time.
Sure. There are strategies and tools aimed at improving speed, security, and quality simultaneously, like DevSecOps (literally Development, Security, and Operations), Infrastructure-as-Code, and hyper-intelligent automated code monitoring solutions. But how do you really know if they’re working? How can you tell if that awesome shift-left strategy is producing meaningful and impactful quality improvements?
Get this: you can measure code quality. Better yet, you can measure quality over time to help demystify your tools and strategies. Imagine having speed + quality + security! It’s the dream. You need fast, secure, well-functioning apps capable of impressing customers, wowing clients, and boosting employee morale. Here’s how measuring testing can help you stay on top of your code quality while you hunt down your white whale of speed and non-stop improvement.
Why Should You Measure Testing?
Why not? You invest in next-gen monitoring tools, bake them into the early stages of your SDLC, and leverage all of these strategies and processes to bind teams together, boost accountability, and get code out faster. Shouldn’t you measure how well that monitoring solution is working? Better yet, shouldn’t you ensure it’s improving code quality, not just speed?
We firmly believe every component of your tech stack should be vetted. If you invest in a new monitoring solution, you need to prove (to both your DevOps team and the C-level boardroom) that it’s producing value. Measuring it helps you understand that value, contextualize it, and build upon it over time to create even faster iterations with higher quality.
How to Measure Testing
There are plenty of ways to abstractly conceptualize testing success (see: Mike Cohn’s Test Pyramid or the not-so-beloved Ice Cream Cone), but the vast majority of dev teams fail to actually measure testing using concrete data and metrics. In this post, we’re not going to give step-by-step instructions (everyone’s DevSecOps strategy is different, and testing isn’t plug-and-play). Instead, let’s discuss the core metrics and tests you should be running on your automated testing solutions and why they matter in the long run.
Ideally, you want to create high-quality code. And you want to test for quality without interrupting your iteration speed.
Static Code Quality
Let’s start with the basics: measuring code quality. Traditional code quality testing often involves measuring defects (e.g., number of reports, defect density, etc.) and code complexity (e.g., length, volume, etc.). We still recommend running these static code analysis tests on each CI/CD deployment, but automated testing tools that live-test code immediately after each CI/CD run are certainly preferred. At the end of the day, static code tests capture snapshots; you need to test code rapidly across each iteration. If you use an intelligent automation solution, you can benchmark that solution against traditional static code quality testing to figure out whether your new solution is giving you momentum.
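To make that concrete, here’s a minimal sketch of a static-quality gate, assuming a Python codebase and the open-source radon library for complexity analysis (the threshold is an illustrative placeholder, not a standard):

```python
# A minimal static-quality gate using the open-source "radon" library
# (pip install radon). It flags functions whose cyclomatic complexity
# exceeds a threshold -- one common proxy for the complexity metrics above.
from pathlib import Path

from radon.complexity import cc_visit

COMPLEXITY_THRESHOLD = 10  # illustrative cutoff; tune it to your codebase


def complexity_offenders(source_dir: str) -> list[str]:
    """Return 'file:function (score)' entries above the threshold."""
    offenders = []
    for path in Path(source_dir).rglob("*.py"):
        for block in cc_visit(path.read_text(encoding="utf-8")):
            if block.complexity > COMPLEXITY_THRESHOLD:
                offenders.append(f"{path}:{block.name} ({block.complexity})")
    return offenders


if __name__ == "__main__":
    for offender in complexity_offenders("src"):
        print(offender)
```

A gate like this runs in seconds on each deployment, which is exactly why it only catches snapshots: it knows nothing about how the code actually behaves at runtime.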
You can also track the code defect rate (i.e., the number of defects that slip into production) through your tickets. Ideally, your automated testing solutions should reduce tickets and improve traditional quality metrics.
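Here’s a hedged sketch of what tracking that looks like; the ticket counts are hypothetical placeholders you’d normally pull from Jira, GitHub Issues, or whatever tracker you use:

```python
# Defect escape rate: the share of defects that slipped past testing into
# production. The counts below are hypothetical; pull real numbers from
# your ticketing system.

def defect_escape_rate(found_in_prod: int, found_total: int) -> float:
    """Fraction of all defects that were first reported in production."""
    if found_total == 0:
        return 0.0
    return found_in_prod / found_total


# Example: 4 production bugs out of 32 total defects found this release.
print(f"Defect escape rate: {defect_escape_rate(4, 32):.1%}")  # -> 12.5%
```

Track this per release: if the rate trends down after you adopt an automated testing solution, the tool is earning its keep.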
Want to hack the entire process? Test your code based on risk. Use static quality tests and long-cycle automation tools for big iterations and changes, then leverage risk-based testing for bursts of speedy iterations. You still get high-quality code, but you don’t have to spend hours after each small change waiting for the “OK”.
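What might risk-based selection look like under the hood? Here’s a toy sketch; the test-to-file mapping and churn counts are hypothetical, and real tools derive them from coverage data and version-control history:

```python
# Toy risk-based test selection: score each test by the recent churn of the
# files it touches, then run only the riskiest tests on small iterations.
# All data below is hypothetical.

RECENT_CHURN = {"payments.py": 9, "auth.py": 5, "utils.py": 1}  # commits/30d

TEST_MAP = {  # which source files each test exercises
    "test_checkout": ["payments.py", "utils.py"],
    "test_login": ["auth.py"],
    "test_helpers": ["utils.py"],
}


def risk_ranked_tests(test_map: dict, churn: dict) -> list[str]:
    """Order tests by the total churn of the files they cover."""
    scores = {
        test: sum(churn.get(f, 0) for f in files)
        for test, files in test_map.items()
    }
    return sorted(scores, key=scores.get, reverse=True)


ranked = risk_ranked_tests(TEST_MAP, RECENT_CHURN)
print(ranked[:1])  # quick iterations run the top slice: ['test_checkout']
```

On big iterations you’d run the whole suite; on small ones, only the top of the ranking.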
Code Coverage and Test Coverage
There are two primary measures of how automated your testing is. Code coverage is the percentage of your code that is exercised by automated testing solutions (you could include manual testing, but most don’t). In practice, code coverage tells you which chunks of code have passed through your automated testing software and which ones have not. So, when you think about code coverage in relation to code quality and testing automation, it tells you what you need. Do you need a new testing solution to tackle a specific environment (e.g., VDIs, hybrid, cloud, browsers, mobile, etc.)? Or do you need to expand your testing suite to include a better, more functional solution? If your code coverage is great but your quality is poor, your automated testing solution isn’t doing its job.
Code coverage falls under the test coverage umbrella. Test coverage tells you how much of your total application was tested using automation. Again, both are relatively easy to measure. You can use open-source software or point solutions for each, and there are hundreds of open-source options across programming languages. You can also break code coverage down into categories (i.e., statement, function, branch, and line coverage), but most testing tools do this for you.
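For example, here’s a minimal sketch using coverage.py, one of those open-source options for Python (the package under test is a hypothetical name):

```python
# Measuring statement and branch coverage with the open-source coverage.py
# library (pip install coverage). "my_package" is a hypothetical stand-in
# for the code under test.
import coverage

cov = coverage.Coverage(branch=True)  # branch=True adds branch coverage
cov.start()

import my_package  # exercise the code here, e.g. by running your tests

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file coverage, with uncovered lines
```

In day-to-day use you’d more likely let a test-runner plugin handle this as part of your test command, but the numbers it spits out are the same coverage percentages discussed above.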
It’s important to understand that code coverage alone isn’t always the best barometer of testing strength. You don’t necessarily need every line of code tested each time you iterate (can you imagine how long that would take?). Instead, use code testing and code coverage to get an idea of your overall landscape. Then, you can start to break your code down into chunks based on risk.
Lead Time to Deployment
Your lead time to deployment is a measure of how long it takes a change to go from code commit to a production environment.
Ok. Bear with us. Lead time to deployment, which is, by all means, a measure of speed, is also an important measure of quality. We like to think of it as “actual quality”. If you have great code coverage with a tool that’s producing fantastic code quality, you’re in the golden zone, right? Well… maybe. But remember: you need to iterate FAST with quality. If your lead time to deployment is horrible, you still have a problem.
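Measuring it is straightforward; here’s a hedged sketch with hypothetical timestamps you’d normally pull from your version control and deployment tooling:

```python
# Lead time to deployment: elapsed time from each commit to the deploy that
# shipped it. The (commit, deploy) timestamp pairs below are hypothetical.
from datetime import datetime
from statistics import median

CHANGES = [
    (datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 1, 15, 30)),
    (datetime(2021, 6, 2, 11, 0), datetime(2021, 6, 2, 13, 45)),
    (datetime(2021, 6, 3, 8, 30), datetime(2021, 6, 4, 10, 0)),
]

lead_times_hours = [
    (deployed - committed).total_seconds() / 3600
    for committed, deployed in CHANGES
]
print(f"Median lead time to deployment: {median(lead_times_hours):.1f}h")
```

Watch the median rather than just the average, so one slow outlier doesn’t mask an otherwise healthy pipeline.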
How does this work in practice? Start by testing code quality with traditional tools. Then check how your code defect rate changes once you automate code testing, use code coverage to build out your automated testing stack, and use lead time to deployment to make sure it’s all still fast. Together, these measurements give you a clear picture of both speed and quality. You can also introduce a risk-based testing automation tool to tie speed, quality, and security together.
Are You Ready to Inject Adrenaline Into Your CI/CD Pipeline?
Everyone wants to iterate faster, smarter, safer, and with more quality. We’ve got your back. AppSurify builds quality into your CI/CD pipeline. Our intelligent plug-and-play testing solution helps you run tests on the risk-centric components of each iteration. We can shorten your CI/CD pipeline while improving your quality and reducing your risk.
Don’t believe us? Use those metrics above to test us out. Try it out for free. Test us against the best. We’re ready.