The Quality Engineering Unit round table met to discuss the theme “How to communicate the value of software quality?”.
Convincing people to invest continuously in a quality approach requires that its value be perceived.
Often associated with a support function and risk reduction, software quality is considered difficult to tie to traditional business concerns.
Organizations also usually ask quality teams to justify a return on investment, an approach that should almost be reversed.
We discussed the following themes:
- What to communicate, to whom, at what level, how often?
- Which indicators are relevant to use? In what context?
- What formats, tools and visualizations can be used?
I thank all the participants for their contributions:
- Luís Vicente, QA Lead at Mercedes-Benz.io
- João Santos, Freelance QA Engineer
- Filipe Sousa, QA Senior Consultant at Noesis
The episode is available in both video and audio.
Software quality must bring value to the company
Being able to communicate business value implies having created that value beforehand.
In fact, aligning the value of software quality with objectives valued by the company remains a real challenge.
This is also a problem that can be found more generally in IT, business alignment being a real priority for CIOs and architects, among others.
Is this really possible for software quality?
It remains an exercise in communication and collaboration, specific to each context, to identify the relevant points.
One thing is certain: without a focus on bringing value to the organization, a quality approach will not last long, for lack of contribution, support and visibility.
The identification and communication of value is therefore a major theme to address.
What parameters are valued by the customer?
We then turned our focus to customer expectations.
The first priority is to understand what is valued, expected and imagined by the stakeholders.
In a first iteration, the expected value does not emerge directly.
Support-oriented or self-centered indicators can sometimes surface on a first pass.
The objectives mentioned above are not the only ones to consider.
Active listening, empathy and reformulation are key skills in this exercise.
Two questions are useful in a first exchange: “Why?” and “For what purpose?”.
The next step is to translate the objectives into indicators.
Good quality indicators are not those of QA
Measuring the performance of a system is not about measuring a subsystem.
Traditional QA metrics are useful within their own function but need to be contextualized more broadly.
Take the example of test coverage.
Some developers, for example, will seek to optimize it to 100% in their unit tests.
A tester, by contrast, will rather focus on the coverage of the functional test-case matrix, whether manual or automated.
The question must be asked the other way around: can test coverage help us achieve a key business goal valued by the customer?
If we are looking for the stability of the customer experience, we could for example combine several indicators:
- The coverage of tests on the most frequent use cases and with the highest added value
- The number of bugs encountered by the customer, which must stay close to zero in these cases
- The number of bugs detected before delivery, analyzed per environment, in order to judge the relevance of our integration
- The time taken to detect and resolve a critical issue (aka MTTI & MTTR)
This combination of indicators makes it possible to judge the performance of the quality system, across the entire chain and all its actors.
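As a rough illustration of how such a combination could be computed, here is a minimal sketch; the incident records, use cases and all figures are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: occurrence, detection and resolution timestamps.
incidents = [
    {"occurred": datetime(2023, 3, 1, 8, 30),
     "detected": datetime(2023, 3, 1, 9, 0),
     "resolved": datetime(2023, 3, 1, 11, 30)},
    {"occurred": datetime(2023, 3, 7, 13, 45),
     "detected": datetime(2023, 3, 7, 14, 0),
     "resolved": datetime(2023, 3, 7, 15, 0)},
]

# Hypothetical per-use-case records: usage frequency, coverage, customer bugs.
use_cases = [
    {"name": "checkout", "monthly_uses": 12000, "covered": True,  "customer_bugs": 0},
    {"name": "search",   "monthly_uses": 30000, "covered": True,  "customer_bugs": 1},
    {"name": "export",   "monthly_uses": 150,   "covered": False, "customer_bugs": 0},
]

# Coverage weighted by real usage: how much customer traffic is protected by tests.
total_uses = sum(u["monthly_uses"] for u in use_cases)
weighted_coverage = sum(u["monthly_uses"] for u in use_cases if u["covered"]) / total_uses

# Customer-facing bugs on these use cases (the target is close to zero).
customer_bugs = sum(u["customer_bugs"] for u in use_cases)

# MTTI: mean time from occurrence to detection; MTTR: from detection to resolution.
mtti = sum((i["detected"] - i["occurred"] for i in incidents), timedelta()) / len(incidents)
mttr = sum((i["resolved"] - i["detected"] for i in incidents), timedelta()) / len(incidents)

print(f"Usage-weighted test coverage: {weighted_coverage:.0%}")
print(f"Customer-facing bugs: {customer_bugs}")
print(f"MTTI: {mtti}, MTTR: {mttr}")
```

The exact formulas matter less than the principle: each number ties a QA activity back to something the customer experiences.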
What about automation in all of this?
Automation does not add value in itself
Filipe shared an experience with us where the number of automated tests alone was the customer’s performance indicator.
In this case, it is advisable to fulfill one's duty of counsel: identify the reasons for this particular focus, and whether other concerns might be more relevant.
In this specific case, we can find a strong need for automation pushed by management to industrialize processes and speed up deliveries.
However, to achieve this goal it can be counterproductive to automate excessively, especially where it is not relevant.
The “5 Whys” method can be used, formally or informally depending on your audience, and even as a personal reflection exercise.
As with indicators, automation criteria must be analyzed as a whole.
Automation is often linked to the need for acceleration, cost optimization and quality of process execution.
Several hypotheses are to be considered for automation:
- What is the investment required to initiate automation? (human, organizational, tools)
- Is the automation scope mature in terms of process, with relative stability?
- What is the number of executions and the lifespan of the processes to be automated?
Finally, what is the value of automation versus that of non-automation?
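To frame that last comparison concretely, a minimal break-even sketch, with every figure invented for illustration:

```python
# All costs are expressed in hours of effort; every figure below is hypothetical.
build_cost = 16.0            # initial effort to automate the scenario
maintenance_per_month = 1.0  # upkeep as the product evolves
manual_cost_per_run = 0.5    # effort for one manual execution
runs_per_month = 20          # expected execution frequency
lifespan_months = 12         # how long the scenario stays relevant

manual_total = manual_cost_per_run * runs_per_month * lifespan_months
automated_total = build_cost + maintenance_per_month * lifespan_months

print(f"Manual cost over lifespan:    {manual_total:.1f} h")
print(f"Automated cost over lifespan: {automated_total:.1f} h")
print("Automation pays off" if automated_total < manual_total else "Keep it manual for now")
```

The point is not the exact numbers but making the trade-off explicit before automating.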
The automation decision requires careful consideration of the context.
An organization with a solid foundation of automation can, with reduced initial investment, more easily capture value, even in small areas.
But is automation synonymous with acceleration?
Team velocity, a key value in software quality
Quality is often seen as a slowing factor in software development cycles.
Admittedly, adding manual or automated tests that delay the ability to validate the business value of a feature does not seem appealing.
Experiences where tests are bypassed at the first unstable, high-pressure release are common.
This is why a quality approach must keep in focus the achievement of short cycles that integrate quality.
This desired velocity presents real difficulties for the various actors.
Developers need fast feedback in their development iterations, in order to quickly deliver a product increment.
Subsequently, the qualification and business teams must be able to quickly assess compliance with the requirements and the absence of regressions.
Finally, the team must be able to measure the value for the end customer of the functionalities delivered.
It is this cycle that must be accelerated while guaranteeing the quality of the product to achieve real velocity.
A useful exercise is to share a triangle balancing speed, quality and functionality.
The tests still need to be useful
How many bugs did your tests find?
This question is sometimes poorly measured, in favor of “vanity metrics” that maximize code or test coverage.
The additional cost of developing and maintaining unnecessary tests is rarely taken into account.
Yet this is what easily happens in siloed organizational models with local indicators to optimize.
A good way to gain perspective is to consider several factors when judging the usefulness of the tests:
- Are they relevant to your customer experience or to objectives valued by the organization?
- How many times have they been executed in the last month, and with what stability ratio?
- How many times have they detected a bug, with what criticality and false-positive rate?
If your indicators tend to be negative on these points, the relevance of the tests is questionable.
As a side note, tests aimed at validating error and exception cases should not be analyzed by their volume of use; at least, I hope so for the customers' sake.
Calculating a cost per use can also help highlight the cost of premature optimizations.
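A minimal sketch combining these signals with a naive cost-per-use calculation; the test names, figures and amortization rule are all assumptions:

```python
# Hypothetical per-test records over the last 30 days; all names and figures invented.
tests = [
    {"name": "checkout_happy_path", "runs": 120, "failures": 6, "false_positives": 1,
     "bugs_found": 4, "build_cost_h": 8.0, "maintenance_h": 1.0},
    {"name": "legacy_export_edge", "runs": 2, "failures": 2, "false_positives": 2,
     "bugs_found": 0, "build_cost_h": 12.0, "maintenance_h": 3.0},
]

for t in tests:
    stability = 1 - t["failures"] / t["runs"]               # share of green runs
    fp_rate = t["false_positives"] / max(t["failures"], 1)  # noise among the failures
    # Naive cost per execution: build cost amortized over this period's runs,
    # plus the period's maintenance effort.
    cost_per_use = (t["build_cost_h"] + t["maintenance_h"]) / t["runs"]
    print(f"{t['name']}: stability {stability:.0%}, "
          f"false-positive rate {fp_rate:.0%}, "
          f"{t['bugs_found']} bugs found, "
          f"cost per use {cost_per_use:.2f} h")
```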
Can testing in production be useful?
A good reason not to test in production would be to have an identical non-production environment.
Since an identical environment is an unrealistic ideal, I am convinced that the answer is yes, provided we clarify the point.
Performing production-only verifications that should have been done upstream is clearly not recommended.
On the other hand, they can be very relevant and complementary, if well chosen.
The notion of value-driven production tests must be differentiated between:
- Real exploratory tests aimed at discovering defects through experimentation at scale
- Regular automated functional non-regression tests, often associated with customer experience monitoring
- Availability, resistance and reliability tests based on “chaos engineering” practices
- Verification tests for response-time requirements, element loading times or security, for example
Those that can be carried out upstream in the chain should be performed there, keeping for production the complementary checks specific to that environment.
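As one example of a lightweight production check, a minimal sketch of a response-time verification; the endpoint URL and the 2-second budget are hypothetical:

```python
# Minimal production check: verify availability and a response-time budget.
# The URL and the budget are hypothetical; only the standard library is used.
import time
import urllib.request

URL = "https://example.com/health"  # hypothetical endpoint
BUDGET_SECONDS = 2.0                # hypothetical response-time requirement

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=10) as response:
    status = response.status
elapsed = time.monotonic() - start

assert status == 200, f"Unexpected status {status}"
assert elapsed <= BUDGET_SECONDS, f"Too slow: {elapsed:.2f}s > {BUDGET_SECONDS}s"
print(f"OK in {elapsed:.2f}s")
```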
An incremental process to create and communicate metrics
Are you familiar with that “Quality Dashboard V1” project that has been underway for months, in a tunnel effect, with no dashboard yet available?
Agility comes more naturally to projects with a business component, but it tends to be neglected for internal projects.
Defining, collecting and communicating metrics is a process that is more likely to be successful through an iterative approach.
Many assumptions are present at the start; it is therefore necessary to prioritize actions in order to adapt quickly.
Select the metrics with the highest added value first, and try to measure them as quickly as possible.
The first collection of data can reveal structural problems of quality, availability or even relevance.
There is therefore no need to dwell on automated data flows and colorful dashboards; the focus must be on validating the indicators.
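In that spirit, a minimal first-pass sketch that computes candidate indicators from a raw export, assuming a hypothetical bugs.csv with severity and environment columns:

```python
# First-iteration sketch: validate candidate indicators from a raw export
# before building any dashboard. The file name and the column names
# ("severity", "found_in") are hypothetical.
import csv
from collections import Counter

with open("bugs.csv", newline="") as f:
    rows = list(csv.DictReader(f))

by_env = Counter(row["found_in"] for row in rows)
critical = sum(1 for row in rows if row["severity"] == "critical")

print(f"Bugs per environment: {dict(by_env)}")
print(f"Critical bugs: {critical} / {len(rows)}")
```

Running something this crude is often enough to reveal whether the data exists, is trustworthy and actually measures what stakeholders care about.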
The second step is to validate the perceived value of these indicators with stakeholders, who are focused on customer and business objectives.
Using existing team update meetings, retrospectives and other rituals is the best way to get feedback, in addition to informal requests.
Once this first version is reached, after a few iterations, automation, deployment to other products and the addition of new metrics can begin.
Which media promote a wider distribution?
Are the reports of value to stakeholders?
Provided the objectives have been well defined, the communication media must be adapted to the audience.
I would almost add, tailored to the person or your internal “personas”.
Like in real life, it’s hard to please everyone.
A manager who is often on the move will likely prefer a simple format available in an application, unless they prefer a direct update by phone.
A development team typically likes colorful dashboards in “dark mode”; other profiles will prefer a spreadsheet in order to explore the data.
Working in stages, serving as many people as possible first, and then handling exceptions, is an 80/20 approach.
A simple dashboard can be built and made available to everyone, as a source of truth.
If possible, add an export capability, and send alerts by email or phone to satisfy as many people as possible.
A good practice, when possible, is to display this dashboard in a place where teams pass by, share it during the daily stand-up, and suggest that teams set it as a tab that opens automatically with their browser.
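A minimal sketch of such a shared source of truth: a script that renders a few validated indicators to a static HTML page, suitable for a team screen or an automatically opened browser tab; the indicator names and values are placeholders:

```python
# Render a few validated indicators to a static HTML "source of truth" page.
# All indicator names and values below are hypothetical placeholders.
from datetime import date

indicators = {
    "Usage-weighted test coverage": "87%",
    "Customer-facing bugs (30d)": "1",
    "Average detect-to-resolve time": "2h 15m",
}

rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in indicators.items())
html = (f"<html><body><h1>Quality dashboard ({date.today()})</h1>"
        f"<table border='1'>{rows}</table></body></html>")

with open("quality_dashboard.html", "w") as f:
    f.write(html)
print("Wrote quality_dashboard.html")
```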
By definition, dashboards have value if they are shared, up-to-date, regularly pushed to the teams, adapted to the usage channels and used in piloting.
Using existing mechanisms to communicate quality
It is tempting to quickly create dedicated meetings to talk about quality.
A pragmatic approach, which reinforces the message that quality is transversal, is to add quality to existing meetings.
Take the team's daily stand-up as an example: it is a great opportunity to share and communicate about quality.
Regular retrospective and planning points should also be used to integrate quality as a systematic theme.
Adding indicators and actions to an incident retrospective process is also a good way to identify the need for quality throughout the chain.
Finally, with good internal influence work, company meetings must leave room for quality.
Like advertising, quality must be present, communicated and valued regularly to have an impact.