The traditional approach to quality assurance often treated testing as a separate phase, conducted just before software deployment and intended to catch “last-minute” errors. In the modern, agile world of software development, however, where changes are introduced quickly and frequently, such a reactive approach is insufficient. Waiting until the end of the development cycle to check for quality leads to delays, costly fixes, and the risk of introducing bugs into production. That is why, at ARDURA Consulting, we adhere to a philosophy of continuous quality monitoring that permeates the entire software development lifecycle - from the first lines of code, through all stages of testing, to the production environment. This proactive approach allows us not only to detect problems but, more importantly, to prevent them, ensuring the high quality and reliability of the delivered solutions at every step.
Continuous quality monitoring - Proactive reliability watchdog at ARDURA Consulting
“Software bugs cost the U.S. economy an estimated $59.5 billion annually.”
— NIST, The Economic Impacts of Inadequate Infrastructure for Software Testing
At ARDURA Consulting, we understand continuous quality monitoring as much more than passively observing the performance of an application after it has been deployed to production. For us, it is a comprehensive, dynamic, and multifaceted system that involves **the systematic collection, in-depth analysis and, most importantly, rapid and adequate response to data on various aspects of software quality throughout the entire development process and subsequent operation**. Our overriding goal is to maintain a constantly updated picture, based on hard facts and objective data, of the actual quality of both the product itself and the effectiveness of its development processes. Such knowledge allows us to make informed, well-founded decisions, proactively identify potential risks, and respond quickly to any worrying signals or negative trends before they turn into serious problems. In practice, this means continuously monitoring several key, interrelated areas: the quality of the source code, the results of testing processes, the performance and stability of the application and, crucially, the experience and feedback of the end users themselves. We believe that only such a holistic, integrated, data-driven approach allows us to build truly reliable and valuable IT systems.
From Response to Prevention: Evolving approaches to quality assurance in modern IT
Traditional software development models, such as the waterfall model, often treated quality assurance as a separate, final phase of the project, following the completion of all development work. Testers received the “finished” product, and their job was to find as many bugs as possible before it was handed over to the customer. Such a “quality gate” model at the end of the process had a number of drawbacks. Errors detected at this late stage were extremely costly to fix, as their causes were often buried deep in the architecture or design of the system and required significant modifications and retesting. Quality feedback reached the development team with a long delay, making it difficult to learn and correct course quickly. What’s more, such a model often generated tension and conflict between the development team and the QA team, which was perceived as “the bad guy” pointing out errors. The emergence of agile methodologies and DevOps culture forced a fundamental change in this paradigm. Principles such as iterative development, short cycles, continuous integration, automation and, above all, shared team responsibility for the product became incompatible with the idea of QA as the last stage. It became necessary to shift QA activities “left” (shift-left), i.e. as close to the beginning of the software lifecycle as possible, and to extend them “right” (shift-right), i.e. into the post-deployment stage, by monitoring application behavior in production. Continuous quality monitoring is a natural consequence and extension of these modern approaches, emphasizing prevention and early detection of problems rather than costly after-the-fact reaction. This helps build a quality culture in which every team member feels responsible for delivering a valuable and reliable product.
A multidimensional view of quality: key areas for continuous monitoring at ARDURA Consulting
To ensure comprehensive and effective quality management, at ARDURA Consulting we take a multidimensional approach to its continuous monitoring, covering several key interrelated areas. Each of them provides valuable data and information, which together form a complete picture of the technical and functional state of the software under development.
One of the absolutely fundamental elements of our approach is the continuous monitoring of the quality of the source code itself, starting at the earliest stage of its development by the developers. We firmly believe that high internal code quality - readability, understandability, compliance with standards, and good organization - provides a solid foundation for the subsequent stability, performance, security, and maintainability of the entire application. Therefore, as standard practice, we integrate advanced static code analysis tools, including SAST (Static Application Security Testing) capabilities, such as the widely recognized SonarQube or similar solutions tailored to the technologies in use, into our automated continuous integration and continuous deployment (CI/CD) pipelines. These tools scan the application’s source code automatically, often on every change pushed to the repository (e.g. every commit or pull request). Their task is to detect a wide spectrum of potential problems: programming errors, possible security vulnerabilities (e.g. those from the OWASP Top 10 list), violations of team-accepted coding standards, excessive cyclomatic complexity of individual modules or functions, and so-called “code smells”, i.e. code fragments that may work correctly but are poorly designed and likely to cause problems in the future. The results of this automatic analysis are immediately available to developers, often directly in their development environment (IDE) or CI/CD system. This allows them to quickly identify problem areas and make the necessary corrections, effectively preventing the uncontrolled accumulation of technical debt and the degradation of code quality.
As part of this process, we systematically monitor and strive to continuously improve key code quality metrics, such as the number and categorization of detected issues (e.g., critical bugs, vulnerabilities, code smells), the percentage of code coverage by unit tests, and the level of code duplication in the project.
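The build-breaking logic described above can be sketched as a small quality-gate check. This is an illustrative example only: the issue format, severity names, and thresholds are assumptions for the sketch, not a real SonarQube API.

```python
# Hypothetical CI quality-gate check: fail the build when static-analysis
# results exceed thresholds the team has agreed on.
from collections import Counter

# Illustrative gate limits: zero tolerance for critical bugs and
# vulnerabilities, a cap on accumulated code smells.
GATES = {"critical": 0, "vulnerability": 0, "code_smell": 25}

def evaluate_quality_gate(issues, gates=GATES):
    """Return (passed, counts, violations) for issues tagged by type."""
    counts = Counter(issue["type"] for issue in issues)
    violations = {t: counts[t] for t, limit in gates.items() if counts[t] > limit}
    return (not violations, dict(counts), violations)

# Example analysis output for one commit (made-up data).
issues = [
    {"type": "code_smell", "file": "billing.py"},
    {"type": "vulnerability", "file": "auth.py"},  # e.g. an OWASP Top 10 finding
    {"type": "code_smell", "file": "api.py"},
]

passed, counts, violations = evaluate_quality_gate(issues)
print(passed)      # False: one vulnerability breaches the zero-tolerance gate
print(violations)  # {'vulnerability': 1}
```

In a real pipeline, a non-passing result would typically exit with a non-zero status so the CI/CD system blocks the merge.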
Just as important to us as the quality of the code itself is the continuous, systematic monitoring of the results of all tests performed, both those executed manually by our QA specialists and, increasingly, fully automated ones. At ARDURA Consulting, we place strong strategic emphasis on intelligent test automation at complementary levels: from unit tests written by developers, through integration tests of components and services and API tests, to user interface (UI) and end-to-end (E2E) tests. All of these automated tests run regularly, often with every single code change, as an integral part of the automated CI/CD pipeline. The results of these numerous, cyclically executed tests - the total number of executed scenarios, the percentage of successful tests (pass rate), details of any failures with their causes, and the execution time of each test suite - are automatically aggregated, processed, and visualized on dedicated, easily accessible quality dashboards. Such dashboards allow the entire project team, including the Product Owner and business stakeholders, to track the current state of product quality in real time and identify regressions very quickly. A regression is a situation in which a newly introduced code change accidentally breaks previously working functionality, a common risk in dynamically developed systems. Beyond the current status, we also monitor long-term trends in test results: is the overall number of errors detected by tests increasing or decreasing across iterations? Is automated test coverage sufficient and steadily growing? Do the automated tests execute fast enough not to slow down the CI/CD process excessively? Are we seeing excessive instability (flakiness) in the tests?
The answers to these questions, based on hard data, are absolutely crucial to objectively assessing the stability of the product, to confidently make decisions about future implementations, and to identify areas in the testing process that need improvement.
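Two of the metrics mentioned above, pass rate and flakiness, can be computed directly from raw test results. The sketch below assumes a simple result format (test name plus a pass/fail flag per run); real CI systems expose richer data.

```python
# Illustrative aggregation of CI test results: suite pass rate and a
# simple flakiness flag for tests that both passed and failed across
# repeated runs of the same code revision.

def pass_rate(results):
    """results: list of (test_name, passed: bool) for a single run."""
    if not results:
        return 0.0
    return sum(1 for _, ok in results if ok) / len(results)

def flaky_tests(runs):
    """runs: list of result lists from repeated runs of the same revision.
    A test is flagged as flaky if it produced both outcomes."""
    outcomes = {}
    for run in runs:
        for name, ok in run:
            outcomes.setdefault(name, set()).add(ok)
    return sorted(name for name, seen in outcomes.items() if len(seen) > 1)

run1 = [("test_login", True), ("test_checkout", False), ("test_search", True)]
run2 = [("test_login", True), ("test_checkout", True), ("test_search", True)]

print(round(pass_rate(run1), 2))  # 0.67
print(flaky_tests([run1, run2]))  # ['test_checkout']
```

Tracking these two numbers over time is what turns individual test runs into the trend lines shown on a quality dashboard.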
Another extremely important dimension of continuous quality monitoring is the systematic tracking and analysis of the application’s performance, not only after deployment to production but at much earlier stages of the development cycle, as part of dedicated performance tests. On a regular, scheduled basis, we conduct various types of performance tests (such as load tests, stress tests, and spike tests) in specially prepared, controlled test environments that replicate the characteristics of the production environment as faithfully as possible. The purpose of these tests is to see how the application behaves under expected as well as significantly increased load, and to precisely measure key performance indicators (KPIs). These include the average and maximum server response time, the loading time of individual pages or application screens, system throughput (the number of transactions handled per second), and the consumption of key infrastructure resources (such as CPU, RAM, disk I/O, and network bandwidth). The results allow us to detect potential bottlenecks very early, identify inefficient code or infrastructure configuration issues, and implement the necessary optimizations before a performance problem can negatively impact the end-user experience and the product’s reputation. Once an application is deployed to production, performance monitoring continues, of course, often in an even more intensive manner, using specialized Application Performance Monitoring (APM) tools. Tools such as Dynatrace, New Relic, Datadog, or Prometheus combined with Grafana allow tracking application performance in real time, collecting detailed performance metrics, automatically detecting anomalies, and alerting on performance drops, rising error rates, or availability problems with individual infrastructure components.
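To make the KPIs above concrete, here is a minimal sketch of how two of them, a percentile response time and throughput, might be computed from raw request timings. The latency samples and the nearest-rank percentile method are illustrative choices, not the exact approach of any particular load-testing tool.

```python
# Minimal computation of load-test KPIs from raw request latencies.

def percentile(samples_ms, p):
    """Nearest-rank percentile of a list of latencies in milliseconds."""
    ordered = sorted(samples_ms)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def throughput(request_count, window_seconds):
    """Requests handled per second over the measurement window."""
    return request_count / window_seconds

# Made-up latencies (ms) gathered during a 2-second measurement window.
latencies = [120, 95, 180, 110, 230, 105, 140, 400, 115, 130]

print(percentile(latencies, 50))        # 120 - median response time
print(percentile(latencies, 95))        # 400 - the slow tail users notice
print(throughput(len(latencies), 2.0))  # 5.0 requests per second
```

Percentiles like p95 and p99 matter more than averages here, because a good average can hide a slow tail that a significant share of users actually experiences.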
We must also not forget the extremely valuable source of information that is the continuous monitoring of the actual experience, behavior and direct feedback of the end users themselves, especially after a product or new version has been rolled out to the market. To this end, we systematically analyze data from web or mobile analytics tools (such as Google Analytics, Mixpanel or Hotjar) to gain an in-depth understanding of how users actually use the application, which features they use most often and which they skip, where they encounter difficulties or abandon further interaction (so-called drop-off points), and what their typical navigation paths look like. An equally important source of information for us are the requests coming into the technical support or helpdesk teams - we thoroughly analyze the types and frequency of problems, questions or suggestions reported by users. We also track, to the extent possible, opinions, comments and reviews about our software that appear in social media, online forums, or mobile app stores. This rich, both qualitative and quantitative feedback from users is an absolutely invaluable source of information about the real, perceived quality of the product from the perspective of those people for whom the product was, after all, created. It is an extremely important input in the process of planning further development, prioritizing improvements and deciding on the future shape of the app.
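The drop-off analysis described above, of the kind analytics tools like Mixpanel report, can be sketched as a simple funnel calculation. The event names, funnel steps, and data shapes below are assumptions invented for the example.

```python
# Illustrative funnel drop-off calculation over per-user analytics events.

FUNNEL = ["view_product", "add_to_cart", "checkout", "payment_done"]

def funnel_dropoff(user_events, steps=FUNNEL):
    """user_events: {user_id: set of event names}. Returns, per step,
    (step, users reaching it, conversion rate from the previous step)."""
    reached = []
    for i, step in enumerate(steps):
        # A user counts for a step only if they reached every earlier step.
        count = sum(1 for events in user_events.values()
                    if all(s in events for s in steps[:i + 1]))
        reached.append(count)
    report = []
    for i, step in enumerate(steps):
        prev = reached[i - 1] if i else reached[0]
        rate = reached[i] / prev if prev else 0.0
        report.append((step, reached[i], round(rate, 2)))
    return report

events = {
    "u1": {"view_product", "add_to_cart", "checkout", "payment_done"},
    "u2": {"view_product", "add_to_cart"},
    "u3": {"view_product"},
    "u4": {"view_product", "add_to_cart", "checkout"},
}
for step, n, rate in funnel_dropoff(events):
    print(step, n, rate)
# checkout -> payment_done converts at only 0.5: a candidate drop-off point
```

A step whose conversion rate falls well below the others is exactly the kind of drop-off point worth investigating with session recordings or user interviews.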
Data in the service of quality: Effective use of monitoring information at ARDURA Consulting
However, all this valuable data, collected in a systematic and multifaceted manner as part of the continuous quality monitoring process, is not an end in itself. Collecting it only makes sense when it is effectively used to make informed decisions and initiate specific improvement actions. At ARDURA Consulting, we place great importance on ensuring that information about the current state of product and process quality is not only collected but also easily accessible, transparent, and understandable to the entire project team and our customers. We often present it in the form of easy-to-read, interactive quality dashboards that visualize key metrics and trends over time. We also set up alerting and notification systems that immediately inform the appropriate people or teams of critical issues - for example, the failure of key automated tests in the CI/CD pipeline, a sudden spike in the number of bugs in the production environment, or a significant drop in application performance. Most importantly, the collected data is regularly and thoroughly analyzed during recurring team meetings, such as sprint reviews, dedicated quality meetings, or retrospective sessions. It then becomes a solid, fact-based foundation for making specific decisions, identifying areas of the system or process that require special attention, and planning the necessary corrective actions and long-term improvements. In this way, we create and continually nurture a dynamic feedback loop that allows us to manage quality proactively, intelligently, and effectively at every stage of the software lifecycle, minimizing risk and maximizing the value delivered to our customers.
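The alerting step can be illustrated with a threshold check of the kind a monitoring dashboard runs on each metrics refresh. The metric names and limits below are assumptions for the sketch; real systems like Prometheus Alertmanager express this as declarative rules rather than code.

```python
# Illustrative alert evaluation: compare current quality metrics against
# agreed thresholds and emit alerts for the on-call team.

THRESHOLDS = {
    "error_rate_pct": 1.0,       # alert above 1% failed requests
    "p95_latency_ms": 500.0,     # alert above 500 ms p95 response time
    "test_pass_rate_pct": 98.0,  # alert below 98% passing CI tests
}

def evaluate_alerts(metrics, thresholds=THRESHOLDS):
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is None:
            continue  # ignore metrics without a configured threshold
        # Pass-rate style metrics alert when they fall BELOW the limit;
        # error and latency metrics alert when they rise ABOVE it.
        breached = value < limit if name.endswith("pass_rate_pct") else value > limit
        if breached:
            alerts.append(f"{name}={value} breaches threshold {limit}")
    return alerts

current = {"error_rate_pct": 2.4, "p95_latency_ms": 310.0, "test_pass_rate_pct": 96.5}
for alert in evaluate_alerts(current):
    print(alert)  # two alerts fire: error rate and test pass rate
```

In practice, each fired alert would be routed to a notification channel (chat, pager, ticket) with enough context for the recipient to act immediately.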
In summary, continuous quality monitoring is a fundamental part of a modern, mature approach to software development and maintenance. It enables a strategic shift away from the outdated, reactive model of “firefighting” and last-minute bug detection toward proactive prevention of problems and building quality into the product from the very beginning. At ARDURA Consulting, we implement this philosophy in our day-to-day practice, using modern tools, carefully selected metrics, and proven agile processes to continuously track and analyze the state of source code quality, the results of testing processes, application performance and stability and, just as importantly, the actual experience and satisfaction of end users. Thanks to this holistic approach, we can identify potential risks much earlier, respond more quickly and effectively to emerging issues, and consistently deliver high-quality software to our customers: software that is not only fully functional and compliant with specifications, but also reliable, efficient, secure and, above all, trustworthy.
Need testing support? Check our Quality Assurance services.
See also
- 10 technology trends for 2025 that every CTO needs to know
- 4 key levels of software testing - An expert
- 5G and 6G - How will ultrafast networks change business applications?
Do you want to make sure that the quality of your software is monitored and ensured at every stage of development and operation? Are you looking for a partner that takes a proactive approach to quality management, based on data and continuous monitoring? Contact the QA team at ARDURA Consulting. We’ll tell you more about our methods for continuous quality monitoring and how we can help you build and maintain reliable IT systems.