Magda, head of QA at a rapidly growing e-learning platform, was proud of her team. Over the past year they had successfully implemented a “shift-left” philosophy: automated unit testing, integration testing and security scanning had become an integral part of the CI/CD pipeline. Thanks to immediate feedback, developers caught many more bugs early, and the number of defects reported by manual testers dropped by 40%. They celebrated their success - quality was no longer a bottleneck. However, a few months after the launch of the new flagship feature, interactive courses, worrying signals began to come in. The customer support department was inundated with tickets: users complained that the new feature ran very slowly on older tablets, that the interface was unintuitive, and that some key learning paths were being skipped entirely. Magda was astonished to discover that although the feature technically worked flawlessly and passed every test, in the real world it proved frustrating and ineffective. She realized a painful truth: her team had become adept at answering the question “Did we build the software correctly?” but had completely ignored the question “Did we build the right software, and how does it actually behave in the hands of users?”
Magda’s story perfectly illustrates the limitations of one-sided thinking about quality. “Shift-Left,” or the integration of quality into the early stages of the software development lifecycle, is absolutely fundamental, but it’s only half the equation. A truly mature and resilient quality strategy in modern IT requires the creation of a full, closed feedback loop. A loop that not only proactively prevents errors to the left of deployment, but also proactively learns from data flowing from production to the right of deployment. This article is a strategic guide to a sustainable quality philosophy that combines the best of both worlds: the rigor of “shift-left” and the wisdom of “shift-right.” We’ll show you how to move beyond simple “testing” and start building a holistic QA system that ensures that you deliver not only working, but most importantly, valuable and usable software.
Why is the traditional QA model, as the final stage, a fundamental brake on agility?
“Automated tests are a safety net that gives you the courage to refactor, to add features, and to fix bugs.”
— Brian Marick, New Models for Test Development
To understand the change brought by “shift-left” and “shift-right,” we must first recall the model that dominated the industry for years and which, unfortunately, is still entrenched in many organizations. In the traditional waterfall approach to software development, Quality Assurance (QA) was treated as a separate, isolated phase that followed the entire development process. The QA team was the “last gate” before release.
This model, in the world of agile development and DevOps, is the source of fundamental problems:
- The astronomical cost of bug fixes: As we mentioned in the context of DevSecOps, the later in the software lifecycle a bug is discovered, the exponentially more expensive it is to fix. A bug found by a developer within minutes of writing the code costs virtually nothing. The same bug, found by the QA team two months later, requires analysis, reproduction, a fix in old code (whose context the developer has long since lost), retesting and integration. The cost increases by tens or even hundreds of times.
- Creating a bottleneck: All development work must wait in line for a “blessing” from the QA team. As the number of changes grows, this stage takes longer and longer, nullifying the benefits of agile development. Instead of a smooth flow of value, we get a stop-and-go model that generates huge delays.
- A culture of conflict and shifted responsibility: This model naturally builds a wall between developers and testers. Developers see QA as “the ones who nitpick and block everything.” Testers see developers as “the ones who write code full of bugs.” Instead of shared responsibility for quality, the problem is tossed back and forth “over the fence,” which is toxic and ineffective.
- Superficial testing under time pressure: As the release deadline approaches and the testing phase drags on, the QA team comes under tremendous pressure to finish “faster.” This leads to cutting the scope of testing, skipping less critical paths and accepting risk, which often ends with low-quality software being released into production.
The traditional QA model treats quality as something that can be “tacked on” or “tested in” at the end. The modern approach, embodied in the “shift-left” philosophy, starts from a completely different premise: quality cannot be tested in at the end. Quality must be built into the product from the very beginning.
What is the shift-left philosophy and what specific practices does it encompass?
“Shift-Left Testing” is not a specific technique or tool. It’s a fundamental change in philosophy and culture that involves shifting quality assurance activities as early as possible (“left”) in the software development life cycle (SDLC). Instead of waiting for the finished product, quality is built in and verified at every stage, even the earliest, from idea to design to implementation.
The goal of “shift-left” is to create the shortest possible feedback loops, so that errors (both those in code and those in business logic) are detected and fixed immediately, when the cost is lowest.
Specific “Shift-Left” Practices:
At the requirements and design stage:
- Collaboration and shared understanding: Instead of receiving a finished specification, QA engineers actively participate in meetings with business analysts and product managers. They help clarify requirements and identify ambiguities and edge cases before any code is written.
- Behaviour-Driven Development (BDD): Teams collaboratively write scenarios in a “Given-When-Then” format (e.g., using the Gherkin language) that describe the expected behavior of the system from the user’s perspective. These scenarios serve simultaneously as specification, documentation and the basis for future automated tests.
- Threat Modeling: As we described in the DevSecOps guide, jointly analyzing the architecture for potential security threats is a key “shift-left” practice.
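The Given-When-Then structure maps directly onto test code. Below is a minimal sketch in plain Python; the `LearningPath` class and lesson names are invented for illustration, and in a real BDD setup the scenario would live in a Gherkin `.feature` file bound to step definitions with a tool such as behave or pytest-bdd.

```python
# Hypothetical scenario "a learner completes a lesson", written as an
# executable test with the Given-When-Then structure kept as comments.

class LearningPath:
    """Minimal stand-in for the course domain model (illustrative only)."""

    def __init__(self, lessons):
        self.lessons = list(lessons)
        self.completed = set()

    def complete(self, lesson):
        if lesson not in self.lessons:
            raise ValueError(f"unknown lesson: {lesson}")
        self.completed.add(lesson)

    @property
    def progress(self):
        return len(self.completed) / len(self.lessons)


def test_completing_a_lesson_updates_progress():
    # Given a learning path with two lessons and no progress
    path = LearningPath(["intro", "quiz"])
    assert path.progress == 0.0

    # When the learner completes the first lesson
    path.complete("intro")

    # Then progress is reported as 50%
    assert path.progress == 0.5
```

Because the scenario doubles as documentation, a product manager can review the Given/When/Then comments without reading the implementation.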
In the coding and building stages:
- Unit tests: Developers write small, fast tests that verify the correctness of individual pieces of code (methods, classes) in isolation. This is the most fundamental and most important “shift-left” practice.
- Pair programming and code reviews: Writing code together, and having it reviewed by other developers (including QA engineers with programming skills), catches logical, architectural and stylistic errors before the code reaches the main branch.
- Static code analysis (SAST & linters): Automated tools, integrated with the developer’s IDE and the CI pipeline, analyze code for bugs, vulnerabilities and standards violations.
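A unit test in this spirit can be a few lines long. The sketch below tests a single invented pricing function in isolation: no database, no network, so it runs in milliseconds and can gate every commit. The function and its rules are assumptions for the example, not any particular product's logic.

```python
# A small, fast unit test of one function in isolation: the kind of check
# a CI pipeline runs on every commit. The pricing rule is invented.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, guarding against invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Happy path
    assert apply_discount(100.0, 20) == 80.0
    # Edge case: zero discount leaves the price unchanged
    assert apply_discount(19.99, 0) == 19.99
    # Invalid input must fail loudly, not silently produce a wrong price
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Any runner (pytest, unittest) can execute this; the point is the short feedback loop, not the framework.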
In the integration and testing stage:
- Continuous Integration (CI): Every code change is automatically built and tested in an isolated environment. The CI pipeline runs unit tests, followed by component and integration tests that verify that the various parts of the system work together correctly.
- Contract testing: In microservice architectures, these tests verify that the interfaces (APIs) between services remain compatible, allowing individual services to be tested and deployed independently.
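The essence of a consumer-driven contract can be sketched in a few lines: the consumer states which fields and types it relies on, and the provider's response is checked against that expectation. The endpoint, field names and fake provider below are assumptions for illustration; in practice a tool such as Pact generates and verifies these contracts between real services.

```python
# Sketch of a consumer-driven contract check between two services.
# The consumer declares the shape it depends on; the provider is verified
# against it. All names here are hypothetical.

CONSUMER_CONTRACT = {
    "GET /courses/{id}": {
        "required_fields": {"id": int, "title": str, "published": bool},
    }
}


def fake_provider_response(course_id):
    # Stand-in for the real provider; a contract test would call a test
    # instance of the actual service instead.
    return {"id": course_id, "title": "Intro to QA", "published": True}


def verify_contract(response, endpoint):
    """Return a list of contract violations (empty list means compatible)."""
    violations = []
    for field, ftype in CONSUMER_CONTRACT[endpoint]["required_fields"].items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], ftype):
            violations.append(f"wrong type for {field}")
    return violations
```

Because each side can run the check independently, the provider learns about a breaking change in its own pipeline, before any consumer deploys against it.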
The implementation of “shift-left” transforms the role of the QA engineer. Instead of being a “bug catcher” at the end of the production line, he or she becomes a “quality coach” who supports the entire team in building better software every step of the way.
What are the business benefits (speed, cost) of early error detection?
Implementing a shift-left philosophy is not just a technical engineering fad. It’s a strategic investment with very tangible and measurable business benefits. Technology leaders who can articulate these benefits in a language that management can understand are much more likely to get support and budget for QA process transformation.
1. Drastic cost reduction: This is the most tangible benefit. As mentioned earlier, the cost of fixing a bug grows exponentially as the software lifecycle progresses. Data from industry reports (e.g., from NIST, the National Institute of Standards and Technology) show that a bug fixed at the coding stage is 5 to 30 times cheaper to fix than the same bug found during acceptance testing, and up to 100 times cheaper than a bug found in production. On an annual basis, for a large organization, these savings can run into the millions. The investment in automation and early testing pays for itself extremely quickly in the form of avoided costs.
2. Accelerating the delivery cycle (time-to-market): Paradoxically, investing more time in quality at the beginning leads to much faster value delivery at the end.
- Less unplanned work: When bugs are detected late, they generate unplanned, urgent work that disrupts scheduled sprints and distracts developers from creating new functionality. Early bug detection makes work more predictable and smoother.
- A shorter testing phase: When most bugs are eliminated at earlier stages, the final acceptance and regression testing phase becomes much shorter and simpler. Instead of a multi-week “testing hell,” it becomes a quick verification.
- Greater confidence in deployments: An automated safety net running in the CI/CD pipeline gives teams confidence that their changes will not break existing functionality. This allows more frequent and smaller deployments, which is the essence of DevOps and agile.
3. Increased developer productivity and morale: Nothing frustrates a developer more than having to fix a bug in code written months ago. It is like starting a cold engine: it takes a long time to warm up and get back into context. Fast feedback allows a bug to be fixed immediately, while the context is still fresh. Moreover, developers who feel they share responsibility for quality, and have the tools to ensure it, are more engaged and get more satisfaction from their work.
4. Better product quality and higher customer satisfaction: Ultimately, fewer errors reaching production means a better, more stable and reliable product. This translates directly into higher customer satisfaction and loyalty, as well as lower service costs (fewer support calls).
In summary, “shift-left” is not a cost, it’s an investment in efficiency. It’s a way to stop wasting money on fixing problems and start investing it in value creation.
What is “shift-right” and why is it a necessary complement rather than the opposite of “shift-left”?
While “shift-left” focuses on proactively preventing bugs before deployment, “shift-right” is a philosophy that involves continuing to test and collect quality information after deployment, right in the production environment. At first glance, this may sound like heresy - “testing in production?”. However, in modern IT, it is an absolutely crucial and necessary part of building a complete feedback loop.
“Shift-right” is not the opposite of “shift-left.” It is its natural and synergistic complement. “Shift-left” helps us answer the question, “Have we built the system correctly?” “Shift-right” helps answer the question, “Did we build the right system and how does it actually work?”.
Why is pre-production testing never enough?
No test environment, no matter how well prepared, will ever be able to 100% replicate the complexity and unpredictability of a production environment. There are entire classes of problems that only reveal themselves in the real world:
- Scalability and performance issues: How will the application behave under the load of thousands of simultaneous users from different parts of the world, on different networks?
- Diversity of user environments: How will the interface look and function across hundreds of different combinations of devices, operating systems and browsers?
- Unpredictable user behavior: Users in the real world often use software in ways that no one anticipated at the design stage.
- Complex data interactions: How will the system behave when interacting with a huge and “dirty” production database?
“Shift-right” accepts this reality. Instead of pretending we can predict everything, we implement mechanisms that allow us to test safely and learn quickly, directly in production.
The main goals of “Shift-Right”:
- Validation of business hypotheses: Verify that the new functionality actually solves the customer’s problem and produces the expected results (e.g., increased conversions).
- Performance and reliability monitoring: Continuously track how the application performs in a real-world environment and under real-world load.
- Gathering usage data: Understand which features are popular, which are ignored, and how users actually navigate the app.
- Quick detection of and response to incidents: Instantly identify problems in production, often before users even report them.
The combination of “shift-left” and “shift-right” creates a powerful, continuous learning loop: early testing ensures that quality code goes into production, and data from production provides invaluable information that feeds into the next planning and development cycle, allowing you to build better and better products.
What techniques and tools are used in “shift-right” practice?
The shift-right practice is based on a set of advanced techniques and tools to safely implement changes, monitor the system and collect real-time production data. The goal is to maximize learning while minimizing risk to users.
1. Progressive delivery techniques: Instead of deploying a new version of an application to 100% of users at once (a “big bang release”), techniques are used that allow changes to be rolled out gradually and in a controlled manner:
- Canary Releases: The new version is first made available to only a small percentage of users (e.g., 1% or 5%). The team observes performance metrics and error rates. If everything is fine, traffic is gradually shifted to the new version; if problems appear, the rollout is immediately rolled back.
- Blue-Green Deployments: Two identical production environments are maintained: “blue” (the old version) and “green” (the new one). All traffic is initially directed to the blue environment. Once the new version is deployed to the green environment and verified there, traffic is switched to it at the router level. If problems arise, switching back to the blue environment is instantaneous.
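The core of a canary release is a deterministic traffic split: the same user must always land on the same version, so the decision is usually derived from a hash of the user ID rather than a coin flip. The sketch below illustrates that idea under assumed names (`route`, the 5% default); real systems do this in a load balancer or service mesh, not in application code.

```python
# Minimal sketch of canary traffic splitting. Hashing the user ID keeps
# each user pinned to one version for the whole rollout.

import hashlib


def route(user_id: str, canary_percent: int = 5) -> str:
    """Return 'canary' for a stable canary_percent slice of users."""
    # SHA-256 gives a uniform, deterministic bucket per user, so the same
    # user never flips between versions mid-session.
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 256 + digest[1]  # value in 0..65535
    return "canary" if bucket % 100 < canary_percent else "stable"
```

Raising `canary_percent` step by step (5 → 25 → 50 → 100) while watching error rates is exactly the gradual traffic shift described above.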
2. Feature Flags (Feature Management): This is one of the most powerful techniques. It involves wrapping new code paths in switches (flags) that allow a given feature to be turned on or off at runtime, without redeploying the application. This makes it possible to do:
- Testing in production: Enabling a new feature only for company employees or for a small group of beta testers.
- A/B testing: Enabling different variants of the same feature for different user segments and measuring which one better achieves business goals.
- A kill switch: The ability to immediately disable a new feature if it turns out to be causing critical problems in production.
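A stripped-down flag store makes the three uses above concrete: an allow-list for internal testers, a percentage rollout, and a kill switch. The class and its API are invented for illustration; production systems (LaunchDarkly, Unleash, OpenFeature) add persistence, targeting rules and audit trails on top of the same idea.

```python
# Toy feature-flag store: allow-list, percentage rollout, kill switch.
# All names are hypothetical.

import hashlib


class FeatureFlags:
    def __init__(self):
        self._flags = {}

    def register(self, name, enabled=False, percent=0, allow=()):
        self._flags[name] = {
            "enabled": enabled,      # master switch for the feature
            "percent": percent,      # rollout percentage (0..100)
            "allow": set(allow),     # users who always get the feature
        }

    def kill(self, name):
        """Kill switch: disable the feature for everyone at runtime."""
        self._flags[name]["enabled"] = False

    def is_on(self, name, user_id):
        flag = self._flags[name]
        if not flag["enabled"]:
            return False
        if user_id in flag["allow"]:  # e.g. internal beta testers
            return True
        # Deterministic bucket per user for the gradual rollout.
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < flag["percent"]
```

Note that `kill` changes behavior instantly without a deploy, which is exactly what makes flags a safety mechanism rather than just a configuration convenience.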
3. Observability & monitoring: These are the eyes and ears of the shift-right strategy. Without comprehensive insight into what is happening in production, none of the above techniques make sense. Key tools include:
- APM (Application Performance Monitoring): Tools such as Flopsar Suite (offered by ARDURA Consulting) monitor application performance from the inside, tracking response times, resource consumption and errors at the code level.
- Log Management: Aggregating and analyzing logs from all system components in one place.
- Real User Monitoring (RUM): Collecting performance data directly from users’ browsers (e.g., page load time, JavaScript errors).
4. Chaos engineering: This is an advanced discipline popularized by Netflix. It involves deliberately introducing failures into a production system in a controlled way (e.g., shutting down random servers, injecting network latency) to proactively test its resilience and uncover hidden weaknesses before they surface during a real outage.
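The core move of chaos engineering, injecting a fault at a controlled rate and verifying that the system degrades gracefully, can be sketched in miniature. The wrapper and the retry-with-fallback behavior below are assumptions for illustration; real experiments (Chaos Monkey, Chaos Mesh) operate at the infrastructure level, in production, with explicit blast-radius limits.

```python
# Toy fault injection: wrap a call so it fails at a configurable rate,
# then check that the calling code survives the injected failures.

import random


def chaotic(func, failure_rate=0.2, rng=random.random):
    """Wrap func so that calls raise ConnectionError at the given rate."""
    def wrapper(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)
    return wrapper


def resilient_fetch(fetch, retries=3, fallback="cached-value"):
    """The behavior under test: retry a few times, then degrade gracefully."""
    for _ in range(retries):
        try:
            return fetch()
        except ConnectionError:
            continue
    return fallback
```

Running the experiment with `failure_rate=1.0` proves the fallback path actually works, which is precisely the hidden weakness such experiments are designed to expose before a real outage does.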
These techniques, combined into a coherent system, transform the production environment from a dangerous “minefield” into the world’s most valuable laboratory for learning and product improvement.
How do you create a sustainable quality strategy that effectively combines both philosophies?
Creating a strategy that harmoniously combines “shift-left” proactivity with “shift-right” reactive intelligence requires thinking about quality not as a series of separate activities, but as an integrated, continuous loop. The goal is to maximize the speed and quality of feedback at each stage of the software life cycle.
Step 1: Map your current process and identify feedback loops. Draw your current software delivery process, from idea to production. Ask yourself questions: Where and when do we first verify quality? How long is the feedback loop at each stage? Does the developer learn about a bug in seconds (from a linter in the IDE), minutes (from the CI pipeline), hours (from E2E testing) or weeks (from a customer ticket)? Where do we have “blind spots”?
Step 2: Invest in a solid “Shift-Left” foundation. You can’t think about “shift-right” if you don’t have the basics mastered.
- Build a culture of quality accountability in development teams. Quality is everyone’s job, not just the QA department’s.
- Create a fast and reliable CI/CD pipeline that includes unit tests, static analysis (SAST) and dependency analysis (SCA). This is your first and most important safety net.
- Invest in integration and contract test automation to ensure that system components work properly together.
Step 3: Gradually introduce “Shift-Right” practices. Start with the simplest and most valuable practices.
- Implement comprehensive observability: This is an absolute must. You need visibility into logs, metrics and traces from your production environment. Without it, you are flying blind.
- Start using feature flags: Introduce a library for managing feature flags, and begin by wrapping all risky new changes in flags. This gives you a “safety switch.”
- Experiment with canary releases: Instead of deploying to 100% of users, start with small, controlled rollouts to 1% or 5% of traffic and watch the metrics closely.
Step 4: Build bridges between worlds. The most important element is to create a flow of information from production back to the development teams.
- Analyze production data: Regularly (e.g., at sprint planning meetings) analyze monitoring data, error logs and customer requests. What can we learn from them? What new test cases should we add to our regression suite?
- Incorporate usage data into prioritization: Use analytics on which features are used most often to decide which areas of the application need the most attention in terms of testing and refactoring.
- Create Error Budgets: As part of SRE (Site Reliability Engineering) practice, define what level of errors in production is acceptable. If the team exceeds this budget, it must stop working on new features and focus on improving stability and quality.
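The arithmetic behind an error budget is simple enough to fit in a few lines: an availability SLO of 99.9% over a 30-day window leaves 0.1% of that window, about 43 minutes, as the budget. The helper names below are invented; the calculation itself is the standard SRE one.

```python
# Error-budget arithmetic: how much downtime an availability SLO allows,
# and how much of that budget has been spent.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)


def budget_remaining(slo, window_days, downtime_minutes_so_far):
    """Fraction of the error budget still unspent (negative = overspent)."""
    budget = error_budget_minutes(slo, window_days)
    return 1 - downtime_minutes_so_far / budget
```

A negative `budget_remaining` is the signal described above: the team stops shipping features and works on stability until the budget recovers.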
The role of the QA team in an integrated strategy: In this model, the role of the QA team is evolving again. They become quality engineers for the entire system. They help developers build better tests “on the left side,” while becoming experts in analyzing production data “on the right side.” They are the gatekeepers and facilitators of the entire quality loop.
What role does culture and collaboration (DevOps, DevSecOps) play in shift-left and shift-right strategies?
Implementing an integrated quality strategy is 80% cultural change and only 20% technological change. Tools are important, but without the right mindset and collaboration, they will remain just an expensive toy. DevOps and DevSecOps cultures are not something that exists alongside a quality strategy - they are its absolute and necessary foundation.
DevOps as a foundation for collaboration: The DevOps philosophy of breaking down walls between developers and operations is a prerequisite for successful implementation of “shift-left” and “shift-right.”
- Shared responsibility: DevOps promotes a “you build it, you run it” culture, in which the development team is responsible for its code from conception to operation in production (including any failures). This naturally motivates developers to care about quality and testability from the very beginning (“shift-left”).
- Common tools and processes: DevOps creates a shared, automated CI/CD pipeline that becomes the backbone for all quality practices. It is where testing, security scanning and deployment mechanisms are integrated.
- Enabling “shift-right”: It is the DevOps culture and its tools (infrastructure as code, deployment automation) that make advanced techniques such as canary releases and blue-green deployments, the heart of “shift-right,” possible at all.
DevSecOps as an extension to security: As we discussed in detail in our hands-on guide to DevSecOps, it is a natural extension of DevOps that integrates security into the same loop of collaboration and automation. In the context of quality strategy:
- Security as a dimension of quality: DevSecOps teaches that security is not a separate discipline, but one of the key attributes of product quality, just like performance or usability.
- “Shift-left security”: All early-stage security practices (SAST, SCA, threat modeling) are excellent examples of the “shift-left” philosophy in action.
- “Shift-right security”: Continuous security monitoring in production, penetration testing and incident response are, in turn, examples of “shift-right” practices.
Building a culture of quality: The ultimate goal is to create a culture in which quality is not the work of one person or one department, but a shared obsession of the entire organization.
- Blameless culture: When an error occurs in production, the goal is not to find a culprit, but to understand the systemic cause of the problem and implement mechanisms (e.g., new tests, better monitoring) to prevent it from recurring.
- Data-driven decisions: Instead of relying on opinions and hunches, product and quality decisions are made based on hard data from testing and production monitoring.
- Continuous improvement (Kaizen): The team regularly reviews its process and asks itself: “How can we build and deliver software of even higher quality, even faster?”
Without a healthy, collaborative DevOps culture, any attempt to implement a modern quality strategy will only be a superficial imitation, doomed to fail when confronted with the walls of old, siloed habits.
What metrics measure the effectiveness of an integrated quality strategy?
In order to know whether our investment in “shift-left” and “shift-right” is paying real dividends, we need to move away from traditional, often misleading QA metrics (such as number of tests performed or percentage of code coverage) and start measuring what really matters: speed, stability and business value. A modern QA strategy requires modern metrics that reflect business goals, not just team activity.
DORA (DevOps Research and Assessment) Metrics: The DORA Group (now part of Google) has studied thousands of companies over the years and has identified four key metrics that correlate most strongly with high performance in IT organizations. They are an excellent starting point for measuring the effectiveness of quality strategies.
- Deployment Frequency: How often do we deploy changes to production? High frequency is a sign of a healthy, automated and safe process.
- Lead Time for Changes: How long does it take from code commit to deployment in production? A short lead time means an efficient, bottleneck-free process.
- Change Failure Rate: What percentage of deployments cause a failure in production and require immediate intervention? A low rate indicates high quality and effective testing.
- Time to Restore Service (MTTR): How quickly can we restore a service after a failure? A short MTTR is a measure of system resilience.
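Two of the four DORA metrics can be computed directly from a deployment log. The record schema below, a pair of (succeeded, minutes-to-restore-if-failed), is invented for the example; in practice this data comes from the CI/CD system and the incident tracker.

```python
# Change Failure Rate and MTTR from a hypothetical deployment log.
# Each record: (succeeded: bool, minutes_to_restore: number, 0 if n/a).

def change_failure_rate(deployments):
    """Share of deployments that caused a production failure."""
    failures = sum(1 for ok, _ in deployments if not ok)
    return failures / len(deployments)


def mean_time_to_restore(deployments):
    """Average minutes to restore service, over failed deployments only."""
    times = [minutes for ok, minutes in deployments if not ok]
    return sum(times) / len(times) if times else 0.0
```

Tracking these two numbers sprint over sprint shows whether a quality investment is actually paying off, without counting tests or coverage at all.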
“Shift-Left” Metrics:
- Cost of Quality: An analysis of how much time and money is spent on preventing bugs (e.g., writing tests) versus fixing them (e.g., debugging, hotfixes). The goal is to shift investment toward prevention.
- Defect Escape Rate: What percentage of bugs were found at later stages than they could have been (e.g., how many bugs that unit tests could have caught “escaped” to the E2E test stage or to production)?
- Mean Time to Remediate (MTTR for vulnerabilities): How much time elapses between an automated scan detecting a security vulnerability and its remediation?
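The defect escape rate falls out of a simple count of bugs per detection stage: every bug found “to the right” of the stage that could have caught it counts as an escape. The stage names and the function below are assumptions for illustration.

```python
# Defect escape rate from bug counts per detection stage (illustrative).

STAGES = ["unit", "integration", "e2e", "production"]


def escape_rate(found_at, catchable_at="unit"):
    """Fraction of bugs found after the stage that could have caught them."""
    boundary = STAGES.index(catchable_at)
    escaped = sum(count for stage, count in found_at.items()
                  if STAGES.index(stage) > boundary)
    total = sum(found_at.values())
    return escaped / total
```

For example, with 60 bugs caught by unit tests, 20 in integration, 15 in E2E and 5 in production, 40% of unit-catchable bugs escaped; a falling value over time is direct evidence that the “shift-left” investment is working.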
“Shift-Right” Metrics:
- SLIs/SLOs (Service Level Indicators / Objectives): Hard, measurable indicators of reliability and performance in production (e.g., availability, latency) that define what “good quality” means from the user’s perspective.
- Error Budgets: A defined, acceptable level of unavailability or errors, which drives decisions on whether the team should focus on new features or on stability improvements.
- Business metrics (A/B testing): The impact of new functionality on key business metrics such as conversion, retention and user engagement.
- Customer satisfaction (CSAT, NPS): The ultimate measure of quality: are our customers satisfied with the product?
The shift to these metrics is a shift in thinking: from measuring “how busy we are” to measuring “what real value and quality we deliver.”
What does a sustainable quality strategy look like in the SDLC cycle?
The table below synthesizes how shift-left and shift-right practices break down into the different phases of the software development life cycle (SDLC), creating a complete, closed feedback loop.
| SDLC phase | "Shift-Left" (proactive) activities | "Shift-Right" (reactive/exploratory) activities | Key tools | Responsibility |
|---|---|---|---|---|
| Planning/Design | Collaborative requirements definition (BDD). Risk modeling. Defining acceptance criteria. | Analyzing customer feedback and usage data from previous versions to plan new features. | Jira, Confluence, Gherkin | Product Owner, Analyst, QA, Developer |
| Coding/Build | Unit tests. Code reviews. Static analysis (SAST, linters). Dependency scanning (SCA). | - | IDE, Git, SonarQube, Snyk | Developer, QA Engineer |
| Testing/Staging | Component and integration tests. Contract tests. E2E tests. Performance tests. | - | Jenkins, GitLab CI, Cypress, Selenium, Postman, JMeter | Developer, QA Engineer, DevOps Engineer |
| Deployment/Release | Deployment automation (CI/CD). Scanning of container images. IaC scanning. | Canary releases. Blue-green deployments. Incremental releases (feature flags). | Kubernetes, Terraform, Spinnaker | DevOps Engineer, SRE team |
| Operations/Monitoring | - | Observability (logs, metrics, traces). Real user monitoring (RUM). A/B testing. Chaos engineering. | Prometheus, Grafana, ELK, Jaeger, Flopsar Suite | SRE team, DevOps Engineer, Product Owner, Analyst |
How does ARDURA Consulting’s QA and testing expertise support the construction of a complete quality strategy?
At ARDURA Consulting, we understand that in today’s technological world, quality is not a luxury, but a requirement for survival. We also know that building a modern, integrated quality strategy that combines the best of “shift-left” and “shift-right” is a complex transformational challenge. As your strategic partner, we offer comprehensive support that draws on our years of experience and deep expertise.
1. Strategic quality advisory (QA Advisory): We act as a trusted advisor. We start with an audit of your current processes and tools, helping to identify weaknesses and bottlenecks. Then, together with you, we design a pragmatic, customized roadmap for QA transformation, from implementing the basics of automation to advanced testing practices in production.
2. Building and optimizing test processes: Our experts have in-depth knowledge of the entire spectrum of modern QA practices. We help implement and optimize:
- Test automation: We build from scratch or modernize existing test automation frameworks (UI, API, mobile), using the latest tools and technologies, including those based on artificial intelligence.
- Performance testing: We design and execute comprehensive load and performance tests, ensuring that your systems are scalable and reliable.
- Security testing: We help you implement a DevSecOps culture by integrating automated security scanning into your CI/CD pipeline.
3. Comprehensive testing services and flexible support: We understand that internal resources are often too scarce to execute an ambitious quality strategy. As part of our flexible cooperation models, we offer:
- Managed Testing Services: We take full responsibility for ensuring the quality of your products.
- Staff Augmentation: We provide world-class QA engineers, automation engineers and performance testers who integrate seamlessly into your teams, bringing the necessary competencies and supporting your daily work.
ARDURA Consulting’s goal is to help you build a culture and system in your organization where quality stops being a problem and becomes an integral part of the value creation process. We want to give you confidence that the software you deliver is not only bug-free, but also valuable, useful and loved by your customers.
If you want to take the quality in your company to the highest, world-class level, consult your project with us. Together we can build your competitive advantage.