Shift-left vs. shift-right: A DevOps mystery solved

The DevOps approach to developing software aims to speed applications into production by releasing small builds frequently as code evolves. As part of the continuous cycle of progressive delivery, DevOps teams are also adopting shift-left and shift-right principles to ensure software quality in these dynamic environments.

All this shifting may sound abstract, but I’ll explain how this quality assurance approach benefits DevOps methods and outcomes, and makes software more reliable.

In DevOps, what is shift-left? And what is shift-right?

To understand shift left and shift right, consider the software development cycle as a continuum, or infinity loop, from left to right. On the left side of the loop, teams plan, develop, and test software in pre-production. The main concern in pre-production on the left side of the loop is building software that meets design criteria. When teams release software into production on the right side of the loop, they make the software available to users. The concern in production is to maintain software that meets business goals and reliability criteria.

Shift-left is the practice of moving testing, quality, and performance evaluation earlier in the software development process, toward the “left” side of the DevOps lifecycle. This concept has become increasingly important as teams face pressure to deliver software faster and more frequently with higher quality. Shift-left speeds up development and reduces costs by detecting and addressing software defects earlier in the development cycle, before they get to production.
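Concretely, shifting left often means automated tests run on every commit, long before a release. A minimal Python sketch, using a hypothetical `parse_price` function that is still under development (both the function and the test are illustrative assumptions, not from any particular codebase):

```python
def parse_price(text: str) -> float:
    """Hypothetical function under development: parse '$19.99' -> 19.99."""
    return float(text.strip().lstrip("$ "))

def test_parse_price():
    # Shift-left: these checks run in CI on every commit,
    # catching defects long before the code reaches production.
    assert parse_price("19.99") == 19.99
    assert parse_price("$5.00") == 5.0
```

Run on each commit, a failing assertion surfaces the defect while the developer is still actively working on the feature.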

Likewise, shift-right is the practice of performing testing, quality, and performance evaluation in production, under real-world conditions. Shift-right methods ensure that applications running in production can withstand real user load while maintaining the same high levels of quality. With shift-right, DevOps teams test a built application to ensure performance, resilience, and software reliability. The goal is to detect and remediate issues that would be difficult to anticipate in development environments.

Both shift-left and shift-right testing have become important components of Agile software development, enabling teams to develop and release software incrementally and reliably while also testing it at various points in the lifecycle.

We’ve already had some conversations about shift-left, so let’s take a closer look at shift-right.

Why shift-right is important

With shift-right, teams can test code in an environment that mimics real-world production conditions that can’t be simulated in development. This practice enables teams to catch runtime issues before customers do. To automate part of the process, teams can use application programming interface (API) calls. Organizations can also apply shift-right testing to code that is configured or monitored in the field.

Similar to shift-left testing, the objective of shift-right testing is to fail small and fail fast. The assumption is that problems caught early, before a change reaches all users, are easier to solve than issues caught by customers in live production.

Once established, shift-right becomes part of the continuous feedback loop that characterizes DevOps and more closely aligns development and operations activities.

Shift-right testing is especially useful for organizations practicing progressive delivery, wherein developers release new software features incrementally to minimize the impact of unforeseen issues. Testing in a production-ready environment is a crucial final phase before declaring features ready for prime-time.

Why shift to shift-left and shift-right testing?

The shift-left/shift-right mentality differs in some important ways from how testing is handled in traditional “waterfall” methodologies.

The waterfall method follows a structured process in which requirements are translated into specifications and then into code in a series of handoffs. In this scenario, testing is usually left until a project is ready to be released into production. By waiting to test until the end, teams can miss issues that developers could quickly fix while they are still actively working on a feature. This approach wastes time, is error-prone, and often misses the opportunity to address production-environment issues before deploying.

Shift-left testing can reduce software defects and speed software’s time to market. In a shift-left scenario, teams incorporate testing early, often before any code is written, and throughout development. Rather than testing only for functionality, shift-left testing also checks that software adheres to the specifications created by the business.

On the other side of the equation, shift-right practices can better ensure production reliability by testing software in production and under real-world conditions. As a result, teams get more comprehensive testing coverage that better addresses user experience concerns.

Why shift-right is critical for microservice architecture

Testing in production is especially important for software built from microservices. The performance of microservices-based applications depends on the responsiveness of individual services, which makes testing in a simulated environment difficult. Shifting right enables teams to observe real-world forces and measure their impact.

Shift-right tests typically cover functionality, performance, failure tolerance, and user experience. Teams often automate such production-environment testing and translate feedback into technical specifications for developers. Testers can isolate issues to the greatest degree possible so teams can fix defects and tackle improvements in parallel. As an application becomes more stable, teams can start testing and optimizing performance.

Types of shift-right tests

A shift-right approach may enlist various types of test suites. Here are a few your team might find useful.

A/B testing. This method is commonly used in web design. Users are presented with one of two versions of a page, and the results are measured to determine which generates the greater response. This type of test is almost always conducted in a production environment so real-world feedback can be gathered.
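As a sketch of the mechanics, here is a minimal A/B assignment and measurement in Python. The hash-based bucketing scheme and function names are illustrative assumptions, not any particular vendor’s API; hashing (rather than random choice) is a common way to keep a user in the same variant across visits:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-page") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing user_id + experiment keeps each user in the same
    variant on every visit, which production A/B tests require.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

def measure(results: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute conversion rate per variant from (variant, converted) events."""
    counts = {"A": [0, 0], "B": [0, 0]}  # variant -> [conversions, total]
    for variant, converted in results:
        counts[variant][1] += 1
        counts[variant][0] += int(converted)
    return {v: (c / t if t else 0.0) for v, (c, t) in counts.items()}
```

In production, real user events would stream into `measure` instead of a prepared list.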

Synthetic monitoring. Another variety of shift-right testing is synthetic monitoring, which is the use of software tools to emulate the paths users might take when engaging with an application. Synthetic monitoring can automatically keep tabs on application uptime and tell you how your application responds to typical user behavior. It uses scripts to generate simulated user behavior for various scenarios, geographic locations, device types, and other variables.
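A minimal sketch of one synthetic check in Python. The `fetch` callable is an injected stand-in for a real HTTP client (an assumption made so the probe logic can be exercised without a live network); a real monitor would script full user journeys, not single requests:

```python
import time

def check_endpoint(fetch, url: str, timeout_s: float = 5.0) -> dict:
    """Run one synthetic check: call the injected `fetch` client and
    record status and latency, as a synthetic-monitoring agent would."""
    start = time.monotonic()
    try:
        status = fetch(url, timeout_s)       # returns an HTTP status code
        ok = 200 <= status < 400
    except Exception:                         # timeouts, DNS failures, etc.
        status, ok = None, False
    return {"url": url, "ok": ok, "status": status,
            "latency_ms": (time.monotonic() - start) * 1000}

def uptime(results: list[dict]) -> float:
    """Fraction of successful checks: the headline uptime number."""
    return sum(r["ok"] for r in results) / len(results) if results else 0.0
```

Scheduling such checks from several geographic regions, device profiles, and scenarios is what turns a probe like this into synthetic monitoring.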

Chaos testing. With chaos engineering, developers intentionally “break” the application by introducing errors to determine how well it recovers from disruption. DevOps and IT teams set up monitoring tools so they can see precisely how the application responds to different types of stresses. This test is usually performed in a controlled production environment to minimize the impact on mission-critical systems.
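A toy, in-process illustration of the fault injection behind chaos testing (real chaos tools work at the infrastructure level, killing instances or degrading networks; the class, rates, and retry policy here are assumptions for illustration only):

```python
import random

class ChaosProxy:
    """Wrap a service call and inject failures at a configurable rate,
    so teams can observe how the application copes with disruption."""

    def __init__(self, service, failure_rate: float, rng=None):
        self.service = service
        self.failure_rate = failure_rate
        self.rng = rng or random.Random()

    def call(self, *args, **kwargs):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("chaos: injected failure")
        return self.service(*args, **kwargs)

def resilient_call(proxy, *args, retries: int = 3, **kwargs):
    """What we hope the application does: tolerate injected faults."""
    for attempt in range(retries):
        try:
            return proxy.call(*args, **kwargs)
        except ConnectionError:
            if attempt == retries - 1:
                raise
```

The monitoring tools the paragraph mentions would watch how often `resilient_call` succeeds, and how latency shifts, as the failure rate climbs.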

Canary releases. This strategy is named for the canaries miners once lowered into coal mines to detect toxic gases. Technology has thankfully rendered this inhumane tactic obsolete, but the term survives to describe a slow rollout of changes to a small subset of instances for testing before applying them to the full infrastructure.
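One common way to select a canary cohort is deterministic hash-based bucketing, so the same small slice of users stays on the new version throughout the rollout. A minimal Python sketch (the percentages and function names are illustrative assumptions, not a real routing API):

```python
import hashlib

def is_canary(user_id: str, percent: int) -> bool:
    """Route `percent`% of users to the canary build.

    Hash-based bucketing keeps each user on the same version for the
    whole rollout, so a 5% canary sees a stable cohort of real traffic.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def route(user_id: str, percent: int) -> str:
    return "canary" if is_canary(user_id, percent) else "stable"
```

Raising `percent` in steps (1%, 5%, 25%, 100%) while watching error rates is the slow rollout the paragraph describes.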

Blue-green deployment. Closely related to the controlled, iterative canary method, a blue-green deployment runs two nearly identical production environments, shifting users (real or synthetic) between the two as teams make small changes to one or the other. This practice is important to shift-right methodology because it minimizes downtime and provides a mechanism for rapid rollback should something go wrong with the latest version.
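The switch-and-rollback mechanics can be sketched in a few lines of Python (a toy router for illustration, not a real deployment tool; the environment names and versions are assumed):

```python
class BlueGreenRouter:
    """Two identical environments, one live at a time, with instant rollback."""

    def __init__(self):
        self.environments = {"blue": "v1.0", "green": "v1.0"}
        self.live = "blue"

    @property
    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str) -> None:
        # The new version goes to the idle environment; live traffic is untouched.
        self.environments[self.idle] = version

    def cut_over(self) -> None:
        # Shift users to the freshly deployed environment.
        self.live = self.idle

    def rollback(self) -> None:
        # Something went wrong: point traffic back at the previous environment.
        self.live = self.idle
```

Because the previous version keeps running in the now-idle environment, `rollback` is a pointer flip rather than a redeploy, which is where the minimal-downtime guarantee comes from.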

The application security dividend of shift-right and shift-left

An important benefit of shifting right is improved application security. “Scanning a static image, either in a repository or in a development environment, can’t give you the same rich insights you can get if you observe the application running in production,” a Dynatrace report on security evolution in the cloud notes. “For example, you don’t get to see what libraries are actually called, how they are used, whether a process is exposed to the Internet, or whether a process interacts with sensitive corporate data.”

The rapidly proliferating use of software containers has complicated aspects of cybersecurity. Containers can obscure the processes running in them, and attackers even containerize exploits. Production testing exposes the behavior of container-based software, even if the contents of containers are obscured. Shift-right testing can also be used to test for the presence of “zero-day exploits,” which are attacks that haven’t been seen before.

From a shift-left perspective, security testing during development helps identify vulnerabilities as early in the life cycle as possible, when they are easiest to remediate.

Shift-right done right with full-stack monitoring

Automated full-stack monitoring is an important tool in shift-right testing. It gives developers, operations teams, and testers a way to discover and monitor all requests and processes from all services across sprawling and complex multi-cloud applications, from a single interface. Testers can push deployment information and metadata to the monitoring environment using scripts and track builds, revisions, and configuration changes. The better a platform understands the full context of an issue, the better it can detect root causes, flag issues for the proper parties, and even implement self-healing measures.

Whether your organization has shifted testing left into the development phase or right into production, or simply wants to monitor performance in the field, an AI-driven, full-stack observability solution can take your software development to the next level.

To learn more about how Dynatrace helps developers automate testing and release, join us for the on-demand performance clinic, Why Devs Love Dynatrace – Episode 3 – Automated Release Comparison.


Dynatrace Inc. published this content on 31 January 2022 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 31 January 2022 19:51:06 UTC.
