Armchair Architects: Exploring the relationship between Cost and Architecture

Welcome to season two of the Armchair Architects series! We received great feedback – and lots of good questions and discourse – in season 1, so we’re back for another ten posts and videos from now through mid-Summer.

Uli Homann and Eric Charran continue to be our resident experts alongside David Blank-Edelman, host of the Azure Enablement Show. This season will also feature customer speakers and other activities leading up to the second Backstage Tour virtual event in September.

This post focuses on cost considerations when designing your cloud architecture. Too often we hear about architects or developers striving to design the perfect architecture, only to realize it’s too expensive once it’s in production. The move to cloud makes it possible to rethink costs at the point of design, because you can still refactor your infrastructure choices – something that is much harder on-premises, where the infrastructure is already bought and paid for. That flexibility is one of the key benefits of OpEx over CapEx models.

You can also take advantage of modularity: some application components (or workloads) require higher availability or security than others, so you can optimize those aspects where they matter most and save costs everywhere else.

Cloud platforms provide configurable elasticity to grow and shrink infrastructure based on demand. Azure currently has over 70 regions and relationships with partners, like AT&T, who have edge zones that make it possible to scale out to whichever audience or use case your app must support. This flexibility doesn’t come only in the form of actual provisioning, but also in planning the rollout of your app. You can use cost and capacity calculators to determine timing, which could impact the overall cost/benefit structure of your app.

These benefits can be awesome, but you still need to determine what your OpEx expenditures will be ahead of time. Work forward from the design to project the operating costs of the solution. It’s almost like a negotiation: if you want x performance or y reliability, it’s going to cost you z amount.
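
To make that negotiation concrete, here is a minimal sketch of projecting monthly OpEx from a solution design. The component names and hourly rates are illustrative placeholders, not real Azure pricing.

```python
# Minimal sketch: projecting monthly OpEx from a solution design.
# All component names and prices are illustrative placeholders, not real rates.

MONTHLY_HOURS = 730  # average hours in a month

design = {
    # component: (instance_count, hourly_rate_usd)
    "web_tier_vm": (4, 0.20),
    "api_tier_vm": (6, 0.20),
    "database":    (1, 1.50),
    "monitoring":  (1, 0.10),
}

def project_monthly_cost(components: dict[str, tuple[int, float]]) -> float:
    """Sum instance_count * hourly_rate * hours for every component."""
    return sum(count * rate * MONTHLY_HOURS for count, rate in components.values())

if __name__ == "__main__":
    print(f"Projected OpEx: ${project_monthly_cost(design):,.2f}/month")
```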

Scaling out isn’t instantaneous or automatic

Architects also need to consider the time it takes to scale out resources. You need to think about the configuration and control planes for each component so you can hit the temporal milestones tied to the predicted load. For instance, if I’m in a seasonal business where orders spike around the Super Bowl, I plan a stepwise scale-out so capacity is in place before the demand arrives. In today’s world, you still need to look at each component, flip some switches, and turn some dials to do that – but in a way that limits the risk of scaling out (or in) prematurely.
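
As a rough illustration of that stepwise plan, the sketch below walks a workload through a scheduled capacity ramp ahead of a known spike. The dates, instance counts, and the apply_capacity helper are hypothetical, not a real autoscale API.

```python
# Minimal sketch: stepwise, schedule-driven scale-out ahead of a known demand spike.
# Dates, capacities, and apply_capacity() are illustrative assumptions.
from datetime import date

# Planned capacity steps leading up to (and back down after) a Super Bowl-style spike.
SCALE_PLAN = [
    (date(2024, 2, 1),  4),   # baseline
    (date(2024, 2, 8),  8),   # pre-warm a week ahead
    (date(2024, 2, 11), 16),  # game day
    (date(2024, 2, 13), 4),   # scale back in
]

def desired_capacity(today: date) -> int:
    """Return the most recent planned capacity whose start date has passed."""
    capacity = SCALE_PLAN[0][1]
    for start, instances in SCALE_PLAN:
        if today >= start:
            capacity = instances
    return capacity

def apply_capacity(component: str, instances: int) -> None:
    # Placeholder: in practice this would call your platform's autoscale or IaC tooling.
    print(f"Setting {component} to {instances} instances")

if __name__ == "__main__":
    apply_capacity("order-api", desired_capacity(date.today()))
```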

Another key consideration is that once you’ve scaled out one component, you may put additional pressure on other components (e.g., a persistent data tier). You need to understand how individual components scale in relation to each other. For example, does the database need to scale when the compute tier does? And if you scale the overall solution, your monitoring capacity will need to grow with it.

The concept of scale units helps with this planning. A scale unit describes all the resources you need (management, application, compute, storage, and so on) to satisfy demand for a specific amount of capacity. You can then take advantage of the dynamic scaling capabilities built into cloud services by triggering the necessary scaling in each tier, but in an orchestrated manner.

The scale unit concept means you plan for scaling dependent application components together according to a predefined methodology, rather than attempting to configure them in real time. This gives you specific slices of capacity that can be scaled out when the situation demands it, and it allows you to scale out the solution symmetrically.

For instance, if you’re in the messaging business (e.g., email), then for every 5,000 users who need a specific amount of storage, you may need a corresponding number of Exchange servers and accompanying monitoring capabilities. You take that configuration as a single scale unit for 5,000 users and “stamp” it out for however many multiples of 5,000 users you have. You can do the same with whatever unit of measure fits your application.
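
One way to model this is to describe the unit once and compute how many stamps a given user base needs. The ScaleUnit definition and resource counts below are illustrative, not a sizing recommendation.

```python
# Minimal sketch: modelling a scale unit and "stamping" it out per 5,000 users.
from dataclasses import dataclass
from math import ceil

@dataclass(frozen=True)
class ScaleUnit:
    users_served: int
    mailbox_servers: int
    storage_tb: int
    monitoring_agents: int

# Illustrative definition of one stamp's worth of resources.
MAIL_UNIT = ScaleUnit(users_served=5_000, mailbox_servers=3, storage_tb=10, monitoring_agents=1)

def stamps_needed(total_users: int, unit: ScaleUnit) -> int:
    """How many copies of the scale unit cover the user base."""
    return ceil(total_users / unit.users_served)

if __name__ == "__main__":
    n = stamps_needed(23_000, MAIL_UNIT)
    print(f"{n} stamps -> {n * MAIL_UNIT.mailbox_servers} servers, "
          f"{n * MAIL_UNIT.storage_tb} TB storage, {n * MAIL_UNIT.monitoring_agents} monitoring agents")
```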

What do you do about unexpected demand?

So, you might know what your email service will do over time, but what about spiky, unexpected demand – like a flash sale or a breaking news event? How do you ensure that a spike like that won’t bring down the entire service?

Implementing some core architecture patterns will help here. Applying the throttling pattern keeps your architecture from becoming overwhelmed: depending on your implementation, throttling safeguards the key experiences while shedding lower-priority operations. You may turn away some customers in the process, but you won’t bring down the overall application along the way.
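
A minimal sketch of that throttling behavior, using an in-memory request window with illustrative thresholds and priority labels, might look like this:

```python
# Minimal sketch of the throttling pattern: shed low-priority work under load
# so key experiences keep working. Limits and priority names are illustrative.
import time
from collections import deque

class Throttle:
    def __init__(self, max_requests_per_second: int):
        self.limit = max_requests_per_second
        self.window = deque()  # timestamps of requests admitted in the last second

    def allow(self, priority: str) -> bool:
        now = time.monotonic()
        while self.window and now - self.window[0] > 1.0:
            self.window.popleft()
        # Always admit critical traffic (e.g., checkout); shed the rest once saturated.
        if priority != "critical" and len(self.window) >= self.limit:
            return False
        self.window.append(now)
        return True

throttle = Throttle(max_requests_per_second=100)

def handle_request(priority: str) -> str:
    if not throttle.allow(priority):
        return "429 Too Many Requests"  # this request is shed, but the app stays up
    return "200 OK"
```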

You also need to architect in the notion of resiliency, so that if a spike does occur and a component or region does go down, the user experience is minimally impacted.
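
One common building block for that resiliency is retrying transient failures with exponential backoff, so a brief blip degrades gracefully instead of failing outright. This sketch uses illustrative attempt counts and delays.

```python
# Minimal sketch: retry with exponential backoff and jitter for transient failures.
# Attempt counts and delays are illustrative, not tuned recommendations.
import random
import time

def call_with_retries(operation, attempts: int = 4, base_delay: float = 0.5):
    """Retry a flaky operation, backing off (with jitter) between attempts."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let a fallback or cached response take over
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```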

Avoiding geo-redundancy naivety

It’s easy to assume that cloud hyperscaler zone and region redundancy will keep applications available in the event of a catastrophic failure. In some instances, the cloud provider will fail over to other zones within the region. But an escalated catastrophe – whether in the application, the infrastructure, or the datacenter itself – may require failing over to another region entirely. In these scenarios, architects must be deliberate about how their application behaves under those circumstances and how users get routed to the closest available region.

A prepared standard operating procedure helps here. For example, all traffic from the impacted region should be rerouted to the next closest (physically) region. Even better is to have implemented active-active management of the resources, with a hot link between the regions. In that case, you use Azure Traffic Manager or Azure Front Door to manage the load balancing: traffic shaping and traffic management keep the load balanced between regions, which lets you avoid a hard “failover” because the other regions simply take over. This can be more expensive unless you already have a global footprint.
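
To illustrate the routing idea (not the actual Traffic Manager or Front Door configuration), here is a sketch that sends users to the closest healthy region. The region names, proximity table, and health probe are assumptions for the example.

```python
# Minimal sketch: route users to the closest healthy region, as a global traffic
# manager would. Regions, proximity ordering, and health checks are illustrative.

# Precomputed "closeness" per user geography; a real service uses latency probes.
PROXIMITY = {
    "north-america": ["westus3", "eastus2", "westeurope"],
    "europe":        ["westeurope", "eastus2", "westus3"],
}

def is_healthy(region: str) -> bool:
    # Placeholder health probe; a real implementation checks endpoint status.
    return region != "westus3"  # simulate an outage in westus3

def route(user_geo: str) -> str:
    """Send the user to the closest region that is currently healthy."""
    for region in PROXIMITY[user_geo]:
        if is_healthy(region):
            return region
    raise RuntimeError("No healthy region available")

print(route("north-america"))  # -> eastus2 while westus3 is impaired
```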

But how do you keep the costs down in an active-active scenario? It depends on the SLA and business expectations. If the time needed to bring up a passive environment doesn’t materially affect the experience, an active-passive deployment may make more sense and cost less.

Summary

So, what’s the takeaway? Cost is now an active ingredient in planning the workload – because it can be. Take advantage of the cloud by planning for elasticity correctly and responsibly the first time.

Look for the next episode in season two of Armchair Architects. We’ll be talking about how to become a successful cloud architect. In the meantime, dive a little deeper into this topic by watching the video below.
