Foundational best practices for securing your cloud deployment

As covered in our recent blog posts, the security foundations blueprint is here to curate best practices for creating a secure Google Cloud deployment and to provide a Terraform automation repo for adapting, adopting, and deploying those best practices in your environment.


In today’s blog post, we’re diving a little deeper into the security foundations guide to highlight several best practices that security practitioners and platform teams can use when setting up, configuring, deploying, and operating a security-centric infrastructure for their organization.

The best practices described in the blueprint are a combination of preventative controls and detective controls, and are organized as such in the step-by-step guide. The first topical sections cover preventative controls, which are implemented through architecture and policy decisions. The next set of topical sections covers detective controls, which use monitoring capabilities to look for drift and for anomalous or malicious behavior as it happens.


If you want to follow along in the full security foundations guide as you read this post, we are covering sections 4-11 of the Step-by-step guide (chapter II).

Preventative controls

The first several topics cover how to protect your organization and prevent potential breaches using both programmatic constraints (policies) and architecture design. 

Organization structure

One of the benefits of moving to Google Cloud is the ability to manage your resources, their organization, and their hierarchy in one place. The best practices in this section give you a resource hierarchy strategy that does just that. As implemented, it provides isolation and allows for segregation of policies, privileges, and access, which helps reduce the risk of malicious activity or error. And while this sounds like more work, the capabilities in Google Cloud make it possible while easing administrative overhead.

The step-by-step guide’s recommended organization structure

The best practices include:

  • using a single organization for top-level ownership of resources,
  • implementing a folder hierarchy to group projects into related groups (prod, non-prod, dev, common, bootstrap) where you can create segmentation and isolation, and subsequently apply security policies and grant access permissions, and
  • establishing organizational policies that define resource configuration constraints across folders and projects.
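To make the hierarchy and policy recommendations concrete, here is a minimal Terraform sketch of two top-level folders plus one organization policy constraint. The organization ID, folder names, and the specific constraint are illustrative placeholders, not values taken from the blueprint repo:

```hcl
# Folder hierarchy sketch: prod and non-prod folders under the organization.
# The organization ID below is a placeholder.
resource "google_folder" "prod" {
  display_name = "prod"
  parent       = "organizations/123456789012"
}

resource "google_folder" "non_prod" {
  display_name = "non-prod"
  parent       = "organizations/123456789012"
}

# Example organization policy: block service account key creation org-wide.
resource "google_organization_policy" "no_sa_keys" {
  org_id     = "123456789012"
  constraint = "iam.disableServiceAccountKeyCreation"

  boolean_policy {
    enforced = true
  }
}
```

Because organization policies set at the organization or folder level are inherited by everything below them, defining them once at the top of the hierarchy is what keeps the administrative overhead low.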

Resource deployment

Whether you are rolling out foundational or infrastructure resources, or deploying an application, the way you manage your deployment pipeline can provide extra security, or create extra risk. The best practices in this section show you how to set up review, approval, and rollback processes that are automated and standardized. They limit the amount of manual configuration, and therefore, reduce the possibility of human error, drive consistency, allow revision control, and enable scale. This allows for governance and policy controls to help you avoid exposing your organization to security or compliance risks. 

The best practices described include:

  • codifying the Google Cloud infrastructure into Terraform modules, which provides an automated way of deploying resources,
  • using private Git repositories for the Terraform modules,
  • initiating deployment pipeline actions with policy validation and approval stages built into the pipeline, and
  • deploying foundations, infrastructure, and workloads through separate pipelines and access patterns.
Access patterns outlined in the security foundations blueprint
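A sketch of what consuming one of those private Terraform modules might look like in a pipeline-driven deployment; the repository URL, module path, and input variables are hypothetical placeholders:

```hcl
# Consume a versioned module from a private Git repository.
# Pinning to a tag (ref=v1.2.0) keeps deployments reproducible and reviewable.
module "network" {
  source = "git::ssh://git@example.com/org/terraform-modules.git//network?ref=v1.2.0"

  project_id = var.project_id
  env        = "prod"
}
```

Version-pinned module sources are what make the review, approval, and rollback stages of the pipeline meaningful: rolling back is just a change to the `ref`.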

Authentication and authorization

Many data breaches stem from incorrectly scoped or over-granted privileges. Controlling access precisely allows you to keep your deployments secure by permitting only certain users access to your protected resources. This section delivers best practices for authentication (validating a user’s identity) and authorization (determining what that user can do) in your cloud deployment. Recommendations include managing user credentials in one place (for example, Google Cloud Identity or Active Directory) and enabling syncs so that the removal of access and privileges for suspended or deleted user accounts is propagated appropriately.

This section also reinforces the importance of using multi-factor authentication (MFA) and phishing-resistant security keys (covered in more depth in the Organization structure chapter). Privileged identities especially should use multi-factor authentication, and consider adding multi-party authorization as well: because of their access, they are frequent targets and thus at higher risk.

Throughout all the best practices in this section, the overarching theme is the principle of least privilege: only necessary permissions are granted. No more, no less.

A few more of the best practices include:

  • maintaining user identities automatically with Cloud Identity federated to your on-prem Active Directory (if applicable) as the single source of truth,
  • using single sign-on (SSO) for authentication,
  • establishing privileged identities to provide elevated access in emergency situations, and
  • using Groups with a defined naming convention, rather than individual identities, to assign permissions with IAM.
Additional video resource on how to use Groups with IAM
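The group-based IAM binding in the last bullet can be sketched in Terraform as follows; the project ID, role, and group naming convention are illustrative placeholders:

```hcl
# Grant a role to a group rather than to individual users.
# Membership changes are then handled in Cloud Identity, not in IAM.
resource "google_project_iam_member" "developers" {
  project = "example-dev-project"
  role    = "roles/container.developer"
  member  = "group:grp-gcp-developers@example.com"
}
```

Binding roles to groups means onboarding and offboarding never touches Terraform: adding or removing a user from the group is the only change needed.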

Networking 

As your network is the communication layer between your resources and to the internet, making sure it is secure is critical in preventing external (also known as north-south) and internal (east-west) attacks. This section of the step-by-step guide goes into how to secure and segment your network so that services that store highly sensitive data are protected. It also includes architecture alternatives based on your deployment patterns. 

The guide goes deeper to show how best to configure the networking of your cloud deployment so that resources can communicate with each other, with your on-prem environment, and with the public internet, all while maintaining security and reliability. Keeping network policy and control centralized also makes these best practices easier to implement and manage.


This section is robust in providing detailed, opinionated guidance, so if you would like to dive in further to this topic, head to section 7 of the full step-by-step guide to learn more. A few of the high-level best practices in this section are:

  • centralizing network policies and control through use of Shared VPC, or a hub-and-spoke architecture if that fits your use case,
  • separating services that contain sensitive data into separate Shared VPC networks (base and restricted) and using separate projects, IAM, and a VPC Service Controls perimeter to limit data transfers in or out of the restricted network,
  • using Dedicated Interconnect (or alternatives) to connect on-prem with Google Cloud and using Cloud DNS to communicate with on-prem DNS servers,
  • accessing Google Cloud APIs from the cloud and from on-premises through private IP addresses, and
  • establishing tag-based firewall rules to control network traffic flows.
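A minimal Terraform sketch of the Shared VPC attachment and a tag-based firewall rule from the bullets above; all project IDs, network names, CIDR ranges, and tags are placeholders:

```hcl
# Designate a host project and attach a service project to its Shared VPC.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "example-net-host"
}

resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "example-workload-project"
}

# Tag-based firewall rule: allow internal HTTPS only to instances
# carrying the "allow-https" network tag.
resource "google_compute_firewall" "allow_https_internal" {
  name    = "fw-allow-https-internal"
  project = "example-net-host"
  network = "base-shared-vpc"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["10.0.0.0/8"]
  target_tags   = ["allow-https"]
}
```

Because the firewall rule lives in the host project, the central networking team keeps control of traffic policy even as workload teams deploy into their own service projects.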

Key and secret management

When you are trying to figure out where to store keys and credentials, it is often a trade-off between level of security and convenience. This section outlines a secure and convenient method for storing keys, passwords, certificates, and other sensitive data required for your cloud applications using Cloud Key Management Service and Secret Manager. Following these best practices ensures that secrets are kept out of code, that the lifecycles of your keys and secrets are managed properly, and that the principles of least privilege and separation of duties are adhered to.

The best practices described include:

  • creating, managing, and using cryptographic keys with Cloud Key Management Service,
  • storing and retrieving all other general-purpose secrets using Secret Manager, and
  • using prescribed hierarchies to separate keys and secrets between the organization and folder levels.
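The split between the two services can be sketched in Terraform like this: a KMS key with automatic rotation for cryptographic operations, and a Secret Manager secret for a general-purpose credential. Names, locations, and the rotation period are illustrative placeholders:

```hcl
# Cryptographic keys go in Cloud KMS, with rotation enabled.
resource "google_kms_key_ring" "app" {
  name     = "app-keyring"
  location = "us-central1"
}

resource "google_kms_crypto_key" "app" {
  name            = "app-key"
  key_ring        = google_kms_key_ring.app.id
  rotation_period = "7776000s" # 90 days
}

# General-purpose secrets (passwords, API tokens) go in Secret Manager.
resource "google_secret_manager_secret" "db_password" {
  secret_id = "db-password"

  replication {
    auto {}
  }
}
```

Note that only the secret's container is defined in Terraform; the secret value itself should be added as a version out of band, so it never lands in code or in Terraform state history.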

Logging

Logs are used by diverse teams across an organization. Developers use them to understand what is happening as they write code, security teams use them for investigations and root cause analysis, administrators use them to debug problems in production, and compliance teams use them to support regulatory requirements. The best practices in this section keep all those use cases in mind to ensure the diverse set of users are supported with the logs they need.

The guide recommends a few best practices around logs including:

  • centralizing your collection of logs in an organization-level log sink project,
  • unifying monitoring data at the folder-level,
  • ingesting, aggregating, and processing logs with the Cloud Logging API and the Cloud Log Router, and
  • exporting logs from sinks to Cloud Storage for audit purposes, to BigQuery for analysis, and/or to a SIEM through Cloud Pub/Sub.
Logging structure described in the step-by-step guide
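The organization-level sink from the first and last bullets might look like the following Terraform sketch; the organization ID, destination project, dataset, and filter are placeholders:

```hcl
# Organization-level log sink exporting audit logs to BigQuery for analysis.
# include_children = true pulls in logs from every folder and project below.
resource "google_logging_organization_sink" "audit_to_bq" {
  name             = "audit-logs-to-bq"
  org_id           = "123456789012"
  include_children = true
  destination      = "bigquery.googleapis.com/projects/example-logging/datasets/audit_logs"
  filter           = "logName:\"cloudaudit.googleapis.com\""
}
```

One operational detail worth remembering: the sink gets its own writer service account identity, which must be granted write access on the destination dataset before logs will flow.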

Detective controls

The terminology “detective controls” might evoke the sense of catching drift and malicious actions as they take place or just after. But in fact, these latter sections of the step-by-step guide cover how to prevent attacks as well, using monitoring capabilities to detect vulnerabilities and misconfigurations before they have an opportunity to be exploited.


Much like a detective trying to solve a crime may whiteboard a map of clues, suspects, and their connections, this section covers how to detect and bring together possible infrastructure misconfigurations, vulnerabilities, and active threat behavior into one pane of glass. This can be achieved through a few different options: using Google Cloud’s Security Command Center Premium; using native security analytics capabilities that leverage BigQuery and Chronicle; and integrating with third-party SIEM tools, if applicable for your deployment.

The guide lists several best practices including:

  • aggregating and managing security findings with Security Command Center Premium to detect and alert on infrastructure misconfigurations, vulnerabilities, and active threat behavior,
  • using logs in BigQuery to augment detection of anomalous behavior by Security Command Center Premium, and
  • integrating your enterprise SIEM product with Google Cloud Logging.
Security Command Center in the Cloud Console
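For the SIEM-integration bullet, one way to stream Security Command Center findings out to Pub/Sub is a notification config; the organization ID, topic, and filter below are illustrative placeholders:

```hcl
# Stream active Security Command Center findings to a Pub/Sub topic,
# from which a SIEM can ingest them.
resource "google_scc_notification_config" "active_findings" {
  config_id    = "active-findings"
  organization = "123456789012"
  description  = "Stream active findings to the SIEM ingestion topic"
  pubsub_topic = "projects/example-logging/topics/scc-findings"

  streaming_config {
    filter = "state = \"ACTIVE\""
  }
}
```

The filter narrows the stream so the SIEM only receives findings that still need attention, rather than every historical or resolved finding.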

Billing setup

Since your organization’s cloud usage flows through billing, setting up billing alerts and monitoring your billing records can work as an additional mechanism for enhancing governance and security by detecting unexpected consumption.


The supporting best practices described include:

  • setting up billing alerts on a per-project basis to warn at key thresholds (50%, 75%, 90%, and 95%), and
  • exporting billing records to a BigQuery dataset in a Billing-specific project.
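The threshold scheme above can be sketched as a Terraform budget; the billing account ID, budget amount, and display name are placeholders:

```hcl
# Budget with alerts at the blueprint's recommended thresholds.
resource "google_billing_budget" "project_budget" {
  billing_account = "000000-000000-000000"
  display_name    = "example-project budget"

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "1000"
    }
  }

  threshold_rules { threshold_percent = 0.5 }
  threshold_rules { threshold_percent = 0.75 }
  threshold_rules { threshold_percent = 0.9 }
  threshold_rules { threshold_percent = 0.95 }
}
```

The escalating thresholds give teams progressively louder warnings, so unexpected consumption, whether a misconfiguration or a compromised workload, surfaces well before the budget is exhausted.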

If you want to learn more about how to set up billing alerts, export your billing records to BigQuery, and more, you can also check out the Beyond Your Bill video series.

Bringing it all together and next steps

This post focused on the best practices provided in the blueprint for building the foundational infrastructure for your cloud deployment, including preventative and detective controls. 

While the best practices are many, they can be adopted, adapted, and deployed efficiently using the templates provided in the Terraform automation repository. And of course, the non-abbreviated details of implementing these best practices are available in the security foundations guide itself. Go forth, deploy, and stay safe out there.