
Google Cloud Spanner


Last updated on March 16, 2026

Google Cloud Spanner Cheat Sheet

  • A fully managed, horizontally scalable relational database service with strong consistency, global availability, and multi-model support.

 

Features

  • SLA availability up to 99.999% for multi-regional instances with 10x less downtime than four nines.
  • Provides transparent, synchronous replication across region and multi-region configurations.
  • Optimizes performance by automatically sharding data based on request load and data size, so you spend less time managing database scaling.
  • You can run instances in a regional configuration or a multi-region configuration, where your database can survive a regional failure.
  • All tables must have a declared primary key (PK), which can be composed of multiple table columns.
  • Can make schema changes like adding a column or adding an index while serving live traffic with zero downtime.
  • Multi-model database: Supports relational, graph, key-value, and vector search workloads on a single database.
  • Spanner Graph: Query complex relationships and connections in your data using graph processing.
  • Built-in full-text search and vector search for semantic search and AI applications.
  • TrueTime: Global distributed clock that guarantees strong, external consistency across regions.
  • Zero-ETL integration with Vertex AI for building AI-enabled applications directly on operational data.
  • Data Boost: On-demand, isolated compute resources for running analytical queries without impacting transactional workloads.
  • Geo-partitioning: Place data closer to users for lower latency while maintaining global consistency.
  • Three editions available:
    • Standard edition: Regional configurations with core capabilities
    • Enterprise edition: Multi-model capabilities with enhanced operational simplicity
    • Enterprise Plus edition: Highest availability (99.999% SLA), performance, and compliance for mission-critical workloads
  • Database Center: Centralized fleet management with performance and security recommendations.
  • Point-in-time recovery (PITR) for operational peace of mind.
  • Migration tools for sharded MySQL and Cassandra workloads with minimal downtime.
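
The primary-key rule above can be sketched in GoogleSQL DDL. The table and column names below are hypothetical, but the syntax follows Spanner's documented schema language, including a composite primary key and an interleaved child table:

```sql
-- Every Spanner table declares a primary key, which may be composite.
CREATE TABLE Singers (
  SingerId  INT64 NOT NULL,
  FirstName STRING(1024),
  LastName  STRING(1024),
) PRIMARY KEY (SingerId);

-- Albums uses a composite primary key and is interleaved in Singers,
-- physically co-locating each singer's albums with the parent row.
CREATE TABLE Albums (
  SingerId INT64 NOT NULL,
  AlbumId  INT64 NOT NULL,
  Title    STRING(MAX),
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;
```

Because the child table's key is prefixed with the parent's key, Spanner can keep related rows in the same split, which matters once automatic sharding kicks in.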

 

Editions

  • Cloud Spanner Standard:
    • Regional (single-region) configurations
    • Core database capabilities
  • Cloud Spanner Enterprise:
    • Multi-model capabilities (graph, search, vector)
    • Enhanced operational simplicity
    • Regional, dual-region, and multi-region configurations
  • Cloud Spanner Enterprise Plus:
    • Highest availability: 99.999% SLA
    • Highest performance and compliance standards
    • Ideal for mission-critical, global workloads
    • Advanced governance and security features

Compute capacity is provisioned in processing units or nodes (1 node = 1000 processing units). Capacity is billed per replica.
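
The node/processing-unit relationship above is simple enough to sketch. This is a minimal illustration of the stated conversion (1 node = 1000 processing units); the note about sub-node instances being sized in multiples of 100 processing units reflects Spanner's provisioning granularity:

```python
# Compute-capacity relationship: 1 node = 1000 processing units.
# Instances smaller than 1 node are provisioned in processing units
# (multiples of 100).

PROCESSING_UNITS_PER_NODE = 1000

def nodes_to_processing_units(nodes: int) -> int:
    """Convert a node count to its processing-unit equivalent."""
    return nodes * PROCESSING_UNITS_PER_NODE

def processing_units_to_nodes(units: int) -> float:
    """Convert processing units to a (possibly fractional) node count."""
    return units / PROCESSING_UNITS_PER_NODE

print(nodes_to_processing_units(3))    # 3000
print(processing_units_to_nodes(500))  # 0.5
```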

 

Spanner Graph & Search

  • Spanner Graph: Build knowledge graphs that capture complex relationships between entities (nodes) and their connections (edges). Ideal for recommendation engines, fraud detection, and knowledge base systems.
  • Vector search: Perform semantic similarity search on embeddings for AI and machine learning applications.
  • Full-text search: Integrated keyword-based search capabilities.
  • Combined queries: Blend semantic understanding, keyword retrieval, and graph traversal for comprehensive results.
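
A vector-search query in Spanner can be sketched in GoogleSQL using one of its vector distance functions (such as `COSINE_DISTANCE`). The table and column names here are hypothetical, and the query embedding is assumed to be passed as a parameter:

```sql
-- Hypothetical Products table with an ARRAY<FLOAT32> Embedding column.
-- Rank rows by semantic similarity to a query embedding, closest first.
SELECT ProductId, Name
FROM Products
ORDER BY COSINE_DISTANCE(Embedding, @query_embedding)
LIMIT 10;
```

Because this is ordinary GoogleSQL, the same statement can add keyword filters or join against other tables, which is what makes the "combined queries" pattern above possible.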

 

Pricing

Cloud Spanner pricing is based on compute capacity, database storage, backup storage, replication, and network usage. Committed use discounts reduce compute costs.

  • Compute capacity:
    • Provisioned in processing units or nodes (1 node = 1000 processing units)
    • Billed per hour per replica (each replica in a configuration incurs compute charges)
    • Three editions available with different price points:
      • Standard edition
      • Enterprise edition
      • Enterprise Plus edition
    • Data Boost: On-demand, isolated compute for analytical workloads (billed per serverless processing unit per hour)
  • Database storage:
    • SSD storage: For low-latency operational workloads (billed per GB per month per replica)
    • HDD storage: For less frequently accessed data with higher latency tolerance (billed per GB per month per replica)
    • Storage tiering policies can automatically move data from SSD to HDD after a specified time
  • Backup storage:
    • Regional configurations: Billed per GB per month (includes storage in all replicas)
    • Dual-region and multi-region configurations: Billed per GB per month (includes storage in all replicas)
  • Replication:
    • Intra-region replication: Free
    • Inter-region replication: Charged per GB
  • Network:
    • Ingress: Free
    • Intra-region egress: Free
    • Inter-region egress: Charged per GB

For current pricing details, refer to the official Google Cloud Spanner pricing page.
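
The per-replica billing model above can be illustrated with a back-of-envelope estimator. The rates below are placeholders, not real Spanner prices; only the structure (compute billed per node-hour per replica, SSD storage per GB-month per replica) comes from the pricing breakdown above:

```python
# Back-of-envelope monthly cost sketch for the pricing dimensions above.
# Rates are PLACEHOLDERS, not actual Spanner prices -- check the official
# pricing page for current numbers.

HOURS_PER_MONTH = 730  # average hours in a month

def estimate_monthly_cost(nodes: int,
                          replicas: int,
                          storage_gb_per_replica: float,
                          node_hour_rate: float = 0.90,       # placeholder $/node/hour
                          ssd_gb_month_rate: float = 0.30) -> float:  # placeholder $/GB/month
    """Compute + SSD storage estimate; every replica is billed separately."""
    compute = nodes * replicas * node_hour_rate * HOURS_PER_MONTH
    storage = storage_gb_per_replica * replicas * ssd_gb_month_rate
    return compute + storage

# e.g. 3 nodes across 3 replicas (a regional configuration), 100 GB per replica
print(round(estimate_monthly_cost(3, 3, 100), 2))  # 6003.0
```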

Validate Your Knowledge

Question 1

A company has an application that uses Cloud Spanner as its backend database. After a few months of monitoring your Cloud Spanner resource, you notice that the application's incoming traffic follows a predictable pattern. You need to set up automatic scaling that will scale your Spanner nodes up or down based on the incoming traffic. As much as possible, you don't want to use an open-source tool.

What should you do?

  1. Set up an Autoscaler infrastructure in the same project where the Cloud Spanner is deployed to automatically scale the Cloud Spanner resources according to its CPU metric.
  2. Set up an alerting policy on Cloud Monitoring that sends an email alert to on-call Site Reliability Engineers (SRE) when the Cloud Spanner CPU metric exceeds the desired threshold. The SREs shall scale the resources up or down appropriately.
  3. Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.
  4. Set up an alerting policy on Cloud Monitoring that sends an email alert to Google Cloud Support email when the Cloud Spanner CPU metric exceeds the desired threshold. The Google Support team shall scale the resources up or down appropriately.

Correct Answer: 3

When you create a Cloud Spanner instance, you choose the number of nodes that provide compute resources for the instance. As the instance’s workload changes, Cloud Spanner does not automatically adjust the number of nodes in the instance. As a result, you need to set up several alerts or use an Autoscaler tool to ensure that the instance stays within the recommended maximums for CPU utilization and the recommended limit for storage per node.

You can invoke Cloud Functions with an HTTP request using the POST, PUT, GET, DELETE, and OPTIONS HTTP methods. To create an HTTP endpoint for your function, specify --trigger-http as the trigger type when deploying your function. From the caller's perspective, HTTP invocations are synchronous, meaning the result of the function execution is returned in the response to the HTTP request.
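
The scaling decision such a webhook-triggered Cloud Function would make can be sketched as pure logic. The thresholds and step size below are illustrative (65% is in line with the recommended high-priority CPU maximum for regional instances, but tune for your workload), and a real function would additionally call the Spanner Admin API to apply the new capacity:

```python
# Sketch of the scaling logic a Cloud Function behind a Cloud Monitoring
# webhook might apply. Thresholds, step, and bounds are illustrative.

SCALE_UP_CPU = 0.65    # roughly the recommended high-priority CPU max (regional)
SCALE_DOWN_CPU = 0.30  # illustrative low-water mark
STEP_UNITS = 1000      # resize by one node (1000 processing units)
MIN_UNITS, MAX_UNITS = 1000, 10000

def target_capacity(current_units: int, cpu_utilization: float) -> int:
    """Return the new processing-unit count for the observed CPU level."""
    if cpu_utilization > SCALE_UP_CPU:
        return min(current_units + STEP_UNITS, MAX_UNITS)
    if cpu_utilization < SCALE_DOWN_CPU:
        return max(current_units - STEP_UNITS, MIN_UNITS)
    return current_units

print(target_capacity(2000, 0.80))  # scale up -> 3000
print(target_capacity(2000, 0.10))  # scale down -> 1000
print(target_capacity(2000, 0.50))  # within band -> 2000
```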


Cloud Spanner (Autoscaler)

The Autoscaler tool for Cloud Spanner (Autoscaler) is an open-source companion tool for Cloud Spanner. It lets you automatically increase or reduce the number of nodes or processing units in one or more Spanner instances based on how their capacity is being used.

Autoscaler monitors your instances and automatically adds or removes nodes or processing units to help ensure that they stay within the following parameters:

– The recommended maximums for CPU utilization.

– The recommended limit for storage per node, plus or minus a configurable margin.

To deploy Autoscaler, decide which of the following topologies is best to fulfill your technical and operational needs:

Per-project topology: The Autoscaler infrastructure is deployed in the same project as Cloud Spanner that needs to be autoscaled.

Centralized topology: Autoscaler is deployed in one project and manages one or more Cloud Spanner instances in different projects.

Distributed topology: Most of the Autoscaler infrastructure is deployed in one project, but some infrastructure components are deployed alongside the Cloud Spanner instances being autoscaled in different projects.

In the scenario, you have to find a method where you can automatically scale your Cloud Spanner resources based on a traffic pattern. As much as possible, you also don’t want to use an open-source tool. Since Cloud Spanner does not scale automatically, you have to check for CPU usage of your Spanner instances and find a way to trigger your Cloud Spanner database to scale its resources accordingly. Moreover, you have to ensure that these steps are done automatically.

Hence the correct answer is: Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.

The option that says: Set up an Autoscaler infrastructure in the same project where the Cloud Spanner is deployed to automatically scale the Cloud Spanner resources according to its CPU metric is incorrect because the Autoscaler tool for Cloud Spanner (Autoscaler) is an open-source tool.

The option that says: Set up an alerting policy on Cloud Monitoring that sends an email alert to on-call Site Reliability Engineers (SRE) when the Cloud Spanner CPU metric exceeds the desired threshold. The SREs shall scale the resources up or down appropriately is incorrect because this method requires an on-call SRE every time there is an alert which means that the scaling will be done manually rather than automatically.

The option that says: Set up an alerting policy on Cloud Monitoring that sends an email alert to Google Cloud Support email when the Cloud Spanner CPU metric exceeds the desired threshold. The Google Support team shall scale the resources up or down appropriately is incorrect because this does not satisfy the requirement to scale the Cloud Spanner resources automatically. In this method, you will delegate the task of scaling the Spanner resources to a Technical Account Manager every time an alert is triggered.

References:

https://cloud.google.com/spanner/docs/monitoring-cloud
https://cloud.google.com/functions/docs/writing/http
https://cloud.google.com/architecture/autoscaling-cloud-spanner
https://github.com/cloudspannerecosystem/autoscaler

Note: This question was extracted from our Google Certified Associate Cloud Engineer Practice Exams.

For more Google Cloud practice exam questions with detailed explanations, check out the Tutorials Dojo Portal:

Google Certified Associate Cloud Engineer Practice Exams

Google Cloud Spanner Cheat Sheet Reference:

https://cloud.google.com/spanner

https://docs.cloud.google.com/spanner/docs/getting-started/set-up


Written by: Jon Bonso

Jon Bonso is the co-founder of Tutorials Dojo, an EdTech startup and an AWS Digital Training Partner that provides high-quality educational materials in the cloud computing space. He graduated from Mapúa Institute of Technology in 2007 with a bachelor's degree in Information Technology. Jon holds 10 AWS Certifications and is also an active AWS Community Builder since 2020.
