
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for each option and in-depth question analyses.
You need to configure an automated policy for a specific dual-region Cloud Storage bucket so that project documents are transitioned to Archive storage after 180 days and then permanently deleted exactly 730 days (2 years) after their creation; how should you set up the policy?
Incorrect. It assumes the Delete Age condition is relative to the previous lifecycle action (transition at 180 days), so it subtracts 180 from 730 and uses 550. In Cloud Storage lifecycle, Age is always measured from the object’s creation time, not from when it was transitioned. This would delete objects at 550 days after creation, violating the 2-year requirement.
Correct. Configure two Cloud Storage lifecycle rules using Age conditions measured from object creation: SetStorageClass to ARCHIVE at Age 180, and Delete at Age 730. This matches the requirement to transition after 180 days and delete 730 days after creation. This is the standard, fully managed approach and works the same for dual-region buckets.
Incorrect. gsutil rewrite is a manual/one-time operation to rewrite objects (often used to change storage class, encryption, or metadata). It does not create an automated ongoing policy for future objects. Also, setting Delete to 550 days repeats the same Age misunderstanding as option A and would delete too early.
Incorrect. While 730 days matches the desired deletion age, gsutil rewrite still does not implement an automated lifecycle policy; it only rewrites existing objects at the time you run it. It also doesn’t address the required automatic transition to Archive at 180 days for all objects over time.
Core Concept: This question tests Cloud Storage Object Lifecycle Management. Lifecycle rules let you automatically transition objects between storage classes (e.g., Standard to Archive) and delete objects based on conditions such as Age (days since object creation), creation date, or newer versions. This is an operations-focused control to manage cost and retention without custom scripts.

Why the Answer is Correct: The requirement is: (1) transition to Archive after 180 days, and (2) permanently delete exactly 730 days after creation. In Cloud Storage lifecycle, an Age condition is evaluated relative to the object’s creation time, not relative to when a previous lifecycle action occurred. Therefore, you must configure two separate rules with Age conditions measured from creation:
- Rule 1: Age = 180, action SetStorageClass = ARCHIVE
- Rule 2: Age = 730, action Delete
This ensures deletion occurs at 730 days from creation regardless of the earlier transition.

Key Features / Best Practices: Lifecycle policies are enforced by Cloud Storage automatically (no cron jobs, no Compute). They apply per bucket (including dual-region buckets) and scale with no additional infrastructure. Using Archive after 180 days aligns with cost optimization (Google Cloud Architecture Framework: cost optimization and operational excellence). Deletion at 730 days enforces retention/cleanup. Note that lifecycle actions are not guaranteed to execute at an exact timestamp; they are applied asynchronously, typically within a day, so “exactly” in exam context means “based on Age=730,” not “to-the-second.”

Common Misconceptions: A common trap is subtracting 180 from 730 and setting Delete at 550 days, assuming the delete timer starts after the storage-class transition. That is incorrect because Age is always since creation. Another misconception is using gsutil rewrite; rewrite changes storage class by copying/rewriting objects and is not an automated ongoing policy.
Exam Tips: For retention/transition questions, default to Cloud Storage lifecycle rules. Remember: Age/CreatedBefore are evaluated from object creation time. Use multiple rules for multiple milestones. Also distinguish lifecycle management (automated policy) from one-time administrative commands (gsutil rewrite) and from retention policies/holds (which prevent deletion rather than schedule it).
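The two rules described above can be written as a lifecycle configuration file. A minimal sketch (the bucket name is a placeholder):

```json
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"}, "condition": {"age": 180}},
    {"action": {"type": "Delete"}, "condition": {"age": 730}}
  ]
}
```

Save it as lifecycle.json and apply it with `gcloud storage buckets update gs://BUCKET_NAME --lifecycle-file=lifecycle.json` (or the legacy `gsutil lifecycle set lifecycle.json gs://BUCKET_NAME`). Both Age conditions count from object creation, so the Delete rule fires at 730 days regardless of the earlier transition.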
Your company is launching a high-traffic digital ticketing API in a new Google Kubernetes Engine (GKE) regional cluster in us-central1 with cluster autoscaling enabled (minimum 2 nodes, maximum 20 nodes), and the application can scale from 3 to 50 pods during peak events; you must expose the API to the public over HTTPS using a single global public IPv4 address without modifying application code, and you want Google-managed TLS certificates to terminate HTTPS while supporting rolling updates and pod autoscaling. What should you do?
Correct. A Kubernetes Service used with GKE Ingress allows Google Cloud to provision an external HTTP(S) load balancer in front of the application. That load balancer can be assigned a single reserved global static IPv4 address, which directly satisfies the requirement for one global public endpoint. Google-managed certificates can be attached so TLS is terminated at the load balancer and certificate provisioning and renewal are handled automatically by Google. This approach also works well with rolling updates and autoscaling because the load balancer sends traffic only to healthy, ready backends as pods and nodes change over time.
Incorrect. A ClusterIP Service is only reachable from within the cluster or connected internal networks and is not a public internet endpoint. Public DNS cannot meaningfully point clients to a private ClusterIP and make the service internet-accessible. This option also does not provide a global public IPv4 address or a Google-managed HTTPS termination point. As a result, it fails both the networking and TLS requirements in the scenario.
Incorrect. Exposing a NodePort on every node and publishing all node IPs in DNS does not provide a single global public IPv4 address, which is explicitly required. It is also operationally fragile because cluster autoscaling adds and removes nodes, causing the set of node IPs to change over time. DNS-based distribution does not provide managed Layer 7 HTTPS features such as centralized TLS termination, health-aware request routing, and proper backend draining during updates. This design therefore does not meet the reliability, manageability, or certificate requirements.
Incorrect. Running HAProxy in a pod and forwarding traffic through one node’s external IP creates an unnecessary custom load-balancing layer and introduces a likely single point of failure. One node IP is not equivalent to a Google-managed global anycast IPv4 address, and the design would require you to manage failover, health checks, and proxy lifecycle yourself. It also bypasses the native GKE Ingress integration that supports managed TLS certificates and seamless backend updates. This option adds complexity while failing to meet the stated global public endpoint and managed HTTPS goals.
Core concept: This question tests how to expose a GKE workload publicly over HTTPS using Google Cloud Load Balancing with Kubernetes Ingress, while meeting requirements for a single global IPv4 address, Google-managed TLS, and compatibility with autoscaling and rolling updates.

Why the answer is correct: Option A uses the standard GKE pattern: a Service (commonly NodePort when used with Ingress) plus a Kubernetes Ingress that provisions a Google Cloud external HTTP(S) Load Balancer. This load balancer is global (anycast) and can be assigned a single reserved global static IPv4 address. Using Google-managed certificates (via GKE Ingress + ManagedCertificate or Certificate Manager integration, depending on cluster version) allows Google to provision/renew TLS automatically and terminate HTTPS at the load balancer—no application code changes required.

Key features and best practices:
- Global external HTTP(S) Load Balancer: provides a single global IPv4, cross-region edge termination, and L7 routing.
- Google-managed TLS: simplifies operations and aligns with the Google Cloud Architecture Framework’s reliability and security pillars (automated cert lifecycle, strong defaults).
- Works with GKE autoscaling: the Ingress LB targets node instance groups (or NEGs when enabled). As cluster autoscaler adds/removes nodes and HPA scales pods, the backend membership updates automatically.
- Rolling updates: Kubernetes Deployments update pods gradually; the load balancer health checks and readiness gates ensure traffic only reaches ready endpoints.

Common misconceptions:
- Confusing Service IP types: a ClusterIP is internal-only and not reachable from the public internet.
- Thinking DNS can “load balance” by listing node IPs: nodes are ephemeral under autoscaling and this breaks the single global IP requirement.
- Building custom LBs (HAProxy) inside the cluster: adds operational burden, creates single points of failure, and doesn’t provide a global anycast IP or managed TLS.
Exam tips: When you see “single global public IPv4,” “HTTPS,” and “Google-managed certificates” for GKE, the expected solution is Ingress with an external HTTP(S) Load Balancer and a reserved global static IP. Remember: ClusterIP is internal; NodePort is typically paired with Ingress; and avoid DIY load balancers for production-grade, autoscaled GKE services.
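As a sketch, the Ingress pattern above might be declared as follows; the hostname, Service name, and reserved IP name are assumptions, not values from the question:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: ticketing-cert
spec:
  domains:
    - api.example.com          # assumed domain
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ticketing-ingress
  annotations:
    # Reserved global static IPv4 (created with: gcloud compute addresses create --global)
    kubernetes.io/ingress.global-static-ip-name: ticketing-global-ip
    networking.gke.io/managed-certificates: ticketing-cert
spec:
  defaultBackend:
    service:
      name: ticketing-api      # assumed NodePort Service fronting the Deployment
      port:
        number: 80
```

TLS terminates at the load balancer, so the pods keep serving plain HTTP and no application code changes are needed.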
Your security team needs to grant SSH access to a single VM named edge-proxy-01 in project film-prod-2468 (zone europe-west1-b) only for members of the Google Group qa1 (8 users), ensuring they cannot access any other VM in the project and preferring a Google-recommended, centrally managed approach that works from Cloud Shell without distributing private keys; what should you do?
Correct. Enabling `enable-oslogin=true` on edge-proxy-01 switches SSH authorization to OS Login for that VM. Granting the qa1 Google Group `compute.osLogin` scoped to the instance allows only those users to SSH to that specific VM and nowhere else in the project. Using Cloud Shell with `gcloud compute ssh` works without distributing private keys because access is tied to Google identity and IAM.
Incorrect. OS Login enablement is good, but changing the VM’s service account to “No service account” is unrelated to user SSH access. Service accounts control what the VM can access (outbound API calls), not who can SSH in. Also, this option does not explicitly grant the qa1 group OS Login permissions scoped to the instance, so it fails the access-control requirement.
Incorrect. Blocking project-wide SSH keys can help limit inherited metadata keys, but generating and distributing unique SSH keys to each user violates the requirement to avoid private key distribution and is not the Google-recommended centrally managed approach. It also increases operational burden (rotation, revocation, auditing) compared to OS Login with IAM and Google Groups.
Incorrect and insecure. Sharing a single private key among multiple users breaks accountability and non-repudiation, making it impossible to attribute actions to individual users. It also violates best practices for key management and the requirement to avoid distributing private keys. Even with project-wide keys blocked, this approach is not centrally managed via IAM and is operationally risky.
Core Concept: This question tests OS Login and IAM-based SSH access control in Compute Engine. OS Login is Google’s recommended approach to centrally manage Linux SSH access using IAM identities (including Google Groups) rather than distributing and managing SSH keys via instance/project metadata.

Why the Answer is Correct: Option A enables OS Login on only the target VM (edge-proxy-01) and grants the qa1 Google Group the appropriate OS Login IAM role scoped to that single instance. This ensures members can SSH to that VM and not to other VMs in the project, because they lack OS Login permissions elsewhere. It also satisfies the requirement to work from Cloud Shell without distributing private keys: users can run `gcloud compute ssh`, which uses their Google identity and OS Login to provision ephemeral SSH credentials.

Key Features / Best Practices:
- Instance-level metadata `enable-oslogin=true` limits the behavior to one VM, aligning with least privilege.
- IAM binding at the instance resource (not project) restricts access to only that VM.
- Google Groups can be used directly in IAM policies, simplifying management for 8 users.
- Cloud Shell + `gcloud compute ssh` integrates cleanly with OS Login and avoids manual key distribution.
- This aligns with Google Cloud Architecture Framework security principles: centralized identity, least privilege, and auditable access (Cloud Audit Logs for IAM changes and OS Login activity).

Common Misconceptions: Many assume “Block project-wide SSH keys” is the best security control. While it can reduce key sprawl, it still relies on key generation/distribution and does not provide the same centralized, IAM-driven access model as OS Login. Another misconception is that removing a VM service account affects SSH access; it does not.

Exam Tips:
- For “centrally managed SSH” and “no private key distribution,” think OS Login.
- To restrict SSH to a single VM, scope IAM bindings to the instance, not the project.
- Remember role selection: `compute.osLogin` is for standard SSH; `compute.osAdminLogin` is for sudo/admin access. Choose the least-privileged role that meets the requirement.
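A command-line sketch of this setup, assuming the group’s email is qa1@example.com (the real group address is not given in the question):

```shell
# Enable OS Login via instance metadata on the single target VM only
gcloud compute instances add-metadata edge-proxy-01 \
    --project=film-prod-2468 --zone=europe-west1-b \
    --metadata=enable-oslogin=TRUE

# Grant the group the OS Login role scoped to this one instance
gcloud compute instances add-iam-policy-binding edge-proxy-01 \
    --project=film-prod-2468 --zone=europe-west1-b \
    --member="group:qa1@example.com" \
    --role="roles/compute.osLogin"
```

Members then connect from Cloud Shell with `gcloud compute ssh edge-proxy-01 --zone=europe-west1-b`; no private keys are ever distributed.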
Your compliance team has contracted an external penetration tester to review all resources in the Google Cloud project proj-audit-789 for 7 days and must ensure they cannot modify anything. Your organization has the Organization Policy constraint Domain Restricted Sharing configured at the organization node to allow only accounts in the corp.example.com Cloud Identity domain. How do you provide the tester with read-only visibility to that project without violating the policy?
Incorrect. A personal Google Account (often gmail.com) is outside the allowed corp.example.com domain. With Domain Restricted Sharing enabled at the organization level, you will be prevented from adding that principal to the project IAM policy, even with a read-only role like Viewer. The role choice is irrelevant because the policy blocks the identity itself.
Incorrect. Although roles/iam.securityReviewer is a read-only role, it does not bypass Domain Restricted Sharing. If the tester’s Google Account is not in corp.example.com, you cannot grant it any IAM binding on the project. Additionally, Security Reviewer is more limited than Viewer and may not provide the broad resource visibility implied by “review all resources.”
Correct. Creating a temporary Cloud Identity user in the approved corp.example.com domain complies with Domain Restricted Sharing. Granting roles/viewer provides broad read-only access across many Google Cloud services, aligning with the requirement that the tester must not be able to modify resources while still being able to inspect configurations and assets during the 7-day engagement.
Partially aligned but not the best answer. A temporary corp.example.com user would satisfy Domain Restricted Sharing, but roles/iam.securityReviewer is narrower and focused on IAM/security posture visibility. The question states the tester must “review all resources,” which typically requires broader read-only access than Security Reviewer provides. Viewer is the more appropriate read-only role for comprehensive project visibility.
Core concept: This question tests IAM identity types and Organization Policy constraints—specifically Domain Restricted Sharing (DRS). DRS restricts which principals can be granted IAM access based on their domain, typically allowing only users/groups/service accounts from approved Cloud Identity/Google Workspace domains.

Why the answer is correct: Because DRS is configured at the organization node to allow only accounts in corp.example.com, you cannot grant project-level IAM roles to an external tester’s personal Google Account (gmail.com) or any account outside the allowed domain. To provide access without violating policy, you must use an identity that belongs to corp.example.com. Creating a temporary Cloud Identity user in corp.example.com for the tester satisfies the DRS constraint. Then granting the Viewer role (roles/viewer) on proj-audit-789 provides broad read-only visibility across most Google Cloud resources, meeting the requirement that they “cannot modify anything,” while still allowing them to inspect configurations.

Key features / best practices:
- Domain Restricted Sharing is enforced at IAM policy binding time; disallowed principals cannot be added even if you are an owner.
- Use least privilege and time-bound access: create a temporary user, grant only required roles, and remove access after 7 days. In practice, you can also use an access approval/workflow and audit logs to track activity.
- Viewer is a general read-only role across many services; it’s commonly used for auditors/assessors who need visibility but no write permissions.

Common misconceptions:
- “Security Reviewer is read-only so it’s better.” It is read-only, but it is narrower (security/IAM-focused) and may not provide visibility to all resources a penetration tester needs to review.
- “Using their Google Account is fine if it’s read-only.” DRS blocks the principal regardless of role; the domain restriction is about who can be granted access, not what permissions they get.
Exam tips: When you see Domain Restricted Sharing, first determine whether the principal’s domain is allowed. If not, the only compliant options are to use an identity in an allowed domain (temporary Cloud Identity user, group membership in that domain, or an allowed service account where appropriate). Then choose the least-privilege role that still meets the stated visibility requirements—Viewer for broad read-only, Security Reviewer for security posture review only.
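A sketch of the grant, assuming the temporary account has already been created in the Admin console as pentester-temp@corp.example.com (a hypothetical name) and using an IAM condition to time-box the engagement:

```shell
# Broad read-only access that expires automatically after the 7-day window
gcloud projects add-iam-policy-binding proj-audit-789 \
    --member="user:pentester-temp@corp.example.com" \
    --role="roles/viewer" \
    --condition='expression=request.time < timestamp("2025-01-08T00:00:00Z"),title=pentest-7d'
```

The expiry timestamp is illustrative; set it to 7 days after the engagement starts, and still remove the binding and suspend the account when the work is done.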
In Cloud Shell, your default project is dev-sbx-1234 and your organization has about 200 projects; without changing the default configuration, you must use gcloud to output only the currently enabled Google Cloud APIs for the production project whose display name is "orion-billing-prod"—what should you do?
Correct. You must identify the project ID for the project whose display name is "orion-billing-prod" because gcloud services operations are scoped to a project ID/number. Then gcloud services list --project <PROJECT_ID> lists only the currently enabled APIs for that project. This approach does not change the active configuration in Cloud Shell and is the standard, safe way to target a different project for a single command.
Incorrect. gcloud init is an interactive workflow that typically updates your active configuration (including the default project), which violates the requirement to not change the default configuration. Additionally, gcloud services list --available lists services that are available to enable, not the services currently enabled in the project, so it does not meet the “currently enabled APIs” requirement.
Incorrect. gcloud info shows environment and active account details, but enabled APIs are a project-level setting, not an account-level setting. Also, gcloud services list does not use an --account flag to determine which project’s enabled services to list; it uses the active project or --project. This option misunderstands the resource hierarchy and scoping model.
Incorrect. gcloud projects describe <PROJECT_ID> can verify a project once you already know the project ID, but it does not help you discover the correct project ID from a display name. More importantly, gcloud services list --available again lists APIs that could be enabled, not those currently enabled. This fails the main requirement even if the project were correctly identified.
Core Concept: This question tests gcloud CLI project scoping and the Service Usage API surface exposed via gcloud services. Specifically, it focuses on how to list only enabled APIs for a project that is not your current default project in Cloud Shell, without changing your active configuration.

Why the Answer is Correct: In Cloud Shell, gcloud commands run against the active (default) project unless you explicitly override it. You are told not to change the default configuration, so you should not run commands that alter the active project (for example, gcloud init or gcloud config set project). The production project is identified by display name ("orion-billing-prod"), but gcloud services list requires a project identifier to scope the request. Therefore, the correct workflow is: (1) find the project ID (or project number) that corresponds to the display name using gcloud projects list (optionally filtered), then (2) run gcloud services list --project <PROJECT_ID> to list enabled services for that project.

Key Features / Best Practices:
- gcloud services list (without --available) lists enabled APIs/services for the target project.
- Use --project to target a non-default project without modifying configuration, aligning with least surprise and operational safety.
- With ~200 projects, you can filter: gcloud projects list --filter="name=orion-billing-prod" --format="value(projectId)" to quickly extract the ID.
- This aligns with the Google Cloud Architecture Framework operational excellence principle: make safe, reversible changes and avoid unnecessary configuration drift in shared environments like Cloud Shell.

Common Misconceptions:
- Confusing --available with enabled services: --available shows services that could be enabled, not what is currently enabled.
- Thinking account scoping affects enabled APIs: APIs are enabled per project, not per user account.
- Using gcloud init: it changes configuration and is unnecessary for a one-off query.
Exam Tips: - Memorize the pattern: “Need to act on a non-default project? Use --project.” - Remember: display name is not the same as project ID; many gcloud commands require project ID/number. - For API enablement questions, distinguish between listing enabled services (gcloud services list) vs listing available services (gcloud services list --available).
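The two-step workflow above can be sketched as follows (note the filter matches the display name, not the project ID):

```shell
# 1. Resolve the project ID from the display name; read-only, no config change
PROJECT_ID=$(gcloud projects list \
    --filter='name="orion-billing-prod"' \
    --format='value(projectId)')

# 2. List only the currently enabled APIs for that project
gcloud services list --enabled --project "$PROJECT_ID"
```

`--enabled` is the default behavior of `gcloud services list`, but stating it makes the contrast with `--available` explicit.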
A nationwide retail company currently manages 120 staff accounts in Google Workspace and expects to scale to 1,200 accounts within 18 months; approximately 85% of users will need access to Google Cloud resources (BigQuery dashboards and Cloud Run services) across three projects, and the access solution must handle 10x growth without adding new infrastructure, degrading sign-in performance, or increasing security risk—what should you do?
Incorrect. Migrating to Microsoft AD introduces significant new infrastructure/operational overhead (directory services, sync, lifecycle management) and increases complexity and risk. GCDS is typically used to sync from AD to Google, not to justify moving away from an existing Google Workspace identity source. This does not meet the requirement to scale without adding infrastructure or increasing security risk.
Correct. Using Cloud Identity (Google Workspace) groups for IAM bindings is the standard scalable pattern: manage membership once, grant access consistently across three projects, and avoid per-user IAM sprawl. Enforcing MFA centrally strengthens authentication without impacting sign-in performance or requiring new systems. This approach supports 10x growth with minimal administrative overhead.
Incorrect. Cloud Identity and Google Workspace are not typically set up in a federation relationship for the same user population; Workspace already includes Cloud Identity for those accounts. “Domain-wide delegation” is for service accounts to impersonate users for APIs, not for enforcing MFA. This option misuses concepts and doesn’t describe a valid or necessary architecture.
Incorrect. Adding a third-party IdP plus real-time synchronization increases complexity, introduces another dependency, and can degrade reliability and sign-in experience. It also expands the attack surface and operational burden. Unless there is a hard requirement to use an external IdP, Google best practice is to use Workspace/Cloud Identity directly with group-based IAM and MFA.
Core Concept: This question tests Google Cloud identity and access management at scale: using Google Workspace/Cloud Identity as the identity provider, managing authorization with IAM, and simplifying administration with Google Groups (Cloud Identity groups). It also tests security best practices (MFA) without adding infrastructure.

Why the Answer is Correct: Option B aligns with Google-recommended patterns for scalable access control: manage users in Google Workspace/Cloud Identity, place them into groups that represent roles (e.g., bq-dashboard-viewers, cloudrun-invokers), and bind those groups to IAM roles at the project level. This scales from 120 to 1,200+ users without deploying or operating additional identity infrastructure, and it avoids per-user IAM bindings (which become unmanageable and error-prone). Sign-in performance remains consistent because authentication stays within Google’s managed identity platform, and security risk is reduced by enforcing MFA centrally.

Key Features / Best Practices:
- Use Cloud Identity/Google Workspace groups as principals in IAM bindings across multiple projects.
- Apply least privilege by mapping groups to predefined roles (e.g., BigQuery Data Viewer, BigQuery Job User, Cloud Run Invoker) and using separate groups per environment/project when needed.
- Enforce MFA (and ideally context-aware access / security keys for admins) via Google Workspace/Cloud Identity security settings.
- Centralize access reviews by auditing group membership rather than scattered IAM entries.
This approach aligns with the Google Cloud Architecture Framework’s security pillar: centralized identity, strong authentication, and simplified authorization management.

Common Misconceptions: A common trap is assuming you must introduce AD or a third-party IdP to “scale.” In Google Cloud, scaling identity for Workspace users is primarily an administrative model problem (group-based IAM), not a capacity problem requiring new infrastructure. Another misconception is that “federation” between Workspace and Cloud Identity is needed—Workspace already provides Cloud Identity capabilities for its users.

Exam Tips: When you see “scale without adding infrastructure” plus “access across multiple projects,” think: Google Groups + IAM bindings. Prefer managed identity (Workspace/Cloud Identity) and MFA enforcement over building sync/federation chains unless there is a stated requirement to use an external corporate IdP.
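A sketch of the group-based bindings, with hypothetical group emails and project IDs (none are given in the question):

```shell
# Dashboard readers in the analytics project
gcloud projects add-iam-policy-binding analytics-prod \
    --member="group:bq-dashboard-viewers@example.com" \
    --role="roles/bigquery.dataViewer"

# Callers of the Cloud Run services
gcloud projects add-iam-policy-binding services-prod \
    --member="group:cloudrun-invokers@example.com" \
    --role="roles/run.invoker"
```

Growing from 120 to 1,200 users then only changes group membership; the IAM policies themselves stay untouched.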
Your security policy mandates that a single compiled log-processing binary must run directly on Compute Engine virtual machines (no containers or serverless), and you need the fleet to auto-scale on average VM CPU (scale out when above 65% and scale in when below 45%) between 2 and 40 instances with the fastest reaction and minimal operational overhead during unpredictable 5–10 minute spikes; what should you do?
GKE Horizontal Pod Autoscaling is designed for containerized workloads running in Kubernetes. The question explicitly forbids containers and serverless, requiring the binary to run directly on Compute Engine VMs. Even if you used node autoscaling, you’d still be operating a Kubernetes control plane and container runtime, increasing operational overhead and violating the stated security/policy constraint.
A Managed Instance Group with an instance template and CPU-based autoscaling is the canonical solution for a fleet of identical Compute Engine VMs. You can set the minimum size to 2, maximum size to 40, and use a target average CPU utilization of about 65% so the autoscaler adds instances when the group is too busy and removes them when demand subsides. This approach is fully managed, integrates with health checks and autohealing, and keeps operational overhead low compared with custom scaling logic. It also satisfies the policy requirement that the compiled binary run directly on VMs rather than in containers or serverless platforms.
Time-of-day scheduled scaling is useful for predictable, recurring traffic patterns (e.g., business hours). The question states spikes are unpredictable and last 5–10 minutes, so a schedule will either miss spikes or overprovision unnecessarily. It also doesn’t meet the requirement to scale based on average CPU thresholds (65%/45%).
Custom automation using third-party tools and Cloud Monitoring metrics can scale VMs, but it adds operational overhead (tooling, maintenance, failure modes, permissions, and testing). It’s also redundant because MIG autoscaling already uses CPU utilization and Cloud Monitoring signals in a managed, supported way. For an exam, prefer the simplest managed GCP feature that meets requirements.
Core Concept: This question tests Compute Engine autoscaling with Managed Instance Groups (MIGs) and instance templates. MIGs are the standard Google Cloud mechanism for running identical VM-based workloads with automatic scaling, health management, and low operational overhead.

Why the Answer is Correct: The workload must run directly on Compute Engine VMs, so a MIG built from an instance template is the native fit. CPU-based autoscaling on a MIG lets you define minimum and maximum fleet size and a target average CPU utilization, such as about 65%, so the group can add instances during spikes and remove them as demand falls. This is the fastest managed VM autoscaling option with the least custom work, and it is well suited to short, unpredictable bursts when paired with a properly tuned instance initialization period and a startup method that brings instances online quickly.

Key Features / Configurations:
- Instance template: defines the VM configuration and how the binary is installed or started, such as via a custom image or startup script.
- MIG autoscaler: configure min=2, max=40, and target CPU utilization around 0.65. Scale-in is automatic when average utilization stays below the target, subject to autoscaler stabilization behavior rather than a separately configured 45% threshold.
- Health checks and autohealing: failed instances are recreated automatically, reducing operational burden.
- Custom images and fast bootstrapping: important for reacting quickly to 5–10 minute spikes, because VM startup time affects real-world responsiveness.
- Regional MIGs can improve availability if the workload should survive zonal failures.

Common Misconceptions: Kubernetes autoscaling is not appropriate here because the policy explicitly disallows containers and adds unnecessary platform complexity. Scheduled autoscaling is only effective for predictable demand patterns, not short random spikes. Custom automation may work, but it is usually inferior to the built-in MIG autoscaler for a standard CPU-driven VM scaling requirement.

Exam Tips: When a question says the application must run on Compute Engine VMs and scale automatically on CPU with minimal operations, think instance template plus managed instance group autoscaling. Be careful not to over-interpret threshold wording: on the exam, CPU autoscaling for MIGs is typically expressed as a target utilization, not separate explicit scale-out and scale-in percentages. Prefer the simplest native managed service that satisfies the constraints.
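A command-line sketch of the setup, with hypothetical resource names and a custom image assumed to contain the compiled binary:

```shell
# Template: VM shape plus an image that boots straight into the binary
gcloud compute instance-templates create log-proc-template \
    --machine-type=e2-standard-4 \
    --image-family=log-proc --image-project=my-images-project

# Regional MIG for zonal resilience, starting at the minimum size
gcloud compute instance-groups managed create log-proc-mig \
    --region=us-central1 --template=log-proc-template --size=2

# CPU-based autoscaling: min 2, max 40, ~65% target average utilization
gcloud compute instance-groups managed set-autoscaling log-proc-mig \
    --region=us-central1 \
    --min-num-replicas=2 --max-num-replicas=40 \
    --target-cpu-utilization=0.65 \
    --cool-down-period=60
```

Note there is no separate 45% scale-in flag; the autoscaler removes instances on its own once average utilization stays safely below the 0.65 target.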
Your company operates a global fitness wearable platform where heart-rate telemetry events (about 1 KB each) arrive at highly variable rates from 50 to 25,000 events per second during weekly challenges, and you need a cost-effective solution that can absorb sudden 10–20 minute traffic spikes and process each event within 5 seconds without pre-provisioning capacity; what should you do to optimize ingestion and processing?
Firestore + Cloud Scheduler + periodic Cloud Function is not suitable for sub-5-second per-event processing. Scheduler runs on a fixed cadence, introducing inherent delay and making latency unpredictable during spikes. Firestore is a database, not a high-throughput burst buffer; heavy write rates can hit contention and cost issues. This design also couples ingestion to database writes and adds operational complexity for polling/processing logic.
A single dedicated Compute Engine instance cannot meet the requirement to handle 50 to 25,000 events/sec without pre-provisioning. You would need to size for peak, which is costly and still risky during sudden spikes. It also creates a single point of failure and requires load balancing, autoscaling groups, and queueing to be reliable. This is the opposite of a serverless, elastic ingestion pattern.
Appending telemetry to a JSON file in Cloud Storage and running a daily batch job violates the requirement to process each event within 5 seconds. Cloud Storage is excellent for durable object storage and batch analytics, but it is not intended for high-frequency per-event appends (object rewrite semantics) and introduces significant latency. This approach is appropriate for archival or offline analytics, not real-time processing.
Pub/Sub + Pub/Sub-triggered Cloud Function is the standard serverless pattern for bursty event ingestion and near-real-time processing. Pub/Sub absorbs spikes by buffering messages durably and scaling horizontally, while Cloud Functions scales automatically with demand and requires no pre-provisioning. This meets the 5-second processing goal (assuming downstream dependencies can keep up) and is cost-effective with pay-per-use pricing.
Core concept: This question tests serverless, autoscaling event ingestion and near-real-time processing on Google Cloud. The key services are Pub/Sub (durable, horizontally scalable messaging) and Cloud Functions (event-driven compute that scales automatically).

Why the answer is correct: You have highly variable global telemetry (50 to 25,000 events/sec) with sudden 10–20 minute spikes and a strict processing SLO (each event within 5 seconds) without pre-provisioning. Pub/Sub is designed to absorb bursty traffic by buffering messages in a durable subscription while decoupling producers from consumers. A Pub/Sub-triggered Cloud Function scales out based on incoming message volume, processing events as they arrive. This combination is cost-effective because you pay per message (Pub/Sub) and per invocation/compute time (Cloud Functions), not for idle capacity.

Key features / best practices:
- Use Pub/Sub topics for ingestion and subscriptions for processing; Pub/Sub provides at-least-once delivery, so make the function idempotent (deduplicate using message IDs).
- Configure appropriate acknowledgement deadlines and retry behavior; unacknowledged messages are redelivered.
- Consider Cloud Functions 2nd gen (Cloud Run-based) for better concurrency and scaling controls; set max instances if needed to protect downstream systems.
- Use Cloud Monitoring on subscription backlog (oldest unacked message age) to validate the 5-second requirement.

Common misconceptions: Firestore and Cloud Storage can store events, but they are not optimized as burst buffers for high-rate streaming ingestion with sub-5-second processing. Scheduling/batching introduces latency and violates the requirement. A single VM seems simple but cannot scale elastically without pre-provisioning and becomes a single point of failure.
Exam tips: When you see “spiky traffic,” “no pre-provisioning,” and “process within seconds,” think Pub/Sub + serverless compute (Cloud Functions/Cloud Run) or Dataflow for more complex pipelines. Pub/Sub is the canonical ingestion buffer for event streams in the Google Cloud Architecture Framework’s reliability and performance pillars (decoupling, elasticity, and backpressure handling).
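The idempotency point above (at-least-once delivery means redeliveries happen) can be sketched as a minimal handler. All names here are illustrative assumptions, and the in-memory dedup set is only for demonstration; production code would deduplicate against a durable store such as Firestore or Redis, keyed by the Pub/Sub message ID.

```python
import base64
import json

_seen_ids = set()  # illustrative only; use a durable store in production

def handle_telemetry(event: dict) -> bool:
    """Process one Pub/Sub-style payload at most once per message ID.

    Returns True if the event was processed, False if it was a duplicate
    redelivery (Pub/Sub guarantees at-least-once delivery, so duplicates
    must be tolerated by the consumer).
    """
    message_id = event["messageId"]
    if message_id in _seen_ids:
        return False  # duplicate redelivery: acknowledge without reprocessing
    payload = json.loads(base64.b64decode(event["data"]))
    # ... write the heart-rate sample to downstream storage here ...
    _seen_ids.add(message_id)
    return True

sample = {
    "messageId": "msg-001",
    "data": base64.b64encode(json.dumps({"bpm": 72}).encode()).decode(),
}
print(handle_telemetry(sample))  # True  (first delivery is processed)
print(handle_telemetry(sample))  # False (redelivery is ignored)
```

Marking the duplicate as handled (returning success so the message is acked) is what keeps retries from turning into double writes downstream.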
Your startup is launching a multi-tenant SaaS analytics platform that stores relational subscription and billing data (SQL tables with foreign keys) and must serve users in 15+ regions worldwide. The active user base is unpredictable and may grow from 20,000 to over 3,000,000 within six months, and you must scale without planned downtime or major configuration changes. You require ACID transactions, high availability across multiple regions, and a managed service that can horizontally scale as demand increases. Which Google Cloud storage solution should you choose?
Cloud SQL provides managed MySQL/PostgreSQL/SQL Server with ACID transactions and familiar relational features. However, it is primarily designed for regional deployments and scales mainly vertically (bigger machine) with optional read replicas and cross-region replicas for DR. It does not offer the same seamless horizontal scaling and globally distributed, strongly consistent multi-region architecture required for 15+ regions and rapid growth to millions of users.
Firestore is a fully managed, serverless NoSQL document database that can be multi-region and scales automatically with unpredictable traffic. The mismatch is the data model and relational requirements: Firestore does not support SQL tables, foreign keys, or joins in the way a relational billing/subscription system typically needs. While it can be used for some SaaS metadata, it is not the best fit for relational ACID workloads at global scale.
Cloud Spanner is a managed, horizontally scalable relational database that supports SQL, schemas with relationships (including foreign keys), and ACID transactions. It offers multi-region configurations with synchronous replication and strong consistency, enabling high availability across regions and global serving. Capacity scales by adding nodes/processing units without downtime, making it ideal for unpredictable growth from 20,000 to millions of users while maintaining relational integrity for billing/subscription data.
Bigtable is a massively scalable, low-latency wide-column NoSQL database suited for time-series, IoT, and large analytical/operational workloads with simple key-based access patterns. It does not provide relational constraints like foreign keys, nor does it offer general-purpose SQL with joins. While it can replicate and scale horizontally, it is not designed for ACID relational billing/subscription systems requiring strong transactional guarantees across related tables.
Core concept: Selecting the correct managed database for a globally distributed, relational (SQL + foreign keys) multi-tenant SaaS that requires ACID transactions, multi-region high availability, and horizontal scalability without downtime.

Why the answer is correct: Cloud Spanner is Google Cloud’s globally distributed, strongly consistent relational database designed for massive scale. It supports SQL schemas with relationships (including foreign keys), ACID transactions, and synchronous replication across regions with high availability. Unlike traditional managed relational databases, Spanner scales horizontally by adding nodes/processing units, enabling growth from tens of thousands to millions of users without planned downtime or major re-architecture—matching the requirement for unpredictable growth and global reach (15+ regions).

Key features and best practices: Spanner provides strong consistency and external consistency, multi-region instance configurations for high availability, and automatic sharding/replication. You can choose regional or multi-region instance configurations; multi-region is appropriate here to serve global users and survive zonal/regional failures. Scaling is primarily capacity-based (nodes/processing units), and schema design should use primary keys that avoid hotspots (e.g., avoid monotonically increasing keys). For multi-tenant SaaS, consider tenant-aware keys and access patterns to distribute load. Spanner integrates with IAM, supports CMEK, and offers managed backups and point-in-time recovery features depending on configuration.

Common misconceptions: Cloud SQL is relational and ACID, but it is not designed for global, multi-region active-active scale to millions of users with seamless horizontal scaling; it typically scales vertically and uses read replicas/failover patterns that introduce operational constraints.
Firestore is multi-region and scales automatically, but it is a NoSQL document database and does not provide full relational modeling with foreign keys and SQL joins. Bigtable scales massively, but it is a wide-column NoSQL store without relational constraints or ACID transactions across arbitrary rows.

Exam tips: When you see “global users,” “multi-region high availability,” “ACID,” and “horizontal scaling” together, Cloud Spanner is the canonical choice. Map requirements to database types: relational + global + scale => Spanner; relational + regional/limited scale => Cloud SQL; document NoSQL => Firestore; wide-column/time-series/low-latency NoSQL => Bigtable. Also watch for wording like “without planned downtime” and “without major configuration changes,” which strongly points to Spanner’s managed horizontal scaling model.
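The hotspot-avoidance advice above (avoid monotonically increasing primary keys) can be sketched as a small key-generation helper. The function name and key layout are illustrative assumptions, not a Spanner API; the point is that a hashed prefix spreads writes across key ranges instead of concentrating them on one split.

```python
import hashlib
import uuid

def subscription_key(tenant_id: str) -> str:
    """Build a hotspot-avoiding primary key for a multi-tenant Spanner row.

    Monotonically increasing keys (timestamps, sequences) funnel all new
    writes to the same split. Prefixing with a short hash of the tenant
    distributes the leading key bytes evenly; the UUID suffix keeps each
    row unique.
    """
    shard = hashlib.sha256(tenant_id.encode()).hexdigest()[:8]
    return f"{shard}_{tenant_id}_{uuid.uuid4()}"

print(subscription_key("tenant-a"))  # e.g. "3f1a9c2b_tenant-a_<uuid>"
```

An alternative with the same effect is interleaving child tables under a tenant-rooted parent key; either way, the goal is that inserts land on many splits, not the tail of one.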
Your data analytics team must provision 20 Compute Engine VMs on demand across us-central1-a and europe-west1-b. All VM specs (machine type e2-standard-4, 50-GB boot disk, preemptible=true, labels env=staging and team=etl, network tags=batch-v2, startup script from gs://configs/startup.sh) must live in a single, version-controlled configuration file that can be parameterized per project. You need a declarative, reviewable, template-driven approach that can be applied with a single gcloud command and supports safe updates and rollbacks without writing ad-hoc scripts. Following Google's recommended practices, which method should you use?
Deployment Manager is Google Cloud’s native Infrastructure as Code service. It uses declarative YAML configs with Jinja2/Python templates, supports parameterization per project, and can be applied with a single gcloud deployment-manager command. It manages deployment state, enabling controlled updates and practical rollbacks by reapplying a previous configuration from version control. It fits the requirement for a reviewable, template-driven approach without ad-hoc scripts.
Cloud Composer (managed Apache Airflow) orchestrates data and batch workflows, not infrastructure provisioning. While it could trigger scripts or APIs to create VMs, that would violate the requirement to avoid ad-hoc scripting and to use a declarative, template-driven, reviewable configuration applied with a single command. Composer is also heavier operationally and intended for scheduling/ETL pipelines rather than IaC.
Managed Instance Groups provide autoscaling, autohealing, and rolling updates for a group of identical VMs based on an instance template. However, they are not a general declarative IaC solution and don’t inherently meet the “single version-controlled configuration file” and “single gcloud command for template-driven provisioning across projects” requirement without additional tooling. MIGs also typically operate per zone/region and focus on fleet management, not full-stack provisioning/rollback semantics.
Unmanaged Instance Groups are simply a logical grouping of existing VMs for load balancing or simple grouping. They do not create instances from a template, do not provide declarative provisioning, and lack safe update/rollback mechanisms. You would still need scripts or manual steps to create and manage the 20 VMs and ensure consistent labels, tags, disks, and startup scripts, which conflicts with the question’s requirements.
Core Concept: This question tests Infrastructure as Code (IaC) on Google Cloud using a declarative, template-driven system that is version-controllable, parameterizable per project, and supports safe updates/rollbacks with a single command.

Why the Answer is Correct: Deployment Manager is Google Cloud’s native declarative IaC service. You define resources (Compute Engine instances, disks, labels, tags, metadata startup scripts, etc.) in a single configuration (YAML) that can reference templates (Jinja2 or Python). It supports parameterization (e.g., project ID, region/zone lists, instance count) and can be deployed or updated with one gcloud command (gcloud deployment-manager deployments create|update). Deployment Manager maintains deployment state, enabling controlled updates and the ability to revert by updating back to a previous config version stored in source control. This aligns with Google-recommended practices of using declarative IaC rather than ad-hoc scripting.

Key Features / Best Practices:
- Single, reviewable config file with templates and properties for reuse across projects.
- Declarative resource definitions for 20 VMs split across zones (us-central1-a and europe-west1-b).
- Supports instance properties: e2-standard-4, boot disk size, preemptible scheduling, labels, network tags, and a metadata startup-script-url pointing to gs://configs/startup.sh.
- Safe change management: preview changes before applying them, then apply updates consistently; roll back by redeploying a prior known-good configuration.
- Fits the Google Cloud Architecture Framework’s operational excellence pillar: repeatable deployments, change control, and reduced configuration drift.
Common Misconceptions: Managed Instance Groups are excellent for identical instances with autoscaling and self-healing, but they are not a general-purpose IaC system and do not naturally satisfy “single version-controlled configuration file” and “template-driven, applied with a single gcloud command” across multiple zones/projects without additional tooling (instance templates + separate MIG resources + scripting/automation). Unmanaged instance groups don’t provide declarative lifecycle management or rollbacks. Cloud Composer is for orchestrating workflows (Airflow), not provisioning infrastructure.

Exam Tips: When you see “declarative,” “template-driven,” “version-controlled configuration,” “single command,” and “safe updates/rollbacks,” think Deployment Manager (or Terraform, but Terraform isn’t an option here). MIGs are for fleet management (autoscaling/health checks), not for end-to-end declarative provisioning across projects without an IaC layer.
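A minimal sketch of what such a Deployment Manager config could look like follows. The file names, template name, and property names are hypothetical (properties are whatever the Jinja template you write chooses to accept); it is meant only to show how all the VM specs from the question live in one parameterized, version-controllable file.

```yaml
# vms.yaml — hypothetical Deployment Manager config (illustrative names).
# Applied with a single command, e.g.:
#   gcloud deployment-manager deployments create etl-vms --config vms.yaml
imports:
  - path: vm-template.jinja   # template that expands these properties into VM resources

resources:
  - name: etl-fleet
    type: vm-template.jinja
    properties:
      machineType: e2-standard-4
      bootDiskSizeGb: 50
      preemptible: true
      labels:
        env: staging
        team: etl
      tags:
        - batch-v2
      startupScriptUrl: gs://configs/startup.sh
      zones:
        - us-central1-a
        - europe-west1-b
      instancesPerZone: 10   # 2 zones x 10 = 20 VMs
```

Because the properties are plain values, the same template can be reused per project by swapping the config file, and a rollback is just re-running the update with the previous revision from source control.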
Study period: 1 month
I could count probably like 15+ questions exactly the same on the real exam. Cloud Pass always has the best practical exam questions.
Study period: 1 month
Thank you for the excellent source for preparing for cert exams; the detailed explanations really helped. Passed the exam.
Study period: 1 month
Got my cert after going through the practice questions. I have a background in GCP, so it was a bit easy to grasp for me.
Study period: 1 month
This helped me pass the ACE on 1/12, highly recommended.
Study period: 1 month
Passed the exam in first attempt!


Want to practice all the questions on the go?
Get the free app
Download Cloud Pass for free – with practice tests, progress tracking, and more.