
GCP
287+ free practice questions with AI-verified answers
AI-powered
Every Google Cloud Digital Leader answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.
A global retail company needs to centralize ingestion of custom application logs from 3 GKE clusters (total 150 microservices), 200 Compute Engine VMs, and 12 Cloud Run services. Requirements: accept JSON-formatted payloads with custom fields and resource labels; create log-based metrics for alerting; query and manage retention in a single place; avoid third-party collectors beyond the Google Ops Agent. Which Google Cloud tool should the company use?
Dialogflow is a conversational AI platform for building chatbots and voice assistants. It does not ingest infrastructure/application logs, provide centralized log querying, or manage log retention. It also has no native concept of log-based metrics for operational alerting across GKE, Compute Engine, and Cloud Run. It’s unrelated to observability requirements in this scenario.
Cloud Logging is Google Cloud’s centralized log management service. It natively ingests logs from GKE and Cloud Run and can ingest VM logs via the Google Ops Agent. It supports structured JSON payloads with custom fields, resource labels, and powerful querying in Logs Explorer. It also enables log-based metrics for Cloud Monitoring alerting and supports centralized retention management using log buckets, sinks, and views.
Cloud SDK is a set of command-line tools (like gcloud) and libraries used to manage Google Cloud resources. While it can help configure logging components, it is not a centralized logging platform and does not provide log ingestion pipelines, retention controls, or log-based metrics. It’s tooling, not the operational logging service required by the question.
Data Catalog is a metadata management and data discovery service for datasets (e.g., BigQuery tables, Pub/Sub topics, files in Cloud Storage). It helps with governance and search of data assets, not operational log ingestion, querying, or retention. It does not create log-based metrics for alerting and is not used to centralize application logs from GKE/VMs/Cloud Run.
Core Concept: This question tests Google Cloud’s centralized observability stack, specifically Cloud Logging (part of the Google Cloud Operations suite) for ingesting, storing, querying, and managing logs across heterogeneous compute (GKE, Compute Engine, Cloud Run) and for creating log-based metrics used by alerting.

Why the Answer is Correct: Cloud Logging is the native service to centralize logs from multiple Google Cloud resources in one place. GKE, Compute Engine (via the Ops Agent), and Cloud Run integrate directly with Cloud Logging. It supports structured JSON logs, custom fields, and resource labels (via monitored resource types and labels), enabling consistent querying and filtering across 150 microservices, 200 VMs, and 12 Cloud Run services. Cloud Logging also supports log-based metrics, which can be used with Cloud Monitoring alerting policies, meeting the alerting requirement without third-party collectors.

Key Features / How It Meets Requirements:
1) Ingestion: GKE and Cloud Run automatically export container/service logs to Cloud Logging; Compute Engine uses the Google Ops Agent to collect and send logs.
2) Structured logging: JSON payloads are preserved as structured fields, enabling advanced queries in Logs Explorer and programmatic access.
3) Log-based metrics: You can create counter/distribution metrics from filters on logs (including custom JSON fields) and then alert in Cloud Monitoring.
4) Central management and retention: Use log buckets, sinks, and views to manage access and retention centrally. Configure bucket retention (e.g., 30/90/365 days) and route subsets to different buckets or to BigQuery/Cloud Storage for longer retention and analytics.
5) Governance: IAM controls, CMEK options for some logging storage scenarios, and organization-level aggregation via aggregated sinks support enterprise operations aligned with the Google Cloud Architecture Framework’s operational excellence and security pillars.

Common Misconceptions: Some may think Cloud SDK is needed to ship logs, but it is a CLI/tooling suite, not a logging backend. Others may confuse Data Catalog (metadata) or Dialogflow (conversational AI) with operational logging.

Exam Tips: When you see requirements like “centralize logs,” “query in one place,” “structured JSON,” “log-based metrics,” and “alerting,” the default answer is Cloud Logging (often paired with Cloud Monitoring). Also note that the constraint “avoid third-party collectors beyond the Ops Agent” strongly points to native Cloud Operations tooling.
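To make the structured-logging point concrete: on the Compute Engine VMs, an application can write one JSON object per line, which the Ops Agent (with a JSON parser configured for that log source) forwards to Cloud Logging as a structured jsonPayload. A minimal sketch; the field names here are illustrative, not a required schema:

```python
import json
import logging
import sys

# Emit one JSON object per line to stdout/a log file; an Ops Agent
# receiver configured with a JSON parser turns each line into a
# structured Cloud Logging entry whose fields are individually queryable.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logger = logging.getLogger("telemetry")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(severity: str, message: str, **fields):
    """Serialize one structured log entry as a single JSON line."""
    entry = {"severity": severity, "message": message, **fields}
    logger.info(json.dumps(entry))
    return entry  # returned so callers/tests can inspect the payload

log_event("INFO", "order processed", order_id="A-1042", latency_ms=87)
```

Once such entries land in Cloud Logging, a filter like `jsonPayload.latency_ms > 500` can back a log-based metric for alerting.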
A media-streaming company is migrating a 12-node analytics backend to a fully managed relational database service on Google Cloud that prohibits custom agents and targets a 99.95% monthly SLA; to reduce operational toil without writing maintenance scripts, which routine system-maintenance task will the managed platform perform automatically across all instances?
Incorrect. While managed databases integrate with IAM and support database users/roles, Google does not automatically create and maintain IAM roles and user access policies for your application users. Designing least-privilege access is part of the customer’s responsibility in the shared responsibility model. You must define who gets access, how it’s granted, and how it’s audited.
Correct. A core benefit of a fully managed relational database service (e.g., Cloud SQL/AlloyDB) is automated maintenance such as applying security patches and performing minor-version upgrades. You can typically configure a maintenance window, but the platform performs the work without custom agents or customer-written scripts. This directly reduces operational toil across all instances.
Incorrect. Archiving historical data to cold storage (e.g., Cloud Storage Nearline/Coldline/Archive) requires you to design data lifecycle and retention policies and implement export/ETL or partitioning/archival strategies. While Cloud Storage supports lifecycle rules, the managed relational database platform does not automatically decide what data is “historical” and archive it for you.
Incorrect. Automatically optimizing spend across projects by resizing resources is not a standard automatic function of managed relational databases. You can manually resize instances or use recommendations/monitoring tools, but cross-project automated resizing is not something the database service performs as routine maintenance. Cost optimization remains a customer governance and FinOps activity.
Core Concept: This question tests what a fully managed relational database service on Google Cloud (for example, Cloud SQL or AlloyDB) does for you operationally. “Prohibits custom agents” and a high availability SLA (99.95% monthly) point to a managed platform where Google operates the underlying infrastructure and much of the database maintenance.

Why the Answer is Correct: Managed database platforms reduce operational toil by taking on routine, repeatable maintenance tasks that would otherwise require DBAs and scripts. A key example is patch management: applying security patches and performing minor-version upgrades to the database engine. In Cloud SQL, maintenance updates (including security patches) are handled by Google and can be scheduled via a maintenance window to control timing. This aligns directly with “without writing maintenance scripts” and “across all instances,” because the platform coordinates these updates fleet-wide.

Key Features / Best Practices:
- Automated maintenance: Google applies patches and minor updates; you typically choose a maintenance window to minimize disruption.
- High availability: A 99.95% SLA is commonly associated with HA configurations (regional/replica-based), which also influence how maintenance is rolled out to reduce downtime.
- Shared responsibility: You still manage schema, queries, and access controls, but Google manages the underlying OS, storage, and many engine maintenance operations.
- Architecture Framework alignment: This supports Operational Excellence (automation, reduced toil) and Security (timely patching).

Common Misconceptions: People often assume “fully managed” means Google also manages IAM design, data lifecycle/archival policies, or cost optimization across projects. In reality, IAM roles/policies and data retention strategies remain customer responsibilities, and cost optimization is not automatically performed across projects by the database service.

Exam Tips: When you see “fully managed database” in Digital Leader questions, think: automated backups (configurable), patching/minor upgrades, replication/HA options, and simplified operations. If an option describes cross-project spend optimization or business-specific data lifecycle rules, it is usually not an automatic managed database function.
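Although the patching itself is automatic, you can typically control when it happens. A hedged CLI sketch for Cloud SQL (the instance name is a placeholder):

```shell
# Pin Cloud SQL automated maintenance to Sundays at 03:00 (UTC).
# Google still applies security patches and minor-version upgrades;
# you only choose the window, no maintenance scripts required.
gcloud sql instances patch analytics-db \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=3
```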
Your micromobility startup runs 8 telemetry-processing VMs on Windows Server Standard in a colocation facility. These workloads are needed only from 08:00 to 18:00, Monday–Friday, and you shut them down nights and weekends (about 65% idle time). Your Windows Server licenses expire in 30 days, and you want to minimize ongoing license spend while keeping the ability to stop instances when idle. What should you do?
Extending a 3-year Windows agreement keeps you locked into fixed licensing costs regardless of the 65% idle time. Colocation providers typically cannot reduce infrastructure charges proportionally to off-hours because you still reserve space, power capacity, and hardware. This option does not address the goal of minimizing ongoing license spend and misses the cloud advantage of paying only for what you run.
A 2-year renewal with auto-renew discount still results in paying for Windows licenses continuously, even though the workload runs only during business hours. It also increases commitment and reduces flexibility. This option may reduce per-license cost slightly, but it does not leverage consumption-based pricing or automation to stop paying for compute and licensing during idle periods.
Migrating to Compute Engine with BYOL can be valid in some enterprises, but it does not minimize ongoing license spend here because the licenses are expiring soon and you would still need to purchase/maintain eligible Windows licenses (often with Software Assurance) to remain compliant. BYOL also adds complexity and may not deliver savings compared to pay-as-you-go licensing for part-time workloads.
Compute Engine with Google-provided Windows images includes Windows licensing in the hourly VM price, which directly aligns cost with actual runtime. By scheduling VMs to stop outside 08:00–18:00 Monday–Friday, you avoid compute and Windows license charges during idle periods (while still paying for storage). This best meets the requirement to minimize ongoing license spend and keep stop/start flexibility.
Core concept: This question tests how Compute Engine pricing and Windows licensing work, especially for workloads that can be stopped when idle. It also touches on operational automation (scheduling start/stop) and cost optimization from the Google Cloud Architecture Framework.

Why the answer is correct: Using Google-provided Windows Server images minimizes ongoing license spend because the Windows license cost is bundled into the VM’s hourly price (pay-as-you-go). Because the workload runs only 08:00–18:00 Monday–Friday, you can stop the VMs outside business hours and avoid compute and license charges while stopped. This meets the requirement to keep the ability to stop instances when idle and avoids renewing the expiring Windows licenses.

Key features and best practices:
- Compute Engine Windows images include licensing in the per-vCPU/per-hour cost, so you don’t manage separate Windows agreements.
- Stopped VMs do not incur vCPU/RAM charges; you typically still pay for persistent disks and any reserved external IPs. This is still usually far cheaper than paying for Windows licenses sized for 24/7 use.
- Use automation to enforce schedules: native Compute Engine instance schedules (resource policies), or Cloud Scheduler + Cloud Functions/Cloud Run calling the Compute Engine API. This reduces human error and supports operational excellence.
- Consider rightsizing, and use managed instance groups only if you need autoscaling; here, simple scheduled start/stop is sufficient.

Common misconceptions: BYOL can sound cheaper, but it often requires active Software Assurance and meeting license mobility rules, and it doesn’t inherently solve the “licenses expiring in 30 days” problem: you would still be paying for licenses even when VMs are stopped. On-premises renewal options ignore the major savings from shutting down compute and shifting to consumption-based licensing.

Exam tips: For Windows on Google Cloud, remember: Google-provided images = license included and billed per use; BYOL = you must meet eligibility requirements and you still carry license cost risk. For workloads with predictable off-hours, scheduling start/stop is a common cost-optimization pattern; also remember that stopped instances still incur storage costs.
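The size of the saving can be sanity-checked with simple arithmetic. This sketch compares billed hours on the 08:00–18:00 Monday–Friday schedule against always-on operation (prices are omitted because hourly rates vary by machine type and region):

```python
# Business-hours schedule: 10 hours/day, 5 days/week.
HOURS_PER_DAY = 10
DAYS_PER_WEEK = 5
AVG_HOURS_PER_MONTH = 730            # Google bills against a 730-hour month
WEEKS_PER_MONTH = AVG_HOURS_PER_MONTH / (24 * 7)

scheduled_hours = HOURS_PER_DAY * DAYS_PER_WEEK * WEEKS_PER_MONTH
idle_fraction = 1 - scheduled_hours / AVG_HOURS_PER_MONTH

# Roughly 70% of compute + bundled Windows license hours are avoided,
# in the same ballpark as the ~65% idle time cited in the scenario
# (the exact figure depends on how boundaries and holidays are counted).
print(f"Scheduled runtime: {scheduled_hours:.0f} h/month")
print(f"Billed hours avoided vs 24/7: {idle_fraction:.0%}")
```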
A regional retail company is launching an inventory forecasting service on Google Cloud; finance mandates a single billing account with project-level budgets set to $18,000/month for production and $3,500/month for development, and only 8 engineers should have any permissions in development while none of them may access production; you must ensure that development workloads cannot affect production in terms of IAM, quotas, or network reachability. What should you do to meet these requirements while keeping spend tracking simple?
Applying a unique tag/label to development resources can help with cost attribution and filtering in reports, but it does not enforce isolation. Tags/labels do not create separate IAM policies, do not prevent quota contention, and do not block network reachability. For exam purposes, labels are an accounting/organization tool, not a security or governance boundary. This fails the requirement that dev cannot affect prod in IAM, quotas, or connectivity.
Putting development resources on their own network (separate VPC) helps with network reachability isolation, but it does not address IAM separation or quota isolation by itself. Engineers could still have permissions across environments if IAM is not separated, and quotas are not enforced per VPC—they are primarily per project. A separate network is a good additional control, but it is insufficient as the primary solution to meet all stated constraints.
Using a separate billing account would make it easier to split costs, but it directly conflicts with the requirement for a single billing account. It can also complicate centralized financial governance and consolidated invoicing. Additionally, separate billing accounts do not inherently solve IAM, quota, or network isolation; those are controlled by projects, IAM policies, and VPC design. This option violates the finance mandate and is not the best practice here.
Creating a separate project for development is the correct approach because the project is the main boundary for IAM policies, quota enforcement, and budget configuration. You can attach both dev and prod projects to the same billing account and set distinct project-level budgets ($3.5k and $18k). Grant the 8 engineers access only to the dev project and none to prod. Use separate VPCs (and no peering) to prevent network reachability between environments.
Core Concept: This question tests Google Cloud resource hierarchy and isolation boundaries: billing accounts, projects, IAM, quotas, and network separation. In Google Cloud, the project is the primary unit for IAM policy attachment, quota enforcement, and most resource scoping, and it is also the cleanest boundary for budget controls.

Why the Answer is Correct: Putting development resources in their own project (while keeping a single billing account) best satisfies all requirements simultaneously:
(1) Finance wants one billing account with separate project-level budgets ($18k prod, $3.5k dev), which is natively supported by Cloud Billing budgets per project.
(2) Only 8 engineers should have permissions in development and none in production; separate projects allow completely separate IAM policies, preventing accidental privilege inheritance at the project level.
(3) Development workloads must not affect production in IAM, quotas, or network reachability; quotas are applied per project (and per region/service within a project), so dev consumption cannot exhaust prod quotas, and IAM separation is straightforward. Network reachability can also be controlled by using separate VPCs in each project and not peering them (or using Shared VPC with strict segmentation if needed), but the project boundary is the foundational control-plane separation.

Key Features / Best Practices:
- Cloud Billing: Attach both projects to the same billing account; create separate budgets and alerts per project for simple spend tracking.
- IAM: Grant the 8 engineers roles only in the dev project; do not grant them roles in the prod project; avoid broad org/folder-level bindings that would leak access.
- Quotas: Since quotas are per project, dev load testing cannot starve prod.
- Networking: Use separate VPC networks per project (default or custom). Without peering/VPN/shared connectivity, there is no direct L3 reachability between dev and prod.

These align with Google Cloud Architecture Framework guidance on security (least privilege, strong isolation) and cost management (budgeting and allocation).

Common Misconceptions: People often think tags/labels or separate networks alone provide full isolation. Labels help reporting, not enforcement. A separate VPC helps network isolation but does not isolate IAM or quotas. A separate billing account would violate the “single billing account” mandate and complicate consolidated spend tracking.

Exam Tips: When requirements mention isolating IAM and quotas, think “separate projects.” When they also require separate budgets but one billing account, use project-level budgets under a single billing account. Network isolation is typically achieved with separate VPCs and controlled connectivity, but the project boundary is the key exam pattern.
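The setup can be sketched with a few CLI calls. All project IDs, the billing account ID, and the group address below are placeholders, and the exact `gcloud billing budgets` flags may vary by SDK version:

```shell
# Separate projects give separate IAM policies, quotas, and VPCs.
gcloud projects create retail-dev  --name="Inventory Dev"
gcloud projects create retail-prod --name="Inventory Prod"

# Both projects share one billing account (the finance mandate).
gcloud billing projects link retail-dev  --billing-account=ABCDEF-123456-ABCDEF
gcloud billing projects link retail-prod --billing-account=ABCDEF-123456-ABCDEF

# Distinct project-scoped budgets under that single billing account.
gcloud billing budgets create \
    --billing-account=ABCDEF-123456-ABCDEF \
    --display-name="dev-monthly" \
    --budget-amount=3500USD \
    --filter-projects=projects/retail-dev

# The 8 engineers get roles only in the dev project, none in prod.
gcloud projects add-iam-policy-binding retail-dev \
    --member=group:dev-engineers@example.com \
    --role=roles/editor
```

A matching $18,000 budget would be created with `--filter-projects=projects/retail-prod`; no IAM binding is added for the engineers on that project.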
A regional logistics company needs to build two custom AI solutions—package damage detection from images (2.5 million labeled photos) and claim text classification (600,000 labeled records)—and go live in 4 weeks. They require a fully managed, end-to-end platform that supports data preparation, GPU training, automated hyperparameter tuning, pipeline orchestration, online and batch prediction with scalable HTTPS endpoints, model registry, and monitoring, and integrates easily with BigQuery, without stitching together multiple low-level services. Which Google Cloud product should they choose?
Dataproc is a managed Spark/Hadoop service for big data processing and ETL. It can help prepare data at scale, but it is not an end-to-end ML platform: it doesn’t natively provide managed model registry, integrated hyperparameter tuning, managed online HTTPS endpoints, or first-class MLOps monitoring. You would still need to assemble multiple services (training, deployment, monitoring), which conflicts with the requirement.
Compute Engine provides virtual machines (including GPU VMs) and maximum control, but it is infrastructure, not a managed ML platform. You would need to build or integrate your own tooling for data pipelines, hyperparameter tuning, model registry, CI/CD, scalable prediction endpoints, and monitoring. That “stitching together” approach increases operational burden and is risky for a 4-week go-live timeline.
Recommendations AI is a prebuilt, specialized product for recommendation use cases (e.g., product recommendations, personalization) and is not intended for training and deploying arbitrary custom computer vision and text classification models. It won’t meet requirements like custom GPU training, hyperparameter tuning, pipeline orchestration, or a general model registry for two distinct bespoke models.
Vertex AI is Google Cloud’s unified, fully managed ML platform covering the full lifecycle: data/datasets, custom training with GPUs, hyperparameter tuning, Vertex AI Pipelines, Model Registry, and deployment via Vertex AI Endpoints (scalable HTTPS) plus Batch Prediction. It also integrates well with BigQuery for data access and operationalizes monitoring, making it the best fit for rapid delivery without assembling many low-level services.
Core Concept: This question tests choosing the right managed AI/ML platform on Google Cloud for building custom models end-to-end (data prep through deployment and monitoring) under a tight timeline. It specifically points to an integrated MLOps platform rather than assembling infrastructure and tools manually.

Why the Answer is Correct: Vertex AI is Google Cloud’s fully managed, end-to-end ML platform designed for custom training and production MLOps. The company needs two custom solutions (computer vision damage detection and NLP text classification) with millions of labeled examples and a 4-week deadline. Vertex AI provides managed datasets, training (including GPU support), hyperparameter tuning, pipelines for orchestration, a model registry, and managed online/batch prediction. It also integrates cleanly with BigQuery for data access and feature workflows, meeting the requirement to avoid “stitching together multiple low-level services.”

Key Features (what maps to the requirements):
- Data preparation and management: Vertex AI Datasets and integration with BigQuery and Cloud Storage.
- GPU training: Vertex AI Custom Training with GPU-enabled machine types; managed training jobs.
- Automated hyperparameter tuning: Vertex AI Hyperparameter Tuning.
- Pipeline orchestration: Vertex AI Pipelines (managed Kubeflow Pipelines) for repeatable ML workflows.
- Deployment: Vertex AI Endpoints for scalable HTTPS online prediction; Batch Prediction for offline scoring.
- Governance and lifecycle: Vertex AI Model Registry for versioning and promotion.
- Monitoring: Vertex AI Model Monitoring (e.g., drift/skew/feature distribution monitoring) and logging/metrics integration.

Common Misconceptions: Compute Engine and Dataproc can run ML workloads, but they are infrastructure/data-processing services that require assembling many components (training code, orchestration, deployment, monitoring, registry) yourself, which is too slow and complex for the stated constraints. Recommendations AI is a prebuilt solution for retail-style recommendations, not custom vision/NLP training.

Exam Tips: When you see requirements like “end-to-end managed ML,” “pipelines,” “model registry,” “online endpoints,” “monitoring,” and “BigQuery integration,” the Digital Leader answer is almost always Vertex AI. Prebuilt AI APIs fit narrow tasks; infrastructure services fit custom but DIY builds; Vertex AI fits custom + managed MLOps at speed.
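As a flavor of what “managed GPU training” means in practice, a hedged CLI sketch for submitting one custom training job (region, names, and the container image URI are placeholders):

```shell
# Submit a managed GPU training job to Vertex AI. Google provisions the
# accelerator hardware, runs the container, and tears everything down;
# the run and its artifacts appear in the Vertex AI console.
gcloud ai custom-jobs create \
    --region=us-central1 \
    --display-name=damage-detection-train \
    --worker-pool-spec=machine-type=n1-standard-8,replica-count=1,accelerator-type=NVIDIA_TESLA_T4,accelerator-count=1,container-image-uri=us-docker.pkg.dev/my-project/train/damage:latest
```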
A smart home company runs its control platform on a managed messaging service from its cloud provider that guarantees 99.99% monthly availability, but last month monitoring showed only 99.6% availability (about 173 minutes of cumulative downtime), intermittently preventing devices from receiving commands—what is a likely impact on the organization?
This is not the best answer for the question being asked. Error budget consumption is an internal SRE or reliability-management concept used by engineering teams to track service performance against objectives, but the prompt asks for a likely impact on the organization from customer-visible downtime. In a Digital Leader context, the broader business consequence of outages is more relevant than an internal operational metric.
Incorrect. Cloud provider SLAs for managed services are generally standardized and are not typically renegotiated simply because one month of availability was lower than expected. The more common outcomes are service credits, escalation with the provider, or architectural changes to improve resilience. De-emphasizing uptime commitments would not be a likely or sensible organizational response.
Correct. The messaging outage intermittently prevents smart home devices from receiving commands, which directly degrades the customer experience and undermines trust in the product. When a core feature of a connected device platform becomes unreliable, customers may cancel subscriptions, avoid renewals, or move to competitors. That makes customer churn and lost revenue the most likely organizational impact in this scenario.
Incorrect. Reduced availability means the service was intermittently unavailable or unable to process requests, not that stored data was deleted. Data deletion is related to durability, backup strategy, replication, and disaster recovery rather than uptime percentage alone. An availability shortfall can disrupt operations, but it does not imply irretrievable database loss.
Core concept: This question is about the business impact of reduced service availability in a customer-facing smart home platform. A managed messaging service that falls from a 99.99% availability expectation to 99.6% experiences substantial downtime, which directly affects end users when devices cannot receive commands.

Why correct: Because the outage interrupts core product functionality, customers may lose trust in the service, stop using it, or switch to competitors, which can reduce revenue.

Key features: Availability measures whether a service can be accessed and used when needed, and customer-facing outages often translate into support costs, reputational damage, and churn.

Common misconceptions: Error budgets and SRE metrics are real operational concepts, but they are internal engineering indicators rather than the most likely organizational impact described here; availability issues also do not imply data deletion.

Exam tips: For Google Cloud Digital Leader questions, prefer the answer that connects technical failure to business outcomes when the scenario emphasizes customer-facing disruption.
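The “about 173 minutes” figure in the scenario follows directly from the availability numbers. A quick check, assuming a 30-day month:

```python
# Convert an availability percentage into cumulative downtime
# over a 30-day month (43,200 minutes).
MINUTES_PER_MONTH = 30 * 24 * 60

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime implied by a monthly availability fraction."""
    return (1 - availability) * MINUTES_PER_MONTH

promised = downtime_minutes(0.9999)  # SLA allowance: ~4.3 minutes
observed = downtime_minutes(0.996)   # actual: ~172.8 minutes
print(f"SLA allowance: {promised:.1f} min; observed: {observed:.1f} min")
```

The observed downtime exceeds the SLA allowance by roughly 40x, which is why the customer-facing impact (churn, lost revenue) dominates the answer.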
A logistics company operates 24 Kubernetes clusters (12 on-premises, 8 on Google Cloud, and 4 on AWS) to support microservices across 60 warehouses and needs a single, centralized platform to consistently manage policies, configurations, service mesh, and observability across all environments without relocating workloads; which Google Cloud service should they choose?
Cloud Functions is a serverless Functions-as-a-Service product for event-driven code. It does not manage Kubernetes clusters, enforce fleet-wide policies, or provide multi-cluster service mesh and observability. While it can be part of a microservices ecosystem, it cannot centrally govern 24 clusters across on-prem, Google Cloud, and AWS. It’s compute for functions, not a hybrid/multi-cloud Kubernetes management platform.
GKE Enterprise is the correct choice because it provides centralized management for Kubernetes across hybrid and multi-cloud environments. It supports fleet management, consistent policy enforcement (Policy Controller), configuration synchronization (Config Management), service mesh capabilities (Anthos Service Mesh), and unified observability patterns across clusters—without requiring workloads to be moved. This directly matches the need to manage 24 clusters across on-prem, Google Cloud, and AWS consistently.
Cloud Run is a managed serverless container platform that runs stateless HTTP services and jobs. It simplifies deployment and scaling, but it is not a centralized platform for managing existing Kubernetes clusters across multiple environments. It doesn’t provide fleet-wide policy/config management or multi-cluster service mesh governance. Choosing Cloud Run would imply changing the runtime platform rather than centrally managing the current Kubernetes footprint.
Compute Engine provides virtual machines and is foundational infrastructure, but it does not offer Kubernetes fleet management, centralized policy/config enforcement, service mesh, or cross-cluster observability as a unified platform. You could run Kubernetes on VMs, but you would still need a higher-level management solution to meet the requirement of consistent governance across on-prem, Google Cloud, and AWS without relocating workloads.
Core Concept: This question tests Google Cloud’s hybrid and multi-cloud Kubernetes management capabilities, specifically centralized policy/config management, service mesh, and observability across clusters running in different environments (on-premises, Google Cloud, AWS) without moving workloads.

Why the Answer is Correct: GKE Enterprise is designed to manage fleets of Kubernetes clusters across environments. It provides a single control-plane experience for consistent governance and operations across on-premises clusters (often via Google Distributed Cloud), Google Cloud (GKE), and other clouds (including AWS). The requirement for a “single, centralized platform” plus “policies, configurations, service mesh, and observability” maps directly to GKE Enterprise’s fleet management and Anthos heritage, enabling standardization without workload relocation.

Key Features / Best Practices:
- Fleet management: Register clusters into a fleet for centralized administration and consistency.
- Policy and configuration: Use Policy Controller (OPA/Gatekeeper) and Config Management to enforce and sync desired state (e.g., namespaces, RBAC, resource constraints) across all clusters.
- Service mesh: Anthos Service Mesh (Istio-based) provides consistent traffic management, mTLS, and service-to-service observability across clusters.
- Observability: Centralized telemetry, dashboards, and tracing integration (commonly via the Cloud Operations suite) across the fleet.
- Architecture Framework alignment: Improves operational excellence (standardized operations), security (policy enforcement, mTLS), and reliability (consistent rollout patterns) across heterogeneous environments.

Common Misconceptions: People may choose Cloud Run or Cloud Functions because they simplify operations, but they are serverless compute platforms, not multi-cluster management layers. Compute Engine is infrastructure compute and doesn’t provide Kubernetes fleet governance, mesh, or centralized policy management.

Exam Tips: When you see “hybrid/multi-cloud Kubernetes,” “centralized governance,” “policy/config consistency,” and “service mesh across clusters,” think GKE Enterprise (Anthos capabilities). Also note the explicit constraint “without relocating workloads,” which points to a management plane rather than a migration or a single-environment compute service.
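Fleet membership is the mechanism behind this centralization. A hedged sketch of registering an existing GKE cluster into a fleet (names and location are placeholders; on-premises and AWS clusters are registered through the related attached-clusters/Google Distributed Cloud flows rather than this exact command):

```shell
# Register an existing GKE cluster into the project's fleet so that
# Policy Controller, Config Management, and service mesh features
# can be applied to it centrally.
gcloud container fleet memberships register warehouse-gke-01 \
    --gke-cluster=us-central1/warehouse-gke-01 \
    --enable-workload-identity
```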
Your 12-person logistics startup must add image label detection for 50,000 product photos per month and sentiment analysis on 5,000 support emails per week within 30 days and without hiring any new staff; how do Google Cloud's out-of-the-box AI APIs make AI/ML adoption feasible for your team?
Correct. Google Cloud’s out-of-the-box AI APIs (e.g., Vision API for label detection and Natural Language for sentiment) use Google-managed, pre-trained models. Your team can integrate them via simple API calls and IAM-controlled credentials without hiring ML engineers or building/training custom models. This fits tight timelines and small teams by minimizing operational overhead and accelerating delivery.
Incorrect. These APIs still require data input (images and email text) and benefit from validation and preprocessing. For example, you must ensure supported file formats, handle language/encoding, remove signatures/quoted text in emails, and manage edge cases (blurry images, short messages). Managed AI reduces model-building effort, not the need for responsible data handling and quality checks.
Incorrect. AI APIs do not inherently require fewer security permissions. You still must grant appropriate IAM roles (principle of least privilege), manage service accounts, and protect sensitive data. In some cases, using AI services can increase the need for governance (audit logging, data retention policies, DLP considerations, VPC Service Controls) because you are processing potentially sensitive customer communications.
Incorrect. Not all Google Cloud AI offerings require custom training. Many common tasks are covered by pre-trained APIs that work immediately. Custom training is optional and used when you need domain-specific labels, unique terminology, or higher accuracy than general models provide. For this startup’s requirements and timeline, pre-trained APIs are the intended approach.
Core Concept: This question tests Google Cloud’s pre-trained, out-of-the-box AI services (often called “AI APIs”), such as Cloud Vision API for image label detection and Natural Language API (or Vertex AI language capabilities) for sentiment analysis. These are managed services that expose REST/gRPC endpoints and client libraries, letting teams add AI features without building or training models.

Why the Answer is Correct: A small startup must deliver two AI capabilities quickly (within 30 days) and without hiring ML specialists. Google Cloud’s pre-trained APIs are designed for exactly this scenario: you send data (images or text) to an API and receive predictions (labels or sentiment) immediately. No custom model development lifecycle is required (data labeling, feature engineering, training, hyperparameter tuning, evaluation, deployment, monitoring). This dramatically reduces time-to-value and operational burden, aligning with the Google Cloud Architecture Framework principle of using managed services to reduce undifferentiated heavy lifting.

Key Features:
1) Pre-trained models: Vision label detection and sentiment analysis work out-of-the-box.
2) Simple integration: client libraries, REST calls, and IAM-controlled service accounts.
3) Scalability: the APIs scale to handle monthly/weekly batch workloads; you can run batch jobs via Cloud Run/Functions + Cloud Scheduler, or Dataflow for larger pipelines.
4) Governance and security: IAM permissions (least privilege), audit logs, and optional VPC Service Controls for data exfiltration protection.
5) Cost model: pay-per-use pricing per request/unit processed, which is attractive for variable workloads like 50,000 images/month and 5,000 emails/week.

Common Misconceptions: Some assume “AI” always requires custom training (D) or that managed AI removes the need for data preparation (B).
In reality, you must provide valid inputs (correct image formats, text encoding, language considerations) and handle quality checks, but you don’t need to build models. Others think AI APIs reduce security requirements (C); instead, they still require appropriate IAM and data handling controls.

Exam Tips: For Digital Leader questions, map business constraints (small team, fast timeline, no hiring) to managed services and pre-trained APIs. When you see tasks like OCR, labeling, translation, sentiment, entity extraction, or speech-to-text, the best answer is typically Google’s out-of-the-box AI APIs or Vertex AI pre-built offerings—especially when custom ML development is unrealistic.
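The “simple API calls” claim can be illustrated with a minimal sketch that builds the JSON request bodies the Vision API (images:annotate) and Natural Language API (documents:analyzeSentiment) expect over REST. Authentication and the HTTP call itself are omitted, and the gs:// URI is a placeholder, not a real object:

```python
# Sketch: construct REST request bodies for two pre-trained AI APIs.
# Sending them (with an OAuth 2.0 token or API key) is left out; the
# bucket/object names below are placeholders for illustration only.

def vision_label_request(image_uri: str, max_results: int = 10) -> dict:
    """Body for POST https://vision.googleapis.com/v1/images:annotate."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

def sentiment_request(text: str) -> dict:
    """Body for POST https://language.googleapis.com/v1/documents:analyzeSentiment."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

if __name__ == "__main__":
    body = vision_label_request("gs://example-bucket/product-123.jpg")
    print(body["requests"][0]["features"][0]["type"])  # LABEL_DETECTION
```

The responses come back as JSON (label annotations with scores; sentiment score and magnitude), so no model artifacts ever need to be managed by the team.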
A national bike-sharing consortium needs to publish real-time docking station availability (updated every 2 seconds) from 300 municipal operators and simultaneously receive rider rental/return events in real time to forward to each operator’s system. They require a standardized, secure, versioned interface over HTTPS that supports authentication and partner-specific rate limits; what should they implement?
SRE practices and runbooks improve reliability and incident response (SLIs/SLOs, on-call, playbooks), but they do not provide an external, standardized HTTPS interface. They also don’t inherently deliver authentication, API versioning, or partner-specific rate limiting. SRE would be complementary after the platform is built, not the primary solution to expose and govern partner integrations.
An application programming interface managed through an API gateway is the correct pattern because it provides a standardized HTTPS interface for many external partners. API management capabilities such as authentication, versioning, and policy enforcement are exactly what this consortium needs to securely publish availability data and receive rental events. It also allows the organization to apply partner-specific controls such as quotas or rate limits without changing backend services. In Google Cloud, this is commonly associated with Apigee for full API management, while the general architectural choice remains an API front door managed through a gateway layer.
A customized ML model could help predict bike demand or station fullness, but it does not address the requirement to publish and receive real-time events via a secure, versioned HTTPS interface. ML is an analytics enhancement, not an integration control plane. Even if predictions were useful, you would still need an API layer to expose data securely to 300 operators.
A multi-regional shared database could store availability and rental events, but it is not a safe or practical partner integration interface. Databases don’t natively provide standardized API contracts, per-partner authentication models, versioning, or rate limits. Exposing a shared database to 300 external operators increases security risk, complicates governance, and can create performance and quota contention.
Core Concept: This question is about exposing a standardized integration interface to many external partners over HTTPS. The required capabilities—authentication, versioning, and partner-specific rate limits—align with API management, typically implemented with an API gateway or, in Google Cloud, more fully with Apigee.

Why the Answer is Correct: The consortium needs a secure, versioned HTTPS interface for 300 municipal operators, plus the ability to authenticate callers and enforce different limits per partner. Those are classic API management requirements: publish a consistent contract, secure access, apply policies, and govern consumption centrally. An API gateway layer is the right architectural pattern, and on Google Cloud Apigee is the best-known service for full partner-facing API management.

Key Features:
- Standardized HTTPS endpoints for publishing station availability and receiving rental/return events.
- Authentication and authorization for external operators using API keys, OAuth 2.0, JWTs, or similar mechanisms.
- API versioning so the interface can evolve without breaking all partners at once.
- Partner-specific quotas or rate limits to protect backend systems and support fair usage.
- Centralized policy enforcement, monitoring, and routing to backend services that process the real-time data.

Common Misconceptions: A shared database may centralize storage, but it does not create a governed external interface with versioning and per-partner controls. SRE practices improve operations but do not provide an integration surface. Machine learning is unrelated to the need for secure partner connectivity.

Exam Tips: When a question emphasizes external partners, HTTPS, authentication, versioning, and rate limiting, think API management. In Google Cloud, that usually points conceptually to an API gateway pattern, often implemented with Apigee for richer partner-facing controls.
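The per-partner rate-limit idea can be sketched as a token bucket. A real gateway (for example, Apigee’s Quota policy) enforces this for you at the edge, so the snippet below is only a conceptual model; the partner IDs and limits are invented for illustration:

```python
import time

class TokenBucket:
    """Conceptual per-partner rate limiter; a gateway does this for you."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # steady-state refill rate
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical partner-specific limits (requests/sec, burst):
buckets = {
    "operator-001": TokenBucket(50, 100),
    "operator-002": TokenBucket(5, 10),
}

def handle(partner_id: str) -> int:
    """Return the HTTP status a gateway would send for this caller."""
    bucket = buckets.get(partner_id)
    if bucket is None:
        return 401  # unknown caller: fails authentication
    return 200 if bucket.allow() else 429  # 429 Too Many Requests
```

Because the gateway owns this logic, the 300 operators’ quotas can be tuned without touching the backend services.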
A food-delivery marketplace is building a new order-tracking system that stores JSON documents and requires real-time updates to listeners, offline synchronization for iOS/Android/web clients, automatic serverless scaling from 0 to 150,000 daily active users, and multi-region availability with typical write latencies under 100 ms; which Google Cloud product should they choose for a flexible, scalable NoSQL database with strong web and mobile SDK support?
Cloud Spanner is a globally distributed, strongly consistent relational (SQL) database designed for high-scale OLTP with horizontal scaling and high availability. It’s excellent for structured relational data and global transactions, but it does not provide the same offline-first mobile/web client synchronization model or built-in real-time listeners via consumer SDKs. Spanner also typically requires more deliberate schema design and operational planning than a serverless document database.
Cloud Storage is an object store for unstructured blobs (images, videos, backups, large files). While it is massively scalable and multi-region capable, it is not a NoSQL document database and does not support querying JSON documents, real-time update listeners, or offline synchronization patterns for app state. You could store JSON files as objects, but you would need additional services to index, query, and push real-time updates.
BigQuery is a serverless data warehouse optimized for analytics (OLAP) over large datasets using SQL. It is not intended for low-latency operational reads/writes for an order-tracking application, nor does it provide real-time listeners or offline mobile/web synchronization. BigQuery streaming ingestion exists, but typical usage is analytical reporting and dashboards, not serving app transactions under 100 ms write latency expectations.
Firestore is a serverless, scalable NoSQL document database that stores JSON-like documents in collections and supports real-time listeners for instant updates to clients. It offers offline persistence and synchronization for iOS, Android, and web SDKs, making it ideal for mobile/web order tracking. Firestore supports multi-region locations for high availability and typically low write latencies when clients are near the chosen location, with automatic scaling as usage grows.
Core Concept: This question tests selecting the right managed NoSQL database for web/mobile apps that need JSON-like documents, real-time listeners, offline sync, and automatic serverless scaling with multi-region availability.

Why the Answer is Correct: Firestore (Cloud Firestore) is purpose-built for application backends that require document storage (JSON-like documents), real-time updates via listeners, and offline-first client behavior. It provides first-class SDKs for iOS, Android, Web, and server environments, enabling clients to subscribe to changes and automatically synchronize when connectivity returns. Firestore is fully managed and serverless, so it scales automatically without capacity planning—well aligned with “scale from 0 to 150,000 daily active users.” For multi-region deployments, Firestore offers multi-region locations with strong availability characteristics and typically low latencies for regional users; the question’s “typical write latencies under 100 ms” aligns with Firestore’s common performance profile when clients are near the chosen location.

Key Features / Best Practices: Firestore supports document/collection modeling, automatic indexing (with composite indexes as needed), and real-time streams. Offline persistence is available in mobile and web SDKs, enabling local caching and queued writes. Multi-region configuration improves resilience and aligns with the Google Cloud Architecture Framework pillars of reliability and operational excellence (managed service, reduced ops burden). Security is commonly enforced with Firebase Authentication plus Firestore Security Rules (or IAM for server access). For scale and cost control, design efficient document structures, avoid hot documents, and use batched writes/transactions appropriately.

Common Misconceptions: Cloud Spanner is globally consistent and relational, but it’s not optimized for offline mobile sync and real-time listeners via client SDKs.
Cloud Storage stores objects (blobs), not queryable documents with real-time subscriptions. BigQuery is an analytics warehouse, not an operational database for low-latency app reads/writes.

Exam Tips: When you see “real-time listeners,” “offline sync,” and “mobile/web SDKs,” think Firestore (or Firebase Realtime Database). If the question also emphasizes flexible JSON documents and serverless scaling, Firestore is the best match. Reserve Spanner for relational schemas, SQL, and global strong consistency with high-scale OLTP, not offline-first client synchronization.
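The listener-plus-offline-queue behavior described above can be modeled in a few lines. This is NOT the Firestore SDK API; it is a pure-Python sketch of what the client SDKs handle for you (local cache, change callbacks, and queued writes replayed on reconnect):

```python
# Conceptual model of Firestore client SDK behavior: real-time
# listeners plus an offline write queue. Names are illustrative.

class LocalOrderStore:
    def __init__(self):
        self.docs = {}          # document id -> JSON-like dict (local cache)
        self.listeners = []     # callbacks fired on every change
        self.online = True
        self.pending = []       # writes queued while offline

    def on_snapshot(self, callback):
        """Subscribe to changes, like a real-time listener."""
        self.listeners.append(callback)

    def set(self, doc_id, data):
        if not self.online:
            self.pending.append((doc_id, data))  # queue for later sync
        self.docs[doc_id] = data                 # local cache updates either way
        for cb in self.listeners:                # listeners fire immediately
            cb(doc_id, data)

    def reconnect(self):
        """Replay queued writes, as the SDK does when connectivity returns."""
        self.online = True
        flushed, self.pending = self.pending, []
        return flushed
```

In the real SDKs this is configuration, not code you write: enable offline persistence and attach a snapshot listener, and the caching, queuing, and replay come for free.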
An international airline is launching a real-time reservation platform serving customers in 12 regions across North America, Europe, and APAC. Seat inventory and payment records must be updated with strongly consistent ACID transactions across regions, 24/7 availability, and sub-100 ms regional read latency, and the system must horizontally scale to over 50 million bookings per day with multi-petabyte operational data growth projected over 3 years. Which Google Cloud product should they choose?
Cloud SQL provides managed relational databases (MySQL, PostgreSQL, SQL Server) with ACID transactions, but it is primarily designed for regional deployments. While it supports read replicas and cross-region replicas for disaster recovery, cross-region replication is typically asynchronous and does not provide globally distributed, strongly consistent ACID transactions across multiple regions. It also has vertical scaling limits compared to Spanner for multi-petabyte, ultra-high throughput OLTP.
Cloud Spanner is purpose-built for globally distributed OLTP requiring strong consistency and high availability. It supports multi-region instances with synchronous replication and external consistency, enabling ACID transactions across regions—critical for seat inventory and payment correctness. Spanner scales horizontally by adding nodes/processing units and can handle very large operational datasets (multi-petabyte) with high throughput, while providing low-latency reads via replica placement near users.
Cloud Storage is an object store for unstructured data (files, images, backups) with very high durability and availability. It is not a relational database and does not provide ACID transactional semantics for complex multi-row updates like seat inventory and payment records. While it can serve content globally and scale massively, it is not suitable as the primary system of record for real-time reservations requiring strong consistency and SQL transactions.
BigQuery is a serverless data warehouse optimized for analytics (OLAP), not transactional workloads (OLTP). It excels at scanning large datasets for reporting, dashboards, and batch/stream analytics, but it is not designed for high-frequency, strongly consistent, multi-row ACID transactions across regions. BigQuery can complement the reservation system for analytics (e.g., demand forecasting), but it should not be used as the real-time booking database.
Core Concept: This question tests choosing a globally distributed operational database that provides strong consistency (ACID) across regions, high availability, low-latency reads near users, and massive horizontal scalability.

Why the Answer is Correct: Cloud Spanner is Google Cloud’s globally distributed relational database designed for mission-critical OLTP at planetary scale. It uniquely combines relational schemas and SQL with strong external consistency across regions using synchronous replication and TrueTime. For an airline reservation system, seat inventory and payments require strict correctness (no double-booking, no lost updates) even with concurrent writes from 12 regions. Spanner can run as a multi-region instance to provide 24/7 availability and automatic failover while maintaining ACID transactions across regions.

Key Features / Best Practices:
- Strongly consistent ACID transactions across regions (external consistency) suitable for inventory and financial records.
- Multi-region configurations (e.g., nam3, eur3, asia1, or custom) to place replicas near users for sub-100 ms regional reads while maintaining global correctness.
- Horizontal scaling via adding nodes/processing units; supports very high throughput (tens of millions of writes/day) and multi-petabyte operational datasets.
- High availability with synchronous replication and automatic failover; aligns with Google Cloud Architecture Framework goals for reliability and performance.
- Use schema design and keys to avoid hotspots (e.g., avoid monotonically increasing primary keys for heavy write tables; consider hashed keys) and use read-only transactions for low-latency reads.

Common Misconceptions: Cloud SQL is relational and ACID, but it is primarily regional and doesn’t provide globally distributed strong consistency with multi-region synchronous writes at this scale.
BigQuery and Cloud Storage are excellent for analytics and object storage, respectively, but they are not OLTP transactional systems for real-time reservations.

Exam Tips: When you see “global,” “strong consistency across regions,” “ACID,” “high availability,” and “massive horizontal scale,” think Cloud Spanner. If the workload is analytical (reporting/BI), think BigQuery; if it’s objects/blobs, think Cloud Storage; if it’s a traditional regional relational DB, think Cloud SQL.
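The hotspot-avoidance advice can be made concrete: instead of using a monotonically increasing booking number directly as the primary key, derive a deterministic hash shard as a key prefix so inserts spread across Spanner’s key range. The column names and shard count below are illustrative, not a prescribed schema:

```python
import hashlib

def booking_key(booking_number: int, shards: int = 16) -> tuple:
    """Prefix a sequential booking number with a deterministic hash shard
    so heavy insert traffic spreads across splits instead of piling onto
    the tail of the key range (a classic Spanner hotspot)."""
    digest = hashlib.sha256(str(booking_number).encode()).hexdigest()
    shard = int(digest, 16) % shards
    # Corresponds to a hypothetical PRIMARY KEY (ShardId, BookingNumber).
    return (shard, booking_number)
```

Reads by booking number stay cheap (recompute the shard from the number), while writes fan out across up to 16 key ranges.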
A regional hospital network plans to use Google Cloud’s advanced ML services (such as Vertex AI) for radiology model training and inference, but regulations require all 120 TB of patient images and metadata to remain stored only in its on-premises data center; the hospital has a 5 Gbps Dedicated Interconnect to Google Cloud and must ensure no PHI is persisted in the public cloud. Which overall cloud strategy should they adopt to meet these constraints while still using Google Cloud’s ML capabilities?
A hybrid-cloud approach is the correct strategy because it combines on-premises infrastructure with public cloud services in a single operating model. In this scenario, the hospital can keep all 120 TB of patient images and metadata stored in its own data center to satisfy the regulatory requirement that PHI never be persisted in the public cloud. At the same time, it can use Google Cloud ML capabilities such as Vertex AI over the existing 5 Gbps Dedicated Interconnect, which provides private, high-throughput connectivity between environments. This is exactly the kind of requirement hybrid cloud is designed for: regulated data remains on-prem while selected cloud services are consumed where appropriate.
A multi-cloud approach means using services from multiple cloud providers, such as Google Cloud together with AWS or Azure. The question does not describe a need to distribute workloads across multiple public clouds, avoid vendor lock-in, or compare cloud-native services from different vendors. More importantly, multi-cloud does not inherently address the core requirement that all PHI remain stored only in the on-premises data center. Adding more clouds would usually increase governance, networking, and compliance complexity rather than solve the stated constraint.
A public-cloud approach would place the primary workload and data handling model in the cloud provider environment. That conflicts with the explicit requirement that all patient images and metadata must remain stored only in the hospital’s on-premises data center and that no PHI be persisted in the public cloud. Even if cloud services could be secured, the deployment model itself does not match the data residency and persistence restrictions in the question. Therefore, a pure public-cloud strategy is not appropriate here.
A private-cloud approach focuses on running infrastructure in a privately controlled environment, often entirely on-premises. While that could satisfy the requirement to keep PHI stored locally, it would not align with the goal of using Google Cloud’s advanced managed ML services such as Vertex AI. The question specifically asks for an overall cloud strategy that preserves on-prem data residency while still leveraging Google Cloud capabilities. That combination points to hybrid cloud, not private cloud alone.
Core Concept: This question tests cloud deployment models (hybrid vs. public/private/multi-cloud) and how to meet strict data residency and security requirements (no PHI persisted in the public cloud) while still consuming Google Cloud managed ML services like Vertex AI.

Why the Answer is Correct: A hybrid-cloud approach combines on-premises infrastructure (where regulated PHI must remain) with public cloud services (for elastic compute and managed ML capabilities). The hospital can keep the 120 TB of images/metadata stored only on-prem while using a 5 Gbps Dedicated Interconnect for private, high-throughput connectivity into Google Cloud. This aligns with the constraint “must ensure no PHI is persisted in the public cloud” because storage remains on-prem; only controlled, transient processing can occur in Google Cloud, subject to architecture and governance.

Key Features / How to Implement:
- Connectivity: Dedicated Interconnect provides private connectivity and predictable bandwidth/latency versus internet VPN, supporting large-scale data access patterns.
- Data governance: Design so PHI is not written to Cloud Storage/BigQuery/etc. Use strict IAM, VPC Service Controls (where applicable), and organization policies to reduce accidental data exfiltration.
- Processing pattern: Stream or access data from on-prem during training/inference and ensure ephemeral compute disks and logs do not retain PHI. Consider de-identification/tokenization on-prem if feasible, and send only non-PHI features to the cloud.
- Architecture Framework alignment: This is primarily a Trust and Security decision (data residency, compliance, least privilege), but also touches reliability and performance (dedicated private connectivity).

Common Misconceptions:
- “Private cloud” can sound right because data stays on-prem, but it would not satisfy the requirement to use Google Cloud’s managed ML services.
- “Public cloud” is incorrect because it implies storing/processing PHI in Google-managed storage/services.
- “Multi-cloud” is about using multiple public clouds; it doesn’t address the on-prem-only storage requirement.

Exam Tips: When a question states data must remain on-prem (regulatory/sovereignty/PHI) yet wants to use public cloud services, the default answer is hybrid cloud. Look for keywords like Dedicated Interconnect, on-prem data center, and “no data persisted in cloud,” which strongly indicate hybrid connectivity plus strict data governance controls.
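The “de-identify on-prem, send only non-PHI features” pattern can be sketched as a keyed tokenization step that runs inside the hospital’s data center before anything crosses the Interconnect. The field names and secret below are placeholders, and real PHI handling would need a proper key management and legal review:

```python
import hmac
import hashlib

SECRET_KEY = b"on-prem-only-secret"  # placeholder; real key never leaves the DC

def pseudonymize(patient_id: str) -> str:
    """Deterministic keyed token: the same patient always maps to the same
    token, but the token cannot be reversed without the on-prem key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def to_cloud_payload(record: dict) -> dict:
    """Strip direct identifiers before anything leaves the data center;
    only the token and derived, non-PHI features are sent to Vertex AI."""
    return {
        "subject_token": pseudonymize(record["patient_id"]),
        "image_features": record["image_features"],
    }
```

Because the token is deterministic, predictions returned by the cloud can still be joined back to the patient record on-prem, where the mapping lives.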
An international retail marketplace is migrating its order processing and customer analytics to Google Cloud, storing personally identifiable information (PII) for 50 million users across 12 countries in the EU, US, and APAC, with workloads in europe-west1, us-central1, and asia-southeast1. To ensure compliance with global standards during migration—given obligations such as GDPR, CPRA/CCPA, and the Australia Privacy Act—what approach should the company take regarding security and privacy regulations?
Correct. Multi-country PII processing requires compliance with each applicable jurisdiction’s laws and regulations where data is collected, processed, or stored, and where data subjects reside. GDPR, CPRA/CCPA, and Australia’s Privacy Act can all apply simultaneously depending on users and processing activities. A practical approach is to map data flows, classify data, and implement policy and technical controls to meet overlapping and strictest requirements.
Incorrect. Regional frameworks do not universally supersede other laws. For example, GDPR does not override US state privacy laws for US residents, and US frameworks do not replace EU requirements for EU data subjects. A global retailer must handle overlapping obligations and cross-border transfer requirements. Relying on only one region’s framework creates compliance gaps and legal risk.
Incorrect. International standards like ISO/IEC 27001 (security) and 27701 (privacy extension) are valuable for building a management system and demonstrating controls, but they do not replace legal obligations such as GDPR or CPRA/CCPA. Laws can require specific rights, notices, retention limits, and transfer mechanisms that standards alone do not guarantee.
Incorrect. Privacy and security are both required and intertwined. Privacy laws often mandate security safeguards (e.g., appropriate technical and organizational measures) and also require processes for consent, access/deletion requests, purpose limitation, and transparency. Prioritizing only security ignores core privacy obligations and would fail compliance audits and regulatory expectations.
Core Concept: This question tests governance for security and privacy compliance in a multi-jurisdiction, multi-region cloud migration. The key idea is that compliance is driven by where data subjects reside and where data is collected/processed/stored, and that privacy laws (GDPR, CPRA/CCPA, Australia Privacy Act) and security standards must be addressed together.

Why the Answer is Correct: Option A is correct because an international retailer handling PII across the EU, US, and APAC must meet applicable legal and regulatory requirements in each relevant jurisdiction. GDPR applies to EU data subjects and imposes strict rules (lawful basis, data minimization, DPIAs, cross-border transfer mechanisms, breach notification). CPRA/CCPA applies to certain California residents and focuses on consumer rights, notice, and data sharing. Australia’s Privacy Act governs Australian personal information handling. These do not “supersede” each other; obligations can overlap, and the organization must satisfy the strictest applicable requirements per processing activity.

Key Features / Best Practices (Google Cloud context): Use a policy-driven approach: data classification, records of processing, and privacy-by-design. Implement technical controls such as IAM least privilege, VPC Service Controls for data exfiltration risk reduction, Cloud KMS/Cloud HSM and CMEK where required, audit logging with Cloud Audit Logs, and DLP for discovery/masking/tokenization. Apply data residency and access controls by choosing regions (europe-west1, us-central1, asia-southeast1) appropriately, and manage cross-region transfers with documented legal mechanisms and strong encryption. Use Organization Policy constraints, Security Command Center, and Assured Workloads where relevant to enforce guardrails.

Common Misconceptions: A frequent trap is believing that adopting an international standard (ISO 27001/27701) automatically satisfies laws.
Standards help demonstrate a security/privacy management system, but they do not replace statutory requirements. Another misconception is treating security as separate from privacy; privacy requirements often mandate security safeguards and user rights processes.

Exam Tips: For Digital Leader, choose answers emphasizing “meet all applicable regulations” and shared responsibility: Google secures the cloud, customers configure access, data handling, and compliance processes. When multiple countries/regions are involved, assume you must map requirements per jurisdiction and per data flow, not pick a single framework to cover everything.
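The “map requirements per jurisdiction and per data flow” advice can be sketched as a simple lookup from data-subject residency to applicable regimes. The mapping below is a deliberate simplification for illustration only, not legal guidance:

```python
# Simplified illustration, not legal advice: which privacy regimes a
# processing activity must satisfy, keyed by data-subject residency.
APPLICABLE_LAWS = {
    "EU": {"GDPR"},
    "US-CA": {"CPRA/CCPA"},
    "AU": {"Australia Privacy Act"},
}

def obligations(residencies: set) -> set:
    """Union of regimes: overlapping laws apply simultaneously, so the
    controls for a data flow must satisfy the strictest combination."""
    laws = set()
    for region in residencies:
        laws |= APPLICABLE_LAWS.get(region, set())
    return laws
```

A data flow touching EU and Australian users, for example, must satisfy both GDPR and the Australia Privacy Act at once rather than picking one framework.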
A healthcare imaging startup has 0 legacy applications and no on-premises servers to migrate. They plan to launch a new AI-assisted diagnostics platform entirely on Google Cloud within 6 months and explicitly want cloud-native microservices from day 1 (managed services over VMs, no lift-and-shift). Which application modernization approach should they choose?
Lift-and-shift (rehost) is used when you have existing applications and want to move them quickly to the cloud with minimal changes, often onto Compute Engine VMs. This scenario explicitly says there are no legacy applications and they do not want VMs or lift-and-shift. Choosing A would add an unnecessary migration phase and contradict the requirement for cloud-native microservices from day 1.
Refactoring on premises first assumes you have an on-premises environment and existing applications to modify before migrating. The startup has no on-prem servers and no legacy apps, so there is nothing to refactor in place. Additionally, refactoring “on premises” delays cloud-native benefits and is misaligned with the goal to build entirely on Google Cloud within 6 months using managed services.
A greenfield approach means building net-new cloud-native applications designed specifically for the cloud. Because the company has no legacy systems to migrate and wants microservices and managed services from day 1, greenfield is the most direct and lowest-friction path. It enables using Cloud Run/GKE Autopilot, managed databases, and event-driven services immediately, accelerating delivery and reducing operational burden.
Brownfield modernization applies when there is an existing environment (legacy apps, infrastructure, data center, or prior cloud footprint) that must be modernized or integrated with. The question states there are 0 legacy applications and no on-prem servers, so there is no existing environment to modernize. Selecting D would misinterpret the scenario and imply constraints that do not exist.
Core Concept: This question tests application modernization strategy selection (greenfield vs. brownfield, refactor vs. rehost) and aligning the approach to business context. In Google Cloud terms, “cloud-native microservices from day 1” implies designing for managed services (e.g., Cloud Run, GKE Autopilot, Pub/Sub, Cloud SQL/Spanner, Vertex AI) rather than migrating existing workloads.

Why the Answer is Correct: The startup has 0 legacy applications and no on-premises servers. There is nothing to migrate, rehost, or refactor from an existing environment. Their goal is to launch a new platform on Google Cloud within 6 months, and they explicitly prefer cloud-native microservices and managed services over VMs. That is the definition of a greenfield approach: building net-new applications designed for the cloud, using modern patterns (microservices, event-driven architecture, managed databases, CI/CD) from the start.

Key Features / Best Practices: A greenfield build enables:
- Microservices on managed compute: Cloud Run (serverless containers) or GKE Autopilot (managed Kubernetes) to reduce ops overhead.
- Event-driven integration: Pub/Sub for decoupling services and improving resilience.
- Managed data stores: Cloud SQL for relational needs, Firestore for document workloads, or Spanner for global scale.
- Security-by-design: IAM least privilege, VPC Service Controls for sensitive healthcare data, Cloud KMS, Cloud Audit Logs.
- Delivery velocity: Cloud Build + Artifact Registry + Cloud Deploy for CI/CD; Infrastructure as Code with Terraform.

This aligns with Google Cloud Architecture Framework principles: operational excellence (managed services), reliability (decoupling), security (defense-in-depth), and cost optimization (pay-per-use serverless where appropriate).

Common Misconceptions: Options involving “migrate first” or “refactor on premises” can sound like modernization, but they assume an existing application estate.
“Brownfield” modernization applies when you must integrate with or transform an existing environment. Here, there is no legacy footprint, so those approaches add unnecessary steps and risk.

Exam Tips: Look for keywords: “no legacy,” “no on-prem,” “net-new,” “cloud-native from day 1” → greenfield build. If the scenario mentions existing apps/VMs/data centers, then consider rehost/refactor/brownfield. Also note that Digital Leader questions often test selecting the simplest strategy that matches constraints and desired outcomes (speed, managed services, minimal ops).
A fintech company with 12 departments must comply with a policy that forbids any Compute Engine VM from having a public (external) IP; over the next quarter, 5 new folders and about 30 new projects will be created by different teams using automated pipelines. You need a scalable control that applies by default to all current and future resources and prevents assigning external IPs to VMs across the organization; what should you do?
Correct. Applying constraints/compute.vmExternalIpAccess at the organization root enforces a centralized guardrail that automatically applies to all existing and newly created folders and projects via inheritance. This is the most scalable approach for an environment where teams create projects through automation, and it provides a preventive control (not just guidance) against assigning external IPs to VMs.
Incorrect. Setting the policy only on each existing folder may block external IPs for current descendants, but it does not automatically cover new folders created later unless you remember to apply the policy to each new folder. This creates operational overhead and risk of gaps, especially with multiple teams and automated project provisioning.
Incorrect. Applying the policy at the project level is the least scalable: you must configure it on every current project and repeat the work for each new project created. This approach is prone to configuration drift and missed projects, and it does not satisfy the requirement that the control applies by default to future resources.
Incorrect. Relying on teams to update templates and “do the right thing” is not an enforceable compliance control. It can be bypassed, forgotten, or inconsistently implemented across departments and pipelines. The requirement calls for a scalable, default, preventive control—best achieved with Organization Policy rather than process-based guidance.
Core Concept: This question tests Google Cloud resource hierarchy and Organization Policy Service (policy-as-code guardrails). Organization policies let you centrally enforce security constraints across an organization, folders, and projects, inheriting down the hierarchy by default.

Why the Answer is Correct: To ensure no Compute Engine VM can be created with an external IP—now and as new folders/projects are added—you must apply the constraint at the highest applicable point: the organization root node. Setting constraints/compute.vmExternalIpAccess at the org level creates a scalable, default-deny control that automatically applies to all descendant folders and projects, including the 5 new folders and ~30 new projects created by automated pipelines. This meets the requirement of “applies by default to all current and future resources” without relying on teams to remember per-project settings.

Key Features / Best Practices:
- Inheritance: Policies set at the organization node propagate to all descendants unless explicitly overridden (where allowed). This is aligned with the Google Cloud Architecture Framework’s Security, privacy, and compliance pillar: enforce preventive controls centrally.
- Preventive guardrail: constraints/compute.vmExternalIpAccess blocks assignment of external IPs at VM creation (and can restrict which principals/projects can use them). This reduces data exfiltration risk and enforces a private-by-default network posture.
- Operational scalability: One policy change covers future growth and avoids drift across many projects.

Common Misconceptions: Teams often try to enforce this via documentation or CI/CD template changes (option D). That’s not a reliable control: it’s detective at best and prone to exceptions, drift, and human error. Others apply policies at folders/projects (B/C), which works only for existing resources and requires continuous updates as new folders/projects appear.

Exam Tips: When you see “all current and future projects/folders” and “scalable control,” think Organization Policy at the org root. Use the resource hierarchy to minimize administrative overhead. Also remember that network-level controls (like firewall rules) don’t prevent external IP assignment; organization policies are the correct preventive mechanism for configuration constraints.
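As a sketch, the preventive control above can be expressed as the policy body that would be applied at the organization node. The constraint name is real; the organization ID is a placeholder, and the dict shape follows the list-constraint pattern (denying all values means no VM in scope may be assigned an external IP), so treat it as an illustration rather than a definitive API call.

```python
# Placeholder organization ID; apply at the org root so the policy
# inherits to all current and future folders and projects.
org_resource = "organizations/123456789"

# "constraints/compute.vmExternalIpAccess" is a real list constraint.
# Denying all values = default-deny: no VM may receive an external IP.
external_ip_policy = {
    "constraint": "constraints/compute.vmExternalIpAccess",
    "listPolicy": {
        "allValues": "DENY",
    },
}

def allows_external_ip(policy: dict) -> bool:
    """Return True if this policy would still permit external IPs."""
    return policy.get("listPolicy", {}).get("allValues") != "DENY"
```

Because the policy sits at the org root, the 5 new folders and ~30 new projects inherit it automatically with no per-team action.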
An e-commerce startup named ParcelPeak operates a Cloud SQL for PostgreSQL instance (db-custom-4-15360) ingesting about 400,000 order, payment, and refund rows per day from 7 microservices across 3 regions. Executives request weekly KPI dashboards and ad-hoc cohort revenue analysis without exporting data to on-prem systems. In this situation, which capability of Cloud SQL most directly helps the team turn this operational data into business insights?
Correct. Cloud SQL for PostgreSQL supports standard relational database connectivity, which allows BI and analytics tools to connect and query the data for dashboards and reports. That is the most direct way the startup can turn operational order and payment data into weekly KPI dashboards and ad-hoc business analysis. Cloud SQL itself stores and serves the data, while external BI platforms provide the visualization and analytical experience.
Incorrect. Cloud SQL does not automatically train, deploy, or serve machine learning models on its transactional tables. Google Cloud provides machine learning capabilities through services such as Vertex AI and BigQuery ML, not as a built-in feature of Cloud SQL. This option describes functionality outside Cloud SQL’s core purpose as a managed relational database.
Incorrect. Cloud SQL does not include a native dashboarding or intelligent analytics interface for creating charts and business reports by itself. It is designed to run relational database workloads, not to replace BI products. To create executive dashboards or interactive analytics, teams typically connect Cloud SQL to tools such as Looker Studio or Looker.
Incorrect. Cloud SQL is not a built-in ETL or text-processing platform for converting unstructured data into structured relational tables. It stores structured relational data and can participate in data pipelines, but transformation of raw or unstructured data is handled by other services. This option describes data engineering functionality rather than a direct Cloud SQL capability.
Core Concept: This question is about recognizing Cloud SQL’s role as a managed relational database and how organizations commonly derive business insights from operational data stored there. Cloud SQL itself is not a business intelligence platform, but it can be connected to analytics and reporting tools using standard database connectivity. The most direct capability that helps the team is integration with BI tools and analytics platforms so they can build dashboards and run analysis on cloud-hosted relational data.

Why the Answer is Correct: Executives want KPI dashboards and ad-hoc revenue analysis without moving data back on-premises. Cloud SQL for PostgreSQL supports standard PostgreSQL-compatible connections, which allows tools such as Looker Studio, Looker, and other analytics platforms to query the data and present insights. This makes option A the best fit because it describes the practical way Cloud SQL contributes to business insights: serving as a managed relational source for reporting and analysis.

Key Features: Cloud SQL provides managed PostgreSQL with standard SQL access, secure connectivity, backups, and high availability options. Because it uses familiar relational interfaces, many BI and reporting tools can connect to it directly. This makes it useful as a source of truth for operational reporting, though heavier analytical workloads are often better handled by dedicated analytics services.

Common Misconceptions: A common mistake is to assume Cloud SQL includes built-in dashboarding, machine learning, or ETL features. It does not natively create charts, train models, or transform unstructured data into structured datasets. Those capabilities come from other Google Cloud or partner services that connect to Cloud SQL.

Exam Tips: For Digital Leader questions, map each product to its primary purpose. Cloud SQL is for managed relational databases and transactional workloads, while BI tools such as Looker and Looker Studio are used for dashboards and analysis. If a question asks how Cloud SQL helps generate insights, the best answer is usually about integration with analytics tools rather than native analytics features.
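Because Cloud SQL speaks standard SQL, the cohort-revenue analysis a BI tool would run is just a relational query. The sketch below uses an in-memory SQLite database purely to stay self-contained; against Cloud SQL you would issue the same kind of query through a PostgreSQL driver or a BI connector, and the table and column names here are invented for illustration.

```python
import sqlite3

# Stand-in for a Cloud SQL table (schema/data invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        signup_month TEXT,   -- customer cohort
        revenue REAL
    )
""")
conn.executemany(
    "INSERT INTO orders (signup_month, revenue) VALUES (?, ?)",
    [("2024-01", 120.0), ("2024-01", 80.0), ("2024-02", 50.0)],
)

# The kind of ad-hoc cohort revenue query a dashboard would issue
# over standard relational connectivity.
rows = conn.execute("""
    SELECT signup_month, SUM(revenue) AS cohort_revenue
    FROM orders
    GROUP BY signup_month
    ORDER BY signup_month
""").fetchall()
```

The database only stores and serves the data; the grouping, visualization, and scheduling of such queries is the BI tool’s job, which is exactly the division of labor the correct answer describes.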
A regional nonprofit with 12 offices and 650 staff must roll out email, calendaring, and a document collaboration suite within 14 days, meet a 99.9% availability target, stay under a $12,000 monthly IT budget, and has only one part-time sysadmin; they want the provider to handle upgrades, backups, security baselines, and scaling so the team can focus on using the apps. In this scenario, which cloud service model is the best fit, and why would choosing SaaS be appropriate?
This describes PaaS: a balance where you deploy code while the provider manages underlying servers/OS. It’s useful for custom applications (e.g., App Engine, Cloud Run) but does not directly deliver a complete email/calendaring/document suite. The nonprofit’s need is to adopt a ready-made collaboration product quickly, not to build and operate one, so PaaS is not the best fit.
This is SaaS and matches the scenario precisely. The nonprofit wants the provider to run the full application stack: updates, backups, security baselines, scaling, and high availability—while the organization focuses on using the tools. Google Workspace is a typical SaaS example for email, calendaring, and document collaboration, enabling rapid rollout and predictable per-user pricing within a tight budget and limited admin capacity.
This describes IaaS: maximum control over VMs, networking, storage, and operating systems (e.g., Compute Engine). While flexible, it requires significant operational work to meet 99.9% availability (multi-zone design, patching, monitoring), plus backups and upgrades. With only a part-time sysadmin and a 14-day deadline, IaaS would add complexity and risk, and likely increase ongoing operational costs.
This suggests a strategy of dynamically shifting between flexibility and provider management, which is not a standard service model choice for delivering a collaboration suite. While organizations can mix SaaS/PaaS/IaaS across workloads, the question asks for the best fit for this specific need. The nonprofit’s requirements strongly point to SaaS rather than an approach that implies frequent model changes.
Core Concept: This question tests cloud service models (SaaS vs PaaS vs IaaS) and matching them to business constraints: rapid rollout, high availability, limited IT staff, predictable costs, and a desire for the provider to manage operations.

Why the Answer is Correct: SaaS (Software as a Service) is the best fit because the nonprofit needs a complete email, calendaring, and document collaboration suite in 14 days with 99.9% availability, while having only one part-time sysadmin and a strict monthly budget. With SaaS, the provider operates the full application stack (application, runtime, OS, infrastructure), including upgrades, patching, backups, security baselines, and scaling. This directly aligns to the requirement that “the provider handle upgrades, backups, security baselines, and scaling so the team can focus on using the apps.” In Google Cloud terms, this typically maps to Google Workspace for Gmail, Calendar, Drive, and Docs, which is designed for fast deployment and minimal customer operations.

Key Features / Best Practices: SaaS offerings provide built-in SLAs (often meeting or exceeding 99.9%), centralized admin controls, and security capabilities such as MFA/2-Step Verification, SSO/SAML, DLP and retention (plan-dependent), and audit logs. Operational tasks like capacity planning, server maintenance, and software version management are handled by the vendor, supporting the Google Cloud Architecture Framework principle of “operational excellence” by reducing undifferentiated heavy lifting. Cost is typically per-user/per-month, which improves predictability versus building and operating equivalent systems on IaaS/PaaS.

Common Misconceptions: PaaS can sound attractive because it reduces server management, but it’s meant for building/deploying custom applications—not adopting a ready-made collaboration suite. IaaS offers maximum control, but it increases operational burden (patching, backups, HA design), which conflicts with the staffing and timeline constraints. “Dynamic shifting” between models is not a primary selection criterion for this scenario.

Exam Tips: When requirements emphasize fastest time-to-value, minimal IT operations, vendor-managed updates/availability, and standard business functions (email/docs), choose SaaS. If the question emphasizes custom code deployment without managing servers, think PaaS. If it emphasizes OS/network control, think IaaS. Always map constraints (staffing, SLA, timeline, budget predictability) to the shared responsibility model.
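The per-user/per-month predictability can be checked with simple arithmetic. The $12/user/month figure below is an assumption for illustration (actual SaaS suite pricing varies by plan); the staff count and budget come from the scenario.

```python
# Scenario figures.
staff = 650
budget_monthly = 12_000.0

# ASSUMPTION: illustrative per-user SaaS list price, not a real quote.
assumed_price_per_user = 12.0

saas_monthly_cost = staff * assumed_price_per_user  # 650 * 12 = 7,800
within_budget = saas_monthly_cost <= budget_monthly
headroom = budget_monthly - saas_monthly_cost
```

Even under this assumed price, the cost is a flat, predictable line item with headroom to spare, whereas an IaaS build would add variable compute, storage, and staffing costs that are hard to forecast with one part-time sysadmin.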
A nationwide media company runs compute-heavy web, analytics, and ML training workloads across 9 Google Cloud projects that all use the same billing account; per-project monthly costs fluctuate from $2,000 to $18,000, but the combined monthly spend has remained about $100,000 with less than 4% variance over the past 6 months, and most compute runs on Compute Engine in us-central1 and europe-west1; the company wants to optimize costs without reorganizing projects or changing application architecture—what should they do?
Purchasing separate 1-year commitments per project is inefficient when per-project usage fluctuates. Each project could end up with unused committed capacity during low months, while other projects pay on-demand during spikes. This increases the risk of overcommitting in some projects and undercommitting in others, reducing realized savings. It also adds operational overhead to manage and resize multiple commitments across 9 projects.
Creating separate billing accounts per project generally makes cost optimization harder, not easier. It prevents pooling and sharing of discounts and can fragment purchasing power. It also increases administrative overhead (billing management, invoicing, budgets, and permissions). This does not address the core requirement—optimize costs without reorganizing projects or changing architecture—and may reduce the ability to apply commitments efficiently.
This is the best approach: enable billing-account-level committed use discount sharing and purchase a single 1-year commitment sized to the aggregate steady-state usage. Because the total spend is stable, the commitment will be consistently utilized even if individual projects vary. The discount can be applied across eligible usage in all linked projects, maximizing savings while meeting the constraint of no project reorganization or application changes.
Consolidating all workloads into one project is unnecessary and conflicts with the requirement to avoid reorganizing projects. Project consolidation can introduce governance, IAM, quota, and operational complexity, and it may disrupt existing teams and chargeback models. Larger discounts do not require consolidation when billing-account-level sharing is available; you can centralize discount benefits without moving resources.
Core Concept: This question tests cost optimization using Committed Use Discounts (CUDs) for Compute Engine and the ability to share those discounts across multiple projects under the same Cloud Billing account. It also implicitly tests the idea of sizing commitments to steady-state (baseline) usage and avoiding organizational/app changes.

Why the Answer is Correct: The company has 9 projects with volatile per-project spend, but a very stable aggregate monthly spend (~$100,000 with <4% variance). That pattern is ideal for purchasing a single 1-year commitment sized to the aggregate baseline usage rather than trying to predict each project’s minimum. With billing-account-level CUD sharing enabled, the commitment is applied wherever eligible usage occurs across projects linked to that billing account. This maximizes utilization of the commitment even when workloads shift between projects month to month, delivering consistent savings without reorganizing projects or changing application architecture.

Key Features / Best Practices:
- Compute Engine CUDs provide discounted pricing in exchange for committing to a certain amount of resource usage for 1 or 3 years.
- Billing-account-level sharing allows the discount to be consumed by eligible usage across multiple projects under the same billing account, improving commitment “coverage” and reducing waste.
- Best practice is to commit only to the steady-state baseline (the portion you are confident will run continuously), then leave bursty/variable usage on on-demand pricing.
- Consider regional/usage alignment: since most compute is in us-central1 and europe-west1, commitments should be purchased to match where the baseline usage actually runs (and the relevant resource types).

Common Misconceptions: A common trap is assuming commitments must be purchased per project (leading to underutilized commitments when a project’s usage dips) or that consolidating projects is required to get better discounts. Another misconception is that changing billing accounts improves discounts; it usually fragments spend and reduces flexibility.

Exam Tips: When you see multiple projects with fluctuating costs but stable total spend, think “shared discounts at the billing account level” and “size to aggregate baseline.” Also remember the constraint: “no reorg, no architecture change” strongly points to billing/discount configuration rather than migration or consolidation. This aligns with Google Cloud Architecture Framework cost optimization principles: maximize utilization of committed capacity and reduce waste while maintaining operational simplicity.
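The "size to aggregate baseline" logic can be sketched numerically. The 25% discount rate below is an assumption for illustration only (actual 1-year Compute Engine CUD rates vary by resource type and region); the spend and variance figures come from the scenario.

```python
# Scenario figures: ~$100,000/month aggregate spend, <4% variance.
monthly_spend = 100_000.0
variance = 0.04

# ASSUMPTION: illustrative 1-year CUD discount rate, not a quoted price.
assumed_cud_discount = 0.25

# Best practice: commit only to the steady-state baseline you are
# confident will always run; leave the variable remainder on demand.
baseline = monthly_spend * (1 - variance)               # ~96,000 committed usage
committed_cost = baseline * (1 - assumed_cud_discount)  # discounted baseline
on_demand_remainder = monthly_spend - baseline          # bursty usage at list price

optimized_monthly = committed_cost + on_demand_remainder
monthly_savings = monthly_spend - optimized_monthly
```

Because the commitment covers only the stable aggregate floor, it stays fully utilized even as individual projects swing between $2,000 and $18,000, which is why a single billing-account-level commitment beats nine per-project ones.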
A multinational logistics company is onboarding a new workload to Google Cloud for 5 departments (Route Planning, Fleet Maintenance, Customer Support, Finance, Compliance) with 180 employees; access must be tailored per department within 72 hours, changes in membership occur quarterly, and the solution must minimize admin overhead while maintaining clear audit trails and avoiding over-privileged access. What is the quickest and most efficient way to implement department-specific access?
Correct. Assigning predefined IAM roles to Google Groups per department is the standard scalable IAM pattern. It enables rapid rollout (create groups, bind roles once), easy quarterly changes (update group membership), and strong governance (least privilege via role selection and resource scoping). It also improves auditability because access is traceable through IAM policy bindings and group membership logs.
Incorrect. Primitive roles (Owner/Editor/Viewer) are overly broad and commonly violate least-privilege requirements, especially across multiple departments with different duties (Finance vs. Customer Support). Directly assigning roles to 180 individuals also increases admin overhead and the chance of lingering access when employees change departments, weakening compliance and audit readiness.
Incorrect. Custom roles are useful when predefined roles don’t fit, but creating a unique custom role for every employee is operationally expensive, slow to implement within 72 hours, and difficult to maintain. It also increases the risk of mistakes and inconsistent permissions. Best practice is to use predefined roles and, if needed, a small number of custom roles aligned to job functions—not individuals.
Incorrect. A shared service account and distributed credentials eliminate individual accountability and create major security and compliance risks. It prevents clear audit trails (actions appear as the service account), complicates incident response, and violates best practices around credential management. Service accounts are intended for workloads/applications, not for shared human access across departments.
Core Concept: This question tests Google Cloud Identity and Access Management (IAM) best practices: using predefined roles, granting access to groups (not individuals), and applying least privilege with strong auditability. It also touches on operational efficiency (low admin overhead) and governance (clear audit trails).

Why the Answer is Correct: Option A is the fastest and most efficient approach because it combines (1) predefined IAM roles aligned to job functions and (2) Google Groups per department. You assign roles once to each group at the appropriate resource level (project/folder), then manage membership in the group as employees join/leave departments. Quarterly membership changes become a simple group update rather than repeated IAM policy edits for 180 individuals. This minimizes administrative effort and reduces the risk of configuration drift or missed removals.

Key Features / Best Practices:
- Group-based access control: Bind IAM roles to Google Groups (e.g., route-planning@, finance@). This is a standard pattern for scalable IAM.
- Predefined roles first: Prefer Google-managed predefined roles because they are maintained by Google and map to common job needs; use custom roles only when predefined roles cannot meet least-privilege requirements.
- Least privilege and scope: Grant roles at the lowest practical level (specific projects or folders per department) to avoid over-privileged access.
- Auditability: Cloud Audit Logs record IAM policy changes, and group membership changes can be audited via Cloud Identity / Workspace logs, providing clear traceability of who had access and when.

Common Misconceptions: Some assume giving primitive roles (Owner/Editor/Viewer) is “quick,” but it creates broad permissions, increases security risk, and complicates compliance. Others think custom roles per person are the most secure, but that is operationally unmanageable and error-prone. Service account credential sharing is sometimes mistaken for convenience, but it breaks accountability and violates security best practices.

Exam Tips: For Digital Leader questions, look for “quick + efficient + least privilege + low admin overhead.” The typical winning pattern is: organize resources (folders/projects), assign predefined roles to groups, and manage users through group membership. Reserve custom roles for exceptional cases, and avoid primitive roles and shared credentials unless explicitly justified.
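The operational difference between the two patterns can be sketched by counting IAM policy bindings. The binding dicts below follow the shape of an IAM policy (`role` plus `members`), but the group addresses and the `roles/viewer` placeholder are invented for illustration; in practice each department would get the predefined role matching its duties.

```python
departments = [
    "route-planning", "fleet-maintenance", "customer-support",
    "finance", "compliance",
]
employees = 180

# Group-based pattern: one binding per department group.
# "roles/viewer" is a placeholder; pick the predefined role per duty.
group_bindings = [
    {"role": "roles/viewer",
     "members": [f"group:{dept}@example.com"]}  # hypothetical addresses
    for dept in departments
]

# Per-user pattern: every employee appears in the IAM policy directly,
# and every departure or transfer requires a policy edit.
per_user_policy_members = employees

# With groups, quarterly churn is a group-membership change,
# not an IAM policy edit.
bindings_to_manage_with_groups = len(group_bindings)
```

Five stable bindings versus 180 individually managed members is the whole argument: the IAM policy stays small and auditable, while churn is absorbed by group membership.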
Your team operates a ride-sharing analytics service on Google Cloud across us-central1 and europe-west1, aiming for 99.9% monthly availability and 95th-percentile request latency under 300 ms during peak 18:00–22:00 traffic; within the SRE framework, which concept represents the metric that actually quantifies how well the service is performing?
A Service-level agreement (SLA) is a customer-facing contract that may specify availability/latency commitments and often includes consequences (credits/penalties) if not met. It is not the metric itself; it’s the formal agreement built on top of internal objectives and indicators. SLAs are typically less strict than internal SLOs to provide a safety margin.
A Service-level indicator (SLI) is the actual quantitative measure of service performance, such as availability, error rate, or 95th-percentile latency. It answers “how is the service doing right now/over a period?” In this scenario, the measured monthly availability and p95 latency during peak hours are SLIs, making this the correct choice.
Error reporting is an operational capability (for example, Google Cloud Error Reporting) that aggregates and alerts on application exceptions and crashes. While it can contribute data used to compute an SLI (like error rate), it is not the SRE concept that represents the performance metric itself. It’s a tool/service, not the measurement definition.
A Service-level objective (SLO) is the target value or threshold for an SLI over a time window (for example, “99.9% monthly availability” or “p95 latency < 300 ms during 18:00–22:00”). SLOs define the desired reliability level and drive error budgets, but they are not the metric; they are the goal applied to the metric.
Core Concept: This question tests Site Reliability Engineering (SRE) reliability measurement terminology: SLI, SLO, and SLA. In Google’s SRE model, you first define what “good” looks like (objectives), then choose the concrete measurements that quantify actual performance.

Why the Answer is Correct: A Service-level indicator (SLI) is the metric that actually quantifies how well the service is performing. Examples include availability (% of successful requests), request latency (e.g., 95th percentile under 300 ms), error rate, or throughput. The prompt asks for “the metric that actually quantifies how well the service is performing,” which is precisely the definition of an SLI. In the scenario, the measured values for monthly availability and p95 latency during peak hours are SLIs.

Key Features / Best Practices: In practice, SLIs are computed from telemetry (Cloud Monitoring metrics, logs-based metrics, traces) and should be:
- User-centric (measure what users experience, e.g., successful requests, end-to-end latency).
- Clearly defined (what counts as “success,” which endpoints, which regions, which time windows like 18:00–22:00).
- Tied to error budgets (derived from SLOs) to guide release velocity vs. reliability.
For a multi-region service (us-central1 and europe-west1), teams often define SLIs per region and globally, and ensure measurement accounts for traffic distribution and failover behavior.

Common Misconceptions: People often confuse SLI with SLO because the question includes targets (99.9% and p95 < 300 ms). Those targets are SLOs, but the underlying measured quantities (availability, latency percentiles) are SLIs. Another confusion is with SLA, which is an external contract and may include penalties; it is not the internal measurement itself.

Exam Tips: Memorize the mapping: SLI = “what you measure,” SLO = “the target for the measurement,” SLA = “customer-facing contract.” If the question asks for the metric/measurement, pick SLI; if it asks for the goal/threshold, pick SLO; if it asks for a contractual guarantee, pick SLA. This aligns with Google Cloud Operations/SRE practices and the Google Cloud Architecture Framework’s reliability pillar (measurable objectives and monitoring).
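The SLI/SLO relationship can be made concrete with a small calculation: the SLI is the measured ratio, the SLO is the 99.9% target applied to it, and the error budget falls out of the SLO. The request counts below are invented for illustration, and a 30-day month is assumed.

```python
# SLI: what you measure — here, the availability ratio from request counts
# (counts are invented for illustration).
total_requests = 1_000_000
failed_requests = 800
availability_sli = (total_requests - failed_requests) / total_requests  # 0.9992

# SLO: the target applied to the SLI (from the scenario).
availability_slo = 0.999
slo_met = availability_sli >= availability_slo

# Error budget: the allowed unreliability implied by the SLO.
minutes_per_month = 30 * 24 * 60                                   # 43,200
error_budget_minutes = (1 - availability_slo) * minutes_per_month  # ~43.2
```

A 99.9% monthly SLO leaves roughly 43 minutes of downtime budget; a p95-latency SLI would be computed the same way, with "good" events defined as requests served under 300 ms during the 18:00–22:00 window.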
Study period: 1 month
I liked that the questions were very similar to the actual exam questions. After finishing all the lectures I had trouble finding practice questions, but this app served me really well.
Study period: 1 month
Good questions and explanations. The resource is very similar to the real exam questions. Thanks.
Study period: 1 month
This is exactly what you need to pass the GCP CDL exam. The look and feel are exactly how you would experience the real exam. The questions are very similar and you'll even find a few on the exam itself. I would recommend this for anyone looking to obtain this certification. The exam is not an easy one, so the explanations to the questions are very helpful to solidify your understanding and help.
Study period: 2 months
I used Cloud Pass to prepare for the GCP CDL exam, and it made a huge difference. The practice questions covered a wide range of scenarios I actually saw on the test. The explanations were clear and helped me understand how LookML and data modeling work in real projects. If you focus on understanding the logic behind each question, this app is more than enough to pass.
Study period: 1 month
Cloud Pass was my main study tool for the GCP CDL exam, and I passed on the first try. The questions were realistic and helped me get comfortable with Looker concepts, permissions, explores, and model structures. I especially liked that I could reset my progress and re-solve the tricky questions. Strongly recommend this for anyone targeting CDL.
