
A smart home company runs its control platform on a managed messaging service from its cloud provider that guarantees 99.99% monthly availability, but last month monitoring showed only 99.6% availability (about 173 minutes of cumulative downtime), intermittently preventing devices from receiving commands. What is a likely impact on the organization?
This is not the best answer for the question being asked. Error budget consumption is an internal SRE or reliability-management concept used by engineering teams to track service performance against objectives, but the prompt asks for a likely impact on the organization from customer-visible downtime. In a Digital Leader context, the broader business consequence of outages is more relevant than an internal operational metric.
Incorrect. Cloud provider SLAs for managed services are generally standardized and are not typically renegotiated simply because one month of availability was lower than expected. The more common outcomes are service credits, escalation with the provider, or architectural changes to improve resilience. De-emphasizing uptime commitments would not be a likely or sensible organizational response.
Correct. The messaging outage intermittently prevents smart home devices from receiving commands, which directly degrades the customer experience and undermines trust in the product. When a core feature of a connected device platform becomes unreliable, customers may cancel subscriptions, avoid renewals, or move to competitors. That makes customer churn and lost revenue the most likely organizational impact in this scenario.
Incorrect. Reduced availability means the service was intermittently unavailable or unable to process requests, not that stored data was deleted. Data deletion is related to durability, backup strategy, replication, and disaster recovery rather than uptime percentage alone. An availability shortfall can disrupt operations, but it does not imply irretrievable database loss.
Core concept: This question is about the business impact of reduced service availability in a customer-facing smart home platform. A managed messaging service that falls from a 99.99% availability expectation to 99.6% availability experiences substantial downtime, which directly affects end users when devices cannot receive commands.

Why correct: Because the outage interrupts core product functionality, customers may lose trust in the service, stop using it, or switch to competitors, which can reduce revenue.

Key features: Availability measures whether a service can be accessed and used when needed, and customer-facing outages often translate into support costs, reputational damage, and churn.

Common misconceptions: Error budgets and SRE metrics are real operational concepts, but they are internal engineering indicators rather than the most likely organizational impact described here; availability issues also do not imply data deletion.

Exam tips: For Google Cloud Digital Leader questions, prefer the answer that connects technical failure to business outcomes when the scenario emphasizes customer-facing disruption.
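The gap between the promised and observed availability can be sanity-checked with simple arithmetic; a minimal sketch, assuming a 30-day month:

```python
# Convert a monthly availability percentage into cumulative downtime,
# assuming a 30-day month (43,200 minutes), to match the scenario.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def downtime_minutes(availability_pct: float) -> float:
    """Cumulative downtime implied by a monthly availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_MONTH

promised = downtime_minutes(99.99)  # downtime allowed by the 99.99% SLA
observed = downtime_minutes(99.6)   # downtime at the observed 99.6%

print(round(promised, 2))  # 4.32 minutes
print(round(observed, 1))  # 172.8 minutes, i.e. "about 173"
```

This confirms the scenario's "about 173 minutes" figure and shows the outage exceeded the SLA allowance by roughly 40x.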
A regional hospital network plans to use Google Cloud’s advanced ML services (such as Vertex AI) for radiology model training and inference, but regulations require all 120 TB of patient images and metadata to remain stored only in its on-premises data center; the hospital has a 5 Gbps Dedicated Interconnect to Google Cloud and must ensure no PHI is persisted in the public cloud. Which overall cloud strategy should they adopt to meet these constraints while still using Google Cloud’s ML capabilities?
A hybrid-cloud approach is the correct strategy because it combines on-premises infrastructure with public cloud services in a single operating model. In this scenario, the hospital can keep all 120 TB of patient images and metadata stored in its own data center to satisfy the regulatory requirement that PHI never be persisted in the public cloud. At the same time, it can use Google Cloud ML capabilities such as Vertex AI over the existing 5 Gbps Dedicated Interconnect, which provides private, high-throughput connectivity between environments. This is exactly the kind of requirement hybrid cloud is designed for: regulated data remains on-prem while selected cloud services are consumed where appropriate.
A multi-cloud approach means using services from multiple cloud providers, such as Google Cloud together with AWS or Azure. The question does not describe a need to distribute workloads across multiple public clouds, avoid vendor lock-in, or compare cloud-native services from different vendors. More importantly, multi-cloud does not inherently address the core requirement that all PHI remain stored only in the on-premises data center. Adding more clouds would usually increase governance, networking, and compliance complexity rather than solve the stated constraint.
A public-cloud approach would place the primary workload and data handling model in the cloud provider environment. That conflicts with the explicit requirement that all patient images and metadata must remain stored only in the hospital’s on-premises data center and that no PHI be persisted in the public cloud. Even if cloud services could be secured, the deployment model itself does not match the data residency and persistence restrictions in the question. Therefore, a pure public-cloud strategy is not appropriate here.
A private-cloud approach focuses on running infrastructure in a privately controlled environment, often entirely on-premises. While that could satisfy the requirement to keep PHI stored locally, it would not align with the goal of using Google Cloud’s advanced managed ML services such as Vertex AI. The question specifically asks for an overall cloud strategy that preserves on-prem data residency while still leveraging Google Cloud capabilities. That combination points to hybrid cloud, not private cloud alone.
Core Concept: This question tests cloud deployment models (hybrid vs. public/private/multi-cloud) and how to meet strict data residency and security requirements (no PHI persisted in the public cloud) while still consuming Google Cloud managed ML services like Vertex AI.

Why the Answer is Correct: A hybrid-cloud approach combines on-premises infrastructure (where regulated PHI must remain) with public cloud services (for elastic compute and managed ML capabilities). The hospital can keep the 120 TB of images/metadata stored only on-prem while using a 5 Gbps Dedicated Interconnect for private, high-throughput connectivity into Google Cloud. This aligns with the constraint "must ensure no PHI is persisted in the public cloud" because storage remains on-prem; only controlled, transient processing can occur in Google Cloud, subject to architecture and governance.

Key Features / How to Implement:
- Connectivity: Dedicated Interconnect provides private connectivity and predictable bandwidth/latency versus internet VPN, supporting large-scale data access patterns.
- Data governance: Design so PHI is not written to Cloud Storage/BigQuery/etc. Use strict IAM, VPC Service Controls (where applicable), and organization policies to reduce accidental data exfiltration.
- Processing pattern: Stream or access data from on-prem during training/inference and ensure ephemeral compute disks and logs do not retain PHI. Consider de-identification/tokenization on-prem if feasible, and send only non-PHI features to the cloud.
- Architecture Framework alignment: This is primarily a Trust and Security decision (data residency, compliance, least privilege), but also touches reliability and performance (dedicated private connectivity).

Common Misconceptions:
- "Private cloud" can sound right because data stays on-prem, but it would not satisfy the requirement to use Google Cloud's managed ML services.
- "Public cloud" is incorrect because it implies storing/processing PHI in Google-managed storage/services.
- "Multi-cloud" is about using multiple public clouds; it doesn't address the on-prem-only storage requirement.

Exam Tips: When a question states data must remain on-prem (regulatory/sovereignty/PHI) yet wants to use public cloud services, the default answer is hybrid cloud. Look for keywords like Dedicated Interconnect, on-prem data center, and "no data persisted in cloud," which strongly indicate hybrid connectivity plus strict data governance controls.
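The on-prem de-identification pattern described above can be sketched in a few lines; the field names and record shape here are hypothetical, not from any real hospital schema:

```python
# Illustrative sketch: strip PHI fields from a record on-prem so that
# only non-identifying features are ever sent to cloud ML services.
# Field names are invented for illustration.

PHI_FIELDS = {"patient_name", "patient_id", "date_of_birth", "address"}

def to_cloud_features(record: dict) -> dict:
    """Return a copy of the record with PHI fields removed."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

record = {
    "patient_name": "Jane Doe",        # PHI: stays on-prem
    "patient_id": "MRN-001",           # PHI: stays on-prem
    "image_ref": "local://scan-42",    # pointer to on-prem storage
    "pixel_spacing": 0.5,
    "modality": "CT",
}

features = to_cloud_features(record)
print(sorted(features))  # ['image_ref', 'modality', 'pixel_spacing']
```

In a real deployment this filtering (or full tokenization) would run inside the hospital's data center, with cloud-side guardrails such as VPC Service Controls as a second line of defense.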
An e-commerce startup named ParcelPeak operates a Cloud SQL for PostgreSQL instance (db-custom-4-15360) ingesting about 400,000 order, payment, and refund rows per day from 7 microservices across 3 regions. Executives request weekly KPI dashboards and ad-hoc cohort revenue analysis without exporting data to on-prem systems. In this situation, which capability of Cloud SQL most directly helps the team turn this operational data into business insights?
Correct. Cloud SQL for PostgreSQL supports standard relational database connectivity, which allows BI and analytics tools to connect and query the data for dashboards and reports. That is the most direct way the startup can turn operational order and payment data into weekly KPI dashboards and ad-hoc business analysis. Cloud SQL itself stores and serves the data, while external BI platforms provide the visualization and analytical experience.
Incorrect. Cloud SQL does not automatically train, deploy, or serve machine learning models on its transactional tables. Google Cloud provides machine learning capabilities through services such as Vertex AI and BigQuery ML, not as a built-in feature of Cloud SQL. This option describes functionality outside Cloud SQL’s core purpose as a managed relational database.
Incorrect. Cloud SQL does not include a native dashboarding or intelligent analytics interface for creating charts and business reports by itself. It is designed to run relational database workloads, not to replace BI products. To create executive dashboards or interactive analytics, teams typically connect Cloud SQL to tools such as Looker Studio or Looker.
Incorrect. Cloud SQL is not a built-in ETL or text-processing platform for converting unstructured data into structured relational tables. It stores structured relational data and can participate in data pipelines, but transformation of raw or unstructured data is handled by other services. This option describes data engineering functionality rather than a direct Cloud SQL capability.
Core Concept: This question is about recognizing Cloud SQL's role as a managed relational database and how organizations commonly derive business insights from operational data stored there. Cloud SQL itself is not a business intelligence platform, but it can be connected to analytics and reporting tools using standard database connectivity. The most direct capability that helps the team is integration with BI tools and analytics platforms so they can build dashboards and run analysis on cloud-hosted relational data.

Why the Answer is Correct: Executives want KPI dashboards and ad-hoc revenue analysis without moving data back on-premises. Cloud SQL for PostgreSQL supports standard PostgreSQL-compatible connections, which allows tools such as Looker Studio, Looker, and other analytics platforms to query the data and present insights. This makes option A the best fit because it describes the practical way Cloud SQL contributes to business insights: serving as a managed relational source for reporting and analysis.

Key Features: Cloud SQL provides managed PostgreSQL with standard SQL access, secure connectivity, backups, and high availability options. Because it uses familiar relational interfaces, many BI and reporting tools can connect to it directly. This makes it useful as a source of truth for operational reporting, though heavier analytical workloads are often better handled by dedicated analytics services.

Common Misconceptions: A common mistake is to assume Cloud SQL includes built-in dashboarding, machine learning, or ETL features. It does not natively create charts, train models, or transform unstructured data into structured datasets. Those capabilities come from other Google Cloud or partner services that connect to Cloud SQL.

Exam Tips: For Digital Leader questions, map each product to its primary purpose. Cloud SQL is for managed relational databases and transactional workloads, while BI tools such as Looker and Looker Studio are used for dashboards and analysis. If a question asks how Cloud SQL helps generate insights, the best answer is usually about integration with analytics tools rather than native analytics features.
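The "connect analytics over standard SQL" idea can be illustrated with a weekly-revenue query. This sketch uses Python's built-in sqlite3 so it is self-contained; against Cloud SQL for PostgreSQL the same pattern would use a standard PostgreSQL driver (e.g., psycopg2) or a BI tool's native connector, with PostgreSQL functions such as date_trunc('week', ...) instead of strftime:

```python
import sqlite3

# Weekly KPI aggregation over an orders table. SQLite stands in for
# Cloud SQL here so the example runs anywhere; the point is that
# standard SQL connectivity is what lets BI tools build dashboards.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2024-01-01", 20.0), ("2024-01-03", 15.0), ("2024-01-10", 40.0)],
)

rows = conn.execute(
    """
    SELECT strftime('%Y-%W', order_date) AS week, SUM(amount) AS revenue
    FROM orders
    GROUP BY week
    ORDER BY week
    """
).fetchall()
print(rows)  # two weekly buckets of revenue
```

A BI tool pointed at the same database would issue equivalent queries and chart the results, which is exactly the integration the correct option describes.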
An online language-learning platform with 500,000 paying learners sees a 15% annual cancellation rate and wants to proactively retain users by offering a 20% discount and personalized study plans. They have 36 months of historical data including demographics, session frequency, lesson completion rates, support tickets, and a label indicating whether each user canceled. What should the company do to use data and AI to identify at-risk subscribers for targeted retention?
A dashboard can help explore trends (descriptive analytics) and may surface correlations, but it is manual, subjective, and not a predictive system. It won’t reliably identify which current users are most likely to cancel, nor will it scale to continuous, proactive interventions. This is better for initial exploration, not for targeted retention at production scale.
This is the correct choice because the company has historical examples of users who canceled and users who did not, which makes this a supervised learning problem. A churn prediction model can learn patterns from demographics and usage behavior, then assign risk scores to current users. Those scores can be used to trigger targeted retention offers such as discounts and personalized study plans. This directly supports the business goal of reducing cancellations in a scalable and proactive way.
Surveys can provide qualitative insights and may complement a churn program, but they are not sufficient for proactive identification of at-risk users. Response rates can be low and biased, and sentiment alone may not correlate strongly with churn. It also delays action compared to using existing behavioral and support data to predict churn now.
A summary report and quarterly discussion is retrospective and too slow for churn prevention. It doesn’t produce user-level risk scores or enable automated, timely interventions. Reporting can inform strategy, but it won’t operationalize AI-driven targeting to reduce cancellations in the near term.
Core concept: This question is about using supervised machine learning to predict customer churn. The company has historical user data plus a label showing whether each user canceled, which is exactly the pattern needed to train a model that predicts future cancellation risk.

Why correct: Option B is correct because it uses the labeled historical data to train a churn prediction model and then applies that model to current users. That allows the company to identify subscribers who are most likely to cancel and target them with retention actions such as discounts and personalized study plans.

Key features: The important signals listed in the question—demographics, session frequency, lesson completion, and support tickets—are useful predictive features. A churn model can combine these variables at scale and produce risk scores for each current subscriber, enabling proactive and targeted intervention.

Common misconceptions: Dashboards, surveys, and summary reports can support business understanding, but they do not directly provide scalable user-level churn predictions. The presence of a historical cancellation label is the strongest clue that predictive ML is the intended solution.

Exam tips: On Digital Leader questions, when you see historical data with known outcomes and a goal to predict future behavior, think supervised ML. Prefer the option that turns data into actionable predictions rather than manual analysis or retrospective reporting.
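The supervised pattern above can be sketched with a toy logistic-regression churn model trained on synthetic labeled data. In practice training and serving would happen in a managed service such as Vertex AI or BigQuery ML; the features, data, and weights here are illustrative only:

```python
import math
import random

# Toy churn model: logistic regression fit with plain stochastic
# gradient descent on synthetic labeled data. Label 1 = canceled.

random.seed(0)

def make_user(canceled: bool):
    # Features: sessions per week, lesson completion rate (synthetic).
    if canceled:
        return [random.uniform(0, 3), random.uniform(0.0, 0.4)], 1
    return [random.uniform(4, 10), random.uniform(0.6, 1.0)], 0

data = [make_user(i % 2 == 0) for i in range(200)]

w = [0.0, 0.0]
b = 0.0
LR = 0.1

def predict(x):
    """Churn risk score in (0, 1) for one user's features."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for _ in range(300):            # SGD on log loss
    for x, y in data:
        g = predict(x) - y      # dLoss/dz for logistic log loss
        w[0] -= LR * g * x[0]
        w[1] -= LR * g * x[1]
        b -= LR * g

at_risk = predict([1.0, 0.2])   # rarely studies, low completion
engaged = predict([8.0, 0.9])   # frequent sessions, high completion
print(round(at_risk, 2), round(engaged, 2))
```

Scoring current subscribers with such a model yields exactly the user-level risk ranking needed to target the 20% discount and personalized study plans.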
A media-streaming company operates 14 microservices on Google Kubernetes Engine (GKE), receives about 70 paging alerts per week, and estimates that roughly 40% of engineering hours are spent on repetitive, manual tasks like log triage, ticket updates, and release rollbacks; the team wants to boost operational efficiency without reducing service scope or relaxing SLOs. Which SRE best practice should they prioritize to increase efficiency?
Relying less on data and dashboards is the opposite of SRE best practice. SRE depends on observability (metrics, logs, traces) to make fast, correct decisions during incidents and to validate improvements against SLOs. Reducing dashboards might feel like it speeds decisions, but it typically increases guesswork, slows root-cause analysis, and can lead to more outages and more toil over time.
Having developers participate in on-call can be healthy, but removing SREs from on-call and assigning exclusive production ownership to developers does not address the core issue: repetitive manual operational work. It can also reduce reliability expertise during incidents and weaken systematic reliability engineering. SRE is about engineering reliability and reducing toil, not simply shifting operational burden to another team.
Spending less time measuring and documenting outage impact is not an efficiency best practice in SRE. Understanding impact supports blameless postmortems, prioritization, and SLO/error-budget decisions. Skipping impact analysis may reduce short-term effort but usually increases long-term cost by repeating incidents, misprioritizing fixes, and failing to justify automation or reliability investments.
Increasing automation of toil is the canonical SRE response to high manual workload and frequent paging. Runbooks and playbooks standardize response; CI/CD enables safer, repeatable releases and rollbacks; self-service reduces ticket-driven operations; and auto-remediation can resolve common failures without human intervention. This improves operational efficiency while maintaining service scope and SLOs, aligning directly with SRE principles.
Core Concept: This question tests the SRE concept of "toil" and the best practice of reducing it through automation. In SRE, toil is repetitive, manual, automatable work that does not provide enduring value (e.g., log triage, ticket updates, manual rollbacks). High paging volume (70/week) plus 40% engineering time on repetitive tasks signals excessive operational load and insufficient automation.

Why the Answer is Correct: Option D directly targets the stated problem: operational inefficiency without reducing service scope or relaxing SLOs. SRE best practice is to systematically identify toil, prioritize it, and automate it using runbooks, CI/CD, self-service workflows, and auto-remediation. This increases reliability and efficiency simultaneously: fewer manual steps reduce human error, speed incident response, and free engineers to improve systems (capacity, resilience, observability) rather than "keeping the lights on."

Key Features / Practices (Google Cloud context): On GKE microservices, common automation patterns include:
- standardized runbooks and incident playbooks;
- CI/CD pipelines (Cloud Build, Artifact Registry, Cloud Deploy) with progressive delivery and automated rollback;
- self-service operations via GitOps (Config Sync/Anthos Config Management) and policy guardrails;
- auto-remediation using Cloud Monitoring alerting + Cloud Functions/Cloud Run responders;
- better log/trace correlation with Cloud Logging, Error Reporting, and Cloud Trace to reduce triage time.
SRE also emphasizes error budgets: if SLOs are at risk, prioritize reliability work and toil reduction.

Common Misconceptions: Some may think reducing dashboards (A) speeds decisions, but it undermines observability and increases risk. Others may interpret "developers own production" (B) as DevOps maturity; however, removing SREs from on-call doesn't remove toil and can worsen incident handling. Measuring outage impact (C) might feel like "paperwork," but it's essential for learning, prioritization, and SLO governance.

Exam Tips: When you see high alert volume and large time spent on repetitive manual work, map it to "toil" and choose automation as the primary lever. Digital Leader questions often expect alignment with SRE principles: strong observability, blameless learning, error budgets, and automation to improve both reliability and operational efficiency.
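The auto-remediation idea can be sketched as a responder that maps alert conditions to runbook actions; the payload fields and action names below are invented for illustration, not a real Cloud Monitoring webhook schema:

```python
# Hypothetical auto-remediation responder: route a known alert
# condition to a scripted runbook action instead of paging a human.
# In practice this would run as a Cloud Function or Cloud Run service
# triggered by a monitoring alert; the dict shapes here are invented.

RUNBOOK = {
    "high_error_rate": "rollback_last_release",
    "disk_nearly_full": "expand_disk",
}

def remediate(alert: dict) -> str:
    """Return the automated action taken, or escalate to a human."""
    action = RUNBOOK.get(alert.get("condition"))
    if action is None:
        return "page_oncall"   # unknown failure mode: humans decide
    return action

print(remediate({"condition": "high_error_rate", "service": "playback"}))
print(remediate({"condition": "novel_failure", "service": "billing"}))
```

Every condition moved into the runbook table is one less class of page consuming the 70-alerts-per-week budget, while genuinely novel failures still reach an engineer.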
Your healthcare analytics startup operates 18 Google Cloud projects across 3 folders under a single organization, and auditors require quarterly organization-wide compliance reports aligned to CIS benchmarks and PCI DSS; you need a native service that continuously identifies misconfigurations and threats across all current and future projects and provides centralized dashboards to help maintain and attest to compliance. Which Google Cloud product should you use?
Cloud Logging centralizes logs and supports auditability (e.g., Admin Activity and Data Access logs) and can help investigations and compliance evidence. However, it does not continuously assess configuration against CIS benchmarks or provide a unified security posture dashboard across an organization. Logging is a data source that SCC and other tools can consume, not the primary compliance posture management solution described.
Identity and Access Management (IAM) is foundational for enforcing least privilege and meeting compliance requirements, but it is not a continuous monitoring or compliance reporting product. IAM can tell you “who has access to what,” yet it won’t automatically detect broad classes of misconfigurations across services, correlate threats, or provide organization-wide compliance dashboards aligned to CIS/PCI on its own.
Google Cloud Armor provides DDoS protection and a web application firewall (WAF) for HTTP(S) Load Balancing, helping mitigate threats like OWASP Top 10 and volumetric attacks. While important for PCI-related security controls, it is not designed for organization-wide posture management, asset inventory, or continuous compliance reporting across all projects and services. It addresses a specific perimeter protection use case.
Security Command Center is Google Cloud’s native platform for organization-wide security posture management and threat detection. It provides centralized dashboards, asset inventory, and continuous findings from sources like Security Health Analytics (misconfiguration/CIS-aligned checks) and threat detection services. It scales across folders/projects, supports onboarding future projects under the org, and enables exporting findings for quarterly auditor-ready compliance reporting.
Core Concept: This question tests organization-wide security posture management and compliance reporting in Google Cloud. The key idea is using a native, centralized service that continuously discovers assets, detects misconfigurations/threats, and supports compliance evidence and reporting across an entire organization (including future projects).

Why the Answer is Correct: Security Command Center (SCC) is Google Cloud's unified security and risk management platform. It can be enabled at the organization level to provide centralized visibility across all folders and projects, including newly created projects (when configured appropriately). SCC aggregates findings from multiple sources (e.g., Security Health Analytics, Event Threat Detection, Vulnerability/Container findings depending on tier) and presents them in dashboards and reports that help teams and auditors assess compliance against common benchmarks such as CIS and requirements relevant to PCI DSS.

Key Features / How You'd Use It: SCC supports organization-level activation and centralized administration, aligning with the Google Cloud Architecture Framework's "Security, privacy, and compliance" pillar (central governance, continuous monitoring, and auditability). Security Health Analytics includes built-in detectors mapped to CIS Google Cloud Foundations Benchmark and other best-practice controls, flagging misconfigurations like overly permissive IAM, public buckets, or insecure firewall rules. Findings can be filtered by organization/folder/project, exported to BigQuery or SIEM tools, and used to produce quarterly compliance evidence. SCC also supports continuous monitoring rather than point-in-time checks, which is critical for "current and future projects."

Common Misconceptions: Cloud Logging is essential for audit trails but doesn't natively provide benchmark-aligned posture management or compliance dashboards. IAM is a control plane for access, not a continuous compliance reporting product. Cloud Armor protects against web attacks (WAF/DDoS) but doesn't provide organization-wide compliance posture across all services.

Exam Tips: When you see "organization-wide," "continuous identification of misconfigurations," "threat detection," and "centralized dashboards/compliance," think Security Command Center. For audit evidence, remember SCC findings export (BigQuery) and built-in detectors (Security Health Analytics) that map to CIS-style controls. Also note the scope: enabling at the organization level is the giveaway for multi-project governance.
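Turning findings into quarterly evidence often amounts to filtering and counting; a minimal sketch with mocked finding records (real ones would come from the SCC API or a BigQuery export of findings):

```python
from collections import Counter

# Mocked security findings. The category/severity/state fields mirror
# the general shape of posture findings, but these records are invented
# for illustration, not pulled from a real Security Command Center org.

findings = [
    {"category": "PUBLIC_BUCKET_ACL", "severity": "HIGH", "state": "ACTIVE"},
    {"category": "OPEN_FIREWALL", "severity": "HIGH", "state": "ACTIVE"},
    {"category": "OPEN_FIREWALL", "severity": "HIGH", "state": "INACTIVE"},
    {"category": "WEAK_SSL_POLICY", "severity": "MEDIUM", "state": "ACTIVE"},
]

# Quarterly report: count still-active misconfigurations by category.
active = [f for f in findings if f["state"] == "ACTIVE"]
by_category = Counter(f["category"] for f in active)
print(sorted(by_category.items()))
```

The same shape of query, run against an organization-wide findings export, is what produces the auditor-facing "open CIS-aligned findings per quarter" evidence.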
Your team is building an IoT telemetry platform for a fleet of 15,000 sensors across 120 factories; each sensor sends JSON payloads every 10 seconds, and the fields differ by device model and firmware version, with new attributes added weekly; to avoid frequent schema migrations and keep write latency under 50 ms, you are considering a non-relational database. In this scenario, what is a defining feature of the non-relational database that makes it a good fit?
Built-in consolidated reporting across disparate sources is not a defining NoSQL feature. Cross-source reporting is typically handled by analytics tools and data warehouses (e.g., BigQuery with federated queries, BI tools) rather than an operational NoSQL database. IoT telemetry platforms often land data in a scalable store first, then build reporting pipelines separately.
A strictly enforced, fixed schema is characteristic of relational databases, where tables define columns and types and changes often require migrations. The scenario explicitly wants to avoid frequent schema migrations due to weekly attribute changes, so a fixed schema would increase operational overhead and slow down iteration rather than help.
A flexible data model allowing varying fields and structures is a core reason to choose non-relational databases, especially document stores. Each sensor payload can include different attributes without altering a global schema, enabling rapid evolution as firmware changes. This supports high-ingest workloads and helps maintain low write latency by avoiding schema validation and migration steps.
Native support for complex queries with joins across multiple tables is a hallmark of relational SQL databases, not a defining NoSQL feature. Many NoSQL databases either don’t support joins or discourage them for performance/scalability reasons, favoring denormalization and key-based access patterns. For complex joins, an analytics system is usually a better fit.
Core Concept: This question tests why non-relational (NoSQL) databases are commonly chosen for high-ingest IoT telemetry with evolving payloads. In Google Cloud, typical NoSQL options include Firestore (document), Bigtable (wide-column), and others; the shared defining idea is schema flexibility compared to traditional relational databases.

Why the Answer is Correct: With 15,000 sensors sending JSON every 10 seconds, you have high write volume and a rapidly changing set of attributes by device model/firmware, with new fields added weekly. A defining feature of many non-relational databases—especially document databases—is a flexible data model where each record/document can contain different fields without requiring schema migrations. This directly addresses the requirement to avoid frequent schema changes while maintaining low write latency (under 50 ms), because writes don't need to validate against a rigid table schema or trigger DDL migrations.

Key Features / Best Practices: NoSQL systems often support semi-structured data (e.g., JSON documents) and allow "sparse" records (missing fields are fine). This aligns with the Google Cloud Architecture Framework's guidance to design for change and scalability: decouple data ingestion from rigid schema evolution, and choose storage that matches access patterns (high-throughput writes, simple key-based reads). In practice, you'd also pair ingestion (often Pub/Sub) with a NoSQL store optimized for write throughput and predictable latency, and apply data lifecycle controls (TTL/retention) to manage cost.

Common Misconceptions: People sometimes associate "database power" with SQL-style joins and complex reporting. However, joins and consolidated reporting are typically strengths of relational databases or analytics warehouses (e.g., BigQuery), not defining features of NoSQL operational stores. NoSQL trades some relational features for horizontal scalability, flexible schema, and predictable performance.

Exam Tips: When you see "JSON," "fields differ per record," "new attributes frequently," and "avoid schema migrations," think document/NoSQL flexibility. When you see "joins," "fixed schema," and "complex multi-table queries," think relational/SQL. Also note that operational telemetry storage is often separated from analytics: store raw events flexibly first, then transform for reporting later.
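The flexible data model can be shown in a few lines: two payloads with different fields land in the same logical collection with no migration step. An in-memory dict stands in for a document store such as Firestore here, and the sensor IDs and fields are invented:

```python
# Sketch of schema flexibility: heterogeneous sensor payloads are
# written side by side with no DDL and no fixed column set. A plain
# dict stands in for a document database in this illustration.

telemetry = {}

def write(sensor_id: str, payload: dict) -> None:
    telemetry[sensor_id] = payload   # no migration, no schema check

# Older firmware reports temperature only.
write("sensor-a1", {"model": "v1", "temp_c": 21.5})

# Newer firmware adds fields weekly; existing records are untouched.
write("sensor-b7", {"model": "v2", "temp_c": 19.0,
                    "humidity": 0.41, "vibration_hz": 12})

print(sorted(telemetry["sensor-a1"]))
print(sorted(telemetry["sensor-b7"]))
```

In a relational table, the second payload would have forced an ALTER TABLE (or a pile of NULL columns); here each document simply carries the fields its firmware version emits.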
A sports-streaming startup plans to launch a new containerized live-score microservice on Google Cloud; traffic could fluctuate from 30 requests per minute during off-hours to 15,000 requests per minute during championship games, and they want to avoid paying for idle capacity when demand drops to near zero overnight. Which benefit of a serverless platform best addresses this requirement?
Integrated disaster recovery is not the primary serverless benefit being tested. While managed serverless platforms run on highly available Google infrastructure and can be deployed across regions, DR typically involves multi-region design, backups, and failover planning. The requirement is cost efficiency during idle periods and rapid scaling during spikes, not recovery from regional outages.
Lower development costs through prebuilt frameworks can be true in some contexts (e.g., managed services, templates, or opinionated platforms), but it does not directly address fluctuating traffic or paying for idle capacity. The question focuses on operational elasticity and billing model rather than developer productivity features.
Automatic scalability with scale-to-zero and paying only for actual usage is the defining serverless benefit that matches this scenario. A serverless platform can increase capacity automatically during championship-game traffic spikes and reduce capacity dramatically when demand falls overnight. This helps the startup avoid provisioning for peak load all the time and minimizes charges for idle resources. It is the clearest match to both the performance and cost requirements in the question.
Built-in security policies by default is not the best match for the stated requirement. Serverless platforms do provide strong security primitives (IAM, service identity, encryption by default, and managed patching), but these features don’t solve the cost problem of idle capacity or the need to scale from very low to very high request rates.
Core Concept: This question tests the main serverless advantage of automatic scaling combined with usage-based pricing. In Google Cloud, a serverless platform for containerized applications, such as Cloud Run, can rapidly scale out when traffic surges and scale down dramatically when traffic falls, including to zero in many cases. This aligns directly with the need to handle unpredictable demand without paying for idle resources. Why the Answer is Correct: The startup expects highly variable traffic, from very low request volume to major spikes during championship events. The serverless benefit that best addresses this is automatic scalability with scale-to-zero and paying only for resources used, because the platform adjusts capacity automatically instead of requiring pre-provisioned servers. That means the service can handle sudden increases in traffic while minimizing costs when demand drops to near zero. Key Features / Best Practices: Serverless platforms reduce operational overhead by removing the need to manage infrastructure capacity manually. They are especially well suited for bursty or unpredictable workloads because they can respond quickly to changing request volume. For containerized microservices on Google Cloud, Cloud Run is the typical example to associate with this pattern. Common Misconceptions: It is easy to confuse serverless benefits like security, resilience, and developer productivity with the primary benefit being tested here. While those can be advantages of managed platforms, they do not directly solve the problem of traffic spikes and idle-cost reduction. The key clue is the combination of fluctuating demand and the desire not to pay when the service is unused. Exam Tips: When a question mentions unpredictable traffic, sudden spikes, and avoiding idle infrastructure costs, think of serverless autoscaling and usage-based pricing. If the workload is containerized on Google Cloud, Cloud Run is the most likely service association. 
Focus on the business requirement first: elasticity and cost efficiency.
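The autoscaling and scale-to-zero behavior described above maps directly to Cloud Run deployment settings. A minimal sketch, assuming a hypothetical service name, image path, project, and region (all placeholders, not from the scenario):

```shell
# Hypothetical sketch: deploy a containerized service to Cloud Run so that
# capacity follows demand.
#   --min-instances=0  lets the service scale to zero when idle (no idle cost)
#   --max-instances    caps scale-out during sudden traffic spikes
# Service name, image path, project, and region below are illustrative.
gcloud run deploy scores-api \
  --image=us-central1-docker.pkg.dev/my-project/apps/scores-api:latest \
  --region=us-central1 \
  --min-instances=0 \
  --max-instances=100 \
  --allow-unauthenticated
```

With these settings the platform, not the operator, decides how many container instances run at any moment, which is the elasticity-plus-cost-efficiency requirement the question is testing.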
A nationwide food delivery platform operates workloads across 3 Google Cloud projects and needs a fully managed, centralized dashboard to view infrastructure metrics, create alerts, and run built-in 1-minute HTTP uptime checks for two public APIs; which Google Cloud service should they use?
Cloud Trace is used for distributed tracing—capturing and analyzing end-to-end request latency across microservices (often instrumented apps on GKE, Cloud Run, or Compute Engine). It helps identify slow spans and bottlenecks in application request paths. However, it is not the primary tool for centralized infrastructure metrics dashboards, multi-project alerting, or built-in 1-minute HTTP uptime checks.
Cloud Monitoring is the correct choice because it is Google Cloud’s fully managed observability service for infrastructure and service metrics, dashboards, alerting, and uptime monitoring. It supports centralized visibility across multiple projects through metrics scopes, which fits the requirement to monitor workloads spread across 3 Google Cloud projects from a single place. It also includes built-in Uptime Checks for public HTTP/HTTPS endpoints, making it the native service for monitoring the two public APIs. Among the listed options, it is the only service that directly combines all of these capabilities in one managed platform.
Cloud Logging is designed for collecting, storing, searching, and analyzing logs (and can create log-based metrics). While it complements Monitoring and can feed alerting via log-based metrics, it is not the primary service for infrastructure metrics dashboards or for configuring built-in HTTP uptime checks at 1-minute intervals. Logging alone would not satisfy the full set of requirements.
Cloud Profiler continuously profiles application CPU and memory usage to help optimize performance and reduce cost. It is valuable for code-level performance tuning in production with low overhead. However, it does not provide centralized infrastructure metrics dashboards, cross-project alerting as the main function, or built-in HTTP uptime checks for public APIs.
Core Concept: This question tests knowledge of Google Cloud Operations Suite, specifically which service provides centralized observability for metrics, dashboards, alerting, and uptime monitoring across multiple Google Cloud projects.

Why the Answer is Correct: Cloud Monitoring is the fully managed Google Cloud service used to collect and visualize infrastructure and service metrics, build centralized dashboards, and configure alerting policies. It also includes Uptime Checks for public endpoints, making it the native service for monitoring the availability of the two public APIs mentioned in the scenario. For workloads spread across 3 Google Cloud projects, Cloud Monitoring supports centralized visibility through metrics scopes, allowing operators to monitor multiple projects from a single scoping project.

Key Features / Configurations / Best Practices:
- Metrics scopes let you aggregate metrics from multiple projects into one centralized Monitoring view.
- Dashboards can display built-in and custom charts for Compute Engine, GKE, Cloud Run, load balancers, and other Google Cloud resources.
- Alerting policies can be created from metric thresholds, uptime checks, and other Monitoring signals.
- Uptime Checks are built into Cloud Monitoring and are the correct native feature for checking the availability of public HTTP/HTTPS endpoints.
- Monitoring integrates with notification channels and incident workflows, making it suitable for centralized operations teams.

Common Misconceptions: Cloud Logging is often confused with Monitoring because both are part of Google Cloud Operations Suite, but Logging is focused on collecting and querying log entries rather than serving as the primary service for infrastructure dashboards and uptime monitoring. Cloud Trace and Cloud Profiler are specialized application performance tools and do not provide the centralized infrastructure monitoring capabilities requested here.
Exam Tips: When a question mentions dashboards, infrastructure metrics, alerting, and uptime checks together, Cloud Monitoring is usually the correct answer. If the scenario also mentions multiple projects, think about metrics scopes and centralized observability. Distinguish Monitoring from Logging, Trace, and Profiler by matching each service to its primary purpose.
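A hedged sketch of how the scenario's 1-minute HTTP uptime check might be created from the scoping project with the `gcloud monitoring uptime` command group. The display name, host, path, and project ID are illustrative placeholders, and the exact flag spellings should be verified against the current gcloud reference before use:

```shell
# Hedged sketch (assumed flags -- verify against current gcloud docs):
# create a built-in HTTPS uptime check against one of the public APIs,
# running every 1 minute. All names and values here are placeholders.
gcloud monitoring uptime create orders-api-check \
  --resource-type=uptime-url \
  --resource-labels=host=api.example.com,project_id=my-monitoring-project \
  --protocol=https \
  --path=/healthz \
  --period=1
```

The same check can then feed an alerting policy, so a failed probe from multiple regions notifies the operations team through a configured notification channel.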
A retail analytics startup runs 12 microservices on Cloud Run and GKE across two Google Cloud projects in us-central1 and europe-west1, and they need a single Google-managed service to automatically collect, index, and retain all application logs (stdout/stderr and system logs) from these workloads for troubleshooting and log-based metrics without installing any third-party agents; which service should they use?
Cloud Profiler continuously analyzes application CPU and memory usage to identify performance bottlenecks (hot paths) in code. It is used for optimizing runtime performance and reducing cost by finding inefficient functions. Profiler does not collect or retain application/system logs, does not provide log indexing/search, and does not create log-based metrics. It’s complementary to logging, not a replacement.
Cloud Monitoring is primarily for metrics collection, dashboards, uptime checks, and alerting (including SLOs). While Monitoring can alert based on log-based metrics, it is not the service that ingests, indexes, and retains logs. In Google Cloud Observability, logs live in Cloud Logging; Monitoring consumes metrics (including those derived from logs) but does not serve as the centralized log store.
Cloud Trace collects distributed traces to analyze end-to-end request latency across microservices, helping pinpoint where time is spent in a request path. It is useful for performance troubleshooting and latency analysis, especially in microservice architectures. However, Trace does not automatically collect stdout/stderr or system logs, does not provide log retention/indexing, and does not support log-based metrics directly.
Cloud Logging is the Google-managed service for collecting, indexing, searching, and retaining logs from Google Cloud services, including Cloud Run and GKE. It captures application stdout/stderr and platform/system logs, supports configurable retention via log buckets, and enables log-based metrics for alerting and analytics. It also supports cross-project visibility via aggregated views and centralized routing via sinks, meeting the multi-project, multi-region requirement.
Core Concept: This question tests Google Cloud’s operations suite for centralized log collection and management. The correct service is Cloud Logging (part of Google Cloud Observability), which is the Google-managed logging backend that automatically ingests logs from Google-managed compute services.

Why the Answer is Correct: Cloud Logging automatically collects, indexes, and stores logs from Cloud Run and GKE without requiring third-party agents. For Cloud Run, request logs and application stdout/stderr are captured by default. For GKE, system and workload logs can be routed to Cloud Logging via Google-managed integrations (Cloud Logging for GKE), enabling centralized troubleshooting across projects and regions. Cloud Logging supports log-based metrics, which is explicitly required in the prompt.

Key Features / Configurations / Best Practices:
- Centralization across projects/regions: Use aggregated log views (via Log Analytics / Log Explorer with multi-project scopes) and/or log sinks to route logs to a central logging project.
- Retention and compliance: Configure log bucket retention (default varies by log bucket type) and use CMEK where required. Retention can be set per log bucket.
- Indexing and querying: Logs are indexed for fast search; Log Explorer and Log Analytics enable structured queries.
- Log-based metrics: Create counter/distribution metrics from log entries for SLOs and alerting.
- Cost control: Use exclusion filters to drop noisy logs, and use sinks to route only needed logs to BigQuery/Cloud Storage.

Common Misconceptions: Cloud Monitoring is often confused with Logging because both are in the operations suite, but Monitoring focuses on metrics, dashboards, and alerting—not log ingestion/retention. Cloud Trace and Profiler are for performance diagnostics (latency traces and CPU/memory profiling), not centralized log management.
Exam Tips: When you see “collect, index, retain logs,” “stdout/stderr,” “system logs,” “log-based metrics,” and “no agents,” think Cloud Logging. If the question emphasizes metrics/alerts, think Cloud Monitoring; if it emphasizes request latency across services, think Cloud Trace; if it emphasizes code-level CPU/memory hotspots, think Cloud Profiler. Also remember cross-project needs are typically solved with aggregated views and/or centralized sinks and log buckets.
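Two of the Cloud Logging features emphasized above (log-based metrics and centralized routing via sinks) can be sketched with gcloud. The metric name, filters, and destination bucket are illustrative placeholders, not values from the scenario:

```shell
# Hedged sketch of two common Cloud Logging configurations.
# All names, filters, and destinations below are illustrative placeholders.

# 1) A log-based counter metric over Cloud Run error entries, which can
#    then drive an alerting policy in Cloud Monitoring.
gcloud logging metrics create checkout_errors \
  --description="Count of ERROR entries from the checkout service" \
  --log-filter='resource.type="cloud_run_revision" AND severity>=ERROR'

# 2) A sink that routes matching logs to a Cloud Storage bucket in a
#    central project for long-term retention.
gcloud logging sinks create central-archive \
  storage.googleapis.com/my-central-log-archive \
  --log-filter='severity>=WARNING'
```

For the multi-project requirement, each source project would typically get a sink pointing at the same central destination (or the projects would be added to an aggregated log view in a central logging project).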
Study period: 1 month
I liked that the questions were very similar to the real exam. After finishing the lectures I had a hard time finding practice questions, but this app worked really well for me.
Study period: 1 month
Good questions and explanations. The resource is very similar to the real exam questions. Thanks.
Study period: 1 month
This is exactly what you need to pass the GCP CDL exam. The look and feel are exactly how you would experience the real exam. The questions are very similar and you'll even find a few on the exam itself. I would recommend this for anyone looking to obtain this certification. The exam is not an easy one, so the explanations to the questions are very helpful for solidifying your understanding.
Study period: 2 months
I used Cloud Pass to prepare for the GCP CDL exam, and it made a huge difference. The practice questions covered a wide range of scenarios I actually saw on the test. The explanations were clear and helped me understand how LookML and data modeling work in real projects. If you focus on understanding the logic behind each question, this app is more than enough to pass.
Study period: 1 month
Cloud Pass was my main study tool for the GCP CDL exam, and I passed on the first try. The questions were realistic and helped me get comfortable with Looker concepts, permissions, explores, and model structures. I especially liked that I could reset my progress and re-solve the tricky questions. Strongly recommend this for anyone targeting CDL.
Want to solve all the questions on the go?
Get the free app
Download Cloud Pass for free: practice exams, study progress tracking, and more.