
Simulate the real exam experience with 50 questions and a 90-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.
A media-streaming company is migrating a 12-node analytics backend to a fully managed relational database service on Google Cloud that prohibits custom agents and targets a 99.95% monthly SLA; to reduce operational toil without writing maintenance scripts, which routine system-maintenance task will the managed platform perform automatically across all instances?
Incorrect. While managed databases integrate with IAM and support database users/roles, Google does not automatically create and maintain IAM roles and user access policies for your application users. Designing least-privilege access is part of the customer’s responsibility in the shared responsibility model. You must define who gets access, how it’s granted, and how it’s audited.
Correct. A core benefit of a fully managed relational database service (e.g., Cloud SQL/AlloyDB) is automated maintenance such as applying security patches and performing minor-version upgrades. You can typically configure a maintenance window, but the platform performs the work without custom agents or customer-written scripts. This directly reduces operational toil across all instances.
Incorrect. Archiving historical data to cold storage (e.g., Cloud Storage Nearline/Coldline/Archive) requires you to design data lifecycle and retention policies and implement export/ETL or partitioning/archival strategies. While Cloud Storage supports lifecycle rules, the managed relational database platform does not automatically decide what data is “historical” and archive it for you.
Incorrect. Automatically optimizing spend across projects by resizing resources is not a standard automatic function of managed relational databases. You can manually resize instances or use recommendations/monitoring tools, but cross-project automated resizing is not something the database service performs as routine maintenance. Cost optimization remains a customer governance and FinOps activity.
Core Concept: This question tests what a fully managed relational database service on Google Cloud (for example, Cloud SQL or AlloyDB) does for you operationally. “Prohibits custom agents” and a high availability SLA (99.95% monthly) point to a managed platform where Google operates the underlying infrastructure and much of the database maintenance.

Why the Answer is Correct: Managed database platforms reduce operational toil by taking on routine, repeatable maintenance tasks that would otherwise require DBAs and scripts. A key example is patch management: applying security patches and performing minor-version upgrades to the database engine. In Cloud SQL, maintenance updates (including security patches) are handled by Google and can be scheduled via a maintenance window to control timing. This aligns directly with “without writing maintenance scripts” and “across all instances,” because the platform coordinates these updates fleet-wide.

Key Features / Best Practices:
- Automated maintenance: Google applies patches and minor updates; you typically choose a maintenance window to minimize disruption.
- High availability: a 99.95% SLA is commonly associated with HA configurations (regional/replica-based), which also influence how maintenance is rolled out to reduce downtime.
- Shared responsibility: you still manage schema, queries, and access controls, but Google manages the underlying OS, storage, and many engine maintenance operations.
- Architecture Framework alignment: this supports Operational Excellence (automation, reduced toil) and Security (timely patching).

Common Misconceptions: People often assume “fully managed” means Google also manages IAM design, data lifecycle/archival policies, or cost optimization across projects. In reality, IAM roles/policies and data retention strategies remain customer responsibilities, and cost optimization is not automatically performed across projects by the database service.
Exam Tips: When you see “fully managed database” in Digital Leader questions, think: automated backups (configurable), patching/minor upgrades, replication/HA options, and simplified operations. If an option describes cross-project spend optimization or business-specific data lifecycle rules, it’s usually not an automatic managed database function.
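As a quick sanity check on the 99.95% monthly SLA discussed above, you can compute the implied downtime budget directly. This is a minimal sketch; the 30-day month is a simplifying assumption:

```python
# Allowed monthly downtime implied by an availability SLA.
# Assumes a 30-day month for simplicity.

def allowed_downtime_minutes(sla: float, days: int = 30) -> float:
    """Return the downtime budget in minutes for a monthly availability SLA."""
    total_minutes = days * 24 * 60
    return (1 - sla) * total_minutes

budget = allowed_downtime_minutes(0.9995)
print(f"99.95% over 30 days allows ~{budget:.1f} minutes of downtime")  # ~21.6
```

Budgets this small are one reason maintenance windows and HA rollout patterns matter: routine patching must fit inside (or avoid consuming) that margin.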
A logistics company operates 24 Kubernetes clusters (12 on-premises, 8 on Google Cloud, and 4 on AWS) to support microservices across 60 warehouses and needs a single, centralized platform to consistently manage policies, configurations, service mesh, and observability across all environments without relocating workloads; which Google Cloud service should they choose?
Cloud Functions is a serverless Functions-as-a-Service product for event-driven code. It does not manage Kubernetes clusters, enforce fleet-wide policies, or provide multi-cluster service mesh and observability. While it can be part of a microservices ecosystem, it cannot centrally govern 24 clusters across on-prem, Google Cloud, and AWS. It’s compute for functions, not a hybrid/multi-cloud Kubernetes management platform.
GKE Enterprise is the correct choice because it provides centralized management for Kubernetes across hybrid and multi-cloud environments. It supports fleet management, consistent policy enforcement (Policy Controller), configuration synchronization (Config Management), service mesh capabilities (Anthos Service Mesh), and unified observability patterns across clusters—without requiring workloads to be moved. This directly matches the need to manage 24 clusters across on-prem, Google Cloud, and AWS consistently.
Cloud Run is a managed serverless container platform that runs stateless HTTP services and jobs. It simplifies deployment and scaling, but it is not a centralized platform for managing existing Kubernetes clusters across multiple environments. It doesn’t provide fleet-wide policy/config management or multi-cluster service mesh governance. Choosing Cloud Run would imply changing the runtime platform rather than centrally managing the current Kubernetes footprint.
Compute Engine provides virtual machines and is foundational infrastructure, but it does not offer Kubernetes fleet management, centralized policy/config enforcement, service mesh, or cross-cluster observability as a unified platform. You could run Kubernetes on VMs, but you would still need a higher-level management solution to meet the requirement of consistent governance across on-prem, Google Cloud, and AWS without relocating workloads.
Core Concept: This question tests Google Cloud’s hybrid and multi-cloud Kubernetes management capabilities—specifically centralized policy/config management, service mesh, and observability across clusters running in different environments (on-prem, Google Cloud, AWS) without moving workloads.

Why the Answer is Correct: GKE Enterprise is designed to manage fleets of Kubernetes clusters across environments. It provides a single control plane experience for consistent governance and operations across on-prem (often via Google Distributed Cloud), Google Cloud (GKE), and other clouds (including AWS). The requirement “single, centralized platform” plus “policies, configurations, service mesh, and observability” maps directly to GKE Enterprise’s fleet management and Anthos heritage, enabling standardization without workload relocation.

Key Features / Best Practices:
- Fleet management: register clusters into a fleet for centralized administration and consistency.
- Policy and configuration: use Policy Controller (OPA/Gatekeeper) and Config Management to enforce and sync desired state (e.g., namespaces, RBAC, resource constraints) across all clusters.
- Service mesh: Anthos Service Mesh (Istio-based) provides consistent traffic management, mTLS, and service-to-service observability across clusters.
- Observability: centralized telemetry, dashboards, and tracing integration (commonly via the Cloud Operations suite) across the fleet.
- Architecture Framework alignment: improves operational excellence (standardized operations), security (policy enforcement, mTLS), and reliability (consistent rollout patterns) across heterogeneous environments.

Common Misconceptions: People may choose Cloud Run or Cloud Functions because they simplify operations, but they are serverless compute platforms—not multi-cluster management layers. Compute Engine is infrastructure compute and doesn’t provide Kubernetes fleet governance, mesh, or centralized policy management.
Exam Tips: When you see “hybrid/multi-cloud Kubernetes,” “centralized governance,” “policy/config consistency,” and “service mesh across clusters,” think GKE Enterprise (Anthos capabilities). Also note the explicit constraint “without relocating workloads,” which points to a management plane rather than a migration or a single-environment compute service.
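The fleet-wide consistency idea above can be illustrated with a toy drift check: compare every registered cluster against a single desired state and report what is missing. This is not the Policy Controller or Config Sync API, only a sketch of the concept; the cluster names and required namespaces are invented:

```python
# Toy illustration of fleet-wide configuration consistency checks.
# This is NOT the Policy Controller / Config Sync API; it only sketches the
# idea of comparing each cluster against one desired state.

DESIRED_NAMESPACES = {"logging", "monitoring", "gatekeeper-system"}  # hypothetical desired state

def find_drift(fleet: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per cluster, the namespaces missing relative to the desired state."""
    return {
        name: DESIRED_NAMESPACES - namespaces
        for name, namespaces in fleet.items()
        if DESIRED_NAMESPACES - namespaces
    }

fleet = {
    "onprem-wh-01": {"logging", "monitoring", "gatekeeper-system"},
    "gke-us-east": {"logging", "monitoring"},
    "aws-eu-west": {"monitoring"},
}
print(find_drift(fleet))  # only the two non-compliant clusters appear
```

In GKE Enterprise the equivalent work is declarative: desired state lives in a Git repo and the platform reconciles every fleet member toward it, rather than a script polling clusters.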
Your 12-person logistics startup must add image label detection for 50,000 product photos per month and sentiment analysis on 5,000 support emails per week within 30 days and without hiring any new staff; how do Google Cloud's out-of-the-box AI APIs make AI/ML adoption feasible for your team?
Correct. Google Cloud’s out-of-the-box AI APIs (e.g., Vision API for label detection and Natural Language for sentiment) use Google-managed, pre-trained models. Your team can integrate them via simple API calls and IAM-controlled credentials without hiring ML engineers or building/training custom models. This fits tight timelines and small teams by minimizing operational overhead and accelerating delivery.
Incorrect. These APIs still require data input (images and email text) and benefit from validation and preprocessing. For example, you must ensure supported file formats, handle language/encoding, remove signatures/quoted text in emails, and manage edge cases (blurry images, short messages). Managed AI reduces model-building effort, not the need for responsible data handling and quality checks.
Incorrect. AI APIs do not inherently require fewer security permissions. You still must grant appropriate IAM roles (principle of least privilege), manage service accounts, and protect sensitive data. In some cases, using AI services can increase the need for governance (audit logging, data retention policies, DLP considerations, VPC Service Controls) because you are processing potentially sensitive customer communications.
Incorrect. Not all Google Cloud AI offerings require custom training. Many common tasks are covered by pre-trained APIs that work immediately. Custom training is optional and used when you need domain-specific labels, unique terminology, or higher accuracy than general models provide. For this startup’s requirements and timeline, pre-trained APIs are the intended approach.
Core Concept: This question tests Google Cloud’s pre-trained, out-of-the-box AI services (often called “AI APIs”), such as Cloud Vision API for image label detection and Natural Language API (or Vertex AI language capabilities) for sentiment analysis. These are managed services that expose REST/gRPC endpoints and client libraries, letting teams add AI features without building or training models.

Why the Answer is Correct: A small startup must deliver two AI capabilities quickly (within 30 days) and without hiring ML specialists. Google Cloud’s pre-trained APIs are designed for exactly this scenario: you send data (images or text) to an API and receive predictions (labels or sentiment) immediately. No custom model development lifecycle is required (data labeling, feature engineering, training, hyperparameter tuning, evaluation, deployment, monitoring). This dramatically reduces time-to-value and operational burden, aligning with the Google Cloud Architecture Framework principle of using managed services to reduce undifferentiated heavy lifting.

Key Features:
1) Pre-trained models: Vision label detection and sentiment analysis work out-of-the-box.
2) Simple integration: client libraries, REST calls, and IAM-controlled service accounts.
3) Scalability: the APIs scale to handle monthly/weekly batch workloads; you can run batch jobs via Cloud Run/Functions + Cloud Scheduler, or Dataflow for larger pipelines.
4) Governance and security: IAM permissions (least privilege), audit logs, and optional VPC Service Controls for data exfiltration protection.
5) Cost model: pay-per-use pricing per request/unit processed, which is attractive for variable workloads like 50,000 images/month and 5,000 emails/week.

Common Misconceptions: Some assume “AI” always requires custom training (D) or that managed AI removes the need for data preparation (B). In reality, you must provide valid inputs (correct image formats, text encoding, language considerations) and handle quality checks, but you don’t need to build models. Others think AI APIs reduce security requirements (C); instead, they still require appropriate IAM and data handling controls.

Exam Tips: For Digital Leader questions, map business constraints (small team, fast timeline, no hiring) to managed services and pre-trained APIs. When you see tasks like OCR, labeling, translation, sentiment, entity extraction, or speech-to-text, the best answer is typically Google’s out-of-the-box AI APIs or Vertex AI pre-built offerings—especially when custom ML development is unrealistic.
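The preprocessing caveat above (managed AI does not remove the need for data handling) can be sketched in a few lines. Assuming common email conventions, namely that quoted reply lines start with “>” and a signature follows a “--” delimiter line, a cleanup step before calling a sentiment API might look like this; real mail formats need more care:

```python
# Minimal email cleanup before sentiment analysis (sketch only).
# Assumes common conventions: quoted reply lines start with ">" and a
# signature begins at a "--" delimiter line; real mail needs more care.

def clean_email_body(body: str) -> str:
    lines = []
    for line in body.splitlines():
        if line.strip() == "--":           # signature delimiter
            break
        if line.lstrip().startswith(">"):  # quoted reply text
            continue
        lines.append(line)
    return "\n".join(lines).strip()

raw = """Thanks, the new labels work great!
> On Tue you wrote:
> Please try the fix.
--
Jane Doe
Support Team"""
print(clean_email_body(raw))  # "Thanks, the new labels work great!"
```

Stripping quoted history matters because sentiment models score the whole input: a negative quoted thread can drown out a positive reply.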
A national bike-sharing consortium needs to publish real-time docking station availability (updated every 2 seconds) from 300 municipal operators and simultaneously receive rider rental/return events in real time to forward to each operator’s system. They require a standardized, secure, versioned interface over HTTPS that supports authentication and partner-specific rate limits; what should they implement?
SRE practices and runbooks improve reliability and incident response (SLIs/SLOs, on-call, playbooks), but they do not provide an external, standardized HTTPS interface. They also don’t inherently deliver authentication, API versioning, or partner-specific rate limiting. SRE would be complementary after the platform is built, not the primary solution to expose and govern partner integrations.
An application programming interface managed through an API gateway is the correct pattern because it provides a standardized HTTPS interface for many external partners. API management capabilities such as authentication, versioning, and policy enforcement are exactly what this consortium needs to securely publish availability data and receive rental events. It also allows the organization to apply partner-specific controls such as quotas or rate limits without changing backend services. In Google Cloud, this is commonly associated with Apigee for full API management, while the general architectural choice remains an API front door managed through a gateway layer.
A customized ML model could help predict bike demand or station fullness, but it does not address the requirement to publish and receive real-time events via a secure, versioned HTTPS interface. ML is an analytics enhancement, not an integration control plane. Even if predictions were useful, you would still need an API layer to expose data securely to 300 operators.
A multi-regional shared database could store availability and rental events, but it is not a safe or practical partner integration interface. Databases don’t natively provide standardized API contracts, per-partner authentication models, versioning, or rate limits. Exposing a shared database to 300 external operators increases security risk, complicates governance, and can create performance and quota contention.
Core Concept: This question is about exposing a standardized integration interface to many external partners over HTTPS. The required capabilities—authentication, versioning, and partner-specific rate limits—align with API management, typically implemented with an API gateway or, in Google Cloud, more fully with Apigee.

Why the Answer is Correct: The consortium needs a secure, versioned HTTPS interface for 300 municipal operators, plus the ability to authenticate callers and enforce different limits per partner. Those are classic API management requirements: publish a consistent contract, secure access, apply policies, and govern consumption centrally. An API gateway layer is the right architectural pattern, and on Google Cloud Apigee is the best-known service for full partner-facing API management.

Key Features:
- Standardized HTTPS endpoints for publishing station availability and receiving rental/return events.
- Authentication and authorization for external operators using API keys, OAuth 2.0, JWTs, or similar mechanisms.
- API versioning so the interface can evolve without breaking all partners at once.
- Partner-specific quotas or rate limits to protect backend systems and support fair usage.
- Centralized policy enforcement, monitoring, and routing to backend services that process the real-time data.

Common Misconceptions: A shared database may centralize storage, but it does not create a governed external interface with versioning and per-partner controls. SRE practices improve operations but do not provide an integration surface. Machine learning is unrelated to the need for secure partner connectivity.

Exam Tips: When a question emphasizes external partners, HTTPS, authentication, versioning, and rate limiting, think API management. In Google Cloud, that usually points conceptually to an API gateway pattern, often implemented with Apigee for richer partner-facing controls.
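Partner-specific rate limits are normally declared as gateway policies rather than written by hand; still, a per-partner token bucket shows the behavior such a policy enforces. This is a conceptual sketch, not Apigee's actual implementation, and the operator names and limits are invented:

```python
# Per-partner token-bucket rate limiting (illustrative sketch only;
# an API gateway such as Apigee enforces quotas/spike arrest as managed
# policies rather than application code like this).

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical per-partner limits: smaller operators get a lower rate.
buckets = {"operator-a": TokenBucket(rate_per_sec=100, burst=5),
           "operator-b": TokenBucket(rate_per_sec=2, burst=2)}

# Three back-to-back calls from operator-b at t=0: burst of 2, then rejected.
results = [buckets["operator-b"].allow(now=0.0) for _ in range(3)]
print(results)  # [True, True, False]
```

The key design point is that limits live in the gateway, keyed by partner credential, so changing a quota never touches the backend services.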
Your team operates a ride-sharing analytics service on Google Cloud across us-central1 and europe-west1, aiming for 99.9% monthly availability and 95th-percentile request latency under 300 ms during peak 18:00–22:00 traffic; within the SRE framework, which concept represents the metric that actually quantifies how well the service is performing?
A Service-level agreement (SLA) is a customer-facing contract that may specify availability/latency commitments and often includes consequences (credits/penalties) if not met. It is not the metric itself; it’s the formal agreement built on top of internal objectives and indicators. SLAs are typically less strict than internal SLOs to provide a safety margin.
A Service-level indicator (SLI) is the actual quantitative measure of service performance, such as availability, error rate, or 95th-percentile latency. It answers “how is the service doing right now/over a period?” In this scenario, the measured monthly availability and p95 latency during peak hours are SLIs, making this the correct choice.
Error reporting is an operational capability (for example, Google Cloud Error Reporting) that aggregates and alerts on application exceptions and crashes. While it can contribute data used to compute an SLI (like error rate), it is not the SRE concept that represents the performance metric itself. It’s a tool/service, not the measurement definition.
A Service-level objective (SLO) is the target value or threshold for an SLI over a time window (for example, “99.9% monthly availability” or “p95 latency < 300 ms during 18:00–22:00”). SLOs define the desired reliability level and drive error budgets, but they are not the metric; they are the goal applied to the metric.
Core Concept: This question tests Site Reliability Engineering (SRE) reliability measurement terminology: SLI, SLO, and SLA. In Google’s SRE model, you first define what “good” looks like (objectives), then choose the concrete measurements that quantify actual performance.

Why the Answer is Correct: A Service-level indicator (SLI) is the metric that actually quantifies how well the service is performing. Examples include availability (% of successful requests), request latency (e.g., 95th percentile under 300 ms), error rate, or throughput. The prompt asks for “the metric that actually quantifies how well the service is performing,” which is precisely the definition of an SLI. In the scenario, the measured values for monthly availability and p95 latency during peak hours are SLIs.

Key Features / Best Practices: In practice, SLIs are computed from telemetry (Cloud Monitoring metrics, logs-based metrics, traces) and should be:
- User-centric (measure what users experience, e.g., successful requests, end-to-end latency).
- Clearly defined (what counts as “success,” which endpoints, which regions, which time windows like 18:00–22:00).
- Tied to error budgets (derived from SLOs) to guide release velocity vs. reliability.
For a multi-region service (us-central1 and europe-west1), teams often define SLIs per region and globally, and ensure measurement accounts for traffic distribution and failover behavior.

Common Misconceptions: People often confuse SLI with SLO because the question includes targets (99.9% and p95 < 300 ms). Those targets are SLOs, but the underlying measured quantities (availability, latency percentiles) are SLIs. Another confusion is with SLA, which is an external contract and may include penalties; it is not the internal measurement itself.
Exam Tips: Memorize the mapping: SLI = “what you measure,” SLO = “the target for the measurement,” SLA = “customer-facing contract.” If the question asks for the metric/measurement, pick SLI; if it asks for the goal/threshold, pick SLO; if it asks for a contractual guarantee, pick SLA. This aligns with Google Cloud Operations/SRE practices and the Google Cloud Architecture Framework’s reliability pillar (measurable objectives and monitoring).
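The SLI/SLO distinction can be made concrete with a small sketch: compute the measured quantities (SLIs) from request records, then compare them to the targets (SLOs). The sample records are invented, and the nearest-rank percentile is a simplification of what monitoring tools do:

```python
# Computing SLIs from raw request records, then checking them against SLOs.
# The records below are made-up sample data.

def availability_sli(requests: list[dict]) -> float:
    ok = sum(1 for r in requests if r["success"])
    return ok / len(requests)

def p95_latency_ms(requests: list[dict]) -> float:
    # Simple nearest-rank percentile; production systems derive this
    # from telemetry (e.g., Cloud Monitoring distributions).
    latencies = sorted(r["latency_ms"] for r in requests)
    idx = max(0, int(len(latencies) * 0.95) - 1)
    return latencies[idx]

requests = [{"success": True, "latency_ms": 120 + i} for i in range(95)]
requests += [{"success": False, "latency_ms": 450} for _ in range(5)]

print(availability_sli(requests))           # 0.95  -> an SLI (what you measure)
print(p95_latency_ms(requests))             # 214   -> an SLI (what you measure)
print(availability_sli(requests) >= 0.999)  # False -> the SLO is the target it is checked against
```

Note how the functions return measurements; the 0.999 threshold only appears in the comparison, which is exactly the SLI-versus-SLO split.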
Your global e-commerce platform ingests clickstream and ad-impression events from 180 million monthly active users across 5 regions (NA, EU, APAC, LATAM, MEA). At peak, the system must handle at least 1.2 million writes per second with sub-10 ms write latency, store petabyte-scale wide-column time-series data, and scale horizontally without complex schema migrations. Which Google Cloud product should you choose?
Firestore is a serverless document database suited for app backends, user profiles, and real-time sync. While it scales well and supports multi-region, it is not optimized for petabyte-scale wide-column time-series workloads with extremely high sustained write rates (e.g., 1.2M writes/sec) and the access patterns typical of clickstream/event ingestion. Firestore’s document model and limits make it a less natural fit than Bigtable for this scenario.
Cloud Data Fusion is a managed data integration/ETL service (based on CDAP) used to build pipelines that move and transform data between systems. It does not provide the underlying low-latency, high-throughput operational storage required here. In this architecture, Data Fusion could help ingest/transform data into Bigtable/BigQuery, but it cannot replace the database layer needed for sub-10 ms writes at massive scale.
Cloud SQL is a managed relational database (MySQL/PostgreSQL/SQL Server). It is ideal for transactional relational workloads but is constrained by vertical scaling and read replicas; it is not designed for horizontally scaling to millions of writes per second with sub-10 ms latency across regions. Schema evolution and migrations are also more complex in relational systems compared to sparse wide-column stores for evolving event attributes.
Cloud Bigtable is a managed wide-column NoSQL database built for massive throughput and low-latency reads/writes at petabyte scale. It scales horizontally by adding nodes, supports sparse and flexible schemas (reducing migration complexity), and is a common choice for time-series, clickstream, and ad-tech event data. With proper row-key design to avoid hotspots and optional replication for multi-region resilience, it matches the stated requirements.
Core Concept: This question tests selecting the right managed database for extremely high-throughput, low-latency ingestion of petabyte-scale, wide-column time-series data. The key signals are: 1.2M writes/sec, sub-10 ms write latency, wide-column model, horizontal scaling, and avoiding complex schema migrations.

Why the Answer is Correct: Cloud Bigtable is Google Cloud’s managed, wide-column NoSQL database designed for massive scale and consistent single-digit millisecond latency for reads/writes. It is commonly used for time-series, clickstream, ad-tech, IoT telemetry, and personalization event stores. Bigtable scales horizontally by adding nodes; throughput increases linearly with node count, making it suitable for sustained high write rates like 1.2M writes/sec across multiple regions.

Key Features / Best Practices: Bigtable’s schema is sparse and flexible (column families/qualifiers), which supports evolving event attributes without disruptive migrations. It is optimized for high write throughput when you design row keys correctly (e.g., time-bucketing or salting/hashing to avoid hotspotting from monotonically increasing timestamps). You can use replication (multi-cluster routing) for regional resilience and lower-latency access, aligning with global architectures. Bigtable integrates well with streaming ingestion (often via Pub/Sub + Dataflow) and analytics (BigQuery federation/exports), supporting an end-to-end data transformation pipeline.

Common Misconceptions: Firestore is also NoSQL and globally distributed, but it is a document database with different scaling characteristics and limits; it is not the typical choice for petabyte-scale wide-column time-series at million-writes/sec rates. Cloud SQL is relational and will not meet the horizontal scaling and write-latency requirements at this magnitude. Cloud Data Fusion is an integration/ETL tool, not a serving database.
Exam Tips: When you see “wide-column,” “time-series,” “petabyte-scale,” and “single-digit ms latency with massive throughput,” think Cloud Bigtable. For relational OLTP choose Cloud SQL/Spanner; for document/mobile/web app data choose Firestore; for ETL/orchestration choose Data Fusion. Also remember Bigtable performance depends heavily on row-key design and node sizing/quotas, and replication is used for multi-region availability rather than strong multi-row transactions.
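The row-key guidance above can be sketched as follows. The "salt#user#reversed_timestamp" layout and the constants are hypothetical conventions chosen for illustration, not Bigtable requirements; the point is that salting spreads writes across key ranges and a reversed timestamp makes the newest events sort first:

```python
# Sketch of a Bigtable-style row-key design that avoids hotspotting on
# monotonically increasing timestamps. The "salt#user#reversed_ts" layout
# and the constants below are hypothetical conventions.
import hashlib

NUM_SALTS = 16        # assumption: spread writes across 16 key ranges
MAX_TS = 10**13       # assumption: upper bound for millisecond timestamps

def event_row_key(user_id: str, ts_millis: int) -> str:
    # Deterministic salt per user keeps one user's events contiguous
    # while distributing different users across tablets.
    salt = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % NUM_SALTS
    reversed_ts = MAX_TS - ts_millis  # newest events sort first lexicographically
    return f"{salt:02d}#{user_id}#{reversed_ts:013d}"

k1 = event_row_key("user-42", 1_700_000_000_000)
k2 = event_row_key("user-42", 1_700_000_000_500)  # 500 ms later
print(k1)
print(k2 < k1)  # True: the later event sorts first under the same prefix
```

Because Bigtable stores rows sorted by key, this layout turns "latest events for a user" into a cheap prefix scan while avoiding a single hot tablet at the end of the keyspace.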
Your agriculture analytics company has 200,000 labeled PNG images of crop leaves in a Cloud Storage bucket and needs to train a custom model to classify each image into 8 disease categories within two weeks using a fully managed service without writing model code. Which Google Cloud product or service should you use?
Video Intelligence API is intended for analyzing video content with prebuilt capabilities such as label detection, shot change detection, and object tracking in videos. The question is about training a custom classifier on PNG image files, not extracting insights from video streams. It does not provide a no-code workflow for building a bespoke image classification model from labeled still images. Therefore, it does not fit either the data type or the custom training requirement.
AutoML Vision is the correct choice because it is a fully managed Google Cloud service for training custom computer vision models from labeled image datasets. It supports image classification use cases like assigning crop leaf images into 8 disease categories, and it can ingest training data from Cloud Storage. The service is designed for users who do not want to write model architecture or training code, which matches the requirement exactly. It also provides built-in training, evaluation, and deployment workflows, making it practical for delivering a model within a short timeline such as two weeks.
BigQuery ML is primarily used to build machine learning models with SQL on data stored in BigQuery, especially structured or tabular datasets. Although it is a managed ML option, it is not the standard service for training custom image classifiers directly from large PNG datasets in Cloud Storage. The scenario specifically requires image-based supervised learning without writing model code, which is what AutoML Vision is built for. Using BigQuery ML here would be a mismatch in both data modality and intended workflow.
Looker is a business intelligence and analytics platform used for dashboards, reporting, and data exploration. It can help visualize results from a machine learning system, but it does not train custom computer vision models. The requirement is to build an image classifier from labeled crop leaf images using a fully managed service. Because Looker is not a model training service, it is not an appropriate choice.
Core Concept: This question tests selecting a fully managed Google Cloud AI service that can train a custom image classification model from labeled images without writing model code. In Google Cloud, this is addressed by Vertex AI’s AutoML capabilities (commonly referred to in exams as AutoML Vision).

Why the Answer is Correct: AutoML Vision is designed for supervised learning on images (classification, object detection) using labeled datasets stored in Cloud Storage. The company already has 200,000 labeled PNG images and needs an 8-class classifier. AutoML Vision provides an end-to-end workflow: import data from Cloud Storage, define labels, automatically split training/validation/test sets, train a model, evaluate metrics, and deploy for online prediction or batch prediction—without requiring the team to implement model architecture or training code. The “within two weeks” requirement aligns with managed training that can scale using Google’s infrastructure and GPUs/TPUs behind the scenes.

Key Features / Best Practices: AutoML Vision (Vertex AI AutoML) supports large-scale datasets, handles common preprocessing, and provides model evaluation (confusion matrix, precision/recall) to validate performance across the 8 disease categories. Best practices include ensuring label quality, balanced classes (or using data augmentation/collection to reduce imbalance), and using Cloud Storage organization conventions. For production, consider batch prediction for large backfills and online endpoints for real-time classification. Also plan for regional availability (Vertex AI is regional) and cost controls (training and prediction are billed; large datasets increase training time/cost).

Common Misconceptions: People may confuse “no code” ML with BigQuery ML, but BQML is primarily for structured/tabular data and SQL-based modeling, not image classification from PNGs. Video Intelligence API is for analyzing video content, not training custom image classifiers. Looker is BI/visualization and does not train ML models.

Exam Tips: When you see “labeled images in Cloud Storage,” “custom model,” and “no model code,” think Vertex AI AutoML (AutoML Vision). If the input is video, consider Video Intelligence; if it’s tabular data in BigQuery with SQL, consider BigQuery ML; if it’s dashboards, consider Looker. Map the data modality (images) to the correct managed AI product.
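As a sketch of the data-import step, AutoML image classification datasets are commonly described by a CSV of Cloud Storage URIs and labels. The exact schema varies by product version, so verify it against the current Vertex AI documentation; the bucket name, folder layout, and file names below are invented:

```python
# Sketch: building an import file for an AutoML image classification dataset.
# AutoML Vision-style imports commonly use "gcs_uri,label" rows; check the
# current Vertex AI docs for the exact schema before relying on this.

def import_rows(bucket: str, files_by_label: dict[str, list[str]]) -> list[str]:
    """Build one "gs://...,label" row per labeled image (hypothetical layout:
    images are stored under a folder named after their label)."""
    rows = []
    for label, files in sorted(files_by_label.items()):
        for name in files:
            rows.append(f"gs://{bucket}/{label}/{name},{label}")
    return rows

rows = import_rows("crop-leaf-images", {
    "rust": ["leaf_0001.png", "leaf_0002.png"],
    "blight": ["leaf_1001.png"],
})
print("\n".join(rows))
```

A script like this is data preparation, not model code: the team still writes no training logic, which is the point of the AutoML workflow.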
An urban bike-sharing startup managing 2,500 bikes across 120 stations wants to improve rider satisfaction over the next quarter. Over the past 6 months, they collected 500 user submissions via an in-app 'Report an issue' form, IoT telemetry from bike locks every 5 seconds, and daily station fill-level summaries. To decide where to prioritize improvements, which of the following represents unstructured data they can analyze?
Free-text issue descriptions are unstructured because they don’t conform to a fixed schema and vary widely in wording and length. They require NLP or text analytics to extract meaning (topics, sentiment, entities) before they can be aggregated reliably. This is the best example of unstructured data for prioritizing rider-experience improvements.
GPS coordinates collected every 5 seconds are typically structured time-series data: each event has consistent fields such as timestamp, latitude, longitude, and bike identifier. While the dataset is high-volume and high-velocity, it still fits a defined schema and is readily stored/analyzed in systems like BigQuery or time-series pipelines.
Daily station utilization percentages by station are structured metrics. They naturally fit into a table with columns like station_id, date, and utilization_percent. This data is straightforward for reporting and trend analysis in BigQuery or Looker, and it is not considered unstructured.
Inventory tables with SKU codes, quantities, and reorder thresholds are classic structured relational data. They have a well-defined schema and are typically stored in relational databases or managed services. This is the opposite of unstructured data and is easy to query with SQL.
Core Concept: This question tests data classification for analytics: structured vs semi-structured vs unstructured data. In Google Cloud exam contexts, this maps to choosing appropriate storage/analytics tools (for example, BigQuery for structured/semi-structured, and NLP/Vertex AI for unstructured text) and understanding what “unstructured” means.

Why the Answer is Correct: Unstructured data is information that does not follow a predefined schema of rows/columns and is not easily represented in a relational table without significant preprocessing. Free-text issue descriptions from users are classic unstructured data: each submission can vary in length, vocabulary, grammar, and content. Analyzing this text (e.g., clustering complaints, sentiment, topic extraction) can directly inform prioritization to improve rider satisfaction.

Key Features / How Google Cloud Would Analyze It: Unstructured text is commonly analyzed using Natural Language Processing (NLP). On Google Cloud, teams might store raw submissions in Cloud Storage or Firestore, then use BigQuery for metadata and indexing, and apply Vertex AI / Natural Language APIs for entity extraction, sentiment analysis, and classification. Best practice is to keep raw data immutable, add derived structured fields (topic labels, severity scores), and then join those features with structured telemetry and station metrics in BigQuery for holistic prioritization.

Common Misconceptions: High-volume telemetry (like GPS every 5 seconds) can feel “unstructured” because it’s big and fast, but it is typically structured: each record has consistent fields (timestamp, lat, long, bike_id). Similarly, daily utilization percentages are clearly structured time-series metrics. Inventory tables are the most obviously structured relational data. The exam often distinguishes “complex” or “large-scale” from “unstructured”—volume/velocity does not make data unstructured.

Exam Tips: When asked to identify unstructured data, look for free-form text, images, audio, video, PDFs, or social posts. When you see numeric fields with consistent columns (coordinates, percentages, SKUs), that’s structured (or at most semi-structured if in JSON). Also remember: unstructured data often benefits from AI/ML (Vertex AI) to extract structure, after which it can be analyzed alongside structured datasets in BigQuery.
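The pattern described above, adding derived structured fields such as topic labels to raw free-text reports, can be sketched with simple keyword matching. A real pipeline would use an NLP service for this; the topics, keywords, and sample reports below are hypothetical and only illustrate how structure gets extracted from unstructured text before joining it with structured data.

```python
# Toy stand-in for NLP-based topic tagging of free-text issue reports.
# Topic names and keyword lists are hypothetical examples.
TOPIC_KEYWORDS = {
    "brakes": ["brake", "braking"],
    "lock": ["lock", "unlock", "stuck"],
    "app": ["app", "crash", "login"],
}

def tag_report(text):
    """Return sorted topic labels whose keywords appear in the text."""
    text = text.lower()
    return sorted(
        topic for topic, words in TOPIC_KEYWORDS.items()
        if any(w in text for w in words)
    )

reports = [
    "Front brake barely works on bike 1042",
    "The lock was stuck and the app crashed twice",
]
# Derived structured rows: raw text kept immutable, topics added as a field
tagged = [{"text": r, "topics": tag_report(r)} for r in reports]
print(tagged[1]["topics"])  # ['app', 'lock']
```

Once each report carries structured fields like `topics`, the rows can be loaded into BigQuery and joined against telemetry and station metrics, which is the "holistic prioritization" step described above.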
A 120-employee nonprofit is conducting a quarterly security readiness review and notes that during the last 30 days, 3 simulated lures were sent and 26% of staff clicked the links; the CISO wants to prioritize controls for social-engineering risks specifically. Which of the following is the most plausible method an attacker would use to carry out a social-engineering attack in this context?
SQL injection is an application-layer technical exploit where an attacker manipulates poorly validated inputs to execute unintended database queries. It targets software vulnerabilities in code and input handling, not human behavior. While it is a serious risk, it is not a social-engineering method and does not align with the phishing-simulation context described (lures, clicks).
Overheating rack servers is a form of physical sabotage or environmental attack. It could cause outages or equipment damage, but it is not social engineering because it does not rely on deceiving employees into taking an action. The scenario focuses on staff clicking links from simulated lures, which points to phishing rather than physical threats.
This is a classic phishing attack, which is one of the most common forms of social engineering. The attacker impersonates a trusted internal function such as payroll, relying on employee trust and urgency to persuade users to click a link and submit credentials. That directly matches the scenario’s mention of simulated lures and a measured click rate, which are standard indicators used in phishing-awareness testing. Unlike technical exploits against software or infrastructure, this method targets human decision-making and is therefore the most plausible social-engineering attack in this context.
A botnet-driven flood is a Distributed Denial of Service (DDoS) attack focused on exhausting resources and degrading availability. It is primarily a technical, volumetric attack against infrastructure and network capacity, not a human-deception tactic. It does not fit the context of employees clicking links or being tricked into revealing credentials.
Core Concept: This question tests understanding of social engineering (especially phishing) as a human-focused attack vector. In Google Cloud Digital Leader terms, it aligns with security fundamentals: threats, risk management, and controls that reduce credential theft and account compromise.

Why the Answer is Correct: The scenario describes simulated lures and a click rate (26%), which is classic phishing-readiness testing. The most plausible real attacker method in this context is sending deceptive emails that impersonate a trusted internal function (e.g., payroll) and directing users to a fake login form to harvest credentials. This is a social-engineering attack because it manipulates human trust rather than exploiting a technical vulnerability directly.

Key Features / Best Practices (controls to prioritize): To reduce social-engineering risk, prioritize identity and access controls and user protections: enforce phishing-resistant MFA (e.g., security keys/passkeys) for Google accounts, implement strong email authentication (SPF, DKIM, DMARC), and use advanced phishing and malware protection in email. Apply least privilege and conditional access principles (context-aware access) so stolen credentials alone are less useful. In Google Cloud, also emphasize centralized identity (Cloud Identity / Google Workspace), security monitoring and alerting (Security Command Center for cloud posture; Workspace security investigations where applicable), and continuous awareness training with measurable outcomes.

Common Misconceptions: Learners sometimes confuse “security attack” with “social engineering.” SQL injection and DDoS are common cyberattacks, but they target systems and availability rather than manipulating people. Physical sabotage (overheating racks) is a threat, but it is not the typical “social engineering” method implied by phishing simulation metrics.

Exam Tips: When you see metrics like “lures sent,” “click rate,” “credential entry,” or “impersonation,” think phishing/social engineering. Map the attack to the security domain: human deception → identity compromise → account takeover. Then think of mitigations: phishing-resistant MFA, email authentication, user training, and least privilege to limit blast radius.
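The readiness metric in the scenario (26% of 120 staff clicking a lure) is simple arithmetic, and tracking it against a goal is how "measurable outcomes" for awareness training usually work. A minimal sketch, where the 10% target threshold is a hypothetical program goal, not a Google recommendation:

```python
def click_rate(clicked, total):
    """Fraction of targeted staff who clicked a simulated lure."""
    return clicked / total

staff = 120
clicked = 31            # ~26% of 120, matching the scenario
rate = click_rate(clicked, staff)
target = 0.10           # hypothetical readiness goal for the program

print(f"Click rate: {rate:.0%}")  # Click rate: 26%
print("Prioritize training and phishing-resistant MFA" if rate > target
      else "On track")
```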
A metropolitan transit agency stores real-time vehicle locations and historical ridership data in a relational database (e.g., Cloud SQL). They want to create a new revenue stream by allowing third-party mobile app developers to use this data in their apps. They expect around 120,000 requests per day at peak and require usage-based billing without granting direct database access. Which cloud-first approach should the agency choose?
Creating external accounts to query the production database violates least privilege and significantly increases security risk (credential leakage, SQL injection surface, network exposure). It also makes usage-based billing and throttling harder because databases aren’t designed as public multi-tenant billing endpoints. This approach can degrade reliability by allowing uncontrolled query patterns that impact operational workloads.
Selling monthly CSV downloads is a form of data distribution, but it doesn’t satisfy real-time access needs and doesn’t align with “requests per day at peak.” It also limits monetization flexibility (no per-request billing, limited tiering) and creates operational overhead for generating, hosting, and supporting bulk exports. It’s not a cloud-first, scalable integration pattern.
Exposing an authenticated, metered API is the correct cloud-first approach. An API layer (often with Apigee/API Gateway) enables developer onboarding, API keys/OAuth, quotas and rate limiting, analytics, and monetization models (pay-per-call, tiers). It protects Cloud SQL by mediating access, enabling caching, and allowing backend evolution without breaking clients.
Migrating to a non-relational database is not required to share data securely or to implement usage-based billing. The key requirement is controlled access and metering, which is solved by an API management pattern. A migration adds cost, risk, and time, and may not improve security or monetization by itself unless there are separate performance/scaling needs.
Core Concept: This question tests cloud-first data monetization and secure sharing patterns: exposing internal data via an API layer (API management) rather than granting direct database access. In Google Cloud, this commonly maps to Cloud Run/Cloud Functions or GKE hosting an API, fronted by Apigee or API Gateway, with authentication, quotas, and analytics for metering.

Why the Answer is Correct: Option C best meets all requirements: (1) third parties can access data without direct Cloud SQL access, (2) requests can be authenticated and authorized, (3) usage can be metered and tied to billing, and (4) the solution scales to peak demand (120,000 requests/day) while protecting the production database. An API façade decouples consumers from the database schema and enables productization (plans, tiers, keys, and contracts). This aligns with the Google Cloud Architecture Framework principles of security (least privilege), reliability (protecting the system of record), and cost optimization (controlled consumption).

Key Features / Best Practices: Use an API management layer (Apigee is the canonical choice for monetization) to issue developer keys, enforce OAuth2/JWT, apply quotas/rate limits, and capture analytics per consumer. Implement caching (Apigee caching, Memorystore, or CDN where applicable) to reduce database load and improve latency. Place the API behind Cloud Armor/WAF controls and use IAM/service accounts for the API-to-database connection (e.g., Cloud SQL Auth Proxy/Connector). Consider read replicas or a separate serving datastore (e.g., BigQuery for historical analytics) to isolate production workloads.

Common Misconceptions: Some assume “just give read-only DB users” is simplest, but it breaks least privilege, increases attack surface, complicates auditing, and makes per-request billing difficult. Others think changing database type is required for sharing; it isn’t—access pattern and governance matter more than the storage engine.

Exam Tips: When you see “third-party access,” “no direct database access,” and “usage-based billing,” think “API product” with authentication + quotas + analytics. For Digital Leader, choose the managed, cloud-first approach (API management) over manual exports or exposing core databases directly.
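The per-consumer metering and quota behavior described above is something the gateway enforces for you, but its core logic can be sketched as a per-key daily counter. This is a toy stand-in for what Apigee/API Gateway quota policies do, with a hypothetical quota and key names; a real gateway would also handle authentication, analytics export, and billing integration.

```python
import time
from collections import defaultdict

DAILY_QUOTA = 1000  # hypothetical requests-per-key-per-day tier

class UsageMeter:
    """Minimal per-API-key daily quota check; counts also feed billing."""
    def __init__(self, quota=DAILY_QUOTA):
        self.quota = quota
        self.counts = defaultdict(int)  # (api_key, day) -> request count

    def allow(self, api_key, now=None):
        ts = now if now is not None else time.time()
        day = time.strftime("%Y-%m-%d", time.gmtime(ts))
        key = (api_key, day)
        if self.counts[key] >= self.quota:
            return False  # over quota: a gateway would return HTTP 429
        self.counts[key] += 1  # metered usage for per-request billing
        return True

# Fixed timestamp (epoch 0) keeps the demo deterministic
meter = UsageMeter(quota=2)
print(meter.allow("dev-123", now=0))  # True
print(meter.allow("dev-123", now=0))  # True
print(meter.allow("dev-123", now=0))  # False (third call exceeds quota)
```

Because the counter is keyed per developer, each consumer gets an independent allowance, which is exactly what makes tiered plans and pay-per-call billing possible without exposing the database.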
Study period: 1 month
I liked that the questions were very similar to the real exam. After finishing all the lectures I had a hard time finding practice questions, and this app worked really well for me.
Study period: 1 month
Good questions and explanations. The resource is very similar to the real exam questions. Thanks.
Study period: 1 month
This is exactly what you need to pass the GCP CDL exam. The look and feel are exactly how you would experience the real exam. The questions are very similar and you'll even find a few on the exam itself. I would recommend this for anyone looking to obtain this certification. The exam is not an easy one, so the explanations to the questions are very helpful for solidifying your understanding.
Study period: 2 months
I used Cloud Pass to prepare for the GCP CDL exam, and it made a huge difference. The practice questions covered a wide range of scenarios I actually saw on the test. The explanations were clear and helped me understand how LookML and data modeling work in real projects. If you focus on understanding the logic behind each question, this app is more than enough to pass.
Study period: 1 month
Cloud Pass was my main study tool for the GCP CDL exam, and I passed on the first try. The questions were realistic and helped me get comfortable with Looker concepts, permissions, explores, and model structures. I especially liked that I could reset my progress and re-solve the tricky questions. Strongly recommend this for anyone targeting CDL.

