
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analysis.
Your team is preparing to train a fraud detection model using data in BigQuery that includes several fields containing PII (for example, card_number, customer_email, and phone_number). The dataset has approximately 250 million rows and every column is required as a feature. Security requires that you reduce the sensitivity of PII before training while preserving each column’s format and length so downstream SQL joins and validations continue to work. The transformation must be deterministic so the same input always maps to the same protected value, and authorized teams must be able to decrypt values for audits. How should you proceed?
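A minimal sketch of the approach this scenario points toward: Cloud DLP format-preserving encryption (FFX FPE), which is deterministic, reversible for authorized key holders, and keeps the output the same length and character set as the input. The project ID, KMS key name, and sample value below are illustrative assumptions, not values from the question.

```python
# Sketch only: de-identify a PII value with Cloud DLP format-preserving
# encryption (FPE, FFX mode). The transformation is deterministic and can be
# reversed via reidentify_content by teams with access to the wrapped key.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # assumed project

deidentify_config = {
    "info_type_transformations": {
        "transformations": [{
            "primitive_transformation": {
                "crypto_replace_ffx_fpe_config": {
                    "crypto_key": {
                        "kms_wrapped": {
                            "wrapped_key": b"<wrapped-key-bytes>",  # assumed; wrapped by Cloud KMS
                            "crypto_key_name": "projects/my-project/locations/global/keyRings/kr/cryptoKeys/ck",  # assumed
                        }
                    },
                    "common_alphabet": "NUMERIC",  # preserves digit-only format and length
                }
            }
        }]
    }
}

# Default infoType detection flags the card number in this sample value.
item = {"value": "4111111111111111"}

response = client.deidentify_content(
    request={"parent": parent, "deidentify_config": deidentify_config, "item": item}
)
print(response.item.value)  # same length, digits only, deterministic
```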
You are a data scientist at a national power utility analyzing 850 million smart-meter readings from 3,000 substations collected over 5 years. For exploratory analysis, you must compute descriptive statistics (mean, median, mode) by device and region, perform complex hypothesis tests (e.g., differences between peak vs. off-peak and seasonal periods with multiple comparisons), and plot feature variations at hourly and daily granularity over time, while using as much of the telemetry as possible and minimizing computational resources. What should you do?
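A minimal sketch of pushing the descriptive statistics into BigQuery so the 850 million rows never leave the warehouse and only the small aggregated result is pulled client-side; the project, table, and column names below are assumptions.

```python
# Sketch only: compute mean/median/mode per device and region inside BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project

query = """
SELECT
  device_id,
  region,
  AVG(reading) AS mean_reading,
  APPROX_QUANTILES(reading, 2)[OFFSET(1)] AS median_reading,      -- approximate median
  APPROX_TOP_COUNT(reading, 1)[OFFSET(0)].value AS mode_reading   -- most frequent value
FROM `my-project.metering.smart_meter_readings`                   -- assumed table
GROUP BY device_id, region
"""

# Only one row per device/region comes back for plotting and hypothesis testing.
stats = client.query(query).to_dataframe()
print(stats.head())
```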
You are launching a grocery delivery mobile app across 3 cities and will use Google Cloud's Recommendations AI to build, test, and deploy product suggestions; you currently capture about 2.5 million user events per day, maintain a catalog of 120,000 SKUs with accurate price and availability, and your business objective is to raise average order value (AOV) by at least 6% within the next quarter while adhering to best practices. Which approach should you take to develop recommendations that most directly increase revenue under these constraints?
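A minimal sketch of querying a Retail API (Recommendations AI) serving config; a "frequently bought together" style model optimized for revenue per order is the usual fit for an AOV objective. The project, serving-config name, visitor ID, and SKU below are assumptions.

```python
# Sketch only: request recommendations from a revenue-optimized serving config.
from google.cloud import retail_v2

client = retail_v2.PredictionServiceClient()

request = retail_v2.PredictRequest(
    # Assumed serving config backed by a revenue-optimized model.
    placement=(
        "projects/my-project/locations/global/catalogs/default_catalog/"
        "servingConfigs/frequently-bought-together"
    ),
    user_event=retail_v2.UserEvent(
        event_type="detail-page-view",
        visitor_id="visitor-123",  # assumed
        product_details=[
            retail_v2.ProductDetail(product=retail_v2.Product(id="sku-001"))  # assumed SKU
        ],
    ),
)

response = client.predict(request=request)
for result in response.results:
    print(result.id)  # recommended product IDs
```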
A fintech analytics team has migrated 12 time-series forecasting and anomaly-detection models to Google Cloud over the last 90 days and is now standardizing new training on Vertex AI. You must implement a system that automatically tracks model artifacts (datasets, feature snapshots, checkpoints, and model binaries) and end-to-end lineage across pipeline steps for dev, staging, and prod. The solution must be simple to adopt via reusable templates, require minimal custom code, retain lineage for at least 180 days, and scale to future models without re-architecting. What should you do?
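A minimal sketch of a reusable pipeline template on Vertex AI Pipelines, where every artifact passed between components is recorded automatically in Vertex ML Metadata to form the lineage graph, with no custom tracking code. Project, region, bucket, and component bodies are placeholders.

```python
# Sketch only: a KFP v2 template; Vertex AI Pipelines logs each artifact and
# execution to Vertex ML Metadata automatically when the job runs.
from kfp import compiler, dsl
from google.cloud import aiplatform

@dsl.component(base_image="python:3.10")
def prepare_data(dataset: dsl.Output[dsl.Dataset]):
    # Placeholder: write the feature snapshot; the artifact is tracked for lineage.
    with open(dataset.path, "w") as f:
        f.write("feature_1,feature_2,label\n")

@dsl.component(base_image="python:3.10")
def train_model(dataset: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model]):
    # Placeholder: lineage dataset -> model is captured automatically.
    with open(model.path, "w") as f:
        f.write("model-binary-placeholder")

@dsl.pipeline(name="forecasting-training-template")
def training_pipeline():
    data_step = prepare_data()
    train_model(dataset=data_step.outputs["dataset"])

compiler.Compiler().compile(training_pipeline, "training_pipeline.json")

# Submitting the compiled template records run, artifact, and execution metadata.
aiplatform.init(project="my-project", location="us-central1")  # assumed
aiplatform.PipelineJob(
    display_name="forecasting-training",
    template_path="training_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",  # assumed
).run()
```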
You trained an automated scholarship eligibility classifier for a national education nonprofit using Vertex AI on 1.2 million labeled applications, reaching an offline ROC AUC of 0.95; the review board is concerned that predictions may be biased by applicant demographics (e.g., gender, ZIP-code–derived income bracket, first-generation college status) and asks you to deliver transparent insight into how the model makes decisions for 500 sampled approvals and denials and to identify any fairness issues across these cohorts. What should you do?
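A minimal sketch of pulling per-instance Vertex Explainable AI feature attributions for the sampled approvals and denials, which can then be aggregated and compared across the demographic cohorts; the endpoint resource name and feature payload are assumptions.

```python
# Sketch only: per-instance attributions from an endpoint with explanations enabled.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"  # assumed
)

# Illustrative instance; a real run would loop over the 500 sampled applications.
sampled_applications = [
    {"gpa": 3.7, "income_bracket": "low", "first_generation": 1},
]

response = endpoint.explain(instances=sampled_applications)
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # Shows how much each feature pushed the decision; comparing these
        # distributions across cohorts surfaces potential fairness issues.
        print(attribution.feature_attributions)
```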
Want to practice all the questions on the go?
Download Cloud Pass for free, with practice tests, progress tracking, and more.
You are building a deep neural network classifier for a ride-sharing fraud detection system with 30 million training records; several categorical features have very high cardinality (driver_id ≈ 320,000 unique values, vehicle_vin ≈ 110,000, pickup_zip ≈ 42,000), and due to a 16 GB GPU memory cap you cannot materialize a full one-hot vocabulary for each column. Which encoding should you use to feed these categorical features into the model so that the representation scales, remains sparse, and does not impose artificial ordinality?
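A minimal sketch of the hashing trick in Keras, assuming TensorFlow: the Hashing layer maps raw IDs into a fixed number of buckets without materializing a vocabulary, and a learned Embedding on top avoids any artificial ordering. The bucket count and embedding width are illustrative; a real model would repeat this block per categorical column.

```python
# Sketch only: hashed categorical feature -> embedding, no vocabulary in memory.
import tensorflow as tf

driver_id = tf.keras.Input(shape=(1,), dtype=tf.string, name="driver_id")
hashed = tf.keras.layers.Hashing(num_bins=65536)(driver_id)           # fixed bucket count
embedded = tf.keras.layers.Embedding(input_dim=65536, output_dim=32)(hashed)
features = tf.keras.layers.Flatten()(embedded)
output = tf.keras.layers.Dense(1, activation="sigmoid")(features)

model = tf.keras.Model(inputs=driver_id, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```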
You work for a wind farm operator. You have been asked to develop a model to predict whether a turbine will require unscheduled maintenance on a given day. Your team has processed 18 months of turbine telemetry (~2.4 million rows) and created a table with the following columns:
• Turbine_id
• Site_id
• Date
• Hours_since_last_service (measured in hours)
• Average_vibration_frequency (measured in Hz)
• Temperature_delta_7d (measured in °C)
• Unscheduled_maintenance (binary label indicating whether maintenance occurred on the Date)
Models must be deployed in us-central1, and you need to interpret the model's results for each individual online prediction with per-instance feature contributions. What should you do?
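A minimal sketch of one way to meet the per-instance requirement: upload the trained model to Vertex AI with Explainable AI configured for sampled Shapley attributions and deploy it in us-central1, so each online prediction can return feature contributions. The bucket, serving image, and output name are assumptions.

```python
# Sketch only: Vertex AI model upload with sampled Shapley explanations enabled.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed project

metadata = aiplatform.explain.ExplanationMetadata(
    inputs={
        "features": {
            # Maps positions in the input array to the table's feature columns.
            "index_feature_mapping": [
                "hours_since_last_service",
                "average_vibration_frequency",
                "temperature_delta_7d",
            ],
            "encoding": "BAG_OF_FEATURES",
        }
    },
    outputs={"maintenance_probability": {}},  # assumed output name
)
parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)

model = aiplatform.Model.upload(
    display_name="turbine-maintenance-classifier",
    artifact_uri="gs://my-bucket/turbine-model/",  # assumed
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",  # assumed
    explanation_metadata=metadata,
    explanation_parameters=parameters,
)
endpoint = model.deploy(machine_type="n1-standard-4")
# endpoint.explain(...) now returns per-instance feature contributions.
```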
Your media analytics team is building a Vertex AI Pipelines workflow running on a private GKE cluster in europe-west1. The first task must run a parameterized BigQuery SQL query that filters the last 24 hours of event logs (~30 million rows, ~15 GB scanned) and pass the query output directly as the input artifact to the next task. You want the simplest, lowest-effort approach that integrates cleanly into the pipeline with minimal custom code. What should you do?
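A minimal sketch using the prebuilt BigqueryQueryJobOp from google-cloud-pipeline-components, which runs the SQL as a BigQuery job and hands its destination table to the next step as a pipeline artifact. The project, dataset, and downstream component are assumptions; real parameterization would use the operator's query_parameters argument.

```python
# Sketch only: prebuilt BigQuery operator feeding the next pipeline task.
from kfp import compiler, dsl
from google_cloud_pipeline_components.v1.bigquery import BigqueryQueryJobOp

@dsl.component(base_image="python:3.10")
def consume_table(table: dsl.Input[dsl.Artifact]):
    # Placeholder downstream task; receives the BigQuery table artifact.
    print(table.metadata)

@dsl.pipeline(name="daily-event-filter")
def pipeline(project: str = "my-project"):  # assumed project
    query_task = BigqueryQueryJobOp(
        project=project,
        location="europe-west1",
        query="""
            SELECT * FROM `my-project.media.events`  -- assumed table
            WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
        """,
    )
    consume_table(table=query_task.outputs["destination_table"])

compiler.Compiler().compile(pipeline, "daily_event_filter.json")
```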
Your team is fine-tuning a multilingual speech-to-text Transformer on Vertex AI using PyTorch DDP with 2 worker pools, each VM having 4x NVIDIA A100 40GB GPUs (total 8 GPUs) and a global batch size of 1024. You plan to use the Reduction Server strategy to accelerate cross-node gradient aggregation and will add a third worker pool dedicated to the reduction service. How should you configure the worker pools and container images for this distributed training job?
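A minimal sketch of the worker-pool layout for Reduction Server: pools 0 and 1 run the training container on the A100 VMs, while a third pool runs Google's published reduction server image on CPU-only machines with no training code. The training image URI, reducer machine shape, and replica count are assumptions; the reduction server image URI is the documented one.

```python
# Sketch only: CustomJob worker pools for PyTorch DDP with Reduction Server.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",              # assumed
    location="europe-west4",           # assumed A100 region
    staging_bucket="gs://my-bucket",   # assumed
)

TRAIN_IMAGE = "europe-docker.pkg.dev/my-project/training/stt-ddp:latest"  # assumed
GPU_MACHINE = {
    "machine_type": "a2-highgpu-4g",
    "accelerator_type": "NVIDIA_TESLA_A100",
    "accelerator_count": 4,
}

worker_pool_specs = [
    # Pool 0: chief, training container on 4x A100.
    {"machine_spec": GPU_MACHINE, "replica_count": 1,
     "container_spec": {"image_uri": TRAIN_IMAGE}},
    # Pool 1: second training worker, same container and GPUs.
    {"machine_spec": GPU_MACHINE, "replica_count": 1,
     "container_spec": {"image_uri": TRAIN_IMAGE}},
    # Pool 2: reduction servers, CPU-only, Google-provided image.
    {"machine_spec": {"machine_type": "n1-highcpu-16"}, "replica_count": 4,
     "container_spec": {"image_uri":
         "us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest"}},
]

job = aiplatform.CustomJob(
    display_name="stt-ddp-reduction-server",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```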
Your media subscription platform retrains a custom churn model every month on 48 GB of CSVs in Cloud Storage (~25 million rows) and then runs a batch job that scores 8.2 million users for the next 30 days. Compliance demands auditable end-to-end lineage linking the exact data snapshot, container image digest, trained model version, and each batch prediction output URI, retained for 12 months. You need a repeatable batch process with built-in lineage for both the model and the predictions. What should you do?
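A minimal sketch of the batch-scoring half of such a pipeline on Vertex AI Pipelines: the model is imported as a Vertex artifact and scored with the prebuilt ModelBatchPredictOp, so Vertex ML Metadata links the model, the input snapshot, and the prediction output URI in its lineage graph. The model resource name, bucket paths, and display names are assumptions.

```python
# Sketch only: importer + prebuilt batch prediction operator with built-in lineage.
from kfp import compiler, dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.batch_predict_job import ModelBatchPredictOp

MODEL_NAME = "projects/my-project/locations/us-central1/models/1234567890"  # assumed

@dsl.pipeline(name="monthly-churn-scoring")
def scoring_pipeline(
    source_csv: str = "gs://my-bucket/snapshots/2024-06/users-*.csv",   # assumed snapshot
    output_prefix: str = "gs://my-bucket/predictions/2024-06/",         # assumed output
):
    model = dsl.importer(
        artifact_uri=f"https://us-central1-aiplatform.googleapis.com/v1/{MODEL_NAME}",
        artifact_class=artifact_types.VertexModel,
        metadata={"resourceName": MODEL_NAME},
    )
    ModelBatchPredictOp(
        project="my-project",          # assumed
        location="us-central1",
        model=model.output,
        job_display_name="churn-batch-scoring",
        gcs_source_uris=[source_csv],
        instances_format="csv",
        gcs_destination_output_uri_prefix=output_prefix,
        predictions_format="jsonl",
    )

# Compiling once and re-running monthly keeps the process repeatable and audited.
compiler.Compiler().compile(scoring_pipeline, "scoring_pipeline.json")
```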
Study period: 1 month
Just want to say a massive thank you to the entire Cloud Pass team for helping me pass my exam the first time. I won't lie, it wasn't easy, especially the way the real exam is worded, but the way the practice questions teach you why your option was wrong really helps to frame your mind, understand what the question is asking for, and focus on the right solutions. Thanks once again.
Study period: 1 month
Good question banks and explanations that helped me practise and pass the exam.
Study period: 1 month
I worked through the questions right after the lectures and got around 80% correct, then passed the exam with a high score. The app served me well.
Study period: 1 month
Good mix of theory and practical scenarios
Study period: 1 month
I used the app mainly to review the fundamentals—data preparation, model tuning, and deployment options on GCP. The explanations were simple and to the point, which really helped before the exam.
Get the free app