
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.
Your team deployed a regression model that predicts hourly water usage for industrial chillers. Four months after launch, a vendor firmware update changed sensor sampling and units for three input features, and the live feature distributions diverged: 5 of 18 features now have a population stability index > 0.25, 27% of temperature readings fall outside the training range, and production RMSE increased from 0.62 to 1.45. How should you address the input differences in production?
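For context, the population stability index compares the binned distribution of a feature in production against its training baseline; values above roughly 0.25 are usually read as significant drift. A minimal sketch of the computation, with assumed feature values and bin count:

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    # Bin edges come from the training baseline so both samples are compared
    # on the same grid; open-ended outer bins catch live values that fall
    # outside the training range.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)

    # Floor the proportions to avoid log(0) on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Illustration: a firmware change that flips a sensor from Celsius to
# Fahrenheit shifts the distribution and pushes PSI far beyond 0.25.
rng = np.random.default_rng(0)
train_temp = rng.normal(20.0, 2.0, 10_000)   # training-time readings (deg C)
live_temp = rng.normal(68.0, 3.6, 10_000)    # live readings now in deg F
print(population_stability_index(train_temp, live_temp))
```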
You are building an end-to-end scikit-learn MLOps workflow in Vertex AI Pipelines (Kubeflow Pipelines) that ingests 50 GB of CSV data from Cloud Storage, performs data cleaning, feature selection, model training, and model evaluation, then writes a .pkl model artifact to a versioned path in a GCS bucket. You are iterating on multiple versions of the feature selection and training components, submitting each version as a new pipeline run in us-central1 on n1-standard-4 CPU-only executors; each end-to-end run currently takes about 80 minutes. You want to reduce iteration time during development without increasing your GCP costs; what should you do?
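One lever here is Vertex AI Pipelines' built-in execution caching: a step whose component code and inputs are unchanged between runs is skipped and its cached outputs are reused, so only the components you are actively editing re-execute. A minimal sketch, with hypothetical component bodies:

```python
from kfp import dsl

@dsl.component(base_image="python:3.10", packages_to_install=["pandas", "gcsfs"])
def clean_data(raw_path: str, cleaned: dsl.Output[dsl.Dataset]):
    import pandas as pd
    pd.read_csv(raw_path).dropna().to_csv(cleaned.path, index=False)

@dsl.component(base_image="python:3.10", packages_to_install=["pandas", "scikit-learn"])
def train_model(cleaned: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model]):
    import pickle
    import pandas as pd
    from sklearn.linear_model import Ridge
    df = pd.read_csv(cleaned.path)
    estimator = Ridge().fit(df.drop(columns=["target"]), df["target"])
    with open(model.path, "wb") as f:
        pickle.dump(estimator, f)

@dsl.pipeline(name="sklearn-dev-pipeline")
def pipeline(raw_path: str):
    cleaning = clean_data(raw_path=raw_path)
    # Caching is on by default in Vertex AI Pipelines; making it explicit here
    # documents that the 50 GB ingest/clean step is reused across dev runs.
    cleaning.set_caching_options(True)
    train_model(cleaned=cleaning.outputs["cleaned"])
```

When submitting with the Vertex AI SDK, the `enable_caching` argument of `aiplatform.PipelineJob` controls the same behavior per run.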
Your team must deliver an ML solution on Google Cloud to triage warranty claim emails for a global appliance manufacturer into 8 categories within 4 weeks. You are required to use TensorFlow to maintain full control over the model's code, serving, and deployment, and you will orchestrate the workflow with Kubeflow Pipelines. You have 30,000 labeled examples and want to accelerate delivery by leveraging existing resources and managed services instead of training a brand-new model from scratch. How should you build the classifier?
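A common way to leverage existing resources while keeping full TensorFlow control is to fine-tune a pre-trained text encoder from TensorFlow Hub instead of training a language model from scratch. A minimal sketch, assuming the Universal Sentence Encoder and integer labels for the 8 categories:

```python
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 8

# Pre-trained sentence encoder; start frozen, optionally unfreeze to fine-tune.
encoder = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",
    trainable=False,
)

inputs = tf.keras.Input(shape=(), dtype=tf.string, name="email_text")
embeddings = encoder(inputs)                          # 512-dim embeddings
hidden = tf.keras.layers.Dense(128, activation="relu")(embeddings)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_texts, train_labels, validation_split=0.1, epochs=3)
```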
You are building an anomaly detection model for an industrial IoT platform using Keras and TensorFlow. The last 24 months of sensor events (~900 million rows, ~2.6 TB) are stored in a single partitioned table in BigQuery, and you need to apply feature scaling, categorical encoding, and time-window aggregations in a cost-effective and efficient way before training. The trained model will be used to run weekly batch inference directly in BigQuery against newly ingested partitions. How should you implement the preprocessing workflow?
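For context, pushing the transformations into BigQuery itself keeps the 2.6 TB in place and lets the same SQL serve both the training extract and the weekly inference partitions. A minimal sketch, with hypothetical table and column names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

preprocess_sql = """
CREATE OR REPLACE TABLE `my-project.iot.training_features`
PARTITION BY DATE(event_ts) AS
SELECT
  sensor_id,
  event_ts,
  -- feature scaling against the full 24-month baseline
  (reading - AVG(reading) OVER ()) / NULLIF(STDDEV(reading) OVER (), 0) AS reading_z,
  -- simple categorical encoding
  CAST(status = 'FAULT' AS INT64) AS is_fault,
  -- 1-hour rolling aggregation per sensor
  AVG(reading) OVER (
    PARTITION BY sensor_id
    ORDER BY UNIX_SECONDS(event_ts)
    RANGE BETWEEN 3600 PRECEDING AND CURRENT ROW
  ) AS reading_1h_avg
FROM `my-project.iot.sensor_events`
"""
client.query(preprocess_sql).result()  # executes inside BigQuery, no data egress
```

The same SELECT logic can then feed the weekly `ML.PREDICT` calls over newly ingested partitions, so training and inference see identically transformed inputs.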
You are building an MLOps workflow for a smart‑city traffic analytics project that stitches together data preprocessing, model training, and model deployment across different Google Cloud services; traffic cameras upload 40–60 JSONL files (~50 MB each) per hour into a Cloud Storage bucket named gs://city-traffic-raw with bursty arrivals, you have already written code for each task, and you now need an orchestration layer that runs only when new files have arrived since the last successful run while minimizing always-on compute costs for orchestration; what should you do?
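One low-cost pattern is an event-driven trigger rather than an always-on scheduler: a Cloud Function fires when an object is finalized in the bucket and launches the run. A minimal sketch, with hypothetical project and template names (a real setup would likely debounce the bursty arrivals):

```python
import functions_framework
from google.cloud import aiplatform

@functions_framework.cloud_event
def on_new_traffic_file(cloud_event):
    # Fired by a Cloud Storage "object finalized" trigger on
    # gs://city-traffic-raw; nothing runs between bursts of uploads.
    data = cloud_event.data
    new_file = f"gs://{data['bucket']}/{data['name']}"

    aiplatform.init(project="my-project", location="us-central1")
    aiplatform.PipelineJob(
        display_name="traffic-analytics",
        template_path="gs://city-traffic-pipelines/traffic_pipeline.json",
        parameter_values={"input_file": new_file},
    ).submit()  # returns immediately; the pipeline runs on managed compute
```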
Want to practice all the questions on the go?
Download Cloud Pass for free – with practice tests, progress tracking, and more.
You work for a real-time multiplayer gaming company. You must design a system that stores and manages player telemetry features (e.g., positions, actions, and matches completed) and server locations over time. The system must provide sub-50 ms online retrieval of the latest features to feed a fraud-detection model for live inference, while the data science team must retrieve a point-in-time consistent snapshot of historical features (e.g., as-of a given timestamp) for training and backtesting. The solution should handle ingestion of approximately 200 million feature rows per day, support feature versioning, and require minimal operational effort. What should you do?
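Whatever feature platform is chosen, the training snapshot hinges on a point-in-time (as-of) join: each label must see the latest feature value at or before its timestamp, never a later one. A minimal sketch of that join semantics on toy data:

```python
import pandas as pd

features = pd.DataFrame({
    "player_id": [7, 7, 7],
    "feature_ts": pd.to_datetime(
        ["2024-05-01 10:00", "2024-05-01 10:05", "2024-05-01 10:20"]),
    "matches_completed": [130, 131, 132],
}).sort_values("feature_ts")

labels = pd.DataFrame({
    "player_id": [7],
    "label_ts": pd.to_datetime(["2024-05-01 10:12"]),
    "is_fraud": [0],
}).sort_values("label_ts")

# Backward as-of join: each label row picks the most recent feature row at or
# before label_ts, so later values can never leak into training.
snapshot = pd.merge_asof(labels, features,
                         left_on="label_ts", right_on="feature_ts",
                         by="player_id", direction="backward")
print(snapshot[["player_id", "label_ts", "matches_completed"]])
# matches_completed is 131 (the 10:05 value), not the later 132
```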
You are setting up a weekly demand-forecasting workflow for a nationwide grocery chain: you train a custom model on 85 GB of historical sales data stored in Cloud Storage and produce about 6 million batch predictions per run; compliance requires an auditable end-to-end lineage that links the exact training data snapshot, the resulting model artifact, and each weekly batch prediction job for at least 90 days; what should you do to ensure this lineage is automatically captured across training and prediction?
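For context, one way the per-run links can be recorded is through Vertex AI Experiments, which writes runs, parameters, and metrics into Vertex ML Metadata. A minimal sketch, with hypothetical URIs and a placeholder metric:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                experiment="weekly-demand-forecast")

aiplatform.start_run("run-2024-w22")
aiplatform.log_params({
    # Pin the exact inputs and outputs of this week's run.
    "training_data_snapshot": "gs://grocery-sales/snapshots/2024-w22/",
    "model_artifact": "gs://grocery-models/2024-w22/model.pkl",
    "batch_prediction_job": "weekly-forecast-2024-w22",
})
aiplatform.log_metrics({"validation_rmse": 0.0})  # placeholder value
aiplatform.end_run()
```

Running the training and prediction steps inside Vertex AI Pipelines additionally records artifact lineage in Vertex ML Metadata automatically.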
Your analytics guild is preparing a time-boxed 3-week prototype, and you must provide a shared Vertex AI Workbench user-managed notebook VM in us-central1 for exactly 8 external contractors while preventing the other 500 project users from opening or running the environment. You will provision the notebook instance yourself and need to follow least-privilege and ensure that notebook code can call Vertex AI APIs during experiments. What should you do to configure access correctly?
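A least-privilege setup typically gives the notebook VM a dedicated service account that holds the Vertex AI permissions, while the 8 contractors are granted only the right to use that instance and act as its service account; the other 500 project users receive nothing. A minimal sketch of the project-level binding for the service account, with assumed project and account names (the per-contractor grants would target the service account and instance, not the project):

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2

PROJECT = "projects/my-analytics-project"  # assumed project ID
NOTEBOOK_SA = ("serviceAccount:sprint-notebook@"
               "my-analytics-project.iam.gserviceaccount.com")

client = resourcemanager_v3.ProjectsClient()
policy = client.get_iam_policy(resource=PROJECT)

# The VM's dedicated service account, not the contractors, holds the Vertex AI
# permissions, so code running in the notebook can call Vertex AI APIs.
policy.bindings.append(policy_pb2.Binding(
    role="roles/aiplatform.user", members=[NOTEBOOK_SA]))

client.set_iam_policy(request={"resource": PROJECT, "policy": policy})
```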
You are training custom models with Vertex AI Training to classify defects in 12-megapixel manufacturing photos, and each week you swap in new neural architectures from research to benchmark them on the same fixed 600 GB dataset; you want automatic retraining to occur only when code changes are pushed to the main branch, keep full version control of code and build artifacts, and minimize costs by avoiding always-on orchestration or manual steps. What should you do to meet these requirements?
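One fit for these requirements is a Cloud Build trigger on pushes to main that builds a versioned training image and then launches a Vertex AI custom training job, so no orchestrator idles between pushes. A minimal sketch of the job-submission step, with assumed image, bucket, and machine settings:

```python
import os
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://defect-training-staging")

# Cloud Build passes the commit SHA in, pinning this run to the exact training
# image built from that push to main (full version control of code + artifact).
image = ("us-central1-docker.pkg.dev/my-project/training/"
         f"defect-trainer:{os.environ['COMMIT_SHA']}")

job = aiplatform.CustomContainerTrainingJob(
    display_name="defect-classifier-train",
    container_uri=image,
)
job.run(
    replica_count=1,
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)
```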
You are organizing a 24-hour internal ML sprint for a team of 12 data scientists who need to explore and prototype PySpark and Spark SQL transformations on 40 TB of Parquet data stored in Cloud Storage. The environment must be accessible via web-based notebooks, support distributed Spark execution out of the box, and require minimal setup with no manual package installs. What is the fastest way to provide a robust, scalable notebook environment for this sprint?
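One fast path is a Dataproc cluster with the Jupyter optional component and Component Gateway enabled, which provides web notebooks with Spark preconfigured and no manual package installs. A minimal sketch, with assumed cluster sizing and names:

```python
from google.cloud import dataproc_v1

REGION = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
)

cluster = {
    "cluster_name": "ml-sprint",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-8"},
        "worker_config": {"num_instances": 8, "machine_type_uri": "n1-standard-8"},
        # Jupyter comes preinstalled as an optional component ...
        "software_config": {"optional_components": ["JUPYTER"]},
        # ... and Component Gateway exposes it as a secure web UI.
        "endpoint_config": {"enable_http_port_access": True},
    },
}
operation = client.create_cluster(
    project_id="my-project", region=REGION, cluster=cluster)
operation.result()  # blocks until the cluster is ready for notebooks
```

Notebooks on the cluster can read the Parquet data straight from Cloud Storage via the preinstalled GCS connector, e.g. `spark.read.parquet("gs://...")`.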
Study period: 1 month
Just want to say a massive thank you to the entire Cloud Pass team for helping me pass my exam first time. I won't lie, it wasn't easy, especially the way the real exam is worded; however, the way the practice questions teach you why your option was wrong really helps to frame your mind and to understand what the question is asking for and the solutions you should be focusing on. Thanks once again.
Study period: 1 month
Good question banks and explanations that helped me practise and pass the exam.
Study period: 1 month
I worked through the questions right after finishing the lectures and got about 80% correct, and I passed the exam with a high score. The app served me well.
Study period: 1 month
Good mix of theory and practical scenarios
Study period: 1 month
I used the app mainly to review the fundamentals—data preparation, model tuning, and deployment options on GCP. The explanations were simple and to the point, which really helped before the exam.
Get the free app