
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
You deployed a TensorFlow recommendation model to a Vertex AI Prediction endpoint in us-central1 with autoscaling enabled. Over the last week, you observed sustained traffic of ~1,200 requests per minute (about 20 RPS) during business hours, which is 2x higher than your original estimate, and you need to keep P95 latency under 150 ms during future surges. You want the endpoint to scale efficiently to handle this higher baseline and upcoming spikes without causing user-visible latency. What should you do?
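For context, a minimal sketch of one way such an endpoint's autoscaling floor can be raised, assuming the google-cloud-aiplatform Python SDK with placeholder project, model, and endpoint IDs and an illustrative machine type; a warm minimum keeps the ~20 RPS baseline served while the maximum leaves headroom for surges.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder resource names for illustration only.
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234")
endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/5678")

model.deploy(
    endpoint=endpoint,
    machine_type="n1-standard-4",  # illustrative machine type
    min_replica_count=2,           # warm floor for the higher baseline
    max_replica_count=10,          # headroom so surges don't push P95 latency up
    traffic_percentage=100,
)
```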
You are part of a data science team at a ride‑sharing platform and need to train and compare multiple TensorFlow models on Vertex AI using 850 million labeled trip records (≈2.3 TB) stored in a BigQuery table; training will run on 4–8 workers and you want to minimize data‑ingestion bottlenecks while ensuring the pipeline remains scalable and repeatable. What should you do?
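As background for this scenario, a minimal sketch of a scalable ingestion pattern, assuming the BigQuery table has been materialized as sharded TFRecord files on Cloud Storage (the bucket path is a placeholder): tf.data can then stream shards in parallel to each of the 4-8 workers.

```python
import tensorflow as tf

# List the exported shards (path is an assumption for illustration).
files = tf.data.Dataset.list_files("gs://example-bucket/trips/tfrecord/part-*", shuffle=True)

dataset = (
    files.interleave(
        tf.data.TFRecordDataset,
        cycle_length=tf.data.AUTOTUNE,       # read many shards concurrently
        num_parallel_calls=tf.data.AUTOTUNE,
        deterministic=False,                 # favor throughput over ordering
    )
    .shuffle(50_000)
    .batch(1024)
    .prefetch(tf.data.AUTOTUNE)
)
```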
You are designing a TensorFlow Extended (TFX) pipeline with standard TFX components for a global media-streaming platform that analyzes user interaction logs; the pipeline includes feature engineering and data validation steps, and after promotion to production it must process up to 120 TB of historical clickstream data per day stored in BigQuery across 12 daily partitions (with an additional 2 TB ingested each day); you need the preprocessing steps to scale efficiently, automatically publish metrics and parameters to Vertex AI Experiments, and track all artifacts with Vertex ML Metadata. How should you configure the pipeline run?
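For reference, a minimal sketch of how a TFX pipeline can hand its Beam-based preprocessing to Dataflow via beam_pipeline_args, assuming placeholder project/bucket names and an empty stand-in components list; when such a pipeline runs on Vertex AI Pipelines, artifact and metadata tracking come from the orchestrator.

```python
from tfx.orchestration import pipeline

# Beam pipeline args route data-heavy TFX steps (e.g., Transform) to Dataflow.
beam_pipeline_args = [
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://example-bucket/tmp",
]

components = []  # stand-in for the real ExampleGen/StatisticsGen/Transform/... components

p = pipeline.Pipeline(
    pipeline_name="clickstream-preprocessing",
    pipeline_root="gs://example-bucket/pipeline-root",
    components=components,
    beam_pipeline_args=beam_pipeline_args,
)
```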
You are part of an operations team managing a fleet of 250 refrigerated delivery trucks. Each truck’s refrigeration unit streams telemetry at 10-second intervals, including compressor current (A), condenser coil temperature (°C), discharge pressure (kPa), and vibration RMS (g), resulting in roughly 14 months of historical data per truck. No breakdowns or incident events have been hand-labeled yet. Management asks for a predictive maintenance solution that can detect potential refrigeration unit failures with at least a 24-hour lead time so that routes can be rescheduled. What should you do first?
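Because no incidents are labeled yet, a first exploratory step often starts unsupervised; below is a minimal sketch of a per-truck rolling z-score detector in pandas, where the column name, window size, and threshold are all assumptions.

```python
import pandas as pd

def flag_anomalies(df: pd.DataFrame, window: int = 360, z_thresh: float = 4.0) -> pd.Series:
    """Flag 10-second samples whose vibration deviates more than z_thresh
    standard deviations from the truck's trailing 1-hour (360-sample) rolling mean."""
    rolling = df["vibration_rms_g"].rolling(window, min_periods=window // 2)
    z = (df["vibration_rms_g"] - rolling.mean()) / rolling.std()
    return z.abs() > z_thresh
```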
A logistics platform has trained three versions of an ETA prediction model (v1, v2, v3), imported them into Vertex AI Model Registry, and deployed them to a single online prediction endpoint; you expect about 120,000 prediction requests per day and want to run a 7-day A/B/n test by initially routing 50%/25%/25% of traffic to v1/v2/v3 while tracking per-version accuracy and p95 latency with the least engineering overhead. What should you do to identify the best-performing model using the simplest approach?
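For illustration, a minimal sketch of setting a 50/25/25 split when deploying v3 to the shared endpoint with the google-cloud-aiplatform SDK; in the SDK's traffic_split, keys are deployed-model IDs and the special key "0" refers to the model being deployed in the current call (all IDs below are placeholders).

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/5678")
model_v3 = aiplatform.Model("projects/my-project/locations/us-central1/models/9999")

endpoint.deploy(
    model=model_v3,
    machine_type="n1-standard-4",
    traffic_split={
        "1111111111": 50,  # deployed-model ID of v1 (placeholder)
        "2222222222": 25,  # deployed-model ID of v2 (placeholder)
        "0": 25,           # v3, the model deployed by this call
    },
)
```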
Want to practice every question anywhere?
Download Cloud Pass for free, including practice tests, progress tracking, and more.
Your supply chain analytics team plans to run 180 training jobs per day for 10 days (3 feature sets × 4 model architectures × 15 hyperparameter grids) using containerized trainers; they must log per-run metrics (AUC, F1, and loss) with timestamps and be able to query trends over time (for example, 7-day rolling averages and the top 10 configurations by mean F1 in the last 30 days) via an API while minimizing manual effort. Which approach should they use to track and report these experiments?
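As a reference point, a minimal sketch of logging one run from a containerized trainer to Vertex AI Experiments, with placeholder experiment/run names and metric values; logged runs can later be pulled into a DataFrame (e.g., via aiplatform.get_experiment_df()) for trend queries like rolling averages or top configurations.

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="supply-chain-sweep",  # placeholder experiment name
)

aiplatform.start_run("fs2-arch3-grid07")  # one run per training job (placeholder name)
aiplatform.log_params({"feature_set": "fs2", "architecture": "arch3", "grid": 7})
aiplatform.log_metrics({"auc": 0.91, "f1": 0.84, "loss": 0.32})  # illustrative values
aiplatform.end_run()
```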
You are the lead ML engineer at a smart-meter analytics company; you trained a TensorFlow model that flags consumption anomalies, and each day at 01:00 UTC your ETL job writes the previous day’s readings (~1.5 million records, ~60 GB) as newline-delimited JSON to Cloud Storage under prefixes like gs://meter-prod/daily/2025-08-31/*.jsonl; you need to run inference over the entire daily batch with minimal manual intervention and do not require low-latency, per-request responses; what should you do?
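For context, a minimal sketch of launching a Vertex AI batch prediction job over the daily JSONL prefix, assuming the google-cloud-aiplatform SDK and placeholder model/machine settings; in practice the launch itself would be scheduled rather than run by hand.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234")

job = model.batch_predict(
    job_display_name="meter-anomaly-2025-08-31",
    gcs_source="gs://meter-prod/daily/2025-08-31/*.jsonl",
    gcs_destination_prefix="gs://meter-prod/predictions/2025-08-31",
    instances_format="jsonl",
    machine_type="n1-standard-8",  # illustrative machine type
)
job.wait()  # block until the batch job completes
```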
You are fine-tuning a Vision Transformer classifier on 1.2 million 224x224 product images using Keras; on a single NVIDIA T4 GPU with a global batch size of 64, each epoch takes about 90 minutes, and you have already enabled tf.data prefetch(AUTOTUNE), caching, and mixed precision. You switch to a VM with 4 T4 GPUs and wrap model creation/training with tf.distribute.MirroredStrategy, making no other changes and keeping the global batch size at 64; however, the epoch time remains ~90 minutes and per-GPU utilization hovers at 30–40%. Disk throughput and input pipeline profiling show no bottlenecks. What should you do to reduce wall-clock training time?
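As background, MirroredStrategy splits the global batch across replicas, so a global batch of 64 leaves each of the 4 T4s with only 16 samples per step; below is a minimal sketch of scaling the global batch (and, as is common, the learning rate) with the replica count, where build_model and build_dataset are assumed helper functions standing in for the existing Keras model and tf.data pipeline.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica_batch = 64
global_batch = per_replica_batch * strategy.num_replicas_in_sync  # 256 on 4 GPUs

with strategy.scope():
    model = build_model()  # assumed: same Keras ViT factory as before
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-3 * strategy.num_replicas_in_sync),
        loss="sparse_categorical_crossentropy",
    )

dataset = build_dataset(batch_size=global_batch)  # assumed tf.data input pipeline
model.fit(dataset, epochs=10)
```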
You work for an online marketplace that must automatically flag product photos containing restricted brand logos; each image belongs to exactly one class (logo present vs. not present). You trained a convolutional neural network, deployed a model version to Vertex AI Prediction, and attached a model evaluation job to that version. At a softmax decision threshold of 0.50, the evaluation reports precision = 0.71, but the business requires precision >= 0.90. To increase precision by changing only the final layer softmax threshold, what should happen as a consequence of your adjustment?
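To make the tradeoff concrete, a minimal sketch of sweeping the decision threshold with scikit-learn on synthetic stand-in labels and scores: raising the threshold raises precision while recall falls, so fewer photos get flagged overall.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Synthetic stand-ins for held-out labels and model scores (assumptions).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, size=1000), 0.0, 1.0)

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
ok = np.where(precision[:-1] >= 0.90)[0]  # precision[:-1] aligns with thresholds
if ok.size:
    i = ok[0]  # smallest threshold meeting the precision target
    print(f"threshold={thresholds[i]:.2f}  precision={precision[i]:.2f}  recall={recall[i]:.2f}")
```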
You work for a nationwide e-commerce marketplace. After receiving approval to collect the necessary customer behavior data, you trained a Vertex AI AutoML Tabular model to predict the probability that an order will be returned within 30 days. You deployed the model to online prediction, and it serves about 200,000 predictions per day. Seasonal promotions and marketing campaigns may change how features such as discount_rate, shipping_speed, and product_category interact, which could degrade accuracy over time. You want to be alerted if feature interactions change and to understand which features drive the predictions, while keeping monitoring costs low. What should you do?
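For reference, a minimal sketch of a Vertex AI Model Monitoring job that samples a fraction of the ~200,000 daily requests to keep costs low and alerts on feature drift; the helper classes come from the google-cloud-aiplatform model_monitoring module, and all names, thresholds, and emails are assumptions to verify against current SDK docs.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="returns-model-monitoring",
    endpoint="projects/my-project/locations/us-central1/endpoints/5678",
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.1),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=24),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    objective_configs=model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={
                "discount_rate": 0.05,
                "shipping_speed": 0.05,
                "product_category": 0.05,
            },
        ),
    ),
)
```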
Study period: 1 month
Just want to say a massive thank you to the entire Cloud Pass team for helping me pass my exam first time. I won't lie, it wasn't easy, especially the way the real exam is worded. However, the way the practice questions teach you why your option was wrong really helps to frame your mind, understand what the question is asking for, and focus on the right solutions. Thanks once again.
Study period: 1 month
Good question banks and explanations that helped me practise and pass the exam.
Study period: 1 month
I took the lectures and went straight into the practice questions, got around 80% correct, and passed the exam with a high score. The app served me well.
Study period: 1 month
Good mix of theory and practical scenarios
Study period: 1 month
I used the app mainly to review the fundamentals—data preparation, model tuning, and deployment options on GCP. The explanations were simple and to the point, which really helped before the exam.
Get the free app