
Simulate the real exam with 50 questions and a 120-minute time limit. Study with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-verified by three top AI models for maximum accuracy, with detailed per-option explanations and in-depth question analysis.
Your team deployed a regression model that predicts hourly water usage for industrial chillers. Four months after launch, a vendor firmware update changed sensor sampling and units for three input features, and the live feature distributions diverged: 5 of 18 features now have a population stability index > 0.25, 27% of temperature readings fall outside the training range, and production RMSE increased from 0.62 to 1.45. How should you address the input differences in production?
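For background on the drift metric this question leans on, here is a minimal sketch of how the population stability index is typically computed, with bin edges frozen from the training distribution; the function name, bin count, and threshold convention are illustrative, not a prescribed answer.

```python
# Illustrative sketch: population stability index (PSI) for one feature.
# PSI > 0.25 is the conventional "significant drift" threshold cited above.
import numpy as np

def psi(train_values: np.ndarray, live_values: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(train_values, bins=bins)
    # Widen the outer edges so live readings outside the training range
    # (27% of temperatures in this scenario) are counted rather than dropped.
    edges[0], edges[-1] = -np.inf, np.inf
    expected = np.histogram(train_values, bins=edges)[0] / len(train_values)
    actual = np.histogram(live_values, bins=edges)[0] / len(live_values)
    expected = np.clip(expected, 1e-6, None)  # guard log(0) in empty bins
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))
```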
You are building an end-to-end scikit-learn MLOps workflow in Vertex AI Pipelines (Kubeflow Pipelines) that ingests 50 GB of CSV data from Cloud Storage, performs data cleaning, feature selection, model training, and model evaluation, then writes a .pkl model artifact to a versioned path in a GCS bucket. You are iterating on multiple versions of the feature selection and training components, submitting each version as a new pipeline run in us-central1 on n1-standard-4 CPU-only executors; each end-to-end run currently takes about 80 minutes. You want to reduce iteration time during development without increasing your GCP costs; what should you do?
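One mechanism worth knowing here, sketched with assumed project and bucket names: Vertex AI Pipelines supports execution caching, so a re-submitted run can skip components whose code and inputs have not changed (such as the 50 GB ingest and cleaning steps) and reuse their prior outputs.

```python
# Minimal sketch, assuming a compiled KFP spec and illustrative names:
# with caching enabled, unchanged upstream components are served from cache
# on the next run instead of being re-executed.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed IDs

job = aiplatform.PipelineJob(
    display_name="sklearn-dev-iteration",
    template_path="pipeline.json",                 # compiled pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",  # assumed bucket
    enable_caching=True,
)
job.submit()
```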
Your team must deliver an ML solution on Google Cloud to triage warranty claim emails for a global appliance manufacturer into 8 categories within 4 weeks. You are required to use TensorFlow to maintain full control over the model's code, serving, and deployment, and you will orchestrate the workflow with Kubeflow Pipelines. You have 30,000 labeled examples and want to accelerate delivery by leveraging existing resources and managed services instead of training a brand-new model from scratch. How should you build the classifier?
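As an illustration of the reuse the question points at, a hedged Keras sketch that fine-tunes a pre-trained TensorFlow Hub text embedding with a small classification head; the module URL, layer sizes, and training call are assumptions, not a prescribed answer.

```python
# Minimal sketch: transfer learning for 8-way email classification by
# fine-tuning a pre-trained TF Hub sentence embedding (module assumed).
import tensorflow as tf
import tensorflow_hub as hub

embedding = hub.KerasLayer(
    "https://tfhub.dev/google/nnlm-en-dim128/2",
    input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embedding,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8, activation="softmax"),  # 8 warranty categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # 30k labeled examples
```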
You are building an anomaly detection model for an industrial IoT platform using Keras and TensorFlow. The last 24 months of sensor events (~900 million rows, ~2.6 TB) are stored in a single partitioned table in BigQuery, and you need to apply feature scaling, categorical encoding, and time-window aggregations in a cost-effective and efficient way before training. The trained model will be used to run weekly batch inference directly in BigQuery against newly ingested partitions. How should you implement the preprocessing workflow?
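To make the "preprocess and score in place" idea concrete, a hedged sketch that keeps both steps inside BigQuery, driven from Python; the dataset, table, and column names and the SavedModel path are all assumptions.

```python
# Illustrative sketch: SQL preprocessing over the partitioned table, then
# batch inference in place via an imported TensorFlow SavedModel.
from google.cloud import bigquery

client = bigquery.Client()

# 1) Scale, encode, and window-aggregate with SQL (identifiers assumed).
client.query("""
CREATE OR REPLACE TABLE `iot.features` AS
SELECT
  device_id,
  event_ts,
  (reading - AVG(reading) OVER ()) / STDDEV(reading) OVER () AS reading_scaled,
  IF(sensor_type = 'temperature', 1, 0) AS is_temperature,  -- simple encoding
  AVG(reading) OVER (
    PARTITION BY device_id ORDER BY UNIX_SECONDS(event_ts)
    RANGE BETWEEN 3600 PRECEDING AND CURRENT ROW) AS reading_1h_avg
FROM `iot.sensor_events`
""").result()

# 2) Import the trained Keras SavedModel and score new partitions in place.
client.query("""
CREATE OR REPLACE MODEL `iot.anomaly_tf`
OPTIONS (MODEL_TYPE='TENSORFLOW',
         MODEL_PATH='gs://my-bucket/anomaly/saved_model/*')
""").result()
client.query("""
SELECT * FROM ML.PREDICT(MODEL `iot.anomaly_tf`, TABLE `iot.features`)
""").result()
```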
You are building an MLOps workflow for a smart-city traffic analytics project that chains data preprocessing, model training, and model deployment across different Google Cloud services. Traffic cameras upload 40–60 JSONL files per hour (about 50 MB each) in bursts to a Cloud Storage bucket named gs://city-traffic-raw. You have already written the code for each task, and you now need an orchestration layer that runs only when new files have arrived since the last successful run and that minimizes always-on compute costs for orchestration. What should you do?
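One event-driven shape of this, sketched with assumed names: a Cloud Function with a Cloud Storage "finalize" trigger on the bucket submits the pipeline run, so nothing is running between uploads. Deduplication and debouncing of the bursty uploads are omitted from the sketch.

```python
# Illustrative sketch: event-driven kickoff with no always-on orchestrator.
# Deployed with a GCS object-finalize trigger on gs://city-traffic-raw;
# the project ID and compiled pipeline path are assumed.
import functions_framework
from google.cloud import aiplatform

@functions_framework.cloud_event
def on_new_file(event):
    obj = event.data  # metadata of the newly finalized object
    aiplatform.init(project="my-project", location="us-central1")
    aiplatform.PipelineJob(
        display_name="traffic-preprocess-train-deploy",
        template_path="gs://city-traffic-raw/pipelines/pipeline.json",
        parameter_values={"input_file": f"gs://{obj['bucket']}/{obj['name']}"},
    ).submit()
```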
Want to work through every question on the go?
Download Cloud Pass for free: practice exams, study progress tracking, and more.
You work for a real-time multiplayer gaming company. You must design a system that stores and manages player telemetry features (e.g., positions, actions, and matches completed) and server locations over time. The system must provide sub-50 ms online retrieval of the latest features to feed a fraud-detection model for live inference, while the data science team must retrieve a point-in-time consistent snapshot of historical features (e.g., as-of a given timestamp) for training and backtesting. The solution should handle ingestion of approximately 200 million feature rows per day, support feature versioning, and require minimal operational effort. What should you do?
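The point-in-time requirement is the subtle part of this question. As a conceptual illustration only (not the Vertex AI Feature Store API), an "as-of" join shows what the offline store must guarantee: training rows never see feature values written after the label timestamp.

```python
# Conceptual sketch of a point-in-time ("as-of") feature lookup in pandas;
# a managed feature store performs this join server-side at scale.
import pandas as pd

labels = pd.DataFrame({
    "player_id": [7, 7],
    "ts": pd.to_datetime(["2024-05-01 12:00", "2024-05-02 09:00"]),
})
features = pd.DataFrame({
    "player_id": [7, 7, 7],
    "ts": pd.to_datetime(["2024-04-30 23:00", "2024-05-01 11:59",
                          "2024-05-02 10:00"]),
    "matches_completed": [10, 12, 15],
})

# For each label row, take the latest feature value at or before its timestamp.
snapshot = pd.merge_asof(labels.sort_values("ts"), features.sort_values("ts"),
                         on="ts", by="player_id")
print(snapshot)  # each row sees 12, never the later value 15
```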
You are setting up a weekly demand-forecasting workflow for a nationwide grocery chain: you train a custom model on 85 GB of historical sales data stored in Cloud Storage and produce about 6 million batch predictions per run. Compliance requires an auditable end-to-end lineage that links the exact training data snapshot, the resulting model artifact, and each weekly batch prediction job for at least 90 days. What should you do to ensure this lineage is automatically captured across training and prediction?
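As a hedged sketch of one way to make the linkage explicit in the Vertex AI SDK (the names, paths, and serving image are assumptions): carry the training-snapshot identifier through the model upload and each weekly batch prediction job. Running the same steps inside Vertex AI Pipelines would additionally record the lineage in Vertex ML Metadata automatically.

```python
# Illustrative sketch: tag the model and the weekly batch prediction job
# with the exact training-snapshot identifier. All names are assumed.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
SNAPSHOT = "2024-06-03"  # identifies the exact 85 GB training snapshot in GCS

model = aiplatform.Model.upload(
    display_name="demand-forecast",
    artifact_uri=f"gs://models/demand-forecast/{SNAPSHOT}/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"),
    labels={"training_snapshot": SNAPSHOT},
)
model.batch_predict(
    job_display_name=f"weekly-forecast-{SNAPSHOT}",
    gcs_source=f"gs://sales-data/predict-input/{SNAPSHOT}/*.csv",
    gcs_destination_prefix=f"gs://sales-data/predictions/{SNAPSHOT}/",
    instances_format="csv",
    labels={"training_snapshot": SNAPSHOT},
)
```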
Your analytics guild is preparing a time-boxed 3-week prototype, and you must provide a shared Vertex AI Workbench user-managed notebook VM in us-central1 for exactly 8 external contractors while preventing the other 500 project users from opening or running the environment. You will provision the notebook instance yourself, and you need to follow the principle of least privilege while ensuring that notebook code can call Vertex AI APIs during experiments. What should you do to configure access correctly?
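For the API-access half of the requirement, a hedged sketch (the contractor identities and project ID are placeholders): grant only the eight contractors the Vertex AI User role at the project level; access control on the notebook instance itself is configured separately.

```python
# Illustrative sketch: bind roles/aiplatform.user to the 8 contractors only,
# via the Resource Manager API. Identities and project ID are placeholders.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
PROJECT = "my-project"

policy = crm.projects().getIamPolicy(resource=PROJECT, body={}).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/aiplatform.user",
    "members": [f"user:contractor{i}@example.com" for i in range(1, 9)],
})
crm.projects().setIamPolicy(resource=PROJECT, body={"policy": policy}).execute()
```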
You are training custom models with Vertex AI Training to classify defects in 12-megapixel manufacturing photos, and each week you swap in new neural architectures from research to benchmark them on the same fixed 600 GB dataset. You want automatic retraining to occur only when code changes are pushed to the main branch, keep full version control of code and build artifacts, and minimize costs by avoiding always-on orchestration or manual steps. What should you do to meet these requirements?
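A hedged sketch of the submission script a push-triggered build could run after building the training image (the image tag is passed in by the build; all names and the dataset path are assumptions): it launches a Vertex AI custom training job from the freshly built, commit-tagged container.

```python
# Illustrative sketch: submit_training.py, invoked by a build step once the
# training image is built and pushed; argv[1] is the commit-tagged image URI.
import sys
from google.cloud import aiplatform

image_uri = sys.argv[1]  # e.g. ...-docker.pkg.dev/my-project/ml/train:<sha>

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-staging-bucket")  # assumed

job = aiplatform.CustomContainerTrainingJob(
    display_name="defect-classifier-train",
    container_uri=image_uri,
)
job.run(
    machine_type="n1-highmem-16",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    replica_count=1,
    args=["--data=gs://defects/dataset/"],  # fixed 600 GB benchmark (assumed)
)
```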
You are organizing a 24-hour internal ML sprint for a team of 12 data scientists who need to explore and prototype PySpark and Spark SQL transformations on 40 TB of Parquet data stored in Cloud Storage. The environment must be accessible via web-based notebooks, support distributed Spark execution out of the box, and require minimal setup with no manual package installs. What is the fastest way to provide a robust, scalable notebook environment for this sprint?
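For scale, a hedged sketch of what a notebook session looks like once a Dataproc cluster with the Jupyter optional component is up; the bucket path and column name are assumptions.

```python
# Illustrative sketch: in a Dataproc-backed notebook the Spark session is
# already wired to the cluster; reading GCS Parquet needs no extra setup.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
events = spark.read.parquet("gs://sprint-data/telemetry/")  # assumed path

events.createOrReplaceTempView("events")
spark.sql("""
    SELECT event_type, COUNT(*) AS n   -- event_type column is assumed
    FROM events
    GROUP BY event_type
    ORDER BY n DESC
""").show()
```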
Study period: 1 month
Just want to say a massive thank you to the entire Cloud Pass team for helping me pass my exam the first time. I won't lie, it wasn't easy, especially with the way the real exam is worded, but the way the practice questions teach you why your option was wrong really helps to frame your mind, understand what the question is asking for, and focus on the right solutions. Thanks once again.
Study period: 1 month
Good question banks and explanations that helped me practise and pass the exam.
Study period: 1 month
After the lectures I went straight into the practice questions and got around 80% correct, then passed the exam with a high score. The app served me well.
Study period: 1 month
Good mix of theory and practical scenarios
Study period: 1 month
I used the app mainly to review the fundamentals—data preparation, model tuning, and deployment options on GCP. The explanations were simple and to the point, which really helped before the exam.
Get the free app