
Simulate the real exam with 50 questions and a 120-minute time limit. Study with AI-verified answers and detailed explanations.
AI-Powered
Every answer is cross-verified by three top AI models to ensure the highest accuracy. Detailed explanations for each answer choice and in-depth question analysis are included.
Your marketing analytics team needs to run a weekly PySpark batch job on Google Cloud Dataproc that scores customer churn propensity from input data in Cloud Storage and writes the results to BigQuery. Testing shows the workload completes in about 35 minutes on a 16-worker n1-standard-4 cluster when triggered every Friday at 02:00 UTC. You are asked to cut infrastructure costs without rewriting the job or changing the schedule. How should you configure the cluster for cost optimization?
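As a point of reference for this scenario, here is a minimal sketch of one cost-oriented setup: an ephemeral cluster created just before the weekly run, sized mostly with preemptible secondary workers and given an idle-delete TTL. The project, region, cluster name, and worker split are assumptions, not the verified answer.

```python
# Sketch only: create an ephemeral Dataproc cluster for the weekly churn-scoring run,
# using preemptible secondary workers and an idle-delete TTL so the cluster is removed
# shortly after the ~35-minute job finishes. Project, region, and names are placeholders.
from google.cloud import dataproc_v1

project_id = "my-project"   # placeholder
region = "us-central1"      # placeholder

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "churn-scoring-ephemeral",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        # Keep a small core of standard workers for shuffle stability.
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        # Most capacity comes from cheaper preemptible secondary workers.
        "secondary_worker_config": {
            "num_instances": 14,
            "preemptibility": "PREEMPTIBLE",
        },
        # Auto-delete the cluster after 10 minutes of idleness.
        "lifecycle_config": {"idle_delete_ttl": {"seconds": 600}},
    },
}

operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
operation.result()  # blocks until the cluster is ready
```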
Your company runs a private Google Kubernetes Engine (GKE) cluster in a custom VPC in us-central1 using a subnetwork named analytics-subnet; due to the organization policy constraints/compute.vmExternalIpAccess, all nodes have only internal IPs with no external IPs. A nightly Kubernetes Job must download 500 MB CSV files from Cloud Storage and load transformed results into BigQuery using the BigQuery Storage Write API, but pods fail with DNS resolution/connection errors when contacting storage.googleapis.com and bigquery.googleapis.com. What should you do to allow access to Google APIs while keeping the nodes on internal IPs only?
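One commonly cited piece of this scenario is reaching Google APIs from internal-only nodes by enabling Private Google Access on the subnet. The sketch below shows that single step with the Compute Engine Python client; the project and region values are assumptions, and it is not a complete answer to the question.

```python
# Sketch only: enable Private Google Access on analytics-subnet so nodes with
# internal IPs can reach storage.googleapis.com and bigquery.googleapis.com.
# Project and region values are placeholders.
from google.cloud import compute_v1

project = "my-project"       # placeholder
region = "us-central1"       # placeholder
subnetwork = "analytics-subnet"

client = compute_v1.SubnetworksClient()
operation = client.set_private_ip_google_access(
    project=project,
    region=region,
    subnetwork=subnetwork,
    subnetworks_set_private_ip_google_access_request_resource=(
        compute_v1.SubnetworksSetPrivateIpGoogleAccessRequest(
            private_ip_google_access=True
        )
    ),
)
operation.result()  # wait for the subnet update to complete
```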
Your mobility startup needs to build a predictive maintenance model with BigQuery ML and deploy a near real-time prediction endpoint on Vertex AI. You will ingest continuous telemetry from 12 scooter OEMs averaging 80,000 messages per minute, with an end-to-end latency target under 3 seconds, and incoming payloads may include malformed JSON, missing fields, and outliers (for example, speed > 120 km/h). What should you do to reliably ingest, validate, and deliver this data for training and inference?
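To make the validation requirement concrete, here is a small sketch of a routing step that separates clean telemetry from malformed or out-of-range payloads using Pub/Sub topics. The field names, threshold, and topic names are assumptions for illustration only.

```python
# Sketch only: validate a raw telemetry payload before it is forwarded for
# training/inference. Field names, thresholds, and topic names are assumptions.
import json
from google.cloud import pubsub_v1

REQUIRED_FIELDS = {"scooter_id", "oem", "speed_kmh", "battery_pct", "event_ts"}
MAX_SPEED_KMH = 120

publisher = pubsub_v1.PublisherClient()
valid_topic = publisher.topic_path("my-project", "telemetry-valid")      # placeholder
dead_letter_topic = publisher.topic_path("my-project", "telemetry-dlq")  # placeholder


def route_message(raw: bytes) -> None:
    """Publish valid telemetry to the clean topic, everything else to a DLQ."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        publisher.publish(dead_letter_topic, raw, reason="malformed_json")
        return

    missing = REQUIRED_FIELDS - record.keys()
    if missing or record.get("speed_kmh", 0) > MAX_SPEED_KMH:
        publisher.publish(dead_letter_topic, raw, reason="failed_validation")
        return

    publisher.publish(valid_topic, raw)
```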
A media intelligence firm receives irregularly timed 2–5 GB CSV files from 50 partners into a dedicated Cloud Storage bucket via Storage Transfer Service. A Dataproc PySpark job must then standardize the files and write them to BigQuery, followed by table-specific BigQuery SQL transformations that vary by table and can run for up to 3 hours across roughly 600 destination tables. You must design the most efficient and maintainable workflow to process all tables promptly and deliver the freshest results to analysts. What should you do?
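For orientation, one way to express this kind of multi-stage workflow is an orchestrated DAG, for example in Cloud Composer (Airflow). The sketch below chains the PySpark standardization job with per-table BigQuery transformations; the DAG id, project, cluster, file URIs, and table list are assumptions, and this is not presented as the verified answer.

```python
# Sketch only: an Airflow DAG (for Cloud Composer) that runs the PySpark
# standardization job and then fans out per-table BigQuery SQL transformations.
# DAG id, project/region, job URIs, and the table list are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT = "my-project"
REGION = "us-central1"
TABLES = ["table_a", "table_b"]  # in practice, roughly 600 destination tables

with DAG(
    dag_id="partner_file_standardization",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,  # triggered when new partner files arrive
    catchup=False,
) as dag:
    standardize = DataprocSubmitJobOperator(
        task_id="standardize_csvs",
        project_id=PROJECT,
        region=REGION,
        job={
            "placement": {"cluster_name": "standardize-cluster"},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/standardize.py"},
        },
    )

    for table in TABLES:
        transform = BigQueryInsertJobOperator(
            task_id=f"transform_{table}",
            configuration={
                "query": {
                    "query": f"CALL analytics.transform_{table}()",  # placeholder SQL
                    "useLegacySql": False,
                }
            },
        )
        standardize >> transform
```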
In a fintech company, Business Intelligence developers hold the Project Owner role in their respective Google Cloud projects to work across multiple services. Your compliance policy requires that all Cloud Storage Data Access audit logs be retained for 180 days, and only the internal audit team may read these logs across all current and future projects. What should you do?
Want to solve every question on the go?
Download Cloud Pass for free: practice exams, study progress tracking, and more.
A global ride-hailing platform is migrating driver and trip ledgers from multiple transactional sources (Cloud SQL for MySQL and an on-prem PostgreSQL cluster) into BigQuery. These systems emit log-based CDC events (operation type INSERT/UPDATE/DELETE, commit_ts, and primary key) at a steady 7,500 rows/sec with spikes to 18,000 rows/sec. Product managers require that changes become queryable in a BigQuery reporting table within 60 seconds, and the data team must reduce slot consumption for applying changes by at least 40% compared to per-row DML. You will stream the CDC events continuously into BigQuery. Which two steps should you take so that changes reach the reporting table with minimal latency while keeping compute overhead low? (Choose two.)
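To illustrate the "apply changes without per-row DML" constraint, here is a sketch of a periodic MERGE from a CDC staging table into the reporting table, issued through the BigQuery Python client. Dataset, table, and column names are assumptions, and the staging table is assumed to buffer the streamed events.

```python
# Sketch only: apply buffered CDC events to the reporting table with a single
# periodic MERGE instead of per-row DML. Dataset, table, and column names are
# placeholders.
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE `my_project.ledger.trips_reporting` AS t
USING (
  -- keep only the most recent change per primary key
  SELECT * EXCEPT(rn) FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY trip_id ORDER BY commit_ts DESC) AS rn
    FROM `my_project.ledger.trips_cdc_staging`
  )
  WHERE rn = 1
) AS s
ON t.trip_id = s.trip_id
WHEN MATCHED AND s.op_type = 'DELETE' THEN DELETE
WHEN MATCHED THEN UPDATE SET t.status = s.status, t.fare = s.fare, t.commit_ts = s.commit_ts
WHEN NOT MATCHED AND s.op_type != 'DELETE' THEN
  INSERT (trip_id, status, fare, commit_ts) VALUES (s.trip_id, s.status, s.fare, s.commit_ts)
"""

client.query(merge_sql).result()  # run on a short schedule, e.g. every 30-60 seconds
```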
Your retail analytics team receives mixed-format files (Avro and JSON) from branch exports and a partner SFTP feed, totaling about 300 GB per day and up to 2 million objects per month. You must land all files in a Cloud Storage bucket encrypted with your own customer-managed encryption key (CMEK), and you want to build the ingestion with a GUI-driven pipeline where you can explicitly configure an object sink that uses your KMS key. What should you do?
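Whichever pipeline tool is chosen, the landing bucket's default CMEK can be configured up front. Here is a small sketch with the Cloud Storage Python client; the bucket name, location, and key resource name are placeholders.

```python
# Sketch only: create the landing bucket with a default customer-managed
# encryption key (CMEK). Bucket name, location, and key resource name are placeholders.
from google.cloud import storage

client = storage.Client()

bucket = client.bucket("retail-mixed-format-landing")
bucket.default_kms_key_name = (
    "projects/my-project/locations/us/keyRings/ingest-ring/cryptoKeys/landing-key"
)
client.create_bucket(bucket, location="US")
```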
You are migrating a nightly batch ETL for an e-commerce company: at 02:00 UTC, about 300 GB of gzip-compressed JSON files with sensitive purchase data land in a Google Cloud Storage bucket (gs://orchid-orders-batch), and a PySpark job on a temporary Cloud Dataproc cluster (1 master, 8 workers) transforms them and writes aggregated results to a BigQuery dataset (analytics.orders_agg) in the same project. You currently trigger the job manually with your user account, but you want to automate it while following security best practices and the principle of least privilege. How should you run this workload securely?
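As context for the automation requirement, the sketch below submits the nightly PySpark job programmatically instead of from a user account. The project, region, cluster name, and script path under gs://orchid-orders-batch are assumptions, and the caller is assumed to be a dedicated, least-privilege service account.

```python
# Sketch only: submit the nightly PySpark job programmatically (e.g. from a
# scheduler task) rather than with a user account. Project, region, cluster,
# and file URIs are placeholders; the caller is assumed to be a dedicated
# service account with read access to the input bucket, write access to
# analytics.orders_agg, and permission to submit Dataproc jobs.
from google.cloud import dataproc_v1

project_id = "my-project"   # placeholder
region = "us-central1"      # placeholder

job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "orders-etl-ephemeral"},  # placeholder cluster
    "pyspark_job": {
        "main_python_file_uri": "gs://orchid-orders-batch/jobs/transform_orders.py",  # placeholder path
    },
}

result = job_client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
).result()
print(f"Job finished with state: {result.status.state.name}")
```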
A Singapore-based fintech platform ingests real-time authorization events from point-of-sale terminals worldwide, and the primary ledger table grows by approximately 280,000 rows per second. Multiple partner banks integrate your query APIs to embed live risk and compliance checks into their own systems. Your query APIs must meet the following requirements:
Your team is building a Google Cloud–hosted tool to auto-tag up to 80 customer support emails per second with topic labels so agents can route them. You must release this in 10 business days with zero additional headcount and no team ML experience, and the labels only need to capture subject matter such as product names or issue types. What should you do?
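Given the no-ML-experience constraint, a pretrained API is one plausible direction. The sketch below calls the Cloud Natural Language content-classification method on an email body; the sample text and the mapping from returned categories to routing labels are assumptions, and this is not presented as the verified answer.

```python
# Sketch only: label an email body with the pretrained Cloud Natural Language
# content-classification model. The mapping from returned categories to routing
# labels is an assumption and is not shown.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()


def classify_email(body: str) -> list[str]:
    """Return content-classification category names for one support email."""
    document = language_v1.Document(
        content=body, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.classify_text(request={"document": document})
    return [category.name for category in response.categories]


print(
    classify_email(
        "My SuperRouter X200 keeps dropping the Wi-Fi connection every few "
        "minutes, and the mobile app can no longer find the device on my network."
    )
)
```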
Study period: 1 month
I tend to get overwhelmed with large exams, but doing a few questions every day kept me on track. The explanations and domain coverage felt balanced and practical. Happy to say I passed on the first try.
Study period: 2 months
Thank you! These practice questions helped me pass the GCP PDE exam on the first try.
Study period: 1 month
The layout and pacing make it comfortable to study on the bus or during breaks. I solved around 20–30 questions a day, and after a few days I could feel my confidence improving.
Study period: 1 month
The explanations are in English, but they still helped! The questions are also similar to the real exam, which is great. haha
Study period: 2 months
I combined this app with some hands-on practice in GCP, and the mix worked really well. The questions pointed out gaps I didn’t notice during practice labs. Good companion for PDE prep.
Get the Free App