
Simulate the real exam with 50 questions and a 120-minute time limit. Study with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-verified by three leading AI models to ensure top accuracy. Detailed per-option explanations and in-depth question analysis are provided.
Your media-streaming platform runs an AlloyDB for PostgreSQL cluster in us-central1 that is accessible only via a private IP. Compliance forbids opening a public endpoint or installing agents on database VMs. You must continuously replicate a subset of 12 tables (about 1.8 TB total, up to 2,500 row changes per second) into BigQuery for analytics and ML, with end-to-end latency under 10 seconds and 99.9% delivery reliability, using Google-managed services that automatically handle schema changes and scale without downtime. What should you do?
You are designing a global order reconciliation service for a multinational retail platform. Applications in North America, Europe, and Asia must be able to read and write concurrently, with p95 cross-region transaction commit latency under 200 ms. The database must be fully managed, relational (ANSI SQL), provide global external consistency with RPO=0, support online schema changes, and meet a 99.99% availability target. Which Google Cloud service should you choose?
Your company needs to relocate a high-traffic payment ledger database from a co-location data center to Google Cloud. The source is MySQL 8.0.28 using the InnoDB engine with binary logging enabled in ROW format; the dataset is 1.6 TB, averaging 900 TPS with bursts up to 1,500 TPS. A dedicated Cloud VPN tunnel provides 1 Gbps bandwidth between on-premises and Google Cloud. You must preserve ACID transactions and keep the production cutover under 3 minutes at 02:00 UTC. Cloud SQL for MySQL supports the source version. What should you do?
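As a rough sanity check on this scenario's numbers (context only, not the graded answer): the initial load of the 1.6 TB dataset over the 1 Gbps VPN tunnel takes hours, which is why a continuous-replication migration with a short final cutover matters. The estimate below assumes a hypothetical 80% sustained link utilization:

```python
# Back-of-the-envelope estimate of the initial load time for the
# 1.6 TB dataset over the 1 Gbps Cloud VPN tunnel described above.
DATASET_BYTES = 1.6e12   # 1.6 TB
LINK_BPS = 1e9           # 1 Gbps tunnel
UTILIZATION = 0.8        # assumed sustained utilization (hypothetical)

effective_bytes_per_sec = LINK_BPS * UTILIZATION / 8
seconds = DATASET_BYTES / effective_bytes_per_sec
print(f"Initial load: ~{seconds / 3600:.1f} hours")
```

Since the bulk copy alone takes several hours, only ongoing change replication after the initial load can keep the production cutover under the 3-minute budget.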
Your team runs a meal-delivery ordering platform in us-central1. The API is served from Cloud Run, and transactional data is stored in a single Cloud SQL for PostgreSQL instance with automatic maintenance updates enabled. 92% of customers are in the America/Chicago time zone and expect the app to be available every day from 6:00 to 22:00 local time. Security policy requires that database maintenance patches be applied within 7 days of release. You need to apply regular Cloud SQL maintenance without creating downtime for users during operating hours. What should you do?
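A quick way to reason about this scenario: the 22:00–06:00 America/Chicago off-hours window maps to different UTC hours depending on daylight saving time, which matters when maintenance windows are configured in UTC. A minimal standard-library sketch of the conversion:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

CHICAGO = ZoneInfo("America/Chicago")
UTC = ZoneInfo("UTC")

def closing_time_utc(year, month, day):
    """Return the UTC hour corresponding to 22:00 local time in Chicago."""
    local = datetime(year, month, day, 22, 0, tzinfo=CHICAGO)
    return local.astimezone(UTC).hour

print(closing_time_utc(2024, 1, 15))  # CST (UTC-6): 22:00 local -> 4 UTC
print(closing_time_utc(2024, 7, 15))  # CDT (UTC-5): 22:00 local -> 3 UTC
```

So an "after closing" maintenance slot must account for the window shifting by an hour twice a year.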
You operate a Cloud SQL for PostgreSQL deployment in Google Cloud. The primary instance runs in zone europe-west1-b, and a read replica runs in zone europe-west1-c within the same region. An alert reports that the read replica in europe-west1-c was unreachable for 11 minutes due to a zonal network disruption. You must ensure that the read-only workload continues to function and that the replica remains available with minimal manual intervention. What should you do?
Want to solve every question on the go?
Download Cloud Pass for free — it offers practice exams, study progress tracking, and more.
You manage a Cloud SQL for PostgreSQL instance and currently run a 120-line Python 3.11 script that executes SELECT statements to collect table bloat and index usage metrics, writes a CSV to a Cloud Storage bucket, and publishes a Pub/Sub message. The script is stateless and completes in under 90 seconds. You need to run this job automatically every Monday at 02:00 UTC while minimizing operational cost and operational overhead. What should you do?
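For context on the schedule in this question (a hedged sketch, not the answer key): the Cloud Scheduler cron expression for "every Monday at 02:00 UTC" is `0 2 * * 1`. The snippet below shows how a stateless job like this might serialize its metrics to CSV before upload; the column names and row values are hypothetical placeholders, not the script's real schema:

```python
import csv
import io

# Cloud Scheduler cron for "every Monday at 02:00 UTC"
# (fields: minute hour day-of-month month day-of-week).
SCHEDULE = "0 2 * * 1"

def metrics_to_csv(rows):
    """Serialize metric rows (hypothetical schema) to a CSV string,
    as the script would before writing the file to Cloud Storage."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["table_name", "bloat_pct", "index_scans"])
    writer.writerows(rows)
    return buf.getvalue()

print(metrics_to_csv([("orders", 12.5, 30412)]))
```

Because the job is stateless and finishes well under any serverless execution limit, it fits a scheduled container or function invocation with no always-on infrastructure.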
A nightly containerized ETL job runs as the service account sa-etl@project-id.iam.gserviceaccount.com. It must read rows from a Cloud SQL for MySQL instance in us-central1 through the Cloud SQL Auth Proxy using a standard MySQL user (not IAM database authentication), and then upsert up to 50,000 records into a regional Cloud Spanner database in us-central1. Following the principle of least privilege, you must grant only predefined IAM roles so the job can connect to Cloud SQL to read data and modify a table in Cloud Spanner. Which roles should you assign to the service account?
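One least-privilege pairing commonly discussed for this pattern (an assumption for illustration, not a confirmed answer key) is `roles/cloudsql.client`, which permits the Auth Proxy to open connections, together with `roles/spanner.databaseUser`, which permits reads and DML on a Spanner database. A small helper that renders the corresponding `gcloud` bindings:

```python
SA = "sa-etl@project-id.iam.gserviceaccount.com"

# Candidate predefined roles (assumed pairing, not an official answer key):
# - roles/cloudsql.client      -> connect via the Cloud SQL Auth Proxy
# - roles/spanner.databaseUser -> read and modify Spanner table data
ROLES = ["roles/cloudsql.client", "roles/spanner.databaseUser"]

def binding_cmd(project, role):
    """Render the gcloud command that grants `role` to the ETL service account."""
    return (
        f"gcloud projects add-iam-policy-binding {project} "
        f"--member=serviceAccount:{SA} --role={role}"
    )

for role in ROLES:
    print(binding_cmd("project-id", role))
```

Note that because the MySQL login is a standard database user, no IAM database-authentication role is involved; IAM only governs the network connection and the Spanner data access.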
Your logistics company is launching a live fleet-tracking map powered by Cloud Bigtable. The service performs approximately 150,000 random point-read operations per second with a 60:1 read-to-write ratio, requires 95th-percentile read latency under 10 ms in a single region, and stores about 25 TB of frequently accessed hot data. To meet these latency and throughput targets, which Bigtable storage type should you choose?
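For intuition about the numbers in this question: using a rough planning figure of about 10,000 point reads per second per SSD node (an assumption; actual per-node throughput varies with workload and schema), the read target alone implies a cluster on the order of 15 nodes. Per-node storage limits change over time, so only the throughput estimate is sketched here:

```python
import math

READS_PER_SEC = 150_000
READS_PER_SSD_NODE = 10_000  # rough planning figure (assumption)

# Nodes needed to serve the read throughput target alone.
nodes_for_throughput = math.ceil(READS_PER_SEC / READS_PER_SSD_NODE)
print(nodes_for_throughput)  # -> 15
```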
You are designing the database layer for a global real-time loyalty points platform for a retail conglomerate. The system must support ACID transactions with strong consistency for balance updates and redemptions. It must store both relational data (customers, stores, balances) and semi-structured JSON attributes (dynamic campaign metadata, per-store rules). It must scale transparently with multi-region writes and low-latency access across at least 3 continents, with RPO=0 and RTO<60 seconds during regional failures. You need a Google Cloud database that satisfies all of these requirements. What should you choose?
You operate a Cloud SQL for MySQL primary instance (sql-prod-eu) in europe-west1 with two cross-region read replicas (sql-dr-us in us-central1 and sql-dr-asia in asia-southeast1). During a 45-minute disaster recovery exercise, you stopped sql-prod-eu and promoted sql-dr-us to primary to handle up to 3,000 write QPS. Now that the exercise is complete, you must restore the pre-exercise topology, with sql-prod-eu as the primary in europe-west1 and cross-region replicas in us-central1 and asia-southeast1, while preserving all writes that occurred during the test. What should you do?
Study period: 1 month
Cloud Pass helped me master the material through practical, realistic questions. The explanations were clear and helped me understand.
Study period: 1 month
I took the exam after finishing all the questions, and many of them were similar. Good luck to everyone else!
Study period: 2 weeks
I studied with Cloud Pass for two weeks, doing around 20 to 30 questions a day. The difficulty level was very similar to the real PCDBE exam. If you're preparing for this certification, this app is a must-have.
Study period: 1 month
Very close to the real exam. Thanks
Study period: 1 month
Being able to reset my progress and re-solve the hard questions helped me a lot. Passed!
Get the free app