
GCP
163+ Free Practice Questions (with AI-Verified Answers)
AI-Powered
Every Google Professional Cloud Database Engineer answer is cross-verified by three top AI models to ensure the highest accuracy. Detailed explanations for each answer choice and in-depth question analysis are provided.
Your media-streaming platform uses Memorystore for Redis (Standard Tier, Redis 6.x) as a cache for user session tokens and frequently requested metadata; during live event traffic spikes, p95 cache latency jumps from 6 ms to 180–250 ms, and Cloud Monitoring shows memory utilization at 95–98% with Redis INFO reporting ~25,000 evicted_keys/min and an eviction policy of allkeys-lru; average key size is ~1.5 KB, average TTL is 5 minutes, CPU stays under 40%, and network RTT between your GKE cluster and the Redis instance is ~1 ms in the same region; you need to reduce the frequency and impact of these latency spikes. What should you do?
Your healthcare analytics company is migrating a self-managed PostgreSQL 12 database from an on-premises datacenter to Cloud SQL for PostgreSQL; after cutover, the system must tolerate a single-zone outage in the target region with no more than 3 minutes of disruption (RTO ≤ 3 minutes) and zero transaction loss (RPO = 0), and you want to follow Google-recommended practices for the migration—what should you do?
Your edtech platform uses Cloud Firestore for storage and serves a React web app to a global audience. Each day at 00:00 UTC, you publish the same Top 20 practice tips (20 documents, ~5 KB each; total payload ~100 KB) to approximately 5 million daily active users across North America, Europe, and APAC. You need to cut Firestore read costs and achieve sub-150 ms p95 load times for this daily list while the content remains identical for 24 hours. What should you do?
Your travel-tech company operates a globally distributed seat allocation platform on Cloud Spanner with 3 read-write and 6 read-only regions; after importing 12 million seat and route records from a partner, you observe write latency spikes and CPU hotspots on 2 of 8 leader replicas, and Cloud Monitoring shows hot ranges on a table keyed by a monotonically increasing ticket_id and a composite key (route_id, class) where class has only 3 distinct values; at peak 35,000 writes/second, 70% of writes concentrate in a narrow key range; to follow Google-recommended schema design practices and avoid hotspots without sacrificing strong consistency or availability, what should you do? (Choose two.)
You are building a Pub/Sub–triggered service on Cloud Functions (2nd gen) in us-central1 that must connect to a Cloud SQL for PostgreSQL instance. Your security policy requires the database to accept connections only from workloads inside the prod-vpc VPC, with no public internet exposure. The function can scale up to 200 concurrent requests during peak load, and you need stable connection management. What should you do to meet the security and performance requirements?
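As background for scenarios like the one above, a 2nd-gen function can reach a private-IP Cloud SQL instance by routing its egress through a Serverless VPC Access connector in the same VPC. The sketch below uses placeholder names (`conn-prod`, `my-fn`, `orders-topic`); it illustrates the mechanism, not a verified answer key:

```shell
# Create a Serverless VPC Access connector inside prod-vpc
# (placeholder connector name and IP range).
gcloud compute networks vpc-access connectors create conn-prod \
  --region=us-central1 --network=prod-vpc --range=10.8.0.0/28

# Deploy the Pub/Sub-triggered function so all egress goes through the
# connector, letting it reach the instance's private IP with no public exposure.
gcloud functions deploy my-fn --gen2 --region=us-central1 \
  --runtime=python312 --trigger-topic=orders-topic \
  --vpc-connector=conn-prod --egress-settings=all
```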
Want to keep solving questions on the go?
Download Cloud Pass for free: practice exams, study progress tracking, and more.
Your healthcare analytics platform is closing its private data center and must move a 2-node Oracle RAC 19c OLTP cluster (24 TB) that uses ASM and SCAN listeners to Google Cloud. The application depends on RAC services with FAN/TAF-based failover and requires minimal to no code changes, while maintaining equivalent performance (~5 ms storage latency and >40,000 TPS) after migration. You need a supported landing zone that preserves the RAC architecture and existing Oracle licensing with the least rework. What should you do?
Your media-streaming platform runs an AlloyDB for PostgreSQL cluster in us-central1 that is accessible only via a private IP; compliance forbids opening a public endpoint or installing agents on database VMs, and you must continuously replicate a subset of 12 tables (about 1.8 TB total, up to 2,500 row changes per second) into BigQuery for analytics and ML with end-to-end latency under 10 seconds and 99.9% delivery reliability using Google-managed services that automatically handle schema changes and scale without downtime; what should you do?
Your company runs a global event-ticketing platform on Google Cloud. Your engineering team is building a real-time seat reservation service that must prevent double-booking via strongly consistent, durable writes and elastically scale with live traffic spikes. The system must sustain up to 120,000 write operations per second at peak, keep p95 write latency under 15 ms, and incur less than 5 minutes of unplanned downtime per month (99.99% availability). You need a primary data store with very low-latency, high write throughput, and a 99.99% uptime SLA for production. What should you do?
You operate a real-time sports ticketing platform that uses Cloud SQL for MySQL in asia-northeast1. At 10:12 JST on a weekday launch, the instance entered an automatic maintenance event that caused 6 minutes of downtime, impacting over 15,000 concurrent users. Your SLO requires that any maintenance occur only outside your peak window of 09:00–22:00 JST, and you must make an immediate configuration change without migrating platforms or adding new components. What should you do to prevent future maintenance from occurring during peak hours?
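For context on scenarios like this, Cloud SQL maintenance windows are specified in UTC. With a 09:00–22:00 JST peak (JST = UTC+9), an off-peak window must start between 13:00 and 00:00 UTC. A sketch with a placeholder instance name:

```shell
# Placeholder instance name: ticketing-mysql.
# 17:00 UTC = 02:00 JST, well outside the 09:00-22:00 JST peak window.
gcloud sql instances patch ticketing-mysql \
  --maintenance-window-day=SAT \
  --maintenance-window-hour=17
```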
You are designing a global order reconciliation service for a multinational retail platform. Applications in North America, Europe, and Asia must be able to read and write concurrently, with p95 cross-region transaction commit latency under 200 ms. The database must be fully managed, relational (ANSI SQL), provide global external consistency with RPO=0, support online schema changes, and meet a 99.99% availability target. Which Google Cloud service should you choose?
A global event ticketing platform runs its reservation and seat-allocation system on Cloud SQL for PostgreSQL 14; your SLOs require RTO ≤ 5 minutes and RPO ≤ 5 minutes, you must tolerate a single-zone outage with zero data loss and also be able to recover from a regional outage within minutes, and you need to choose a high-availability and disaster-recovery topology that meets these constraints without changing the application. What should you do?
You manage a development analytics environment running on Cloud SQL for MySQL in us-central1. The instance stores approximately 1.5 TB of data, with automated backups enabled to run daily and retain 7 copies. The workload is not mission-critical, and an RPO of 24 hours is acceptable. You need to lower monthly backup storage charges without disabling backups or changing the instance machine type. What should you change to reduce backup costs?
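As a reference point for cost questions like this one, the number of retained automated backups is an instance setting. A sketch with a placeholder instance name (the retention count shown is illustrative):

```shell
# Placeholder instance name: dev-analytics. With a 24-hour RPO being
# acceptable, retaining fewer automated backups (e.g. 2 instead of 7)
# reduces backup storage charges without disabling backups.
gcloud sql instances patch dev-analytics \
  --retained-backups-count=2
```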
You are designing a new centralized fleet maintenance scheduling system for 180 municipal depots, each with approximately 450 GB of historical data; you plan to onboard 12 depots per week over a 15-week phased rollout; the solution must use an SQL database, minimize costs and user disruption during each regional cutover, and allow capacity to scale up during weekday peaks and down on nights and public holidays to control spend; what should you do?
Your company needs to relocate a high-traffic payment ledger database from a co-location data center to Google Cloud. The source is MySQL 8.0.28 using the InnoDB engine with binary logging enabled in ROW format; the dataset is 1.6 TB, averaging 900 TPS with bursts up to 1,500 TPS. A dedicated Cloud VPN tunnel provides 1 Gbps bandwidth between on-premises and Google Cloud. You must preserve ACID transactions and keep the production cutover under 3 minutes at 02:00 UTC. Cloud SQL for MySQL supports the source version. What should you do?
During a quarterly disaster-recovery review at a fintech company, you discover that a production Cloud SQL for MySQL 8.0 instance (single-zone, us-central1-a, 800 GB with storage auto-increase enabled) is not configured for high availability (HA), while SLOs require RTO < 60 seconds and zero data loss during zonal failures; you have a 30-minute maintenance window this weekend and cannot accept multi-hour downtime for data migration. Following Google-recommended practices, how should you enable HA on this existing instance with the least operational risk?
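For background, high availability on an existing Cloud SQL instance is controlled by its availability type. A sketch with a placeholder instance name, not a verified answer:

```shell
# Placeholder instance name: fintech-mysql. Switching to REGIONAL adds a
# synchronous standby in another zone of the same region; the change
# typically involves a brief restart, so run it in the maintenance window.
gcloud sql instances patch fintech-mysql \
  --availability-type=REGIONAL
```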
Your team runs a meal-delivery ordering platform in us-central1. The API is served from Cloud Run, and transactional data is stored in a single Cloud SQL for PostgreSQL instance with automatic maintenance updates enabled. 92% of customers are in the America/Chicago time zone and expect the app to be available every day from 6:00 to 22:00 local time. Security policy requires that database maintenance patches be applied within 7 days of release. You need to apply regular Cloud SQL maintenance without creating downtime for users during operating hours. What should you do?
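As with the earlier maintenance-window scenario, the window hour is given in UTC, so it must be converted from the users' local time. A sketch with a placeholder instance name:

```shell
# Placeholder instance name: meals-pg. 08:00 UTC is 02:00 (CST) or
# 03:00 (CDT) in America/Chicago, outside the 06:00-22:00 operating hours,
# while still letting patches land within the 7-day policy.
gcloud sql instances patch meals-pg \
  --maintenance-window-day=TUE \
  --maintenance-window-hour=8
```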
You are deploying a new Java service on a Windows Server 2019 VM in your company’s on-premises data center (no Cloud VPN or Interconnect to Google Cloud), and the service uses JDBC on port 5432 to connect to a Cloud SQL for PostgreSQL instance that has a private IP 10.20.30.40 and a public IP 34.98.120.10, with SSL disabled on the instance; you must ensure the service can access the database without making any configuration changes to the Cloud SQL instance—what should you do?
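For context, the Cloud SQL Auth Proxy provides its own encrypted, IAM-authorized tunnel to the instance's public endpoint, so no instance-side configuration change is needed. A sketch using a placeholder instance connection name:

```shell
# Placeholder connection name: my-project:us-east1:pg-prod.
# Run the proxy on the Windows VM (cloud-sql-proxy.exe on Windows);
# the Java service then points its JDBC URL at localhost:5432.
./cloud-sql-proxy --port 5432 my-project:us-east1:pg-prod
```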
You manage a small, non-critical Cloud SQL for PostgreSQL instance (2 vCPUs, 50-GB storage) used by a staging QA pipeline; the business accepts a recovery point objective of up to 72 hours and you want to minimize ongoing operational and storage costs while still retaining basic recovery capability—what backup/restore configuration should you choose?
You operate a Cloud SQL for PostgreSQL deployment in Google Cloud. The primary instance runs in zone europe-west1-b, and a read replica runs in zone europe-west1-c within the same region. An alert reports that the read replica in europe-west1-c was unreachable for 11 minutes due to a zonal network disruption. You must ensure that the read-only workload continues to function and that the replica remains available with minimal manual intervention. What should you do?
You are migrating a vendor CRM database from a legacy SQL Server 2014 Enterprise instance running on a 3‑node VMware cluster with a Fibre Channel SAN to a single Cloud SQL for SQL Server instance in Google Cloud. Storage telemetry from the SAN shows peak read workloads reaching approximately 27,000 IOPS with read latency under 2 ms during quarterly reporting. You want to size the Cloud SQL instance to maximize read performance while keeping licensing costs minimal. What should you do?
Study period: 1 month
Cloud Pass helped me master the material through practical, realistic questions. The explanations were clear and helped me understand.
Study period: 1 month
I worked through all the questions before taking the exam, and many of the real questions were similar. Good luck to everyone else preparing!
Study period: 2 weeks
I studied with Cloud Pass for two weeks, doing around 20–30 questions a day. The difficulty level was very similar to the real PCDBE exam. If you’re preparing for this certification, this app is a must-have.
Study period: 1 month
Very close to the real exam. Thanks!
Study period: 1 month
Being able to reset my progress and re-solve the hard questions helped me a lot. Passed!

Get the Free App