
GCP
199+ Free Practice Questions (AI-Verified Answers Included)
AI-Powered
Every Google Professional Cloud Architect answer is cross-verified by three top AI models to ensure the highest accuracy. Detailed explanations for each answer choice and in-depth question analysis are included.
A global logistics firm operates 650 smart loading bays across 8 distribution centers on 3 continents. Each bay has a beam-break sensor that reports its state once per second, and each event contains only a sensor ID and several discrete status flags. Analysts must correlate this data with driver account records, scheduled dock assignments, and warehouse location metadata to run join-heavy queries and produce operational reports. Which database type should you use?
A ticketing company needs to expose a public HTTPS webhook API written in Go 1.20 that is idle most of the day but can surge from roughly 0 to 10,000 requests per minute within 90 seconds when major sales go live. The service must maintain at least 99.95% availability during spikes, support request timeouts up to 60 seconds, auto-scale without pre-provisioned capacity, and minimize operational overhead by avoiding server, cluster, or OS management. What deployment approach should you recommend?
Your media analytics firm stores 150 TB of sensitive video feature vectors on regional SSD persistent disks attached to a Dataproc cluster in us-central1. For regulatory reasons, you must be able to rotate the encryption key protecting the data at rest on these disks at least every 90 days without manually re-encrypting data or changing Spark jobs. Pipelines must continue to run with minimal operational overhead and no plaintext copies written to intermediate storage. You want to follow Google-recommended practices for security. What should you do?
Cinemetrix, a media analytics firm, enabled Google Workspace (domain: cinemetrix.com) last week and created a Google Cloud Organization; they currently have 85 projects across 6 folders. The security office now mandates that, effective immediately, no principals from outside cinemetrix.com may be granted any IAM role on any project, folder, or resource within the Organization. The control must be enforced centrally at the Organization level, automatically apply to newly created projects, work without custom scripts or periodic jobs, take effect within 30 minutes, and must not break existing or future service-account-based workloads. What should you do?
Your retail analytics team needs to run 40 legacy Hadoop MapReduce jobs on Google Cloud for weekly batch processing without changing any job code or deployment scripts; they want to minimize both compute cost and infrastructure management effort; each batch runs 3–6 hours and the pipelines can tolerate up to 20% worker loss without failing. What should you do?
Want to practice every question on the go?
Download Cloud Pass for free: it offers practice exams, study progress tracking, and more.
A regional logistics company on Google Cloud needs to process live telemetry from delivery vans (8,000–12,000 events/second with sub-5-second aggregation windows) while also running hourly batch reconciliations of daily manifests. They have no existing analytics codebase and want a fully managed service that supports both batch and streaming with minimal operational overhead. Which technology should they use?
A regional film studio needs to move a one-time 12-TB set of 4K render files from its on-premises NAS into a Google Cloud Storage bucket for archival. The studio has a reliable 1 Gbps internet connection to Google Cloud and a 48-hour maintenance window. They want to minimize total transfer time and cost while following Google-recommended practices. What should they do?
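As a back-of-the-envelope check for scenarios like the one above, the raw wire-time for 12 TB over a 1 Gbps link can be estimated as follows. This sketch assumes an ideal, fully saturated link and ignores protocol overhead; real transfers will take somewhat longer.

```python
# Estimate transfer time for 12 TB over a 1 Gbps link.
# Assumes ideal sustained throughput with no protocol overhead.

data_bytes = 12 * 10**12          # 12 TB (decimal terabytes)
link_bps = 1 * 10**9              # 1 Gbps

seconds = data_bytes * 8 / link_bps
hours = seconds / 3600
print(f"~{hours:.1f} hours")      # roughly 26.7 hours, inside a 48-hour window
```

At full line rate the transfer fits comfortably within the stated 48-hour maintenance window, which is the kind of arithmetic the question expects you to run before reaching for an offline transfer option.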
Your fintech company runs a payment authorization microservice as a Kubernetes Deployment with 12 replicas in a regional GKE Autopilot cluster in us-central1. During rolling updates (maxUnavailable=0, maxSurge=2), production-only configuration mistakes have intermittently caused 5xx errors at ~1,500 RPS because new Pods begin receiving traffic before the bad settings cause runtime failures. You need a platform-level preventive control so that only healthy Pods receive traffic during the rollout and faulty versions do not trigger outages. What should you do?
You are deploying a telemetry ingestion service in Google Cloud that must privately exchange data with a vendor-managed control system hosted in a third-party colocation facility outside Google Cloud. The average sustained traffic is 300 kbps, with occasional bursts up to 1 Mbps. The business requires 99.99% connectivity availability and emphasizes cost optimization. You need to design private connectivity between the Google Cloud VPC and the external site to meet these requirements. What should you provision?
A retail analytics team runs a Google Kubernetes Engine (GKE) worker that pulls events from a Pub/Sub pull subscription (orders-events) and writes batch outputs to Cloud Storage. The app was deployed as a single pod, and Pub/Sub monitoring shows a backlog of about 12,000 undelivered messages with the 90th-percentile wait time at 8 minutes. Each pod processes roughly 6 messages per second when disk throughput is near 80 MB/s, indicating the workload is I/O-intensive. You must scale the processing so the backlog stays under 500 messages and per-message latency remains under 60 seconds, without changing the application code. What should you do?
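The numbers in this scenario pin down the scaling arithmetic. A minimal sketch follows; the arrival rate of 20 messages/second is a hypothetical value chosen for illustration (the question does not state it), while the 6 messages/second per-pod rate and the 12,000-message backlog come from the scenario.

```python
# Rough scaling arithmetic for the Pub/Sub backlog scenario.
# Hypothetical assumption: events arrive at ~20 msg/s (not stated in
# the question). Per-pod throughput of ~6 msg/s is from the scenario.

backlog = 12_000          # undelivered messages
per_pod_rate = 6          # msg/s per pod (I/O-bound)
arrival_rate = 20         # msg/s, assumed for illustration

for pods in range(1, 20):
    surplus = pods * per_pod_rate - arrival_rate
    if surplus <= 0:
        continue  # not enough capacity to drain the backlog at all
    drain_minutes = backlog / surplus / 60
    # A 500-message cap with a 60 s latency target also implies
    # total throughput headroom of at least 500/60 ≈ 8.3 msg/s.
    if drain_minutes <= 10:
        print(f"{pods} pods drain the backlog in ~{drain_minutes:.1f} min")
        break
```

The point of the exercise: horizontal scaling driven by the backlog metric (not CPU) is what the throughput numbers call for, since the workload is I/O-bound.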
Your media streaming company wants to maintain a near–real-time warm standby of its on-premises billing MariaDB cluster in Google Cloud for disaster recovery. The dataset is approximately 4 TB, and change data capture can burst to 800 Mbps during nightly reconciliations (hundreds of GB per hour). The replication engine requires endpoints to communicate over RFC1918 private IP space and must not traverse NAT or public addresses; you also need predictable, low-latency connectivity. Which networking approach should you use?
An aerospace research division must move 12 Windows Server 2022 Datacenter VMs from a classified on-premises lab to Google Cloud within 30 days while reusing existing Microsoft Windows Server licenses covered by active Software Assurance; no Google-provided Windows licensing fees are permitted. Workloads must run in the europe-west4 region with host-level isolation under your control, and each migrated VM must boot from its original OS disk to preserve in-guest agents and local policies. You need a repeatable approach for the first validation VM that keeps licensing compliant and maintains the existing OS state. What should you do?
Your media-streaming company deploys a Node.js service to Google Kubernetes Engine (GKE) across dev, staging, and prod using Cloud Build and Artifact Registry. Compliance requires that every production deployment can be traced to the exact Git commit that produced the running container, and that this linkage is auditable via registry and CI/CD records for at least 180 days. What should you do?
You operate a healthcare data processing environment on Google Cloud. All Compute Engine VMs run in VPC 'med-analytics-vpc' on subnet 'proc-subnet' (10.24.0.0/20), and an organization policy prohibits any VM from having an external (public) IP. A firewall rule denies ingress tcp:22 from 0.0.0.0/0, and there is no Cloud VPN or Cloud Interconnect to your office network (198.51.100.0/24). You must initiate an SSH session to a specific VM named 'etl-node-03' in us-central1-a for an urgent diagnosis, without assigning any public IPs or opening new inbound internet access. What should you do?
Your online gaming operations team enabled a trace export from all match servers to push every gameplay event to Google Cloud Storage for later analysis; each event payload is between 50 KB and 15 MB, and traffic can spike to 3,000 events per second during peak tournaments. You need to minimize data loss while sustaining high write throughput. Which process should you implement?
A telehealth company runs a consultation-booking application on Google Kubernetes Engine (Autopilot) across two prod clusters in us-central1 and europe-west1, with managed Anthos Service Mesh and Config Sync enabled; over the last 15 minutes (10:00–10:15 UTC), the 95th percentile latency for the /book endpoint increased from 200 ms to 1.3 s while error rate stayed below 0.2% and pod CPU usage remained under 40%; you need to pinpoint which microservice-to-microservice call is introducing the delay without redeploying any workloads. What should you do?
Your company runs a city-event ticketing API on Compute Engine behind a global HTTP(S) Load Balancer. Containerized NGINX (web) and Node.js (app) tiers are each deployed as autoscaled managed instance groups across two zones (europe-west3-a and europe-west3-b), while the database runs on Cloud SQL for MySQL with HA but a fixed machine type (no autoscaling). Under a limited beta of 500 authenticated testers, the system meets a 99.99% availability SLA at a peak of 120 requests/second. Next month the API will be opened to the public (including unauthenticated requests) with an expected 10x increase to 1,200 requests/second, and the existing autoscaling policies target 60% CPU on the MIGs. You must design a resiliency testing strategy to validate that the system will maintain the SLA under the additional load without harming real users. What should you do?
A genomics research institute needs to import 82 TB of raw sequencing data from its on-premises NAS into Google Cloud within 30 days. Their internet link averages 500 Mbps during off-peak hours and must remain available for other workloads. They will store the data in Cloud Storage and want to follow Google-recommended, secure, and predictable migration practices. What should they do?
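This question also hinges on transfer arithmetic. A quick estimate of the ideal wire-time for 82 TB over 500 Mbps shows why an online transfer is tight; the sketch assumes the full 500 Mbps is available around the clock, which the scenario explicitly rules out (the link is shared and only averages that rate off-peak).

```python
# Why 82 TB over a 500 Mbps link is tight: ideal wire-time estimate.
# Assumes the full 500 Mbps is continuously available, which the
# scenario says it is not (the link must stay shared with other work).

data_bits = 82 * 10**12 * 8     # 82 TB in bits (decimal terabytes)
link_bps = 500 * 10**6          # 500 Mbps

days = data_bits / link_bps / 86_400
print(f"~{days:.1f} days at full line rate")
```

Even at full line rate the transfer takes about half the 30-day deadline; restricted to off-peak hours it would stretch well past it, which is why the deadline and link constraints push toward an offline transfer option.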
All Compute Engine instances in your project must be able to connect only to an internal code repository at 10.30.40.50 over TCP ports 22 and 443; any other outbound traffic from these instances must be blocked. You will enforce this using VPC firewall rules. How should you configure the firewall rules?
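To reason about questions like this one, it helps to remember how VPC firewall rules resolve: among matching rules, the rule with the lowest priority number wins. The toy model below illustrates that ordering for an allow rule scoped to the repository plus a catch-all egress deny; it is an illustration of rule evaluation only, not gcloud syntax, and the specific priority values are assumptions.

```python
# Toy model of VPC firewall egress evaluation for this scenario:
# an allow rule for 10.30.40.50 on tcp:22,443 at priority 1000 and a
# catch-all egress deny at a lower-precedence (higher-number) priority.
# Priorities chosen for illustration; this is not gcloud syntax.

RULES = [
    {"priority": 1000, "action": "allow",
     "dest": "10.30.40.50", "ports": {22, 443}},
    {"priority": 65534, "action": "deny",
     "dest": "any", "ports": None},   # matches all destinations/ports
]

def egress_action(dest, port):
    """Return the action of the matching rule with the lowest priority number."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        dest_ok = rule["dest"] in ("any", dest)
        ports_ok = rule["ports"] is None or port in rule["ports"]
        if dest_ok and ports_ok:
            return rule["action"]
    return "allow"  # VPC networks have an implied allow-egress default

print(egress_action("10.30.40.50", 443))  # allow
print(egress_action("8.8.8.8", 443))      # deny
```

The catch-all deny is needed because the implied default egress rule is allow; without it, traffic to other destinations would still flow.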
You are leading the migration of a digital media platform to Google Cloud, and a major pain point is the on-premises scale-out NAS that requires frequent and costly upgrades to handle diverse workloads identified as follows: 18 TB of audit video fragments that must be retained for 7 years for compliance; 600 GB of VM images and golden templates used for occasional rebuilds; 700 GB of photo preview thumbnails served to end users; and 250 GB of user playlist/cart session state that must persist so users can resume activity even after being offline for up to 5 days. Which of the following best reflects your recommendations for a cost-effective storage allocation?
Study period: 1 month
Many of the scenario patterns I saw on Cloud Pass showed up in the actual PCA exam. The explanations were detailed, which helped strengthen my weak areas. I’ll definitely use this app again for other GCP exams.
Study period: 1 month
It felt like more than 15 questions were similar to the actual exam questions! Great stuff.
Study period: 1 month
The app is really great.
Study period: 2 weeks
These questions forced me to deeply understand GCP networking, IAM, storage trade-offs, and high-availability design. After a month of consistent practice, I finally passed my PCA exam. Amazing app!
Study period: 1 month
I passed on the first try. The practice questions were tough but very close to the real exam. The explanations helped me understand why each answer was correct. Highly recommended.

Get the Free App