
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 state-of-the-art AI models to guarantee maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
Your company runs a payments API behind an NGINX Ingress Controller on a GKE Standard cluster with three n2-standard-4 nodes; the Ops Agent DaemonSet is deployed on all nodes and forwards access logs to Cloud Logging. In the past hour you observed suspicious traffic from the IP address 198.51.100.77, and you need to visualize the per-minute count of requests from this IP in Cloud Monitoring without changing application code or deploying additional collectors. What should you do to achieve this with minimal operational overhead?
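The scenario above points toward a log-based counter metric: Cloud Logging already receives the NGINX access logs via the Ops Agent, so a metric filtered on the client IP can be charted per-minute in Cloud Monitoring with no new collectors. A minimal sketch, assuming illustrative field and metric names (the access-log payload field `jsonPayload.remote_addr` and the metric name are assumptions, not taken from the scenario):

```python
# Hypothetical sketch: a log-based counter metric definition that counts
# NGINX ingress access-log entries from one suspicious client IP.
# Field names and the metric name are illustrative assumptions.

SUSPICIOUS_IP = "198.51.100.77"

def build_metric_config(ip: str) -> dict:
    """Return a log-based counter metric definition filtering on one client IP."""
    log_filter = (
        'resource.type="k8s_container" '
        f'jsonPayload.remote_addr="{ip}"'
    )
    return {
        "name": "suspicious-ip-requests",
        "description": f"Requests from {ip} in NGINX ingress access logs",
        "filter": log_filter,
        # Counter metrics are DELTA/INT64; Monitoring can then align them per minute.
        "metricDescriptor": {"metricKind": "DELTA", "valueType": "INT64"},
    }

config = build_metric_config(SUSPICIOUS_IP)
```

Once such a metric exists, a Metrics Explorer chart with a one-minute alignment period gives the per-minute request count.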
You are the on-call SRE for a live trivia streaming platform running on Google Kubernetes Engine (GKE) behind a global external HTTP(S) Load Balancer with geo-based routing; each of 4 regions (us-central1, europe-west1, asia-southeast1, southamerica-east1) contains 3 regional GKE clusters serving traffic via NEG backends, and at 18:05 UTC you receive a page that asia-southeast1 users have had 100% connection failures (HTTP 502) for the past 7 minutes while other regions are healthy and asia-southeast1 normally serves 25% of global requests and the availability SLO is 99.95% monthly with a rapid burn alert firing; you want to resolve the incident following SRE best practices. What should you do first?
Your video analytics platform is deploying a frame-processing microservice on both GKE Autopilot in us-central1 (200 pods across 5 namespaces) and 30 on-premises Linux servers in a private data center; you must collect detailed, function-level performance data (CPU and heap profiles) with under 5% overhead, keep profiles for 30 days, and visualize everything centrally in a single Google Cloud project without building or operating your own metrics pipeline—what should you do?
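A scenario like this one maps to running the Cloud Profiler agent in every process, on GKE and on-prem alike, all pointed at the same Google Cloud project. The sketch below only assembles the agent's start arguments as a plain dict; the service name, version, and project ID are illustrative assumptions:

```python
# Hypothetical sketch: the shared Cloud Profiler agent configuration that both
# the GKE pods and the on-prem servers would use, so CPU and heap profiles
# from both environments land in one central project.

def profiler_config(service: str, version: str) -> dict:
    """Arguments a process would pass when starting the Profiler agent."""
    return {
        "service": service,
        "service_version": version,
        # Single central project (assumed name) — the key to unified visualization.
        "project_id": "video-analytics-central",
    }

gke_cfg = profiler_config("frame-processor", "1.4.2")
onprem_cfg = profiler_config("frame-processor", "1.4.2")
```

Because both environments report the same service name into the same project, the Profiler UI aggregates them into a single view.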
You work for a fintech company headquartered in Frankfurt where an Organization Policy enforces constraints/gcp.resourceLocations to allow only europe-west3 and europe-west1 for all resources. When you tried to create a secret in Secret Manager using automatic replication, you received the error: "Constraint constraints/gcp.resourceLocations violated for [orgpolicy:projects/1234567890] attempting to create a secret in [global]". You must resolve the error while remaining compliant and ensure the secret’s data resides only in the allowed EU regions. What should you do?
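The error in this scenario arises because automatic replication stores the secret payload globally; the compliant alternative is user-managed replication pinned to the allowed regions. A minimal sketch of the secret resource body, assuming the two regions named in the organization policy:

```python
# Hypothetical sketch: a Secret Manager secret created with user-managed
# replication restricted to the two allowed EU regions, instead of the
# automatic (global) replication that violated the location constraint.

ALLOWED_REGIONS = ["europe-west3", "europe-west1"]

def user_managed_secret(regions: list) -> dict:
    """Secret resource with its payload replicated only to the given regions."""
    return {
        "replication": {
            "userManaged": {
                "replicas": [{"location": r} for r in regions]
            }
        }
    }

secret = user_managed_secret(ALLOWED_REGIONS)
```

With this replication policy, the secret's data never leaves europe-west3 and europe-west1, so the resource-location constraint is satisfied.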
During a controlled traffic-shift rollout, your ride-hailing platform running across three GKE regions suffered a 2-hour-45-minute outage that impacted 100% of rider and driver requests; after approximately 3 hours of incident response, service is fully restored with SLIs back to baseline, and you have 30 minutes to deliver an incident summary to executives, customer support leads, and key city partners following SRE-recommended practices. What should you do first?
Want to practice anywhere?
Download Cloud Pass for free — includes practice tests, progress tracking, and more.
Your team operates a multi-tenant fraud-scoring API written in Node.js and deployed on Cloud Run (fully managed) with concurrency set to 50 and minimum instances set to 2. During load tests (~200 errors per minute), you must customize the error data sent to Cloud Error Reporting to include tenant_id and op_id, set a custom service/version for grouping, and use a custom fingerprint while ensuring no PII is logged. What should you do?
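For a scenario like this, Error Reporting can ingest a structured log entry shaped as a ReportedErrorEvent, which lets you set a custom service/version for grouping and attach opaque identifiers instead of PII; one common workaround for controlling grouping is a stable synthetic top stack frame, since Error Reporting groups by stack trace. The payload below is a hedged sketch; the service name, version, and field values are illustrative assumptions:

```python
# Hypothetical sketch of the structured-log payload Error Reporting ingests as
# a ReportedErrorEvent: custom serviceContext for grouping, tenant_id/op_id as
# opaque identifiers (no PII), and a stable synthetic stack frame used as a
# de-facto fingerprint. All concrete values here are assumptions.

def reported_error_event(msg, tenant_id, op_id, fingerprint):
    # A stable synthetic frame yields a stable error group.
    frame = f"    at {fingerprint} (fingerprint.js:1:1)"
    return {
        "@type": ("type.googleapis.com/google.devtools.clouderror."
                  "reporting.v1beta1.ReportedErrorEvent"),
        "serviceContext": {"service": "fraud-scoring-api", "version": "v2025-01"},
        "message": f"Error: {msg}\n{frame}",
        # Opaque IDs only — no names, emails, or other PII.
        "context": {"user": f"tenant:{tenant_id}/op:{op_id}"},
    }

event = reported_error_event("scoring timeout", "t-841", "op-9f2", "scoreTimeout")
```

Emitted to stdout as JSON from Cloud Run, an entry like this is picked up by Cloud Logging and surfaced in Error Reporting under the custom service and version.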
Your production machine learning inference services run on a 30-node GKE cluster in asia-northeast1 while your Jenkins build agents run in europe-west1, and each rollout requires all nodes to pull a 1.5-GB image within a 10-minute deployment window; to maximize pull bandwidth and use a scalable registry, where should you push the images?
Your organization lets product squads self-manage Google Cloud projects, including project-level IAM; the network platform team operates a Shared VPC host project named net-host-prod that connects 18 service projects across 3 folders, and a lien has already been placed on the host project to prevent accidental deletion; you must implement a control so that only principals who hold the resourcemanager.projects.updateLiens permission at the organization level can remove the lien and delete the host project; what should you do?
Your platform team distributes Envoy WASM plugins as OCI-compliant artifacts to 150 edge gateways across 3 regions, and some plugins are currently sourced weekly from 4 public registries while others come from 2 internal teams; the security team has flagged the use of public registries as a supply-chain risk and requires repository-level IAM, audit logging, and enforcement within a VPC Service Controls perimeter with no public egress at deployment time using Private Google Access. You want to manage all plugins uniformly with native access control, unified auditability, and VPC Service Controls while remaining compatible with OCI clients; what should you do?
Your fintech startup's real-time payments API runs on GKE with a 99.9% monthly availability SLO and a latency SLI of p95 < 250 ms. Over the last quarter, there have been 3 production incidents per month where p95 latency exceeded 1,200 ms and error rate surpassed 5% for 30-minute windows. Engineers push feature branches and execute schema migrations directly against the production cluster during business hours, and data scientists run configuration experiments in production. QA teams also perform load tests that ramp from 50 rps to 500 rps against the production endpoint twice a week, saturating the autoscaler and causing throttling. You must redesign the environment to reduce production bugs and outages while allowing QA to load test new features at realistic scale. What should you do?
Preparation period: 1 month
The exam has many operational scenarios, and Cloud Pass prepared me well for them. The explanations were clear and helped me understand not just the “what” but the “why” behind each solution.
Preparation period: 1 month
It was great having both the questions and the explanations, and I passed the exam without difficulty. A few question types I hadn't seen before came up, but I handled them well.
Preparation period: 1 month
The practice questions were challenging in a good way, and many matched the style of the real exam. I passed!
Preparation period: 1 month
Very close to the real exam format.
Preparation period: 1 month
I used Cloud Pass during my last week of preparation, and it helped me fill in gaps I didn’t even know I had.
Get the free app