
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 state-of-the-art AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
During a disaster recovery drill, your workload fails over to a standby Google Cloud project in europe-west1. Within 20 seconds, a newly created and previously idle Cloud Storage bucket begins handling approximately 2100 object write requests per second and 7800 object read requests per second. As the surge continues, services calling the Cloud Storage JSON API start receiving intermittent HTTP 429 and occasional 5xx errors. You need to reduce failed responses while preserving as much throughput as possible. What should you do?
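For context on this scenario: sudden spikes against a cold bucket are typically mitigated by ramping traffic up gradually and retrying 429/5xx responses with truncated exponential backoff plus jitter. A minimal sketch of such a delay schedule, with illustrative parameter names and values that are not part of the question:

```python
import random

def backoff_delay(attempt, base=1.0, cap=32.0, jitter=False):
    """Truncated exponential backoff: 1 s, 2 s, 4 s, ... capped at `cap` seconds.

    With jitter=True, a random fraction of the computed delay is used
    instead, which spreads retries out across many concurrent clients.
    """
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay) if jitter else delay

# Deterministic schedule for the first six retries: 1, 2, 4, 8, 16, 32 seconds
schedule = [backoff_delay(n) for n in range(6)]
```

Clients would sleep for `backoff_delay(n, jitter=True)` before retry number n, giving the storage layer time to redistribute load for the newly hot bucket.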
Your CI pipeline runs 200 unit tests per commit for a fleet-management analytics service that consumes messages from a Cloud Pub/Sub topic named 'telemetry' using ordering keys set to vehicle_id. For each test you need to publish a sequence of 50 messages for a single vehicle_id and assert that your subscriber processes them strictly in order within the key, while keeping the tests reliable, fast, and at zero additional GCP cost. What should you do?
You operate a telemetry collection microservice on Cloud Run that accepts HTTP POST requests with JSON metrics, averaging 50,000 requests per minute. For compliance reasons, the ingestion endpoint must be hosted on a different domain than the user-facing app. The Cloud Run service is exposed at https://ingest.acme-data.io, while the single-page web app and an embedded mobile WebView both originate from https://dashboard.acme.io. You need to add a header to the service's HTTP response so that only pages and WebViews served from https://dashboard.acme.io can submit metrics via cross-origin requests. Which response header should you add?
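For context, restricting cross-origin browser callers to a single site is done with the CORS response header below; the value must match the caller's scheme and host exactly (shown here with the domain from the scenario):

```http
Access-Control-Allow-Origin: https://dashboard.acme.io
```

Browsers enforce this on the response: the header names one allowed origin (or `*` for any), and preflighted POST requests may additionally require `Access-Control-Allow-Methods` and `Access-Control-Allow-Headers`.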
You are deploying a GKE Deployment for a background worker that pulls jobs from a Cloud Pub/Sub subscription. You must include a health check that verifies the container can reach the Pub/Sub endpoint every 10 seconds, with a 2-second timeout, and mark it unhealthy after 3 consecutive failures; if the worker becomes unhealthy due to lost connectivity, the container must execute /app/shutdown.sh (which can take up to 15 seconds) to gracefully drain and checkpoint work before termination. How should you configure the Deployment?
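As a sketch, the probe timing and graceful-shutdown requirements from the scenario map onto a Deployment spec like the following; the image name, health-check command, and grace period are illustrative assumptions, not part of the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubsub-worker
  template:
    metadata:
      labels:
        app: pubsub-worker
    spec:
      # Must exceed the 15 s that /app/shutdown.sh may need to drain work.
      terminationGracePeriodSeconds: 30
      containers:
      - name: worker
        image: gcr.io/example/worker:latest   # illustrative image
        livenessProbe:
          exec:
            command: ["/app/healthcheck.sh"]  # assumed script checking Pub/Sub reachability
          periodSeconds: 10        # check every 10 seconds
          timeoutSeconds: 2        # 2-second timeout
          failureThreshold: 3      # unhealthy after 3 consecutive failures
        lifecycle:
          preStop:
            exec:
              command: ["/app/shutdown.sh"]   # graceful drain and checkpoint
```

The preStop hook runs before the container receives SIGTERM, so terminationGracePeriodSeconds must leave enough room for the script to finish.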
Your mobile gaming platform's matchmaking service (running on Cloud Run behind an external HTTP(S) Load Balancer) has been instrumented with OpenTelemetry and uses a 10% sampling rate. You need to validate end-to-end latency in Cloud Trace for a single test curl request from your laptop without changing the global sampler. How can you guarantee that this specific request is traced regardless of the default sampling decision?
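One way to force tracing for a single request is to attach an `X-Cloud-Trace-Context` header whose options field is set to `o=1` (trace enabled), which requests recording for that request regardless of the sampler. A sketch of building the header value; the trace ID is just a random 32-hex-character string:

```python
import secrets

def forced_trace_header():
    """Build an X-Cloud-Trace-Context value that forces sampling.

    Format: TRACE_ID/SPAN_ID;o=TRACE_TRUE, where o=1 asks Cloud Trace
    to record this request regardless of the default sampling rate.
    """
    trace_id = secrets.token_hex(16)   # 32 hex characters
    span_id = 1                        # any valid span id works for a test
    return f"{trace_id}/{span_id};o=1"

value = forced_trace_header()
# Sent from the laptop as:
#   curl -H "X-Cloud-Trace-Context: <value>" https://<service-url>/
```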
Want to practice anywhere?
Download Cloud Pass for free; it includes practice tests, progress tracking, and more.
Your analytics team needs to run a 30-minute spike test reaching up to 8,000 requests per second against a new HTTPS API running on GKE Autopilot in us-central1 using JMeter. You must generate load from a consistent, in-region source, capture request and latency logs for rapid analysis, and publish a dashboard within 1 hour after the test. Following Google-recommended practices, which setup should you use to orchestrate the test and analysis?
You containerized a Node.js API and deployed it to Cloud Run (2 vCPU, 1 GiB RAM, max concurrency 80, min instances 0). Locally, the /orders endpoint averages 120 ms per request under a load test of 200 RPS, but in production Cloud Monitoring shows p95 latency at 1.6 s with CPU at ~30% and memory at ~45%. You want to pinpoint which parts of the code are causing the delay in production with minimal overhead and without adding custom timing code. What should you do?
Your data engineering team maintains batch analytics jobs packaged as containers and stored in an Artifact Registry repository named analytics-jobs in europe-west1. Images can be pushed from multiple paths: a standard CI sequence, ad hoc docker push from engineers' laptops during on-call hotfixes, and occasional emergency retags. You are responsible for configuring CI/CD automation in Cloud Build. Every time any new image or tag is pushed to Artifact Registry, a separate Cloud Build pipeline must start within 1 minute to run a vulnerability scan and write results to a BigQuery dataset. You want to meet this requirement with the least engineering effort and without missing manual pushes. What should you do?
Your media analytics company is breaking a legacy ad-serving platform into roughly 40 independently deployable microservices running on Google Kubernetes Engine (GKE) and Cloud Run. Three consumer teams (Java, Go, and Python) will integrate with these services. Weekly releases must not break existing clients, and backward compatibility must be maintained for at least 12 months. Average per-service load is 600 RPS with a p95 latency SLO of 200 ms and bursts up to 2,500 RPS during major events. You need to choose which design aspects to implement to meet these requirements while enabling composability and team autonomy. Which two should you prioritize? (Choose two.)
You must stress-test a GraphQL mutation webhook hosted on Cloud Functions (2nd gen) behind HTTPS. The endpoint accepts HTTP POST requests with a 1 KB JSON payload and must sustain burst traffic of 5,000 requests per second for 3 minutes. Your load tests must meet the following requirements:
• Load is initiated from multiple parallel threads per generator.
• Client traffic to the endpoint originates from multiple source IP addresses.
• Load can be scaled up by adding additional test instances as needed.
• Test runs are repeatable and provide latency/throughput metrics.
You want to follow Google-recommended best practices. How should you configure the load testing?
Preparation period: 2 months
The scenarios in this app were extremely useful. The explanations made even the tricky deployment questions easy to understand. Definitely worth using.
Preparation period: 2 months
The questions weren’t just easy recalls — they taught me how to approach real developer scenarios. I passed this week thanks to these practice sets.
Preparation period: 1 month
Subscribing for one month gave me a sense of urgency to work through the questions quickly, which pushed me to study harder. Fortunately, the questions were similar to the real ones, so I could solve them easily.
Preparation period: 1 month
The questions in this app were very similar to the actual exam questions, so I found them easy to solve! It feels great to pass on my first attempt.
Preparation period: 1 month
I prepared for three weeks using Cloud Pass and the improvement was huge. The difficulty level was close to the real Cloud Developer exam, and the explanations helped me fill in my knowledge gaps quickly.
Get the free app