
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is verified by 3 state-of-the-art AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.
You are designing a tablet app for municipal tree inspectors that must store hierarchical observations (city -> district -> park -> tree -> inspection) with up to 5 nested levels and support offline work for up to 72 hours; upon reconnect, the app must automatically sync local changes and handle conflicts gracefully. A backend on Cloud Run will use a dedicated service account to enrich the same records (e.g., geocoding, policy tags) directly in the database, performing up to 5,000 writes per minute at peak. The solution must scale securely to 250,000 monthly active users in the first quarter and provide client SDKs with built-in offline caching and synchronization. Which database and IAM role should you assign to the backend service account?
Your team uses Cloud Build to run CI for a Go microservice stored in a private GitHub repository mirrored to Cloud Source Repositories. One of the build steps requires a specific static analysis tool (version 3.7.2) that is not present in the default Cloud Build environment. The tool is ~120 MB and must be available within 5 seconds of step start to keep total build time under a 10-minute SLA, outbound internet access during builds is restricted, and you need reproducible results across ~50 builds per day. What should you do?
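For context on the pattern this question tests: a pinned tool can be baked into a custom builder image stored in Artifact Registry, so no step downloads anything at build time. A minimal sketch of such a pipeline, where the repository path, image names, and analyzer command are hypothetical:

```yaml
# cloudbuild.yaml (sketch): use a custom builder image that already
# contains the static analysis tool at a pinned version, so the step
# needs no outbound internet access and every build is reproducible.
steps:
  - id: 'static-analysis'
    # Hypothetical image: built once from a Dockerfile that installs
    # the tool at version 3.7.2, then pushed to Artifact Registry.
    name: 'us-central1-docker.pkg.dev/my-project/builders/go-analyzer:3.7.2'
    args: ['analyze', './...']
  - id: 'test'
    name: 'golang:1.22'
    entrypoint: 'go'
    args: ['test', './...']
```

Because the analyzer version is pinned in the image tag, all ~50 daily builds resolve to the same bits.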
You are building an external review portal for a film festival that stores high-bitrate video dailies in Cloud Storage. You must let reviewers, some of whom do not have Google accounts, securely access only their assigned files, with the ability to read, upload replacements, or delete them within a strict 24-hour window. How should you provide access to the objects?
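Scenarios like this are typically handled with Cloud Storage signed URLs, which grant time-limited, per-object access without requiring Google accounts. A sketch using `gsutil signurl`, where the bucket, object, and key-file paths are placeholders:

```shell
# Generate V4 signed URLs valid for 24 hours, one per HTTP method the
# reviewer needs; bucket, object, and key paths are placeholders.
gsutil signurl -d 24h -m GET    /path/to/sa-key.json gs://festival-dailies/reviewer-1/take-042.mov
gsutil signurl -d 24h -m PUT    /path/to/sa-key.json gs://festival-dailies/reviewer-1/take-042.mov
gsutil signurl -d 24h -m DELETE /path/to/sa-key.json gs://festival-dailies/reviewer-1/take-042.mov
```

Each URL embeds a signature and expiry, so it stops working after the 24-hour window with no cleanup required.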
You are setting up a new workstation to provision Google Cloud infrastructure with Terraform for a video analytics project at AuroraStream. Company policy requires that all resources be created by a dedicated deployment service account (tf-deployer@proj.example.iam.gserviceaccount.com) and forbids downloading long-lived service account keys. Your Cloud Identity user has the iam.serviceAccountTokenCreator role on that service account and the necessary project permissions to run Terraform. You will configure the Terraform Google provider to use impersonate_service_account pointing to the deployment service account. Following Google-recommended best practices, what should you do on your workstation to authenticate Terraform?
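The configuration described in the question can be sketched directly in the Terraform provider block; with impersonation, the workstation only needs short-lived user credentials obtained via `gcloud auth application-default login` (the project ID and region below are placeholders):

```hcl
# provider.tf (sketch): Terraform authenticates as the human user via
# Application Default Credentials, then impersonates the deployment
# service account for all resource operations -- no downloaded key.
provider "google" {
  project                     = "aurorastream-prod" # placeholder project ID
  region                      = "us-central1"       # placeholder region
  impersonate_service_account = "tf-deployer@proj.example.iam.gserviceaccount.com"
}
```

All resources are then created as the deployment service account, while audit logs still record which user performed the impersonation.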
You are preparing nightly releases of a serverless event-processing platform on Cloud Run across two regions. Each day at 18:00 UTC, your CI/CD pipeline builds and pushes 30–40 distinct Linux-based container images to a single Artifact Registry Docker repository, and the production rollout begins at 20:00 UTC. Security requires that you be alerted to any known OS-level vulnerabilities in the newly pushed images before the rollout, and you want to follow Google‑recommended best practices without adding custom scanning code to your pipeline. What should you do?
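For background, Artifact Registry's built-in container scanning is enabled as a service rather than implemented as pipeline code; a sketch of the relevant commands, with a placeholder project ID and repository:

```shell
# Enable the Container Scanning API so images pushed to Artifact
# Registry are automatically scanned for known OS-level vulnerabilities.
gcloud services enable containerscanning.googleapis.com --project=my-project

# Scan results are stored as Container Analysis occurrences attached to
# each image, which can be inspected (or alerted on) before rollout:
gcloud artifacts docker images list \
    us-central1-docker.pkg.dev/my-project/apis --show-occurrences
```

Because scanning runs automatically on push, the 18:00 UTC images are analyzed well before the 20:00 UTC rollout with no custom scanning code in the pipeline.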
Want to practice anywhere?
Download Cloud Pass for free: it includes practice tests, progress tracking, and more.
You are configuring a Cloud Build trigger for a Node.js REST service that builds a Docker image and pushes it to Artifact Registry (us-central1, repository 'apis') with the tag $SHORT_SHA. Compliance requires the pipeline to first run unit tests and then run integration tests against a disposable test database before the image is pushed. If any test fails, you must be able to tell from the Cloud Build history exactly which stage (unit or integration) failed without reading through logs. What should you do?
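Cloud Build reports each step's `id` and status separately in the build history, so giving the two test stages distinct named steps makes the failing stage visible at a glance. A hedged sketch, where the image versions, script names, and service name are placeholders:

```yaml
# cloudbuild.yaml (sketch): separate, named steps so the build history
# shows exactly which stage (unit or integration) failed.
steps:
  - id: 'unit-tests'
    name: 'node:20'
    entrypoint: 'npm'
    args: ['run', 'test:unit']
  - id: 'integration-tests'
    name: 'node:20'
    entrypoint: 'bash'
    # Hypothetical script that provisions the disposable test database.
    args: ['-c', './scripts/start-test-db.sh && npm run test:integration']
  - id: 'build-image'
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t',
           'us-central1-docker.pkg.dev/$PROJECT_ID/apis/service:$SHORT_SHA', '.']
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/apis/service:$SHORT_SHA'
```

If a step fails, the build stops, so the image is never pushed after a test failure.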
Your organization runs a Cloud Build CI/CD pipeline with 4 build steps for a Python API: (1) run unit tests, (2) generate a 6 KB text report containing the commit SHA and changed files, (3) build and push a container image, and (4) run a security gate that consumes the report; the report must be accessible to steps 3 and 4 within the same build and must not be persisted after the build; the pipeline executes up to 30 times per hour. How should you store the report so that all required build steps can access it?
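Relevant background for this question: in Cloud Build, the `/workspace` volume is shared across all steps of the same build and discarded when the build ends, which matches the "accessible within the build, not persisted afterwards" requirement. A minimal sketch with placeholder images and commands:

```yaml
# cloudbuild.yaml (sketch): step 2 writes the report to /workspace,
# which later steps in the same build can read; the volume is discarded
# when the build finishes, so nothing persists afterwards.
steps:
  - id: 'unit-tests'
    name: 'python:3.12'
    entrypoint: 'pytest'
  - id: 'generate-report'
    name: 'gcr.io/cloud-builders/git'
    entrypoint: 'bash'
    args: ['-c', 'git rev-parse HEAD > /workspace/report.txt']
  - id: 'build-and-push'
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'my-image', '.']
  - id: 'security-gate'
    name: 'bash'
    args: ['-c', 'cat /workspace/report.txt']  # consumes the report
```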
You operate a city-wide food delivery platform on Google Cloud. An order-processing microservice currently invokes an HTTP Cloud Function to send SMS status updates (order accepted, courier en route) through a third-party SMS gateway. After launching a 30%-off promotion, peak load spiked to about 18,000 notifications per minute, and the Cloud Function intermittently returns HTTP 500 errors. Some customers report missing SMS updates, and logs show the sender aborts on 500 responses without persisting the messages. You need to change how SMS messages are handled to minimize message loss without significantly increasing operational complexity. What should you do?
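Decoupling the order service from the SMS sender with a message queue is the usual way to buffer bursts and absorb transient failures. A sketch of a Pub/Sub setup, where the topic, subscription, and endpoint names are placeholders:

```shell
# Create a topic the order service publishes notifications to, plus a
# push subscription that delivers each message to the SMS function and
# automatically retries (with backoff) on non-2xx responses, so a 500
# no longer loses the message.
gcloud pubsub topics create sms-notifications

gcloud pubsub subscriptions create sms-sender \
    --topic=sms-notifications \
    --push-endpoint=https://us-central1-my-project.cloudfunctions.net/send-sms \
    --min-retry-delay=10s \
    --max-retry-delay=600s
```

Unacknowledged messages stay in the subscription until the function succeeds, trading a small delivery delay for durability.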
You are launching a single App Engine Standard application (service: default, region: us-central1) and must make it accessible only at http://www.northwind.news/. Your DNS is hosted in Cloud DNS (zone: public-zone, TTL: 300s), the domain is not yet verified, and you do not require any path- or service-based routing. What should you do?
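For reference, mapping a `www` host to App Engine involves verifying the domain, adding a custom domain mapping, and pointing DNS at Google's serving frontends; the `www` record is typically a CNAME to `ghs.googlehosted.com.`. A sketch using the zone name from the question:

```shell
# After verifying northwind.news and adding the custom domain mapping
# in App Engine, point www at Google's serving infrastructure:
gcloud dns record-sets create www.northwind.news. \
    --zone=public-zone --type=CNAME --ttl=300 \
    --rrdatas=ghs.googlehosted.com.
```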
You are the lead developer for a city transit incident dashboard running on Cloud Run (512 MiB memory per instance, max 80 concurrent requests) backed by Firestore in Native mode. The web UI provides infinite scroll so users can browse all incident reports. Three months after launch, you observe that during the 07:30–09:30 peak several Cloud Run instances return HTTP 500 with out-of-memory errors while Firestore read throughput spikes to ~250 QPS. You need to stop Cloud Run from crashing and reduce Firestore reads using a performance-optimized approach, without merely increasing resource limits. What should you do?
Preparation period: 2 months
The scenarios in this app were extremely useful. The explanations made even the tricky deployment questions easy to understand. Definitely worth using.
Preparation period: 2 months
The questions weren’t just easy recalls — they taught me how to approach real developer scenarios. I passed this week thanks to these practice sets.
Preparation period: 1 month
After subscribing for one month, I felt pressure to work through the questions quickly, which pushed me to study harder. Fortunately, the questions were similar to the real ones, so I could solve them easily.
Preparation period: 1 month
The questions in this app were very similar to the actual exam questions, so I found them easy to solve! It feels great to pass on my first attempt.
Preparation period: 1 month
I prepared for three weeks using Cloud Pass and the improvement was huge. The difficulty level was close to the real Cloud Developer exam, and the explanations helped me fill in my knowledge gaps quickly.
Get the free app