
Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.
Your IoT analytics company runs a multi-tenant data pipeline on a Google Kubernetes Engine (GKE) Autopilot cluster in us-central1 for 120 production customers, and you promote releases with Cloud Deploy; during a 2-week refactor of a telemetry aggregator service (container image <250 MB), developers will edit code every 5–10 minutes on laptops with 8 GB RAM and limited CPU and must validate changes locally before pushing to the remote repository. Requirements: • Automatically rebuild and redeploy on local code changes (hot-reload loop ≤ 10 seconds). • Local Kubernetes deployment should closely emulate the production GKE manifests and deployment flow. • Use minimal local resources and avoid requiring a remote container registry for inner-loop builds. Which tools should you choose to build and run the container locally on a developer laptop while meeting these constraints?
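A common fit for this inner loop is Skaffold driving a lightweight local cluster such as Minikube: Skaffold watches source files, rebuilds with the local Docker daemon (no remote registry push), and redeploys the same manifests used in production. A minimal sketch of such a config, where the image name and manifest path are illustrative assumptions rather than details from the scenario:

```yaml
# skaffold.yaml — hypothetical inner-loop config for the telemetry aggregator
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false                      # keep images in the local daemon; no remote registry
  artifacts:
    - image: telemetry-aggregator    # illustrative image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                   # reuse the same manifests promoted to GKE
```

Running `skaffold dev` against this config rebuilds and redeploys on every file save, which is what keeps the hot-reload loop short.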
Your mobile health platform currently stores per-user workout telemetry and personalized settings in a single PostgreSQL instance; records vary widely by user and evolve frequently as new device firmware adds fields (e.g., heart-rate variability, sleep stages), resulting in weekly schema migrations, downtime risks, and high operational overhead; you expect up to 8 million users, peak 45,000 writes/second concentrated by userId, simple key-based reads per user, and only per-user transactional consistency is required, not multi-user joins or complex cross-entity transactions. To simplify development and scaling while accommodating highly user-specific, evolving state without rigid schemas, which Google Cloud storage option should you choose?
You are rolling out an internal reporting service on a fleet of e2-standard-4 Compute Engine VMs in us-central1-a using Terraform; one VM restored from a snapshot has been stuck in 'Starting' for 12 minutes and the serial console shows repeated boot attempts—what two investigations should you prioritize to resolve the launch failure? (Choose two.)
You are building a Rust-based microservice for a logistics analytics platform on Google Cloud that must be packaged as a container image; the service links against two in-house native .so libraries and requires OpenSSL 1.1 during build, exposes HTTP on port 8080, must autoscale from 0 instances to handle bursts up to 400 requests per second, and needs cold starts under 2 seconds; your team does not want to provision, patch, or manage any servers or clusters. How should you deploy the microservice?
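Because the service must scale to zero with no servers or clusters to manage, the usual pattern is a multi-stage container build deployed to a fully managed serverless runtime. A hedged Dockerfile sketch; the base images, library paths, and binary name are assumptions for illustration, not requirements from the scenario:

```dockerfile
# Build stage: Rust toolchain on a Debian base that provides OpenSSL 1.1 dev packages
FROM rust:1.70-slim-bullseye AS build
RUN apt-get update && apt-get install -y libssl-dev pkg-config
WORKDIR /app
COPY . .
# Copy the in-house native libraries into the linker path (paths are illustrative)
COPY vendor/*.so /usr/local/lib/
RUN cargo build --release

# Runtime stage: a small image helps keep cold starts low
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y libssl1.1 && rm -rf /var/lib/apt/lists/*
COPY --from=build /usr/local/lib/*.so /usr/local/lib/
COPY --from=build /app/target/release/service /service
ENV PORT=8080
EXPOSE 8080
CMD ["/service"]
```

The resulting image can then be deployed with a minimum instance count of 0 so the service scales to zero between bursts.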
You inherit a public-facing marketing microsite hosted on a managed instance group of 3 Compute Engine VMs behind an external HTTPS load balancer (https://promo.example.com), and before a campaign launch you must automatically crawl up to 500 pages to verify whether any bundled client-side libraries have known vulnerabilities and whether the site is susceptible to reflected or stored XSS; which Google Cloud service should you use to run this security scan and generate a findings report?
Want to practice every question anywhere?
Download Cloud Pass for free: it includes practice exams, progress tracking, and more.
You operate a Python microservice on Cloud Run in us-central1 that writes time-series telemetry to Firestore (Native mode) and must sustain 8,000 document writes per minute with p95 write latency under 40 ms while using Application Default Credentials, retries with backoff, deadlines, and connection pooling per Google best practices. To optimize performance and minimize boilerplate, how should the service write to Firestore?
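The write-path practices the question names (retries with backoff, deadlines) can be illustrated independently of any Firestore client. A minimal, hypothetical retry helper with exponential backoff and jitter in Python; the function and parameter names are illustrative and not part of any Google client library:

```python
import time
import random

def retry_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Call fn(); on exception, sleep with exponential backoff plus jitter
    and retry, re-raising after max_attempts failures."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with full jitter, capped at max_delay
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Example: a flaky operation that fails twice, then succeeds
calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = retry_with_backoff(flaky_write)
```

In a real service the same shape would wrap the Firestore write call, with the client library's own retry and batching features preferred where available.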
Your microservices application named fintrack-ui runs on three GKE clusters—fintrack-ui-dev, fintrack-ui-uat, and fintrack-ui-prod—in us-central1, and your security team requires that only container images signed by a designated release attestor (using a Cloud KMS key in projects/123456/locations/us/keyRings/prod/cryptoKeys/release) are allowed to be deployed to the fintrack-ui-prod cluster while dev and uat remain permissive; following Google-recommended practices, how should you implement this so that enforcement applies only to the production cluster?
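Binary Authorization supports per-cluster admission rules, which is what lets enforcement apply only to production while dev and uat stay permissive. A hedged policy sketch; the attestor resource name is an illustrative assumption, while the cluster name and region come from the scenario:

```yaml
# Binary Authorization policy sketch: allow by default,
# require the release attestation only for the prod cluster
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
clusterAdmissionRules:
  us-central1.fintrack-ui-prod:       # key format: <location>.<cluster-name>
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
      - projects/123456/attestors/release-attestor   # illustrative attestor name
```

Images signed via the Cloud KMS key referenced by the attestor would then be the only ones admitted to fintrack-ui-prod.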
You are monitoring a media transcoding microservice written in Node.js that runs on Cloud Run (fully managed). Each revision is configured with 2 vCPUs and 1.5 GiB memory, and Cloud Monitoring shows sustained spikes of ~90% CPU and ~1.3 GiB memory for 15-minute intervals during peak traffic (~800 RPS). You must identify which function is consuming the most CPU cycles and heap memory with minimal overhead (<1% CPU) and without adding significant latency in production. What should you do?
You are rolling out a reporting microservice on a Compute Engine VM (10.10.2.4) in the analytics-vpc (CIDR 10.10.0.0/16, region us-central1) that must connect to a Cloud SQL for PostgreSQL instance via the Cloud SQL Auth Proxy. The Cloud SQL instance resides in a separate db-vpc (CIDR 10.20.0.0/16) and has both public and private IPs; its private address is 10.20.3.5. For compliance, all database traffic must use the private IP. In testing, connections to the instance’s public IP succeed, but connections to 10.20.3.5 time out even though firewall rules in both VPCs allow TCP:5432 from their respective CIDR ranges and the proxy is started with --private-ip. How should you fix this issue?
Your company runs a fleet-telemetry API on Cloud Run in the us-central1 region under the production project fleet-prod. For every release candidate, you must spin up an ephemeral QA environment that is fully isolated from production (separate Google Cloud project for networking/IAM/billing), is created and torn down automatically by your CI pipeline on each pull request, mirrors production settings (region: us-central1, CPU: 1, min instances: 0), and completes provisioning in under 10 minutes without sending any traffic to production. You want the approach that provides full automation with the least ongoing effort while enabling automated end-to-end tests. What should you do?
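Infrastructure-as-code driven by the CI pipeline is the typical way to get per-pull-request projects that mirror production and tear down cleanly. A hedged Terraform sketch under assumed variable names (`pr_number`, `billing_account`, `release_candidate_image` are illustrative):

```hcl
# Hypothetical sketch: one ephemeral QA project per pull request,
# hosting a Cloud Run service that mirrors the production settings
resource "google_project" "qa" {
  name            = "fleet-qa-pr${var.pr_number}"
  project_id      = "fleet-qa-pr${var.pr_number}"
  billing_account = var.billing_account
}

resource "google_cloud_run_v2_service" "api" {
  project  = google_project.qa.project_id
  name     = "fleet-telemetry-api"
  location = "us-central1"            # mirrors the production region

  template {
    scaling {
      min_instance_count = 0          # mirrors production min instances
    }
    containers {
      image = var.release_candidate_image
      resources {
        limits = { cpu = "1" }        # mirrors production CPU
      }
    }
  }
}
```

The pipeline would run `terraform apply` on pull-request open, point end-to-end tests at the new service URL, and `terraform destroy` on merge or close, so no traffic ever touches production.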
Study period: 2 months
The scenarios in this app were extremely useful. The explanations made even the tricky deployment questions easy to understand. Definitely worth using.
Study period: 2 months
The questions weren't just simple recall; they taught me how to approach real developer scenarios. I passed this week thanks to these practice sets.
Study period: 1 month
Subscribing for one month gave me a sense of urgency to work through the questions quickly, which pushed me to study harder. Luckily, the questions were similar to the real exam, so I could solve them easily.
Study period: 1 month
The questions in this app were very similar to the actual exam questions, so the test felt easy! Passing on my first attempt feels great.
Study period: 1 month
I prepared for three weeks using Cloud Pass and the improvement was huge. The difficulty level was close to the real Cloud Developer exam, and the explanations helped me fill in my knowledge gaps quickly.
Get the app for free