Google Professional Cloud Security Engineer

Practice Test #3

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 120 Minutes · 700/1000 Passing score
Explore practice questions

AI-powered

Answers and explanations verified by triple AI

Each answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and an in-depth analysis of every question.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice questions

Question 1

Your marketing analytics unit (120 users) plans to adopt Google Cloud for BigQuery and Vertex AI within 30 days, and company policy requires that all identities remain company-owned and all sign-ins use the corporate SAML 2.0 IdP; while attempting to create a new Cloud Identity tenant for example.com, the Platform Engineer discovers that example.com is already verified and actively used by an internal Google Workspace deployment with 850 active accounts and existing SAML SSO, and needs guidance on how to proceed with the least disruption and without violating the policy. What should you advise?

Domain contestation is designed for situations where domain ownership is disputed or an unauthorized party controls the domain in a tenant. Here, the domain is already used internally with 850 accounts and SAML SSO, so contestation would be highly disruptive and unnecessary. It risks account conflicts, service interruption, and complex migration. It does not align with the “least disruption” requirement and is not the right tool for internal segmentation.

Using a new domain (analytics-example.com) could technically enable a separate Cloud Identity tenant, but it violates the spirit of “all identities remain company-owned” under the primary corporate domain and creates identity fragmentation. Users would need alternate usernames or aliases, complicating lifecycle management, group membership, and access governance. It also increases operational overhead and can introduce inconsistent SSO and policy enforcement across tenants.

Granting Super Admin to a marketing program manager is excessive and violates least privilege. Super Admin has broad control over identity, security settings, and domain-wide configurations, creating significant risk. The requirement is to onboard a unit with minimal disruption and maintain policy, not to delegate top-level tenant control. Proper onboarding should be done by existing admins with scoped admin roles and group/OU-based controls.

This is the least disruptive and most policy-aligned approach. The domain is already verified and actively used with SAML SSO, so the correct action is to coordinate with the existing Super Administrator to onboard the marketing unit into the current tenant. Use OUs and groups to segment users, enforce SAML sign-in, and apply appropriate policies. Then grant BigQuery/Vertex AI access via group-based IAM in the correct Google Cloud projects.

Question analysis

Core concept: This question tests identity architecture on Google Cloud: Cloud Identity/Google Workspace tenancy, domain verification ownership, and federated SSO (SAML 2.0) as the authoritative sign-in method. It also touches organizational governance (least disruption, centralized policy enforcement) aligned with the Google Cloud Architecture Framework’s security foundations (centralized identity, consistent policy, least privilege).

Why the answer is correct: Because example.com is already verified and actively used in an existing Google Workspace tenant with SAML SSO, the company already has an authoritative identity boundary for that domain. A verified domain cannot be cleanly “re-created” in a separate Cloud Identity tenant without significant disruption. The least disruptive, policy-compliant path is to collaborate with the existing Super Administrator and onboard the marketing analytics unit into the current tenant. This preserves company-owned identities (same corporate domain), continues using the corporate SAML 2.0 IdP for sign-in, and avoids breaking existing users, groups, and app integrations. The unit can then access BigQuery and Vertex AI via Cloud Identity/Workspace identities, groups, and (ideally) Cloud Identity Groups mapped to Google Cloud IAM.

Key features / configurations: Use Organizational Units (OUs) and Groups to segment the marketing unit, apply context-aware access and security policies, and manage app access. Ensure SAML SSO remains enforced, and use group-based IAM bindings in Google Cloud projects for BigQuery/Vertex AI. If needed, use separate Google Cloud organizations/projects and centralized IAM with groups, rather than separate identity tenants.

Common misconceptions: It may seem simpler to create a new tenant for the unit, but domain verification and identity lifecycle become fragmented, and SSO policy consistency becomes harder. Domain contestation is not a routine “move” mechanism and is intended for ownership disputes, not internal re-architecture.

Exam tips: When a domain is already verified in Workspace/Cloud Identity, assume it is the authoritative tenant. Prefer onboarding via OUs/groups and Google Cloud resource hierarchy (org/folders/projects) rather than creating parallel identity tenants. Look for answers that preserve centralized identity, minimize disruption, and maintain federation requirements.

Question 2 (Select 2)

Your company operates eight autonomous product studios, each with approximately 3,000 users and contractors, and about 1,200 Google Cloud projects distributed across those studios. You must delegate access control administration with the following requirements:
- Each studio must administer access only for its own projects and not see or change other studios' projects.
- Access must be manageable at scale across hundreds of projects per studio.
- When a user transfers to a different studio or leaves the company, their access must be removed within 1 hour.
- The authoritative source for users and groups is the on-premises Active Directory, and Google accounts are in Cloud Identity.
What should you do? (Choose two.)

VPC Service Controls creates service perimeters to reduce data exfiltration risk and control access to Google-managed services from outside a perimeter. It does not provide a scalable delegation model for IAM administration, nor does it inherently prevent a studio admin from viewing/changing IAM on other studios’ projects if they have permissions. It addresses boundary protection, not organizational access administration.

Creating a folder per studio and placing that studio’s projects under it is the canonical Google Cloud approach for delegating administration. Folder-level IAM policies inherit to all descendant projects, enabling consistent access across hundreds of projects. You can grant studio admins permissions (e.g., Project IAM Admin) only on their folder, preventing visibility/control over other studios’ folders and projects.

Cloud Identity Organizational Units (OUs) are used for applying identity and device policies within Cloud Identity/Google Workspace. Google Cloud IAM does not support assigning IAM policies directly to Cloud Identity OUs as principals or as a scoping mechanism. Access control in Google Cloud is scoped by resource hierarchy (org/folder/project) and granted to principals like users, groups, or service accounts.

Using project naming conventions with IAM Conditions is fragile and hard to operate at scale. Conditions evaluate request attributes and can be complex to maintain, and naming conventions are not a security boundary (projects can be renamed or created incorrectly). This also doesn’t naturally support delegated administration or prevent cross-studio visibility; it’s an anti-pattern compared to folders and group-based IAM.

Google Cloud Directory Sync (GCDS) synchronizes users and group memberships from on-prem Active Directory into Cloud Identity, which is critical when AD is authoritative. By tying Google Cloud IAM bindings to Cloud Identity groups, changes in AD group membership (transfer/termination) propagate and remove access quickly, meeting the 1-hour deprovisioning requirement when sync is scheduled appropriately.

Question analysis

Core concept: This question tests scalable IAM delegation using Google Cloud resource hierarchy (organization/folders/projects) and centralized identity lifecycle using Cloud Identity integrated with on-prem Active Directory.

Why the answer is correct: To ensure each studio can administer access only for its own projects and do so at scale, you should align projects to the resource hierarchy: create one folder per studio and move that studio’s projects under it. Then grant IAM roles to studio-specific Google Groups at the folder level. Folder-level IAM policies inherit to all projects beneath, enabling consistent access across hundreds of projects without per-project administration. To meet the “remove access within 1 hour” requirement with AD as the authoritative source, you must synchronize users and group memberships from AD into Cloud Identity using Google Cloud Directory Sync (GCDS). When a user changes studios or leaves, removing them from the relevant AD groups (or disabling the account) propagates to Cloud Identity and therefore removes access tied to those groups.

Key features / best practices:
- Use Cloud Resource Manager hierarchy (Org → Folders → Projects) for administrative boundaries and policy inheritance.
- Use Google Groups as the unit of authorization; bind roles to groups, not individuals, for auditability and scale.
- Use GCDS for near-real-time sync (frequency configurable) and pair with Cloud Identity/Google Workspace account suspension to rapidly revoke access.
- Optionally combine with organization policies and least-privilege roles to reduce blast radius.

Common misconceptions: VPC Service Controls is often mistaken as an IAM delegation tool; it is primarily a data exfiltration/boundary control mechanism and does not prevent visibility or IAM changes across projects by itself. Cloud Identity OUs are for identity/device policy management, not a target for Google Cloud IAM bindings. IAM Conditions based on project names are brittle and not a true administrative boundary.

Exam tips: When you see “hundreds of projects” and “delegate admin per business unit,” think “folders + group-based IAM.” When you see “authoritative source is AD” and “rapid deprovisioning,” think “GCDS (and/or federation) + group lifecycle in AD,” ensuring access is driven by group membership rather than manual IAM edits.
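To make the folder-based delegation concrete, here is a minimal Python sketch (Cloud Resource Manager v3 REST API via google-api-python-client) that binds a studio admin group to roles/resourcemanager.projectIamAdmin on that studio's folder only; the folder ID and group address are hypothetical placeholders, and IAM inheritance then applies the grant to every project under the folder.

```python
# Minimal sketch: delegate IAM administration for one studio by binding a studio
# admin group at the folder level. Folder ID and group address are hypothetical.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v3")
folder = "folders/111111111111"  # hypothetical folder for studio-alpha

# Read-modify-write of the folder IAM policy; the returned etag guards against
# concurrent edits.
policy = crm.folders().getIamPolicy(resource=folder, body={}).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/resourcemanager.projectIamAdmin",
    "members": ["group:studio-alpha-admins@example.com"],
})
crm.folders().setIamPolicy(resource=folder, body={"policy": policy}).execute()
```

GCDS itself is configured through its own Configuration Manager against Active Directory; because the binding above references a group rather than individual users, removing someone from the AD group (and letting GCDS sync) revokes their access without touching IAM.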

Question 3

A team operates 60 Compute Engine VMs in a managed instance group that must securely read database passwords and third‑party API tokens both at boot and on demand; security policy requires centrally stored secrets with per‑secret IAM controls, access over TLS, versioning and rotation every 90 days, audit logs for every read, optional CMEK support, and a prohibition on storing secrets in instance metadata or guest attributes—what should you recommend?

Cloud KMS is for managing cryptographic keys and performing encrypt/decrypt/sign operations. While it supports IAM, audit logs, and rotation of keys, it is not a full secret storage system for passwords and API tokens with secret-version retrieval semantics. You could build an envelope-encryption solution, but it adds operational complexity and doesn’t natively satisfy “centrally stored secrets with per-secret IAM controls” and straightforward audited reads like Secret Manager.

Compute Engine guest attributes can store small pieces of data retrievable by the VM, but they are not intended for sensitive secrets governance. They lack dedicated secret lifecycle features such as robust per-secret IAM, built-in versioning/rotation workflows, and standardized secret access audit patterns. The question explicitly prohibits storing secrets in guest attributes, making this option noncompliant regardless of technical feasibility.

Compute Engine custom metadata is commonly used for configuration and bootstrapping, but it is not a secrets manager. Metadata is accessible to processes on the VM and can be exfiltrated if the instance is compromised; it also lacks strong secret lifecycle controls like versioning and rotation management. The policy explicitly forbids storing secrets in instance metadata, so this option is disallowed.

Secret Manager is the correct service for centrally storing and accessing database passwords and API tokens. It provides per-secret IAM, secret versioning, access over TLS via Google APIs, and integrates with Cloud Audit Logs for read events (with Data Access logs enabled). It supports automated rotation patterns (rotation schedules and Pub/Sub notifications) and optional CMEK using Cloud KMS, matching all stated requirements and avoiding metadata/guest attributes.

Question analysis

Core Concept: This question tests choosing the correct managed secret storage service for applications running on Compute Engine, emphasizing centralized secret management, fine-grained IAM, auditability, rotation/versioning, and secure retrieval. In Google Cloud, Secret Manager is the purpose-built service for application secrets, while Cloud KMS is primarily for key management and cryptographic operations.

Why the Answer is Correct: Secret Manager directly meets every stated requirement: secrets are centrally stored; access is controlled with per-secret IAM (Secret Manager Secret/Secret Version permissions); retrieval uses Google APIs over TLS; secrets are versioned (each update creates a new version); rotation can be implemented with built-in rotation schedules and Pub/Sub notifications (commonly paired with Cloud Functions/Cloud Run jobs) on a 90-day cadence; and Cloud Audit Logs record Admin Activity and Data Access logs for secret reads (when Data Access logging is enabled for the project). It also supports optional CMEK via Cloud KMS to encrypt secrets at rest with customer-managed keys. Finally, it avoids prohibited patterns like instance metadata or guest attributes.

Key Features / Best Practices: Use Workload Identity (service accounts attached to the MIG) and grant least privilege at the secret level (e.g., roles/secretmanager.secretAccessor on only required secrets). Prefer accessing secrets at runtime via the Secret Manager API rather than baking them into images. Use version aliases like "latest" carefully; pin versions for controlled rollouts. Implement rotation with Secret Manager rotation + an automated rotator that updates the upstream system (DB password/API token) and then adds a new secret version. Ensure Data Access audit logs are enabled and routed to a central logging sink for compliance.

Common Misconceptions: Cloud KMS is often chosen because it is “security” and supports CMEK, but KMS manages encryption keys, not application secrets lifecycle (no native per-secret secret retrieval semantics, rotation workflows for passwords/tokens, or secret version access patterns). Metadata and guest attributes are convenient for bootstrapping but violate the explicit policy prohibition and are not designed for strong secret governance.

Exam Tips: On the Security Engineer exam, map requirements to the managed service designed for that artifact: keys/cert crypto operations -> Cloud KMS/Certificate Authority Service; application secrets (passwords, tokens) -> Secret Manager. Look for keywords like versioning, rotation, per-secret IAM, and audit logs for reads—these strongly indicate Secret Manager. Also note policy constraints (no metadata/guest attributes) that eliminate common bootstrapping shortcuts.
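As a rough illustration of the runtime retrieval pattern described above, the sketch below reads a secret version from Secret Manager with the google-cloud-secret-manager client; the project and secret names are hypothetical, and it assumes the MIG's attached service account holds roles/secretmanager.secretAccessor on that secret.

```python
# Minimal sketch: a VM process reading a secret version at boot or on demand.
# Assumes the instance's attached service account has secretAccessor on the secret;
# project and secret names are hypothetical placeholders.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()  # uses the VM's service account
name = "projects/video-prod/secrets/db-password/versions/latest"

response = client.access_secret_version(request={"name": name})  # API call over TLS
db_password = response.payload.data.decode("utf-8")
```

Pinning an explicit version number instead of "latest" gives more controlled rollouts, as noted above.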

Question 4 (Select 2)

Your security team just created a custom-mode VPC named seg-west (10.30.0.0/16) with one subnet us-west1-prim (10.30.10.0/24) and intentionally no user-defined firewall rules; a VM named gw-01 in that subnet has an ephemeral external IP, and the team tests: (1) from gw-01, curl https://8.8.8.8:443, (2) from the public internet, attempt SSH (TCP/22) and HTTP (TCP/80) to gw-01, and (3) attempt SMTP egress on TCP/25 from gw-01; which two behaviors are guaranteed solely by Google Cloud's implied VPC firewall rules before any custom rules are added? (Choose two.)

Correct. Google Cloud VPC has an implied egress allow rule that permits outbound traffic to any destination (0.0.0.0/0) unless you add an explicit egress deny (or more restrictive egress allow-only posture using priorities and rules). With an ephemeral external IP, gw-01 can reach the internet directly, so outbound HTTPS to 8.8.8.8:443 is allowed by default.

Correct. Google Cloud VPC has an implied ingress deny rule that blocks all new inbound connections unless you create an ingress allow rule. Therefore, unsolicited inbound SSH (tcp/22) and HTTP (tcp/80) from the public internet to gw-01 will be denied. This is independent of whether the VM has an external IP; the external IP only makes it reachable, not allowed.

Incorrect. TCP/25 is not universally blocked by implied VPC firewall rules. Any SMTP limitations are typically due to anti-abuse measures, organization policy constraints, or external provider controls, not the baseline implied firewall behavior. From a pure VPC firewall perspective, egress is allowed by default, so you cannot claim TCP/25 is implicitly blocked regardless of user rules.

Incorrect. This describes a default-deny egress posture, which is not how GCP VPC works out of the box. By default, egress is allowed via an implied rule. You can implement default-deny egress by creating higher-priority egress deny rules and then adding explicit egress allow rules for required destinations/ports, but that is not the implied baseline.

Incorrect. There is no implied allow for inbound HTTP (tcp/80). Inbound traffic is denied by default unless explicitly allowed. HTTP access would require an ingress firewall rule (e.g., allow tcp:80 from 0.0.0.0/0 or a restricted CIDR) and, in many architectures, would typically be fronted by an external HTTP(S) load balancer with appropriate firewall rules for health checks and proxy ranges.

Question analysis

Core concept: This question tests Google Cloud VPC firewall behavior when you create a VPC network with no user-defined firewall rules. In GCP, VPC firewall rules are stateful and evaluated at the VPC level. Even if you create no custom rules, Google provides implied (system) rules that define baseline ingress/egress behavior.

Why the answer is correct: A is correct because an implied egress allow rule exists: by default, egress traffic from VM instances is allowed to all destinations (0.0.0.0/0) unless you create higher-priority egress deny rules. Therefore, gw-01 can initiate outbound connections such as curl to https://8.8.8.8:443, assuming routing/NAT is available (it has an ephemeral external IP, so direct internet egress works). B is correct because an implied ingress deny rule exists: by default, new inbound connections to VM instances are denied unless you create an ingress allow rule (for example, allow tcp:22 from a trusted source range). Thus, inbound SSH (22) and HTTP (80) attempts from the public internet will be blocked by default.

Key features and best practices:
- Implied rules: (1) deny all ingress, (2) allow all egress. These are foundational boundary protections.
- Stateful firewalling: return traffic for established connections is allowed automatically, so outbound-initiated flows work without an explicit ingress rule.
- Least privilege: best practice is to add explicit ingress allows only for required ports and restricted source ranges (e.g., IAP TCP forwarding or a bastion), and consider explicit egress controls (deny/allow lists) for exfiltration prevention.

Common misconceptions: Many confuse “no user-defined rules” with “no traffic allowed.” In GCP, that is only true for ingress; egress is allowed by default. Another misconception is that certain ports (like SMTP/25) are always blocked by VPC firewall; in reality, SMTP restrictions are an organization policy/platform constraint in some contexts, not an implied VPC firewall rule.

Exam tips: Memorize the two implied VPC firewall rules (deny ingress, allow egress) and remember statefulness. When questions mention “before any custom rules,” focus on these implied behaviors. Also separate VPC firewall behavior from other controls (Org Policies, provider anti-abuse SMTP limits, routes, Cloud NAT, and external IP presence).
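Because the implied rules deny all ingress, reaching gw-01 on tcp/22 would require an explicit allow rule such as the sketch below (Compute Engine REST API via google-api-python-client); the project ID, source range, and target tag are hypothetical examples, not values from the question.

```python
# Minimal sketch: an explicit ingress allow rule for SSH from a trusted range.
# Without a rule like this, the implied ingress deny blocks tcp/22 and tcp/80.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
rule = {
    "name": "allow-ssh-from-corp",
    "network": "global/networks/seg-west",
    "direction": "INGRESS",
    "priority": 1000,
    "sourceRanges": ["203.0.113.0/24"],            # trusted corporate range (example)
    "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}],
    "targetTags": ["gw"],                          # scope to tagged VMs such as gw-01
}
compute.firewalls().insert(project="net-demo-prj", body=rule).execute()
```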

Question 5

Your video analytics platform operates 400 Compute Engine VMs across 12 projects in us-central1, europe-west1, and asia-southeast1. Rapid hiring has caused base image drift, inconsistent CIS-level hardening, and missed critical OS patches. Security requires that: (1) all new instances launch only from organization-approved hardened images; (2) critical OS patches are applied within 48 hours across all projects; and (3) baseline controls remain enforced throughout each VM's lifecycle. You need a centrally managed approach to standardize images and automate enforcement from provisioning through ongoing operations. What should you do?

Correct. OS Config (VM Manager) provides centralized patch deployments/jobs, compliance reporting, and OS policy assignments to continuously enforce baseline configuration and patch SLAs across multiple projects. A dedicated image project with hardened images and image families standardizes provisioning so new VMs launch from approved golden images. Together, this covers provisioning control, 48-hour patching, and lifecycle enforcement.

Incorrect. Sole-tenant nodes address workload isolation and certain compliance requirements, not image drift or patch automation. Policy Controller (Anthos) is primarily for Kubernetes admission control and configuration policy; it is not the right mechanism to restrict Compute Engine VM image sources or enforce OS patching on VMs. This option doesn’t meet the 48-hour patch requirement or VM lifecycle enforcement.

Incorrect. A Cloud Build pipeline for baking hardened images is useful for standardization, but Artifact Registry is for container and language artifacts, not Compute Engine VM images (which live in Compute Engine image resources). More importantly, image baking alone doesn’t ensure existing VMs receive critical patches within 48 hours or that baseline controls remain enforced throughout the VM lifecycle without an additional enforcement tool like OS Config.

Incorrect. Security Command Center Enterprise improves visibility (asset discovery, posture findings, vulnerability insights) and can integrate with response tooling, but it is not the primary service to enforce OS patching or continuously apply configuration state on Compute Engine VMs. SCC is largely detect/monitor; you would still need OS Config (or equivalent) to meet the explicit 48-hour patch and ongoing enforcement requirements.

Question analysis

Core Concept: This question tests centralized VM security operations: standardized golden images plus continuous configuration/patch enforcement at scale across projects and regions. The key Google Cloud services are Compute Engine image management (image projects, image families, IAM/Org Policy) and VM Manager (OS Config) for patching and OS policy compliance.

Why the Answer is Correct: Option A combines the two required control planes: (1) a controlled image supply chain by maintaining hardened, organization-approved images in a dedicated image project and exposing them via image families for consistent provisioning; and (2) ongoing enforcement using OS Config to apply critical patches within a defined window (48 hours) and to continuously evaluate/enforce baseline configuration through OS policies. This meets all three requirements: approved images for new instances, rapid patch rollout across 12 projects, and lifecycle enforcement (not just point-in-time checks).

Key Features / How to Implement:
- Golden images: Create a central “image factory” project that publishes hardened images (CIS-aligned) as Compute Engine images and uses image families to ensure instance templates always pick the latest approved version. Control access with IAM and optionally Organization Policy constraints (e.g., restrict image projects).
- Patching: Use OS Config patch jobs and patch deployments (with schedules and disruption controls) to target fleets by labels, zones/regions, or projects, and to enforce patch SLAs.
- Baseline controls: Use OS Config OS policies (OS policy assignments) to enforce packages, files, services, and configuration state continuously, with compliance reporting.
- Operations alignment: This supports Google Cloud Architecture Framework operational excellence and security principles by standardizing builds, automating remediation, and providing auditable compliance signals.

Common Misconceptions:
- Monitoring-only tools (like Security Command Center) don’t enforce patches/config by themselves.
- CI/CD image baking alone doesn’t ensure existing VMs stay patched within 48 hours.
- Sole-tenant nodes are about isolation/compliance placement, not image governance and patch automation.

Exam Tips: When requirements include both “only approved images at provisioning” and “continuous enforcement + patch SLAs,” look for a pairing of image governance (central image project + families + IAM/Org Policy) with OS Config for patching and policy-based configuration management. Distinguish detection (SCC) from enforcement (OS Config/Org Policy).
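One way to anchor provisioning to the approved golden images is to point instance templates at an image family in the central image project, as in this illustrative sketch (Compute Engine REST API via google-api-python-client); every project name, label, and machine type here is a hypothetical placeholder.

```python
# Minimal sketch: an instance template that only provisions from the organization's
# hardened image family in a central image project. All names are hypothetical.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
template = {
    "name": "video-worker-template-v3",
    "properties": {
        "machineType": "e2-standard-4",
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                # The image family resolves to the latest approved hardened image.
                "sourceImage": "projects/img-factory-prod/global/images/family/cis-debian-12",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
        "labels": {"patch-group": "critical-48h"},  # can be used to target OS Config patch jobs
    },
}
compute.instanceTemplates().insert(project="video-analytics-prj-01", body=template).execute()
```

Because the template references the family rather than a specific image, MIG instances created from it pick up the latest approved hardened image automatically.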


Question 6

In a fintech company's Google Cloud environment, the SOC needs the on-premises SIEM to ingest only Compute Engine Admin Activity audit logs and VPC Flow Logs from two projects (proj-sec-01 and proj-sec-02). Access must be read-only, limited to the most recent 30 days, and must not require creating or distributing any long-lived service account keys. Your enterprise IdP is SAML 2.0 and supports OIDC via workforce identity federation; the SIEM can call Google Cloud APIs using short-lived OIDC tokens. You want to minimize data exposure in Google Cloud and avoid copying all logs to external systems. What should you do?

Routing all logs to BigQuery copies data into another analytics store and usually increases exposure surface (dataset access, query results, extracts). Even if the SIEM filters in BigQuery, you still granted access to a broader dataset unless you build additional controls (authorized views, row-level security). It also adds cost (streaming ingestion/storage/query) and is not the most direct way to provide least-privilege read access to recent logs.

This is the best fit because Cloud Logging can natively retain logs for only 30 days in a dedicated bucket and expose just the required subset through a log view. The view can be filtered to only Compute Engine Admin Activity audit logs and VPC Flow Logs from proj-sec-01 and proj-sec-02, which satisfies the least-privilege and read-only requirements. Workforce Identity Federation lets the SIEM use short-lived OIDC-based credentials from the enterprise IdP to call the Cloud Logging API without creating or distributing long-lived service account keys. This approach also avoids unnecessary copying of logs into external systems or secondary analytics stores.

A Pub/Sub sink plus Dataflow forwarding is an export pipeline that pushes logs toward the on-prem SIEM, which contradicts the requirement to avoid copying all logs to external systems and to minimize data exposure. It also introduces operational complexity (pipeline reliability, scaling, backpressure, DLQs) and additional cost. While it can filter logs, it is not the least-exposure approach compared to a Logging view.

Providing a JSON service account key violates the explicit requirement to avoid long-lived keys. It also increases risk of credential leakage and key management burden (rotation, revocation, audit). Storing logs in Cloud Storage is another form of copying/exporting away from Cloud Logging’s native controls, and fine-grained, time-bounded access is harder to enforce compared to Logging retention and views.

Question analysis

Core concept: This tests Cloud Logging’s native log storage controls (log buckets, retention, and log views) combined with external identity access using Workforce Identity Federation. The goal is least-privilege, time-bounded access to specific log types across projects without exporting/copying logs broadly or using long-lived service account keys.

Why the answer is correct: Option B keeps the logs in Cloud Logging, which minimizes data exposure and avoids unnecessary exports to other storage or analytics systems. A dedicated log bucket can enforce 30-day retention, and a log view can expose only Compute Engine Admin Activity audit logs and VPC Flow Logs from proj-sec-01 and proj-sec-02. The SIEM can authenticate using short-lived OIDC tokens from the enterprise IdP through Workforce Identity Federation, and then read logs through the Cloud Logging API with read-only access.

Key features/configurations:
- Create a dedicated Cloud Logging bucket and set retention to 30 days.
- Route only the required logs from proj-sec-01 and proj-sec-02 into that bucket using log sinks if centralization is needed.
- Create a log view that filters to:
  - Admin Activity audit logs for Compute Engine
  - VPC Flow Logs
- Grant the external workforce principal least-privilege read access to the log view, such as roles/logging.viewAccessor or equivalent minimal logging read permissions.
- Configure Workforce Identity Federation with the SAML/OIDC-capable enterprise IdP so the SIEM obtains short-lived credentials without service account keys.

Common misconceptions:
- BigQuery is useful for analytics, but exporting logs there creates another copy of the data and broadens the exposure surface unless additional controls are added.
- Pub/Sub and Dataflow are common for streaming exports to SIEMs, but they increase operational complexity and move data out of Cloud Logging rather than exposing only a tightly scoped view.
- Service account keys are not appropriate when the requirement explicitly forbids long-lived credentials.

Exam tips: When the requirement mentions an external enterprise IdP, short-lived OIDC-based access, and no service account keys for external users or systems acting as workforce identities, think Workforce Identity Federation. When the requirement emphasizes least-privilege access to only a subset of logs without broad export, think Cloud Logging buckets, retention policies, log views, and IAM-scoped access.
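The sketch below shows, under stated assumptions, how the SIEM could read only the scoped log view through the Cloud Logging API with the google-cloud-logging client; the bucket, view, and project names are hypothetical, and it assumes the short-lived Workforce Identity Federation credentials are supplied through the standard external-credentials configuration file rather than a service account key.

```python
# Minimal sketch: read entries from a scoped log view only. Application Default
# Credentials are expected to resolve to a Workforce Identity Federation credential
# configuration (no service account key). Names below are hypothetical.
from google.cloud import logging

client = logging.Client(project="proj-sec-01")
view = "projects/proj-sec-01/locations/global/buckets/siem-30d/views/siem-view"

# Only the logs exposed by the view (Admin Activity for Compute Engine and
# VPC Flow Logs) are readable through this resource name.
entries = client.list_entries(
    resource_names=[view],
    filter_='timestamp >= "2025-11-01T00:00:00Z"',
)
for entry in entries:
    print(entry.log_name, entry.timestamp)
```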

Question 7 (Select 2)

Your team is rolling out a global event-ticketing web front end that peaks at 10,000 requests per second across us-central1, europe-west1, and asia-southeast1; to block XSS/SQLi and abusive IP ranges before traffic hits your services, you plan to enforce Google Cloud Armor WAF and rate limiting at the edge—what two infrastructure prerequisites must be in place for the Cloud Armor security policy to actually evaluate and filter requests? (Choose two.)

Incorrect. An external SSL proxy load balancer is not the required entry point for Cloud Armor WAF protections aimed at HTTP attacks such as XSS and SQL injection. Those protections depend on HTTP(S)-aware request inspection, which is provided by external HTTP(S) load balancing rather than generic SSL proxying. The question specifically describes a web front end and WAF behavior, which maps to external HTTP(S) load balancers, not SSL Proxy.

Incorrect. This option describes a rule-matching characteristic rather than an infrastructure prerequisite that must exist for policy enforcement. Cloud Armor does evaluate Layer 7 request attributes for HTTP(S) traffic, but knowing that does not establish the required deployment architecture. Also, Cloud Armor can match on more than just URI and headers, so the statement is not a precise prerequisite and is partly misleading.

Incorrect. Premium Network Service Tier is commonly used with global external HTTP(S) load balancing, but it is not the prerequisite the question is testing for Cloud Armor policy evaluation. The essential requirements are the supported external HTTP(S) load-balancer entry point and an eligible external backend service where the policy is attached. Treating Premium Tier itself as one of the two required prerequisites overstates its role and makes this option less accurate than D.

Correct. Cloud Armor security policies are attached to backend services that are used by supported external load balancers, so the backend service must be configured for external load balancing. In exam terms, this is represented by loadBalancingScheme set to EXTERNAL, which identifies the backend service as part of an external load-balancing path. Without an eligible external backend service, there is no supported place for the Cloud Armor policy to be enforced for incoming web traffic.

Correct. Cloud Armor WAF and HTTP rate limiting are evaluated when requests enter through an external HTTP(S) load balancer, such as the external Application Load Balancer or classic external HTTP(S) load balancer. This Layer 7 proxy path gives Google Cloud Armor access to HTTP request attributes and enables preconfigured WAF signatures for threats like XSS and SQL injection. If traffic bypasses this entry point, the policy cannot inspect and filter the requests as described in the question.

Question analysis

Core concept: Google Cloud Armor security policies are enforced only on supported external load-balancing paths, primarily external HTTP(S) load balancers for WAF features such as XSS/SQLi protection and HTTP rate limiting. For the policy to actually inspect and filter requests, traffic must traverse the external HTTP(S) proxy layer and the policy must be attached to an eligible external backend service.

Why the answer is correct: E is correct because Cloud Armor WAF and HTTP rate limiting are evaluated on requests handled by an external HTTP(S) load balancer, such as the external Application Load Balancer or classic external HTTP(S) load balancer. This is the Layer 7 entry point where Google can inspect HTTP attributes, apply preconfigured WAF signatures, and enforce rate-based rules before forwarding traffic to backends. If traffic does not enter through this supported HTTP(S) load-balancing stack, the Cloud Armor policy will not evaluate the web requests in the intended way. D is correct because the Cloud Armor security policy is associated with a backend service used by the external load balancer, and that backend service must be an external backend service. In practice, this means the backend service is configured for external load balancing (loadBalancingScheme=EXTERNAL), which is the supported attachment point for the policy in this scenario. Without an eligible external backend service behind the external HTTP(S) load balancer, there is no valid enforcement point for the Cloud Armor policy.

Key features:
- Cloud Armor WAF and HTTP rate limiting are Layer 7 protections evaluated on supported external HTTP(S) load balancers.
- Security policies are attached to backend services, not directly to instances or unmanaged public endpoints.
- External Application Load Balancer and classic external HTTP(S) load balancer are the common exam-relevant entry points for Cloud Armor web protection.

Common misconceptions:
- Premium Tier is often associated with global load balancing, but it is not the core prerequisite being tested here for Cloud Armor policy evaluation.
- External SSL Proxy load balancers are not the standard answer when the question explicitly asks about WAF protections like XSS and SQLi, which are HTTP-specific.
- A statement about what fields rules can match is not the same as an infrastructure prerequisite.

Exam tips: When a question mentions Cloud Armor WAF, XSS/SQLi, or HTTP rate limiting, first identify the supported Layer 7 entry point: an external HTTP(S) load balancer. Then look for the valid policy attachment target: the external backend service behind that load balancer. Distinguish true deployment prerequisites from descriptive statements about rule behavior or unrelated load balancer types.
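A minimal sketch of the two prerequisites in code form follows (Compute Engine REST API via google-api-python-client): create a Cloud Armor policy, add a preconfigured WAF rule, and attach the policy to the external backend service behind the external HTTP(S) load balancer. Resource names are hypothetical, and in practice each operation is asynchronous and should be waited on before the next step.

```python
# Minimal sketch: Cloud Armor policy + preconfigured SQLi rule, attached to an
# EXTERNAL backend service. All names are hypothetical placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project = "ticketing-prod"

# 1) Create the security policy.
compute.securityPolicies().insert(
    project=project, body={"name": "edge-waf-policy"}
).execute()

# 2) Add a preconfigured WAF rule that blocks SQL injection attempts.
compute.securityPolicies().addRule(
    project=project,
    securityPolicy="edge-waf-policy",
    body={
        "priority": 1000,
        "action": "deny(403)",
        "match": {"expr": {"expression": "evaluatePreconfiguredExpr('sqli-stable')"}},
    },
).execute()

# 3) Attach the policy to the external backend service used by the load balancer.
compute.backendServices().setSecurityPolicy(
    project=project,
    backendService="web-frontend-be",
    body={"securityPolicy": "global/securityPolicies/edge-waf-policy"},
).execute()
```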

Question 8

You manage security for a media analytics firm hosting 3 internal web dashboards (2 on Cloud Run and 1 on a managed instance group behind a global external HTTP(S) Load Balancer) that must be reachable over the public internet but only by 500 employees in your Google Workspace domain; you need to enforce per-user and group-based access with OAuth 2.0 sign-in, optionally apply device-based restrictions via Access Context Manager, avoid using client VPNs or custom reverse proxies, and capture detailed access audit logs in Cloud Logging. Which Google Cloud service should you use to centrally enforce authentication and fine-grained access control for these applications?

Identity-Aware Proxy (IAP) provides centralized, identity-based access control for web apps. It enforces Google identity OAuth 2.0 authentication and uses IAM to authorize specific users or Google Workspace groups per application. IAP integrates with Access Context Manager for device/context-based access levels and produces detailed audit logs in Cloud Logging. It avoids VPNs and custom reverse proxies while protecting Cloud Run and apps behind HTTP(S) Load Balancing.

Cloud NAT enables outbound (egress) internet access for private resources without external IPs. It does not authenticate end users, perform OAuth sign-in, or enforce group-based authorization for inbound access to web applications. Cloud NAT is a network translation service, not an application access control layer, and it would not meet requirements for per-user access policies or detailed user access auditing for dashboards.

Google Cloud Armor is a WAF and DDoS protection service for HTTP(S) Load Balancers. It can enforce IP allow/deny lists, geo restrictions, rate limiting, and preconfigured WAF rules, but it does not provide per-user authentication via OAuth 2.0 or group-based access tied to Google Workspace identities. It may complement IAP for threat protection, but it cannot replace IAP for identity-aware access control.

Shielded VMs protect VM instances against boot-level and rootkit-style attacks using secure boot, vTPM, and integrity monitoring. This improves host security posture for Compute Engine workloads, but it does not provide centralized user authentication, OAuth sign-in, or authorization for web dashboards. It also does not address the requirement for identity-based access control and user-level audit logging for application access.

Question analysis

Core Concept: This question tests centralized, identity-based access control for web applications exposed to the public internet without using VPNs or custom proxies. In Google Cloud, the primary service for per-user OAuth 2.0 authentication and authorization in front of HTTP(S) apps is Identity-Aware Proxy (IAP), often paired with Google Workspace identities and optionally Access Context Manager (BeyondCorp).

Why the Answer is Correct: Identity-Aware Proxy sits in front of supported Google Cloud backends (including Cloud Run and external HTTP(S) Load Balancers) and enforces Google identity sign-in (OAuth 2.0) before any request reaches the application. It supports fine-grained access via IAM (users, groups, and service accounts), which directly matches the requirement to allow only 500 employees in a Google Workspace domain and to apply group-based access. Because IAP is identity-aware, it avoids network-based controls like VPNs and does not require building a custom reverse proxy.

Key Features / Configurations / Best Practices:
1) IAM-based authorization: grant the IAP-secured Web App User role to Workspace groups for each app.
2) OAuth consent and brand: configure an OAuth brand and client for IAP.
3) Device/context restrictions: integrate with Access Context Manager to require access levels (e.g., managed devices, compliant OS, trusted IP ranges) for IAP-protected resources.
4) Centralized logging: IAP generates detailed audit logs (Admin Activity / Data Access depending on configuration) and request logs that can be exported via Log Router to SIEM or storage, satisfying the Cloud Logging audit requirement.
5) Architecture Framework alignment: implements “Identity and access management” and “Network security” principles by shifting trust from network location to verified identity and context (BeyondCorp).

Common Misconceptions: Cloud Armor can restrict traffic by IP/geo and mitigate L7 attacks, but it does not provide per-user OAuth sign-in or group-based authorization. Cloud NAT is egress-only and unrelated to inbound user authentication. Shielded VMs harden VM boot integrity but do not control who can access web dashboards.

Exam Tips: When you see “public internet but only specific employees,” “OAuth 2.0 sign-in,” “group-based access,” “no VPN,” and “Cloud Logging auditability,” think IAP + IAM + (optional) Access Context Manager. Also remember IAP works naturally with HTTP(S) Load Balancing and Cloud Run, making it a common exam pattern for BeyondCorp-style access.
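Beyond the IAM grant, Google recommends that IAP-protected apps verify the signed assertion header as defense in depth; the sketch below does this with google-auth, assuming a hypothetical project number and backend service ID in the audience string.

```python
# Minimal sketch (defense in depth): verify the signed JWT that IAP adds to each
# request. The audience string (project number / backend service ID) is hypothetical.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_AUDIENCE = "/projects/123456789/global/backendServices/987654321"

def verify_iap_jwt(iap_jwt: str) -> str:
    """Returns the authenticated user's email if the IAP assertion is valid."""
    claims = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=IAP_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return claims["email"]

# In a request handler: verify_iap_jwt(request.headers["x-goog-iap-jwt-assertion"])
```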

Question 9

Your security team plans to roll out VPC Service Controls across 12 production projects organized into 4 service perimeters and wants a 14-day evaluation period to test perimeter rule changes and observe potential violations in logs without interrupting any existing access paths (including BigQuery and Cloud Storage requests from on-prem via Private Service Connect). Which VPC Service Controls mode should you use to validate the impact safely while ensuring no requests are blocked during the evaluation window?

Cloud Run is a serverless compute platform and is unrelated to VPC Service Controls operating modes. While Cloud Run services can be placed behind VPC controls indirectly (for example, accessing protected services), it does not provide a “mode” for VPC SC evaluation or enforcement. Selecting Cloud Run confuses an application runtime product with a security boundary control feature.

“Native” is not a VPC Service Controls mode used to evaluate or enforce perimeters. VPC SC perimeters are configured and then applied either in enforced behavior or in dry-run evaluation. If you see terms like “native,” they typically relate to other product contexts (for example, native integrations) and not to the operational modes required for safe perimeter testing.

Enforced mode actively blocks requests that violate the service perimeter rules. This is appropriate when you are ready to prevent data exfiltration and can tolerate/expect access denials for non-compliant paths. In this scenario, the requirement explicitly says to avoid interrupting any existing access paths during a 14-day evaluation, so enforced mode would be too risky and would likely cause outages.

Dry run mode evaluates requests against the proposed perimeter policy and produces violation logs, but it does not block traffic. This enables a safe evaluation period to test perimeter rule changes, identify unexpected dependencies (including hybrid/on-prem access via Private Service Connect), and tune access levels and ingress/egress rules before switching to enforced mode. It best matches “observe without interrupting.”

Question analysis

Core Concept: VPC Service Controls (VPC SC) provides a service perimeter around Google-managed services (for example, BigQuery and Cloud Storage) to reduce data exfiltration risk. Perimeters can be applied across multiple projects and can restrict access based on identity, network, and access level conditions. VPC SC supports an evaluation approach using “dry-run” to observe what would be denied without enforcing blocks.

Why the Answer is Correct: “Dry run” mode is designed specifically to validate perimeter configuration changes safely. In dry-run, VPC SC evaluates requests against the dry-run perimeter configuration and records violations in logs, but it does not block the requests. This matches the requirement for a 14-day evaluation window where the team wants to test rule changes and observe potential violations without interrupting any existing access paths, including on-premises access patterns such as BigQuery/Cloud Storage requests coming through Private Service Connect.

Key Features / Best Practices:
1) Dry-run perimeters: You can keep the enforced perimeter stable while iterating on a dry-run configuration to see the impact before turning on enforcement.
2) Logging and monitoring: Dry-run generates VPC SC violation logs (and related audit logs) that can be routed to SIEM/SOAR for analysis. This supports the Google Cloud Architecture Framework principle of “operational excellence” and “security by design” through measurable controls.
3) Gradual rollout: With 12 projects and 4 perimeters, dry-run enables staged validation per perimeter and reduces blast radius. After the evaluation, you can promote the tested rules to enforced mode.

Common Misconceptions: Many confuse “native” or “enforced” as required for “real” testing. However, enforced mode will actively deny non-compliant requests, which contradicts the requirement to avoid interruption. Another misconception is that you must enforce to see violations; dry-run is explicitly built to surface violations without blocking.

Exam Tips: When a question emphasizes “observe violations,” “test changes,” “no disruption,” or “evaluate impact,” the correct VPC SC choice is almost always “dry run.” If the question instead requires actively preventing data exfiltration immediately, then “enforced” is appropriate. Also remember that complex access paths (hybrid/on-prem, PSC, VPN/Interconnect) are exactly where dry-run helps uncover unexpected dependencies before enforcement.
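As a hedged illustration of staging a dry-run configuration, the sketch below patches a service perimeter's spec and sets useExplicitDryRunSpec via the Access Context Manager v1 API (google-api-python-client); the access policy, perimeter name, and project numbers are hypothetical placeholders.

```python
# Minimal sketch: stage perimeter changes as a dry-run spec so violations are only
# logged while the enforced configuration (status) stays unchanged. Names and
# project numbers are hypothetical.
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")
perimeter = "accessPolicies/222222222/servicePerimeters/prod_perimeter_1"

acm.accessPolicies().servicePerimeters().patch(
    name=perimeter,
    updateMask="spec,useExplicitDryRunSpec",
    body={
        "useExplicitDryRunSpec": True,  # evaluate 'spec' in dry-run mode only
        "spec": {
            "resources": ["projects/1111111111", "projects/2222222222"],
            "restrictedServices": ["bigquery.googleapis.com", "storage.googleapis.com"],
        },
    },
).execute()
```

The enforced configuration in status is untouched, so existing access paths keep working while dry-run violations accumulate in the logs for the 14-day window.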

Question 10

You are launching a payment reconciliation service on Cloud Run in asia-southeast1. A regulatory requirement mandates that application logs be retained for 10 years and that all log data remain within Singapore (asia-southeast1) at all times. The service emits approximately 40 GB of logs per day, and your team wants a low-overhead, cost-effective, Google-managed approach. What should you do?

Writing logs directly to Cloud Storage can meet retention and regionality if the bucket is in asia-southeast1 and has a retention policy. However, it requires application changes, loses Cloud Logging’s native querying/alerting, and is higher operational overhead. It also risks missing platform/system logs that Cloud Run sends to Cloud Logging. This is not the most Google-managed, low-overhead approach for Cloud Run logging compliance.

Cloud Run does not support installing traditional logging agents like on VMs, and sidecars add complexity and operational burden (build, deploy, patch, resource sizing). Also, you don’t need an agent to get logs into Cloud Logging—Cloud Run already streams stdout/stderr to Cloud Logging. The correct pattern is configuring Cloud Logging buckets and Log Router, not adding custom log shipping components.

Exporting logs via Pub/Sub to Cloud Storage introduces an extra pipeline (sink to Pub/Sub, subscriber, delivery guarantees, backpressure handling) and increases operational overhead. It can be costlier and more failure-prone at 40 GB/day. While you can keep the Storage bucket regional and set lifecycle/retention, this is not the simplest Google-managed solution compared to routing directly into a regional Cloud Logging bucket.

Creating a regional Cloud Logging log bucket in asia-southeast1 and routing Cloud Run logs to it via Log Router directly addresses both requirements: enforced regional data residency and 10-year retention using managed logging storage. It requires no application changes and minimal operations. You can precisely filter Cloud Run logs and avoid storing them in the default bucket, aligning with compliance and cost-control best practices.

Question analysis

Core concept: This question tests Cloud Logging data residency and retention controls using Log Router and regional log buckets. For Cloud Run, application logs are automatically ingested into Cloud Logging; the compliant, low-overhead approach is to control where logs are stored and how long they are retained using Cloud Logging’s managed storage.

Why the answer is correct: A regulatory requirement mandates (1) 10-year retention and (2) all log data must remain in Singapore (asia-southeast1) at all times. Creating a regional Cloud Logging log bucket in asia-southeast1 and routing the Cloud Run service logs to that bucket satisfies both: the bucket’s location enforces regional storage, and the bucket’s retention setting enforces long-term retention without building custom pipelines. Updating the Log Router ensures only the relevant logs are stored in that bucket, and you can optionally exclude them from the default bucket to avoid noncompliant storage locations.

Key features and best practices:
- Regional log buckets: Cloud Logging supports log buckets with a specified location (regional). Storing logs in a regional bucket is the primary mechanism for log data residency.
- Custom retention: Configure the bucket retention to 3650 days (10 years). This is enforced by Cloud Logging and reduces operational burden.
- Log Router sinks: Create an inclusion filter targeting the Cloud Run service (resource.type="cloud_run_revision" and service name/region labels) and route to the regional bucket. Use exclusions on the default bucket if needed to prevent duplicate storage.
- Compliance alignment: This maps to the Google Cloud Architecture Framework’s compliance and data governance principles—use managed controls, least operational overhead, and explicit data location constraints.

Common misconceptions: It’s tempting to export logs to Cloud Storage for “cheap long-term storage,” but that adds pipeline complexity and can violate “remain within Singapore at all times” if logs land in the default global logging bucket first or if routing is misconfigured. Similarly, adding agents/sidecars is not aligned with Cloud Run’s fully managed model and increases maintenance.

Exam tips: For residency + retention requirements, prefer Cloud Logging regional buckets + Log Router over DIY exports. Remember: Cloud Run already integrates with Cloud Logging—focus on configuring sinks/buckets rather than modifying the app. Also consider volume (40 GB/day): managed logging buckets scale without you managing ingestion infrastructure, and you can apply filters to control cost.
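A minimal sketch of that configuration using the Cloud Logging v2 API (google-api-python-client) is shown below: a regional bucket in asia-southeast1 with 3650-day retention and a sink that routes only the reconciliation service's logs into it. The project, bucket, sink, and service names are hypothetical placeholders.

```python
# Minimal sketch: regional log bucket with 10-year retention plus a sink routing
# only the Cloud Run service's logs into it. All names are hypothetical.
from googleapiclient import discovery

logging_api = discovery.build("logging", "v2")
project = "projects/pay-recon-prod"

# 1) Regional bucket with 3650-day retention (the location cannot be changed later).
logging_api.projects().locations().buckets().create(
    parent=f"{project}/locations/asia-southeast1",
    bucketId="recon-logs-10y",
    body={"retentionDays": 3650},
).execute()

# 2) Sink that routes only the reconciliation service's logs to that bucket.
logging_api.projects().sinks().create(
    parent=project,
    body={
        "name": "recon-to-sg-bucket",
        "destination": ("logging.googleapis.com/"
                        f"{project}/locations/asia-southeast1/buckets/recon-logs-10y"),
        "filter": 'resource.type="cloud_run_revision" '
                  'AND resource.labels.service_name="payment-recon"',
    },
).execute()
```

An exclusion filter on the default bucket can then prevent a second copy of these logs from being stored outside the region, as noted above.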

Success stories (6)

P***********, Nov 25, 2025

Study period: 2 months

I used Cloud Pass during my last week of study, and it helped reinforce everything from beyondcorp principles to securing workloads. It’s straightforward, easy to use, and genuinely helps you understand security trade-offs.

길**, Nov 23, 2025

Study period: 1 month

I worked through all the questions before taking the exam and passed on my first try! I'd say a little over 40% of the exam questions were similar to these, and I answered the unfamiliar question types based on my understanding of the concepts.

D***********, Nov 12, 2025

Study period: 1 month

I would like to thank the Cloud Pass team for these great materials. They helped me pass the exam last week. Most of the questions in the exam were the same as the sample questions, and some were almost similar. Thank you again, Cloud Pass.

O**********, Oct 29, 2025

Study period: 1 month

Absolutely invaluable resource to prepare for the exam. Explanations and questions are spot on to give you a sense of what is expected from you on the actual test.

O**********, Oct 29, 2025

Study period: 1 month

I realized I was weak in log-based alerts and access boundary configurations. Solving questions here helped me quickly identify and fix those gaps. The question style wasn’t identical to the exam, but the concepts were spot-on.

Other practice tests

Practice Test #1

50 Questions · 120 min · Passing score 700/1000

Practice Test #2

50 Questions · 120 min · Passing score 700/1000
← View all Google Professional Cloud Security Engineer questions

Start practicing now

Download Cloud Pass and start practicing all the Google Professional Cloud Security Engineer questions.
