Google Professional Cloud Security Engineer

Practice Test #2

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 questions · 120 minutes · 700/1000 passing score

AI-powered

Triple AI-Verified Answers & Explanations

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.

GPT Pro
Claude Opus
Gemini Pro
Explanations for every option
In-depth question analysis
Consensus accuracy across 3 models

Practice Questions

Question 1 (Choose two)

Your company operates a single Google Cloud organization with 10 folders and 150 projects, and the SOC requires that all Google Cloud Console sign-in events and API calls that change resource configurations be streamed to an external SIEM in under 60 seconds, with coverage for all existing and future projects. Requirements:

  • Collect and export the relevant logs for the entire organization hierarchy (folders and projects).
  • Deliver logs to the SIEM in near real time (<60 seconds).
  • Include Console login events and admin activity that modifies configurations.

What should you do? (Choose two.)

An organization-level sink is directionally correct for covering all projects, but Cloud Storage is not appropriate for near real-time SIEM ingestion. Storage sinks are optimized for durable archival and batch processing; SIEMs typically ingest via streaming endpoints. You could build additional components (e.g., notifications + processing), but that adds complexity and may not reliably meet the <60 seconds requirement compared to Pub/Sub.

Correct. An organization-level aggregated sink with includeChildren captures logs from all folders and projects, including future projects, without per-project setup. Routing to Pub/Sub supports near real-time delivery and is a common SIEM integration pattern (subscriber pulls messages or a connector streams them onward). This meets the <60 seconds requirement and centralizes export management at the organization level.

Enabling Data Access audit logs org-wide is not required for the stated needs and can significantly increase log volume and cost. The requirement is for configuration-changing API calls (covered by Admin Activity logs, enabled by default) and Console sign-in events (identity audit logs). Data Access logs are for read/write access to user data (e.g., object reads), not primarily for admin configuration changes.

Correct. Cloud Console sign-in events are typically captured in Google Workspace (or Cloud Identity) audit logs rather than Cloud Audit Logs. Enabling export of Workspace audit logs to Cloud Logging makes those sign-in events available in Cloud Logging, where the organization-level sink can route them to Pub/Sub for SIEM ingestion. This is the key step to include Console login events in the same pipeline.

Parsing AuthenticationInfo can help identify the principal for many Cloud Audit Logs entries, but it does not solve the core requirements: it does not create org-wide coverage, does not provide a streaming export mechanism, and does not add Console sign-in events if they are not present in Cloud Logging. It is an implementation detail for SIEM enrichment, not one of the two required architectural actions.

Question Analysis

Core concept: This question tests organization-wide logging architecture using Cloud Logging sinks and Cloud Audit Logs, and how to stream security-relevant events to an external SIEM with low latency. It also tests understanding of what “Google Cloud Console sign-in events” actually are and where they originate.

Why the answer is correct: To cover all existing and future projects across an organization hierarchy, you should create an organization-level aggregated log sink with includeChildren enabled. This ensures logs from all folders and projects (present and future) are captured without per-project configuration. For near real-time delivery (<60 seconds), Pub/Sub is the correct sink destination because it supports streaming consumption patterns and low-latency delivery to external systems. Cloud Storage is primarily batch/archival and does not meet the near-real-time requirement. Console sign-in events are not Cloud Audit Logs “Admin Activity” entries. Interactive sign-ins to the Cloud Console are identity events typically recorded in Google Workspace (or Cloud Identity) audit logs. To route those into Cloud Logging so they can be exported via the same org-level sink to Pub/Sub, you must enable export of Google Workspace audit logs to Cloud Logging. This provides the required coverage for Console login events.

Key features and best practices:
- Organization-level sinks with includeChildren provide centralized governance and scale for 150+ projects and future growth.
- Pub/Sub sinks are the standard pattern for SIEM streaming; the SIEM can pull or receive pushed messages via a subscriber/connector.
- Admin Activity audit logs are enabled by default and capture configuration-changing API calls; they are retained and cannot be disabled.
- Aligns with Google Cloud Architecture Framework “Operational Excellence” and “Security, Privacy, and Compliance” by centralizing telemetry and enabling rapid detection.

Common misconceptions: Many assume Cloud Audit Logs already include Console sign-in events; they generally do not. Another trap is enabling Data Access logs (often expensive and high-volume) when the requirement is specifically configuration-changing activity (Admin Activity) plus sign-ins.

Exam tips: When you see “all projects including future,” think org-level sink + includeChildren. When you see “<60 seconds to SIEM,” think Pub/Sub (BigQuery can fit analytics scenarios, but it is not a streaming delivery path to a SIEM). Distinguish between Cloud Audit Logs (API activity) and identity sign-in logs (Workspace/Cloud Identity).
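As a concrete illustration of the pattern above, the sketch below creates an organization-level aggregated sink that routes Admin Activity audit logs (plus Workspace login events, once their export to Cloud Logging is enabled) to a Pub/Sub topic. It is a minimal sketch, not production code: the organization ID, project, and topic are placeholders, it assumes the google-api-python-client discovery client with Application Default Credentials, and the exact filter for the exported Workspace login events should be verified against what actually appears in Cloud Logging.

```python
# Minimal sketch: create an organization-level aggregated sink that routes
# Admin Activity audit logs and Workspace login events to Pub/Sub for SIEM
# ingestion. ORG_ID, the project, and the topic name are placeholders.
from googleapiclient.discovery import build

ORG_ID = "123456789012"  # placeholder organization ID
DESTINATION = "pubsub.googleapis.com/projects/siem-project/topics/siem-export"

# Admin Activity audit logs (configuration-changing API calls) plus, once the
# Google Workspace audit-log export to Cloud Logging is enabled, Console
# sign-in events. The exact log name / serviceName for login events depends on
# that export; adjust after verifying what appears in Cloud Logging.
LOG_FILTER = (
    'log_id("cloudaudit.googleapis.com/activity") OR '
    'protoPayload.serviceName="login.googleapis.com"'
)

logging_api = build("logging", "v2")
sink = (
    logging_api.organizations()
    .sinks()
    .create(
        parent=f"organizations/{ORG_ID}",
        uniqueWriterIdentity=True,  # returns a service account to grant Pub/Sub Publisher
        body={
            "name": "org-siem-export",
            "destination": DESTINATION,
            "filter": LOG_FILTER,
            "includeChildren": True,  # aggregate logs from all folders/projects, current and future
        },
    )
    .execute()
)
print("Grant roles/pubsub.publisher on the topic to:", sink["writerIdentity"])
```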

Question 2

You lead network security for a fintech trading platform on Google Cloud. You currently detect anomalies using VPC Flow Logs exported to BigQuery with a 5-minute aggregation interval across three VPCs. A red team exercise now requires examining full packet payloads and L4/L7 headers for east-west traffic between two production subnets (10.20.0.0/24 and 10.20.1.0/24) in a single VPC and forwarding a copy of up to 8 Gbps of this traffic to a third-party NIDS running on a Compute Engine VM, without altering original packets. Which Google Cloud product should you use?

Cloud IDS provides managed intrusion detection (based on Palo Alto Networks technology) for monitoring network threats in Google Cloud. It’s appropriate when you want Google-managed detection and alerting, not when you must forward raw mirrored traffic to a third-party NIDS VM. Cloud IDS consumes traffic via Packet Mirroring under the hood, but it does not satisfy the explicit requirement to send a copy of traffic to your own NIDS without altering packets.

VPC Service Controls logs relate to service perimeter events and access to Google-managed services (e.g., BigQuery, Cloud Storage) to reduce data exfiltration risk. They do not capture VPC east-west packet payloads or L4/L7 headers and cannot forward traffic to a network IDS. This option might seem relevant due to “fintech” and compliance, but it’s the wrong telemetry type for packet-level inspection.

VPC Flow Logs provide sampled/aggregated network flow metadata (5-tuple, bytes, packets, TCP flags, etc.) and are useful for traffic analysis and anomaly detection at scale. However, they do not include full packet payloads and cannot reconstruct L7 content. The question explicitly requires examining payloads and forwarding a copy of up to 8 Gbps of traffic, which Flow Logs cannot do.

Google Cloud Armor is an edge security service (WAF and DDoS protection) for external HTTP(S) load balancers and some other front-door scenarios. It does not provide packet capture for internal east-west traffic between subnets, nor does it mirror traffic to a third-party NIDS. It’s commonly confused with general network security tooling, but it’s specifically for protecting internet-facing applications.

Packet Mirroring is the correct choice because it creates out-of-band copies of VM network traffic, including packet headers and payloads, for deep inspection. That directly satisfies the requirement to analyze east-west traffic between two subnets and preserve the original packets unchanged. In Google Cloud, mirrored traffic is delivered to a collector architecture behind an internal passthrough Network Load Balancer, which is how a third-party NIDS on Compute Engine can receive and inspect the traffic. It also supports selective mirroring and filtering so the scope can be limited to the relevant production subnet traffic rather than the entire VPC.

Question Analysis

Core concept: This question is about choosing the Google Cloud feature that provides full-fidelity packet copies for deep inspection of east-west traffic. The requirement is to inspect full packet payloads plus L4/L7 headers and send a copy of traffic to a third-party NIDS without modifying the original packets.

Why correct: Packet Mirroring is the Google Cloud capability designed for this exact use case. It mirrors traffic from selected VM instances in a VPC and sends those packet copies out-of-band to a collector behind an internal passthrough Network Load Balancer, enabling third-party NIDS or packet analysis tools to inspect the traffic while production flows continue unchanged. Unlike VPC Flow Logs, Packet Mirroring provides actual packet data rather than aggregated metadata.

Key features: Packet Mirroring supports filtering and scoping by subnet, instance, tags, direction, protocol, and CIDR ranges, which fits the requirement to focus on traffic between two production subnets in a single VPC. It is intended for deep packet inspection, forensic analysis, and integration with partner or self-managed security appliances. For an 8 Gbps requirement, the collector architecture and load balancer backend capacity must be sized appropriately.

Common misconceptions: Cloud IDS is a managed IDS service, but it is not the product you choose when the requirement explicitly says to forward mirrored traffic to your own third-party NIDS VM. VPC Flow Logs only provide flow metadata, not payloads. Cloud Armor is for edge application protection, and VPC Service Controls logs are about Google API perimeter events rather than packet capture.

Exam tips: If the question mentions full packet payloads, packet copies, east-west inspection, or sending traffic to a third-party appliance, think Packet Mirroring. If it mentions aggregated network metadata, think VPC Flow Logs. If it mentions managed IDS detections, think Cloud IDS. If it mentions WAF or DDoS at the edge, think Cloud Armor.
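To make the collector setup more tangible, here is a rough sketch of a Packet Mirroring policy scoped to the two production subnets and delivered to an internal passthrough Network Load Balancer that fronts the NIDS VMs. All project, network, subnet, and forwarding-rule names are placeholders, and the field names reflect my reading of the compute packetMirrorings REST resource, so verify them against the current API reference before use.

```python
# Rough sketch of a Packet Mirroring policy that copies traffic between the two
# production subnets to a collector ILB fronting the NIDS VMs. Names/URLs are
# placeholders; field names approximate the packetMirrorings REST resource.
from googleapiclient.discovery import build

PROJECT, REGION = "net-prod", "us-central1"
body = {
    "name": "mirror-prod-eastwest",
    "network": {"url": f"projects/{PROJECT}/global/networks/prod-vpc"},
    # Internal passthrough Network Load Balancer forwarding rule for the collectors.
    "collectorIlb": {"url": f"projects/{PROJECT}/regions/{REGION}/forwardingRules/nids-collector"},
    "mirroredResources": {
        "subnetworks": [
            {"url": f"projects/{PROJECT}/regions/{REGION}/subnetworks/prod-subnet-a"},  # 10.20.0.0/24
            {"url": f"projects/{PROJECT}/regions/{REGION}/subnetworks/prod-subnet-b"},  # 10.20.1.0/24
        ]
    },
    # Optional filter: restrict the copies to traffic between the two ranges.
    "filter": {"cidrRanges": ["10.20.0.0/24", "10.20.1.0/24"], "direction": "BOTH"},
    "enable": "TRUE",
}

compute = build("compute", "v1")
compute.packetMirrorings().insert(project=PROJECT, region=REGION, body=body).execute()
```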

Question 3

Your company has a three-level resource hierarchy: Organization > Business Unit folders > Team folders. You are onboarding 12 platform squads, each of which receives a dedicated Terraform provisioner service account. Each squad must be able to create and fully manage projects only under its assigned team folder (for example, folders/789012345678), adhering to least privilege and preventing project creation in any other location. You need a scalable, centrally managed approach that supports Infrastructure as Code and avoids granting broad administrative control at the folder or organization level. What should you do?

Granting roles/resourcemanager.folderAdmin on the team folder is broader than necessary for the stated requirement. That role is intended for administering the folder itself, including folder management capabilities that exceed simply creating projects beneath it. Because the question explicitly asks to avoid broad administrative control at the folder or organization level, this violates least privilege. It also creates unnecessary risk if the provisioner service account is compromised, since the blast radius extends to folder administration rather than only project creation.

roles/resourcemanager.projectCreator at the organization level provides the ability to create projects, and an IAM Condition restricting resource.parent to the squad’s team folder enforces a hard boundary on where projects can be created. This is scalable (repeatable pattern for 12 squads), centrally managed, IaC-friendly, and aligns with least privilege by avoiding folder/organization admin roles while still preventing project creation in other locations.

Granting roles/editor at the organization level is excessively permissive and clearly conflicts with least-privilege design. It provides broad access across organizational resources and does not create a strong, centrally enforced boundary limiting project creation to one specific folder. Telling squads to apply finer-grained IAM later is reactive rather than preventive and leads to inconsistent governance. This option is therefore both overprivileged and operationally weak for centralized control.

roles/resourcemanager.folderIamAdmin allows the principal to manage IAM policy on the folder, which is a sensitive administrative capability. That permission can enable privilege escalation because the service account could potentially grant itself or others additional access. It also does not directly solve the core need of controlled project creation under only one assigned folder. Since the question asks for a centrally managed, least-privilege approach that avoids broad administrative control, this option is not appropriate.

Question Analysis

Core concept: This question tests Google Cloud IAM design for resource hierarchy governance (Organization > Folders > Projects), specifically controlling project creation location using least privilege. It also touches IAM Conditions (conditional role bindings) and scalable, centrally managed access patterns suitable for Terraform-based provisioning.

Why the answer is correct: To let each squad create and fully manage projects only under its own team folder, you need two things: (1) the ability to create projects, and (2) a hard boundary preventing creation elsewhere. Granting roles/resourcemanager.projectCreator at the organization level provides the project creation permission (resourcemanager.projects.create). Adding an IAM Condition that checks the target parent (for example, resource.parent == "folders/789012345678") constrains where that permission can be exercised. This is scalable because you can manage a consistent pattern centrally (org-level bindings) while still scoping each squad to a specific folder via the condition. It also avoids giving broad folder/organization admin privileges.

Key features and best practices:
- IAM Conditions allow attribute-based access control on IAM bindings, enabling “create only under this folder” constraints.
- Centralized governance: org-level binding + condition is easier to audit and standardize across 12 squads than delegating folder admin.
- Least privilege: projectCreator is narrower than folderAdmin/editor and avoids granting permissions to modify folder structure or IAM broadly.
- For “fully manage projects,” you typically pair this with project-level roles granted after creation (often via automation), e.g., roles/resourcemanager.projectIamAdmin and/or predefined admin roles on the newly created project. The question’s key requirement is preventing creation outside the assigned folder; conditional projectCreator is the control that enforces that boundary.

Common misconceptions:
- “Just grant folderAdmin on the team folder” seems to scope correctly, but it grants extensive folder management capabilities (including moving resources, deleting folders, and potentially manipulating hierarchy) beyond what’s needed for project provisioning.
- “Use editor at org” is overly broad and violates least privilege; it also doesn’t enforce location constraints.
- “Give folder IAM admin” enables changing IAM policies, which can lead to privilege escalation and does not directly grant project creation.

Exam tips:
- For controlling where projects can be created, think roles/resourcemanager.projectCreator plus IAM Conditions on resource.parent.
- Prefer conditional, centrally managed bindings for scalable multi-team governance.
- Watch for privilege escalation: roles that allow setting IAM (folderIamAdmin, projectIamAdmin) are powerful and should be tightly controlled.
- Map requirements to the minimal role that grants the needed permission, then add conditions to enforce boundaries aligned with the Google Cloud Architecture Framework’s security principle of least privilege and centralized policy management.
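The following sketch shows what the conditional binding described above could look like for one squad. The organization ID, folder number, and service account are placeholders, and the condition expression mirrors the one quoted in this explanation; confirm the condition attributes supported for project creation in the current IAM Conditions documentation before relying on it.

```python
# Sketch of the conditional org-level binding: grant roles/resourcemanager.projectCreator
# to a squad's Terraform service account, limited via an IAM Condition to its team folder.
# All identifiers are placeholders.
import json

ORG_ID = "123456789012"
binding = {
    "role": "roles/resourcemanager.projectCreator",
    "members": ["serviceAccount:squad-01-tf@provisioning.iam.gserviceaccount.com"],
    "condition": {
        "title": "squad-01-team-folder-only",
        "expression": 'resource.parent == "folders/789012345678"',
    },
}

# Equivalent imperative form (one binding per squad, repeatable in IaC), roughly:
#   gcloud organizations add-iam-policy-binding 123456789012 \
#     --member=serviceAccount:squad-01-tf@provisioning.iam.gserviceaccount.com \
#     --role=roles/resourcemanager.projectCreator \
#     --condition='title=squad-01-team-folder-only,expression=resource.parent=="folders/789012345678"'
print(json.dumps({"organization": ORG_ID, "binding": binding}, indent=2))
```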

Question 4

Your retail analytics platform runs on two Compute Engine instances behind a load balancer and authenticates to Google APIs using a user-managed service account key stored in Secret Manager (secret name: retail-sa-key), and your security policy mandates rotation every 90 days with no more than 2 minutes of reduced capacity. To follow Google-recommended practices when rotating this user-managed service account key, what should you do?

Incorrect. There is no standard, Google-recommended “enable-auto-rotate” capability for user-managed service account keys as described. Google’s guidance is to avoid long-lived keys where possible, and if keys are required, rotate them via a controlled process (create new key, deploy, then delete old). Relying on a nonexistent/incorrect command would not meet the 90-day policy or the 2-minute reduced-capacity requirement.

Incorrect. Key rotation is not typically performed by supplying a “NEW_KEY” to a rotate command for service account keys. In Google Cloud, you create a new key for the service account, distribute it securely (here, via Secret Manager), update workloads, and then delete the old key. A single-step “rotate” command is misleading and doesn’t address safe rollout, verification, or Secret Manager versioning.

Correct. This is the recommended operational pattern: create a new service account key, store it as a new Secret Manager version, roll out the change (rolling restart across the two instances behind the load balancer), verify API access, then delete the old key from IAM. This minimizes downtime (overlapping keys during cutover) and reduces risk by promptly revoking the old credential after validation.

Incorrect. Keeping the old key on VMs for 30 days increases exposure and undermines the purpose of rotation. It also conflicts with least privilege and credential hygiene: old keys should be revoked once the new key is confirmed working. If rollback is needed, use Secret Manager versioning and controlled deployment, not lingering key files on disk (which are prone to theft and hard to audit).

Question Analysis

Core concept: This question tests secure rotation of user-managed service account keys used by workloads, and how to do it with minimal downtime. In Google-recommended practice, long-lived service account keys are a risk and should be avoided when possible (prefer Workload Identity/attached service accounts). If you must use keys, rotate them safely using overlapping validity and controlled rollout.

Why the answer is correct: Option C describes the standard, recommended rotation pattern: create a new key, store it as a new Secret Manager version, update the application to use the new version, validate that API calls succeed, and then delete the old key from the service account. This achieves near-zero downtime because both keys can exist simultaneously during the cutover. With two instances behind a load balancer, you can do a rolling restart/refresh (one VM at a time) so capacity reduction stays within the 2-minute constraint.

Key features and configurations: Secret Manager supports versioning, enabling you to add a new version without breaking existing consumers. Applications should reference the secret (and ideally “latest”) and be able to reload credentials (or be restarted in a rolling manner). After verification, deleting the old key in IAM immediately invalidates it, reducing exposure. This aligns with the Google Cloud Architecture Framework’s security principles: reduce credential leakage risk, enforce credential lifecycle management, and operationalize security with repeatable runbooks.

Common misconceptions: Some assume there is an “auto-rotate” command for service account keys (there isn’t for user-managed keys in the way implied), or that “rotate” is a single gcloud command. In practice, rotation is a process: create new key, deploy, verify, then revoke old. Keeping old keys “just in case” increases breach blast radius and violates rotation intent.

Exam tips: On the Security Engineer exam, when you see “user-managed service account key” + “Secret Manager” + “rotation,” expect a versioned-secret + rolling deployment + delete-old-key workflow. Also remember the best-practice alternative: avoid keys entirely by using attached service accounts on Compute Engine (metadata server) or Workload Identity Federation for external workloads.
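A minimal sketch of that runbook is shown below, assuming the google-api-python-client and google-cloud-secret-manager libraries with Application Default Credentials; the project, secret, and service account names are placeholders.

```python
# Sketch of the rotation runbook: create a new user-managed key, publish it as a
# new Secret Manager version, then (after a rolling restart and verification)
# delete the old key. All names are placeholders.
import base64
from googleapiclient.discovery import build
from google.cloud import secretmanager

PROJECT = "retail-prod"
SA_EMAIL = "retail-app@retail-prod.iam.gserviceaccount.com"
SECRET = "retail-sa-key"

iam = build("iam", "v1")
sa_name = f"projects/-/serviceAccounts/{SA_EMAIL}"

# 1) Create the new key (privateKeyData is the base64-encoded JSON key file).
new_key = iam.projects().serviceAccounts().keys().create(name=sa_name, body={}).execute()
key_json = base64.b64decode(new_key["privateKeyData"])

# 2) Store it as a new Secret Manager version; consumers referencing "latest"
#    pick it up on their next (rolling) restart.
sm = secretmanager.SecretManagerServiceClient()
sm.add_secret_version(
    request={"parent": f"projects/{PROJECT}/secrets/{SECRET}", "payload": {"data": key_json}}
)

# 3) After restarting one VM at a time behind the load balancer and verifying
#    API access with the new key, revoke the old one:
# iam.projects().serviceAccounts().keys().delete(name=OLD_KEY_RESOURCE_NAME).execute()
```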

Question 5

Your compliance team is launching an internal meeting-notes summarization pipeline on Google Cloud that uses a generative model to create summaries from audio transcripts. It must process up to 3,000 transcripts per day (average 1 MB each) with under 200 ms of filtering latency per request. Company policy mandates that no personally identifiable information (PII), such as names, email addresses, phone numbers, or government IDs, may appear in either the prompts sent to the model or the summaries returned. You need a managed, scalable control that detects and automatically redacts PII on both ingress and egress, before any storage or display. What should you do?

Cloud KMS protects data by encrypting it at rest and controlling access to encryption keys, which is valuable for confidentiality and key management. However, encryption does not inspect the contents of transcripts or summaries and does not remove sensitive values before they are sent to a model or shown to users. Once the application decrypts the data for processing, any PII is still present unless a separate inspection and redaction step is applied. Therefore, KMS is useful as a complementary control but does not satisfy the core requirement in the question.

Cloud DLP is Google Cloud’s native managed service for discovering and de-identifying sensitive data in text and other content. In this pipeline, it can inspect transcript text before it is turned into a prompt and then inspect the generated summary before that output is stored or displayed, ensuring sensitive values are removed on both ingress and egress. It supports de-identification techniques such as masking, redaction, replacement, and tokenization, which aligns directly with the requirement to prevent PII from appearing in prompts or returned summaries. This makes it the most appropriate managed and scalable control for compliance-focused content filtering in a generative AI workflow.

VPC Service Controls helps reduce the risk of data exfiltration by placing supported services inside a service perimeter and restricting access paths. That control governs where data can move, but it does not analyze request or response payloads to determine whether they contain PII. Sensitive data could still be passed to the model and returned in summaries entirely within the perimeter, which would still violate the stated policy. As a result, VPC-SC is defense in depth, not the primary solution for content redaction.

A third-party Marketplace product might offer scanning or redaction features, but the question asks for the most appropriate managed and scalable Google Cloud control. Introducing an external product adds procurement, integration, operational, and compliance review overhead, and it is not the standard native answer for Google Cloud certification scenarios. The option also emphasizes encryption, which still does not inherently guarantee that PII is removed from prompts and summaries before use. Cloud DLP is purpose-built for sensitive-data inspection and de-identification, making it the better fit.

Question Analysis

Core Concept: This question tests managed data protection controls for compliance—specifically detecting and de-identifying PII in content flowing into and out of a generative AI summarization pipeline. In Google Cloud, the primary managed service for PII discovery and redaction is Cloud Data Loss Prevention (Cloud DLP).

Why the Answer is Correct: Option B is correct because Cloud DLP can inspect text for sensitive data types (names, emails, phone numbers, government IDs, etc.) and then automatically de-identify (mask, redact, replace, tokenize, or format-preserve) the detected findings. The requirement explicitly states that PII must not appear in prompts sent to the model (ingress) nor in summaries returned (egress), and that the control must be managed, scalable, and applied before storage or display. Cloud DLP is designed for exactly this: programmatic inspection + de-identification as part of an application workflow.

Key Features / How to Implement:
- Use Cloud DLP inspect + de-identify APIs in the request path before calling the model (prompt construction) and again on model output before persisting or rendering.
- Configure infoTypes (built-in and custom), likelihood thresholds, and exclusion rules to reduce false positives.
- Choose transformations: masking/redaction for strict removal, or tokenization/crypto-based pseudonymization when you need consistent replacements.
- For performance, keep processing regional where possible and design for horizontal scale (e.g., Cloud Run/GKE calling DLP). The stated throughput (3,000 x 1 MB/day) is modest; the key is meeting per-request latency targets by minimizing extra hops and tuning inspection scope.

Common Misconceptions: Encryption (A) protects confidentiality at rest but does not remove PII from prompts or summaries. VPC Service Controls (C) reduces data exfiltration risk but cannot detect/redact PII within allowed flows. Third-party tools (D) may work but are not the most appropriate managed native control and add supply-chain/compliance complexity.

Exam Tips: When the requirement is “detect and redact PII” (especially for compliance), think Cloud DLP first. Pair it with IAM least privilege, logging, and boundary controls as defense-in-depth, but DLP is the control that actually enforces “no PII in content” on ingress/egress. Map such requirements to the Google Cloud Architecture Framework’s security and compliance principles: data classification, prevention controls, and automated policy enforcement.
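As an illustration of the ingress/egress filter, the sketch below wraps Cloud DLP's deidentify_content call in a helper that can be applied both to transcript text before prompt construction and to the model's summary before storage or display. The project ID and infoType list are placeholders, and the chosen transformation (replace with the infoType name) is just one of the de-identification options mentioned above.

```python
# Minimal sketch of the ingress/egress PII filter: inspect text with Cloud DLP
# and replace any detected PII with its infoType name. Project ID and infoTypes
# are placeholders; assumes the google-cloud-dlp client library and ADC.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
PARENT = "projects/meeting-notes-prod/locations/global"

INSPECT_CONFIG = {
    "info_types": [
        {"name": "PERSON_NAME"},
        {"name": "EMAIL_ADDRESS"},
        {"name": "PHONE_NUMBER"},
    ],
    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
}
DEIDENTIFY_CONFIG = {
    "info_type_transformations": {
        "transformations": [
            # Replace each finding with its infoType, e.g. "[EMAIL_ADDRESS]".
            {"primitive_transformation": {"replace_with_info_type_config": {}}}
        ]
    }
}

def scrub(text: str) -> str:
    """Run before building the prompt (ingress) and again on the model output (egress)."""
    response = dlp.deidentify_content(
        request={
            "parent": PARENT,
            "inspect_config": INSPECT_CONFIG,
            "deidentify_config": DEIDENTIFY_CONFIG,
            "item": {"value": text},
        }
    )
    return response.item.value

print(scrub("Contact Jane Doe at jane@example.com or +1 555-0100."))
```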


Question 6

Your organization uses a Shared VPC in which net-hub-prod is the host project; all firewall rules, subnets, and an HA VPN with Cloud Router are configured in the host. You need to let the Data Science Blue group attach Compute Engine VMs in service project ml-svc-02 only to the us-central1 subnetwork 172.16.20.0/24 and prevent attachment to any other subnet. What should you grant to the group to meet this requirement?

Granting Compute Network User at the host project level is too broad. It would allow the group to attach VMs from the service project to any subnetwork in the host project’s Shared VPC, not just 172.16.20.0/24 in us-central1. This violates the requirement to prevent attachment to any other subnet. Project-level grants are common but fail least-privilege constraints.

This is correct because IAM can be applied directly to the specific subnetwork resource in the host project. roles/compute.networkUser on the 172.16.20.0/24 subnetwork allows the group to use (attach NICs to) that subnetwork when creating VMs in the service project, while not granting permissions to use other subnetworks. This precisely enforces the requested restriction.

Compute Shared VPC Admin (roles/compute.xpnAdmin) at the host project level is intended for administering Shared VPC: enabling a host project, associating/dissociating service projects, and managing Shared VPC-level settings. It is far more privileged than needed and does not directly implement a “only this one subnet” usage constraint. It also increases risk by expanding administrative capabilities.

Compute Shared VPC Admin at the service project level is not the right control point for subnet attachment. The subnetworks are owned by the host project, so the effective permission to use a subnet must be granted on the host project’s subnetwork resource. Additionally, xpnAdmin is an administrative role and is overly permissive relative to the requirement.

Question Analysis

Core concept: This question tests Shared VPC IAM delegation and the principle of least privilege for attaching resources (Compute Engine VMs) in a service project to subnets that live in a host project. In Shared VPC, subnets and firewall rules are owned by the host project, but service projects can create VM NICs that attach to those host subnets only if they have the right IAM permissions.

Why the answer is correct: To allow the Data Science Blue group to create/attach VM network interfaces only on a single subnetwork (us-central1, 172.16.20.0/24) and prevent use of any other subnet, you must scope the permission to that specific subnetwork resource. Granting Compute Network User (roles/compute.networkUser) on the specific subnetwork in the host project allows the group to use that subnetwork when creating VM instances (or instance templates/MIGs) from the service project, but does not grant the ability to use other subnets in the Shared VPC.

Key features / best practices:
- Shared VPC uses host-project-owned network resources; service-project principals need explicit IAM on those network resources.
- roles/compute.networkUser includes permissions such as compute.subnetworks.use and compute.subnetworks.useExternalIp (depending on configuration) that enable VM NIC attachment.
- Resource-level IAM on subnetworks is the recommended way to enforce subnet-level boundaries (least privilege) rather than broad project-level grants.
- This aligns with the Google Cloud Architecture Framework security principles: strong identity, least privilege, and clear segmentation/boundary control.

Common misconceptions: A common trap is granting roles/compute.networkUser at the host project level, which would allow attachment to all subnetworks in that host project, violating the requirement. Another misconception is using Shared VPC Admin roles, which are intended for configuring and managing Shared VPC (associating service projects, enabling host project, etc.), not for narrowly controlling which subnet a workload can attach to.

Exam tips:
- For “can use a specific subnet” questions in Shared VPC, think: roles/compute.networkUser on the subnetwork resource.
- For “can manage Shared VPC relationships/configuration,” think: roles/compute.xpnAdmin (Shared VPC Admin) at the appropriate project scope.
- Always match the scope (project vs. subnetwork) to the requirement (all subnets vs. one subnet).
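A small sketch of the subnet-scoped grant follows; the project, region, subnetwork, and group names are placeholders, and the commented gcloud invocation is the approximate CLI equivalent.

```python
# Sketch of the subnet-scoped grant: bind roles/compute.networkUser for the group
# on the single host-project subnetwork, instead of on the whole host project.
import json

binding = {
    "resource": "projects/net-hub-prod/regions/us-central1/subnetworks/ml-subnet-172-16-20",
    "role": "roles/compute.networkUser",
    "members": ["group:data-science-blue@example.com"],
}

# Approximate CLI equivalent (run against the host project):
#   gcloud compute networks subnets add-iam-policy-binding ml-subnet-172-16-20 \
#     --project=net-hub-prod --region=us-central1 \
#     --member=group:data-science-blue@example.com \
#     --role=roles/compute.networkUser
print(json.dumps(binding, indent=2))
```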

Question 7

Your production Google Cloud project runs a managed instance group behind an external HTTP(S) load balancer; a team of 10 release engineers must roll out new application versions by updating instance templates and triggering deployments via Cloud Build, but they must not be able to create, update, or delete any VPC firewall rules in the shared network; only a 2-person NetOps group may change firewall rules, and auditors require least privilege with clear separation of duties and auditable assignments. What should you do?

Incorrect. Granting Network Admin provides broad network permissions, including the ability to create, update, and delete firewall rules. Relying on instructions (“don’t modify firewall rules”) violates least privilege and does not meet separation-of-duties or audit requirements. In exams, any option that depends on user behavior instead of technical enforcement is typically wrong for compliance-driven scenarios.

Incorrect. Access Context Manager (BeyondCorp/Context-Aware Access) is primarily for controlling access based on context (IP, device posture, identity) to supported resources, not for carving out fine-grained IAM permissions on Compute Engine firewall rule administration. Even if you restricted where changes can be made from, release engineers would still have the authorization to change firewall rules if granted the underlying IAM permissions.

Correct. Custom IAM roles allow precise permission scoping: release engineers get only deployment-related permissions (instance templates, MIG updates, Cloud Build actions), while NetOps gets only firewall administration permissions. This enforces least privilege, provides clear separation of duties, and is easily auditable through IAM bindings to groups. It also fits Shared VPC governance by keeping firewall control in the host project with NetOps.

Incorrect. While IAM deny policies can block specific permissions, granting Editor is overly broad and violates least privilege because it grants many unrelated permissions across services. Deny policies are powerful but are best used as an additional guardrail, not as a justification to assign overly permissive base roles. Auditors typically expect minimal allow permissions first, then optional denies for defense-in-depth.

Question Analysis

Core Concept: This question tests IAM least privilege and separation of duties in Google Cloud. The goal is to allow release engineers to deploy (update instance templates and trigger managed instance group rollouts via Cloud Build) while preventing them from administering shared VPC firewall rules. It also emphasizes auditable, role-based access aligned to compliance expectations.

Why the Answer is Correct: Creating two custom IAM roles cleanly separates deployment permissions from network security administration. A “deployer” role can include only the permissions required to update instance templates, update/roll managed instance groups, and run Cloud Build (and any required service account impersonation). A separate “network security” role can include only Compute Engine firewall rule permissions (e.g., compute.firewalls.create/update/delete) and be granted exclusively to the NetOps group. This directly enforces least privilege and provides clear, auditable assignments (group membership + IAM policy bindings) that satisfy auditors.

Key Features / Best Practices:
- Use Google Groups for the 10 release engineers and 2 NetOps engineers, then bind roles to groups for auditability and operational simplicity.
- Prefer predefined roles where possible (e.g., compute.instanceAdmin.v1, compute.loadBalancerAdmin, cloudbuild.builds.editor) but use custom roles when you must exclude sensitive permissions like firewall administration.
- In Shared VPC, firewall rules typically live in the host project; ensure the release engineers have no firewall permissions in the host project while still having needed permissions in the service project(s).
- Aligns with Google Cloud Architecture Framework: “Security, privacy, and compliance” (least privilege, separation of duties, centralized governance).

Common Misconceptions: People often think “Network Admin + policy/process” is acceptable, but auditors require technical enforcement. Others confuse Access Context Manager (context-based access) with authorization; it can restrict where access comes from, not replace least-privilege permissions. Deny policies can help, but granting broad roles like Editor is still poor practice and can create other unintended privileges.

Exam Tips: When you see “must not be able to” + “auditors require least privilege and separation of duties,” default to IAM role design (custom roles and group-based bindings) rather than procedural controls. Use the host/service project boundary in Shared VPC to ensure network controls remain with NetOps while app teams retain deployment autonomy.
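To make the role split concrete, here is a hedged sketch of the two custom role definitions. The permission lists are illustrative rather than exhaustive; validate them against what your pipelines actually need (for example by reviewing denied-permission errors in audit logs) before creating the roles with gcloud iam roles create or Terraform.

```python
# Illustrative custom role definitions: a narrowly scoped deployer role for the
# release engineers and a firewall-admin role for NetOps. Permission lists are
# examples only and should be validated against real workloads.
DEPLOYER_ROLE = {
    "roleId": "releaseDeployer",
    "title": "Release Deployer",
    "includedPermissions": [
        "compute.instanceTemplates.create",
        "compute.instanceTemplates.get",
        "compute.instanceGroupManagers.get",
        "compute.instanceGroupManagers.update",
        "cloudbuild.builds.create",
        "cloudbuild.builds.get",
    ],
    "stage": "GA",
}

NETOPS_FIREWALL_ROLE = {
    "roleId": "firewallAdmin",
    "title": "Firewall Administrator",
    "includedPermissions": [
        "compute.firewalls.create",
        "compute.firewalls.update",
        "compute.firewalls.delete",
        "compute.firewalls.get",
        "compute.firewalls.list",
        "compute.networks.updatePolicy",  # needed to attach firewall rules to a network
    ],
    "stage": "GA",
}
```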

Question 8 (Choose two)

Your retail chain streams point-of-sale logs every 5 minutes from 300 branches via Pub/Sub into a Dataflow pipeline that writes to Cloud Bigtable for fraud analytics, and you discover that two PII fields (national ID numbers and phone numbers) are included; you must obfuscate these fields during ingestion to prevent analysts from seeing raw values, yet be able to re-identify the original values for regulatory investigations within 7 years while maintaining consistent tokens to enable joins and group-bys; which two components should you use? (Choose two.)

Secret Manager is designed to store and access secrets such as API keys, passwords, and certificates. While it can store sensitive material, it is not the primary service for managing encryption keys used for deterministic encryption/decryption workflows at scale. It lacks the dedicated cryptographic key lifecycle controls (key versions, rotation semantics for encryption, HSM options) and standard integration pattern expected for DLP-based de-identification compared to Cloud KMS.

Cloud Key Management Service (KMS) is the correct key management component to protect and control the cryptographic keys used to obfuscate and later re-identify PII. It supports IAM-based access control, audit logging, key versioning, and rotation planning—critical for a 7-year regulatory window. KMS integrates with Cloud DLP for KMS-wrapped keys, enabling separation of duties and controlled decryption during investigations.

Cloud DLP with cryptographic hashing can create consistent pseudonymous values (the same input hashes to the same output), which supports joins and group-bys. However, hashing is one-way and therefore does not meet the requirement to re-identify original national IDs and phone numbers for investigations. Even with salts/keys (HMAC), it is still not decryption; it remains non-reversible, making it unsuitable here.

Cloud DLP automatic text redaction removes or masks sensitive substrings (for example replacing digits with Xs). This can prevent analysts from seeing raw PII, but it typically destroys the ability to perform consistent joins/group-bys and does not allow reliable re-identification of the original values. Redaction is appropriate for display/log sanitization, not for reversible tokenization needed for regulated investigations.

Cloud DLP deterministic encryption using AES-SIV is designed for consistent, reversible obfuscation. Deterministic encryption produces the same ciphertext for the same plaintext (with the same key/context), enabling stable tokens for analytics (joins, group-bys) while still allowing authorized re-identification via decryption. When combined with Cloud KMS-managed keys, it provides strong security controls, auditability, and long-term key governance required by compliance.

Question Analysis

Core concept: This question tests tokenization/pseudonymization of PII during streaming ingestion while preserving analytic utility (joins/group-bys) and enabling later re-identification. In Google Cloud, this is commonly implemented with Cloud DLP de-identification using deterministic encryption, backed by customer-managed keys in Cloud KMS.

Why the answer is correct: You must (1) obfuscate national ID and phone numbers so analysts cannot see raw values, (2) keep tokens consistent so the same input always maps to the same output (enabling joins and aggregations), and (3) be able to recover the original values for up to 7 years. Cloud DLP deterministic encryption using AES-SIV produces stable ciphertext for the same plaintext (given the same key/context), which functions as a consistent token. Unlike hashing, deterministic encryption is reversible, satisfying re-identification requirements. Cloud KMS provides the cryptographic key management needed to protect, rotate, and audit access to the key material used for encryption/decryption over the 7-year period.

Key features / configurations / best practices:
- Use the Cloud DLP de-identification transform for deterministic encryption (AES-SIV) on the two PII fields in the Dataflow pipeline before writing to Bigtable.
- Store and manage the wrapping/KEK in Cloud KMS (CMEK). Control access via IAM, separation of duties, and audit via Cloud Audit Logs.
- Plan for long retention: ensure key lifecycle policies, backups, and a rotation strategy that does not break re-identification (rotation must be handled carefully; keep prior key versions available for decrypting historical data).
- Consider using DLP with KMS-wrapped keys so only a tightly controlled investigation workflow can decrypt.

Common misconceptions:
- Cryptographic hashing in DLP seems to provide consistent tokens, but it is not reversible, so it fails the re-identification requirement.
- Automatic text redaction removes data entirely and breaks joins/group-bys and investigations.
- Secret Manager stores secrets but is not a cryptographic key management system for encryption operations and does not provide the same key lifecycle controls as KMS.

Exam tips: When you see “consistent token for analytics” + “must re-identify later,” think deterministic encryption/tokenization (not redaction, not hashing). When you see “7 years,” “regulatory investigations,” and “control/audit of keys,” pair the de-identification method with Cloud KMS for CMEK, IAM control, and auditability. This aligns with the Google Cloud Architecture Framework’s data protection principles: strong encryption, centralized key management, and least privilege access.
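The sketch below shows one of the two transforms (phone numbers) using Cloud DLP deterministic encryption with a KMS-wrapped key; the national ID field would get an analogous transformation. The project, key ring, wrapped-key bytes, and surrogate infoType name are placeholders. Re-identification for investigations would use the corresponding reidentify_content call with the same key and surrogate infoType.

```python
# Sketch of reversible tokenization: Cloud DLP deterministic encryption (AES-SIV)
# with a KMS-wrapped key, applied in the pipeline before writing to Bigtable.
# Key names, the wrapped key bytes, and the surrogate infoType are placeholders.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
PARENT = "projects/retail-fraud-prod/locations/global"

deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {
                "info_types": [{"name": "PHONE_NUMBER"}],
                "primitive_transformation": {
                    "crypto_deterministic_config": {
                        "crypto_key": {
                            "kms_wrapped": {
                                "wrapped_key": b"<DEK wrapped by Cloud KMS>",  # placeholder bytes
                                "crypto_key_name": (
                                    "projects/retail-fraud-prod/locations/us/"
                                    "keyRings/pii/cryptoKeys/tokenization-kek"
                                ),
                            }
                        },
                        # The surrogate annotation makes later re-identification possible.
                        "surrogate_info_type": {"name": "PHONE_TOKEN"},
                    }
                },
            }
        ]
    }
}

resp = dlp.deidentify_content(
    request={
        "parent": PARENT,
        "inspect_config": {"info_types": [{"name": "PHONE_NUMBER"}]},
        "deidentify_config": deidentify_config,
        "item": {"value": "Customer callback number: +1 555-0100"},
    }
)
print(resp.item.value)  # same input yields the same token, and the token is reversible
```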

Question 9 (Choose two)

Your company runs 20 CI pipelines in GitHub Actions and Azure DevOps outside Google Cloud that deploy to 8 Google Cloud projects using workload identity federation; service account keys are prohibited by policy. You must prevent attackers from spoofing another pipeline's identity (for example, by manipulating mutable claims like email or display name) to obtain unauthorized access to Google Cloud resources, while keeping existing federation flows and token lifetimes (1 hour) unchanged. What should you do? (Choose two.)

Enabling Data Access audit logs for IAM/STS can help investigate who exchanged tokens and when, and can support alerting. However, it does not prevent an attacker from successfully spoofing identity if the trust policy accepts mutable claims. The question asks to prevent spoofing while keeping flows and token lifetime unchanged, so logging alone is insufficient and is primarily a detective control.

There is no typical, recommended control in WIF/IAM to cap the number of external identities that can impersonate a service account as a spoofing mitigation. Even if such a cap existed, it would not stop an attacker from impersonating one of the allowed identities by manipulating claims. The core issue is claim selection and authorization conditions, not quantity limits.

A dedicated security project for workload identity pools/providers is a good organizational pattern for centralized governance and consistent policy management across multiple projects. However, it does not directly prevent spoofing between pipelines. Spoofing is prevented by using immutable claims in attribute mapping/conditions and correct IAM principal bindings; centralization without those controls still leaves the vulnerability.

This is the key preventative control. Configure attribute mapping and provider attribute conditions to rely only on immutable, provider-controlled identifiers (e.g., GitHub repository_id, workflow identifiers tied to the repo, Azure AD object_id/app ID) when setting google.subject and when writing IAM bindings (principalSet selectors). This prevents attackers from altering mutable claims like email/display name to impersonate another pipeline identity.

Least-privilege IAM on each service account and on target resources reduces the impact of any compromised pipeline identity. With 1-hour tokens unchanged, you must assume a stolen token could be used until expiry; limiting permissions prevents cross-project or broad access. This is a core Google Cloud security best practice and complements strong identity binding to provide defense in depth.

Question Analysis

Core concept: This question tests secure configuration of Workload Identity Federation (WIF) for external CI/CD systems (GitHub Actions, Azure DevOps) and how to prevent identity spoofing during Security Token Service (STS) token exchange and service account impersonation. The key is to bind access to stable, non-user-editable identifiers and enforce least privilege on the resulting Google identity.

Why the answer is correct: (D) is the primary control to prevent one pipeline from impersonating another by manipulating mutable claims (email, display name, branch name, etc.). In WIF, you map external token claims into Google attributes (including google.subject) and then use IAM principalSet/principal bindings and provider attribute conditions to allow only specific identities. If you base authorization on mutable claims, an attacker who can influence those claims could satisfy the condition and impersonate a different pipeline. Using immutable identifiers (e.g., GitHub repository_id, workflow_ref with pinned refs, environment IDs, or Azure AD object_id / application (client) ID) makes spoofing materially harder because those values are stable and controlled by the identity provider.

(E) complements (D) by limiting blast radius if any single pipeline identity is compromised. Even with strong identity binding, a stolen OIDC token or compromised runner could still impersonate that pipeline for up to the unchanged 1-hour lifetime. Least-privilege IAM on the target service accounts and resources ensures the attacker cannot move laterally across projects or access unrelated assets.

Key features and best practices: Use provider attribute mapping to set google.subject to an immutable claim and optionally map custom attributes (attribute.repository_id, attribute.aud, attribute.actor_id). Then enforce provider attribute conditions and IAM bindings using principalSet://.../attribute.* selectors. Combine with per-pipeline (or per-repo/environment) service accounts and narrowly scoped roles (resource-level IAM where possible). This aligns with Google Cloud Architecture Framework principles: least privilege, strong identity, and defense in depth.

Common misconceptions: Centralizing pools (C) improves manageability but does not inherently prevent spoofing if mappings/conditions still rely on mutable attributes. Audit logs (A) help detection/forensics, not prevention. Artificial caps (B) are not a standard IAM/WIF control and don’t address claim spoofing.

Exam tips: For WIF questions, look for “attribute mapping/conditions” and “immutable identifiers” as the prevention mechanism, and pair them with least privilege to reduce impact. Logging is usually a secondary control unless the question asks for detection/monitoring.
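For illustration, here is roughly what an attribute mapping, attribute condition, and per-pipeline IAM binding built on immutable identifiers could look like for a GitHub Actions provider. The numeric IDs, pool, and project names are placeholders; the equivalent settings would be applied via your IaC or gcloud iam workload-identity-pools providers update-oidc, which accepts --attribute-mapping and --attribute-condition.

```python
# Illustrative provider configuration bound to immutable claims: subject and
# custom attributes come from GitHub's numeric repository_id rather than names
# or emails, and an attribute condition restricts which tokens are accepted.
# All IDs and names are placeholders.
github_provider = {
    "attribute_mapping": {
        # Immutable numeric IDs, not repo names/emails that can change or be spoofed.
        "google.subject": "assertion.repository_id",
        "attribute.repository_id": "assertion.repository_id",
        "attribute.workflow_ref": "assertion.workflow_ref",
    },
    # Reject tokens that are not issued for the expected organization's repositories.
    "attribute_condition": "assertion.repository_owner_id == '98765432'",
}

# IAM binding for one pipeline's deploy service account, scoped to that repository:
deploy_sa_binding = {
    "role": "roles/iam.workloadIdentityUser",
    "members": [
        "principalSet://iam.googleapis.com/projects/111111111111/locations/global/"
        "workloadIdentityPools/ci-pool/attribute.repository_id/644321098"
    ],
}
```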

Question 10

A media analytics company performs a 7-day manual security review for every new service that verifies service-to-service transit paths, request handling, and VPC firewall rules across 3 projects and 2 Shared VPCs. With 12 squads releasing about 25 GKE and Cloud Run services per month, this process delays releases and consumes security bandwidth. They want teams to deploy without the full manual review while ensuring that violations (for example, 0.0.0.0/0 on TCP:22, publicly readable Cloud Storage buckets, or egress to restricted RFC1918 ranges) are prevented before merge/deploy rather than detected in production. They already use GitHub and Cloud Build and cannot fund a dedicated security reviewer per squad. What should you recommend?

Post-deployment scanning (SCC findings, audits) is valuable for detection and continuous monitoring, but it is reactive. The question explicitly requires preventing violations before merge/deploy, not discovering them in production. Relying on remediation after go-live still allows exposure windows (e.g., public buckets or SSH open to the world) and does not remove the release-blocking manual review process.

IaC plus policy-as-code validation in CI/CD is the correct preventive control. By running terraform plan validation (terraform-validator/Config Validator/OPA policies) in Cloud Build on GitHub PRs, noncompliant firewall rules, bucket ACL/public access settings, and routing/egress constraints can be blocked before merge and before deployment. This scales security enforcement without needing a dedicated reviewer per squad and reduces lead time.

Forcing all egress through inspection appliances is a runtime boundary/monitoring approach. It can help detect or block certain traffic, but it does not reliably prevent configuration violations like publicly readable Cloud Storage buckets or overly permissive firewall rules from being created. It also adds cost, operational complexity, potential latency, and becomes a central bottleneck—contrary to the goal of enabling fast, safe deployments.

Keeping production on-premises does not solve the core problem (scalable preventive controls) and is an extreme, impractical architectural change. It increases operational burden, reduces the benefits of managed services (GKE/Cloud Run), and does not inherently eliminate misconfiguration risk—just shifts it elsewhere. It also contradicts the intent to enable teams to deploy safely in Google Cloud with automated guardrails.

Question Analysis

Core concept: This question tests “shift-left” preventive controls using policy-as-code and infrastructure-as-code (IaC) so security requirements are enforced automatically in CI/CD, rather than relying on manual reviews or post-deployment detection. It aligns with the Google Cloud Architecture Framework’s security principles: automate guardrails, use least privilege, and continuously validate compliance.

Why the answer is correct: Option B directly addresses the requirement to prevent violations before merge/deploy. By mandating IaC (e.g., Terraform) and embedding policy checks in Cloud Build triggered from GitHub PRs, the organization can block noncompliant changes (e.g., firewall rules allowing 0.0.0.0/0 to TCP:22, public Cloud Storage buckets, or disallowed egress routes) before they ever reach production. This removes the 7-day manual bottleneck while scaling across 12 squads and many monthly releases.

Key features / how it works:
- Policy-as-code in CI: Use terraform-validator, Config Validator (based on OPA/Rego), or OPA Gatekeeper-style constraints to evaluate planned resources against org policies.
- Pre-merge enforcement: Run checks on pull requests; fail builds and block merges when policies are violated.
- Consistency across projects/Shared VPC: Centralize constraints and reuse them across all repos/pipelines; enforce Shared VPC firewall and routing standards uniformly.
- Complementary controls: While not required by the question, this approach pairs well with Organization Policy Service (e.g., domain restricted sharing, public access prevention for buckets) and SCC for detection, but the key is prevention in CI.

Common misconceptions: Teams often default to “detect and remediate” (SCC findings) because it’s easy to enable, but that violates the requirement to prevent issues before deployment. Others propose runtime inspection appliances, but those detect traffic patterns rather than preventing misconfigurations like public buckets or overly permissive firewall rules.

Exam tips: When the prompt emphasizes “before merge/deploy,” “reduce manual reviews,” and “scale across many teams,” the best answer is almost always automated preventive guardrails: IaC + policy-as-code in CI/CD. Map this to compliance automation and continuous controls rather than operational after-the-fact monitoring.
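As a toy illustration of “block before merge,” the script below parses the JSON form of a Terraform plan (terraform show -json tfplan) inside a Cloud Build step and fails the build when it finds SSH open to 0.0.0.0/0 or a publicly readable bucket binding. Real deployments would normally use Config Validator or OPA policies instead; the field paths assume the standard Terraform plan JSON schema.

```python
# Toy pre-merge policy check: fail the build when a planned change opens TCP:22
# to 0.0.0.0/0 or grants public access to a Cloud Storage bucket.
import json
import sys

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def violations(plan: dict) -> list[str]:
    found = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        if rc["type"] == "google_compute_firewall":
            open_world = "0.0.0.0/0" in (after.get("source_ranges") or [])
            ssh = any(
                rule.get("protocol") == "tcp" and "22" in (rule.get("ports") or [])
                for rule in (after.get("allow") or [])
            )
            if open_world and ssh:
                found.append(f"{rc['address']}: SSH open to 0.0.0.0/0")
        if rc["type"] in ("google_storage_bucket_iam_member", "google_storage_bucket_iam_binding"):
            members = after.get("members") or [after.get("member")]
            if PUBLIC_MEMBERS & {m for m in members if m}:
                found.append(f"{rc['address']}: public Cloud Storage access")
    return found

if __name__ == "__main__":
    problems = violations(json.load(open(sys.argv[1])))
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)
```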

Success Stories (6)

P*********** · Nov 25, 2025

Study period: 2 months

I used Cloud Pass during my last week of study, and it helped reinforce everything from beyondcorp principles to securing workloads. It’s straightforward, easy to use, and genuinely helps you understand security trade-offs.

길** · Nov 23, 2025

Study period: 1 month

I worked through all the questions, then took the exam and passed right away! A little over 40% of the exam questions felt similar to these, and I answered the unfamiliar types based on my understanding of the concepts.

D*********** · Nov 12, 2025

Study period: 1 month

I would like to thank the Cloud Pass team for these great materials. They helped me pass the exam last week. Most of the questions in the exam were like the sample questions, and some were nearly identical. Thank you again, Cloud Pass.

O********** · Oct 29, 2025

Study period: 1 month

Absolutely invaluable resource to prepare for the exam. Explanations and questions are spot on to give you a sense of what is expected from you on the actual test.

O********** · Oct 29, 2025

Study period: 1 month

I realized I was weak in log-based alerts and access boundary configurations. Solving questions here helped me quickly identify and fix those gaps. The question style wasn’t identical to the exam, but the concepts were spot-on.

More Practice Tests

Practice Test #1

50 questions · 120 min · pass at 700/1000

Practice Test #3

50 questions · 120 min · pass at 700/1000

