
GCP
340+ free practice questions with AI-verified answers
AI-powered
Every Google Professional Cloud Security Engineer answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed explanations for every option and in-depth question analyses.
Your company operates a single Google Cloud organization with 10 folders and 150 projects, and the SOC requires that all Google Cloud Console sign-in events and API calls that change resource configurations be streamed to an external SIEM in under 60 seconds, with coverage for all existing and future projects. Which two actions should you take? (Choose two.)
An organization-level sink is directionally correct for covering all projects, but Cloud Storage is not appropriate for near real-time SIEM ingestion. Storage sinks are optimized for durable archival and batch processing; SIEMs typically ingest via streaming endpoints. You could build additional components (e.g., notifications + processing), but that adds complexity and may not reliably meet the <60 seconds requirement compared to Pub/Sub.
Correct. An organization-level aggregated sink with includeChildren captures logs from all folders and projects, including future projects, without per-project setup. Routing to Pub/Sub supports near real-time delivery and is a common SIEM integration pattern (subscriber pulls messages or a connector streams them onward). This meets the <60 seconds requirement and centralizes export management at the organization level.
Enabling Data Access audit logs org-wide is not required for the stated needs and can significantly increase log volume and cost. The requirement is for configuration-changing API calls (covered by Admin Activity logs, enabled by default) and Console sign-in events (identity audit logs). Data Access logs are for read/write access to user data (e.g., object reads), not primarily for admin configuration changes.
Correct. Cloud Console sign-in events are typically captured in Google Workspace (or Cloud Identity) audit logs rather than Cloud Audit Logs. Enabling export of Workspace audit logs to Cloud Logging makes those sign-in events available in Cloud Logging, where the organization-level sink can route them to Pub/Sub for SIEM ingestion. This is the key step to include Console login events in the same pipeline.
Parsing AuthenticationInfo can help identify the principal for many Cloud Audit Logs entries, but it does not solve the core requirements: it does not create org-wide coverage, does not provide a streaming export mechanism, and does not add Console sign-in events if they are not present in Cloud Logging. It is an implementation detail for SIEM enrichment, not one of the two required architectural actions.
Core concept: This question tests organization-wide logging architecture using Cloud Logging sinks and Cloud Audit Logs, and how to stream security-relevant events to an external SIEM with low latency. It also tests understanding of what “Google Cloud Console sign-in events” actually are and where they originate.

Why the answer is correct: To cover all existing and future projects across an organization hierarchy, create an organization-level aggregated log sink with includeChildren enabled. This ensures logs from all folders and projects (present and future) are captured without per-project configuration. For near real-time delivery (<60 seconds), Pub/Sub is the correct sink destination because it supports streaming consumption patterns and low-latency delivery to external systems; Cloud Storage is primarily batch/archival and does not meet the near-real-time requirement. Console sign-in events are not Cloud Audit Logs “Admin Activity” entries. Interactive sign-ins to the Cloud Console are identity events typically recorded in Google Workspace (or Cloud Identity) audit logs. To route those into Cloud Logging so they can be exported via the same org-level sink to Pub/Sub, you must enable export of Google Workspace audit logs to Cloud Logging. This provides the required coverage for Console login events.

Key features and best practices:
- Organization-level sinks with includeChildren provide centralized governance and scale for 150+ projects and future growth.
- Pub/Sub sinks are the standard pattern for SIEM streaming; the SIEM can pull or receive pushed messages via a subscriber/connector.
- Admin Activity audit logs are enabled by default and capture configuration-changing API calls; they are retained and cannot be disabled.
- This design aligns with the Google Cloud Architecture Framework pillars “Operational Excellence” and “Security, Privacy, and Compliance” by centralizing telemetry and enabling rapid detection.
Common misconceptions: Many assume Cloud Audit Logs already include Console sign-in events; they generally do not. Another trap is enabling Data Access logs (often expensive and high-volume) when the requirement is specifically configuration-changing activity (Admin Activity) plus sign-ins.

Exam tips: When you see “all projects including future,” think org-level sink + includeChildren. When you see “<60 seconds to SIEM,” think Pub/Sub (BigQuery suits analytics, not streaming export). Distinguish between Cloud Audit Logs (API activity) and identity sign-in logs (Workspace/Cloud Identity).
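The sink described above can be sketched as a request body for the Cloud Logging v2 LogSink resource. This is a minimal illustration: the organization ID, destination project, topic name, and filter are assumptions, not values from the question, and the filter would need to be broadened once Workspace login audit logs are shared with Cloud Logging.

```python
# Sketch of an organization-level aggregated sink routed to Pub/Sub.
# Field names follow the Cloud Logging v2 LogSink resource; IDs are illustrative.

def build_org_sink(project_id: str, topic: str) -> dict:
    """Build a LogSink body that covers all current and future projects."""
    return {
        "name": "siem-export",
        # Pub/Sub destination enables near real-time streaming to the SIEM.
        "destination": f"pubsub.googleapis.com/projects/{project_id}/topics/{topic}",
        # Illustrative filter scoping to Admin Activity audit logs (config-
        # changing API calls); extend it to include Workspace login events
        # once Workspace audit logs are exported to Cloud Logging.
        "filter": 'logName:"cloudaudit.googleapis.com%2Factivity"',
        # includeChildren makes this an aggregated sink over every folder and
        # project beneath the organization, including future projects.
        "includeChildren": True,
    }

sink = build_org_sink("siem-hub", "audit-export")
```

The sink would be created at the organization level (e.g., via `gcloud logging sinks create --organization=...`), and the SIEM connector subscribes to the target topic.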
Your healthcare analytics startup is building a multi-region telemetry pipeline on Google Cloud that spans Compute Engine VMs, a GKE Autopilot cluster, Cloud Storage buckets, BigQuery datasets (~50 TB), and Pub/Sub topics processing ~80,000 messages per second. Under your GDPR data protection by design program, the security review mandates that: (1) you—not Google—must control key creation, 90-day rotation, and IAM-scoped usage of encryption keys; (2) keys must reside in Google Cloud KMS/HSM with no dependency on external key stores; and (3) a single key management approach must be supported uniformly across all listed services. Which option should you choose to meet these requirements?
Cloud External Key Manager (EKM) integrates Google Cloud services with keys stored in an external key management system (often on-prem or third-party HSM). While it can provide strong customer control and separation from Google, it violates the requirement of “no dependency on external key stores” and “keys must reside in Google Cloud KMS/HSM.” Therefore it is not suitable for this scenario.
Customer-managed encryption keys (CMEK) use Cloud KMS (optionally Cloud HSM-backed) keys that you create and control. You can enforce IAM-scoped usage, manage key versions, and configure a 90-day rotation policy. CMEK is supported across major data and messaging services including Cloud Storage, BigQuery, Pub/Sub, and Compute Engine, enabling a single, uniform approach while keeping keys entirely within Google Cloud.
Customer-supplied encryption keys (CSEK) require you to provide raw key material to Google Cloud services at request time. This increases operational complexity, complicates rotation, and is not uniformly supported across all the listed services (notably many managed services like BigQuery and Pub/Sub rely on CMEK rather than CSEK). It also does not meet the requirement that keys must reside in Cloud KMS/HSM.
Google default encryption uses Google-managed keys and provides encryption at rest automatically, but you do not control key creation, rotation cadence, or IAM-scoped key usage. This fails the explicit requirement that the customer—not Google—must control key lifecycle and access. It also does not satisfy compliance-driven “customer control” expectations common in regulated environments.
Core Concept: This question tests encryption key management choices on Google Cloud—specifically the difference between Google-managed encryption, customer-managed encryption keys (CMEK) in Cloud KMS/Cloud HSM, customer-supplied encryption keys (CSEK), and External Key Manager (EKM). It also tests uniform applicability across multiple services (Compute Engine, GKE, Cloud Storage, BigQuery, Pub/Sub) and compliance-driven controls (GDPR “data protection by design”).

Why the Answer is Correct: Customer-managed encryption keys (CMEK) is the only option that satisfies all three requirements simultaneously: (1) you control key creation, rotation (including 90-day rotation), and IAM-scoped usage via Cloud KMS permissions; (2) keys reside in Google Cloud KMS and can be backed by Cloud HSM (still Google Cloud–native, no external keystore dependency); and (3) CMEK is broadly supported across the listed services, enabling a single consistent approach.

Key Features / How to Implement:
- Use Cloud KMS key rings/keys (regional) aligned to your multi-region architecture; for higher assurance, use Cloud HSM-backed keys.
- Enforce IAM least privilege: grant services access to the CryptoKey via roles/cloudkms.cryptoKeyEncrypterDecrypter (or narrower where possible) and use separation of duties for key admins.
- Configure rotation: set the rotation period to 90 days and ensure new key versions are used automatically where supported.
- Service integrations: Cloud Storage bucket default KMS key, BigQuery dataset/table CMEK, Pub/Sub topic CMEK, Compute Engine disk/image/snapshot CMEK, and GKE (Autopilot) CMEK for supported encryption use cases (e.g., node boot disks and certain control-plane related encryption features, depending on configuration).

Common Misconceptions:
- “External Key Manager” sounds like stronger control, but it explicitly depends on external key stores, which is disallowed here.
- “Customer-supplied encryption keys” can look like maximum control, but CSEK is operationally brittle, not uniformly supported across all services, and does not meet the requirement to keep keys in Cloud KMS/HSM.
- Google default encryption is always on, but you do not control key lifecycle or IAM-scoped key usage.

Exam Tips: When requirements say control of key creation/rotation plus IAM-scoped key usage, keys in Cloud KMS/HSM, and broad service support, the exam almost always points to CMEK (Cloud KMS/Cloud HSM). If external key custody is required, then EKM or Cloud HSM with external systems may appear, but here external dependency is explicitly prohibited.
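The 90-day rotation and IAM-scoped usage can be sketched as Cloud KMS request bodies. Field names follow the Cloud KMS CryptoKey REST resource as best understood; the service-agent member shown in the binding is a hypothetical placeholder.

```python
# Sketch of a CMEK CryptoKey with a 90-day rotation schedule and an
# IAM-scoped usage binding. Names and members are illustrative.

NINETY_DAYS_S = 90 * 24 * 60 * 60  # 7,776,000 seconds

crypto_key = {
    "purpose": "ENCRYPT_DECRYPT",
    # Automatic rotation: Cloud KMS creates a new primary version every 90 days.
    "rotationPeriod": f"{NINETY_DAYS_S}s",
    "versionTemplate": {
        # HSM protection keeps key material in Cloud HSM, still inside
        # Google Cloud (no external key store dependency).
        "protectionLevel": "HSM",
        "algorithm": "GOOGLE_SYMMETRIC_ENCRYPTION",
    },
}

# IAM-scoped usage: grant only encrypt/decrypt on this key to the service
# agents of the services that use it (hypothetical member shown).
binding = {
    "role": "roles/cloudkms.cryptoKeyEncrypterDecrypter",
    "members": [
        "serviceAccount:service-123456@gs-project-accounts.iam.gserviceaccount.com"
    ],
}
```

Each service (Cloud Storage bucket, BigQuery dataset, Pub/Sub topic, Compute Engine disk) is then pointed at the key's resource name in its own CMEK setting.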
You lead network security for a fintech trading platform on Google Cloud. You currently detect anomalies using VPC Flow Logs exported to BigQuery with a 5-minute aggregation interval across three VPCs. A red team exercise now requires examining full packet payloads and L4/L7 headers for east-west traffic between two production subnets (10.20.0.0/24 and 10.20.1.0/24) in a single VPC and forwarding a copy of up to 8 Gbps of this traffic to a third-party NIDS running on a Compute Engine VM, without altering original packets. Which Google Cloud product should you use?
Cloud IDS provides managed intrusion detection (based on Palo Alto Networks technology) for monitoring network threats in Google Cloud. It’s appropriate when you want Google-managed detection and alerting, not when you must forward raw mirrored traffic to a third-party NIDS VM. Cloud IDS consumes traffic via Packet Mirroring under the hood, but it does not satisfy the explicit requirement to send a copy of traffic to your own NIDS without altering packets.
VPC Service Controls logs relate to service perimeter events and access to Google-managed services (e.g., BigQuery, Cloud Storage) to reduce data exfiltration risk. They do not capture VPC east-west packet payloads or L4/L7 headers and cannot forward traffic to a network IDS. This option might seem relevant due to “fintech” and compliance, but it’s the wrong telemetry type for packet-level inspection.
VPC Flow Logs provide sampled/aggregated network flow metadata (5-tuple, bytes, packets, TCP flags, etc.) and are useful for traffic analysis and anomaly detection at scale. However, they do not include full packet payloads and cannot reconstruct L7 content. The question explicitly requires examining payloads and forwarding a copy of up to 8 Gbps of traffic, which Flow Logs cannot do.
Google Cloud Armor is an edge security service (WAF and DDoS protection) for external HTTP(S) load balancers and some other front-door scenarios. It does not provide packet capture for internal east-west traffic between subnets, nor does it mirror traffic to a third-party NIDS. It’s commonly confused with general network security tooling, but it’s specifically for protecting internet-facing applications.
Packet Mirroring is the correct choice because it creates out-of-band copies of VM network traffic, including packet headers and payloads, for deep inspection. That directly satisfies the requirement to analyze east-west traffic between two subnets and preserve the original packets unchanged. In Google Cloud, mirrored traffic is delivered to a collector architecture behind an internal passthrough Network Load Balancer, which is how a third-party NIDS on Compute Engine can receive and inspect the traffic. It also supports selective mirroring and filtering so the scope can be limited to the relevant production subnet traffic rather than the entire VPC.
Core concept: This question is about choosing the Google Cloud feature that provides full-fidelity packet copies for deep inspection of east-west traffic. The requirement is to inspect full packet payloads plus L4/L7 headers and send a copy of traffic to a third-party NIDS without modifying the original packets.

Why correct: Packet Mirroring is the Google Cloud capability designed for this exact use case. It mirrors traffic from selected VM instances in a VPC and sends those packet copies out-of-band to a collector behind an internal passthrough Network Load Balancer, enabling third-party NIDS or packet analysis tools to inspect the traffic while production flows continue unchanged. Unlike VPC Flow Logs, Packet Mirroring provides actual packet data rather than aggregated metadata.

Key features: Packet Mirroring supports filtering and scoping by subnet, instance, tags, direction, protocol, and CIDR ranges, which fits the requirement to focus on traffic between two production subnets in a single VPC. It is intended for deep packet inspection, forensic analysis, and integration with partner or self-managed security appliances. For an 8 Gbps requirement, the collector architecture and load balancer backend capacity must be sized appropriately.

Common misconceptions: Cloud IDS is a managed IDS service, but it is not the product you choose when the requirement explicitly says to forward mirrored traffic to your own third-party NIDS VM. VPC Flow Logs only provide flow metadata, not payloads. Cloud Armor is for edge application protection, and VPC Service Controls logs are about Google API perimeter events rather than packet capture.

Exam tips: If the question mentions full packet payloads, packet copies, east-west inspection, or sending traffic to a third-party appliance, think Packet Mirroring. If it mentions aggregated network metadata, think VPC Flow Logs. If it mentions managed IDS detections, think Cloud IDS. If it mentions WAF or DDoS at the edge, think Cloud Armor.
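A mirroring policy along these lines can be sketched as a Compute Engine packetMirrorings request body. The subnet CIDRs come from the question; the project, network, subnet, and forwarding-rule URLs are assumed placeholders, not values from the scenario.

```python
# Sketch of a Packet Mirroring policy scoped to the two production subnets,
# delivered to a collector internal passthrough Network Load Balancer that
# fronts the third-party NIDS VM. Resource URLs are illustrative.

packet_mirroring = {
    "name": "redteam-eastwest-mirror",
    "network": {"url": "projects/prod/global/networks/trading-vpc"},
    # Mirrored packet copies (headers + payloads) are delivered out-of-band
    # to the NIDS backends behind this internal forwarding rule.
    "collectorIlb": {"url": "projects/prod/regions/us-east1/forwardingRules/nids-ilb"},
    "mirroredResources": {
        "subnetworks": [
            {"url": "projects/prod/regions/us-east1/subnetworks/prod-a"},
            {"url": "projects/prod/regions/us-east1/subnetworks/prod-b"},
        ]
    },
    # Filter limits scope to traffic between the two subnets, both directions,
    # so the rest of the VPC is not mirrored.
    "filter": {
        "cidrRanges": ["10.20.0.0/24", "10.20.1.0/24"],
        "direction": "BOTH",
    },
}
```

Original packets are untouched; only copies flow to the collector, so the NIDS sees full payloads without affecting production traffic.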
You deploy a Cloud Run job in us-central1 that executes every 4 hours for ~20 minutes to compress and upload up to 500 MB of log archives into a Cloud Storage bucket named cr-logs-archive; the job must have write-only access (no read, list, or delete) to the bucket during execution, you want to avoid long-lived credentials, and you must grant only the minimum permissions required to complete the uploads—what should you do?
roles/storage.admin is far too permissive. It includes broad capabilities such as reading/listing objects, deleting objects, and managing bucket configuration. This violates the requirement for write-only access and minimum permissions. While using a service account as the Cloud Run execution identity avoids long-lived credentials, the role choice fails least-privilege and increases blast radius if the job is compromised.
Generating a long-lived JSON key and baking it into the container image is an anti-pattern. It directly conflicts with the requirement to avoid long-lived credentials and creates significant secret-management risk (key exfiltration from image registry, logs, or runtime filesystem). Even if the key is scoped to write access, key rotation and revocation become operational burdens and common sources of security incidents.
Service account impersonation is unnecessary here and typically increases required permissions. To impersonate, the job’s identity must have permissions like roles/iam.serviceAccountTokenCreator on the target service account, which expands IAM surface area. Cloud Run can simply run as the intended service account and obtain short-lived tokens automatically, achieving the same goal with fewer moving parts and less risk.
This is the best option: grant roles/storage.objectCreator on the cr-logs-archive bucket to the Cloud Run job’s execution service account. The job then uses short-lived, automatically provided credentials (no keys) and receives only create-object permissions, meeting the write-only requirement (no read/list/delete). It also supports least privilege by scoping the binding to the single bucket rather than the whole project.
Core Concept: This question tests IAM least privilege for Cloud Storage access from Cloud Run jobs using short-lived, workload-attached credentials (service account identity) rather than long-lived keys. It focuses on configuring access correctly and minimally.

Why the Answer is Correct: Cloud Run jobs run as an execution identity (a service account). When the job writes objects to a Cloud Storage bucket, it should authenticate using the job’s service account and receive short-lived OAuth tokens automatically via the metadata server/Workload Identity integration, with no keys required. To enforce “write-only” behavior, grant only the permission to create new objects, not read/list/delete. The predefined role roles/storage.objectCreator on the target bucket grants storage.objects.create (and related minimal permissions) without granting storage.objects.get, storage.objects.list, or storage.objects.delete. Granting this role directly to the Cloud Run job’s execution service account on the bucket (resource-level IAM) is the simplest and most secure approach.

Key Features / Best Practices:
- Use the Cloud Run job execution service account (no embedded secrets) to obtain short-lived tokens.
- Apply IAM at the narrowest scope: a bucket-level binding on cr-logs-archive, not project-wide.
- roles/storage.objectCreator aligns with least privilege for uploads (create-only).
- Avoid service account keys; prefer managed identity and token minting by Google.

Common Misconceptions:
- “storage.admin” feels convenient but violates least privilege by allowing read/list/delete and bucket management.
- Baking JSON keys into images is explicitly discouraged; it creates long-lived credentials and key leakage risk.
- Impersonation can be useful in multi-hop or cross-project scenarios, but it adds complexity and requires extra permissions (iam.serviceAccounts.getAccessToken / Token Creator) that are unnecessary when Cloud Run can directly run as the correct service account.
Exam Tips: For Cloud Run/Cloud Functions/GKE, default to using the runtime service account with narrowly scoped IAM on the specific resource. If the requirement says “avoid long-lived credentials,” eliminate JSON keys. If it says “write-only,” look for roles like storage.objectCreator (create) rather than storage.objectAdmin (create/delete) or storage.admin (full control). Also prefer resource-level IAM bindings (bucket) over project-level grants to reduce blast radius.
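The bucket-level binding described above can be sketched as an IAM policy fragment. The bucket name comes from the question; the service account email is a hypothetical placeholder.

```python
# Sketch of the bucket-level IAM binding for the Cloud Run job's execution
# service account. The role grants storage.objects.create only, with no
# get/list/delete, matching the write-only requirement.

BUCKET = "cr-logs-archive"
JOB_SA = "log-archiver@my-project.iam.gserviceaccount.com"  # hypothetical

binding = {
    "role": "roles/storage.objectCreator",
    "members": [f"serviceAccount:{JOB_SA}"],
}

# Roughly equivalent CLI (illustrative):
#   gcloud storage buckets add-iam-policy-binding gs://cr-logs-archive \
#     --member="serviceAccount:log-archiver@my-project.iam.gserviceaccount.com" \
#     --role="roles/storage.objectCreator"
```

The Cloud Run job is then deployed with `--service-account` set to this identity, so it picks up short-lived tokens from the metadata server with no key material anywhere.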
A media-streaming startup must launch a public REST API on Cloud Run behind an external HTTP(S) Load Balancer within 48 hours, and the security team mandates minimizing the container image’s attack surface (target image size under 200 MB, no interactive shell or package manager, and only required runtime files) without changing networking or deployment tools; what should the team do to meet this requirement?
Cloud Build can compile and build container images, but it is not, by itself, an attack-surface reduction measure. You can still produce large images with shells and package managers using Cloud Build if the Dockerfile/base image includes them. Cloud Build is a delivery mechanism; the security requirement here is about the runtime image contents (minimal filesystem, no shell), which Cloud Build alone does not guarantee.
Distroless or scratch-based images are designed to minimize the runtime footprint: they exclude interactive shells and package managers and include only the application and required runtime libraries. Combined with multi-stage builds, this approach reliably produces smaller images and reduces the number of binaries available to an attacker. This directly satisfies the explicit constraints (size, no shell, no package manager, only required files) without changing networking or deployment tools.
Deleting unused image tags from Artifact Registry reduces clutter and may help governance, but it does not change the security posture of the image that is deployed to Cloud Run. The attack surface is determined by what is inside the running container image layers, not by how many tags exist in the registry. This is operational hygiene, not container hardening.
A CD tool like Cloud Deploy improves release management (progressive delivery, approvals, rollbacks), but it does not inherently reduce container image size or remove shells/package managers. The question’s constraint is specifically about minimizing the container’s runtime contents within 48 hours, and adopting new CD tooling is both unnecessary and unlikely to directly meet the image-hardening requirements.
Core Concept: This question tests container hardening and supply-chain security for a Cloud Run service. A key best practice in the Google Cloud Architecture Framework (Security, Reliability) is to reduce the attack surface by minimizing what is shipped and executed in production containers.

Why the Answer is Correct: The requirement is explicitly about minimizing the container image’s attack surface: target size under 200 MB, no interactive shell, no package manager, and only required runtime files. Building images from distroless (or scratch where feasible) bases directly addresses these constraints. Distroless images contain only the application and its runtime dependencies (e.g., language runtime and required libraries) and omit shells and package managers, which reduces exploitable tooling and limits post-compromise actions. This is the most direct control to meet the stated security mandate without changing networking (external HTTP(S) Load Balancer) or deployment tooling (Cloud Run).

Key Features / Best Practices:
- Use multi-stage builds: compile/build in a builder stage (e.g., golang, node, maven) and copy only the final artifacts into a distroless runtime stage.
- Prefer distroless variants (e.g., gcr.io/distroless/base-debian12, java, nodejs) to keep necessary CA certs and libc while still removing shells/package managers.
- Run as non-root where possible and keep a minimal filesystem layout; include only required certs/config.
- Pair with vulnerability scanning (Artifact Registry scanning) and policy controls, but the question’s primary control is image composition.

Common Misconceptions: Teams often think “use Cloud Build” automatically produces secure images. Cloud Build is a build service; it does not inherently remove shells/package managers or shrink runtime layers unless you design the Dockerfile accordingly. Similarly, cleaning up tags in Artifact Registry improves hygiene but does not change what runs in production. CD tooling improves rollout safety but does not meet the stated image-hardening constraints.

Exam Tips: When you see requirements like “no shell,” “no package manager,” and “only runtime files,” the exam is pointing to distroless/scratch and multi-stage builds. Map the control to the layer being secured: runtime container contents (image hardening) rather than CI/CD orchestration or registry housekeeping. Also note that this approach is compatible with Cloud Run and external HTTP(S) Load Balancing because it does not alter networking, only the container artifact.
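A minimal multi-stage Dockerfile along these lines shows the pattern. Go is used purely as an example runtime, and the application path (`./cmd/api`) is an assumption; the distroless image name is a real published base, but any statically built service would work the same way.

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/api

# Runtime stage: distroless static base with no shell or package manager;
# the final image contains CA certificates and the compiled binary only.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

The resulting image is typically a few tens of megabytes, well under the 200 MB target, and there is no `/bin/sh` for an attacker to exec into.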
Your company has a three-level resource hierarchy: Organization > Business Unit folders > Team folders, and you are onboarding 12 platform squads that each receive a dedicated Terraform provisioner service account; each squad must be able to create and fully manage projects only under its assigned team folder (for example, folders/789012345678) while adhering to least privilege and preventing project creation in any other location; you need a scalable, centrally managed approach that supports Infrastructure as Code and avoids granting broad administrative control at the folder or organization level; what should you do?
Granting roles/resourcemanager.folderAdmin on the team folder is broader than necessary for the stated requirement. That role is intended for administering the folder itself, including folder management capabilities that exceed simply creating projects beneath it. Because the question explicitly asks to avoid broad administrative control at the folder or organization level, this violates least privilege. It also creates unnecessary risk if the provisioner service account is compromised, since the blast radius extends to folder administration rather than only project creation.
roles/resourcemanager.projectCreator at the organization level provides the ability to create projects, and an IAM Condition restricting resource.parent to the squad’s team folder enforces a hard boundary on where projects can be created. This is scalable (repeatable pattern for 12 squads), centrally managed, IaC-friendly, and aligns with least privilege by avoiding folder/organization admin roles while still preventing project creation in other locations.
Granting roles/editor at the organization level is excessively permissive and clearly conflicts with least-privilege design. It provides broad access across organizational resources and does not create a strong, centrally enforced boundary limiting project creation to one specific folder. Telling squads to apply finer-grained IAM later is reactive rather than preventive and leads to inconsistent governance. This option is therefore both overprivileged and operationally weak for centralized control.
roles/resourcemanager.folderIamAdmin allows the principal to manage IAM policy on the folder, which is a sensitive administrative capability. That permission can enable privilege escalation because the service account could potentially grant itself or others additional access. It also does not directly solve the core need of controlled project creation under only one assigned folder. Since the question asks for a centrally managed, least-privilege approach that avoids broad administrative control, this option is not appropriate.
Core concept: This question tests Google Cloud IAM design for resource hierarchy governance (Organization > Folders > Projects), specifically controlling project creation location using least privilege. It also touches IAM Conditions (conditional role bindings) and scalable, centrally managed access patterns suitable for Terraform-based provisioning.

Why the answer is correct: To let each squad create and fully manage projects only under its own team folder, you need two things: (1) the ability to create projects, and (2) a hard boundary preventing creation elsewhere. Granting roles/resourcemanager.projectCreator at the organization level provides the project creation permission (resourcemanager.projects.create). Adding an IAM Condition that checks the target parent (for example, resource.parent == "folders/789012345678") constrains where that permission can be exercised. This is scalable because you can manage a consistent pattern centrally (org-level bindings) while still scoping each squad to a specific folder via the condition. It also avoids giving broad folder/organization admin privileges.

Key features and best practices:
- IAM Conditions allow attribute-based access control on IAM bindings, enabling “create only under this folder” constraints.
- Centralized governance: an org-level binding plus a condition is easier to audit and standardize across 12 squads than delegating folder admin.
- Least privilege: projectCreator is narrower than folderAdmin/editor and avoids granting permissions to modify folder structure or IAM broadly.
- For “fully manage projects,” you typically pair this with project-level roles granted after creation (often via automation), e.g., roles/resourcemanager.projectIamAdmin and/or predefined admin roles on the newly created project. The question’s key requirement is preventing creation outside the assigned folder; conditional projectCreator is the control that enforces that boundary.

Common misconceptions:
- “Just grant folderAdmin on the team folder” seems to scope correctly, but it grants extensive folder management capabilities (including moving resources, deleting folders, and potentially manipulating hierarchy) beyond what’s needed for project provisioning.
- “Use editor at org” is overly broad and violates least privilege; it also doesn’t enforce location constraints.
- “Give folder IAM admin” enables changing IAM policies, which can lead to privilege escalation and does not directly grant project creation.

Exam tips:
- For controlling where projects can be created, think roles/resourcemanager.projectCreator plus IAM Conditions on resource.parent.
- Prefer conditional, centrally managed bindings for scalable multi-team governance.
- Watch for privilege escalation: roles that allow setting IAM (folderIamAdmin, projectIamAdmin) are powerful and should be tightly controlled.
- Map requirements to the minimal role that grants the needed permission, then add conditions to enforce boundaries aligned with the Google Cloud Architecture Framework’s security principle of least privilege and centralized policy management.
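The conditional binding described above can be sketched as an IAM policy fragment for one squad. The folder ID and CEL expression come from the question; the service account email and condition title are hypothetical.

```python
# Sketch of the org-level conditional binding for one squad's Terraform
# provisioner service account. The condition restricts project creation to
# the squad's assigned team folder.

squad_binding = {
    "role": "roles/resourcemanager.projectCreator",
    "members": ["serviceAccount:squad1-tf@seed-project.iam.gserviceaccount.com"],
    "condition": {
        "title": "only-under-team-folder",  # hypothetical title
        # CEL expression: the new project's parent must be this folder.
        "expression": 'resource.parent == "folders/789012345678"',
    },
}
```

Repeating this binding per squad (same role, different member and folder ID) gives the repeatable, centrally auditable pattern the question asks for; in Terraform it would be a single module instantiated twelve times.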
Your retail company is standing up a brand-new Google Cloud organization backed by a fresh Cloud Identity domain, and you must create exactly two super administrator accounts for break-glass use while meeting internal security baselines aligned to CIS; the environment has only standard internet egress with TLS (no VPN/Interconnect) and you must complete setup within 24 hours—when creating these super admin accounts, which two actions should you take to meet best practices and reduce risk? (Choose two.)
A. Incorrect. An Access Context Manager access level can restrict access to Google Cloud resources, but blocking super administrators from signing in defeats the purpose of break-glass accounts. In an emergency (identity recovery, org-level misconfiguration), you must be able to authenticate as super admin. Best practice is to tightly control and monitor super admin use, not prevent it entirely.
B. Incorrect. Removing IAM role bindings at the organization root may reduce Google Cloud resource permissions, but it does not remove Cloud Identity/Google Workspace super administrator privileges, which exist outside Cloud IAM. Also, super admins may need certain permissions during initial setup and recovery. The better pattern is separate day-to-day accounts with least privilege plus hardened super admins for emergencies.
C. Correct. Hardware security keys (FIDO2/U2F) provide phishing-resistant MFA and are strongly recommended for privileged accounts, aligning with CIS controls and Google best practices. Enrolling at least two keys per super admin reduces operational risk of lockout while maintaining strong authentication. This is especially important for brand-new orgs where identity compromise would be catastrophic.
D. Incorrect. While private network paths can reduce some exposure, Google sign-in is designed to be secure over the public Internet using TLS, and the scenario explicitly states there is no VPN/Interconnect and setup must be completed within 24 hours. Requiring private connectivity is not a realistic or necessary prerequisite for securely creating/administering super admin accounts.
E. Correct. Issuing separate non-privileged identities for daily work and reserving super admin accounts strictly for break-glass enforces least privilege and reduces the attack surface. It limits exposure of the highest-privilege credentials to rare, controlled events, lowering the likelihood of phishing, malware token theft, or accidental misuse. This is a standard privileged access management best practice.
Core Concept: This question tests secure identity and privileged access management for a new Google Cloud organization using Cloud Identity/Google Workspace super administrators. It aligns with CIS-style baselines and the Google Cloud Architecture Framework principle of “secure by design,” emphasizing strong authentication, least privilege, and separation of duties.

Why the Answer is Correct: C is correct because super administrator accounts are the highest-privilege identities in Cloud Identity/Google Workspace and are frequently targeted. Enforcing phishing-resistant MFA with hardware security keys (FIDO2/U2F) is a top control in CIS benchmarks and Google guidance for privileged accounts. Requiring at least two keys per admin reduces lockout risk (lost/broken key) while maintaining strong assurance. E is correct because it implements separation of duties and least privilege: admins should not use super admin accounts for routine tasks (email, browsing, daily console work). Day-to-day identities should have only the minimum required IAM/admin roles, while super admin accounts remain dormant and tightly controlled for break-glass recovery (e.g., billing/identity recovery, org-level emergencies). This reduces exposure time and the chance of credential theft or accidental destructive actions.

Key Features / Best Practices:
- Use phishing-resistant MFA (security keys) for privileged identities; avoid SMS/voice.
- Enroll backup keys and store them securely (e.g., separate physical locations/secure storage) to preserve recoverability.
- Maintain dedicated break-glass accounts with restricted use, strong monitoring, and documented procedures.
- Combine with logging/alerting (Admin audit logs, Cloud Audit Logs) and periodic access reviews to satisfy compliance intent.

Common Misconceptions: A seems security-positive but is self-defeating: blocking super admins from signing in prevents emergency recovery. B misunderstands the control plane: super admin power is in Cloud Identity/Workspace, not only Google Cloud IAM bindings; removing IAM roles doesn’t remove super admin capabilities and can hinder necessary org setup. D is impractical and unnecessary here: TLS already protects credentials in transit, and the scenario explicitly lacks private connectivity and requires completion within 24 hours.

Exam Tips: For “break-glass” accounts, look for (1) phishing-resistant MFA, (2) separate privileged vs daily identities, (3) minimal exposure and strong operational controls. Don’t choose options that eliminate the ability to use the break-glass account or require unavailable network prerequisites.
Your fintech company stores regulated workloads on Compute Engine persistent disks, and the security team requires that a 256-bit AES key generated in your on-premises HSM (rotated every 90 days) be used directly to encrypt data at rest, with Google Cloud not storing the key and data becoming irrecoverable if the key is unavailable. What should you do?
Incorrect. Using Cloud KMS to manage a DEK is not how Google Cloud storage encryption is typically implemented; KMS keys are used as KEKs to wrap DEKs (envelope encryption). More importantly, Cloud KMS stores key material in Google-managed HSM infrastructure, which violates the requirement that Google Cloud must not store the key and that the on-prem HSM-generated key be used directly.
Incorrect. Cloud KMS managing a KEK describes CMEK: Google stores the KEK in Cloud KMS and uses it to wrap per-resource DEKs. This can satisfy many compliance needs, but it does not meet the strict requirement that the key is generated in an on-prem HSM and that Google Cloud does not store the key. With CMEK, loss of access can block decryption, but the key still resides in Google Cloud.
Correct. Customer-supplied encryption keys (CSEK) let you provide your own raw AES-256 key material (e.g., generated in an on-prem HSM) to encrypt data at rest for persistent disks. Google does not store the key; it is supplied at use time to wrap the disk’s DEK. If the key is unavailable or lost, the disk data cannot be recovered, matching the stated requirement.
Incorrect. CSEK is not used to manage a KEK in the Cloud KMS sense; rather, the customer-supplied key is used to wrap the resource’s DEK. Choosing “KEK” here reflects a misunderstanding of Google’s envelope encryption terminology. The correct framing is that CSEK is applied to the DEK for the disk, not that you are managing a KEK service-side.
Core Concept: This question tests encryption key control for data-at-rest on Compute Engine persistent disks, specifically the difference between Google-managed encryption, Cloud KMS/CMEK, and Customer-Supplied Encryption Keys (CSEK). It also implicitly tests the DEK vs KEK model used by Google Cloud storage systems.

Why the Answer is Correct: The requirement says a 256-bit AES key is generated in an on-prem HSM, rotated every 90 days, must be used directly to encrypt data at rest, Google Cloud must not store the key, and data must be irrecoverable if the key is unavailable. This exactly matches CSEK behavior: you provide your own raw AES-256 key material with each API request (or via tooling that supplies it), Google uses it to encrypt the resource’s data encryption key (DEK) and does not persist your key. If you lose the key, Google cannot decrypt the disk—data is effectively unrecoverable.

Key Features / How it Works: Compute Engine persistent disks are encrypted with a per-disk DEK. With CSEK, you supply an AES-256 key that Google uses to wrap (encrypt) that DEK. Operationally, you must securely store and deliver the key for every attach/read operation and handle rotation (e.g., re-encrypt/rewrap disks with a new key on a 90-day schedule). This aligns with compliance-driven “hold-your-own-key” requirements and strong separation of duties.

Common Misconceptions: Cloud KMS (CMEK) is often chosen for “customer-managed keys,” but KMS stores key material in Google-managed HSMs. That violates “Google Cloud not storing the key” and “key generated in on-prem HSM used directly.” Similarly, “KEK vs DEK” wording can confuse: in Google’s envelope encryption, KMS keys are KEKs used to wrap DEKs; CSEK is the customer-provided key used to wrap the DEK.

Exam Tips: If the prompt demands: (1) key originates outside Google, (2) Google must not store it, and (3) loss of key makes data unrecoverable, think CSEK. If it says “customer controls rotation/permissions but Google stores key in KMS,” think CMEK with Cloud KMS/Cloud HSM. Map requirements to the key custody model first, then choose the service.
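As a sketch of the CSEK workflow described above — project, zone, and disk names are hypothetical, the key is generated locally with `openssl` as a stand-in for the on-prem HSM export, and the actual `gcloud` call is left commented because it needs a real project:

```shell
# Generate a raw 256-bit AES key (stand-in for the on-prem HSM export).
KEY=$(openssl rand -base64 32)

# CSEK key file: maps the disk resource URI to the raw key material.
cat > csek.json <<EOF
[{"uri": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/disks/regulated-disk",
  "key": "${KEY}",
  "key-type": "raw"}]
EOF

# Create the disk with the customer-supplied key (not executed here):
# gcloud compute disks create regulated-disk \
#   --zone=us-central1-a --csek-key-file=csek.json
```

On the 90-day schedule, the same shape applies in reverse: supply the old key to read, then rewrap with the new key and securely destroy local copies of the raw material.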
Your company delegates project-level administration to each feature team by granting the Project Owner role on their own Google Cloud projects; the organization has approximately 2,300 projects across 90 VPC networks. Security Command Center Premium has surfaced 180 OPEN_REDIS_PORT (TCP/6379) findings where VMs with external IPs are reachable from the internet. You must enforce preventative guardrails that automatically apply to all current and future projects to stop these common exposure misconfigurations without relying on per-project maintenance. What should you do?
An org-level hierarchical firewall deny from 0.0.0.0/0 is centralized and preventative, but it is overly broad. It would block all internet ingress to every workload, including legitimate public-facing services (for example, web apps on 443) and could cause widespread outages. The question targets stopping common exposure misconfigurations (like open Redis), not eliminating all public ingress everywhere.
This is the best guardrail: an organization-level hierarchical firewall policy that allowlists only approved internal/on-prem CIDRs, paired with an explicit lower-priority deny for all other ingress (hierarchical policies pass unmatched traffic down to VPC rules, so the deny is what makes the boundary enforceable). It scales across 2,300 projects and automatically applies to future projects, meeting the “no per-project maintenance” requirement. It prevents direct internet reachability to VM external IPs (including TCP/6379) even if teams misconfigure VPC firewall rules.
Cloud Armor is a WAF/DDoS policy attached to external HTTP(S) Load Balancers (and some proxy-based LBs). It does not protect arbitrary TCP ports on VM external IPs, such as direct Redis on 6379. Since the finding is about VMs reachable from the internet, the correct control plane is VPC/hierarchical firewalling, not Cloud Armor.
Adding a priority-0 deny rule in every VPC network could block internet ingress, but it violates the requirement to avoid per-project/per-network maintenance. With ~90 VPCs and 2,300 projects (and future growth), this approach is operationally brittle and easy to drift. Centralized hierarchical firewall policies are designed specifically to enforce consistent baselines across the organization.
Core concept: This question tests preventative network guardrails at scale using hierarchical firewall policies to enforce consistent ingress controls across many projects/VPCs. It aligns to the Google Cloud Architecture Framework principle of “defense in depth” and centralized governance.

Why the answer is correct: An organization-level hierarchical firewall policy that only allows ingress from approved internal/on-prem ranges (backed by an explicit lower-priority deny for all other sources) prevents internet reachability to workloads with external IPs, including Redis on TCP/6379. Because the policy is applied at the org (or folder) level, it automatically covers all existing and future projects and VPC networks without per-project rule maintenance, which is essential given 2,300 projects and delegated Project Owner access.

Key features and best practices: Hierarchical firewall policies are evaluated before VPC firewall rules and can be used to enforce baseline controls that project owners cannot bypass; because unmatched traffic is handed down (goto-next) to VPC rules, pair the allowlist with an explicit low-priority deny so that permissive project-level rules cannot reopen exposure. Using an “allowlist” model (only trusted source CIDRs) is a strong boundary protection pattern: it blocks common exposure misconfigurations (like OPEN_REDIS_PORT) regardless of whether a team accidentally creates permissive VPC firewall rules. You can scope the policy to the organization or specific folders, and you can include on-prem ranges (via Cloud VPN/Interconnect) and RFC1918 ranges as appropriate. This approach is preventative (stops exposure) rather than detective (finding after the fact).

Common misconceptions: A blanket deny from 0.0.0.0/0 (option A) sounds secure but is usually operationally incorrect because it would also block legitimate public services (HTTPS, APIs, IAP/SSH patterns, etc.) across the entire organization. Cloud Armor (option C) is not a general VM ingress firewall; it protects HTTP(S) load-balanced traffic, not direct TCP/6379 to VM external IPs. Per-VPC rules (option D) do not meet the requirement to avoid per-project maintenance.

Exam tips: When you see “apply to all current and future projects” and “preventative guardrails,” think organization/folder-level controls: hierarchical firewall policies and organization policies. For internet exposure findings, prefer centralized network boundary controls over per-project firewall rule hygiene or WAF products that only cover load balancers.
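The guardrail above can be sketched as a short gcloud runbook. Organization ID, policy short name, and CIDR ranges are placeholders, and the commands are written to a file for review rather than executed, since they require org-level credentials. Note the explicit low-priority deny rule: hierarchical policies otherwise pass unmatched traffic down to VPC rules.

```shell
# Write the commands to a runbook for review; nothing is executed here.
cat > hfw-runbook.sh <<'EOF'
# 1. Create the org-level hierarchical firewall policy.
gcloud compute firewall-policies create \
  --organization=123456789012 --short-name=baseline-ingress

# 2. Allow only approved internal/on-prem ranges.
gcloud compute firewall-policies rules create 1000 \
  --firewall-policy=baseline-ingress --organization=123456789012 \
  --direction=INGRESS --action=allow \
  --src-ip-ranges=10.0.0.0/8,192.168.0.0/16 \
  --layer4-configs=all

# 3. Explicitly deny all other ingress (lower priority than the allow).
gcloud compute firewall-policies rules create 2000 \
  --firewall-policy=baseline-ingress --organization=123456789012 \
  --direction=INGRESS --action=deny \
  --src-ip-ranges=0.0.0.0/0 --layer4-configs=all

# 4. Associate the policy with the organization node.
gcloud compute firewall-policies associations create \
  --firewall-policy=baseline-ingress --organization=123456789012
EOF
```

Because the association is at the organization node, every new project inherits the baseline with no per-project work.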
Your retail analytics platform runs on two Compute Engine instances behind a load balancer and authenticates to Google APIs using a user-managed service account key stored in Secret Manager (secret name: retail-sa-key), and your security policy mandates rotation every 90 days with no more than 2 minutes of reduced capacity. To follow Google-recommended practices when rotating this user-managed service account key, what should you do?
Incorrect. There is no standard, Google-recommended “enable-auto-rotate” capability for user-managed service account keys as described. Google’s guidance is to avoid long-lived keys where possible, and if keys are required, rotate them via a controlled process (create new key, deploy, then delete old). Relying on a nonexistent/incorrect command would not meet the 90-day policy or the 2-minute reduced-capacity requirement.
Incorrect. Key rotation is not typically performed by supplying a “NEW_KEY” to a rotate command for service account keys. In Google Cloud, you create a new key for the service account, distribute it securely (here, via Secret Manager), update workloads, and then delete the old key. A single-step “rotate” command is misleading and doesn’t address safe rollout, verification, or Secret Manager versioning.
Correct. This is the recommended operational pattern: create a new service account key, store it as a new Secret Manager version, roll out the change (rolling restart across the two instances behind the load balancer), verify API access, then delete the old key from IAM. This minimizes downtime (overlapping keys during cutover) and reduces risk by promptly revoking the old credential after validation.
Incorrect. Keeping the old key on VMs for 30 days increases exposure and undermines the purpose of rotation. It also conflicts with least privilege and credential hygiene: old keys should be revoked once the new key is confirmed working. If rollback is needed, use Secret Manager versioning and controlled deployment, not lingering key files on disk (which are prone to theft and hard to audit).
Core concept: This question tests secure rotation of user-managed service account keys used by workloads, and how to do it with minimal downtime. In Google-recommended practice, long-lived service account keys are a risk and should be avoided when possible (prefer Workload Identity/attached service accounts). If you must use keys, rotate them safely using overlapping validity and controlled rollout.

Why the answer is correct: Option C describes the standard, recommended rotation pattern: create a new key, store it as a new Secret Manager version, update the application to use the new version, validate that API calls succeed, and then delete the old key from the service account. This achieves near-zero downtime because both keys can exist simultaneously during the cutover. With two instances behind a load balancer, you can do a rolling restart/refresh (one VM at a time) so capacity reduction stays within the 2-minute constraint.

Key features and configurations: Secret Manager supports versioning, enabling you to add a new version without breaking existing consumers. Applications should reference the secret (and ideally “latest”) and be able to reload credentials (or be restarted in a rolling manner). After verification, deleting the old key in IAM immediately invalidates it, reducing exposure. This aligns with the Google Cloud Architecture Framework’s security principles: reduce credential leakage risk, enforce credential lifecycle management, and operationalize security with repeatable runbooks.

Common misconceptions: Some assume there is an “auto-rotate” command for service account keys (there isn’t for user-managed keys in the way implied), or that “rotate” is a single gcloud command. In practice, rotation is a process: create new key, deploy, verify, then revoke old. Keeping old keys “just in case” increases breach blast radius and violates rotation intent.

Exam tips: On the Security Engineer exam, when you see “user-managed service account key” + “Secret Manager” + “rotation,” expect a versioned-secret + rolling deployment + delete-old-key workflow. Also remember the best-practice alternative: avoid keys entirely by using attached service accounts on Compute Engine (metadata server) or Workload Identity Federation for external workloads.
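The create-deploy-verify-revoke process might look like the following runbook. Service account email, instance names, zone, and the old key ID are placeholders, and the commands are written to a file for review rather than run here:

```shell
cat > rotate-sa-key.sh <<'EOF'
SA=retail-app@my-project.iam.gserviceaccount.com   # hypothetical SA

# 1. Create a new user-managed key.
gcloud iam service-accounts keys create new-key.json --iam-account="$SA"

# 2. Publish it as a new Secret Manager version.
gcloud secrets versions add retail-sa-key --data-file=new-key.json
shred -u new-key.json   # don't leave raw key material on disk

# 3. Rolling restart, one instance at a time (or restart just the app
#    process), to stay within the 2-minute reduced-capacity budget;
#    verify API calls succeed from each instance before continuing.
gcloud compute instances reset retail-vm-1 --zone=us-central1-a
gcloud compute instances reset retail-vm-2 --zone=us-central1-a

# 4. After verification, revoke the old key.
gcloud iam service-accounts keys delete OLD_KEY_ID --iam-account="$SA"
EOF
```

During steps 2-3 both keys are valid, which is what keeps the cutover downtime-free; step 4 closes the exposure window.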
Your logistics company runs a route-optimization model as a managed Vertex AI Batch Predictions job on Google Cloud. Twenty external carriers upload up to 1,000 CSV files per day (each <= 100 MB) to a dedicated Cloud Storage bucket via 15-minute signed URLs; a Cloud Function triggers the batch predictions and writes results to partner-specific buckets. You are conducting a configuration review with stakeholders and must clearly describe your security responsibilities for this managed AI workflow. What should you do?
Incorrect. While managed services reduce your operational burden, they do not eliminate your security responsibilities. Rate limits and budget alerts are cost/availability controls, not the core security posture for partner uploads and managed batch inference. You still must manage IAM, data access boundaries, and monitoring. This option misrepresents the shared responsibility model by implying most security shifts entirely to Google.
Incorrect. Securing CSV normalization code is important (input validation, safe parsing), but the question asks to describe security responsibilities for the managed workflow end-to-end. Limiting IAM discussion to “within the development team” ignores the biggest risks here: external partner access, service account permissions between Cloud Functions/Vertex AI/Cloud Storage, and auditability. It’s too narrow for a configuration review.
Correct. It accurately applies Google’s shared responsibility model: Google secures the underlying managed Vertex AI infrastructure, while you secure identities, permissions, and data access patterns. It highlights least-privilege IAM for service accounts and partners, secure upload/download via short-lived signed URLs and TLS, restrictive bucket policies, and operational monitoring using Cloud Audit Logs and Cloud Logging to detect misuse or malicious activity.
Incorrect. You generally cannot place custom network firewalls or deep IDS/IPS “around” Vertex AI’s managed service control plane in the way you would for self-managed VMs. The more appropriate controls are IAM, organization policies, VPC Service Controls, private access patterns, and logging/monitoring. Vulnerability scanning of a Google-managed runtime is also largely Google’s responsibility, not yours.
Core Concept - This question tests Google Cloud’s shared responsibility model in a managed ML workflow (Vertex AI Batch Predictions) and how to articulate customer vs. Google responsibilities across IAM, data access, and operational monitoring.

Why the Answer is Correct - Even though Vertex AI Batch Predictions is a managed service, you still own security “in the cloud”: who can upload data, who can trigger jobs, what identities run the pipeline, where outputs are written, and how activity is monitored. Option C correctly frames the review around IAM for service accounts and partners, secure upload/download patterns (short-lived signed URLs, TLS), least-privilege bucket policies, and logging/monitoring for detection and response. This is exactly what stakeholders need to hear in a configuration review: clear boundaries of responsibility and concrete controls.

Key Features - Use dedicated service accounts for Cloud Functions and Vertex AI with minimal roles (e.g., storage.objectViewer/objectCreator on specific buckets, Vertex AI job permissions only where needed). Prefer uniform bucket-level access, disable public access, and scope signed URLs to object name, method, content-type, and short expiration. Ensure Cloud Storage and Vertex AI access is over TLS (default) and consider VPC Service Controls for data exfiltration risk reduction. Enable and review Cloud Audit Logs (Admin Activity, Data Access where appropriate) and Cloud Logging metrics/alerts for anomalous uploads, job triggers, and cross-bucket writes.

Common Misconceptions - Many assume “PaaS means Google handles security” (A), but customers still manage identities, permissions, and data governance. Others focus narrowly on application code (B) while ignoring partner access and service-to-service permissions. Designing custom firewalls/IDS around a managed runtime (D) misunderstands that you can’t insert traditional network appliances into Google-managed control planes; instead, you use IAM, organization policies, VPC SC, and logging.

Exam Tips - For managed services, answer choices that mention shared responsibility, least privilege IAM, secure data access patterns (signed URLs, bucket policies), and audit logging are usually correct. Be wary of options that overemphasize network perimeter controls for serverless/managed services or treat cost controls as “security responsibilities.”
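The signed-URL scoping described above might be sketched as follows; bucket, object, and key-file names are hypothetical, and the `gcloud storage sign-url` invocation is written to a runbook rather than executed because it needs real service-account credentials:

```shell
cat > sign-partner-upload.sh <<'EOF'
# Mint a 15-minute, PUT-only signed URL for a single partner object;
# the signing service-account key file path is a placeholder.
gcloud storage sign-url gs://carrier-uploads/acme/routes-2024-06-01.csv \
  --http-verb=PUT \
  --duration=15m \
  --private-key-file=uploader-sa.json
EOF
```

Scoping each URL to one object, one verb, and a 15-minute lifetime means a leaked URL is useful only for a single upload slot, which is the risk boundary the review should call out.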
You are deploying an internal Cloud Run service that must read files from each employee's Google Drive and write a summary to BigQuery without any interactive user consent; the service must not rely on the currently signed-in user's credentials and must follow Google's recommended server-to-server approach. Your Google Workspace has 3,000 users, and the application should request only the https://www.googleapis.com/auth/drive.readonly scope and log which user was impersonated for each request; what should you do?
Granting users the Service Account User role allows those users to impersonate or actAs the service account for Google Cloud IAM purposes. It does not provide access to Google Drive user data via the Drive API, nor does it eliminate the need for Workspace OAuth authorization. It also scales poorly and increases risk by allowing many users to impersonate the service account.
Using a Google Group to grant Service Account User is an IAM convenience improvement over option A, but it still addresses the wrong control plane. IAM actAs permissions do not grant Workspace Drive access to employee files. The requirement is server-to-server access to Workspace user data without interactive consent, which requires domain-wide delegation and authorized OAuth scopes.
Authenticating with a super admin account is explicitly discouraged. It violates least privilege, creates a high-impact credential that can access far more than Drive read-only, and is difficult to manage securely (rotation, storage, auditing). It also does not follow Google’s recommended server-to-server approach for accessing Workspace user data at scale.
This is the correct pattern: configure a service account for Google Workspace domain-wide delegation, authorize only the drive.readonly scope in the Admin console, and impersonate each target user (set the subject) when calling the Drive API. This avoids interactive consent, does not rely on the signed-in user, supports auditing by logging the impersonated user, and enforces least privilege through scoped admin authorization.
Core Concept: This question tests Google Workspace access to user data from a server workload on Google Cloud using the recommended server-to-server pattern: a service account with Google Workspace Domain-Wide Delegation (DWD) to impersonate users and call Google APIs (Drive) without interactive consent.

Why the Answer is Correct: An internal Cloud Run service must read each employee’s Drive files and must not use the currently signed-in user’s credentials. For Workspace user data, the correct approach is to use a service account configured for domain-wide delegation, then impersonate (“sub”/“subject”) the target user when minting OAuth tokens. The Workspace admin explicitly authorizes the exact OAuth scope (here, drive.readonly) in the Admin console, satisfying least privilege. The application can log which user is impersonated per request because it sets the subject to the user’s email when generating credentials.

Key Features / Configurations:
1) Create a service account used by Cloud Run (via runtime service account).
2) Enable Domain-Wide Delegation on that service account and capture its OAuth client ID.
3) In Google Admin console: Security > API controls > Domain-wide delegation, add the client ID and authorize only https://www.googleapis.com/auth/drive.readonly.
4) In code: use service account credentials with a “subject” (user email) to impersonate each employee when calling Drive API; write summaries to BigQuery using the Cloud Run service account’s IAM permissions (separate from Workspace scopes).
This aligns with Google’s recommended server-to-server model and supports 3,000 users without per-user consent flows.

Common Misconceptions: IAM roles like Service Account User (options A/B) control who can impersonate a Google Cloud service account, not how to access Google Workspace user Drive data. They do not grant Drive API access to user files. Using a super admin account (option C) is insecure, non-auditable, and violates least privilege; it also creates operational risk and poor key management.

Exam Tips: For Workspace data access (Gmail/Drive/Calendar) across many users without interactive consent, look for “Domain-wide delegation” + “impersonate user” + “Admin console authorizes scopes.” Distinguish Google Cloud IAM (resource access) from Google Workspace OAuth scopes (user data access). Always prefer least-privilege scopes and per-request subject logging for auditability.
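Mechanically, domain-wide delegation comes down to the claim set in the OAuth token request: the `sub` field names the impersonated user, which is also the value to log per request. A sketch of that claim set follows (service account email and user are hypothetical; client libraries such as google-auth assemble and sign this for you when you set the subject):

```shell
NOW=$(date +%s)

# Claim set for a DWD token request; "sub" is the impersonated employee.
cat > dwd-claims.json <<EOF
{
  "iss": "summarizer@my-project.iam.gserviceaccount.com",
  "sub": "alice@example.com",
  "scope": "https://www.googleapis.com/auth/drive.readonly",
  "aud": "https://oauth2.googleapis.com/token",
  "iat": ${NOW},
  "exp": $((NOW + 3600))
}
EOF
# The JWT built from these claims, signed with the service account's key,
# is exchanged at the token endpoint for an access token acting as
# alice@example.com with only the drive.readonly scope.
```

Because the subject is set per request, iterating over 3,000 users is just a loop over `sub` values, with no consent screen and a clean audit trail.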
Your marketing analytics unit (120 users) plans to adopt Google Cloud for BigQuery and Vertex AI within 30 days, and company policy requires that all identities remain company-owned and all sign-ins use the corporate SAML 2.0 IdP; while attempting to create a new Cloud Identity tenant for example.com, the Platform Engineer discovers that example.com is already verified and actively used by an internal Google Workspace deployment with 850 active accounts and existing SAML SSO, and needs guidance on how to proceed with the least disruption and without violating the policy. What should you advise?
Domain contestation is designed for situations where domain ownership is disputed or an unauthorized party controls the domain in a tenant. Here, the domain is already used internally with 850 accounts and SAML SSO, so contestation would be highly disruptive and unnecessary. It risks account conflicts, service interruption, and complex migration. It does not align with the “least disruption” requirement and is not the right tool for internal segmentation.
Using a new domain (analytics-example.com) could technically enable a separate Cloud Identity tenant, but it violates the spirit of “all identities remain company-owned” under the primary corporate domain and creates identity fragmentation. Users would need alternate usernames or aliases, complicating lifecycle management, group membership, and access governance. It also increases operational overhead and can introduce inconsistent SSO and policy enforcement across tenants.
Granting Super Admin to a marketing program manager is excessive and violates least privilege. Super Admin has broad control over identity, security settings, and domain-wide configurations, creating significant risk. The requirement is to onboard a unit with minimal disruption and maintain policy, not to delegate top-level tenant control. Proper onboarding should be done by existing admins with scoped admin roles and group/OU-based controls.
This is the least disruptive and most policy-aligned approach. The domain is already verified and actively used with SAML SSO, so the correct action is to coordinate with the existing Super Administrator to onboard the marketing unit into the current tenant. Use OUs and groups to segment users, enforce SAML sign-in, and apply appropriate policies. Then grant BigQuery/Vertex AI access via group-based IAM in the correct Google Cloud projects.
Core concept: This question tests identity architecture on Google Cloud: Cloud Identity/Google Workspace tenancy, domain verification ownership, and federated SSO (SAML 2.0) as the authoritative sign-in method. It also touches organizational governance (least disruption, centralized policy enforcement) aligned with the Google Cloud Architecture Framework’s security foundations (centralized identity, consistent policy, least privilege).

Why the answer is correct: Because example.com is already verified and actively used in an existing Google Workspace tenant with SAML SSO, the company already has an authoritative identity boundary for that domain. A verified domain cannot be cleanly “re-created” in a separate Cloud Identity tenant without significant disruption. The least disruptive, policy-compliant path is to collaborate with the existing Super Administrator and onboard the marketing analytics unit into the current tenant. This preserves company-owned identities (same corporate domain), continues using the corporate SAML 2.0 IdP for sign-in, and avoids breaking existing users, groups, and app integrations. The unit can then access BigQuery and Vertex AI via Cloud Identity/Workspace identities and groups mapped to Google Cloud IAM.

Key features / configurations: Use Organizational Units (OUs) and Groups to segment the marketing unit, apply context-aware access and security policies, and manage app access. Ensure SAML SSO remains enforced, and use group-based IAM bindings in Google Cloud projects for BigQuery/Vertex AI. If needed, use separate Google Cloud organizations/projects and centralized IAM with groups, rather than separate identity tenants.

Common misconceptions: It may seem simpler to create a new tenant for the unit, but domain verification and identity lifecycle become fragmented, and SSO policy consistency becomes harder. Domain contestation is not a routine “move” mechanism and is intended for ownership disputes, not internal re-architecture.

Exam tips: When a domain is already verified in Workspace/Cloud Identity, assume it is the authoritative tenant. Prefer onboarding via OUs/groups and Google Cloud resource hierarchy (org/folders/projects) rather than creating parallel identity tenants. Look for answers that preserve centralized identity, minimize disruption, and maintain federation requirements.
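Group-based IAM bindings for the onboarded unit might look like the following runbook (group address, project ID, and role choices are placeholders; written to a file for review rather than executed):

```shell
cat > grant-analytics-access.sh <<'EOF'
# Bind roles to the marketing analytics group, not to individual users,
# so lifecycle management stays in the existing Workspace tenant.
gcloud projects add-iam-policy-binding analytics-prod \
  --member="group:marketing-analytics@example.com" \
  --role="roles/bigquery.user"

gcloud projects add-iam-policy-binding analytics-prod \
  --member="group:marketing-analytics@example.com" \
  --role="roles/aiplatform.user"
EOF
```

Adding or removing a user from the Google Group then grants or revokes BigQuery and Vertex AI access with no IAM policy changes at all.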
Your compliance team is launching an internal meeting-notes summarization pipeline on Google Cloud that uses a generative model to create summaries from audio transcripts, it must process up to 3,000 transcripts per day (average 1 MB each) with under 200 ms filtering latency per request, and company policy mandates that no personally identifiable information (PII)—such as names, email addresses, phone numbers, or government IDs—may appear in either the prompts sent to the model or the summaries returned, so you need a managed, scalable control that detects and automatically redacts PII on both ingress and egress before any storage or display; what should you do?
Cloud KMS protects data by encrypting it at rest and controlling access to encryption keys, which is valuable for confidentiality and key management. However, encryption does not inspect the contents of transcripts or summaries and does not remove sensitive values before they are sent to a model or shown to users. Once the application decrypts the data for processing, any PII is still present unless a separate inspection and redaction step is applied. Therefore, KMS is useful as a complementary control but does not satisfy the core requirement in the question.
Cloud DLP is Google Cloud’s native managed service for discovering and de-identifying sensitive data in text and other content. In this pipeline, it can inspect transcript text before it is turned into a prompt and then inspect the generated summary before that output is stored or displayed, ensuring sensitive values are removed on both ingress and egress. It supports de-identification techniques such as masking, redaction, replacement, and tokenization, which aligns directly with the requirement to prevent PII from appearing in prompts or returned summaries. This makes it the most appropriate managed and scalable control for compliance-focused content filtering in a generative AI workflow.
VPC Service Controls helps reduce the risk of data exfiltration by placing supported services inside a service perimeter and restricting access paths. That control governs where data can move, but it does not analyze request or response payloads to determine whether they contain PII. Sensitive data could still be passed to the model and returned in summaries entirely within the perimeter, which would still violate the stated policy. As a result, VPC-SC is defense in depth, not the primary solution for content redaction.
A third-party Marketplace product might offer scanning or redaction features, but the question asks for the most appropriate managed and scalable Google Cloud control. Introducing an external product adds procurement, integration, operational, and compliance review overhead, and it is not the standard native answer for Google Cloud certification scenarios. The option also emphasizes encryption, which still does not inherently guarantee that PII is removed from prompts and summaries before use. Cloud DLP is purpose-built for sensitive-data inspection and de-identification, making it the better fit.
Core Concept: This question tests managed data protection controls for compliance, specifically detecting and de-identifying PII in content flowing into and out of a generative AI summarization pipeline. In Google Cloud, the primary managed service for PII discovery and redaction is Cloud Data Loss Prevention (Cloud DLP).

Why the Answer is Correct: Option B is correct because Cloud DLP can inspect text for sensitive data types (names, emails, phone numbers, government IDs, etc.) and then automatically de-identify (mask, redact, replace, tokenize, or format-preserve) the detected findings. The requirement explicitly states that PII must not appear in prompts sent to the model (ingress) nor in summaries returned (egress), and that the control must be managed, scalable, and applied before storage or display. Cloud DLP is designed for exactly this: programmatic inspection plus de-identification as part of an application workflow.

Key Features / How to Implement:
- Use Cloud DLP inspect and de-identify APIs in the request path before calling the model (prompt construction) and again on model output before persisting or rendering.
- Configure infoTypes (built-in and custom), likelihood thresholds, and exclusion rules to reduce false positives.
- Choose transformations: masking/redaction for strict removal, or tokenization/crypto-based pseudonymization when you need consistent replacements.
- For performance, keep processing regional where possible and design for horizontal scale (e.g., Cloud Run/GKE calling DLP). The stated throughput (3,000 x 1 MB per day) is modest; the key is meeting per-request latency targets by minimizing extra hops and tuning inspection scope.

Common Misconceptions: Encryption (A) protects confidentiality at rest but does not remove PII from prompts or summaries. VPC Service Controls (C) reduces data exfiltration risk but cannot detect or redact PII within allowed flows. Third-party tools (D) may work but are not the most appropriate managed native control and add supply-chain/compliance complexity.

Exam Tips: When the requirement is "detect and redact PII" (especially for compliance), think Cloud DLP first. Pair it with IAM least privilege, logging, and boundary controls as defense in depth, but DLP is the control that actually enforces "no PII in content" on ingress/egress. Map such requirements to the Google Cloud Architecture Framework's security and compliance principles: data classification, prevention controls, and automated policy enforcement.
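As a concrete sketch of the redaction step, the Cloud DLP `content:deidentify` REST method can be called on transcript text before prompt construction and again on the model's summary before storage or display. The project ID, the infoType list, and the sample text below are illustrative assumptions, not values from the question.

```shell
# Illustrative sketch (requires gcloud auth and the DLP API enabled).
# PROJECT_ID is a hypothetical placeholder; tune infoTypes and likelihood
# thresholds for your transcripts.
PROJECT_ID="my-project"

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/${PROJECT_ID}/locations/global/content:deidentify" \
  -d '{
    "inspectConfig": {
      "infoTypes": [
        {"name": "PERSON_NAME"},
        {"name": "EMAIL_ADDRESS"},
        {"name": "PHONE_NUMBER"}
      ]
    },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    },
    "item": {"value": "Call Jane Doe at 555-0100 or jane@example.com"}
  }'
# The response item.value should carry infoType placeholders (e.g. [PERSON_NAME])
# in place of the detected values.
```

Running the identical call on the generated summary before it is persisted or rendered covers the egress side of the policy with the same configuration.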
Your company exposes a payment reconciliation API running on Compute Engine VMs behind a regional Internal HTTP(S) Load Balancer in VPC prod-finance (10.20.0.0/16), reachable only from your on-premises network over two HA VPN tunnels (TCP 443). To meet a security mandate, all inbound TLS traffic from on-prem must be intercepted (decrypted and inspected for malware/C2) and then re-encrypted before it reaches the backends, and the policy must be centrally enforced for all projects in the Apps-Prod folder without changing the application architecture. What should you do?
Secure Web Proxy is designed primarily as a managed forward proxy for controlling and inspecting outbound (egress) web traffic from clients to the internet/SaaS, with URL filtering and threat protection. It is not the right construct for inbound TLS traffic coming from on-prem over HA VPN to an internal load balancer. It also doesn’t meet the “centrally enforced for all projects in a folder” requirement in the same way as hierarchical firewall policies.
An internal proxy load balancer can terminate TLS, but load balancers are not positioned as malware/C2 inspection engines for decrypted payload inspection. Terminating TLS at the ILB changes where encryption ends, but it does not inherently provide the mandated security inspection capabilities. Additionally, it does not provide centralized, folder-level enforcement across multiple projects; it would be configured per service/load balancer.
Hierarchical Firewall Policies provide centralized, inherited enforcement at the folder (Apps-Prod) level, which directly matches the requirement for consistent policy across projects. Cloud NGFW Enterprise adds advanced inspection, including TLS inspection (decrypt/inspect/re-encrypt) for traffic, enabling malware/C2 detection inline without requiring application changes. This combination best satisfies both the technical mandate and the governance/central enforcement requirement.
VPC firewall rules are scoped to a specific VPC network/project and are not centrally enforced across all projects in a folder unless you replicate them everywhere, which violates the “centrally enforced” mandate and increases drift risk. Also, standard VPC firewall rules do not provide TLS decryption/inspection; they are L3/L4 allow/deny controls. Even with NGFW features, the governance requirement points to hierarchical firewall policies, not per-VPC rules.
Core concept: This question tests network security controls for east-west/north-south traffic in Google Cloud, specifically centralized policy enforcement and inline TLS inspection. The relevant services are Cloud NGFW Enterprise (part of Network Security) with TLS inspection, and Hierarchical Firewall Policies (HFP) for organization/folder-level enforcement.

Why the answer is correct: The requirement is to intercept (decrypt), inspect (malware/C2), and re-encrypt inbound TLS traffic coming from on-prem over HA VPN to an internal HTTP(S) load balancer, without changing the application architecture, and to centrally enforce the policy for all projects in the Apps-Prod folder. Cloud NGFW Enterprise provides advanced threat prevention and TLS inspection capabilities, and Hierarchical Firewall Policies allow you to apply and centrally manage those controls at the folder level (Apps-Prod), ensuring consistent enforcement across multiple projects/VPCs. This aligns with the Google Cloud Architecture Framework security principle of centralized governance and consistent guardrails.

Key features / configuration points:
- Use a Hierarchical Firewall Policy attached to the Apps-Prod folder to enforce inspection requirements consistently.
- Enable Cloud NGFW Enterprise TLS inspection so traffic can be decrypted, inspected, and re-encrypted inline.
- This approach is transparent to applications and load balancers (no app changes) and scales across projects.
- Central management reduces configuration drift and supports compliance/audit needs.

Common misconceptions: A and B suggest inspecting at a load balancer or proxy. Internal HTTP(S) Load Balancing terminates TLS, but it is not a malware/C2 inspection engine, and Secure Web Proxy is primarily for egress web access control (forward proxy) rather than inbound, VPN-sourced, internal service protection. D uses a VPC firewall rule, but VPC firewall rules are not centrally enforced across projects like HFP and do not provide TLS decryption/inspection by themselves.

Exam tips:
- If the question emphasizes "centrally enforced across projects/folders," think Hierarchical Firewall Policies (or org policies), not per-VPC rules.
- If it requires decrypt/inspect/re-encrypt, look for Cloud NGFW Enterprise TLS inspection rather than load balancer features.
- Distinguish inbound service protection from outbound web proxying: Secure Web Proxy is typically for egress controls, not inbound API inspection.
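A minimal sketch of the folder-level policy might look like the following. FOLDER_ID, ORG_ID, the security profile group name, and the on-prem source range are placeholder assumptions, and the exact rule flags vary by gcloud release, so verify against current documentation before use.

```shell
# Sketch: hierarchical firewall policy attached at the Apps-Prod folder level.
gcloud compute firewall-policies create \
    --short-name=apps-prod-inspection \
    --folder=FOLDER_ID

# Rule sending inbound HTTPS from on-prem (arriving over HA VPN) through an
# NGFW Enterprise threat-prevention profile group, with TLS interception
# (--tls-inspect) so traffic is decrypted, inspected, and re-encrypted inline.
# 10.0.0.0/8 is an assumed on-prem range; substitute your real prefixes.
gcloud compute firewall-policies rules create 100 \
    --firewall-policy=apps-prod-inspection \
    --direction=INGRESS \
    --src-ip-ranges=10.0.0.0/8 \
    --layer4-configs=tcp:443 \
    --action=apply_security_profile_group \
    --security-profile-group=//networksecurity.googleapis.com/organizations/ORG_ID/locations/global/securityProfileGroups/threat-prevention \
    --tls-inspect
```

A TLS inspection policy and trust config (server certificates issued via CA Service) must also exist for interception to work; those steps are omitted here for brevity.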
Want to practice all the questions on the go?
Download Cloud Pass for free, with practice tests, progress tracking, and more.
Your media-streaming company plans to migrate 120 microservices to Google Cloud within 90 days across 5 VPC networks spanning us-central1 and europe-west2. You must decide where to apply security controls and policies and determine which responsibilities are handled by Google versus your team, considering that you store 12 TB of GDPR-regulated PII for 200,000 EU customers and must retain audit logs for 365 days. What should you do?
Security Foundations/landing zone blueprints are useful starting points (org structure, IAM patterns, logging, networking), but implementing them “verbatim” is rarely correct for exams. They require customization for your specific VPC topology, regions, GDPR controls, and log-retention requirements. Also, a blueprint does not “continuously maintain” posture by itself; you still need governance, monitoring, and ongoing operations to keep controls effective.
Risk Manager/cyber insurance (and similar risk-transfer mechanisms) may be part of an overall risk strategy, but they do not answer the core requirement: deciding where to apply controls and clarifying which responsibilities belong to Google vs your team. A posture report can be informative, yet it is not the foundational step for control ownership mapping, GDPR accountability, or audit-log retention design.
Tracking release notes and evaluating new services before enabling APIs is good change-management practice, but it does not establish a security control framework or clarify shared responsibility. The question is about control placement and responsibility boundaries for regulated data and audit requirements. Release notes won’t tell you which layers you must secure (IAM, keys, logging, segmentation) versus what Google secures.
This directly addresses the question’s core need: use the shared responsibility model to map security/compliance controls to your obligations and identify what Google manages. For GDPR PII and multi-region deployments, this mapping drives decisions like data residency, encryption/key ownership, IAM and network boundary controls, and audit-log retention architecture (e.g., sinks to storage/BigQuery for 365 days). It’s the correct foundational step before implementing specific technical controls.
Core Concept: This question tests the Google Cloud shared responsibility model and how to translate it into a compliance/security control mapping for a regulated migration (GDPR PII, multi-region, multiple VPCs), including audit-log retention. It aligns strongly with the Google Cloud Architecture Framework (Security, Governance, and Operations pillars) and compliance planning.

Why the Answer is Correct: Option D is the only choice that directly addresses the requirement to "decide where to apply security controls and policies and determine which responsibilities are handled by Google versus your team." In Google Cloud, Google is responsible for security "of" the cloud (physical facilities, hardware, core networking, managed service infrastructure), while you are responsible for security "in" the cloud (IAM, data classification, encryption choices, network segmentation, logging configuration, key management, workload hardening, and compliance evidence). For GDPR-regulated PII of EU customers, you must map controls to data residency/processing requirements (e.g., europe-west2), define who configures access, encryption, DLP, and logging, and ensure audit logs are retained for 365 days (typically via Cloud Logging sinks to BigQuery/Cloud Storage with retention/lock).

Key Features / What you would implement after the mapping: You would typically use Organization Policy constraints, IAM least privilege, VPC Service Controls for data exfiltration boundaries, Cloud KMS/CMEK where required, Cloud Audit Logs (Admin Activity/Data Access as appropriate), log sinks to meet 365-day retention, and monitoring/alerting. You also validate regionality for storage/processing and document controls for auditors.

Common Misconceptions: Blueprints (A) help accelerate landing zones but do not "continuously maintain" posture automatically, and they must be tailored to your regulatory and architectural needs. Release notes (C) are operational hygiene, not a responsibility model. Risk/insurance (B) does not establish control ownership or compliance.

Exam Tips: When a question explicitly asks "what is Google's responsibility vs yours," the shared responsibility model is the anchor. Then connect it to concrete control areas: IAM, network boundaries, encryption/keys, logging/retention, and data residency. For long audit-log retention, remember that default Logging retention is limited; plan sinks and retention/immutability controls.
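The 365-day retention piece of such a mapping can be sketched with Cloud Logging primitives. The project ID, bucket name, sink name, and organization ID below are placeholder assumptions.

```shell
# Sketch: a dedicated Logging bucket with 365-day retention in the EU region
# (add --locked only once you are sure, as locking is irreversible).
gcloud logging buckets create audit-retained \
    --location=europe-west2 \
    --retention-days=365 \
    --project=sec-logging-prj

# Route organization-wide audit logs into it with an aggregated sink that
# also covers future projects (--include-children).
gcloud logging sinks create org-audit-sink \
    logging.googleapis.com/projects/sec-logging-prj/locations/europe-west2/buckets/audit-retained \
    --organization=ORG_ID \
    --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```

After creating the sink, grant its writer identity (shown in the command output) permission to write to the destination, and confirm the filter matches the audit log types your auditors require.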
Your organization runs a self-hosted CI/CD system on a Google Kubernetes Engine (GKE) Autopilot cluster in a dedicated build project. The cluster executes more than 300 pipeline jobs per day, using ephemeral build pods that terminate within 60 minutes, and the pipelines deploy resources across multiple Google Cloud projects in the same folder. Security policy prohibits long-lived user credentials, requires least privilege, and mandates minimizing the risk of credential exfiltration from build nodes; you can enforce Organization Policy constraints at the project level. What should you do to minimize the risk of the CI/CD system's credentials being stolen?
Incorrect. A Cloud Identity user with rotating passwords and stored OAuth refresh tokens still relies on long-lived user credentials and secrets that can be exfiltrated from the CI/CD environment or vault. Refresh tokens are particularly sensitive because they can be exchanged for new access tokens. This violates the requirement to avoid long-lived user credentials and is not the recommended workload identity model for automated systems.
Incorrect. Using a Cloud Identity user account for automation is already against best practice for CI/CD because it introduces user credentials and typically long-lived refresh tokens. Additionally, disabling service account creation does not directly reduce the risk of credential theft from build pods; it can also hinder adopting the recommended approach (dedicated service accounts and Workload Identity) for workloads.
Correct. A dedicated service account with minimal required roles supports least privilege and avoids human user credentials. Enforcing constraints/iam.disableServiceAccountKeyCreation prevents creation of user-managed service account keys, which are long-lived and easily exfiltrated. Combined with GKE Workload Identity, build pods can use short-lived Google-issued tokens without storing secrets on nodes, minimizing credential theft impact.
Incorrect. constraints/iam.allowServiceAccountCredentialLifetimeExtension enables longer-lived credentials, which increases the window of misuse if credentials are stolen. The scenario explicitly wants to minimize exfiltration risk and avoid long-lived credentials. Security best practice is to keep credentials short-lived and non-exportable, not to extend their lifetime.
Core concept: This question tests secure workload authentication for CI/CD running on GKE (Autopilot) and how to reduce credential theft risk using Google Cloud's recommended identity model: short-lived, automatically rotated credentials via service accounts (ideally with Workload Identity), and preventing long-lived, exfiltratable secrets such as service account keys.

Why the answer is correct: Option C aligns with least privilege and "no long-lived user credentials" by using a dedicated service account for build workloads with only the required IAM roles across the target projects (granted via IAM bindings at the appropriate scope, e.g., project/folder). The key risk in CI/CD is credential exfiltration from build pods/nodes; user-managed service account keys are long-lived and easily stolen. Enforcing the Organization Policy constraint constraints/iam.disableServiceAccountKeyCreation at the project level prevents creation of new user-managed private keys, forcing the system toward short-lived credentials (OAuth access tokens minted by Google) rather than exportable key files. This is a core best practice in the Google Cloud Architecture Framework under "Security, privacy, and compliance": prefer managed identities and eliminate static secrets.

Key features / configurations:
- Use a custom service account dedicated to CI/CD, and grant only the required roles (often via predefined roles and, where needed, custom roles).
- Use GKE Workload Identity (Autopilot supports it) to map Kubernetes service accounts to Google service accounts so pods obtain short-lived tokens without storing keys.
- Apply constraints/iam.disableServiceAccountKeyCreation to block new key creation; optionally also audit for existing keys and remove them.

Common misconceptions:
- Using a Cloud Identity user (A/B) seems straightforward, but it introduces long-lived credentials (passwords/refresh tokens) and violates the "no long-lived user credentials" requirement.
- Allowing longer credential lifetimes (D) increases blast radius and theft impact; it is the opposite of minimizing exfiltration risk.

Exam tips: For CI/CD on GKE, the preferred pattern is: dedicated service account + least privilege + Workload Identity + block service account keys with org policy. When you see "minimize credential exfiltration," choose short-lived, non-exportable credentials and constraints that prevent key material from being created or stored.
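This pattern can be sketched roughly as follows. The project IDs, service account name, namespace, and Kubernetes service account name are hypothetical placeholders, and the org-policy command shown is the legacy resource-manager surface, so check your gcloud version's equivalent.

```shell
# 1. Dedicated build service account; grant it only the roles the pipelines
#    need in each target project (bindings omitted here).
gcloud iam service-accounts create ci-build-sa --project=build-prj

# 2. Workload Identity: let the build pods' Kubernetes service account
#    (namespace "ci", KSA "build-runner") impersonate the Google SA, so pods
#    get short-lived tokens with no key material on nodes.
gcloud iam service-accounts add-iam-policy-binding \
    ci-build-sa@build-prj.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:build-prj.svc.id.goog[ci/build-runner]"

kubectl annotate serviceaccount build-runner --namespace ci \
    iam.gke.io/gcp-service-account=ci-build-sa@build-prj.iam.gserviceaccount.com

# 3. Org policy: block user-managed keys so no long-lived credential can be
#    exported from the build project.
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation \
    --project=build-prj
```

Auditing and deleting any pre-existing keys on the service account completes the control, since the constraint only blocks new key creation.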
Your organization uses a Shared VPC where net-hub-prod is the host project, and all firewall rules, subnets, and an HA VPN with Cloud Router are configured in the host project. You need to let the Data Science Blue group attach Compute Engine VMs in service project ml-svc-02 only to the us-central1 subnetwork 172.16.20.0/24 and prevent attachment to any other subnet. What should you grant to the group to meet this requirement?
Granting Compute Network User at the host project level is too broad. It would allow the group to attach VMs from the service project to any subnetwork in the host project’s Shared VPC, not just 172.16.20.0/24 in us-central1. This violates the requirement to prevent attachment to any other subnet. Project-level grants are common but fail least-privilege constraints.
This is correct because IAM can be applied directly to the specific subnetwork resource in the host project. roles/compute.networkUser on the 172.16.20.0/24 subnetwork allows the group to use (attach NICs to) that subnetwork when creating VMs in the service project, while not granting permissions to use other subnetworks. This precisely enforces the requested restriction.
Compute Shared VPC Admin (roles/compute.xpnAdmin) at the host project level is intended for administering Shared VPC: enabling a host project, associating/dissociating service projects, and managing Shared VPC-level settings. It is far more privileged than needed and does not directly implement a “only this one subnet” usage constraint. It also increases risk by expanding administrative capabilities.
Compute Shared VPC Admin at the service project level is not the right control point for subnet attachment. The subnetworks are owned by the host project, so the effective permission to use a subnet must be granted on the host project’s subnetwork resource. Additionally, xpnAdmin is an administrative role and is overly permissive relative to the requirement.
Core concept: This question tests Shared VPC IAM delegation and the principle of least privilege for attaching resources (Compute Engine VMs) in a service project to subnets that live in a host project. In Shared VPC, subnets and firewall rules are owned by the host project, but service projects can create VM NICs that attach to those host subnets only if they have the right IAM permissions.

Why the answer is correct: To allow the Data Science Blue group to create/attach VM network interfaces only on a single subnetwork (us-central1, 172.16.20.0/24) and prevent use of any other subnet, you must scope the permission to that specific subnetwork resource. Granting Compute Network User (roles/compute.networkUser) on the specific subnetwork in the host project allows the group to use that subnetwork when creating VM instances (or instance templates/MIGs) from the service project, but does not grant the ability to use other subnets in the Shared VPC.

Key features / best practices:
- Shared VPC uses host-project-owned network resources; service-project principals need explicit IAM on those network resources.
- roles/compute.networkUser includes permissions such as compute.subnetworks.use and compute.subnetworks.useExternalIp (depending on configuration) that enable VM NIC attachment.
- Resource-level IAM on subnetworks is the recommended way to enforce subnet-level boundaries (least privilege) rather than broad project-level grants.
- This aligns with the Google Cloud Architecture Framework security principles: strong identity, least privilege, and clear segmentation/boundary control.

Common misconceptions: A common trap is granting roles/compute.networkUser at the host project level, which would allow attachment to all subnetworks in that host project, violating the requirement. Another misconception is using Shared VPC Admin roles, which are intended for configuring and managing Shared VPC (associating service projects, enabling the host project, etc.), not for narrowly controlling which subnet a workload can attach to.

Exam tips:
- For "can use a specific subnet" questions in Shared VPC, think: roles/compute.networkUser on the subnetwork resource.
- For "can manage Shared VPC relationships/configuration," think: roles/compute.xpnAdmin (Shared VPC Admin) at the appropriate project scope.
- Always match the scope (project vs. subnetwork) to the requirement (all subnets vs. one subnet).
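The subnet-scoped grant can be sketched in one command. The subnetwork name and group email are placeholders (the question gives only the CIDR, not the subnet's name).

```shell
# Sketch: grant Compute Network User on ONE subnetwork in the host project,
# so the group can attach VM NICs only there. SUBNET_NAME and the group
# address are hypothetical placeholders.
gcloud compute networks subnets add-iam-policy-binding SUBNET_NAME \
    --project=net-hub-prod \
    --region=us-central1 \
    --member="group:data-science-blue@example.com" \
    --role="roles/compute.networkUser"
```

Because the binding is on the subnetwork resource rather than the host project, no other subnet in the Shared VPC becomes usable by the group.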
Your online gaming platform runs UDP-based matchmaker services on Compute Engine instances in us-east1 and must record the original client IPs for per-player rate limiting and security audits. A cost policy mandates use of the Standard network tier, and you expect up to 25,000 concurrent client connections. Under these constraints, which Google Cloud load balancer should you deploy to preserve the client IP by default?
External SSL Proxy Load Balancer is a proxy-based, global external load balancer that terminates SSL/TLS and forwards traffic to backends. It is not designed for UDP (it’s for TCP with SSL offload) and, as a proxy, it does not preserve the original client source IP at the backend by default. It also requires the Premium network tier, violating the Standard tier cost constraint.
External TCP Proxy Load Balancer is also proxy-based and intended for TCP traffic (not UDP). Because it proxies connections, the backend typically sees the load balancer’s IP as the source, not the original client IP, which breaks per-player rate limiting and audit requirements. Additionally, external TCP proxy load balancing is a global external proxy service and requires Premium tier, conflicting with the Standard tier mandate.
Internal TCP/UDP Load Balancer supports TCP/UDP but is only for internal (private) traffic within a VPC or connected networks (VPN/Interconnect). It is not an internet-facing load balancer for public gaming clients. Even though it can preserve source IP for internal clients, it cannot meet the requirement of serving external players connecting over the public internet.
External TCP/UDP Network Load Balancer is a regional, passthrough Layer 4 load balancer that supports UDP and preserves the original client source IP by default because it does not terminate/proxy the connection. It can be used with the Standard network tier, meeting the cost policy. This makes it the best fit for UDP-based matchmaking with backend rate limiting and security audit logging based on true client IPs.
Core Concept: This question tests choosing the correct Google Cloud load balancer for UDP traffic while preserving the original client source IP, under a constraint to use the Standard network tier. It also implicitly tests the difference between proxy and passthrough load balancing and how that impacts source IP visibility for security controls such as rate limiting and audit logging.

Why the Answer is Correct: The External TCP/UDP Network Load Balancer (option D) is a passthrough (non-proxy) Layer 4 load balancer. Because it does not terminate or proxy the connection, backend VMs receive packets with the original client source IP by default. This directly satisfies the requirement to record original client IPs for per-player rate limiting and security audits without needing special headers (which do not exist for UDP) or additional configuration. It also supports the Standard network tier, aligning with the cost policy.

Key Features / Configurations / Best Practices:
- Passthrough L4 load balancing for TCP and UDP: preserves the client IP natively.
- Works with the Standard tier (unlike global external proxy load balancers, which require Premium).
- Suitable for regional deployments (you have instances in us-east1). For multi-region, you would typically deploy separate regional NLBs and use DNS/traffic steering.
- For security engineering: combine with VPC firewall rules; Cloud Armor is not applicable to the NLB (Cloud Armor is for HTTP(S) and some proxy LBs); consider logging/telemetry via VPC Flow Logs and application logs.
- Capacity/scaling: 25,000 concurrent clients is a common scale point; ensure backend instance sizing, UDP socket handling, and health checks are configured appropriately.

Common Misconceptions: Many assume "proxy" load balancers are better for security, but for UDP they break the requirement because the backend sees the load balancer's IP as the source. Another trap is choosing internal load balancing because it supports UDP; however, internal LBs are for private, intra-VPC traffic and cannot front internet clients.

Exam Tips:
- If you must preserve the client source IP at the backend, prefer passthrough L4 (Network Load Balancer) over proxy L4/L7.
- If the question mentions the Standard tier, eliminate global external proxy load balancers (they require Premium).
- UDP + internet-facing + preserve client IP strongly points to the External TCP/UDP Network Load Balancer.
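A rough sketch of the regional NLB setup follows. Resource names, the health-check port, and the UDP service port are assumptions; because UDP itself has no health-check protocol, the usual pattern is a TCP or HTTP health check against a sidecar/health port on the same instances.

```shell
# Sketch: regional passthrough External Network Load Balancer for UDP in the
# Standard network tier. All names and ports are hypothetical.
gcloud compute health-checks create tcp matchmaker-hc \
    --region=us-east1 \
    --port=8080

gcloud compute backend-services create matchmaker-be \
    --region=us-east1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=UDP \
    --health-checks=matchmaker-hc \
    --health-checks-region=us-east1

# The Standard-tier forwarding rule is what keeps the LB regional and within
# the cost policy; backends still see the original client source IP.
gcloud compute forwarding-rules create matchmaker-fr \
    --region=us-east1 \
    --load-balancing-scheme=EXTERNAL \
    --network-tier=STANDARD \
    --ip-protocol=UDP \
    --ports=9000 \
    --backend-service=matchmaker-be \
    --backend-service-region=us-east1
```

Adding the instance group to the backend service (omitted here) completes the data path; per-player rate limiting then keys off the true source IP seen by the matchmaker process.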
Your organization runs an Autopilot GKE cluster in us-central1 with three namespaces (ingest, transform, report) hosting about 80 Pods that must read from a Cloud Storage bucket (gs://media-raw) and write to another bucket (gs://media-processed) in a separate project; your security policy forbids distributing long-lived credentials to workloads and requires least privilege and low operational overhead. How should you grant the Pods secure access to the buckets while minimizing management effort?
Correct. Workload Identity for GKE provides keyless, short-lived credentials for Pods by mapping a Kubernetes service account to a Google service account. You then grant only the necessary Cloud Storage IAM roles on the specific buckets (even across projects). This meets the “no long-lived credentials” policy, supports least privilege via bucket-level roles, and minimizes operational overhead by eliminating key distribution and rotation.
Incorrect. Even with Secret Manager and a 30-day rotation schedule, this approach still uses service account keys (long-lived credentials) and requires operational processes to rotate, redeploy, and handle failures. It also increases blast radius if a key is exfiltrated. The question explicitly forbids distributing long-lived credentials to workloads, which service account keys violate.
Incorrect. Storing service account keys in a Kubernetes Secret is the highest operational and security risk among the options: keys are long-lived, can be copied, and may be accessible to other principals in the namespace depending on RBAC. Rotation is manual/complex, and it directly violates the policy against distributing long-lived credentials to workloads.
Incorrect. Secret Manager is better than Kubernetes Secrets for storing sensitive material, but it does not solve the core issue: service account keys are still long-lived credentials that must be distributed to workloads. This increases management overhead (key lifecycle, rotation, rollout coordination) and conflicts with least-privilege best practices compared to Workload Identity’s short-lived, automatically rotated tokens.
Core Concept: This question tests identity and access for GKE workloads without long-lived credentials, specifically Workload Identity for GKE (federating Kubernetes service accounts to Google Cloud service accounts) and least-privilege IAM on Cloud Storage, including cross-project access.

Why the Answer is Correct: Option A uses Workload Identity for GKE to let Pods obtain short-lived, automatically rotated credentials to call Google APIs as a Google Cloud service account (GSA). This satisfies the policy forbidding long-lived credentials (no service account keys) and minimizes operational overhead (no key distribution/rotation). You then grant only the required Cloud Storage permissions on the specific buckets (read on gs://media-raw and write on gs://media-processed), even though they are in a separate project, by binding IAM roles on the bucket resources to the GSA. This aligns with least privilege and the Google Cloud Architecture Framework's security principles (strong identity, short-lived credentials, centralized policy).

Key Features / Configuration Notes:
- Enable Workload Identity on the Autopilot cluster (Autopilot supports it; it is the recommended approach).
- Create/choose a Kubernetes service account (KSA) per namespace or per workload class (ingest/transform/report) to scope permissions.
- Bind the KSA to the GSA using roles/iam.workloadIdentityUser on the GSA (using the principal format for the KSA identity).
- Grant bucket-level IAM roles to the GSA: typically roles/storage.objectViewer on media-raw and roles/storage.objectCreator (or objectAdmin if overwrite/delete is needed) on media-processed. Prefer bucket-level IAM over project-wide roles.
- Cross-project: apply IAM on the buckets' project (or on the buckets) to the same GSA; no VPC/network changes are required.

Common Misconceptions: Storing service account keys in Secret Manager (with or without rotation) can look "secure," but it still relies on long-lived credentials and adds operational burden and risk (exfiltration, rotation failures). Kubernetes Secrets are even weaker for key material because they are often broadly readable within a namespace and are not a substitute for keyless workload identity.

Exam Tips: For GKE-to-Google-API access, default to Workload Identity (keyless) over service account keys. For Cloud Storage, think in terms of bucket-level IAM and minimal roles (objectViewer/objectCreator) rather than broad project roles. When the question emphasizes "no long-lived credentials" and "low operational overhead," Workload Identity is almost always the intended solution.
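The bucket-level, cross-project bindings plus the KSA annotation can be sketched as below. The GSA name and project, KSA name, and namespace are hypothetical; only the bucket names come from the question.

```shell
# Sketch: grant the Google service account that Pods impersonate via Workload
# Identity the minimal role on each bucket (bucket-level IAM works across
# projects; no project-wide grants needed).
GSA="media-pipeline@app-prj.iam.gserviceaccount.com"   # hypothetical GSA

gcloud storage buckets add-iam-policy-binding gs://media-raw \
    --member="serviceAccount:${GSA}" \
    --role="roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://media-processed \
    --member="serviceAccount:${GSA}" \
    --role="roles/storage.objectCreator"

# Map a namespace-scoped KSA to the GSA so Pods in "ingest" get short-lived
# tokens as that identity (repeat per namespace/workload class as needed).
kubectl annotate serviceaccount ingest-sa --namespace ingest \
    iam.gke.io/gcp-service-account="${GSA}"
```

A matching roles/iam.workloadIdentityUser binding on the GSA for each KSA identity (not shown) completes the federation; using one KSA per namespace keeps the blast radius of any single workload small.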
Study period: 2 months
I used Cloud Pass during my last week of study, and it helped reinforce everything from BeyondCorp principles to securing workloads. It’s straightforward, easy to use, and genuinely helps you understand security trade-offs.
Study period: 1 month
I worked through all the questions and then sat the exam, and I passed right away! A little over 40% of the exam questions were similar to these, and I answered the unfamiliar ones based on my understanding of the concepts.
Study period: 1 month
I would like to thank the Cloud Pass team for these great materials. They helped me pass the exam last week. Most of the questions in the exam were like the sample questions, and some were almost identical. Thank you again, Cloud Pass.
Study period: 1 month
Absolutely invaluable resource to prepare for the exam. Explanations and questions are spot on to give you a sense of what is expected from you on the actual test.
Study period: 1 month
I realized I was weak in log-based alerts and access boundary configurations. Solving questions here helped me quickly identify and fix those gaps. The question style wasn’t identical to the exam, but the concepts were spot-on.
