Google Professional Cloud Network Engineer


230+ practice questions (with AI-verified answers)

Free questions & answers: exam-style practice questions
AI-powered explanations: detailed explanations
Real exam-style questions: closest to the actual exam

AI-powered

Triple-AI-verified answers & explanations

Every Google Professional Cloud Network Engineer answer is cross-validated across three leading AI models to ensure the highest accuracy. Detailed per-option explanations and in-depth question analysis are provided.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Three-model consensus accuracy

Exam Domains

Designing and Planning a Google Cloud VPC Network: 24% of questions
Implementing a VPC Network: 19% of questions
Configuring Managed Network Services: 16% of questions
Configuring and Implementing Hybrid and Multi-Cloud Network Interconnectivity: 15% of questions
Managing, Monitoring, and Troubleshooting Network Operations: 12% of questions
Configuring, Implementing, and Managing a Cloud Network Security Solution: 14% of questions

Practice Questions

Question 1

Your company operates a single Google Cloud project with three VPC networks (prod, stage, dev) across two regions. You must ensure that API calls to Cloud Bigtable and Artifact Registry are allowed only when requests originate from your corporate egress NAT public IP ranges 203.0.113.0/24 and 198.51.100.0/24; on-prem systems access Google APIs over the public internet via those NATs. What should you do?

Incorrect. While Access Context Manager access levels can include IP ranges, you don’t “attach” an access context policy directly to Cloud Bigtable or Artifact Registry as standalone resources. Enforcement for these services is done through VPC Service Controls service perimeters (and their ingress/egress policies) that reference access levels. Also, allowing “your VPC” doesn’t address the stated on-prem public-internet path where the key attribute is the NAT public IP.

Correct. Create a VPC Service Controls perimeter for the project and restrict Bigtable and Artifact Registry. Then configure an Access Context Manager access level that allows only the corporate NAT public CIDRs 203.0.113.0/24 and 198.51.100.0/24. This ensures API calls are permitted only when Google sees those source IPs, matching the requirement that on-prem accesses Google APIs over the public internet via those NATs.

Incorrect. VPC firewall rules apply to traffic to/from resources with VPC interfaces (e.g., Compute Engine VMs, GKE nodes) and do not govern access to Google-managed service APIs like Bigtable and Artifact Registry. API access control for managed services is handled through IAM and higher-level controls like VPC Service Controls, not packet-level firewall rules in your VPC networks.

Incorrect. VPC Service Controls perimeters are not created “per VPC network”; they are defined around projects (or groups of projects) within an organization. Since all three VPCs are in a single project, you cannot meaningfully create separate perimeters for each VPC. The correct approach is a single perimeter for the project with an access level allowing the corporate NAT public IP ranges.

Question Analysis

Core concept: This question tests VPC Service Controls (VPC-SC) with Access Context Manager (ACM) access levels to restrict access to Google-managed services (Cloud Bigtable and Artifact Registry) based on request origin (source public IP ranges). This is a cloud network security control aligned with the Google Cloud Architecture Framework security pillar (reduce data exfiltration risk and enforce context-aware access).

Why the answer is correct: You need to allow API calls to Bigtable and Artifact Registry only when the request originates from your corporate egress NAT public IP ranges (203.0.113.0/24 and 198.51.100.0/24). On-prem systems reach Google APIs over the public internet via those NATs, so the only reliable "origin" signal you can enforce at the service perimeter is the source IP seen by Google. A VPC Service Controls service perimeter around the project restricts access to supported services unless the request satisfies perimeter rules. By adding an ACM access level that includes the two NAT public CIDRs, you can allow only those source IPs to access the protected services, while denying requests from other IPs (including users at home, other clouds, or compromised credentials).

Key features / configurations:
- Create a VPC-SC service perimeter that includes the project.
- Add restricted services: bigtable.googleapis.com and artifactregistry.googleapis.com.
- Create an ACM access level using "ipSubnetworks" with 203.0.113.0/24 and 198.51.100.0/24, and reference it in the perimeter's ingress policy (or perimeter access-level configuration, depending on mode).
- Understand that VPC-SC is not per-VPC; it is applied at the project/folder/org resource level.

Common misconceptions:
- Firewall rules (VPC firewall) do not control access to Google API/managed-service endpoints; they control traffic to/from VM NICs and certain network paths.
- "Allow your VPC" is not the same as allowing public-internet-originated requests. If on-prem uses the public internet, the relevant control is the source public IP, not VPC internal ranges.
- Creating per-VPC perimeters is not a supported or appropriate model; perimeters don't map to individual VPC networks.

Exam tips:
- When the requirement is "only allow access to Google-managed services from specific networks/IPs," think VPC Service Controls + Access Context Manager.
- If traffic is over the public internet, IP-based access levels are often the enforceable signal.
- Remember scope: VPC-SC perimeters protect projects (and above), not individual VPCs, and are about service access/exfiltration, not packet filtering.
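The steps above can be sketched with gcloud (the policy ID, project number, and level/perimeter names are placeholders, not values from the question):

```shell
# corp-nat.yaml -- basic access-level spec listing the corporate NAT egress CIDRs:
# - ipSubnetworks:
#     - 203.0.113.0/24
#     - 198.51.100.0/24

# Create the access level from the spec file (POLICY_ID is your access policy).
gcloud access-context-manager levels create corp_nat_ips \
  --policy=POLICY_ID \
  --title="Corporate NAT egress IPs" \
  --basic-level-spec=corp-nat.yaml

# Create a service perimeter around the project, restrict the two services,
# and allow access only for requests that satisfy the access level.
gcloud access-context-manager perimeters create corp_perimeter \
  --policy=POLICY_ID \
  --title="Corp perimeter" \
  --resources=projects/PROJECT_NUMBER \
  --restricted-services=bigtable.googleapis.com,artifactregistry.googleapis.com \
  --access-levels=corp_nat_ips
```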

Question 2

Your team manages a VPC named Studio that was created in auto mode for a global video-rendering platform; auto-mode VPCs reserve 10.128.0.0/9 for their subnets across regions. You must create a new VPC named Archive in the same project and connect it to Studio using VPC Network Peering so that internal RFC1918 traffic routes privately end-to-end without NAT; the two VPCs must have non-overlapping IP ranges now and as they scale. How should you configure the Archive VPC?

Incorrect. An auto-mode VPC uses the same predefined 10.128.0.0/9 space for its automatically created regional subnets. Since Studio already uses 10.128.0.0/9, creating Archive in auto mode will overlap, and VPC Network Peering requires non-overlapping subnet IP ranges. This option fails the core peering prerequisite and does not support safe scaling.

Correct. Archive should be created as a custom-mode VPC so you can create subnets from a different RFC1918 address plan than Studio’s auto-mode 10.128.0.0/9 space. In Google Cloud, you do not assign a CIDR to the VPC itself; instead, you create subnets with primary and optional secondary ranges, and those ranges must not overlap with the peered network. Using subnets carved from a plan such as 10.0.0.0/9 provides room for future growth while satisfying the peering requirement for private, non-NAT connectivity.

Incorrect. Assigning Archive 10.128.0.0/9 directly overlaps with Studio’s auto-mode reserved/used range. Overlapping IP ranges prevent VPC Network Peering from being established (or will break routing expectations). Even if you attempted to avoid overlap by careful subnetting, Studio’s auto-mode growth across regions makes future collisions likely.

Incorrect. Renaming the default VPC does not solve the IP overlap problem and is operationally risky. The default VPC is typically auto mode with 10.128.0.0/9 subnets, which would still overlap with Studio. Additionally, relying on the default VPC is discouraged for production due to uncontrolled subnet creation and weaker governance.

Question Analysis

Core Concept: This question tests VPC Network Peering prerequisites and IP planning in Google Cloud. VPC Network Peering provides private connectivity over Google's backbone without NAT, but it requires that all subnet primary and secondary IP ranges in the peered VPCs do not overlap. Auto-mode VPCs automatically create regional subnets from the 10.128.0.0/9 space, so any peered network must use a different RFC1918 address plan.

Why the Answer is Correct: Studio is an auto-mode VPC, so its subnets come from 10.128.0.0/9. To connect Archive privately now and in the future, Archive should be created as a custom-mode VPC so you can define subnets from a different, non-overlapping RFC1918 plan, such as ranges carved from 10.0.0.0/9. After creating the required subnets, you can establish VPC Network Peering and route internal traffic privately without NAT.

Key Features / Best Practices:
- VPC Peering is non-transitive and requires non-overlapping subnet primary and secondary ranges.
- Auto-mode VPCs automatically create one subnet per region from 10.128.0.0/9, which often causes collisions.
- Custom-mode VPCs let you explicitly create only the subnets you need and choose scalable, non-overlapping CIDRs.
- Good IP planning should reserve enough address space for future regional growth and secondary ranges.

Common Misconceptions: A common mistake is thinking a custom-mode VPC itself is assigned a single CIDR block; in reality, the VPC contains subnets, and those subnet ranges must be planned to avoid overlap. Another misconception is that two auto-mode VPCs can be peered because they are separate networks; they still conflict because both use 10.128.0.0/9. Renaming or reusing the default network also does not avoid address overlap.

Exam Tips:
- Remember that auto-mode VPCs use 10.128.0.0/9 for automatically created regional subnets.
- For peering questions, always verify that all current and future subnet ranges can remain non-overlapping.
- Prefer custom-mode VPCs in production because they provide better control over subnet creation, governance, and long-term scaling.
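A minimal gcloud sketch of this design (network, subnet, and region names are illustrative, not taken from the question):

```shell
# Create Archive as a custom-mode VPC (no automatic 10.128.0.0/9 subnets).
gcloud compute networks create archive --subnet-mode=custom

# Create subnets from a plan that cannot collide with Studio's 10.128.0.0/9.
gcloud compute networks subnets create archive-us \
  --network=archive --region=us-central1 --range=10.0.0.0/20
gcloud compute networks subnets create archive-eu \
  --network=archive --region=europe-west1 --range=10.0.16.0/20

# Peering must be created from both sides before it becomes ACTIVE.
gcloud compute networks peering create studio-to-archive \
  --network=studio --peer-network=archive
gcloud compute networks peering create archive-to-studio \
  --network=archive --peer-network=studio
```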

Question 3

You are a network engineer at a global streaming company migrating core APIs to Google Cloud. These are the connectivity requirements:

  • On-premises connectivity with at least 20 Gbps from a single metro data center
  • Lowest-latency private access to Google Cloud (target: <5 ms one-way to nearest POP)
  • A centralized Network Engineering team must administer all WAN links and routing

New product groups are creating separate Google Cloud projects and require private access to on-prem services from their workloads. You need the most cost-efficient design to connect the data center to Google Cloud while meeting the above requirements. What should you do?

Correct. Putting the Interconnect (Dedicated or Partner) and all VLAN attachments in the Shared VPC host project centralizes control of WAN links, Cloud Router/BGP, and routing policy while allowing many service projects to use the shared VPC subnets and on-prem routes. This is typically the most cost-efficient and operationally consistent approach for a centralized Network Engineering team supporting many product projects.

Incorrect. Creating VLAN attachments in service projects decentralizes hybrid connectivity administration. Even if attachments connect to the Shared VPC network, ownership and lifecycle management are spread across product teams, conflicting with the requirement that a centralized Network Engineering team administer all WAN links and routing. It also increases the risk of inconsistent BGP/route advertisement settings and complicates troubleshooting and governance.

Incorrect. Standalone projects with VLAN attachments per project require each project to manage its own hybrid connectivity constructs and routing, which violates the centralized administration requirement. It is also less cost-efficient at scale because each project may need its own Cloud Router sessions and attachment configuration, increasing operational overhead and making consistent routing policy enforcement harder.

Incorrect. Deploying both Interconnects and VLAN attachments in every project is the least cost-efficient and most operationally complex design. It duplicates expensive connectivity resources, increases port and attachment counts, and makes centralized routing control nearly impossible. Project-level isolation should be achieved through Shared VPC governance, IAM, and network segmentation rather than replicating physical Interconnect infrastructure.

Question Analysis

Core concept: This question tests hybrid connectivity design using Cloud Interconnect (Dedicated or Partner), VLAN attachments, Cloud Router/BGP, and Shared VPC for multi-project private connectivity with centralized administration.

Why the answer is correct: A single metro data center needing >=20 Gbps and <5 ms one-way latency to the nearest Google POP strongly points to Cloud Interconnect rather than VPN. Dedicated Interconnect provides 10- or 100-Gbps links and is designed for lowest-latency private connectivity at a colocation facility; Partner Interconnect can also meet bandwidth needs depending on provider offerings, but Dedicated is the typical choice for strict latency/throughput targets. To keep WAN links and routing centrally administered while enabling many product teams to use separate projects, place the Interconnect and all VLAN attachments in a Shared VPC host project. This centralizes ownership of the physical connectivity, VLAN attachments, Cloud Router sessions, and route-exchange policies, while service projects consume the shared subnets and automatically gain access to on-prem routes (subject to IAM and firewall policy).

Key features / best practices:
- Use Shared VPC to separate network administration (host project) from application ownership (service projects), aligning with the Google Cloud Architecture Framework principle of centralized governance with delegated usage.
- Terminate Interconnect VLAN attachments in the host-project VPC and use Cloud Router with BGP to exchange routes. Apply consistent routing policy (advertised prefixes, BGP communities, route priority) centrally.
- For >=20 Gbps, deploy redundant Interconnect connections (and redundant VLAN attachments/Cloud Routers) to meet HA best practices and avoid single points of failure. Consider quotas/limits for VLAN attachments and Cloud Router sessions per region.
- Cost efficiency: one set of Interconnect resources shared across many projects is typically cheaper and operationally simpler than duplicating attachments per project.

Common misconceptions: It may seem appealing to create VLAN attachments in each service project for autonomy, but that fragments routing control and increases operational overhead and cost. Another misconception is that "isolation" requires separate Interconnects per project; isolation is usually achieved with IAM, firewall policies, and segmentation (subnets, VPC design), not by duplicating physical connectivity.

Exam tips: When you see multi-project private hybrid access plus centralized network control, think: the Shared VPC host project owns the hybrid connectivity (Interconnect/VLAN attachments/Cloud Router). Use Dedicated Interconnect for strict latency and high throughput from a single metro, and design for redundancy (at least two connections and dual attachments) even if not explicitly asked.
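Under this design, the host-project layout could be sketched as follows (project IDs, the ASN, and interconnect names are placeholders):

```shell
# Make the networking project a Shared VPC host and attach a service project.
gcloud compute shared-vpc enable net-host-project
gcloud compute shared-vpc associated-projects add product-team-a \
  --host-project=net-host-project

# A Cloud Router in the host project terminates BGP for the hybrid links.
gcloud compute routers create wan-router \
  --project=net-host-project --network=prod-hub \
  --region=us-east4 --asn=65001

# VLAN attachments for the Dedicated Interconnects also live in the host project;
# a redundant attachment on a second interconnect would follow the same pattern.
gcloud compute interconnects attachments dedicated create dc1-vlan-a \
  --project=net-host-project --region=us-east4 \
  --router=wan-router --interconnect=dc1-interconnect-a
```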

Question 4

Your company operates a globally available ticketing API for live events fronted by a global external HTTP(S) load balancer. During flash sales, traffic spikes to 250,000 requests per minute from more than 40 countries, and your security team detects application-layer patterns such as SQL injection, cross-site scripting, and anomalous headers. You must protect the service against these application-level attacks at the edge, without changing application code, and attach the control to the existing load balancer backend. What should you do?

Incorrect. Enabling Cloud CDN on the backend service helps cache content at Google's edge and can reduce latency and backend load during spikes. However, Cloud CDN is not designed to detect or block application-layer attacks like SQL injection or XSS. While it may indirectly reduce origin exposure by serving cached responses, it does not provide the WAF signatures or header/payload inspection required by the question.

Incorrect. Firewall deny rules (VPC firewall rules) operate at Layer 3/4 (IP/port/protocol) for traffic to VM instances and some network endpoints. They cannot inspect HTTP requests for SQLi/XSS patterns or anomalous headers, and they are not applied "to" a global external HTTP(S) load balancer in the way Cloud Armor policies are. This option confuses network firewalling with L7 application protection.

Correct. Cloud Armor security policies attach to the backend service of a global external HTTP(S) Load Balancer and enforce controls at Google's edge. Cloud Armor WAF provides preconfigured rules to detect and block SQL injection, XSS, and other common web exploits, plus custom rules for headers, geolocation, and rate limiting. It meets all constraints: edge protection, no app code changes, and integration with the existing load balancer backend.

Incorrect. VPC Service Controls creates service perimeters to reduce data exfiltration risk and control access to Google-managed services (for example, Cloud Storage, BigQuery) via Google APIs. It is not a web application firewall and does not protect a public HTTP(S) endpoint from SQLi/XSS. Also, a global external HTTP(S) load balancer is not protected in the intended way by VPC Service Controls for inbound internet traffic.

Question Analysis

Core Concept: This question tests edge protection for application-layer (L7) attacks on a global external HTTP(S) Load Balancer using Google Cloud Armor Web Application Firewall (WAF). It also tests where controls attach in the load-balancing stack (the backend service) and the requirement to avoid application code changes.

Why the Answer is Correct: Google Cloud Armor is the native security control for Google Cloud's global external HTTP(S) Load Balancer to mitigate L7 threats such as SQL injection (SQLi), cross-site scripting (XSS), and protocol/header anomalies. You create a Cloud Armor security policy (with preconfigured WAF rules and/or custom match rules) and attach it to the load balancer's backend service. This enforces policy at Google's edge, before traffic reaches your backends, meeting the requirement to protect "at the edge" and "attach the control to the existing load balancer backend" without modifying application code.

Key Features / Configurations / Best Practices:
- Use Cloud Armor WAF preconfigured rules (OWASP Core Rule Set-derived signatures) for SQLi/XSS and other common exploits.
- Add custom rules for anomalous headers, geo-based controls, allow/deny lists, and rate limiting (important for flash-sale spikes).
- Deploy in preview mode first to tune false positives, then enforce.
- Integrates with Cloud Logging/Monitoring for visibility and incident response.
- Aligns with Google Cloud Architecture Framework security principles: defense in depth, centralized policy, and edge enforcement.

Common Misconceptions:
- Cloud CDN improves performance and can reduce origin load, but it is not a WAF and does not detect SQLi/XSS patterns.
- VPC firewall rules are L3/L4 controls and cannot inspect HTTP payloads/headers for application attacks.
- VPC Service Controls is for data-exfiltration and API access boundaries around Google-managed services, not for protecting inbound public HTTP(S) endpoints.

Exam Tips: When you see "SQL injection/XSS/anomalous headers" + "global external HTTP(S) load balancer" + "protect at the edge" + "no app changes," the expected answer is Cloud Armor WAF attached to the backend service. Remember: firewall rules = network layer; Cloud Armor = L7 edge policy for HTTP(S) LB; VPC Service Controls = service perimeter for Google APIs/data, not an inbound WAF.
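A sketch of the Cloud Armor setup described above (the policy name, rule priorities, and backend-service name are placeholders):

```shell
# Security policy with preconfigured WAF rules for SQLi and XSS.
gcloud compute security-policies create ticketing-waf \
  --description="Edge WAF for the ticketing API"

gcloud compute security-policies rules create 1000 \
  --security-policy=ticketing-waf \
  --expression="evaluatePreconfiguredWaf('sqli-v33-stable')" \
  --action=deny-403

# Start the XSS rule in preview mode to tune false positives before enforcing.
gcloud compute security-policies rules create 1001 \
  --security-policy=ticketing-waf \
  --expression="evaluatePreconfiguredWaf('xss-v33-stable')" \
  --action=deny-403 \
  --preview

# Attach the policy to the existing global backend service.
gcloud compute backend-services update ticketing-backend \
  --security-policy=ticketing-waf --global
```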

Question 5

Your hardware startup distributes a critical smart door lock firmware globally via Cloud CDN in front of an external HTTP(S) load balancer with a Cloud Storage backend bucket. During a staggered rollout, you discover that the wrong firmware build (2.4.1-debug) was uploaded and has been cached worldwide; the object is served with Cache-Control: max-age=86400, and tens of thousands of devices have already pulled it. Your communications team has instructed customers to re-download the corrected firmware using the same URL (https://updates.example.com/locks/fw.bin). You must ensure that all subsequent downloads fetch the corrected firmware immediately from the same URL across all edge locations. What should you do?

Incorrect. A Cloud Armor security policy can block or rate-limit requests, but it does not remove or refresh cached objects. Even if you blocked Cloud CDN traffic, you would not ensure that subsequent downloads fetch the corrected firmware immediately from edge locations; you would only disrupt access. Log review and client filtering are irrelevant to fixing stale cached content.

Incorrect. Publishing a new URL path is a strong general best practice for immutable artifacts (versioned filenames), but it violates the requirement to keep the same URL (https://updates.example.com/locks/fw.bin). Also, allowing the cached object to expire naturally could take up to 86400 seconds due to max-age, which fails the "immediately" requirement.

Correct. Replace the object at the origin (Cloud Storage backend bucket) and invalidate the cached path in Cloud CDN so edge locations purge the stale firmware and re-fetch the corrected version on the next request. This directly addresses the long TTL behavior and ensures global consistency quickly while keeping the same URL.

Incorrect. Disabling and re-enabling Cloud CDN is an operationally risky workaround and does not reliably guarantee an immediate cache purge across all edge locations. It can also cause traffic spikes to the origin and potential downtime or performance degradation. The supported, precise method is cache invalidation for the specific URL/path after updating the origin.

Question Analysis

Core Concept: This question tests Cloud CDN cache behavior and cache invalidation when using an external HTTP(S) Load Balancer with a Cloud Storage backend bucket. With Cloud CDN, objects are cached at Google edge locations according to HTTP caching headers (for example, Cache-Control: max-age).

Why the Answer is Correct: Because the firmware is served from a single, fixed URL and is currently cached globally for up to 86400 seconds, you must both (1) replace the origin object and (2) force Cloud CDN to stop serving the stale cached version. Uploading the corrected fw.bin to the Cloud Storage bucket fixes the origin. Then issuing a Cloud CDN cache invalidation for that URL/path purges the cached object across edge locations so subsequent requests re-fetch from the origin immediately (and then re-cache the corrected content). This meets the requirement: same URL, immediate effect worldwide.

Key Features / Best Practices:
- Cloud CDN respects Cache-Control and will otherwise serve cached content until TTL expiry.
- Cache invalidation is the supported mechanism to purge cached content early (path-based invalidation such as /locks/fw.bin, or broader prefixes if needed).
- For critical artifacts like firmware, consider versioned object names (fw-2.4.2.bin) plus a small "latest" pointer file, or use shorter TTLs during rollouts to reduce blast radius. This aligns with the Google Cloud Architecture Framework reliability principle (design for safe change/rollout).

Common Misconceptions:
- Disabling Cloud CDN does not instantly purge all edge caches; it also introduces operational risk and may not guarantee immediate global consistency.
- Publishing a new URL is a common web pattern, but the scenario explicitly requires the same URL.
- Security policies and log review do not address cache correctness.

Exam Tips: When you see "wrong content cached globally" + "must use same URL" + "long max-age," the canonical fix is: update the origin object and invalidate the Cloud CDN cache for the affected path(s). Remember that invalidation is a managed network-service operation tied to the load balancer/CDN configuration, not a VPC routing change.
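The two-step fix can be sketched as follows (the bucket and URL-map names are placeholders):

```shell
# 1) Replace the object at the origin with the corrected build.
gcloud storage cp fw.bin gs://firmware-updates-bucket/locks/fw.bin

# 2) Invalidate the cached path so every edge location re-fetches from the origin.
gcloud compute url-maps invalidate-cdn-cache updates-url-map \
  --path="/locks/fw.bin"
```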


Question 6

In your Google Cloud organization, there are two folders: Analytics and Compliance. You need a scalable, consistent, and low-cost way to enforce the following across all VMs in every project under those folders:

  • For Analytics projects, TCP port 9000 must always be open for ingress from any source (0.0.0.0/0).
  • For Compliance projects, all ingress traffic to TCP port 9000 must be denied.

What should you do?

Correct. Hierarchical firewall policies attached to folders provide centralized, inherited enforcement across all descendant projects and VPC networks. This meets the “scalable, consistent, low-cost” requirement by avoiding per-project configuration and preventing drift. The Compliance deny can act as a guardrail that blocks exposure even if project owners attempt to add permissive VPC firewall rules. Analytics can enforce a consistent allow for TCP/9000.

Incorrect. Shared VPC can centralize networking, but it requires migrating/standardizing projects to use those host projects and doesn’t automatically cover all existing VPCs in all projects under the folders. It also adds operational complexity (host/service project model, IAM, attachment management). The question asks for enforcement across all VMs in every project under folders, which hierarchical firewall policies address more directly.

Incorrect. Creating VPC firewall rules in every VPC of every project is not scalable and is prone to configuration drift and human error. It also increases operational cost over time as new projects/VPCs are created. This approach does not provide strong guardrails: a project owner could accidentally remove or change rules, especially in Compliance where “must be denied” implies a centrally enforced control.

Incorrect. Anthos Config Connector is primarily a Kubernetes-based configuration management tool to create/manage Google Cloud resources declaratively. It is not the most appropriate or lowest-cost mechanism for org-wide VM firewall enforcement, and it introduces additional platform dependencies (GKE/Anthos, controllers, lifecycle management). Even if used, the underlying correct control would still be hierarchical firewall policies, making this option indirect and unnecessarily complex.

Question Analysis

Core Concept: This question tests organization-scale network security enforcement using hierarchical firewall policies (part of Firewall Policies in Google Cloud). Hierarchical firewall policies let you define allow/deny rules at the organization or folder level and have them consistently apply to all VPC networks and VM NICs in descendant projects, aligning with the Google Cloud Architecture Framework principle of centralized governance and policy-as-code.

Why the Answer is Correct: You have two folders with opposite requirements for the same port (TCP 9000). The most scalable, consistent, and low-cost approach is to attach a hierarchical firewall policy to each folder: one policy for Analytics that ensures TCP/9000 ingress is allowed from 0.0.0.0/0, and another for Compliance that denies TCP/9000 ingress. Because the policies are attached at the folder level, any new project or VPC created under those folders inherits the enforcement automatically, eliminating per-project rule management and reducing operational overhead.

Key Features / How It Works: Hierarchical firewall policies evaluate before VPC firewall rules and can be used to enforce baseline controls. You can use priorities and rule actions (allow/deny) and target them broadly (all instances) or via target service accounts/tags if needed. For Compliance, a deny rule at the folder level prevents accidental exposure even if a project owner adds permissive VPC firewall rules. For Analytics, an allow rule ensures the port remains open consistently; in practice you should also ensure no higher-level (org) policy denies it and that rule priority/order is correct.

Common Misconceptions: Many assume Shared VPC is required for centralized firewalling, but Shared VPC centralizes networks, not governance across arbitrary existing projects/VPCs. Others default to per-project VPC firewall rules, which is not scalable and is error-prone. Anthos Config Connector is a deployment mechanism, not the underlying enforcement primitive; it doesn't inherently provide folder-level guardrails unless it is used to manage hierarchical policies.

Exam Tips: When requirements mention "across all projects under folders/org," think hierarchical firewall policies (and/or org policy constraints). When requirements include "must always" and "must be denied," prioritize controls that prevent project-level override. Also remember cost/ops: centralized policies reduce administrative toil and configuration drift.
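A sketch of one folder's policy (the folder and policy IDs are placeholders; the Compliance folder would get an equivalent policy whose rule uses --action=deny):

```shell
# Create a firewall policy attached under the Analytics folder.
gcloud compute firewall-policies create \
  --folder=ANALYTICS_FOLDER_ID --short-name=analytics-ingress

# Allow TCP/9000 ingress from anywhere for all descendant projects.
gcloud compute firewall-policies rules create 1000 \
  --firewall-policy=POLICY_ID \
  --action=allow --direction=INGRESS \
  --layer4-configs=tcp:9000 --src-ip-ranges=0.0.0.0/0

# Associate the policy with the folder so every project/VPC inherits it.
gcloud compute firewall-policies associations create \
  --firewall-policy=POLICY_ID --folder=ANALYTICS_FOLDER_ID
```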

Question 7

Your company operates a low-latency RTMP streaming service behind a regional external passthrough Network Load Balancer with backends in two managed instance groups located in us-central1 and europe-west1. For licensing reasons, only client networks 203.0.113.0/24 and 198.51.100.64/26 must be able to reach TCP port 1935 of the service from the internet, and all other client IPs must be blocked, while Google Cloud health checks must continue to work (130.211.0.0/22 and 35.191.0.0/16). What should you do?

Incorrect. Access Context Manager policies and VPC Service Controls perimeters are designed to restrict access to Google Cloud APIs and managed services to reduce data exfiltration risk. They do not enforce source IP allowlists for raw L4 traffic from the internet to an external passthrough Network Load Balancer VIP. Health checks and client TCP flows to port 1935 are not controlled by VPC Service Controls.

Incorrect. You cannot meaningfully “add the external load balancer as a protected service” in VPC Service Controls to restrict who can open TCP connections to the load balancer’s public IP. VPC Service Controls applies to supported Google services (API endpoints) and does not function as an internet-facing L4 firewall for passthrough NLB traffic.

Correct. For an external passthrough Network Load Balancer, traffic is delivered to backend VMs, so ingress VPC firewall rules on those VMs determine which sources can reach tcp:1935. Targeting a network tag applied via the MIG instance template ensures consistent enforcement across instances and regions. Including 130.211.0.0/22 and 35.191.0.0/16 preserves Google health checks while all other sources are blocked by implied deny.

Incorrect. VPC firewall rules cannot target instances by labels. Firewall rule targets can be specified using network tags or service accounts (and apply within the VPC). While labeling is useful for organization and billing, it is not a valid selector for firewall enforcement, so this option is not implementable as stated.

Question Analysis

Core Concept: This question tests how to restrict internet client source IPs for an external passthrough Network Load Balancer (NLB) while preserving Google Cloud health checks. For passthrough NLBs, traffic is delivered directly to backend VM NICs, so VPC firewall rules on the backends are the primary control for allowing/denying client sources.

Why the Answer is Correct: Option C correctly uses an ingress VPC firewall rule targeting the backend instances (via network tags) to allow TCP/1935 only from the two permitted client CIDRs plus the Google health check ranges (130.211.0.0/22 and 35.191.0.0/16). Because VPC firewall rules are stateful and evaluated on the VM's ingress, this effectively blocks all other internet sources by relying on the implied deny (or lower-priority deny rules) for traffic not explicitly allowed. This approach works regardless of the backends being in different regions (us-central1 and europe-west1) because firewall rules apply at the VPC network level and can target instances in any region.

Key Features / Configurations:
- Use a network tag (e.g., rtmp-backend) on the instance template so all MIG instances inherit it automatically.
- Create an ingress allow rule with:
  - Target: the tag
  - Protocol/port: tcp:1935
  - Source ranges: 203.0.113.0/24, 198.51.100.64/26, 130.211.0.0/22, 35.191.0.0/16
  - Priority: higher precedence than any broad allow rules
- Ensure no other firewall rule broadly allows tcp:1935 from 0.0.0.0/0 to those instances.

Common Misconceptions: VPC Service Controls and Access Context Manager are frequently (and incorrectly) chosen for "IP allowlisting." VPC Service Controls protects Google-managed APIs/services from data exfiltration and controls access to Google APIs, not raw TCP/UDP connectivity to external load balancer VIPs. Also, labels cannot be used as firewall rule targets; only network tags and service accounts are valid targets.

Exam Tips:
- For external passthrough NLB, control client access with VPC firewall rules on backends.
- Always include Google health check IP ranges when restricting sources.
- Remember: firewall targets are tags or service accounts (not labels), and VPC Service Controls is for API/service perimeters, not L4 internet traffic filtering.
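The allowlist logic described above can be sketched locally with Python's standard `ipaddress` module. This is only an illustration of how the firewall rule's source ranges partition client IPs; the function name `is_allowed` is hypothetical, and the real enforcement happens in VPC firewall rules, not application code.

```python
import ipaddress

# The rule's source ranges: the two permitted client CIDRs plus
# Google's health-check ranges (130.211.0.0/22 and 35.191.0.0/16).
ALLOWED_SOURCES = [
    ipaddress.ip_network(c)
    for c in ("203.0.113.0/24", "198.51.100.64/26",
              "130.211.0.0/22", "35.191.0.0/16")
]

def is_allowed(src_ip: str) -> bool:
    """Return True if src_ip falls inside any allowed source range;
    everything else is dropped by the implied deny."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_allowed("203.0.113.25"))    # permitted client range
print(is_allowed("130.211.1.9"))     # Google health checker
print(is_allowed("8.8.8.8"))         # any other internet source
```

Checking membership this way also makes it easy to see why omitting the health-check ranges would cause backends to be marked unhealthy: those probes would fall through to the implied deny.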

Question 8

A global retail company operates a single Shared VPC (prod-hub) in Google Cloud and connects two on-premises data centers via dual 10-Gbps Dedicated Interconnect attachments terminated in us-east4 for private reachability. Compliance requires that all on-premises access to Cloud Storage must traverse the Interconnect links, but requests to all other Google APIs and services (for example, Pub/Sub and BigQuery) must continue to egress over the public internet through the existing NAT; what should you do?

This leaves Cloud Storage on its default public endpoint along with every other Google API and service. As a result, on-premises Cloud Storage access would continue to use the public internet path through NAT rather than traversing Dedicated Interconnect. That directly violates the compliance requirement. Although it preserves the current behavior for other APIs, it fails the most important condition in the question.

This option best matches the requirement to make only Cloud Storage use the private hybrid path while leaving all other Google APIs and services on their existing public internet path. Private Service Connect for Google APIs lets you expose a private endpoint in the VPC that on-premises clients can reach over Dedicated Interconnect, satisfying the compliance requirement for Cloud Storage. By continuing to use the default public domains for all other Google APIs and services, those requests still resolve to public addresses and egress through the existing NAT as stated. This is the only option that cleanly separates Cloud Storage from the rest of the Google API traffic.

This option would direct Cloud Storage to restricted.googleapis.com, but it would also direct all other Google APIs and services to private.googleapis.com. Both restricted.googleapis.com and private.googleapis.com are private Google API VIPs intended to be reached over private connectivity, so this would move other API traffic off the public NAT path. That contradicts the explicit requirement that all non-Cloud Storage Google API traffic must continue to egress over the public internet. Therefore, C over-applies private access and does not satisfy the selective-routing requirement.

This option sends Cloud Storage to private.googleapis.com and all other Google APIs and services to restricted.googleapis.com. In either case, both categories are still being directed to private Google API VIPs rather than leaving non-Storage APIs on public endpoints. That means other Google API traffic would no longer continue over the public internet via NAT, which violates the requirement. It also inverts the intended control posture by applying the more restrictive endpoint to the broader set of services.

Question Analysis

Core concept: This question is about selectively steering only Cloud Storage traffic from on-premises over Dedicated Interconnect while allowing all other Google APIs and services to continue using their normal public endpoints over the internet. The key distinction is between private access methods for Google APIs and leaving services on default public DNS names.

Why correct: Cloud Storage can be accessed privately by using Private Service Connect for Google APIs so that on-premises clients reach it through the Interconnect-connected VPC, while all other APIs remain on default public domains and therefore continue to egress through the existing NAT path.

Key features:
- Private Service Connect for Google APIs provides privately reachable endpoints inside the VPC.
- On-premises networks connected through Interconnect can reach those endpoints.
- DNS can be configured so only Cloud Storage resolves privately while other APIs keep public resolution.

Common misconceptions: private.googleapis.com and restricted.googleapis.com are broad VIP-based mechanisms for many Google APIs, not a way to isolate only one API while keeping others public if the option explicitly maps both categories to those VIPs.

Exam tips: When the requirement is to privatize access to only one Google API and keep others public, look for a service-specific private endpoint approach plus selective DNS, rather than broad private API VIPs that affect multiple services.
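The selective-DNS idea can be modeled as a tiny override table: only the Cloud Storage hostname resolves to a private Private Service Connect endpoint reachable over the Interconnect, while every other Google API name falls through to normal public resolution. The endpoint address 10.70.255.10 below is a made-up example, not a real value, and the `resolve` helper is purely illustrative.

```python
# Hypothetical PSC endpoint address inside the Interconnect-connected VPC.
PSC_OVERRIDES = {
    "storage.googleapis.com": "10.70.255.10",
}

def resolve(hostname: str) -> str:
    """Return the private PSC address for overridden names; every other
    Google API keeps its public DNS resolution and egresses via NAT."""
    return PSC_OVERRIDES.get(hostname, "public-dns")

print(resolve("storage.googleapis.com"))  # -> 10.70.255.10 (over Interconnect)
print(resolve("pubsub.googleapis.com"))   # -> public-dns (via existing NAT)
print(resolve("bigquery.googleapis.com")) # -> public-dns (via existing NAT)
```

In practice this split is implemented with Cloud DNS (or on-premises DNS) zones that override only the Cloud Storage name, which is exactly what distinguishes the correct option from the broad private.googleapis.com / restricted.googleapis.com VIP approaches.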

Question 9

You work for a global retail conglomerate migrating to Google Cloud. Cloud requirements:
• Two on-premises data centers located in Japan (Tokyo) and Germany (Frankfurt) with Dedicated Interconnects connected to Cloud regions asia-northeast1 (primary HQ) and europe-west3 (backup), each link provisioned at 10 Gbps.
• Multiple regional branch offices across LATAM and Middle East/Africa.
• Regional data processing must occur in europe-west3 and asia-southeast1.
• A centralized Network Operations team manages Shared VPC across projects.
Your security and compliance team mandates a virtual inline security appliance to perform L7 URL filtering for north–south traffic, and you plan to place the appliance in asia-northeast1. What should you do?

Correct. Deploying the 2-NIC inline appliance in the Shared VPC Host Project matches the requirement that a centralized Network Operations team manages Shared VPC. Two VPCs plus a dual-NIC VM enables a clear trusted/untrusted separation and routing-based steering through the appliance. Enabling IP forwarding and using custom routes/firewall rules are required to make the VM act as an inline L7 inspection point.

This option is not the best choice because it places the inspection appliance in a Service Project even though the question emphasizes centralized Shared VPC management by the Network Operations team. While service-project VMs can attach to Shared VPC subnets, hosting a shared inline security function there creates split ownership and is less aligned with centralized governance. It also omits enabling IP forwarding, which is required for a VM to operate as a transit appliance. Because of both the governance mismatch and the missing forwarding requirement, this is inferior to A.

Incorrect. Using a single VPC with two NICs on different subnets can be used for some appliance patterns, but it does not provide the same strong segmentation and forced transit boundary as two separate VPCs. In a single VPC, it is easier to inadvertently allow bypass paths via routing, and the design is less aligned with a clear north–south “outside/inside” separation for URL filtering appliances.

Incorrect. This violates the stated requirement that Shared VPC is centrally managed and also misstates Shared VPC structure: a “Shared VPC Service Project” does not own the VPC network; the Host Project does. Creating the VPC in a service project would not meet the Shared VPC governance model described, and it would not provide the centralized control expected for an inline security enforcement point.

Question Analysis

Core Concept: This question tests how to implement an inline (bump-in-the-wire) virtual security appliance for north–south traffic inspection in Google Cloud using Shared VPC. The key concepts are Shared VPC ownership/administration boundaries, multi-NIC VM capabilities, and traffic steering using custom routes (and firewall rules) with IP forwarding.

Why the Answer is Correct: Option A is the most appropriate because it places the inline appliance VM in the Shared VPC Host Project, where the centralized Network Operations team manages the Shared VPC. In Shared VPC, the VPC network is owned by the Host Project; service projects attach resources to subnets but do not own the network. For an inline appliance that must reliably steer and inspect traffic across multiple projects and potentially multiple network segments, hosting the appliance in the Host Project aligns with centralized control of routing, firewall policy, and lifecycle management. Using two VPCs with a 2-NIC VM is a common pattern to separate "untrusted/external" and "trusted/internal" segments and to force traffic through the appliance by routing between the two networks.

Key Features / Configurations:
- Multi-NIC VM: NIC0 in VPC #1 and NIC1 in VPC #2 to create a controlled transit point.
- Enable IP forwarding on the VM so it can forward packets not destined to itself.
- Custom routes: steer relevant north–south prefixes (for example, the default route or specific on-prem/internet-bound prefixes) toward the appliance as next hop.
- Firewall rules: allow required flows to/from the appliance interfaces and restrict bypass paths.
- Operational best practice: keep routing and security controls in the Host Project for consistent governance (aligns with Google Cloud Architecture Framework principles for centralized governance and security).

Common Misconceptions: Many assume the appliance can be placed in a service project (Option B) because service projects can create VMs in Shared VPC subnets. However, for a cross-project, centrally governed inline inspection point, placing it in the Host Project reduces administrative friction and avoids split ownership of critical routing/security constructs.

Exam Tips:
- Remember: Shared VPC networks live in the Host Project; service projects consume subnets.
- Inline appliances typically require: 2 NICs, IP forwarding, custom routes, and tight firewalling.
- For "centralized NetOps manages Shared VPC," prefer deploying shared network functions (NAT, inspection, routing hubs) in the Host Project unless a question explicitly requires service-project ownership.
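The role of IP forwarding in this pattern can be illustrated with a toy model: a VM normally drops packets not addressed to itself, so the appliance can only act as a transit hop when forwarding is enabled. The class and method names below are invented for illustration; real packet forwarding is a property of the VM's configuration, not application code.

```python
class InlineAppliance:
    """Toy model of a 2-NIC bump-in-the-wire appliance."""

    def __init__(self, ip_forwarding: bool):
        self.ip_forwarding = ip_forwarding

    def handle(self, packet_dst: str) -> str:
        # Without canIpForward, a Compute Engine VM will not forward
        # packets whose destination is not one of its own addresses.
        if not self.ip_forwarding:
            return "dropped"
        # With forwarding enabled, the appliance can inspect (URL
        # filtering) and then forward out the other NIC.
        return f"inspected-and-forwarded-to:{packet_dst}"

fw = InlineAppliance(ip_forwarding=True)
print(fw.handle("internet"))

broken = InlineAppliance(ip_forwarding=False)
print(broken.handle("internet"))  # why option B's omission matters
```

This is why the option that omits enabling IP forwarding cannot work as an inline inspection point even if its project placement were otherwise acceptable.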

Question 10

Your company operates a third-party edge firewall at a remote warehouse that only supports IKEv1 and does not support BGP. You must establish connectivity from the warehouse network to workloads running in Google Cloud using a policy-based VPN. The on-premises warehouse uses 10.30.20.0/24, 10.30.21.0/24, and 10.30.22.0/24. Your Google Cloud VPC uses 172.25.40.0/24, 172.25.41.0/24, and 172.25.42.0/24. You have already created a Cloud VPN gateway in Google Cloud and need to define the traffic selectors (LOCAL_TS and REMOTE_TS) on the legacy firewall to bring the tunnel up. What should you configure?

Incorrect. While 10.30.20.0/22 and 172.25.40.0/22 summarize the three /24s, they also include an extra /24 on each side (10.30.23.0/24 and 172.25.43.0/24). Policy-based VPNs often require selectors to match the actual encryption domains exactly; adding unintended ranges can cause negotiation mismatch or encrypt traffic you did not intend.

Incorrect. This option reverses the meaning of LOCAL_TS and REMOTE_TS from the perspective of the on-prem firewall. LOCAL_TS must represent the local (warehouse) networks behind the firewall, and REMOTE_TS must represent the remote (Google Cloud VPC) networks. Swapping them commonly prevents the tunnel from establishing because the proposed selectors do not match expectations.

Correct. For a policy-based VPN with IKEv1 and no BGP, you must explicitly define the interesting traffic. Setting LOCAL_TS to the three on-prem /24s and REMOTE_TS to the three VPC /24s precisely matches the required subnets without including extra address space. This is the most interoperable and least risky configuration for legacy policy-based IPsec with Cloud VPN.

Incorrect. Using REMOTE_TS as 0.0.0.0/0 is characteristic of route-based VPN designs (often with dynamic routing/BGP) and creates an overly broad encryption domain. With a policy-based VPN requirement, Google Cloud expects specific remote prefixes; 0.0.0.0/0 may be rejected, fail to match configured routes/selectors, or unintentionally send all traffic into the tunnel.

Question Analysis

Core concept: This question tests Classic Cloud VPN interoperability with legacy, policy-based IPsec VPNs (IKEv1, no BGP). In policy-based VPNs, the tunnel is defined by traffic selectors (LOCAL_TS and REMOTE_TS) that explicitly enumerate which local and remote CIDR ranges are permitted to be encrypted.

Why the answer is correct: Because the third-party firewall is policy-based and does not support BGP, you must use static routing and explicitly define the interesting traffic. Google Cloud policy-based VPN requires that the on-prem device propose traffic selectors that match the subnets on each side. Here, the warehouse has three /24s (10.30.20.0/24, 10.30.21.0/24, 10.30.22.0/24) and the VPC has three /24s (172.25.40.0/24, 172.25.41.0/24, 172.25.42.0/24). Therefore, LOCAL_TS must include the three on-prem /24s and REMOTE_TS must include the three VPC /24s. This aligns with how Cloud VPN validates selectors and ensures the negotiated IPsec SAs cover exactly the intended prefixes.

Key features / best practices:
- Policy-based VPN uses traffic selectors; route-based VPN uses 0.0.0.0/0 selectors and relies on routing (often BGP with HA VPN).
- With static routing, you must also ensure corresponding static routes exist on both sides (Cloud VPN static routes in Google Cloud; static routes/policies on the firewall).
- Keep selectors as specific as required; avoid overly broad selectors that may unintentionally encrypt traffic or fail due to mismatch.

Common misconceptions:
- Aggregating to a /22 looks convenient, but it includes extra address space not actually used (e.g., 10.30.23.0/24 and 172.25.43.0/24). Many policy-based implementations require exact matches; extra ranges can cause selector mismatch or create unintended encryption domains.
- Swapping LOCAL_TS and REMOTE_TS is a frequent mistake; LOCAL_TS is always the local (firewall-side) subnets.
- Using 0.0.0.0/0 is typical for route-based VPN, not policy-based, and can be rejected or create an overly permissive encryption domain.

Exam tips: When you see "IKEv1", "no BGP", and "policy-based VPN," think: explicit subnet pairs in traffic selectors and static routes. Ensure LOCAL_TS corresponds to the on-prem prefixes and REMOTE_TS corresponds to the VPC prefixes, and avoid summarization unless you are certain both sides accept and intend the broader CIDR.
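The summarization pitfall above is easy to verify with Python's `ipaddress` module: the /22 aggregate does cover the three intended warehouse /24s, but it also pulls in a fourth /24 that is not part of the encryption domain.

```python
import ipaddress

# The tempting /22 summary versus the three actual warehouse subnets.
summary = ipaddress.ip_network("10.30.20.0/22")
intended = [ipaddress.ip_network(f"10.30.{o}.0/24") for o in (20, 21, 22)]
extra = ipaddress.ip_network("10.30.23.0/24")  # not a warehouse subnet

# The /22 covers all three intended /24s...
print(all(net.subnet_of(summary) for net in intended))  # True

# ...but it also covers the unintended fourth /24.
print(extra.subnet_of(summary))  # True
```

The same arithmetic applies to 172.25.40.0/22 on the VPC side (it would include 172.25.43.0/24), which is why the exact /24 selector pairs are the safe choice for a policy-based tunnel.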


Question 11

Your retail company has set up a single IPSec Cloud VPN tunnel from its Google Cloud VPC to a logistics partner's on-premises device; the VPN Tunnel Status shows Established, but the Cloud Router's BGP Session Status shows BGP not configured. The partner provided these BGP parameters:
• Partner BGP address: 169.254.22.1/30
• Partner ASN: 65044
• Google Cloud BGP address: 169.254.22.2
• Google Cloud ASN: 65001
• MD5 Authentication: Disabled
You have already associated the Cloud Router with the Cloud VPN tunnel. Based on the partner's settings, how should you configure the local BGP session on Google Cloud?

Incorrect because it sets Peer ASN to 65001, which is Google Cloud’s ASN, not the partner’s. In Cloud Router BGP peer configuration, “Peer ASN” must be the remote ASN (here 65044). The IPs and MD5 setting are otherwise aligned, which can make this option tempting, but the ASN mismatch prevents a proper eBGP session.

Incorrect because it swaps the local and peer BGP IP addresses. Google’s local BGP IP should be 169.254.22.2 and the partner’s peer IP should be 169.254.22.1, per the provided parameters. Reversing these addresses will cause BGP neighbor establishment to fail even though the IPSec tunnel is up.

This is correct because it uses the partner ASN 65044 as the Peer ASN, which is exactly how eBGP on Cloud Router should be configured. It also assigns Google Cloud’s BGP address 169.254.22.2 as the Local BGP IP and the partner’s address 169.254.22.1 as the Peer BGP IP, matching the values provided in the prompt. MD5 authentication is set to disabled, which matches the partner configuration and is required for successful session establishment. Although the route priority is shown as 100, that value is not the reason this option is correct; the critical match is the ASN, IP addressing, and MD5 settings.

This option is incorrect because it enables MD5 authentication even though the partner explicitly stated that MD5 authentication is disabled. BGP authentication settings must match on both sides, and a mismatch will prevent the BGP session from establishing even if the IPSec tunnel itself is up. The local and peer IPs and peer ASN are otherwise aligned, which makes this a plausible distractor. The route priority value of 1000 is not the key problem here; the MD5 mismatch is sufficient to make the configuration fail.

Question Analysis

Core Concept: This question tests how to configure a Cloud Router BGP peer for a Cloud VPN tunnel after the IPSec tunnel is already established. An established VPN tunnel only confirms the encrypted transport is up; dynamic route exchange still requires a correctly configured BGP session on the Cloud Router.

Why the Answer is Correct: The partner provided the remote BGP parameters for the /30 link-local peering subnet. On Google Cloud, the BGP peer must use the partner ASN as the Peer ASN, Google Cloud's BGP address as the Local BGP IP, the partner's BGP address as the Peer BGP IP, and MD5 authentication must match the partner setting of disabled. Option C is the only choice that aligns with those required values.

Key Features / Config Best Practices:
- Cloud Router uses the router's own ASN locally and the remote device's ASN as the peer ASN.
- The local and peer BGP IPs must match the exact addresses assigned to each side of the BGP link.
- MD5 authentication must be configured identically on both sides or the BGP session will fail.
- Advertised route priority influences path preference in some scenarios, but it is not what determines whether the BGP session is configured correctly in this question.

Common Misconceptions:
- Confusing the Cloud Router ASN with the peer ASN. The router ASN is configured on the Cloud Router itself, while the BGP session uses the partner ASN as the peer.
- Assuming that because the VPN tunnel is established, routing should already work. Without a BGP peer, no dynamic routes are exchanged.
- Swapping local and peer BGP IPs because both addresses are in the same /30 range. Each side must use its specifically assigned address.

Exam Tips:
- For Cloud Router BGP questions, map each provided parameter directly: remote ASN to Peer ASN, Google IP to Local BGP IP, partner IP to Peer BGP IP.
- If the status says "BGP not configured," the issue is usually missing or incorrect BGP peer settings rather than an IPSec failure.
- Ignore distractors that change unrelated values unless the question explicitly asks about path preference or route selection.
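The parameter-mapping checklist above can be expressed as a small validation sketch. The field names and the `session_ok` helper are invented for illustration; they simply mirror the checks that decide whether an eBGP session can come up with the partner's values.

```python
import ipaddress

# Values taken directly from the question's partner-provided parameters.
LOCAL = {"asn": 65001, "bgp_ip": "169.254.22.2"}
PEER = {"asn": 65044, "bgp_ip": "169.254.22.1", "md5": False}

def session_ok(local, peer, md5_enabled_locally=False):
    """Rough eBGP sanity check: both addresses on the /30 link,
    distinct ASNs, and matching MD5 settings on both sides."""
    link = ipaddress.ip_network("169.254.22.0/30")
    same_subnet = all(
        ipaddress.ip_address(side["bgp_ip"]) in link for side in (local, peer)
    )
    ebgp = local["asn"] != peer["asn"]          # Peer ASN must be the remote ASN
    md5_match = md5_enabled_locally == peer["md5"]
    return same_subnet and ebgp and md5_match

print(session_ok(LOCAL, PEER))                             # option C's values
print(session_ok(LOCAL, PEER, md5_enabled_locally=True))   # option D's mismatch
```

Note how the MD5 mismatch alone is enough to fail the check, matching the explanation for the incorrect option that enables authentication.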

Question 12

Your security team now requires capturing packet payloads for all egress traffic originating from Compute Engine instances in region europe-west4 within VPC prod-vpc, limited to subnets app-euw4 (10.70.0.0/20) and jobs-euw4 (10.70.16.0/20). You have deployed an IDS virtual appliance as a regional managed instance group with 3 VMs (ids-mig) in europe-west4. You must integrate the IDS so it receives mirrored packets for egress traffic only and production routing remains unchanged. What should you do?

Firewall rules logging records allow/deny decisions and some connection metadata in Cloud Logging, but it does not capture full packets or payloads. Forwarding logs to an IDS would not provide the raw traffic needed for deep packet inspection and signature-based detection. It also introduces log processing latency and does not meet the explicit “packet payloads” requirement.

VPC Flow Logs capture network flow metadata (e.g., src/dst IP, ports, protocol, bytes, packets, start/end time) and are useful for visibility and troubleshooting, not payload inspection. Even with a Logging sink and filtering for egress, you still cannot reconstruct packet contents. This fails the IDS requirement for mirrored packets with payloads.

Packet Mirroring is designed to send copies of packets (including payload) to security appliances without changing production routing. A packet mirroring collector is implemented using a regional internal TCP/UDP (passthrough) load balancer, which can target a managed instance group like ids-mig for scale and HA. Configuring a regional policy in europe-west4 with direction=EGRESS and selecting the two subnets precisely matches the requirements.

An internal HTTP(S) load balancer is a proxy-based Layer 7 service and is not used as a Packet Mirroring collector. Packet Mirroring collectors require an internal passthrough Network Load Balancer (TCP/UDP) so the mirrored packets are delivered without L7 termination or protocol constraints. Additionally, “HTTP(S)” would not cover arbitrary egress traffic (non-HTTP) from the subnets.

Question Analysis

Core concept: This question tests Google Cloud Packet Mirroring for out-of-band network security monitoring. Packet Mirroring copies packets (including payload) from selected sources (VM NICs, instance tags, or subnets) to a collector without changing routing or requiring inline appliances. This aligns with the Google Cloud Architecture Framework security principle of centralized visibility and detection while preserving reliability by avoiding inline dependencies.

Why the answer is correct: The requirement is to capture packet payloads for egress traffic originating from Compute Engine instances in europe-west4, limited to two subnets, and to send mirrored packets to an IDS MIG, while keeping production routing unchanged. Packet Mirroring is the only option that provides full packet copies (payload) rather than metadata logs. In Google Cloud, mirrored traffic is delivered to a Packet Mirroring collector, which is implemented using a regional internal passthrough Network Load Balancer (internal TCP/UDP load balancer). You then configure a regional packet mirroring policy in europe-west4, select the two subnets as the mirrored sources, and set direction=EGRESS to mirror only outbound traffic.

Key features / configuration points:
- Scope: the packet mirroring policy is regional; sources and collector must be in the same region (europe-west4).
- Sources: select subnets app-euw4 and jobs-euw4 to constrain coverage.
- Direction: set to EGRESS to meet the "egress only" requirement.
- Collector: use a regional internal TCP/UDP load balancer (passthrough) targeting the ids-mig instance group so traffic is distributed across the 3 IDS VMs.
- Routing unchanged: mirroring is a copy; it does not affect forwarding decisions or introduce latency to production flows.

Common misconceptions: Flow Logs and firewall logs are often confused with packet capture. They provide metadata (5-tuple, bytes, actions) but not payload, so they cannot satisfy "packet payloads." Also, HTTP(S) load balancers are proxy-based L7 services and are not used as packet mirroring collectors.

Exam tips: When you see "payload," "IDS," "out-of-band," and "routing remains unchanged," think Packet Mirroring. Remember that collectors use internal passthrough (TCP/UDP) load balancing, and policies are regional with selectable direction (INGRESS/EGRESS/BOTH). Also consider quotas and cost: mirrored traffic can be high-volume and billed; scope mirroring tightly (subnets, direction, optional filters) to control cost and capacity.
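The policy's selection logic can be sketched as a simple predicate: mirror a copy of a packet only when its source lies in one of the two chosen subnets and the direction matches EGRESS. The `should_mirror` helper is purely illustrative; real policies are configured on Google Cloud resources, not in application code.

```python
import ipaddress

# The two mirrored source subnets from the question.
MIRRORED_SUBNETS = [
    ipaddress.ip_network("10.70.0.0/20"),   # app-euw4
    ipaddress.ip_network("10.70.16.0/20"),  # jobs-euw4
]

def should_mirror(src_ip: str, direction: str) -> bool:
    """Mirror only egress packets whose source is in a selected subnet."""
    if direction != "EGRESS":
        return False
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in MIRRORED_SUBNETS)

print(should_mirror("10.70.3.7", "EGRESS"))   # app-euw4 egress: mirrored
print(should_mirror("10.70.3.7", "INGRESS"))  # ingress: not mirrored
print(should_mirror("10.70.40.1", "EGRESS"))  # outside both subnets
```

Scoping the predicate this tightly is also the cost-control lever mentioned in the exam tips: every packet it matches is duplicated toward the collector and billed.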

Question 13

Your company runs two microservices in a regional GKE cluster (name: prod-net, region: us-central1) exposed through a single external HTTP(S) Load Balancer configured by a Kubernetes Ingress; requests to shop.acmepuzzles.com/orders and shop.acmepuzzles.com/insights load correctly, but going to https://shop.acmepuzzles.com/ returns an HTTP 404 from the load balancer, and you must fix this without changing DNS or creating a new load balancer; what should you do?

Incorrect. A Kubernetes Service does not contain HTTP path rules; it is primarily an L4 construct (ClusterIP/NodePort/LoadBalancer) that selects Pods and exposes ports. Adding a “*” path rule to a Service is not a valid or effective way to influence a Google Cloud HTTP(S) Load Balancer URL map. Path-based routing is configured in the Ingress resource, not the Service.

Incorrect. While the Ingress is the right place to change routing, adding a path rule for “*” is not the correct or portable approach. Kubernetes Ingress path matching is typically Prefix or Exact; “*” is not a standard path pattern and may not be accepted or may behave unexpectedly. The clean, supported solution for unmatched paths is to use a default backend (or explicitly add a “/” prefix rule).

Correct. Defining spec.defaultBackend in the Ingress sets the load balancer URL map’s default service. This ensures requests that do not match any host/path rule (including the bare domain root if not otherwise matched) are routed to the intended base Service instead of returning the load balancer’s default 404. This change updates the existing GCLB created by the Ingress, meeting the constraints.

Incorrect. Services do not support defining an Ingress-style default backend. The concept of a default backend belongs to the Ingress/HTTP(S) load balancing layer (URL map). Editing the Service YAML cannot change how the external HTTP(S) Load Balancer matches paths or what it does when no path rule matches; only the Ingress configuration can do that.

Question Analysis

Core Concept: This question tests how Kubernetes Ingress on GKE programs a Google Cloud external HTTP(S) Load Balancer (GCLB) and how unmatched URL paths are handled. In GKE Ingress, path rules map to backend Services; if a request does not match any host/path rule, the load balancer returns a 404 generated by the URL map (not by your Pods).

Why the Answer is Correct: /orders and /insights work, meaning the Ingress has explicit path rules for those prefixes. The root path "/" (https://shop.acmepuzzles.com/) does not match any rule, so the GCLB URL map has no matching path matcher and returns a default 404. The correct fix is to configure a default backend in the Ingress so that any request that doesn't match a specific path rule (including "/" if not otherwise defined) is routed to the intended "base" Service. This satisfies the constraints: no DNS change and no new load balancer, only an Ingress update that reprograms the existing URL map.

Key Features / Configurations: In Kubernetes Ingress (networking.k8s.io/v1), you can set spec.defaultBackend (or in older APIs, spec.backend) to a Service. This becomes the URL map's defaultService. It is a best practice to define a default backend to avoid unexpected 404s and to provide a controlled landing page or error handler. This aligns with Google Cloud Architecture Framework operational excellence: predictable routing and clear failure modes.

Common Misconceptions: Many assume adding a wildcard path like "*" is valid in Kubernetes Ingress. In practice, Kubernetes path matching is based on Prefix/Exact (and ImplementationSpecific), and "*" is not a portable, standards-based path. Another misconception is that Services control HTTP routing; Services are L4 abstractions and do not define URL paths. Ingress does.

Exam Tips: When you see "404 from the load balancer" with some paths working, think URL map path matching and Ingress rules. Fixes typically involve adding a "/" prefix rule or defining spec.defaultBackend. Also remember: DNS points to the forwarding rule IP; changing routing without a new load balancer usually means updating Ingress (which updates the URL map) rather than modifying Services.
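The URL-map behavior can be modeled as prefix matching with a fallback. The `route` function below is a sketch, not GKE code: explicit prefix rules for the two working paths, and an optional default backend that stands in for spec.defaultBackend. Service names are invented for illustration.

```python
# Hypothetical mapping of Ingress prefix rules to backend Services.
PATH_RULES = {"/orders": "orders-svc", "/insights": "insights-svc"}

def route(path, default_backend=None):
    """Return the backend for a request path: first matching prefix rule,
    else the default backend, else the load balancer's built-in 404."""
    for prefix, svc in PATH_RULES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return svc
    return default_backend or "HTTP 404 (load balancer default)"

print(route("/orders"))                        # matches an explicit rule
print(route("/"))                              # before the fix: 404
print(route("/", default_backend="base-svc"))  # after setting defaultBackend
```

This makes the failure mode concrete: the bare domain root never matched a rule, so the URL map, not any Pod, produced the 404 until a default backend was defined.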

Question 14

You are implementing a transit-hub network on Google Cloud for a media company with multiple regional spoke VPCs; the hub hosts a pair of third-party firewalls in high availability behind a regional internal passthrough Network Load Balancer with VIP 172.31.10.8, all spokes are already peered to the hub, and the requirement is that every spoke must use the hub firewalls for all internet egress (0.0.0.0/0) while only the firewall instances in the hub are allowed to use the default internet gateway in the hub; what should you configure to meet these requirements and maintain high availability?

This option is incomplete because it leaves the spokes’ own default internet gateway routes in place. In Google Cloud, a local route in the spoke VPC is preferred over a route learned through VPC peering, so the imported 0.0.0.0/0 from the hub would not actually be used for internet egress. As a result, spoke workloads would continue to exit directly to the internet instead of traversing the hub firewalls. That violates the explicit requirement that every spoke must use the hub firewalls for all internet-bound traffic.

This option correctly uses the regional internal passthrough Network Load Balancer VIP as the next hop for the hub’s 0.0.0.0/0 route, which preserves high availability across the firewall pair. It also replaces the unrestricted hub default internet gateway route with a tag-scoped route so only the firewall instances can send traffic directly to the internet after inspection. Exporting custom routes from the hub and importing them into the spokes allows the centralized default route to propagate over VPC peering. Finally, removing the spokes’ own default route ensures they do not bypass the hub, which is necessary to satisfy the requirement that every spoke use the hub firewalls for all internet egress.

This option uses individual firewall instances as route next hops instead of the internal passthrough NLB VIP, which is not the best high-availability design for the stated architecture. The question explicitly says the firewalls are deployed behind a regional internal passthrough Network Load Balancer, and that VIP is the supported resilient next hop for steering traffic to the active appliance. Using per-instance next hops complicates failover and can lead to asymmetric or failed routing during appliance outages. It also does not address the spoke-side route-precedence problem as clearly as the best answer does.

This option is invalid because a spoke VPC cannot create a static route with a next hop that is an internal passthrough load balancer VIP located in a different VPC over VPC peering. The supported model is to create the custom route in the hub and exchange it through peering by enabling export and import of custom routes. Directly referencing the remote VIP from the spoke bypasses the peering route-exchange mechanism and is not how next hops work across peered VPCs. Therefore, it does not meet the technical constraints or the intended centralized egress design.

Question Analysis

Core concept: This question tests centralized internet egress through third-party firewalls in a hub VPC using VPC Network Peering custom route exchange, while preserving high availability with an internal passthrough Network Load Balancer. The critical routing rule is that in Google Cloud, local routes in a VPC are preferred over peering-learned routes, including a local default route to the default internet gateway. Therefore, for spokes to actually use the hub-advertised 0.0.0.0/0 route, their own local default internet gateway route must be removed so the imported custom default route from the hub is selected. Why correct: Option B correctly creates a custom default route in the hub pointing to the internal passthrough NLB VIP, which provides a highly available next hop across the firewall pair. It also restricts direct internet access in the hub by replacing the broad default internet gateway route with a tag-scoped route that only the firewall instances can use. Then it enables export/import of custom routes across the hub-and-spoke peerings so the spokes learn the hub’s 0.0.0.0/0 route. Finally, it removes the spokes’ own default internet gateway routes, which is necessary because local routes take precedence over peering-learned routes. Key features: - Internal passthrough Network Load Balancer VIP can be used as the next hop for a custom static route to provide HA for third-party appliances. - Tag-scoped default internet gateway route in the hub ensures only firewall instances can egress directly to the internet. - VPC Network Peering can exchange custom routes only when export/import is explicitly enabled. - Spoke VPCs must not retain their own local default route if they are expected to use the imported hub default route. Common misconceptions: - Simply importing a custom default route over peering is not enough if the spoke still has its own local default internet gateway route. The local route wins. 
- You cannot point a route in one VPC directly to an internal load balancer VIP in another peered VPC. - Using individual firewall instances as route next hops does not provide the same HA behavior as using the internal passthrough NLB VIP. Exam tips: - Remember route selection precedence in Google Cloud: local routes are preferred over peering routes. - For centralized egress over peering, always verify both custom route exchange and whether local spoke routes must be removed or replaced. - For HA third-party appliances, prefer an internal passthrough NLB as the route next hop instead of individual instances.
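A minimal gcloud sketch of the hub-and-spoke egress steps described above, assuming a hub network `hub-vpc`, a spoke network `spoke-vpc`, peerings `hub-to-spoke`/`spoke-to-hub`, and a firewall ILB forwarding rule `fw-ilb-fr` in `us-central1`; all names, the region, and the default-route names are hypothetical placeholders (in a real project the auto-generated default route names differ and must be looked up first).

```shell
# Hub: custom default route whose next hop is the firewall pair's internal
# passthrough NLB forwarding rule (provides HA across the appliances)
gcloud compute routes create hub-default-via-fw \
  --network=hub-vpc --destination-range=0.0.0.0/0 \
  --next-hop-ilb=fw-ilb-fr --next-hop-ilb-region=us-central1

# Hub: replace the broad default internet route with a tag-scoped one that
# only instances tagged "fw-instance" (the firewalls) can use
gcloud compute routes delete hub-default-internet-route --quiet
gcloud compute routes create fw-only-default \
  --network=hub-vpc --destination-range=0.0.0.0/0 \
  --next-hop-gateway=default-internet-gateway --tags=fw-instance

# Exchange custom routes over the peering (hub exports, spoke imports)
gcloud compute networks peerings update hub-to-spoke \
  --network=hub-vpc --export-custom-routes
gcloud compute networks peerings update spoke-to-hub \
  --network=spoke-vpc --import-custom-routes

# Spoke: remove its local default route so the imported hub 0.0.0.0/0 wins
gcloud compute routes delete spoke-default-internet-route --quiet
```

The order matters operationally: enabling the route exchange before deleting the spoke's local default route avoids a window with no default route at all.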

Question 15

Your company is deploying a new 20-Gbps Dedicated Interconnect with two VLAN attachments in us-east4 and BGP peering to on-premises ASN 65010; three departments (R&D, HR, and Finance) each use separate service projects attached to a single Shared VPC host project that owns the central VPC, and you need all departments to exchange routes with on-premises over this Interconnect—where should you create the Cloud Router instance?

Incorrect. You cannot create Cloud Router “in the VPC networks of all projects” to solve Shared VPC hybrid routing. Cloud Router is tied to a specific VPC network, and in Shared VPC the VPC network exists in the host project. Service projects don’t have separate VPC networks for the shared subnets; they attach to the host’s VPC. Multiple routers in multiple projects would not provide shared VPC route exchange.

Incorrect. Creating the Cloud Router in the Finance service project would not attach it to the Shared VPC host project’s VPC network. Service projects typically do not own the shared VPC network; they only use subnets from the host. As a result, the VLAN attachments/BGP sessions for the Dedicated Interconnect would not be correctly associated with the central VPC used by all departments.

Correct. The Shared VPC host project owns the central VPC network, and Cloud Router must be created in the same project as the VPC network it serves. Since the Dedicated Interconnect VLAN attachments are in us-east4 and need to exchange routes with on-premises for all departments, placing Cloud Router in the host project’s VPC enables dynamic route import/export for the shared network consumed by R&D, HR, and Finance.

Incorrect. R&D, HR, and Finance are service projects and do not each have their own VPC network for the shared subnets; they attach to the host project’s VPC. Creating separate Cloud Routers in each service project would not connect those routers to the shared VPC, and would not correctly terminate the VLAN attachments for the Dedicated Interconnect that must be associated with the host VPC.

Question Analysis

Core concept: This question tests how Cloud Router, VLAN attachments (Dedicated Interconnect), and Shared VPC interact. Cloud Router is a regional, VPC-scoped managed BGP control plane used to dynamically exchange routes between a VPC network and on-premises via Interconnect (or VPN).

Why the answer is correct: In a Shared VPC architecture, the VPC network is owned by the host project, and service projects attach resources (VMs, GKE nodes, etc.) to subnets in that host project's VPC. Hybrid connectivity components that are VPC-scoped, such as Cloud Router and VLAN attachments, must be created in the project that owns the VPC network: the Shared VPC host project. Creating the Cloud Router in the host project's VPC ensures that routes learned from on-premises over BGP are imported into the shared VPC and therefore become available to all attached service projects (R&D, HR, Finance), subject to the routing mode and any custom route import/export policies.

Key features / configurations:
- Cloud Router is regional, so create it in us-east4 to match the VLAN attachments in us-east4.
- VLAN attachments terminate into the host project's VPC and are associated with a Cloud Router for BGP sessions to on-prem ASN 65010.
- Dynamic route exchange then propagates within the VPC, subject to global vs. regional dynamic routing mode and any custom route advertisements.
- Best practice: use redundant VLAN attachments and BGP sessions for high availability, advertise custom prefixes if needed, and consider route limits/quotas.

Common misconceptions: A frequent mistake is thinking each department's service project needs its own Cloud Router. Service projects do not own the shared VPC network; they consume it. Placing a Cloud Router in a service project won't attach it to the shared VPC network and won't provide hybrid route exchange for all departments.

Exam tips:
- Remember the scoping rule: Cloud Router and Interconnect VLAN attachments are created in the project that owns the VPC network they attach to.
- For Shared VPC, hybrid connectivity is typically centralized in the host project.
- Always align the Cloud Router region with the Interconnect/VLAN attachment region, and validate dynamic routing mode and route advertisement/import settings for cross-project reachability.
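The host-project setup described above might be sketched as follows; the project (`host-project`), network (`central-vpc`), router, attachment, and interconnect names are hypothetical, and 65020 is an assumed Google-side private ASN (only the on-prem ASN 65010 comes from the question).

```shell
# Hybrid components are created in the Shared VPC host project, which owns the VPC
gcloud compute routers create hybrid-router \
  --project=host-project --network=central-vpc \
  --region=us-east4 --asn=65020

# Terminate one VLAN attachment into the host VPC (create a second for redundancy)
gcloud compute interconnects attachments dedicated create vlan-attach-1 \
  --project=host-project --region=us-east4 \
  --router=hybrid-router --interconnect=corp-interconnect-1

# Add a router interface on the attachment, then a BGP peer to on-prem ASN 65010
gcloud compute routers add-interface hybrid-router \
  --project=host-project --region=us-east4 \
  --interface-name=if-vlan-attach-1 --interconnect-attachment=vlan-attach-1
gcloud compute routers add-bgp-peer hybrid-router \
  --project=host-project --region=us-east4 \
  --peer-name=onprem-peer-1 --interface=if-vlan-attach-1 --peer-asn=65010
```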

Want to solve all the questions on the go?

Download Cloud Pass for practice exams, study progress tracking, and more.

Question 16

Your organization operates 4 VPC networks across 3 Google Cloud projects under a single folder. The Compliance team owns all firewall rules and SSL certificates for audit purposes, while the Platform Networking team administers VPCs, subnets, routes, and peering. The networking team must be able to view firewall rules across all projects for troubleshooting (read/list only), but must not be able to create, modify, or delete any firewall rules. You plan to grant access at the folder level so it inherits to all current and future projects. What IAM permissions or roles should you assign to the Platform Networking team to meet these requirements while adhering to least privilege?

compute.networkUser is intended for service projects or principals that need to attach resources such as VMs or load balancers to a shared VPC network or subnet. It does not provide the broad read-only visibility into firewall rules across projects that the troubleshooting requirement calls for, nor does it align with administering network topology resources. This role is about consuming networks, not inspecting firewall policy. Therefore it does not meet the stated operational need.

compute.networkAdmin is too permissive because it includes the ability to manage firewall rules in addition to other networking resources. That directly violates the separation-of-duties requirement that the Compliance team owns firewall rules and SSL certificates for audit purposes. Even though it would let the networking team troubleshoot, it would also let them create, modify, or delete firewall rules, which the question explicitly forbids. Least privilege rules this option out immediately.

A custom role containing only compute.networks.* and compute.firewalls.list is incomplete for the responsibilities described. The scenario says the Platform Networking team administers VPCs, subnets, routes, and peering, but compute.networks.* covers only network resources and does not include the separate permission families for subnetworks, routes, and network peerings. In addition, limiting firewall access to only list is narrower than the stated read/list need and may not support full troubleshooting visibility. Because the custom role omits required networking administration permissions, it does not satisfy the full requirement.

compute.networkViewer is the predefined least-privilege role that gives read-only visibility into networking resources, including firewall rules, across the folder and all descendant projects. That satisfies the explicit requirement that the Platform Networking team be able to view and troubleshoot firewall rules without being able to create, modify, or delete them. The added compute.networks.use permission is unnecessary for the stated firewall-viewing requirement, but among the provided choices this option is still the only one built around a read-only networking role rather than an administrative or incomplete custom role. Because the binding is applied at the folder level, the access inherits consistently to all current and future projects under that folder.

Question Analysis

Core concept: This question is about selecting the least-privilege IAM role assignment at the folder level so the Platform Networking team can troubleshoot by viewing firewall rules across all inherited projects, while preserving separation of duties because Compliance owns firewall administration.

Why correct: The team needs read-only visibility into firewall rules and networking resources, but must not be able to create, update, or delete firewall rules. compute.networkViewer provides exactly that visibility with no mutation permissions.

Key features: Folder-level IAM bindings inherit to all current and future projects; predefined viewer roles are preferred when they satisfy the requirements; and firewall rule visibility should come from read-only permissions rather than admin permissions.

Common misconceptions: compute.networkAdmin is too broad because it includes firewall management; compute.networkUser is about attaching resources to networks rather than viewing or administering them; and a custom role limited to compute.networks.* does not cover the broader networking resources named in the scenario.

Exam tips: When a question says read/list only, eliminate any role with mutation permissions. When least privilege is required, prefer the narrowest predefined role that satisfies the visibility requirement, and avoid custom roles that omit required resource families.
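The folder-level binding described above might look like the following; the folder ID and group address are placeholders.

```shell
# Read-only network visibility (including firewall rules) granted at the folder,
# so it inherits to every current and future project under that folder
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
  --member="group:platform-networking@example.com" \
  --role="roles/compute.networkViewer"
```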

Question 17

Your media startup is launching a stateless HTTPS landing site in europe-west1 and asia-southeast1. The site runs on Compute Engine instances in two regional managed instance groups (one per region) with autoscaling and autohealing; no database or session persistence is required. You need a single global endpoint (site.example.com) that minimizes latency for EMEA and APAC users and can withstand a full regional outage, following Google-recommended practices. What should you do?

Incorrect. Regional network load balancers are L4 and regional; they don’t provide a single global anycast endpoint. Cloud CDN is not enabled on a regional TCP/UDP network load balancer (CDN integrates with external HTTP(S) load balancing). Using two IPs in DNS (round-robin) is not health-aware and can continue sending users to an unhealthy region due to DNS caching/TTL, hurting outage resilience.

Correct. A global external HTTP(S) load balancer (Premium Tier) provides one global anycast IP and routes users to the nearest edge, then to the closest healthy backend across regions. Attaching both regional MIGs enables automatic cross-region failover via health checks. Cloud CDN improves latency for cacheable content, and Cloud Armor is a recommended managed security layer for internet-facing apps. Cloud DNS points a single A record to the global VIP.

Incorrect. You do not need separate VPCs or VPC peering to use a global external HTTP(S) load balancer with backends in multiple regions; a single VPC is standard. Also, Cloud DNS should typically use an A/AAAA record to the load balancer’s IP; a CNAME is not used to point directly “to the load balancer” unless you have a provided DNS name (the LB primarily exposes an IP). This adds complexity without benefit.

Incorrect. Standard Network Tier does not provide a global anycast IP for external HTTP(S) load balancing; it is regional and won’t meet the “single global endpoint minimizing latency” requirement. Additionally, a CNAME record cannot point to an IP address (CNAME targets must be hostnames). Even if corrected to an A record, Standard Tier would still not satisfy the global latency and resilience goals.

Question Analysis

Core concept: This question tests the Google Cloud global external HTTP(S) load balancer (Application Load Balancer) with Premium Tier, which provides a single global anycast IP, latency-based routing at Google's edge, and cross-region failover. It also touches Cloud CDN and Cloud Armor as managed services attached to the load balancer.

Why the answer is correct: You need one global endpoint (site.example.com) that minimizes latency for users in EMEA and APAC and survives a full regional outage. A global external HTTP(S) load balancer with Premium Tier advertises a single anycast IP from many edge locations. Users are routed to the closest edge, then carried over Google's backbone to the closest healthy backend (europe-west1 or asia-southeast1). If an entire region fails, health checks mark that backend unhealthy and traffic automatically fails over to the remaining region, with no DNS changes or client-side retries required. This is the Google-recommended pattern for global, highly available HTTPS services.

Key features / best practices:
- Global anycast VIP + Premium Tier: best latency and global resiliency; Standard Tier is regional and does not provide the same edge-based global routing.
- Backend services can include multiple regional MIGs (one per region) with autoscaling and autohealing.
- Health checks and capacity-based balancing provide automatic regional failover.
- Cloud CDN integrates at the HTTP(S) load balancer to cache static content at the edge (useful for a landing site).
- Cloud Armor is a recommended security control for internet-facing endpoints (WAF/DDoS policy), though not strictly required for availability.
- Cloud DNS uses a single A record pointing to the global VIP, not multiple A records for "DIY" failover.

Common misconceptions:
- DNS with multiple A records (round-robin) is not true load balancing or fast failover; caching/TTL and the lack of health awareness can send users to a dead region.
- A Network Load Balancer (L4) is the wrong tool for HTTPS features such as host/path routing, managed certificates, CDN integration, and global anycast.
- Standard Tier cannot deliver a single global anycast endpoint.

Exam tips: When you see "single global endpoint," "minimize latency globally," and "withstand a regional outage," default to global external HTTP(S) Load Balancing with Premium Tier and multi-region backends. Use a Cloud DNS A record pointing to the global VIP; avoid DNS-based failover as the primary HA mechanism.
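A condensed gcloud sketch of the recommended setup; it assumes a health check `web-hc` and a certificate resource `site-cert` already exist, and every resource name (`web-backend`, `mig-eu`, `mig-asia`, etc.) is a hypothetical placeholder.

```shell
# Global backend service with Cloud CDN enabled, fed by two regional MIGs
gcloud compute backend-services create web-backend \
  --global --protocol=HTTP --health-checks=web-hc \
  --load-balancing-scheme=EXTERNAL_MANAGED --enable-cdn
gcloud compute backend-services add-backend web-backend --global \
  --instance-group=mig-eu --instance-group-region=europe-west1
gcloud compute backend-services add-backend web-backend --global \
  --instance-group=mig-asia --instance-group-region=asia-southeast1

# URL map, HTTPS proxy, and a single Premium Tier global anycast frontend
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-https-proxies create web-proxy \
  --url-map=web-map --ssl-certificates=site-cert
gcloud compute forwarding-rules create web-fr --global \
  --load-balancing-scheme=EXTERNAL_MANAGED --network-tier=PREMIUM \
  --target-https-proxy=web-proxy --ports=443
```

Cloud DNS then needs only one A record for site.example.com pointing at the forwarding rule's IP.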

Question 18

Your media analytics team runs workers in a VPC-native GKE Standard cluster in us-east1 where nodes currently have external IPs; the cluster uses the default ip-masq-agent settings (SNAT enabled). A partner exposes an API only to traffic originating from your Cloud NAT public addresses, and the partner's allowlist covers 203.51.64.0/19; you must ensure all pod egress to 203.51.64.0/19 uses Cloud NAT rather than the nodes' external IPs. You will configure Cloud NAT on the cluster subnet; what change should you make on the cluster to ensure traffic to 203.51.64.0/19 is NATed by Cloud NAT?

Incorrect. Adding 203.51.64.0/19 to nonMasqueradeCIDRs disables pod-to-node SNAT for that destination, which preserves the pod IP rather than translating it to the node IP. That does not inherently cause Cloud NAT to be used, and in a cluster whose nodes have external IPs it does not solve the stated requirement. The explanation confuses disabling ip-masq with forcing Cloud NAT, which are not equivalent behaviors. This option changes pod source preservation, not Cloud NAT eligibility.

Correct. nonMasqueradeCIDRs lists destinations that should bypass pod-to-node SNAT, so removing 203.51.64.0/19 means traffic to that partner range will be masqueraded to the node IP. Among the provided choices, this is the only cluster-side change that actually alters egress behavior for that destination in the intended direction. It also correctly uses the ip-masq-agent mechanism rather than inventing unsupported Cloud NAT features. The option reflects the fact that ip-masq-agent, not Cloud NAT, is the cluster component that controls whether pods are SNATed to node IPs.

Incorrect. Cloud NAT does not provide a destination-based exclusion rule that says 'do not translate traffic to this external CIDR.' Cloud NAT is attached to a Cloud Router and subnet ranges, and its behavior is not configured as per-destination internet exclusions in the way this option describes. Therefore this is not a valid or meaningful configuration change for the requirement. The option is based on a feature that Cloud NAT does not expose.

Incorrect. Cloud NAT has no configuration called nonMasqueradeCIDRs, and it does not consume or mirror ip-masq-agent settings. This option incorrectly mixes Kubernetes pod SNAT controls with VPC egress NAT controls as if they were one system. Because the two layers are independent, duplicating a list in Cloud NAT is not a supported solution. The wording is technically inaccurate and does not describe a real configuration pattern.

Question Analysis

Core concept: This question tests the interaction between the GKE ip-masq-agent and Cloud NAT for pod egress from a VPC-native GKE Standard cluster whose nodes have external IP addresses. The key point is that ip-masq-agent controls whether pod traffic is source-NATed to the node IP before leaving the node, while Cloud NAT is a VPC-level NAT service and shares no configuration with ip-masq-agent.

Why correct: To make pod egress to a specific destination use the node identity rather than preserve the pod IP, that destination must not be listed in nonMasqueradeCIDRs. Removing 203.51.64.0/19 from nonMasqueradeCIDRs causes pod traffic to that partner range to be masqueraded to the node IP, which is the only cluster-side change among the options that aligns with node-level egress behavior. The other options either preserve pod IPs, describe unsupported Cloud NAT behavior, or incorrectly conflate Cloud NAT with ip-masq-agent.

Key features:
- The ip-masq-agent nonMasqueradeCIDRs list defines destination CIDRs for which pod traffic keeps the pod source IP instead of being SNATed to the node IP.
- If a destination is not in nonMasqueradeCIDRs, pod traffic is masqueraded to the node IP before leaving the node.
- Cloud NAT is configured on subnets and routers, independently of ip-masq-agent; it has no matching nonMasqueradeCIDRs concept.

Common misconceptions:
- Adding a destination to nonMasqueradeCIDRs does not force Cloud NAT; it only disables pod-to-node SNAT for that destination.
- Cloud NAT does not support destination-based exclusion rules of the type described in option C.
- Cloud NAT and ip-masq-agent solve different problems and cannot be configured with a shared destination list, as option D implies.

Exam tips:
- Separate Kubernetes SNAT behavior from VPC NAT behavior when reading GKE networking questions.
- nonMasqueradeCIDRs means "do not SNAT pod traffic to the node IP for these destinations."
- If an option claims Cloud NAT has destination-based filtering or shares configuration semantics with ip-masq-agent, it is usually incorrect.
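A sketch of the ip-masq-agent configuration described above, assuming the standard GKE convention of a ConfigMap named ip-masq-agent in kube-system with a `config` key; the exact CIDR list is illustrative.

```shell
# Destinations listed under nonMasqueradeCIDRs keep the pod source IP.
# 203.51.64.0/19 is deliberately NOT listed, so pod traffic to the partner
# range is SNATed to the node IP (and can then use Cloud NAT once node
# egress no longer goes out via node external IPs).
kubectl create configmap ip-masq-agent --namespace=kube-system \
  --from-file=config=/dev/stdin <<'EOF'
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
masqLinkLocal: false
resyncInterval: 60s
EOF
```

Note that Cloud NAT only translates traffic from interfaces without external IPs, so in practice the nodes must also lose their external IPs for the partner to see the Cloud NAT addresses.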

Question 19

You are deploying a new VPC in europe-west1 to host internal microservices that must bind to two distinct private IP ranges. Your application VMs will reside in a subnet using 10.20.0.0/24, but a legacy client integration requires the same VMs to also listen on IPs from 192.168.70.0/24 for inbound connections. Without adding a second NIC or introducing new gateways, you need the instances to have addresses in both ranges; what should you do?

A global external HTTP(S) load balancer provides an external anycast VIP and L7 routing, not a way to make VMs natively own addresses from an internal RFC1918 range like 192.168.70.0/24. It also introduces a new front end (a managed gateway) and is protocol-specific (HTTP/S). This violates the requirement to avoid new gateways and doesn’t satisfy “instances have addresses in both ranges.”

DNS records can direct clients to different IPs, but DNS does not assign IP addresses to VM interfaces. If the VM does not actually have 192.168.70.0/24 addresses configured, it cannot receive traffic destined to those IPs (unless another device/load balancer owns them). DNS is a naming solution, not an IP addressing mechanism, so it cannot meet the requirement.

Alias IP ranges are the correct mechanism to give a VM additional internal IPs on the same NIC. You add 192.168.70.0/24 as a secondary range on the subnet and then assign alias IPs from that range to the instances. This satisfies: same VMs, two private ranges, no second NIC, and no additional gateways. It’s a standard VPC feature for multi-IP workloads.

VPC peering connects two separate VPC networks and exchanges routes, but it does not make a single VM interface hold IPs from two unrelated ranges. You would still need the 192.168.70.0/24 range to exist as a subnet/secondary range somewhere and a way to assign those IPs to the VM. Peering also adds architectural complexity and doesn’t meet the “same instances have addresses in both ranges” requirement.

Question Analysis

Core concept: This question tests VPC subnet design and how to assign multiple IP addresses/ranges to the same VM interface in Google Cloud without adding NICs or routing gateways. The key feature is alias IP ranges (secondary ranges) on a subnet plus alias IP assignment to VM instances.

Why the answer is correct: You already have a primary subnet range (10.20.0.0/24) for the VM NICs, but the same VMs must also listen on addresses from a second private range (192.168.70.0/24). In Google Cloud, you can add a secondary IP range to the same subnet and then assign alias IPs from that secondary range to the VM's existing NIC. This gives each instance additional internal IP addresses on the same interface, meeting the "no second NIC" and "no new gateways" requirements. Traffic to those alias IPs is delivered directly to the VM by the VPC dataplane.

Key features / configuration notes:
1) Add 192.168.70.0/24 as a secondary range on the subnet in europe-west1.
2) Configure each VM NIC with one or more alias IPs from that secondary range (or a /32 per VM, depending on how many IPs each needs).
3) Ensure firewall rules allow ingress to the alias IPs and ports as needed; rules can target instances, tags, or service accounts and do not require separate NICs.
4) Confirm the guest OS and application are configured to bind and listen on the additional IPs.

Common misconceptions: People often confuse "multiple IPs on a VM" with "multiple NICs," or think they need load balancers, DNS tricks, or VPC peering. DNS only changes name-to-IP mapping; it doesn't make a VM own an IP. Load balancers distribute traffic and typically use VIPs that are not simply arbitrary RFC1918 ranges inside your subnet. VPC peering connects two VPCs; it doesn't assign a second IP range to the same VM interface.

Exam tips: When you see requirements like "same VM, two private ranges, no extra NIC, no gateway," think: subnet secondary ranges plus alias IPs. Also remember that alias IPs are internal-only and are commonly used for multi-IP workloads, container networking, and service IPs, aligning with the Google Cloud Architecture Framework principle of choosing managed, native primitives that reduce operational complexity.
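The two configuration steps above might look like this in gcloud; the subnet, range, instance names, and the specific alias address are hypothetical placeholders.

```shell
# 1) Add the legacy range as a secondary range on the existing subnet
gcloud compute networks subnets update app-subnet \
  --region=europe-west1 \
  --add-secondary-ranges=legacy-range=192.168.70.0/24

# 2) Assign an alias IP from that secondary range to an existing VM's NIC
#    (no second NIC, no gateway; the VPC delivers traffic for this IP to the VM)
gcloud compute instances network-interfaces update app-vm-1 \
  --zone=europe-west1-b \
  --aliases="legacy-range:192.168.70.10/32"
```

The guest OS must also be configured to listen on 192.168.70.10 (many images pick up alias IPs automatically via the guest agent).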

Question 20

You are deploying an internal-only metrics ingestion HTTP endpoint on a Compute Engine VM named collector-01 in zone us-central1-b within the project analytics-prd, the VM has no external IP and must be reachable only by multiple client VMs in the same VPC network, and you need a simple, built-in way for those clients to obtain the service’s IP address without creating public DNS records or exposing the service; what should you do?

Incorrect. Reserving a static external IP and using an external HTTP(S) load balancer creates an internet-reachable frontend by design. Even if backends are private, the forwarding rule is public and violates the requirement that the service be internal-only and not exposed. It also adds unnecessary cost and complexity compared to built-in internal DNS for VM discovery.

Correct. Compute Engine provides internal DNS that allows VMs in the VPC to resolve an instance’s internal FQDN to its private IP address. Using https://collector-01.us-central1-b.c.analytics-prd.internal/ is a simple, built-in service discovery mechanism that does not require public DNS records, does not expose the service externally, and works naturally for clients in the same VPC.

Incorrect. This option explicitly creates a public DNS A record and uses an external HTTP(S) load balancer with a static external IP. That contradicts the requirement to avoid public DNS records and to not expose the service. Even if access is restricted at the firewall, the endpoint is still publicly addressed and increases attack surface and operational overhead.

Incorrect. A short alias like https://metrics/v1/ is not something Compute Engine automatically provides. Default search domains are not guaranteed to resolve arbitrary hostnames unless you configure custom DNS (for example, a private Cloud DNS zone with an A record for “metrics”). Without that configuration, clients will fail to resolve the name, so it is not a reliable built-in solution.

Question Analysis

Core concept: This question tests Google Cloud's built-in name resolution for private resources: Compute Engine internal DNS (provided by Cloud DNS for Google-managed internal zones). It is a managed network service that lets VMs discover other VMs by name using private RFC1918 addresses, without creating public DNS records.

Why the answer is correct: The VM collector-01 has no external IP and must be reachable only by clients in the same VPC. The simplest built-in way for clients to obtain the service IP is the instance's internal fully qualified domain name (FQDN), which resolves to the VM's internal IP from within the VPC. The internal DNS name includes the instance name, zone, and project-specific internal domain (…c.<project-id>.internal). Clients can call https://collector-01.us-central1-b.c.analytics-prd.internal/ and rely on Google-provided internal DNS resolution. This keeps the endpoint private and avoids managing Cloud DNS public zones or exposing an external load balancer.

Key features / best practices:
- Compute Engine provides internal DNS resolution for instances on VPC networks; names resolve to internal IPs and are only resolvable from within the VPC (and, depending on configuration, from connected networks via Cloud VPN/Interconnect with DNS forwarding).
- This approach aligns with the Google Cloud Architecture Framework principles of security (no public exposure) and operational excellence (managed service discovery without extra components).
- If you need a stable name independent of the VM lifecycle, you could later add a private Cloud DNS zone and records, but the question asks for "simple, built-in" and "no public DNS records."

Common misconceptions: Options A and C look attractive because load balancers and static IPs provide stable endpoints, but they are external HTTP(S) load balancers with external IPs, contradicting "internal-only" and adding unnecessary exposure risk and cost. Option D assumes generic search domains and a short alias like "metrics," which is not a guaranteed or default resolvable name in Compute Engine; it would require custom DNS configuration.

Exam tips:
- For private VM-to-VM discovery inside a VPC, think "Compute Engine internal DNS" first.
- For private service names not tied to instance naming or zone, think "Cloud DNS private zone."
- For private L7 load balancing, think "internal HTTP(S) load balancer," not external.
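From any client VM in the same VPC, the internal FQDN discussed above can be verified directly; the URL follows the question, and the zonal internal DNS format shown in the comment is the standard one.

```shell
# Zonal internal DNS: INSTANCE.ZONE.c.PROJECT_ID.internal resolves to the
# VM's primary internal IP from within the VPC network
getent hosts collector-01.us-central1-b.c.analytics-prd.internal

# Call the endpoint over the private network (no public DNS, no external IP)
curl -sS https://collector-01.us-central1-b.c.analytics-prd.internal/
```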

Success Stories (5)

E********** · Nov 25, 2025

Study period: 1 month

I really appreciated the detailed explanations. This app strengthened my fundamentals more than any video course.

혜** · Nov 17, 2025

Study period: 1 month

I finished all the questions, but my accuracy was only 65%, so I reset once and went through them all again. I focused on learning the actual concepts rather than memorizing questions and answers, so the workload was heavy, but on the real exam similar question types appeared, and I could solve even brand-new scenarios I had never seen before. To fellow test-takers: prepare well and pass!!

S*********** · Nov 6, 2025

Study period: 1 month

I was surprised how similar the question style was to the actual PCNE exam. Practicing with this app made complex topics like VPC peering and NAT configuration much easier. Passed and I’m really satisfied.

A************ · Oct 25, 2025

Study period: 2 weeks

I spent two weeks solving about 30 questions a day, and Cloud Pass helped me reinforce my weak spots in hybrid networking and load balancing strategies. This app is a must-have for anyone preparing for PCNE.

R********* · Oct 17, 2025

Study period: 1 month

Good questions, similar to the real exam questions. The app is a very helpful tool.

Practice Exams

Practice Test #1

50 questions · 120 minutes · Passing score 700/1000

Other GCP Certifications

- Google Professional Cloud DevOps Engineer (Professional)
- Google Associate Cloud Engineer (Associate)
- Google Associate Data Practitioner (Associate)
- Google Cloud Digital Leader (Foundational)
- Google Professional Cloud Security Engineer (Professional)
- Google Professional Cloud Architect (Professional)
- Google Professional Cloud Database Engineer (Professional)
- Google Professional Data Engineer (Professional)
- Google Professional Cloud Developer (Professional)
- Google Professional Machine Learning Engineer (Professional)

Start Learning Now

Download Cloud Pass to access all Google Professional Cloud Network Engineer practice questions.
© Copyright 2026 Cloud Pass, All rights reserved.
