Google Professional Cloud Network Engineer

Practice Test #1

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 120 Minutes · Passing score: 700/1000
Browse practice questions

Powered by AI

Triple AI-verified answers and explanations

Every answer is verified by 3 leading AI models to guarantee maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
Accuracy through 3-model consensus

Practice Questions

Question 1

Your team manages a VPC named Studio that was created in auto mode for a global video-rendering platform; auto-mode VPCs reserve 10.128.0.0/9 for their subnets across regions. You must create a new VPC named Archive in the same project and connect it to Studio using VPC Network Peering so that internal RFC1918 traffic routes privately end-to-end without NAT; the two VPCs must have non-overlapping IP ranges now and as they scale. How should you configure the Archive VPC?

Incorrect. An auto-mode VPC uses the same predefined 10.128.0.0/9 space for its automatically created regional subnets. Since Studio already uses 10.128.0.0/9, creating Archive in auto mode will overlap, and VPC Network Peering requires non-overlapping subnet IP ranges. This option fails the core peering prerequisite and does not support safe scaling.

Correct. Archive should be created as a custom-mode VPC so you can create subnets from a different RFC1918 address plan than Studio’s auto-mode 10.128.0.0/9 space. In Google Cloud, you do not assign a CIDR to the VPC itself; instead, you create subnets with primary and optional secondary ranges, and those ranges must not overlap with the peered network. Using subnets carved from a plan such as 10.0.0.0/9 provides room for future growth while satisfying the peering requirement for private, non-NAT connectivity.

Incorrect. Assigning Archive 10.128.0.0/9 directly overlaps with Studio’s auto-mode reserved/used range. Overlapping IP ranges prevent VPC Network Peering from being established (or will break routing expectations). Even if you attempted to avoid overlap by careful subnetting, Studio’s auto-mode growth across regions makes future collisions likely.

Incorrect. Renaming the default VPC does not solve the IP overlap problem and is operationally risky. The default VPC is typically auto mode with 10.128.0.0/9 subnets, which would still overlap with Studio. Additionally, relying on the default VPC is discouraged for production due to uncontrolled subnet creation and weaker governance.

Question Analysis

Core Concept: This question tests VPC Network Peering prerequisites and IP planning in Google Cloud. VPC Network Peering provides private connectivity over Google's backbone without NAT, but it requires that all subnet primary and secondary IP ranges in the peered VPCs do not overlap. Auto-mode VPCs automatically create regional subnets from the 10.128.0.0/9 space, so any peered network must use a different RFC1918 address plan.

Why the Answer is Correct: Studio is an auto-mode VPC, so its subnets come from 10.128.0.0/9. To connect Archive privately now and in the future, Archive should be created as a custom-mode VPC so you can define subnets from a different non-overlapping RFC1918 plan, such as ranges carved from 10.0.0.0/9. After creating the required subnets, you can establish VPC Network Peering and route internal traffic privately without NAT.

Key Features / Best Practices:
- VPC Peering is non-transitive and requires non-overlapping subnet primary and secondary ranges.
- Auto-mode VPCs automatically create one subnet per region from 10.128.0.0/9, which often causes collisions.
- Custom-mode VPCs let you explicitly create only the subnets you need and choose scalable, non-overlapping CIDRs.
- Good IP planning should reserve enough address space for future regional growth and secondary ranges.

Common Misconceptions: A common mistake is thinking a custom-mode VPC itself is assigned a single CIDR block; in reality, the VPC contains subnets, and those subnet ranges must be planned to avoid overlap. Another misconception is that two auto-mode VPCs can be peered because they are separate networks, but they still conflict because both use 10.128.0.0/9. Renaming or reusing the default network also does not avoid address overlap.

Exam Tips:
- Remember that auto-mode VPCs use 10.128.0.0/9 for automatically created regional subnets.
- For peering questions, always verify that all current and future subnet ranges can remain non-overlapping.
- Prefer custom-mode VPCs in production because they provide better control over subnet creation, governance, and long-term scaling.
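
A minimal gcloud sketch of this pattern; the lowercase network names, the europe-west1 region, and the 10.0.0.0/20 range are illustrative assumptions rather than values from the question:

  # Custom-mode VPC: no subnets are auto-created, so ranges stay under your control.
  gcloud compute networks create archive --subnet-mode=custom
  gcloud compute networks subnets create archive-euw1 \
      --network=archive --region=europe-west1 --range=10.0.0.0/20
  # Peering must be created from both sides before it becomes ACTIVE.
  gcloud compute networks peerings create studio-to-archive \
      --network=studio --peer-network=archive
  gcloud compute networks peerings create archive-to-studio \
      --network=archive --peer-network=studio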

Question 2

You are a network engineer at a global streaming company migrating core APIs to Google Cloud. These are the connectivity requirements:

  • On-premises connectivity with at least 20 Gbps from a single metro data center
  • Lowest-latency private access to Google Cloud (target: <5 ms one-way to nearest POP)
  • A centralized Network Engineering team must administer all WAN links and routing

New product groups are creating separate Google Cloud projects and require private access to on-prem services from their workloads. You need the most cost-efficient design to connect the data center to Google Cloud while meeting the above requirements. What should you do?

Correct. Putting the Interconnect (Dedicated or Partner) and all VLAN attachments in the Shared VPC host project centralizes control of WAN links, Cloud Router/BGP, and routing policy while allowing many service projects to use the shared VPC subnets and on-prem routes. This is typically the most cost-efficient and operationally consistent approach for a centralized Network Engineering team supporting many product projects.

Incorrect. Creating VLAN attachments in service projects decentralizes hybrid connectivity administration. Even if attachments connect to the Shared VPC network, ownership and lifecycle management are spread across product teams, conflicting with the requirement that a centralized Network Engineering team administer all WAN links and routing. It also increases the risk of inconsistent BGP/route advertisement settings and complicates troubleshooting and governance.

Incorrect. Standalone projects with VLAN attachments per project require each project to manage its own hybrid connectivity constructs and routing, which violates the centralized administration requirement. It is also less cost-efficient at scale because each project may need its own Cloud Router sessions and attachment configuration, increasing operational overhead and making consistent routing policy enforcement harder.

Incorrect. Deploying both Interconnects and VLAN attachments in every project is the least cost-efficient and most operationally complex design. It duplicates expensive connectivity resources, increases port and attachment counts, and makes centralized routing control nearly impossible. Project-level isolation should be achieved through Shared VPC governance, IAM, and network segmentation rather than replicating physical Interconnect infrastructure.

Question Analysis

Core concept: This question tests hybrid connectivity design using Cloud Interconnect (Dedicated or Partner), VLAN attachments, Cloud Router/BGP, and Shared VPC for multi-project private connectivity with centralized administration.

Why the answer is correct: A single metro data center needing >=20 Gbps and <5 ms one-way latency to the nearest Google POP strongly points to Cloud Interconnect rather than VPN. Dedicated Interconnect provides 10/100 Gbps per link and is designed for lowest-latency private connectivity at a colocation facility; Partner Interconnect can also meet bandwidth needs depending on provider offerings, but Dedicated is the typical choice for strict latency/throughput targets. To keep WAN links and routing centrally administered while enabling many product teams to use separate projects, you should place the Interconnect and all VLAN attachments in a Shared VPC host project. This centralizes ownership of the physical connectivity, VLAN attachments, Cloud Router sessions, and route exchange policies, while service projects consume the shared subnets and automatically gain access to on-prem routes (subject to IAM and firewall policy).

Key features / best practices:
- Use Shared VPC to separate network administration (host project) from application ownership (service projects), aligning with the Google Cloud Architecture Framework principle of centralized governance with delegated usage.
- Terminate Interconnect VLAN attachments in the host project VPC and use Cloud Router with BGP to exchange routes. Apply consistent routing policy (advertised prefixes, BGP communities, route priority) centrally.
- For >=20 Gbps, deploy redundant Interconnect connections (and redundant VLAN attachments/Cloud Routers) to meet HA best practices and avoid single points of failure. Consider quotas/limits for VLAN attachments and Cloud Router sessions per region.
- Cost efficiency: one set of Interconnect resources shared across many projects is typically cheaper and operationally simpler than duplicating attachments per project.

Common misconceptions: It may seem appealing to create VLAN attachments in each service project for autonomy, but that fragments routing control and increases operational overhead and cost. Another misconception is that “isolation” requires separate Interconnects per project; isolation is usually achieved with IAM, firewall policies, and segmentation (subnets, VPC design), not duplicating physical connectivity.

Exam tips: When you see multi-project private hybrid access plus centralized network control, think Shared VPC host project owns the hybrid connectivity (Interconnect/VLAN attachments/Cloud Router). Use Dedicated Interconnect for strict latency and high throughput from a single metro, and design for redundancy (at least two connections and dual attachments) even if not explicitly asked.
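
As a rough sketch, the hybrid resources all live in the Shared VPC host project; the project ID, network name, region, ASN, and resource names below are assumptions for illustration:

  # Cloud Router owned by the host project's shared VPC (64512 is an assumed private ASN).
  gcloud compute routers create wan-core --project=net-host-prj \
      --network=shared-vpc --region=us-east1 --asn=64512
  # VLAN attachment on the Dedicated Interconnect, also created in the host project.
  gcloud compute interconnects attachments dedicated create wan-attach-1 \
      --project=net-host-prj --region=us-east1 \
      --interconnect=dc-ic1 --router=wan-core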

Question 3

Your security team now requires capturing packet payloads for all egress traffic originating from Compute Engine instances in region europe-west4 within VPC prod-vpc, limited to subnets app-euw4 (10.70.0.0/20) and jobs-euw4 (10.70.16.0/20). You have deployed an IDS virtual appliance as a regional managed instance group with 3 VMs (ids-mig) in europe-west4. You must integrate the IDS so it receives mirrored packets for egress traffic only and production routing remains unchanged. What should you do?

Incorrect. Firewall rules logging records allow/deny decisions and some connection metadata in Cloud Logging, but it does not capture full packets or payloads. Forwarding logs to an IDS would not provide the raw traffic needed for deep packet inspection and signature-based detection. It also introduces log processing latency and does not meet the explicit “packet payloads” requirement.

Incorrect. VPC Flow Logs capture network flow metadata (e.g., src/dst IP, ports, protocol, bytes, packets, start/end time) and are useful for visibility and troubleshooting, not payload inspection. Even with a Logging sink and filtering for egress, you still cannot reconstruct packet contents. This fails the IDS requirement for mirrored packets with payloads.

Correct. Packet Mirroring is designed to send copies of packets (including payload) to security appliances without changing production routing. A packet mirroring collector is implemented using a regional internal TCP/UDP (passthrough) load balancer, which can target a managed instance group like ids-mig for scale and HA. Configuring a regional policy in europe-west4 with direction=EGRESS and selecting the two subnets precisely matches the requirements.

Incorrect. An internal HTTP(S) load balancer is a proxy-based Layer 7 service and is not used as a Packet Mirroring collector. Packet Mirroring collectors require an internal passthrough Network Load Balancer (TCP/UDP) so the mirrored packets are delivered without L7 termination or protocol constraints. Additionally, “HTTP(S)” would not cover arbitrary egress traffic (non-HTTP) from the subnets.

Question Analysis

Core concept: This question tests Google Cloud Packet Mirroring for out-of-band network security monitoring. Packet Mirroring copies packets (including payload) from selected sources (VM NICs, instance tags, or subnets) to a collector without changing routing or requiring inline appliances. This aligns with the Google Cloud Architecture Framework security principle of centralized visibility and detection while preserving reliability by avoiding inline dependencies.

Why the answer is correct: The requirement is to capture packet payloads for egress traffic originating from Compute Engine instances in europe-west4, limited to two subnets, and to send mirrored packets to an IDS MIG, while keeping production routing unchanged. Packet Mirroring is the only option that provides full packet copies (payload) rather than metadata logs. In Google Cloud, mirrored traffic is delivered to a Packet Mirroring collector, which is implemented using a regional internal passthrough Network Load Balancer (internal TCP/UDP load balancer). You then configure a regional packet mirroring policy in europe-west4, select the two subnets as the mirrored sources, and set direction=EGRESS to mirror only outbound traffic.

Key features / configuration points:
- Scope: Packet mirroring policy is regional; sources and collector must be in the same region (europe-west4).
- Sources: select subnets app-euw4 and jobs-euw4 to constrain coverage.
- Direction: set to EGRESS to meet the “egress only” requirement.
- Collector: use a regional internal TCP/UDP load balancer (passthrough) targeting the ids-mig instance group so traffic is distributed across the 3 IDS VMs.
- Routing unchanged: mirroring is a copy; it does not affect forwarding decisions or introduce latency to production flows.

Common misconceptions: Flow Logs and firewall logs are often confused with packet capture. They provide metadata (5-tuple, bytes, actions) but not payload, so they cannot satisfy “packet payloads.” Also, HTTP(S) load balancers are proxy-based L7 services and are not used as packet mirroring collectors.

Exam tips: When you see “payload,” “IDS,” “out-of-band,” and “routing remains unchanged,” think Packet Mirroring. Remember collectors use internal passthrough (TCP/UDP) load balancing, and policies are regional with selectable direction (INGRESS/EGRESS/BOTH). Also consider quotas/cost: mirrored traffic can be high-volume and billed; scope mirroring tightly (subnets, direction, optional filters) to control cost and capacity.
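
A condensed sketch of the collector and the mirroring policy, assuming a backend service named ids-be already fronts the ids-mig instance group; the other resource names are also illustrative:

  # Internal passthrough forwarding rule that acts as the mirroring collector.
  gcloud compute forwarding-rules create ids-collector-fr \
      --region=europe-west4 --load-balancing-scheme=INTERNAL \
      --network=prod-vpc --subnet=app-euw4 \
      --ip-protocol=TCP --ports=ALL \
      --backend-service=ids-be --is-mirroring-collector
  # Regional policy: mirror only egress from the two subnets to the collector.
  gcloud compute packet-mirrorings create egress-mirror \
      --region=europe-west4 --network=prod-vpc \
      --mirrored-subnets=app-euw4,jobs-euw4 \
      --collector-ilb=ids-collector-fr \
      --filter-direction=egress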

Question 4

Your company is deploying a new 20-Gbps Dedicated Interconnect with two VLAN attachments in us-east4 and BGP peering to on-premises ASN 65010; three departments (R&D, HR, and Finance) each use separate service projects attached to a single Shared VPC host project that owns the central VPC, and you need all departments to exchange routes with on-premises over this Interconnect—where should you create the Cloud Router instance?

Incorrect. You cannot create Cloud Router “in the VPC networks of all projects” to solve Shared VPC hybrid routing. Cloud Router is tied to a specific VPC network, and in Shared VPC the VPC network exists in the host project. Service projects don’t have separate VPC networks for the shared subnets; they attach to the host’s VPC. Multiple routers in multiple projects would not provide shared VPC route exchange.

Incorrect. Creating the Cloud Router in the Finance service project would not attach it to the Shared VPC host project’s VPC network. Service projects typically do not own the shared VPC network; they only use subnets from the host. As a result, the VLAN attachments/BGP sessions for the Dedicated Interconnect would not be correctly associated with the central VPC used by all departments.

Correct. The Shared VPC host project owns the central VPC network, and Cloud Router must be created in the same project as the VPC network it serves. Since the Dedicated Interconnect VLAN attachments are in us-east4 and need to exchange routes with on-premises for all departments, placing Cloud Router in the host project’s VPC enables dynamic route import/export for the shared network consumed by R&D, HR, and Finance.

Incorrect. R&D, HR, and Finance are service projects and do not each have their own VPC network for the shared subnets; they attach to the host project’s VPC. Creating separate Cloud Routers in each service project would not connect those routers to the shared VPC, and would not correctly terminate the VLAN attachments for the Dedicated Interconnect that must be associated with the host VPC.

Question Analysis

Core Concept: This question tests how Cloud Router, VLAN attachments (Dedicated Interconnect), and Shared VPC interact. Cloud Router is a regional, VPC-scoped managed BGP control plane used to dynamically exchange routes between a VPC network and on-premises via Interconnect (or VPN).

Why the Answer is Correct: In a Shared VPC architecture, the VPC network is owned by the host project, and service projects attach resources (VMs, GKE nodes, etc.) to subnets in that host project’s VPC. Hybrid connectivity components that are VPC-scoped—such as Cloud Router and VLAN attachments—must be created in the project that owns the VPC network (the Shared VPC host project). Creating the Cloud Router in the host project’s VPC ensures that routes learned from on-premises over BGP are imported into the shared VPC and therefore become available to all attached service projects (R&D, HR, Finance) according to routing mode and any custom route import/export policies.

Key Features / Configurations:
- Cloud Router is regional, so you create it in us-east4 to match the VLAN attachments in us-east4.
- VLAN attachments terminate into the host project’s VPC and are associated with a Cloud Router for BGP sessions to on-prem ASN 65010.
- Dynamic route exchange then propagates within the VPC (subject to global vs regional dynamic routing mode, and any custom route advertisements).
- Best practice: use redundant VLAN attachments and BGP sessions for high availability; ensure appropriate route advertisement (custom prefixes if needed) and consider route limits/quotas.

Common Misconceptions: A frequent mistake is thinking each department’s service project needs its own Cloud Router. Service projects do not “own” the shared VPC network; they consume it. Therefore, placing Cloud Router in a service project won’t attach it to the shared VPC network and won’t provide hybrid route exchange for all departments.

Exam Tips:
- Remember the scoping rule: Cloud Router and Interconnect VLAN attachments are created in the project that owns the VPC network they attach to.
- For Shared VPC, hybrid connectivity is typically centralized in the host project.
- Always align Cloud Router region with the Interconnect/VLAN attachment region, and validate dynamic routing mode and route advertisement/import settings for cross-project reachability.
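
A minimal sketch of the host-project placement; the project ID, network name, attachment name, Google-side ASN, and BGP peer address are assumptions (in practice the peer address comes from the VLAN attachment itself):

  # Cloud Router in the Shared VPC host project, same region as the attachments.
  gcloud compute routers create hybrid-rtr --project=shared-host-prj \
      --network=central-vpc --region=us-east4 --asn=64512
  # Bind the router to an existing VLAN attachment and peer with on-prem ASN 65010.
  gcloud compute routers add-interface hybrid-rtr --project=shared-host-prj \
      --region=us-east4 --interface-name=if-attach-1 \
      --interconnect-attachment=vlan-attach-1
  gcloud compute routers add-bgp-peer hybrid-rtr --project=shared-host-prj \
      --region=us-east4 --peer-name=onprem-peer-1 \
      --interface=if-attach-1 --peer-ip-address=169.254.10.2 --peer-asn=65010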

Question 5

You are deploying a new VPC in europe-west1 to host internal microservices that must bind to two distinct private IP ranges. Your application VMs will reside in a subnet using 10.20.0.0/24, but a legacy client integration requires the same VMs to also listen on IPs from 192.168.70.0/24 for inbound connections. Without adding a second NIC or introducing new gateways, you need the instances to have addresses in both ranges; what should you do?

A global external HTTP(S) load balancer provides an external anycast VIP and L7 routing, not a way to make VMs natively own addresses from an internal RFC1918 range like 192.168.70.0/24. It also introduces a new front end (a managed gateway) and is protocol-specific (HTTP/S). This violates the requirement to avoid new gateways and doesn’t satisfy “instances have addresses in both ranges.”

DNS records can direct clients to different IPs, but DNS does not assign IP addresses to VM interfaces. If the VM does not actually have 192.168.70.0/24 addresses configured, it cannot receive traffic destined to those IPs (unless another device/load balancer owns them). DNS is a naming solution, not an IP addressing mechanism, so it cannot meet the requirement.

Alias IP ranges are the correct mechanism to give a VM additional internal IPs on the same NIC. You add 192.168.70.0/24 as a secondary range on the subnet and then assign alias IPs from that range to the instances. This satisfies: same VMs, two private ranges, no second NIC, and no additional gateways. It’s a standard VPC feature for multi-IP workloads.

VPC peering connects two separate VPC networks and exchanges routes, but it does not make a single VM interface hold IPs from two unrelated ranges. You would still need the 192.168.70.0/24 range to exist as a subnet/secondary range somewhere and a way to assign those IPs to the VM. Peering also adds architectural complexity and doesn’t meet the “same instances have addresses in both ranges” requirement.

Question Analysis

Core concept: This question tests VPC subnet design and how to assign multiple IP addresses/ranges to the same VM interface in Google Cloud without adding NICs or routing gateways. The key feature is Alias IP ranges (secondary ranges) on a subnet and alias IP assignment to VM instances.

Why the answer is correct: You already have a primary subnet range (10.20.0.0/24) for the VM NICs, but the same VMs must also “listen” on addresses from a second private range (192.168.70.0/24). In Google Cloud, you can add a secondary IP range to the same subnet and then assign alias IPs from that secondary range to the VM’s existing NIC. This gives the instance additional internal IP addresses on the same interface, meeting the requirement of “no second NIC” and “no new gateways.” Traffic to those alias IPs is delivered directly to the VM by the VPC dataplane.

Key features / configuration notes:
1) Add 192.168.70.0/24 as a secondary range on the subnet in europe-west1.
2) Configure each VM NIC with one or more alias IPs from that secondary range (or a /32 per VM, depending on how many IPs each needs).
3) Ensure firewall rules allow ingress to the alias IPs/ports as needed (firewall rules can target instances/tags/service accounts; they don’t require separate NICs).
4) Confirm the guest OS/app is configured to bind/listen on the additional IP(s).

Common misconceptions: People often confuse “multiple IPs on a VM” with “multiple NICs” or think they need load balancers, DNS tricks, or VPC peering. DNS only changes name-to-IP mapping; it doesn’t make a VM own an IP. Load balancers are for distributing traffic and typically use VIPs that are not simply arbitrary RFC1918 ranges inside your subnet. VPC peering connects two VPCs; it doesn’t assign a second IP range to the same VM interface.

Exam tips: When you see requirements like “same VM, two private ranges, no extra NIC, no gateway,” think: subnet secondary ranges + alias IPs. Also remember alias IPs are internal-only and are commonly used for multi-IP workloads, container networking, and service IPs, aligning with the Google Cloud Architecture Framework principle of choosing managed, native primitives that reduce operational complexity.
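
A short sketch, assuming the subnet is named app-subnet and the VM app-vm-1; the secondary-range name and the specific alias IP are also illustrative:

  # Add the legacy range as a secondary range on the existing subnet.
  gcloud compute networks subnets update app-subnet --region=europe-west1 \
      --add-secondary-ranges=legacy-range=192.168.70.0/24
  # Assign an alias IP from that range to the VM's existing (single) NIC.
  gcloud compute instances network-interfaces update app-vm-1 \
      --zone=europe-west1-b \
      --aliases="legacy-range:192.168.70.10/32"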


Question 6

You are deploying an internal-only metrics ingestion HTTP endpoint on a Compute Engine VM named collector-01 in zone us-central1-b within the project analytics-prd. The VM has no external IP and must be reachable only by multiple client VMs in the same VPC network, and you need a simple, built-in way for those clients to obtain the service’s IP address without creating public DNS records or exposing the service. What should you do?

Incorrect. Reserving a static external IP and using an external HTTP(S) load balancer creates an internet-reachable frontend by design. Even if backends are private, the forwarding rule is public and violates the requirement that the service be internal-only and not exposed. It also adds unnecessary cost and complexity compared to built-in internal DNS for VM discovery.

Correct. Compute Engine provides internal DNS that allows VMs in the VPC to resolve an instance’s internal FQDN to its private IP address. Using https://collector-01.us-central1-b.c.analytics-prd.internal/ is a simple, built-in service discovery mechanism that does not require public DNS records, does not expose the service externally, and works naturally for clients in the same VPC.

Incorrect. This option explicitly creates a public DNS A record and uses an external HTTP(S) load balancer with a static external IP. That contradicts the requirement to avoid public DNS records and to not expose the service. Even if access is restricted at the firewall, the endpoint is still publicly addressed and increases attack surface and operational overhead.

Incorrect. A short alias like https://metrics/v1/ is not something Compute Engine automatically provides. Default search domains are not guaranteed to resolve arbitrary hostnames unless you configure custom DNS (for example, a private Cloud DNS zone with an A record for “metrics”). Without that configuration, clients will fail to resolve the name, so it is not a reliable built-in solution.

Question Analysis

Core Concept: This question tests knowledge of Google Cloud’s built-in name resolution for private resources: Compute Engine internal DNS (provided by Cloud DNS in “internal” mode for Google-managed zones). It’s a managed network service that lets VMs discover other VMs by name using private RFC1918 addresses, without creating public DNS records.

Why the Answer is Correct: The VM collector-01 has no external IP and must be reachable only by clients in the same VPC. The simplest built-in way for clients to obtain the service IP is to use the instance’s internal fully qualified domain name (FQDN), which resolves to the VM’s internal IP from within the VPC. The internal DNS name format includes the instance name, zone, and project-specific internal domain (…c.<project-id>.internal). Clients can call https://collector-01.us-central1-b.c.analytics-prd.internal/ and rely on Google-provided internal DNS resolution. This keeps the endpoint private and avoids managing Cloud DNS public zones or exposing any external load balancer.

Key Features / Best Practices:
- Compute Engine provides internal DNS resolution for instances on VPC networks; names resolve to internal IPs and are only resolvable from within the VPC (and, depending on configuration, from connected networks via Cloud VPN/Interconnect with DNS forwarding).
- This approach aligns with the Google Cloud Architecture Framework principles of security (no public exposure) and operational excellence (managed service discovery without extra components).
- If you need a stable name independent of VM lifecycle, you could later add a private Cloud DNS zone and records, but the question asks for “simple, built-in” and “no public DNS records.”

Common Misconceptions: A and C look attractive because load balancers and static IPs provide stable endpoints, but they are external HTTP(S) load balancers with external IPs—contradicting “internal-only” and adding unnecessary exposure risk and cost. D assumes generic search domains and a short alias like “metrics,” which is not a guaranteed or default resolvable name in GCE; it would require custom DNS configuration.

Exam Tips:
- For private VM-to-VM discovery inside a VPC, think “Compute Engine internal DNS” first.
- For private service names not tied to instance naming/zone, think “Cloud DNS private zone.”
- For private L7 load balancing, think “Internal HTTP(S) Load Balancer,” not external.
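
From a client VM in the same VPC, discovery needs nothing beyond the Google-provided internal name; the commands below are a simple illustration (the URL path and TLS setup depend on how the collector service is actually exposed):

  # Resolve the instance's internal FQDN to its private IP.
  dig +short collector-01.us-central1-b.c.analytics-prd.internal
  # Call the metrics endpoint directly by name; no public DNS record is involved.
  curl https://collector-01.us-central1-b.c.analytics-prd.internal/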

Question 7
(Select 2)

Your company uses a third-party virtual firewall appliance to inspect egress traffic. In a spoke VPC in us-central1, you configured a custom default route 0.0.0.0/0 (priority 100) that sends all egress to the firewall in a hub VPC via next hop instance. None of the VM instances in the spoke subnets have external IP addresses. The data team needs VM instances in subnet analytics-subnet (10.50.1.0/24) to access the Cloud Storage JSON API and the Cloud Logging API directly without traversing the firewall, but all other internet-bound traffic must continue to use the firewall route. You must not assign public IPs to any VMs and must retain the existing 0.0.0.0/0 to the firewall for general egress. Which two actions should you take? (Choose two.)

Correct. Private Google Access is enabled at the subnet level and allows VMs without external IPs to reach Google APIs/services. Enabling it on analytics-subnet satisfies the requirement that only that subnet’s VMs access Cloud Storage JSON API and Cloud Logging API directly, without needing public IPs. It is a foundational requirement before those VMs can use Google APIs VIPs successfully.

Incorrect. Private Google Access is not enabled at the VPC level; it is configured per subnet. The exam often tests this scope detail. While organization policies and DNS can be centralized, the PGA on/off switch is a subnet attribute. Therefore, this option is not a valid configuration action in Google Cloud.

Incorrect. Private Services Access is used to access managed services privately over VPC peering (for example, Cloud SQL private IP) and is unrelated to accessing Google APIs like Cloud Storage JSON API or Cloud Logging API. It would not help bypass the firewall for Google APIs, nor does it address the routing requirement described.

Correct. Creating more-specific routes for the Google APIs VIP ranges to the default internet gateway ensures those destinations bypass the 0.0.0.0/0 next-hop-instance route to the firewall. Because longest prefix match wins over a /0 route, only Google API traffic is carved out; all other internet-bound traffic still follows the existing default route to the firewall.

Incorrect. Google APIs are not reached via internal RFC1918 addresses in your VPC; they are accessed via well-known VIP ranges (e.g., 199.36.153.4/30 and 199.36.153.8/30) when using private/restricted Google access. Routing to “internal IP addresses of Google APIs” via the default internet gateway is not a valid or meaningful configuration and would not achieve the goal.

Question Analysis

Core concept: This question tests how to let VMs without external IPs reach Google APIs privately while still forcing general internet egress through a third-party firewall using custom routes. The key services are Private Google Access (PGA) and VPC routing/route priority behavior.

Why the answer is correct:
1) Enable Private Google Access on analytics-subnet. PGA allows VMs that have only internal IPs to reach Google APIs and services via Google’s network by targeting Google APIs VIPs (e.g., restricted.googleapis.com or private.googleapis.com). Without PGA, those VMs cannot reach Google APIs unless they have external IPs or use a proxy/NAT path.
2) Add more-specific static routes for Google APIs VIP ranges to the default internet gateway. You must keep the existing 0.0.0.0/0 route (priority 100) pointing to the firewall. In Google Cloud routing, the most specific (longest prefix) route wins before priority. Therefore, adding routes for the Google APIs VIP(s) (more specific than /0) will override the default route and send only that traffic directly to the internet gateway, bypassing the firewall, while all other destinations still match 0.0.0.0/0 and continue to traverse the firewall.

Key features / configurations:
- Private Google Access is enabled per subnet (not per VPC). Apply it only to analytics-subnet to limit scope.
- Use the documented Google APIs VIP ranges (private.googleapis.com 199.36.153.8/30 and/or restricted.googleapis.com 199.36.153.4/30) and create custom routes to the default internet gateway. Ensure DNS for the subnet resolves Google APIs to the intended VIP (often via Cloud DNS policy or on-prem DNS forwarding rules).
- This design aligns with the Google Cloud Architecture Framework (security and network design): least privilege routing (only carve out what’s needed) and centralized inspection for everything else.

Common misconceptions:
- Private Services Access is for private connectivity to services using Private Service Connect/peering (e.g., Cloud SQL) and does not enable Google APIs access.
- “Internal IP addresses of Google APIs” is incorrect; Google APIs are reached via VIPs, not internal RFC1918 addresses.

Exam tips:
- Remember route selection order: longest prefix match first, then priority.
- PGA is subnet-level. If a question says “some subnets need Google APIs without external IPs,” think PGA + DNS + specific routes (when a default route to a firewall/proxy exists).
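
A minimal sketch of the two actions, assuming the spoke VPC is named prod-spoke; the route name is also an assumption, and the restricted.googleapis.com range (199.36.153.4/30) can be carved out the same way:

  # Action 1: enable Private Google Access on the one subnet that needs it.
  gcloud compute networks subnets update analytics-subnet \
      --region=us-central1 --enable-private-ip-google-access
  # Action 2: more-specific route for the private.googleapis.com VIP range;
  # longest prefix match wins over the 0.0.0.0/0 route to the firewall.
  gcloud compute routes create pga-private-vip \
      --network=prod-spoke --destination-range=199.36.153.8/30 \
      --next-hop-gateway=default-internet-gateway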

Question 8

Helios Labs operates three product squads—API, Batch, and Analytics—each requiring Compute Engine instances in us-central1, europe-west1, and asia-east1 (up to 20 VMs per squad per region) and separate projects for billing isolation and least-privilege access. Your 2-person network/security team must retain centralized control of IP space 10.64.0.0/12, including subnets, routes, and firewall rules, avoid overlapping subnets, and minimize operational overhead for inter-squad connectivity. How should you design the Google Cloud network topology?

Correct. A single Shared VPC host project centralizes IP space (10.64.0.0/12), subnet creation, routes, and firewall rules while allowing each squad to operate in its own service project for billing isolation and least privilege. Subnet-level IAM (e.g., Compute Network User) enables squads to deploy VMs without network admin rights. All workloads share one VPC routing domain, minimizing connectivity overhead and preventing overlapping CIDRs.

Incorrect. Three standalone VPCs plus full-mesh HA VPN adds significant operational overhead (multiple gateways, tunnels, routing, monitoring) and cost. It also does not provide centralized control of subnets and firewall rules across projects; each VPC remains independently administered. VPN is better suited for hybrid connectivity or when encryption over the public internet is required, not for routine intra-org east-west connectivity.

Incorrect. Using three separate Shared VPC host projects would still split the environment into three distinct VPC networks, each with its own routing domain, firewall rules, and subnet administration. While a central team could manually coordinate IP allocations across those hosts, this design does not provide the single authoritative network control plane or the simplest east-west connectivity model requested. Inter-squad communication would still require additional constructs such as VPC Network Peering or VPN, which increases operational overhead compared with placing all squads in one Shared VPC.

Incorrect. VPC Network Peering reduces some overhead versus VPN, but still creates separate routing/security domains with decentralized firewall and route management in each VPC—contrary to the requirement for centralized control. Peering also introduces design constraints (no transitive peering; careful route exchange planning) and does not inherently prevent overlapping subnets unless you enforce IPAM externally. Shared VPC is the cleaner governance model here.

Question Analysis

Core concept: This question tests Shared VPC design for multi-project environments, emphasizing centralized network governance (IPAM, subnets, routes, firewall policy) while enabling decentralized workload ownership. It also touches on minimizing operational overhead for east-west connectivity and preventing overlapping CIDRs.

Why the answer is correct: A single Shared VPC host project with 10.64.0.0/12 provides one authoritative IP space and one VPC routing domain. Attaching three service projects (API, Batch, Analytics) preserves billing isolation and least-privilege access at the project level while allowing all squads to deploy VMs into centrally controlled subnets in us-central1, europe-west1, and asia-east1. Because all instances are in the same VPC, inter-squad connectivity is native (no peering/VPN meshes), and the network/security team retains centralized control over subnets, routes, and firewall rules—exactly the stated requirements.

Key features / best practices:
- Shared VPC host project owns the VPC network, subnets, routes, and firewall rules; service projects consume them.
- Subnet-level IAM delegation (e.g., Compute Network User on specific subnets) lets squads create instances without granting broad network admin rights.
- Centralized firewall governance can be implemented with hierarchical firewall policies (organization/folder) or host-project firewall rules, aligning with the Google Cloud Architecture Framework’s governance and security principles.
- A single IP plan (10.64.0.0/12) avoids overlap by design; regional subnets can be sized for up to 20 VMs per squad per region with growth buffer.

Common misconceptions: Teams often choose VPC peering or VPN to connect separate VPCs, but that increases operational overhead and fragments firewall/routing control. Peering also has transitivity limitations and can complicate centralized security posture.

Exam tips: When you see “separate projects for billing/least privilege” plus “centralized control of subnets/routes/firewalls” and “avoid overlap,” Shared VPC is the default pattern. Prefer a single Shared VPC when you want simple east-west connectivity and centralized governance; use peering/VPN primarily when you must keep separate routing domains or connect across organizations/hybrid environments.
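
A rough sketch of the Shared VPC wiring and subnet-level delegation; the project IDs, subnet name, and group address are assumptions:

  # Designate the host project and attach a squad's service project.
  gcloud compute shared-vpc enable helios-net-host
  gcloud compute shared-vpc associated-projects add helios-api-prj \
      --host-project=helios-net-host
  # Delegate only the subnets the squad needs via Compute Network User.
  gcloud compute networks subnets add-iam-policy-binding api-usc1 \
      --project=helios-net-host --region=us-central1 \
      --member=group:api-squad@example.com --role=roles/compute.networkUser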

Question 9

Your company runs a manufacturing plant with an on-premises Juniper SRX gateway in Tokyo (ASN 65010) that must exchange routes dynamically with a Google Cloud VPC in asia-northeast1. You have already created an HA VPN gateway in Google Cloud and a peer VPN gateway that points to the SRX public IPs. The on-prem network advertises 172.20.0.0/16, and the VPC will advertise 10.60.0.0/16 and 10.61.0.0/16, with the requirement that any future subnets be learned automatically without manual updates. What should you do next to complete the setup for dynamic routing?

Correct. Dynamic routing over Cloud VPN requires Cloud Router with BGP. Create the Cloud Router in asia-northeast1, then create two tunnels on the HA VPN gateway (one per interface) and configure a BGP session per tunnel peering to the on-prem ASN 65010. This enables automatic learning/advertising of current and future routes without manual updates and provides high availability.

Incorrect. Adding static routes contradicts the requirement to learn future subnets automatically. Static routes would require manual changes whenever new VPC subnets are created or on-prem prefixes change. While Cloud Router can exist alongside static routes, static routing is not “dynamic routing” and is not the intended solution for BGP-based HA VPN connectivity.

Incorrect. A second HA VPN gateway is unnecessary because HA VPN already includes two interfaces designed for redundancy with two tunnels. Also, you generally do not create VPC firewall rules to allow BGP (tcp/179) to Cloud Router because Cloud Router is a managed control-plane service, not a VM endpoint in your VPC that receives BGP packets directly.

Incorrect. Enabling global dynamic routing may be useful if you need routes learned in asia-northeast1 to be available in other regions, but it does not replace the required Cloud Router + BGP configuration. Creating a second HA VPN gateway is also unnecessary. The missing core step is establishing BGP sessions via Cloud Router on the existing HA VPN gateway tunnels.

Question Analysis

Core Concept: This question tests dynamic routing for hybrid connectivity using HA VPN + Cloud Router (BGP). In Google Cloud, dynamic route exchange over Cloud VPN requires a Cloud Router in the same region as the HA VPN gateway, with BGP sessions established over each VPN tunnel.

Why the Answer is Correct: You already have an HA VPN gateway and a peer VPN gateway (the on-prem Juniper SRX). To exchange routes dynamically (and learn future subnets automatically), you must use Cloud Router with BGP. The correct next steps are: create a Cloud Router in asia-northeast1 with a Google-side ASN (any private ASN that doesn’t conflict, e.g., 64514), create two VPN tunnels (one per HA VPN interface) to the on-prem peer, and configure BGP sessions (one per tunnel) peering to the SRX ASN 65010. Once BGP is up, the SRX can advertise 172.20.0.0/16 and Google can advertise VPC subnet routes (10.60.0.0/16, 10.61.0.0/16, and future subnets) without manual route changes.

Key Features / Best Practices:
- HA VPN provides two interfaces; best practice is two tunnels and two BGP sessions for redundancy.
- Cloud Router is a managed BGP speaker; it automatically advertises eligible VPC subnet routes and learns on-prem prefixes.
- Ensure BGP IPs (link-local /30s) are configured consistently on both sides and that Cloud Router is in the same region as the HA VPN gateway.
- Consider VPC dynamic routing mode (regional vs global). Global is only needed if you want learned routes to propagate across regions; it’s not required just to establish BGP in asia-northeast1.

Common Misconceptions:
- Thinking static routes satisfy “dynamic routing” (they don’t; they require manual updates for future subnets).
- Believing you need a second HA VPN gateway for HA (HA VPN already provides redundancy with two interfaces).
- Assuming firewall rules are needed for BGP to Cloud Router; Cloud Router is not a VM and BGP runs inside the VPN/IKE/IPsec construct.

Exam Tips: For “learn future subnets automatically” keywords, look for Cloud Router + BGP. For HA VPN, expect two tunnels and two BGP sessions in the same region. Only choose “global dynamic routing” when the question explicitly requires cross-region propagation of learned routes.
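
A condensed sketch for one of the two tunnels; the gateway, peer-gateway, and network names, the shared secret, and the 169.254.10.0/30 BGP addresses are assumptions (repeat with interface 1 and a second /30 for redundancy):

  gcloud compute routers create tokyo-rtr --network=prod-vpc \
      --region=asia-northeast1 --asn=64514
  gcloud compute vpn-tunnels create tun-0 --region=asia-northeast1 \
      --vpn-gateway=ha-gw --interface=0 \
      --peer-external-gateway=srx-peer --peer-external-gateway-interface=0 \
      --router=tokyo-rtr --ike-version=2 --shared-secret=EXAMPLE_SECRET
  gcloud compute routers add-interface tokyo-rtr --region=asia-northeast1 \
      --interface-name=if-tun-0 --vpn-tunnel=tun-0 \
      --ip-address=169.254.10.1 --mask-length=30
  gcloud compute routers add-bgp-peer tokyo-rtr --region=asia-northeast1 \
      --peer-name=srx-peer-0 --interface=if-tun-0 \
      --peer-ip-address=169.254.10.2 --peer-asn=65010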

Question 10
(Select 2)

Aurora Analytics ordered two 10-Gbps Dedicated Interconnect circuits at the Equinix TY2 facility in Tokyo under project net-prod-123 and named the first interconnect aurora-ty2-ic1; you must provide the Letter of Authorization/Connecting Facility Assignment (LOA-CFA) to the colocation provider within 5 business days to complete the cross-connect. During the order, you specified noc-aurora@aurora.example as the NOC contact email. Which two actions will allow you to obtain the LOA-CFA documents? (Choose two.)

Incorrect. Opening a Cloud Support case is not the standard or required method to obtain an LOA-CFA. In normal Dedicated Interconnect provisioning, the LOA-CFA is available through self-service mechanisms (Console download and email to the NOC contact). Support may help only if there is an abnormal situation (e.g., LOA-CFA not generated, access issues, or incorrect contact details), but it’s not one of the primary actions expected on the exam.

Correct. The Google Cloud Console provides the LOA-CFA for a Dedicated Interconnect under the Hybrid Connectivity/Cloud Interconnect section. You select the specific interconnect resource (such as aurora-ty2-ic1) and download the LOA-CFA document needed by the colocation provider to complete the cross-connect. This is a common operational step and aligns with self-service provisioning workflows for Dedicated Interconnect.

Incorrect. The gcloud compute interconnects describe command returns resource metadata (state, location, link type, etc.) but does not directly download a LOA-CFA PDF as part of the standard CLI output. While APIs/console can expose LOA-CFA retrieval, the described command is not the typical or documented method to download the LOA-CFA document itself for cross-connect ordering.

Correct. Google sends the LOA-CFA to the NOC contact email specified during the Dedicated Interconnect order. Since noc-aurora@aurora.example was provided as the NOC contact, checking that inbox is an appropriate way to obtain the LOA-CFA within the required timeline. This is why using a monitored NOC distribution list is a best practice for interconnect operations.

Incorrect. Google does not generally send the LOA-CFA directly to the colocation provider as a fully hands-off process. The customer typically must provide the LOA-CFA to the provider to authorize and instruct the cross-connect. Telling the provider no action is needed risks missing the 5-business-day window and delaying circuit turn-up, impacting project timelines and redundancy goals.

Question Analysis

Core Concept: This question tests operational steps for provisioning Dedicated Cloud Interconnect, specifically how to retrieve the Letter of Authorization/Connecting Facility Assignment (LOA-CFA). The LOA-CFA is required by the colocation provider to complete the physical cross-connect between your router and Google’s interconnect port at a facility (here, Equinix TY2). This is part of hybrid connectivity lifecycle management.

Why the Answer is Correct: You can obtain LOA-CFA documents in two standard ways. First, Google Cloud Console provides the LOA-CFA for the specific interconnect resource under Hybrid Connectivity (Cloud Interconnect), where you can download the document tied to the interconnect (e.g., aurora-ty2-ic1). Second, Google sends the LOA-CFA to the NOC contact email specified during ordering; checking that inbox is a valid retrieval method. These align with the expected workflow: once the interconnect is provisioned/ready for cross-connect, the LOA-CFA becomes available and is distributed to the operational contact.

Key Features / Best Practices: Dedicated Interconnect requires coordination among three parties: customer, colocation provider, and Google. The LOA-CFA contains details such as the meet-me room/cage information and port identifiers needed for the cross-connect order. Best practice is to ensure the NOC email is a monitored distribution list and to download/store LOA-CFAs in a controlled repository for auditability. From a Google Cloud Architecture Framework perspective, this supports Operational Excellence (repeatable provisioning processes) and Reliability (timely completion of physical connectivity to meet redundancy/availability goals, especially with dual circuits).

Common Misconceptions: It’s tempting to assume Cloud Support must be contacted (A), but LOA-CFAs are normally self-service via Console/email unless there is an exception (lost access, provisioning issues). It’s also common to assume Google sends LOA-CFA directly to the colocation provider (E); in practice, the customer typically provides it to the provider to initiate the cross-connect.

Exam Tips: For Dedicated Interconnect, remember the sequence: order ports -> receive/download LOA-CFA -> place cross-connect with colocation -> configure VLAN attachments (interconnect attachments) -> BGP. When asked “how to obtain LOA-CFA,” think “Console download” and “NOC email.”
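
As the explanation above notes, gcloud compute interconnects describe does not return the LOA-CFA document itself, but it is useful for checking the interconnect's provisioning state while you retrieve the LOA-CFA from the Console or the NOC mailbox; a small illustration:

  # Shows state, location, and link details for the ordered interconnect.
  gcloud compute interconnects describe aurora-ty2-ic1 --project=net-prod-123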

Success Stories (5)

E********** · Nov 25, 2025

Preparation period: 1 month

I really appreciated the detailed explanations. This app strengthened my fundamentals more than any video course.

혜** · Nov 17, 2025

Preparation period: 1 month

I finished all the questions, but my accuracy was only 65%, so I reset and went through them one more time. Because I focused on actually learning the concepts rather than memorizing questions and answers, it took a lot of study, but on the real exam I was able to handle similar question types and even scenarios I had never seen before. To everyone preparing, study well and I hope you pass!!

S*********** · Nov 6, 2025

Preparation period: 1 month

I was surprised how similar the question style was to the actual PCNE exam. Practicing with this app made complex topics like VPC peering and NAT configuration much easier. Passed and I’m really satisfied.

A************ · Oct 25, 2025

Preparation period: 2 weeks

I spent two weeks solving about 30 questions a day, and Cloud Pass helped me reinforce my weak spots in hybrid networking and load balancing strategies. This app is a must-have for anyone preparing for PCNE.

R********* · Oct 17, 2025

Preparation period: 1 month

Good questions and similar to the real exam questions. The app is a very helpful tool.

← See all Google Professional Cloud Network Engineer questions

Start practicing

Download Cloud Pass and start practicing with all the Google Professional Cloud Network Engineer questions.
