
Simulate the real exam with 50 questions and a 120-minute time limit. Study with AI-verified answers and detailed explanations.
AI-powered
Every answer is cross-validated across three leading AI models to ensure the highest accuracy, with detailed per-option explanations and in-depth question analysis.
In your company’s Google Cloud organization, the qa-analytics-123 project contains 12 custom IAM roles that you must replicate exactly (same role IDs, titles, and permissions) to a new staging project, stg-analytics-123, in the same organization. You want to accomplish this in the fewest possible steps, without promoting the roles to the organization level and without manually re-selecting permissions. What should you do?
Correct. gcloud iam roles copy is intended to duplicate a custom role definition from one parent (project) to another. It preserves the permissions and metadata, and you can keep the destination at the project level (stg-analytics-123) to meet the requirement of not promoting roles to the organization. It also avoids manually re-selecting permissions, minimizing errors and steps.
Incorrect. Copying roles to the organization would change the scope from project-level to org-level, making the roles available to all projects. The question explicitly says not to promote the roles to the organization level. While org-level roles can be useful for standardization, it violates the stated constraint and is not the most appropriate action here.
Incorrect. The Google Cloud Console does not reliably provide a simple “clone/copy role from another project with the same role ID” workflow that meets the “replicate exactly” and “fewest steps” requirements. Even if you can base a role on an existing one, you typically still end up editing and saving per role, and exact ID preservation across projects is not the primary console path.
Incorrect. Manually creating 12 roles and re-selecting permissions is time-consuming and error-prone, and it directly contradicts the requirement to avoid manual permission selection. This approach increases the risk of configuration drift between QA and staging, and it is not aligned with best practices for repeatable IAM configuration.
Core concept: This question tests IAM custom role management and portability. Custom roles are scoped either to a project or an organization. Project-level custom roles live under projects/PROJECT_ID/roles/ROLE_ID and are not automatically available in other projects. To replicate roles “exactly” (same role IDs, titles, permissions) into another project with minimal effort, use the gcloud CLI role copy capability.

Why the answer is correct: Option A uses gcloud iam roles copy to copy each custom role definition from qa-analytics-123 to stg-analytics-123. This is the fewest-step approach that preserves the role definition without manually reselecting permissions, and it keeps the roles at project scope, satisfying the requirement not to promote them to the organization level. The copy operation is designed for exactly this use case: duplicating an existing custom role into a different parent resource (another project).

Key features / best practices:
- Custom roles are resources with metadata (roleId, title, description, stage) and a permissions list. Copying avoids human error and ensures parity between environments (QA vs. staging).
- You need appropriate permissions, such as iam.roles.get on the source project and iam.roles.create (and potentially iam.roles.update) on the destination project.
- Role IDs must be unique within the destination project, so copying “exactly” implies the destination project does not already have roles with the same IDs.
- From an Architecture Framework perspective (Security, Reliability, Operational Excellence), using automation/CLI reduces drift and improves repeatability across environments.

Common misconceptions: Some assume roles must be elevated to the organization to be shared across projects (option B); that is unnecessary and violates the requirement. Others look for a Console “clone role” workflow (option C), but the Console typically requires creating and editing roles one at a time and does not provide a bulk clone-from-project feature that preserves IDs without manual steps.

Exam tips:
- Remember the scope boundary: project custom roles are not reusable across projects unless copied or recreated.
- Prefer gcloud/automation for repeatable IAM configuration and to avoid permission-selection mistakes.
- If a question says “fewest steps” and “replicate exactly,” look for a direct copy/export-import mechanism rather than manual recreation.
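The copy workflow above can be sketched with the gcloud CLI. The project IDs come from the scenario; the loop and the `--format` filter are illustrative assumptions, not part of the question:

```shell
# Copy every custom role from the QA project to staging, keeping the same
# role IDs. The loop and format filter are an illustrative sketch.
for ROLE_ID in $(gcloud iam roles list \
    --project=qa-analytics-123 \
    --format="value(name.basename())"); do
  gcloud iam roles copy \
      --source="projects/qa-analytics-123/roles/${ROLE_ID}" \
      --destination="${ROLE_ID}" \
      --dest-project=stg-analytics-123
done
```

Note that `gcloud iam roles copy` may warn about permissions that are not supported in custom roles at the destination; review any such warnings before relying on the copied roles.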
You plan to host a geospatial tile-rendering backend on Compute Engine; traffic averages 150 concurrent render jobs but spikes to 1,500 during quarterly map releases, and customers must be able to submit jobs 24/7 without interruption or CPU throttling. Your SLO requires 99.9% availability across at least two zones in us-central1, and you want to follow Google-recommended practices so capacity scales automatically without manual intervention. What should you do?
Multiple standalone VMs with a CPU alert threshold still requires manual action to add capacity (or scripting not mentioned). Standalone instances also lack managed autohealing and consistent rollout via instance templates. It doesn’t address the requirement for Google-recommended automatic scaling practices, and it doesn’t explicitly ensure multi-zone high availability across at least two zones in us-central1.
Replacing VMs with high-CPU instances is a vertical scaling approach that is slower and potentially disruptive (instance replacement/restart). It still relies on manual intervention and doesn’t inherently provide multi-zone resilience or managed autohealing. Vertical scaling also won’t handle a 10x spike as effectively as horizontal scaling with autoscaling instance counts.
Using an instance group is closer, but the option says you increase instances “whenever you see high CPU utilization,” which is manual and violates the requirement for automatic scaling without intervention. It also doesn’t specify spanning two zones; a zonal instance group would not meet the SLO requirement for availability across at least two zones in us-central1.
A managed instance group spanning two zones (regional MIG) is the recommended Compute Engine pattern for high availability and elasticity. Autoscaling based on average CPU utilization automatically adds/removes instances to match demand spikes without manual work. MIGs also support autohealing, rolling updates, and consistent configuration via instance templates, aligning with Google best practices and the stated 99.9% multi-zone SLO.
Core Concept: This question tests Compute Engine scalability and high availability using Managed Instance Groups (MIGs), multi-zone deployment, and autoscaling. It also implicitly tests avoiding manual intervention while meeting an availability SLO.

Why the Answer is Correct: A regional (multi-zone) Managed Instance Group in us-central1 distributes instances across at least two zones, so a single-zone failure does not take the service down, supporting a 99.9% availability target when paired with proper health checks and load balancing. Autoscaling based on average CPU utilization automatically adds instances during quarterly spikes (1,500 concurrent jobs) and scales back during normal load (150 jobs), meeting the requirement for automatic capacity management without manual intervention.

Key Features / Configurations:
1) Regional MIG spanning two (or more) zones in us-central1 to satisfy “across at least two zones.”
2) Autoscaler policy using average CPU utilization (or a custom metric such as queue depth if jobs are queued) to scale out and in automatically.
3) Instance template to ensure consistent VM configuration and rapid, repeatable scaling.
4) Health checks and autohealing so unhealthy VMs are recreated automatically.
5) Standard on-demand (non-preemptible) VMs to avoid interruption; avoid shared-core machine types if “no CPU throttling” is a hard requirement.

Common Misconceptions: Options that mention standalone instances or adding capacity “when you see high CPU” imply manual operations, which violates the requirement for automatic scaling. Simply resizing to high-CPU machines does not absorb sudden spikes well and can still leave you underprovisioned or cause disruptive replacements.

Exam Tips: When you see (1) variable traffic, (2) a need for automatic scaling, and (3) multi-zone availability requirements, the default recommended pattern is a Managed Instance Group (often regional) with autoscaling and autohealing. For job-processing workloads, consider whether CPU is the best scaling signal; on the ACE exam, however, “autoscaling based on average CPU utilization” is the canonical answer unless queue-based metrics are explicitly introduced.
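A minimal sketch of the regional MIG with CPU-based autoscaling follows; the resource names, machine type, image, sizes, and the 60% CPU target are illustrative assumptions:

```shell
# Instance template: consistent VM configuration for the render workers.
# Machine type and image are illustrative assumptions.
gcloud compute instance-templates create render-template \
    --machine-type=c2-standard-8 \
    --image-family=debian-12 --image-project=debian-cloud

# Regional MIG spanning two zones in us-central1.
gcloud compute instance-groups managed create render-mig \
    --region=us-central1 \
    --zones=us-central1-a,us-central1-b \
    --template=render-template \
    --size=10

# Autoscale on average CPU utilization; min/max and target are assumptions.
gcloud compute instance-groups managed set-autoscaling render-mig \
    --region=us-central1 \
    --min-num-replicas=10 \
    --max-num-replicas=100 \
    --target-cpu-utilization=0.6
```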
You manage two named gcloud configurations on your workstation—dev-eu (active) and prod-us (inactive). Without changing the active configuration or modifying your kubeconfig, you must verify which Google Kubernetes Engine (GKE) cluster (project ID, location, and cluster name) is configured in prod-us using the fewest possible commands. What should you do?
Correct. `gcloud config configurations describe prod-us` inspects the stored properties of the named configuration without activating it. That satisfies the requirement to leave the active configuration unchanged and does not touch kubeconfig at all. While it does not normally expose a GKE cluster name directly, it is still the best and fewest-command way to verify the project and location settings associated with the prod-us environment.
Incorrect. `gcloud config configurations activate prod-us` changes the active configuration, which the question explicitly forbids. Although `gcloud config list` would then show that configuration’s properties, this approach violates the requirement and uses more commands than necessary. On the exam, any option that changes state when the prompt requires read-only inspection should be ruled out.
Incorrect. `kubectl config get-contexts` reads Kubernetes contexts from kubeconfig, not gcloud named configurations. The question asks about the inactive gcloud configuration prod-us, so kubeconfig is the wrong source of truth for this task. Even though this command is read-only, it does not directly tell you what properties are set in prod-us.
Incorrect. `kubectl config use-context` changes the current Kubernetes context, which modifies kubeconfig state and violates the requirement. In addition, kubectl contexts are separate from gcloud named configurations, so switching contexts does not reveal what is stored in prod-us. This option both changes state and consults the wrong configuration system.
Core Concept: This question tests the difference between gcloud named configurations and kubectl kubeconfig contexts. A gcloud configuration stores CLI properties such as project, compute region, and compute zone, and you can inspect an inactive configuration directly without activating it. kubectl commands read kubeconfig, which is separate from gcloud named configurations.

Why the Answer is Correct: The requirement is to inspect the inactive prod-us configuration without changing the active configuration and without modifying kubeconfig. The command `gcloud config configurations describe prod-us` is the only option that directly reads that named configuration in a single, read-only step. It lets you verify the project and location-related properties associated with prod-us, which is the closest available way to identify the intended GKE environment from gcloud configuration alone.

Key Features / Best Practices:
- Use `gcloud config configurations describe NAME` to inspect a non-active configuration safely.
- Remember that gcloud configurations and kubeconfig serve different purposes and are managed independently.
- A GKE cluster name is typically not stored as a standard gcloud configuration property; it is usually reflected in kubeconfig contexts after running `gcloud container clusters get-credentials`.

Common Misconceptions: A common mistake is assuming kubectl contexts tell you what an inactive gcloud configuration contains. Another is assuming a gcloud configuration inherently stores a GKE cluster name; in practice, it usually stores project and location defaults, not the selected cluster itself. Activating a configuration just to inspect it is unnecessary when a read-only describe command exists.

Exam Tips:
- If a question says not to change the active configuration, avoid `activate`.
- If it says not to modify kubeconfig, avoid commands that switch kubectl context.
- Choose the minimal read-only command that inspects the requested configuration directly.
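The read-only inspection looks like this, using the configuration name from the scenario; the second command is an equivalent read-only view via gcloud's global `--configuration` flag:

```shell
# Read the stored properties of the inactive configuration without
# activating it or touching kubeconfig.
gcloud config configurations describe prod-us

# Equivalent read-only view, including unset defaults, via the global flag.
gcloud config list --all --configuration=prod-us
```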
You run three Compute Engine virtual machines in the us-central1 region that listen for TCP traffic on port 25565 and must enforce per-client IP rate limits. You need to make this service publicly accessible on the internet via a Google Cloud load balancer while preserving the original client source IP; which load balancer should you use?
Correct. The external TCP/UDP Network Load Balancer (passthrough) is a regional L4 load balancer that forwards packets directly to backends without terminating the connection. Because it is not a proxy, the backend VM receives the original client source IP, enabling per-client IP rate limiting. It supports arbitrary TCP ports such as 25565 and makes the service publicly reachable via an external forwarding rule.
Incorrect. A TCP Proxy Load Balancer is a proxy-based external load balancer. It terminates the client TCP session at Google’s edge and then opens a separate connection to the backend. As a result, the backend typically sees a Google proxy source IP rather than the real client IP, which breaks per-client IP rate limiting based on source address at the VM.
Incorrect. An SSL Proxy Load Balancer is also proxy-based and is intended for SSL/TLS termination and proxying at L4. Like TCP Proxy, it does not preserve the original client source IP at the network layer because the proxy establishes a new connection to the backend. It’s useful for offloading TLS, but not for source-IP-based rate limiting on the VM.
Incorrect. An internal TCP Network Load Balancer is not internet-facing; it provides load balancing for private RFC1918 traffic within a VPC (and connected networks). The question explicitly requires the service to be publicly accessible on the internet, so an internal load balancer cannot meet the accessibility requirement even though it is L4.
Core Concept: This question tests Google Cloud load balancing types and, specifically, whether the original client source IP is preserved to the backend. Preserving the source IP is critical when backends must enforce per-client IP rate limits at the application or host firewall level.

Why the Answer is Correct: An external TCP/UDP Network Load Balancer (passthrough) is a regional, L4 load balancer that forwards packets to backend VMs without proxying the connection. Because it is passthrough, the backend VM sees the real client source IP rather than a Google proxy IP, enabling accurate per-client IP rate limiting on your Compute Engine instances listening on TCP port 25565. It also meets the requirement to be publicly accessible on the internet.

Key Features / Configuration Notes:
- Use an external passthrough Network Load Balancer with a forwarding rule for TCP:25565 in us-central1.
- Backends are typically an unmanaged (or managed) instance group with a health check.
- Because it is regional, it aligns with “three VMs in us-central1.”
- Source IP preservation is inherent to the passthrough NLB; no special headers are required.
- Ensure firewall rules allow traffic from the load balancer and health check ranges as required.

Common Misconceptions: Many candidates choose the TCP Proxy or SSL Proxy load balancer because they are external and support TCP. However, those are proxy-based: they terminate the client connection at Google’s edge and create a new connection to the backend, so the backend does not see the original client IP at the network layer. While some proxy load balancers can convey the client IP via the PROXY protocol, the exam’s standard expectation for preserving the source IP for rate limiting is a passthrough Network Load Balancer.

Exam Tips: When you see requirements like “preserve original client source IP” or “backend must see client IP,” think passthrough (Network Load Balancer) rather than proxy (TCP/SSL Proxy, HTTP(S)). Also map the scope: external + regional L4 + non-HTTP custom port strongly points to the external TCP/UDP Network Load Balancer (passthrough).
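A minimal sketch of the passthrough setup using a target pool follows; resource and VM names are illustrative assumptions (a backend-service-based external passthrough NLB is the more modern equivalent):

```shell
# Reserve a regional external IP for the public endpoint.
gcloud compute addresses create game-ip --region=us-central1

# Target pool holding the three existing VMs (names are illustrative).
gcloud compute target-pools create game-pool --region=us-central1
gcloud compute target-pools add-instances game-pool \
    --instances=vm-1,vm-2,vm-3 \
    --instances-zone=us-central1-a

# Passthrough forwarding rule on TCP:25565; clients' source IPs reach the VMs.
gcloud compute forwarding-rules create game-fr \
    --region=us-central1 \
    --address=game-ip \
    --ip-protocol=TCP \
    --ports=25565 \
    --target-pool=game-pool
```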
Your telemedicine platform must retain daily anonymized diagnostic image bundles in Cloud Storage for 24 months (about 2 TB per month, ~100 GB per object). Compliance reviewers read only a small subset during quarterly audits (4 times per year). To minimize total cost while allowing occasional reads, which Cloud Storage class should you choose?
Coldline Storage is optimized for data accessed at most once per quarter, aligning with quarterly compliance audits. It offers lower storage cost than Nearline/Standard, with higher retrieval charges that are acceptable when only a small subset is read infrequently. The 24-month retention exceeds Coldline’s 90-day minimum storage duration, so early deletion penalties are not a concern, minimizing total cost.
Nearline Storage is intended for data accessed less than once per month. While it supports infrequent access, it is typically more expensive to store than Coldline. Given the stated access pattern (quarterly) and long retention (24 months), Nearline usually results in higher total storage cost without providing a meaningful benefit, unless reads were closer to monthly or operational constraints required lower retrieval costs.
Regional Storage refers to Standard storage in a single region (not an infrequent-access class). It is designed for frequently accessed data with low latency in that region. For 24-month retention with only occasional reads, Regional/Standard storage would be unnecessarily expensive compared to Coldline because you pay higher per-GB storage rates even though the data is rarely accessed.
Multi-Regional Storage is Standard storage replicated across multiple regions (legacy concept; commonly replaced by multi-region/dual-region location choices). It targets high availability and low latency for globally accessed, active content. For long-term retention with quarterly reads, it is the most expensive option among those listed due to higher storage costs, and it does not reduce retrieval charges enough to justify the premium.
Core Concept: This question tests Google Cloud Storage (GCS) storage classes and how to choose the lowest total cost based on access frequency and retention. In GCS, cost is a combination of the storage price per GB-month, retrieval (read) charges, and early-deletion charges tied to each class’s minimum storage duration.

Why the Answer is Correct: Coldline Storage is designed for data accessed at most once per quarter, which matches the stated pattern: compliance reviewers read only a small subset during quarterly audits (4 times/year). The data must be retained for 24 months, which comfortably exceeds Coldline’s 90-day minimum storage duration, avoiding early-deletion penalties. With ~2 TB/month ingested and large objects (~100 GB each), the platform benefits from Coldline’s lower storage cost compared with Nearline/Regional/Multi-Regional, while still allowing occasional reads when audits occur.

Key Features / Best Practices:
- Drive storage class selection by access frequency: Standard (Regional/Multi-Regional) for frequent access, Nearline for roughly monthly, Coldline for roughly quarterly, Archive for yearly or rarer.
- Coldline has higher retrieval and operation costs than Nearline/Standard, but those are outweighed when reads are infrequent.
- Use Object Lifecycle Management to enforce retention and optionally transition objects (e.g., Standard -> Coldline) if ingestion workflows briefly require “hot” storage.
- Consider bucket location (region/dual-region/multi-region) separately from storage class based on compliance and latency; this question focuses on class for cost.

Common Misconceptions:
- Nearline can seem appropriate because audits happen multiple times per year, but Nearline is optimized for about once-per-month access; with quarterly reads, Coldline is typically cheaper overall.
- Regional/Multi-Regional are often mistaken for infrequent-access “classes”; they are Standard storage offerings with higher storage cost intended for active data and availability/latency needs.

Exam Tips: Memorize the access-frequency mapping: Nearline (~monthly), Coldline (~quarterly), Archive (~yearly). Always check the minimum storage duration (Nearline 30 days, Coldline 90 days, Archive 365 days) and factor in retrieval charges. For long retention with rare reads, pick the coldest class that still matches access needs and avoids early-deletion penalties.
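A sketch of the bucket setup follows; the bucket name and location are illustrative assumptions, and the 730-day lifecycle rule mirrors the 24-month retention:

```shell
# Create a Coldline bucket for the image bundles (name/location assumed).
gcloud storage buckets create gs://telemed-diagnostic-archive \
    --location=us-central1 \
    --default-storage-class=COLDLINE

# Optional lifecycle rule: delete objects once the 24-month window elapses.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 730}}
  ]
}
EOF
gcloud storage buckets update gs://telemed-diagnostic-archive \
    --lifecycle-file=lifecycle.json
```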
Your broadcast company runs 18 transcoding servers on-premises in subnet 172.20.32.0/20 inside a locked-down production VLAN. These servers must read and write objects to Cloud Storage buckets in the prod-media project for daily ingest (up to 800 GB/day), but company policy forbids assigning public IPs to the servers and blocks any outbound internet access; only site-to-site connectivity over an existing Cloud VPN (up to 1 Gbps) is allowed, with a Dedicated Interconnect scheduled within 60 days. Security requires that all Google API traffic stay on private paths and be restricted to Google-controlled IP ranges, and you may not deploy public NAT egress or rely on hard-coded public IPs. Following Google-recommended practices, how should you provide the on-prem servers with access to Cloud Storage while keeping them off the public internet?
Incorrect. This uses public internet egress and depends on resolving and allowlisting public IPs for storage.googleapis.com, which can change and are not intended for static firewall rules. It violates the constraints: no outbound internet access, no reliance on hard-coded public IPs, and the requirement that Google API traffic stay on private paths restricted to Google-controlled ranges.
Incorrect. A proxy VM in Google Cloud does not automatically make traffic private to Google APIs; the proxy still needs a path to reach Cloud Storage endpoints. Without additional configuration, it typically uses public internet/NAT or external IPs, which is disallowed. It also adds operational overhead and becomes a bottleneck/failure domain, and it’s not the recommended pattern versus Restricted Google APIs.
Incorrect. Migrating servers to Compute Engine is unnecessary and does not address the core requirement for on-prem access. An internal load balancer cannot use storage.googleapis.com as a backend in this way; Cloud Storage is not a backend service you can attach to an internal TCP/HTTP(S) load balancer. This option reflects a misunderstanding of load balancing and Cloud Storage access patterns.
Correct. This is the standard hybrid design for private access from on-premises environments to supported Google APIs, including Cloud Storage, without using the public internet. By making the Restricted Google APIs VIP range 199.36.153.4/30 reachable over Cloud VPN or Interconnect and using DNS so the required API hostnames resolve through restricted.googleapis.com, the servers send traffic over private connectivity to Google-controlled addresses only. This satisfies the security requirements for no public NAT, no public IPs on the servers, and no reliance on changing public service IPs. Cloud Router is appropriate because it exchanges routes dynamically and supports the transition from VPN to Dedicated Interconnect cleanly.
Core concept: This question tests the recommended hybrid pattern for private access from on-premises systems to Google APIs such as Cloud Storage by using Restricted Google APIs over Cloud VPN or Interconnect. The goal is to keep traffic off the public internet, avoid public NAT or external IP dependencies, and constrain destinations to Google-controlled IP ranges.

Why the answer is correct: Option D matches Google-recommended practice for on-prem access to Google APIs: establish hybrid connectivity to a VPC, use Cloud Router for route exchange, make the restricted.googleapis.com VIP range (199.36.153.4/30) reachable over the private connection, and configure DNS so the needed Google API names resolve via restricted.googleapis.com or directly to the restricted VIP. This ensures requests to Cloud Storage travel over the VPN/Interconnect rather than the public internet, while limiting API access to the restricted set of Google services. It satisfies the requirements for no public IPs, no internet egress, and no hard-coded public service IP allowlists.

Key features/configurations:
1) Hybrid connectivity: Use the existing Cloud VPN now and Dedicated Interconnect later, with Cloud Router handling dynamic route exchange.
2) Routing: Ensure on-premises networks learn a route for 199.36.153.4/30 so traffic for restricted Google APIs is sent across the private link.
3) DNS: Use private DNS overrides or forwarding so the specific Google API hostnames the workload needs, such as the Cloud Storage endpoints, resolve to restricted.googleapis.com or the restricted VIP, while preserving valid TLS/SNI behavior.
4) Security posture: Traffic remains on private connectivity to Google-controlled VIPs, with no public NAT, no external IPs on the on-prem servers, and no dependence on changing public endpoint addresses.

Common misconceptions: A common mistake is to think that simply reaching a proxy VM in Google Cloud makes API access private; the proxy still needs a compliant path to Google APIs and often adds unnecessary complexity. Another misconception is that allowlisting the current public IPs of storage.googleapis.com is acceptable; those addresses are not meant to be treated as fixed customer-managed allowlists and would still require internet egress. Candidates may also overgeneralize DNS changes; the correct approach is targeted private resolution for Google API names, not an indiscriminate wildcard CNAME for all googleapis.com names.

Exam tips: When a question mentions on-premises hosts, no internet egress, no public IPs, and access to Google APIs only over private paths, think restricted.googleapis.com plus hybrid connectivity and route advertisement for 199.36.153.4/30. Remember that Private Google Access is primarily for Google Cloud resources without external IPs, while on-premises private API access is achieved through VPN or Interconnect with proper routing and DNS. Wording about Google-controlled IP ranges strongly points to the restricted VIP pattern rather than proxies or public endpoint allowlists.
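The route-advertisement step can be sketched with a Cloud Router custom advertisement, assuming illustrative router and peer names; on-premises DNS must separately resolve the required googleapis.com hostnames to the restricted VIP (199.36.153.4 through .7):

```shell
# Advertise the Restricted Google APIs VIP range (199.36.153.4/30) to
# on-premises over the existing VPN's BGP session. Router and peer names
# are illustrative assumptions.
gcloud compute routers update-bgp-peer hybrid-router \
    --peer-name=onprem-peer \
    --region=us-central1 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-groups=ALL_SUBNETS \
    --set-advertisement-ranges=199.36.153.4/30
```

Because Cloud Router exchanges routes dynamically, the same advertisement carries over when the VPN tunnels are replaced by the Dedicated Interconnect attachments.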
A logistics company operates three Google Cloud projects named north-1, south-1, and west-1, each running about 20 Compute Engine VMs. The SRE team needs a single pane of glass to monitor CPU utilization, memory usage, and Persistent Disk throughput across all three projects without switching contexts, and they want to set it up in under an hour using built-in capabilities and IAM. What should you do?
Sharing charts from individual dashboards is not the intended mechanism for centralized, ongoing observability. It can create a fragmented experience and typically still relies on per-project dashboards and maintenance. It also doesn’t provide a true unified metrics exploration layer across projects (e.g., filtering/grouping by project in one place). This may look like “single pane of glass,” but it’s not the standard, scalable Cloud Monitoring approach.
Granting Monitoring Viewer on all three projects is necessary for access control, but it does not solve the “single pane of glass” requirement by itself. The SREs would still need to switch project context in the Cloud Console (or run separate queries per project) because there is no aggregated metrics scope. IAM answers “who can view,” not “where the unified view lives.”
Using default dashboards sequentially is explicitly not acceptable because it requires switching contexts (project-by-project). It also doesn’t provide cross-project charts, aggregated alerting, or a unified metrics explorer experience. While it is quick, it fails the core requirement of a single pane of glass across north-1, south-1, and west-1.
Creating a Cloud Monitoring workspace (metrics scope) in north-1 and adding south-1 and west-1 is the correct built-in way to centralize monitoring across projects. It enables a unified Metrics Explorer and dashboards from the scoping project while pulling metrics from all monitored projects. Combined with appropriate IAM (e.g., Monitoring Viewer in each project), it meets the requirement quickly and cleanly without custom tooling.
Core Concept: This question tests the Cloud Monitoring “workspace/metrics scope” (formerly Stackdriver Workspace) for centralized observability across multiple Google Cloud projects, plus the IAM needed to view metrics across those projects. A metrics scope aggregates metrics and dashboards from multiple “monitored projects” into one place: a single pane of glass.

Why the Answer is Correct: Option D is the only choice that creates a centralized Monitoring view. By creating a Cloud Monitoring workspace (metrics scope) in a host project (north-1) and adding south-1 and west-1 as monitored projects, SREs can open Cloud Monitoring once (in the host project) and query and graph metrics across all three projects without switching project context. This aligns with the requirement to set it up quickly using built-in capabilities and IAM.

Key Features / Configurations:
1) Metrics scope: The host project becomes the “scoping project.” Adding other projects enables cross-project metrics, dashboards, alerting policies, and uptime checks from a single UI.
2) IAM: Users still need permissions in each monitored project to view metrics (commonly roles/monitoring.viewer) and access the relevant resources. The metrics scope provides aggregation; IAM provides authorization.
3) Built-in metrics: CPU utilization and Persistent Disk throughput are available by default for Compute Engine. Memory usage requires the Ops Agent (or legacy agent) on the VMs; the question focuses on the monitoring setup approach, and the metrics scope remains the correct mechanism for cross-project viewing.

Common Misconceptions:
- Simply enabling the API or granting Monitoring Viewer (option B) does not create a unified view; it only grants access per project, and users would still need to switch projects in the console.
- Sharing charts (option A) is not the standard “single pane of glass” approach, is cumbersome to maintain, and does not provide a unified exploration experience.
- Viewing projects one after another (option C) explicitly violates the “without switching contexts” requirement.

Exam Tips: For “monitor multiple projects in one place,” think “Cloud Monitoring metrics scope/workspace.” For “who can see metrics,” think IAM on each project. Also remember which metrics are native (CPU, disk) versus agent-based (memory). In ACE scenarios, the fastest built-in cross-project monitoring setup is to create a metrics scope in one project and add the other projects as monitored projects.
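The IAM half of the setup can be sketched as follows, assuming an illustrative SRE group address; the metrics scope itself is typically configured in the Cloud Monitoring settings of the scoping project (north-1) by adding south-1 and west-1 as monitored projects:

```shell
# Grant the SRE group read access to metrics in every project monitored by
# the north-1 metrics scope. The group address is an illustrative assumption.
for PROJECT in north-1 south-1 west-1; do
  gcloud projects add-iam-policy-binding "$PROJECT" \
      --member="group:sre-team@example.com" \
      --role="roles/monitoring.viewer"
done
```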
At a logistics company, you plan to run a one-time BigQuery SQL query that joins two US multi-region tables (shipments, ~1.6 TB, and scan_events, ~400 GB) and could return about 50 million rows. You use on-demand pricing and must estimate the cost before running the query. You may use the bq command-line tool but cannot change the billing model or pre-materialize data. What should you do?
Incorrect. The prompt explicitly says you cannot change the billing model, and arranging a slot commitment/capacity pricing is a billing model change. Also, slot commitments are not typically “temporary” in the sense of a one-off query without administrative overhead and minimum commitment considerations. Even if possible, it’s not the required method to estimate on-demand query cost.
Correct. A BigQuery dry run with the bq CLI returns an estimate of total bytes processed (totalBytesProcessed). On-demand BigQuery query charges are based on bytes processed, so converting that estimate to TB and applying the on-demand $/TB rate (or using the Pricing Calculator) provides the required pre-run cost estimate without executing the query.
Incorrect. BigQuery on-demand analysis pricing is not based on bytes returned to the client; it is based on bytes processed by the query. While returning 50 million rows may have practical implications (time, client memory, export costs), it does not drive the core on-demand query charge in the way bytes processed does.
Incorrect. Row counts do not map directly to bytes processed because BigQuery charges by bytes read/processed, which depends on column sizes, selected columns, partition pruning, clustering, and query execution plan. Also, running SELECT COUNT(*) itself can scan large amounts of data and incur costs, defeating the purpose of a safe pre-estimate.
Core Concept: This question tests BigQuery cost estimation under on-demand (analysis) pricing and how to estimate query cost safely using BigQuery’s dry run feature via the bq CLI. Under on-demand pricing, BigQuery charges primarily for bytes processed (bytes read) by the query, not for the number of rows returned.
Why the Answer is Correct: A BigQuery dry run parses and plans the query without executing it, and returns an estimate of total bytes processed. This is exactly what you need to estimate on-demand query cost before running a one-time join over large tables. With bq, you can run a dry run (e.g., bq query --dry_run --use_legacy_sql=false '...') and read the “totalBytesProcessed” value. You then convert bytes processed to TB and multiply by the on-demand price per TB (or use the Google Cloud Pricing Calculator). This approach meets all constraints: it uses bq, does not change the billing model, and does not require pre-materializing data.
Key Features / Best Practices:
- Dry run provides a no-cost way to estimate bytes processed and validate SQL.
- BigQuery on-demand charges are based on bytes processed after considering partition pruning, clustering, and column projection.
- Because the tables are in the US multi-region, there are no cross-location query issues.
- The potential 50 million result rows affect output size and client-side handling, but not the primary on-demand query charge.
Common Misconceptions: It’s easy to confuse “bytes returned” with “bytes processed.” BigQuery’s on-demand analysis cost is driven by bytes scanned/processed, not the size of the result set. Another misconception is that counting rows (SELECT COUNT(*)) indicates scan cost; it does not, because scan cost depends on bytes read and query plan optimizations.
Exam Tips: For cost estimation questions, remember: on-demand BigQuery = cost per TB processed. Use dry run to get totalBytesProcessed. Also distinguish between analysis cost (bytes processed) and other potential costs (storage, BI Engine, exports), which are not the focus here. This aligns with the Google Cloud Architecture Framework’s cost optimization pillar: measure and estimate before executing large workloads.
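As a sketch of the workflow above (the project, dataset, and table names are hypothetical, and the per-TiB rate is a placeholder you should verify against the current BigQuery pricing page), the dry run plus cost conversion could look like:

```shell
# Dry run: validates the SQL and reports the bytes that would be
# processed, without executing the query or incurring analysis charges.
# (Guarded so the sketch is harmless where bq is not installed.)
if command -v bq >/dev/null 2>&1; then
  bq query --dry_run --use_legacy_sql=false \
    'SELECT s.shipment_id, e.scanned_at
     FROM `my-project.logistics.shipments` s
     JOIN `my-project.logistics.scan_events` e USING (shipment_id)'
fi

# Convert the reported byte count to TiB and apply the on-demand rate.
BYTES=2199023255552    # example figure: 2 TiB reported by a dry run
RATE_PER_TIB=6.25      # assumed USD per TiB; check current pricing
awk -v b="$BYTES" -v r="$RATE_PER_TIB" \
  'BEGIN { printf "Estimated cost: %.2f USD\n", b / 1099511627776 * r }'
```

The arithmetic is the whole point of the dry run: once you have the byte count, the on-demand estimate is a single multiplication, with no data actually scanned.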
An independent animation studio occasionally renders 4K scenes on a GKE cluster you manage; their renderer requires NVIDIA GPUs, each render job runs 10–16 hours with no checkpoints (non-restartable), and work arrives only 2–3 days per month; you must minimize cost while ensuring these long-running jobs are not interrupted and that capacity scales automatically when work arrives. What should you do?
Node auto-provisioning is the best fit because it can automatically create new node pools when pending pods request resources that existing nodes cannot satisfy, including GPUs. That means the cluster can remain free of expensive GPU nodes when no rendering work is queued, which is important because jobs arrive only 2–3 days per month. Using standard non-preemptible VMs for those auto-provisioned nodes satisfies the requirement that 10–16 hour, non-checkpointable jobs must not be interrupted. This option best balances automatic scaling with the lowest idle infrastructure cost.
Vertical Pod Autoscaler only adjusts CPU and memory requests for pods; it does not create GPU-capable nodes or manage cluster infrastructure for specialized accelerators. It also may evict and recreate pods to apply updated recommendations, which is dangerous for long-running rendering jobs that cannot resume from checkpoints. Even if VPA improved resource sizing, it would not ensure that NVIDIA GPU capacity appears when work arrives. Therefore it does not address the core scheduling and cost problem in this scenario.
Preemptible VMs are explicitly the wrong choice for non-restartable jobs because they can be terminated by Google Cloud before the render completes. A 10–16 hour rendering task with no checkpointing would lose all progress if the node were reclaimed. Although this option reduces cost, it violates the requirement that jobs must not be interrupted. Cost savings cannot come at the expense of guaranteed completion for these workloads.
A dedicated GPU node pool with autoscaling is workable, but setting a minimum size of 1 means at least one GPU node runs continuously even when there is no work for most of the month. That directly conflicts with the requirement to minimize cost for a workload that appears only a few days each month. Autoscaling from a baseline of 1 also provides less savings than provisioning GPU nodes only when pending jobs exist. Because a more cost-efficient automatic scaling option exists, this is not the best answer.
Core Concept: This question is about choosing the most cost-efficient GKE scaling mechanism for rare, bursty GPU workloads that must run to completion without interruption. The key is to provision GPU nodes only when jobs are queued, while avoiding any infrastructure choice that can terminate long-running, non-checkpointable workloads.
Why the Answer is Correct: Because rendering work arrives only 2–3 days per month, the cheapest approach is to avoid keeping GPU nodes running when idle. Node auto-provisioning allows GKE to automatically create appropriately sized node pools in response to unschedulable pods that request GPUs, and then remove those nodes when they are no longer needed. Since the jobs cannot be interrupted, those auto-provisioned nodes should be standard, non-preemptible VMs rather than Spot/Preemptible instances.
Key Features / Configurations:
- Enable GKE node auto-provisioning so the cluster can create nodes for pending GPU workloads automatically.
- Ensure the render pods request the required NVIDIA GPU resource so GKE knows specialized nodes are needed.
- Use standard VM-backed nodes, not Spot/Preemptible, to avoid involuntary termination during 10–16 hour renders.
- Let unused GPU nodes scale back down after jobs complete to minimize monthly cost.
Common Misconceptions:
- A dedicated GPU node pool with min=1 is reliable, but it is not the lowest-cost option for workloads that run only a few days each month.
- Vertical Pod Autoscaler does not solve node provisioning or GPU capacity problems; it only adjusts pod resource requests.
- Preemptible or Spot GPUs may look attractive for cost savings, but they are unsuitable for non-restartable jobs because they can be reclaimed at any time.
Exam Tips:
- For infrequent batch workloads, prefer autoscaling mechanisms that allow scale-to-zero when possible.
- For non-checkpointable jobs, immediately eliminate Spot/Preemptible choices.
- Distinguish pod-level scaling tools like VPA from infrastructure-level scaling tools like Cluster Autoscaler and node auto-provisioning.
- When the question emphasizes both automatic scaling and minimizing idle cost, look for the option that provisions specialized nodes only on demand.
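A hedged sketch of enabling node auto-provisioning on an existing cluster (the cluster name, zone, GPU type, and resource ceilings below are illustrative, not taken from the scenario; confirm the exact flags against the GKE documentation for your gcloud version):

```shell
# Enable node auto-provisioning so GKE creates -- and later removes --
# GPU-capable node pools only when pending pods request accelerators.
# Render pods must request the GPU resource (nvidia.com/gpu) in their
# spec so GKE knows specialized nodes are needed.
# (Guarded so the sketch is harmless where gcloud is unavailable.)
if command -v gcloud >/dev/null 2>&1; then
  gcloud container clusters update render-cluster \
    --zone=us-central1-a \
    --enable-autoprovisioning \
    --max-cpu=128 \
    --max-memory=512 \
    --max-accelerator=type=nvidia-tesla-t4,count=8
fi
```

The ceilings cap what auto-provisioning may create; because the defaults use standard (non-preemptible) VMs, the 10–16 hour renders are not at risk of reclamation, and the cluster carries no GPU cost while idle.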
You are responsible for a Google Kubernetes Engine (GKE) cluster named "staging-ops" in the project "retail-ops-9912" located in the zone "europe-west1-b". You have just installed the Cloud SDK on a new admin VM and completed gcloud init to authenticate. You want all future CLI commands to default to targeting this specific cluster without having to specify the cluster name each time. What should you do?
Correct. gcloud config set container/cluster staging-ops sets a persistent default in the active Cloud SDK configuration. After also setting the appropriate project and zone/location, gcloud container commands can omit the cluster name because gcloud reads it from the config. This is the standard, supported way to establish CLI defaults and is commonly tested on the ACE exam.
Incorrect. gcloud container clusters update staging-ops modifies the cluster resource itself (e.g., node pool settings, add-ons, release channel, maintenance windows), not the local CLI defaults. It does not change what cluster future commands target by default. This option can look plausible because it includes the word “update,” but it updates the cluster configuration, not your gcloud environment.
Incorrect. Cloud SDK does not use a user-created file like ~/.gcloud/gke.default to set default cluster targeting. gcloud stores configuration properties via gcloud config in its managed configuration directories/files, and you should not invent custom files expecting gcloud to read them. This reflects a common misconception from other tools that use dotfiles for defaults.
Incorrect. There is no supported ~/.gcloud/defaults.json mechanism for setting gcloud defaults. While gcloud does maintain internal configuration state, it is managed through gcloud config set and named configurations, not by manually creating JSON files. On the exam, any answer involving creating arbitrary files for gcloud defaults is typically a red flag.
Core Concept: This question tests Cloud SDK (gcloud) configuration management for Google Kubernetes Engine access. In GKE workflows, you typically use gcloud to set defaults (project, region/zone, cluster) and then use gcloud container clusters get-credentials to populate kubeconfig so kubectl targets the right cluster context.
Why the Answer is Correct: Option A uses gcloud’s built-in configuration system: gcloud config set container/cluster staging-ops. This sets a persistent property in the active gcloud configuration so future gcloud container commands can infer the cluster name without repeatedly passing --cluster. In practice, you would also ensure container/zone (or container/location) and core/project are set (e.g., europe-west1-b and retail-ops-9912) so gcloud can uniquely identify the cluster. This aligns with best practices for “setting up a cloud solution environment” by establishing consistent defaults on an admin workstation/VM.
Key Features / Best Practices:
- gcloud configurations store properties in named profiles (e.g., default), enabling repeatable admin environments.
- Common related properties: core/project, compute/zone, container/cluster, and container/use_application_default_credentials.
- For kubectl usage, you typically run: gcloud container clusters get-credentials staging-ops --zone europe-west1-b --project retail-ops-9912, which sets the kubeconfig context. The question, however, is specifically about defaulting CLI commands, which is handled by gcloud config.
- From an Architecture Framework perspective (Operational Excellence), setting defaults reduces human error and improves repeatability.
Common Misconceptions: Many candidates confuse “default cluster for gcloud” with “default context for kubectl.” Setting container/cluster helps gcloud commands, but kubectl defaults are controlled by kubeconfig contexts (set by get-credentials and kubectl config use-context). Also, creating custom files in ~/.gcloud is not how Cloud SDK stores properties; it uses internal configuration files managed by gcloud.
Exam Tips:
- When you see “make future gcloud commands default to X,” think gcloud config set.
- When you see “make kubectl point to the cluster,” think gcloud container clusters get-credentials and kubectl config use-context.
- Always ensure project and location defaults are set to avoid ambiguity when multiple clusters share names across projects/regions.
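The full default-setting sequence from the scenario can be sketched as follows (project, zone, and cluster names come from the question; the final get-credentials call relies on the defaults just set, so it needs no extra flags):

```shell
# Set persistent defaults in the active gcloud configuration so future
# commands can omit the project, zone, and cluster name.
# (Guarded so the sketch is harmless where gcloud is unavailable.)
if command -v gcloud >/dev/null 2>&1; then
  gcloud config set core/project retail-ops-9912
  gcloud config set compute/zone europe-west1-b
  gcloud config set container/cluster staging-ops

  # Separately, point kubectl at the cluster by writing a
  # kubeconfig context; gcloud resolves the cluster, zone, and
  # project from the defaults above.
  gcloud container clusters get-credentials staging-ops
fi
```

Note the division of labor the explanation describes: gcloud config set controls gcloud defaults, while get-credentials is what makes kubectl target the cluster.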
Study period: 1 month
I counted probably 15+ questions exactly the same on the real exam. Cloud Pass always has the best practical exam questions.
Study period: 1 month
Thank you for the excellent source for preparing for cert exams; the detailed explanations really helped. Passed the exam.
Study period: 1 month
Got my cert after going through the practice questions. I have a background in GCP, so it was a bit easy for me to grasp.
Study period: 1 month
This helped me pass the ACE on 1/12, highly recommended.
Study period: 1 month
Passed the exam on the first attempt!