This GCP ADP exam dump contains real questions with detailed explanations, based on the latest Google Associate Data Practitioner exam format. If you are looking for GCP exam dumps with verified solutions, try more than 10,000 practice questions in the Cloud Pass app.
No duplicate questions
Every question is unique and carefully curated
Latest exam questions
Regularly updated to match the 2025 exam pattern
Sample Questions
Question 1
A global sportswear retailer is standardizing on BigQuery for analytics and needs a fully managed way to run a nightly batch ETL at 02:00 UTC that pulls 50 tables (~12 TB total) from mixed sources (Cloud SQL, an SFTP server, and a partner REST API), triggers transformations across multiple Google Cloud services, and then loads curated datasets into BigQuery.
Your engineering team of 8 developers is strongest in Python and wants to write maintainable code, use pre-built connectors and operators for Google services, define task dependencies with retries and alerting, and avoid managing servers.
Which tool should you recommend to orchestrate these batch ETL workflows while leveraging the team’s Python skills?
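If the orchestrator chosen were Cloud Composer, Google's managed Apache Airflow service, the schedule, retries, alerting, and task dependencies described above could be expressed in a Python DAG. The sketch below is minimal and illustrative; the DAG ID, bucket, table names, alert address, and the extract placeholder are hypothetical.

```python
# Hypothetical Airflow DAG sketch for a nightly 02:00 UTC batch ETL on Cloud Composer.
# All identifiers (DAG ID, bucket, tables, alert email) are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],
}

def pull_from_sftp_and_rest_api(**context):
    """Placeholder task: land SFTP files and partner REST API extracts in GCS."""
    ...

with DAG(
    dag_id="nightly_retail_etl",
    schedule_interval="0 2 * * *",   # 02:00 UTC daily
    start_date=datetime(2025, 1, 1),
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(
        task_id="extract_external_sources",
        python_callable=pull_from_sftp_and_rest_api,
    )

    load_curated = GCSToBigQueryOperator(
        task_id="load_curated_sales",
        bucket="example-staging-bucket",
        source_objects=["curated/sales/*.parquet"],
        destination_project_dataset_table="ret-prod.analytics.curated_sales",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )

    # Task dependency; retries and failure alerts come from default_args.
    extract >> load_curated
```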
Question 2
At a multinational retailer, you maintain the BigQuery dataset ret_prod.sales_tx in project ret-prod, which stores tokenized credit card transactions. You must ensure that only the 8-person Risk-Analytics Google Group (risk-analytics@retail.example) can run SELECT queries on its tables, while the other 120 employees in the organization cannot query them, and you must adhere to the principle of least privilege. What should you do?
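One least-privilege pattern for this kind of requirement is a dataset-level grant of read access to the group alone. A minimal sketch with the google-cloud-bigquery Python client is shown below; it assumes the dataset ID is sales_tx in project ret-prod, which is a reading of the question's identifiers rather than something the question states exactly.

```python
# Sketch: grant dataset-level read access to one Google Group using the
# google-cloud-bigquery client. The dataset ID "sales_tx" is an assumption.
from google.cloud import bigquery

client = bigquery.Client(project="ret-prod")
dataset = client.get_dataset("ret-prod.sales_tx")

# "READER" at the dataset level corresponds to roles/bigquery.dataViewer,
# which permits reading (SELECT) tables in this dataset and nothing broader.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="risk-analytics@retail.example",
    )
)
dataset.access_entries = entries
dataset = client.update_dataset(dataset, ["access_entries"])
```

Members of the group would additionally need permission to run query jobs (for example, roles/bigquery.jobUser) in whichever project the queries are billed to.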
Question 3
You work for a video-streaming platform.
An existing Bash/Python ETL script on a Compute Engine VM aggregates ~120,000 playback events each day from a legacy NFS share, transforms them, and loads the results into BigQuery.
Today the script is run manually; you must automate a daily 02:00 UTC trigger and add centralized monitoring with run history, task-level logs, and retry visibility for troubleshooting.
You want a single, managed solution that uses open-source tooling for orchestration and does not require rewriting the ETL code.
What should you do?
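If the managed, open-source orchestration layer were Cloud Composer (Apache Airflow), the existing Bash/Python script could be invoked unchanged from a BashOperator, picking up the 02:00 UTC schedule, run history, task-level logs, and retry visibility from Airflow itself. The script path and DAG ID in this sketch are hypothetical.

```python
# Sketch: run the existing ETL script unchanged on a daily 02:00 UTC schedule.
# The script path and DAG ID are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="playback_etl_daily",
    schedule_interval="0 2 * * *",
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    run_legacy_etl = BashOperator(
        task_id="run_legacy_etl",
        # Trailing space keeps Airflow from treating the .sh path as a Jinja template file.
        bash_command="/opt/etl/run_playback_etl.sh ",
        retries=3,
        retry_delay=timedelta(minutes=5),
    )
```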
Question 4
A gaming analytics startup collects in-app telemetry from 2 million daily active users across 6 Google Cloud regions (us-central1, europe-west1, asia-east1, australia-southeast1, southamerica-east1, us-east4), producing approximately 120,000 JSON events per minute.
You must deliver dashboards in BigQuery with near real-time freshness (under 90 seconds end-to-end). Before loading, each event must be cleaned (drop null fields), enriched with a region_code derived from the producing region, and flattened from nested JSON into a columnar schema.
To accelerate delivery and enable future maintainability, the pipeline must be built using a visual, low-code interface.
What should you do?
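Whatever visual, low-code tool is chosen, the per-event work the question describes (drop null fields, derive a region_code, flatten nested JSON) amounts to a small mapping. The plain-Python sketch below only illustrates that logic; the field names and region-code mapping are hypothetical.

```python
# Illustrative only: the per-event cleanup described in the question, written as
# plain Python. Field names and the region-code mapping are hypothetical.
REGION_CODES = {"us-central1": "USC1", "europe-west1": "EUW1", "asia-east1": "ASE1"}

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted, column-style keys, dropping null fields."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if value is None:
            continue  # clean: drop null fields
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

def transform_event(event, producing_region):
    row = flatten(event)
    row["region_code"] = REGION_CODES.get(producing_region, "UNKNOWN")  # enrich
    return row
```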
Question 5
Your healthcare analytics startup stores patient encounter data that is updated once per day at 02:00 UTC and is spread across 6 BigQuery datasets; several tables contain PHI fields like full_name, phone_number, and notes.
You need to let a new contract analyst query only non-sensitive operational metrics (e.g., clinic_id, visit_date, procedure_code, total_cost) for the last 180 days while ensuring they cannot access any PHI or underlying base tables. What should you do?
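A common way to expose derived, non-PHI metrics without granting access to the base tables is a BigQuery authorized view placed in a separate dataset. The sketch below uses the google-cloud-bigquery client; the project, dataset, view, table, and column names are hypothetical.

```python
# Sketch: create a non-PHI view in a separate dataset, then authorize it against
# the dataset holding the PHI base tables. All identifiers are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="health-analytics")

view = bigquery.Table("health-analytics.shared_metrics.encounter_costs_180d")
view.view_query = """
    SELECT clinic_id, visit_date, procedure_code, total_cost
    FROM `health-analytics.encounters.visits`
    WHERE visit_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 180 DAY)
"""
view = client.create_table(view)

# Authorize the view on the source dataset so it can read the base tables
# on behalf of users who cannot.
source = client.get_dataset("health-analytics.encounters")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```

The analyst would then be granted read access only on the dataset that contains the view, not on the datasets holding the PHI tables.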