
Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.
Powered by AI
Every answer is cross-checked by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth analysis of every question.
A streaming media company runs six production studios across five AWS Regions. Each studio's compliance team uses a distinct IAM role, and all raw subtitle files and QC logs are consolidated in a single Amazon S3 data lake partitioned by aws_region (for example, s3://media-lake/raw/aws_region=eu-central-1/). With the least operational overhead, and without creating new buckets or duplicating data, the data engineering team must ensure that each studio can query only records from its own Region through services such as Amazon Athena. Which combination of steps should the team take? (Choose two.)
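One common building block for this kind of Region-scoped access is an IAM policy that limits each studio role to its own aws_region= partition prefix. The sketch below is illustrative, not necessarily the exam's intended answer; the statement IDs and the eu-central-1 example value are assumptions.

```python
import json

# Hypothetical IAM policy for the eu-central-1 studio role: ListBucket is
# limited to that Region's partition prefix, and GetObject is limited to
# objects under the same prefix. Bucket and prefix come from the question.
region_prefix = "raw/aws_region=eu-central-1/"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnRegionPartitionOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::media-lake",
            "Condition": {"StringLike": {"s3:prefix": [region_prefix + "*"]}},
        },
        {
            "Sid": "ReadOwnRegionObjectsOnly",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::media-lake/{region_prefix}*",
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Because Athena reads S3 with the caller's credentials, a role scoped this way can only scan its own Region's partition even when the Glue table spans all Regions.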
A media analytics company plans to lift-and-shift its on-premises Kafka cluster (3 brokers, 24 partitions, ~2 MB/s average ingest with bursts to 12 MB/s, 50-KB messages) to AWS, along with the consumer application that processes incremental CDC updates emitted by an on-premises MySQL database via Debezium. The team insists on a replatform (not refactor) strategy with minimal operational management while preserving Kafka APIs and automatic scaling. Which AWS service choice meets these requirements with the least management overhead?
A data engineer must optimize a smart-utility analytics pipeline that processes residential smart-meter readings, where Apache Parquet files are delivered daily to an Amazon S3 bucket under the prefix s3://utility-raw/consumption/. Every Monday, the team runs ad hoc SQL to compute KPIs filtered by reading_date for multiple windows (last 7, 30, and 180 days). The dataset currently grows by about 15 GB per day and is expected to reach 60 GB per day within a year, and the solution must prevent query performance from degrading as data volume increases. Which approach meets these requirements most cost-effectively?
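The core idea behind keeping these Monday KPI queries fast is partitioning the data by reading_date so the engine scans only the requested window. A minimal pure-Python sketch of that pruning logic, with made-up dates under the question's prefix:

```python
from datetime import date, timedelta

# Hypothetical partitioned layout: one prefix per reading_date, as a query
# engine such as Athena would see it after the Parquet data is partitioned.
partitions = [
    f"s3://utility-raw/consumption/reading_date={date(2024, 1, 1) + timedelta(days=d)}/"
    for d in range(200)
]

def prune(partitions, latest, window_days):
    """Keep only partitions inside the trailing window, mimicking
    partition pruning driven by a reading_date filter."""
    cutoff = latest - timedelta(days=window_days - 1)
    kept = []
    for p in partitions:
        d = date.fromisoformat(p.split("reading_date=")[1].rstrip("/"))
        if cutoff <= d <= latest:
            kept.append(p)
    return kept

latest = date(2024, 1, 1) + timedelta(days=199)
last7 = prune(partitions, latest, 7)    # scans 7 partitions, not 200
last30 = prune(partitions, latest, 30)  # scans 30 partitions, not 200
```

The point is that scanned data stays proportional to the window, not the total dataset, so query cost and latency stay flat as the table grows from 15 GB/day to 60 GB/day.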
A data platform team queries time-series telemetry in Amazon S3 with Amazon Athena using the AWS Glue Data Catalog. A single table has about 1.2 million partitions organized by year/month/day/hour under a prefix like s3://prod-telemetry/tenant_id={t}/year={YYYY}/month={MM}/day={DD}/hour={HH}, causing query planning to become a bottleneck. While keeping the data in S3, which solutions will remove the bottleneck and reduce Athena planning time? (Choose two.)
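One way Athena avoids enumerating millions of Glue partitions is partition projection: partition values are computed from table properties instead of catalog lookups. A hedged sketch of the projection properties such a table might carry (the tenant list and year range are illustrative assumptions):

```python
# Table properties enabling Athena partition projection for the
# year/month/day/hour layout from the question. With projection enabled,
# Athena derives partition locations from these rules instead of fetching
# 1.2 million partition entries from the Glue Data Catalog at plan time.
projection_properties = {
    "projection.enabled": "true",
    "projection.tenant_id.type": "enum",
    "projection.tenant_id.values": "t1,t2,t3",  # illustrative tenant list
    "projection.year.type": "integer",
    "projection.year.range": "2020,2030",       # illustrative range
    "projection.month.type": "integer",
    "projection.month.range": "1,12",
    "projection.month.digits": "2",
    "projection.day.type": "integer",
    "projection.day.range": "1,31",
    "projection.day.digits": "2",
    "projection.hour.type": "integer",
    "projection.hour.range": "0,23",
    "projection.hour.digits": "2",
    "storage.location.template": (
        "s3://prod-telemetry/tenant_id=${tenant_id}/year=${year}"
        "/month=${month}/day=${day}/hour=${hour}/"
    ),
}
```

These properties are set on the table (for example via ALTER TABLE SET TBLPROPERTIES); the data itself never moves out of S3.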
A mobility analytics startup ingests vehicle telemetry into an Amazon MSK cluster at 2,800 JSON events per second on average (bursts up to 11,000 events/s, ~1.8 KB per event) and must make this data available in Amazon Redshift with sub-minute freshness (SLA: under 45 seconds end-to-end) for operational dashboards. The design must optimize storage cost by avoiding an extra durable raw copy outside the streaming source and keep operational overhead to a minimum. Which solution best meets these requirements with the least operational effort?
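For sub-minute freshness without a durable intermediate copy, Redshift streaming ingestion can read directly from MSK into an auto-refreshing materialized view. A sketch of the SQL involved, held as strings here; the schema name, IAM role ARN, cluster ARN, and topic name are all placeholders:

```python
# Sketch of Redshift streaming ingestion from MSK (identifiers are
# placeholders, not from the question). The external schema maps to the
# MSK cluster, and the materialized view refreshes from the topic
# directly, so no intermediate durable copy (e.g. S3 staging) is needed.
create_schema_sql = """
CREATE EXTERNAL SCHEMA msk_telemetry
FROM MSK
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-msk-role'
AUTHENTICATION iam
CLUSTER_ARN 'arn:aws:kafka:us-east-1:123456789012:cluster/telemetry/abc';
"""

create_mv_sql = """
CREATE MATERIALIZED VIEW vehicle_events
AUTO REFRESH YES
AS SELECT
    kafka_partition,
    kafka_offset,
    JSON_PARSE(kafka_value) AS payload
FROM msk_telemetry.vehicle_topic;
"""
```

Dashboards then query the materialized view; records flow from the Kafka topic into Redshift without landing anywhere else first.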
Want to practice every question anywhere?
Download Cloud Pass for free; it includes practice exams, progress tracking, and more.
A transportation logistics startup ingests vehicle telemetry and order-tracking events into an Amazon DynamoDB table configured for provisioned capacity. Traffic is highly predictable: every weekday from 06:45 to 10:00 local time the workload spikes to 6x the baseline, while from Friday 20:00 through Sunday 23:00 usage drops to about 10% of the weekday peak. The team needs to maintain single-digit millisecond latency during peaks and minimize spend during off-hours. Which solution will meet these requirements in the most cost-effective way?
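Because the traffic pattern is known in advance, Application Auto Scaling scheduled actions can raise provisioned capacity just before the weekday spike and lower it for the weekend lull. A hedged sketch of the parameters such actions might take (table name, capacity numbers, and cron expressions are illustrative; the boto3 calls are shown only as comments):

```python
# Scheduled scale-up shortly before the weekday 06:45 spike (numbers are
# made up). ScalableDimension targets the table's read capacity; a
# matching action would also be registered for WriteCapacityUnits.
scale_up = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/telemetry-events",  # placeholder table name
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "ScheduledActionName": "weekday-morning-scale-up",
    "Schedule": "cron(30 6 ? * MON-FRI *)",  # 06:30, ahead of the spike
    "ScalableTargetAction": {"MinCapacity": 600, "MaxCapacity": 1200},
}

# Weekend scale-down when usage falls to ~10% of the weekday peak.
scale_down = {
    **scale_up,
    "ScheduledActionName": "weekend-scale-down",
    "Schedule": "cron(0 20 ? * FRI *)",      # Friday 20:00 lull begins
    "ScalableTargetAction": {"MinCapacity": 60, "MaxCapacity": 120},
}

# With boto3's "application-autoscaling" client (requires credentials):
# client.put_scheduled_action(**scale_up)
# client.put_scheduled_action(**scale_down)
```

Scheduled actions adjust the auto scaling target's min/max bounds on a timetable, so capacity is already raised when the predictable peak arrives instead of reacting to it.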
A fintech startup runs an Amazon Aurora MySQL-Compatible DB cluster (port 3306) in two private subnets (subnet-10.0.1.0/24 in us-east-1a and subnet-10.0.2.0/24 in us-east-1b) with no route to an internet gateway, and the DB security group (sg-db) currently allows inbound traffic only from itself on TCP 3306. A developer created an AWS Lambda function with default networking (no VPC) to insert, update, and delete rows. The team must allow the function to connect to the cluster endpoint privately, without traversing the public internet or using a NAT, and with the least operational overhead. Which combination of steps meets the requirement? (Choose two.)
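Connecting Lambda privately typically means attaching the function to the VPC's private subnets and letting sg-db admit the function's security group on port 3306. A sketch of the relevant API parameters; the subnet IDs, function name, and sg-lambda group are placeholders, and the boto3 calls appear only as comments:

```python
# Attach the function to the two private subnets with its own security
# group (sg-lambda is a placeholder); traffic then stays inside the VPC.
lambda_vpc_config = {
    "SubnetIds": ["subnet-private-a", "subnet-private-b"],  # placeholders
    "SecurityGroupIds": ["sg-lambda"],
}

# Inbound rule for sg-db: allow MySQL/Aurora port 3306 from sg-lambda,
# referencing the security group instead of an IP range.
sg_db_ingress = {
    "GroupId": "sg-db",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-lambda"}],
        }
    ],
}

# With boto3 (assumed, not executed here):
# lambda_client.update_function_configuration(
#     FunctionName="db-writer", VpcConfig=lambda_vpc_config)
# ec2_client.authorize_security_group_ingress(**sg_db_ingress)
```

Group-to-group rules like this avoid hard-coding IPs and need no NAT or internet path, since the function's ENIs live in the same VPC as the cluster.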
A logistics company stores a 120-million-row table named shipments in Amazon Redshift that includes a column called port_code, and analysts need a SQL query that returns all rows where port_code begins with 'NY' or 'LA'. Which query meets this requirement?
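The prefix match itself is plain SQL: two LIKE patterns combined with OR. A runnable illustration using SQLite as a stand-in engine, since LIKE-prefix semantics match Redshift here; the sample rows are made up:

```python
import sqlite3

# In-memory stand-in for the Redshift table (sample rows are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, port_code TEXT)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?)",
    [(1, "NYC"), (2, "LAX"), (3, "SEA"), (4, "NYK"), (5, "MIA")],
)

# The same predicate shape works on Redshift:
# WHERE port_code LIKE 'NY%' OR port_code LIKE 'LA%'
rows = conn.execute(
    "SELECT id, port_code FROM shipments "
    "WHERE port_code LIKE 'NY%' OR port_code LIKE 'LA%' "
    "ORDER BY id"
).fetchall()
print(rows)  # → [(1, 'NYC'), (2, 'LAX'), (4, 'NYK')]
```

Note the OR must join two complete LIKE predicates; a pattern like `LIKE 'NY%' OR 'LA%'` is a common trap because the second operand is just a string, not a condition on port_code.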
A marine research vessel streams vibration, salinity, and gyro readings from 24 onboard sensor arrays, each sending 150 KB of JSON every 12 seconds through a shipboard gateway to AWS over TLS. An operations job polls an Amazon S3 bucket every 45 seconds to pick up the latest files for aggregation, and you must choose an ingestion design that delivers the arriving data into S3 while sustaining the throughput. Which solution will deliver the data to the S3 bucket with the least latency?
An IoT analytics team maintains a centralized AWS Glue Data Catalog for telemetry files arriving in multiple Amazon S3 buckets across two AWS accounts, and they must keep the catalog updated incrementally within 10 minutes of new object writes without building custom code or long-running infrastructure. S3 event notifications are already configured to publish ObjectCreated events to an Amazon SQS standard queue dedicated to catalog updates. Which combination of steps should the team take to meet these requirements with the least operational overhead? (Choose two.)
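The low-overhead pattern here is usually an AWS Glue crawler in event mode, consuming the existing SQS queue of ObjectCreated events so each run crawls only the changed prefixes. A sketch of the crawler parameters (crawler name, role, database, bucket, and queue ARN are all placeholders):

```python
# Parameters in the shape of glue.create_crawler (boto3 call not shown).
# With RecrawlBehavior CRAWL_EVENT_MODE, the crawler reads the SQS queue
# of S3 ObjectCreated events and visits only the affected paths; a
# frequent schedule keeps updates within the 10-minute freshness target.
crawler_params = {
    "Name": "telemetry-incremental-crawler",  # placeholder
    "Role": "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "DatabaseName": "telemetry_catalog",
    "Targets": {
        "S3Targets": [
            {
                "Path": "s3://telemetry-bucket-a/",  # placeholder bucket
                "EventQueueArn": "arn:aws:sqs:us-east-1:123456789012:catalog-updates",
            }
        ]
    },
    "RecrawlPolicy": {"RecrawlBehavior": "CRAWL_EVENT_MODE"},
    "Schedule": "cron(0/10 * * * ? *)",  # run every 10 minutes
}
```

Because the crawler is a managed, scheduled component driven by the existing notifications, no custom code or long-running infrastructure is introduced.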
Study period: 1 month
If you really understand the questions as you work through them, you can pass too! Good luck!
Study period: 1 month
I passed the AWS Data Engineer Associate exam. Cloud Pass is the best app for helping candidates prepare well for the exam. Thanks!
Study period: 1 month
The question patterns are similar to the actual exam.
Study period: 2 months
I passed with 813/1000!! Many of the questions were similar to the actual exam.
Study period: 1 month
The explanations made it great for studying. I'll be back again.
Get the app for free