
An MDM provides which two advantages to an organization with regards to device management? (Choose two.)
Correct. MDM provides centralized asset inventory for enrolled endpoints, including device model, serial/IMEI, OS version, ownership, compliance status, encryption state, and installed applications. This visibility supports lifecycle operations (procurement to retirement), auditing, and faster incident response by identifying affected devices and their posture.
Correct. MDM commonly enforces application control through allowlists/denylists, managed app deployment, and restrictions on app installation or data sharing. It can push required apps, remove prohibited apps, and apply managed app configurations (and sometimes per-app VPN). This reduces risk from unapproved apps and helps prevent data leakage.
Incorrect. AD Group Policy management is a Windows Active Directory domain feature (GPO) used primarily for domain-joined Windows systems. While modern Windows can be managed via MDM and can integrate with identity providers, MDM does not “manage AD GPO.” The mechanisms and policy delivery models are different.
Incorrect. Network device management refers to managing infrastructure devices like routers, switches, and firewalls using tools such as NMS/NSM, controllers, or vendor management platforms. MDM targets endpoint devices (phones/tablets/laptops) and their apps/configurations, not network infrastructure configuration and monitoring.
Incorrect. “Critical device management” is not a standard, recognized MDM advantage or feature category in typical enterprise endpoint management frameworks. MDM can apply different policies to different device groups (e.g., executives vs standard users), but the term itself is not a core MDM capability like inventory or app management.
Core Concept: Mobile Device Management (MDM), often delivered as part of Unified Endpoint Management (UEM), centrally administers mobile endpoints (iOS/iPadOS, Android, sometimes macOS/Windows) to enforce security posture and compliance. In SCOR terms, MDM is an endpoint control plane that supports secure access by ensuring devices meet policy before they connect to corporate resources.

Why the Answer is Correct: A (asset inventory management) is a fundamental MDM advantage because the platform maintains a real-time inventory of enrolled devices: device identifiers, OS versions, ownership (BYOD vs corporate), compliance state, encryption status, and installed apps. This improves visibility, lifecycle management, and incident response (knowing “what devices exist and what they run”). B (allowed application management) is also a core MDM capability. MDM can enforce application policies such as allowlists/denylists, managed app deployment, app configuration, and restrictions (e.g., blocking unknown sources on Android or preventing unmanaged apps from accessing corporate data). This reduces malware risk and data leakage.

Key Features / Best Practices:
- Enrollment and device identity: supervised/managed modes, certificates, and device attestation where supported.
- Compliance policies: minimum OS version, passcode/biometric requirements, encryption, jailbreak/root detection.
- App management: managed app catalogs, required apps, app configuration, per-app VPN, and data separation (managed/unmanaged).
- Reporting and automation: inventory reports, compliance dashboards, and conditional access integration (e.g., only compliant devices can access email/VPN).

Common Misconceptions:
- Confusing MDM with AD Group Policy (GPO): GPO is primarily Windows domain management; MDM uses profiles and device management APIs, not AD GPO.
- Assuming MDM manages network infrastructure: routers/switches/firewalls are managed by NMS/NSM tools, not MDM.
- “Critical device management” is not a standard MDM advantage/category; MDM focuses on endpoints, not a special class called “critical devices.”

Exam Tips: For SCOR, remember MDM/UEM advantages map to endpoint visibility (inventory) and endpoint control (policy/app restrictions). If an option sounds like traditional Windows domain administration (GPO) or network infrastructure management, it is likely not MDM. Look for keywords like enrollment, compliance, profiles, app allowlisting, remote wipe, and posture/conditional access.
aaa new-model
radius-server host 10.0.0.12 key secret12
Refer to the exhibit. Which statement about the authentication protocol used in the configuration is true?
Incorrect. A RADIUS authentication request does not contain only a password. It normally includes the username and many other AV pairs such as NAS information, service type, port details, or EAP data depending on the use case. Also, only the password field is protected, not the entire request contents.
Incorrect. A RADIUS request does not contain only a username. It typically includes the username plus credential material such as User-Password or EAP payload, along with additional attributes describing the access request context. Therefore this option is too limited to describe RADIUS behavior accurately.
Correct. RADIUS commonly carries authentication and authorization as part of the same exchange. The client sends an Access-Request, and if the user is accepted, the Access-Accept can include authorization attributes such as service type, privilege level, VLAN assignment, or ACL information. This is the standard exam distinction from TACACS+, which separates authentication and authorization more explicitly.
Incorrect. Separate authentication and authorization request packets are characteristic of TACACS+, not RADIUS. In RADIUS, authorization data is usually returned in the same Access-Accept response associated with the authentication transaction. That is why grouped handling is the better description here.
Core concept: The configuration shows Cisco IOS AAA using a RADIUS server. The key distinction being tested is how RADIUS handles AAA compared with TACACS+, especially whether authentication and authorization are separated or combined in the protocol exchange.

Why correct: RADIUS typically combines authentication and authorization within the same transaction. The NAS sends an Access-Request containing identity and credential information, and the server replies with Access-Accept or Access-Reject. If access is accepted, authorization details such as privilege, VLAN, ACL, or service parameters are returned as attributes in that same response flow.

Key features: RADIUS is UDP-based and commonly uses ports 1812 and 1813. It encrypts only the user password field, not the entire packet payload. It is widely used for network access control such as VPN, wireless, and 802.1X, where authorization attributes are often delivered along with the authentication result.

Common misconceptions: Many candidates confuse RADIUS with TACACS+. TACACS+ separates authentication, authorization, and accounting into distinct processes and is known for granular device administration and command authorization. RADIUS, by contrast, generally couples authentication and authorization in the same packet exchange.

Exam tips: Remember the classic comparison: RADIUS combines authentication and authorization and encrypts only the password, while TACACS+ separates AAA functions and encrypts the full payload. If an option contrasts grouped versus separate auth/authz handling, grouped points to RADIUS and separate points to TACACS+.
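To make the contrast concrete, the sketch below places a RADIUS definition next to an equivalent TACACS+ setup. Server addresses, names, and key strings are placeholders, and the modern name-based server syntax is shown (the exhibit uses the older `radius-server host` form):

```
! RADIUS: authentication and authorization returned in the same exchange (UDP 1812/1813)
aaa new-model
radius server RAD1
 address ipv4 10.0.0.12 auth-port 1812 acct-port 1813
 key secret12
!
! TACACS+: separate authentication and authorization exchanges (TCP 49)
tacacs server TAC1
 address ipv4 10.0.0.13
 key secret13
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local
```

With RADIUS, the authorization attributes ride back in the Access-Accept; with TACACS+, the separate `aaa authorization` step triggers its own protocol exchange.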
Which technology must be used to implement secure VPN connectivity among company branches over a private IP cloud with any-to-any scalable connectivity?
DMVPN (Dynamic Multipoint VPN) provides scalable spoke-to-spoke connectivity using mGRE, NHRP, and IPsec, commonly over the Internet. It is excellent for dynamic full-mesh without predefining all tunnels, but it still relies on tunneling/encapsulation and is not the canonical solution for encrypting over a private MPLS/IP cloud while preserving the original IP header. In private WAN “IP cloud” questions, GET VPN is typically preferred.
FlexVPN is an IKEv2-based framework that can build site-to-site and remote-access VPNs and can support hub-and-spoke or dynamic topologies. While it is flexible and modern (IKEv2), it is generally implemented as tunnel-based VPN (route-based or policy-based) rather than group-based, tunnel-less encryption across a private MPLS cloud. It does not inherently provide the same group SA scaling model as GET VPN.
IPsec DVTI (Dynamic Virtual Tunnel Interface) is a route-based VPN approach where tunnels are created dynamically (often for remote access or dynamic peers). It simplifies routing compared to policy-based IPsec, but it still creates tunnel interfaces and does not provide group-based any-to-any encryption without building and managing many tunnels or relying on additional mechanisms. It is not the typical answer for scalable any-to-any over a private IP cloud.
GET VPN is designed for encrypting branch-to-branch traffic over a private IP cloud (such as MPLS) using group-based IPsec in transport mode with GDOI key management. It enables any-to-any connectivity without per-site tunnels, scales well as sites are added, and preserves the original IP header (helpful for MPLS QoS, traffic engineering, and routing visibility). This matches the requirement precisely.
Core Concept: This question tests site-to-site VPN design over a private WAN (often MPLS/VPN “IP cloud”) where the requirement is secure encryption with any-to-any connectivity that scales well without building large numbers of tunnels.

Why the Answer is Correct: GET VPN (Group Encrypted Transport VPN) is purpose-built for encrypting traffic over a private IP cloud while preserving the original IP header (tunnel-less IPsec). Because it uses group-based IPsec with a Key Server (KS) and Group Members (GMs), branches can communicate any-to-any without creating per-site point-to-point tunnels. All members share group security associations distributed by the KS, enabling scalable full-mesh secure connectivity across many sites.

Key Features / Best Practices: GET VPN uses IPsec in transport mode with GDOI (Group Domain of Interpretation) for key management. It supports large-scale deployments because adding a new branch typically means enrolling it as a GM rather than reconfiguring tunnels to every other site. It also preserves QoS markings and routing visibility because the packet is not encapsulated (important in MPLS clouds where the provider may rely on DSCP/EXP and where you want the provider to route based on the original IP header). Best practices include redundant Key Servers, careful group policy design, and understanding multicast/unicast support and rekey behavior.

Common Misconceptions: DMVPN is often associated with “any-to-any scalable connectivity,” but DMVPN is primarily for building dynamic spoke-to-spoke tunnels over the public Internet or any IP network using mGRE/NHRP and typically results in dynamic tunnels (encapsulation) between sites. The question explicitly says “over a private IP cloud,” which is the classic GET VPN use case: encrypt over MPLS without tunneling.

Exam Tips: Look for keywords: “private IP cloud/MPLS,” “any-to-any,” “scalable,” and “preserve routing/QoS.” Those point strongly to GET VPN. If the scenario emphasizes Internet transport, hub-and-spoke with dynamic spoke-to-spoke tunnels, think DMVPN/FlexVPN. If it emphasizes tunnel interfaces and route-based VPN, think VTI/DVTI.
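The Key Server/Group Member split can be seen in a minimal GET VPN group-member sketch. Group name, identity number, and addresses below are illustrative placeholders, not values from the question:

```
! Illustrative GET VPN group member configuration
crypto gdoi group GETVPN-GROUP
 identity number 1234
 server address ipv4 10.1.1.1        ! Key Server that distributes group SAs
!
crypto map GETVPN-MAP 10 gdoi
 set group GETVPN-GROUP
!
interface GigabitEthernet0/0
 crypto map GETVPN-MAP               ! apply on the WAN interface facing the MPLS cloud
```

Note there is no tunnel interface and no per-peer configuration: adding another branch means enrolling it as a GM against the same Key Server, which is exactly the scaling property the question targets.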
What is the result of running the crypto isakmp key ciscXXXXXXXX address 172.16.0.0 command?
Incorrect because the command is under "crypto isakmp", which is IKEv1 configuration on Cisco IOS. IKEv2 PSKs are typically configured with "crypto ikev2 keyring" and applied via an IKEv2 profile. While the idea of authenticating peers in a range with a PSK is conceptually similar, the protocol version in the option is wrong.
Incorrect because the option claims a single peer 172.16.0.0/32. In many IOS/SCOR exam contexts, specifying 172.16.0.0 without a mask is interpreted as the classful network 172.16.0.0/16 (Class B), not a single host. Host-specific PSKs are typically represented by a specific host IP (and often explicitly treated as a host match).
Correct because "crypto isakmp key" configures a pre-shared key for IKEv1 (ISAKMP/IKE Phase 1). The address parameter is used to match the remote peer’s IP; with 172.16.0.0, exam intent is that it applies to the 172.16.0.0/16 network range (classful interpretation), enabling IKEv1 peers in that range to authenticate using the PSK ciscXXXXXXXX.
Incorrect because PSKs do not secure certificates. Certificate-based authentication uses PKI (trustpoints, CA certificates, RSA signatures) and is negotiated in IKE using digital signatures, not a shared secret configured with "crypto isakmp key". This command is specifically for symmetric PSK authentication, not for protecting or validating certificates in the exchange.
Core concept: The command "crypto isakmp key <key> address <ip>" configures a pre-shared key (PSK) for IKE Phase 1 using ISAKMP, which is the IKEv1 framework on Cisco IOS. This is used to authenticate VPN peers during IKEv1 negotiation.

Why the answer is correct: "crypto isakmp" is specific to IKEv1 on IOS (IKEv2 uses "crypto ikev2" constructs such as "crypto ikev2 keyring" and profiles). When you configure a PSK with an address, IOS matches the remote peer’s IP address to select the correct key. In this command, the address is 172.16.0.0 without an explicit mask. In IOS ISAKMP PSK configuration, an address can represent a host or a network depending on how it is entered/parsed; exam questions commonly test that 172.16.0.0 implies the classful network 172.16.0.0/16 (Class B) unless a /32 host is explicitly intended. Therefore, the PSK "ciscXXXXXXXX" is used to authenticate IKEv1 peers whose source IP matches the 172.16.0.0/16 range.

Key features / best practices: PSKs are simple but less scalable than certificates; for multiple peers, you can define multiple keys with different addresses. For modern designs, prefer IKEv2 with keyrings/profiles and consider certificate-based authentication for large deployments. Also, avoid relying on classful assumptions in production: use explicit masks/constructs where supported and document peer addressing.

Common misconceptions: Many confuse IKEv1 and IKEv2 syntax and assume this command applies to IKEv2 (it does not). Others assume the address always means a single host (/32), but IOS behavior and exam intent often map 172.16.0.0 to the classful /16 network.

Exam tips: Memorize the syntax split: "crypto isakmp" = IKEv1; "crypto ikev2" = IKEv2. For PSK questions, focus on how the peer is matched (by remote IP address) and whether the question implies a host-specific key or a network/range-based key.
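The explicit-mask best practice mentioned above can be sketched as follows; the host address 172.16.1.1 is a hypothetical peer used purely for contrast:

```
! Preferred: state the mask explicitly rather than relying on classful assumptions
crypto isakmp key ciscXXXXXXXX address 172.16.0.0 255.255.0.0     ! any IKEv1 peer in 172.16.0.0/16
crypto isakmp key ciscXXXXXXXX address 172.16.1.1 255.255.255.255 ! one specific host peer
```

Writing the mask removes any ambiguity about whether a key line matches a network range or a single host.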
HQ_Router(config)# username admin5 privilege 5
HQ_Router(config)# privilege interface level 5 shutdown
HQ_Router(config)# privilege interface level 5 ip
HQ_Router(config)# privilege interface level 5 description
Refer to the exhibit. A network administrator configures command authorization for the admin5 user. What is the admin5 user able to do on HQ_Router after this configuration?
Incorrect. Although 'privilege interface level 5 ip' lowers the privilege for the 'ip' command tree within interface configuration mode, the user still must first enter that mode with the global 'interface' command. The exhibit does not lower the privilege level for the parent 'interface' command, so admin5 cannot reach the point where 'ip address' could be entered. Because the command path is incomplete, setting an interface IP address is not actually possible from the shown configuration.
Incorrect. Adding subinterfaces requires access to the global 'interface' command and the ability to specify a new logical interface such as GigabitEthernet0/0.10. The exhibit does not grant privilege level 5 access to the parent interface-selection command, and it certainly does not grant broad interface creation capability. Therefore admin5 cannot add subinterfaces.
Correct. The user admin5 has privilege level 5, but the configuration shown only changes the privilege required for specific commands inside interface configuration mode. There is no command such as 'privilege configure level 5 interface' or equivalent lowering of the global 'interface' command, so admin5 cannot enter interface configuration mode in the first place. In addition, 'no' forms are treated as separate command paths in IOS privilege handling, so lowering 'shutdown', 'ip', and 'description' does not automatically authorize all corresponding 'no' variants. Therefore the best answer is that admin5 is not able to complete any configurations.
Incorrect. Full configuration capability would require broad access to configuration mode and many commands that remain at higher privilege levels, typically level 15. The exhibit lowers only three interface-mode command paths and does not grant access to all configuration commands or even to the parent command needed to enter interface mode. As a result, admin5 is far from being able to complete all configurations.
Core concept: Cisco IOS privilege levels apply to specific commands and command modes, and lowering the privilege of subcommands does not automatically grant access to the parent command required to reach that mode.

Why correct: admin5 is assigned privilege level 5, and the configuration lowers only certain interface-mode commands (shutdown, ip, and description) to level 5. However, there is no corresponding privilege change for the global configuration command 'interface', so the user cannot enter interface configuration mode to use those commands.

Key features: IOS privilege customization is hierarchical in practice, and access to a child command is useless unless the user can access the parent mode or command path.

Common misconceptions: Many candidates assume that granting 'privilege interface level 5 ip' automatically lets a user configure interface IP addresses, but it only changes the required level for that command once already in interface configuration mode.

Exam tips: Always verify whether the user can access every step in the command path, especially the parent command that enters a configuration submode.
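As a hypothetical completion of the exhibit, the commands below show what would additionally be needed for admin5 to actually reach interface configuration mode; they are not part of the question's configuration:

```
! Hypothetical additions: open the full command path down to interface mode
privilege exec level 5 configure terminal   ! let level 5 enter global configuration mode
privilege configure level 5 interface        ! let level 5 enter interface configuration mode
```

Without both steps of the parent path, the interface-mode grants in the exhibit (shutdown, ip, description) remain unreachable.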
In which situation should an Endpoint Detection and Response solution be chosen versus an Endpoint Protection Platform?
Correct. EDR is selected when the organization needs advanced detection and response beyond traditional prevention. It provides continuous endpoint telemetry, behavioral analytics, investigation timelines, and response actions (isolation, process kill, remediation). This is especially important for detecting fileless attacks, living-off-the-land techniques, and post-exploitation activity that can bypass classic anti-malware controls.
Incorrect. The presence or absence of a firewall is not the deciding factor between EPP and EDR. Firewalls provide network-layer control, while EPP/EDR provide endpoint-layer protection and visibility. Even with strong perimeter security, endpoints can be compromised via phishing, stolen credentials, or remote work scenarios. EDR complements network controls; it does not replace them.
Incorrect. Traditional anti-malware detection aligns more closely with EPP, which focuses on prevention using signatures, reputation, and basic behavioral rules. While many EDR products may include or integrate EPP-like prevention, the defining reason to choose EDR is advanced detection, investigation, and response capabilities rather than classic anti-malware alone.
Incorrect. Both EPP and EDR are typically designed for centralized management to enforce policies, deploy agents, collect telemetry, and coordinate response actions. Saying there is “no need” for central management does not favor EDR; in fact, EDR’s value (correlation, hunting, response orchestration) depends heavily on centralized visibility and control.
Core Concept: This question tests the difference between an Endpoint Protection Platform (EPP) and Endpoint Detection and Response (EDR). EPP is primarily focused on prevention (blocking known threats using signatures, reputation, and basic behavioral rules). EDR is focused on detection, investigation, and response to advanced or unknown threats by collecting endpoint telemetry and enabling threat hunting and incident response.

Why the Answer is Correct: An EDR solution should be chosen when an organization needs more advanced detection capabilities beyond traditional anti-malware. EDR continuously monitors endpoint activity (process execution, file changes, registry modifications, network connections, user actions) and correlates events to detect suspicious behaviors such as living-off-the-land attacks, credential dumping, lateral movement, and persistence mechanisms. It also supports rapid investigation and containment actions (isolate host, kill process, quarantine file, rollback/remediate), which are critical when prevention controls are bypassed.

Key Features / Best Practices: EDR capabilities commonly include: continuous telemetry collection, behavioral analytics, MITRE ATT&CK mapping, alert triage with timelines, threat hunting queries, and response actions (containment and remediation). Best practice is to deploy EPP + EDR together (or an integrated platform) because EPP reduces commodity malware noise while EDR handles post-compromise visibility and response. Centralized management and policy enforcement are typically required for both.

Common Misconceptions: Some assume EDR is only needed if perimeter controls (like firewalls) are missing, but endpoint security is not a substitute for network segmentation and perimeter defenses. Others think traditional anti-malware is “enough,” but modern attacks often use fileless techniques and legitimate tools that evade signature-based detection. Also, both EPP and EDR are generally centrally managed; lack of central management is not a reason to choose EDR.

Exam Tips: For SCOR, remember: EPP = prevent/block (signatures, reputation, basic behavior). EDR = detect/investigate/respond (telemetry, hunting, containment). If the question mentions advanced detection, visibility, incident response, or threat hunting, it points to EDR. If it emphasizes traditional anti-malware prevention, it points to EPP.
What is the function of SDN southbound API protocols?
Incorrect. Southbound APIs are not intended for the static configuration of control plane applications. Their purpose is to let the controller interact with infrastructure devices, not to configure software applications that run above the controller. The phrase 'control plane applications' points more toward northbound integrations or application-layer functions. Also, emphasizing 'static configuration' conflicts with the programmable and centralized nature of SDN operations.
Incorrect. The role of a southbound API is not defined by whether the controller uses REST. The exam concept being tested is the direction and purpose of communication: controller to devices for programming the network. While some technologies may use specific protocols or data models, the essential function is infrastructure control, not REST usage itself. This option is too narrow and does not describe the actual purpose of southbound APIs.
Correct. Southbound API protocols are used by the SDN controller to communicate with the underlying network devices and make changes to their behavior. This includes programming forwarding tables, applying policy, and modifying operational parameters on switches, routers, or other infrastructure elements. In SDN, the controller is the centralized decision point, and southbound interfaces are how those decisions are enforced in the network. Therefore, the option describing the controller making changes is the best match for the function of southbound APIs.
Incorrect. Southbound APIs do not exist to dynamically configure control plane applications. They are used by the controller to manage the underlying network devices and influence data-plane behavior. Control plane applications typically interact with the controller through northbound APIs, not southbound ones. This option confuses application-to-controller communication with controller-to-device communication.
Core Concept: Software-Defined Networking (SDN) separates the control plane (centralized controller logic) from the data plane (forwarding devices). SDN APIs are commonly described as northbound (controller to applications) and southbound (controller to network devices). Southbound API protocols are the mechanisms the controller uses to program, monitor, and manage the forwarding behavior and state of the underlying infrastructure.

Why the Answer is Correct: SDN southbound protocols primarily enable the controller to make changes on network devices (switches/routers/firewalls) by installing or modifying forwarding entries, policies, and operational parameters. Examples include OpenFlow (flow programming), NETCONF/RESTCONF (configuration/state via YANG models), gNMI/gRPC (telemetry and configuration in some ecosystems), and vendor-specific protocols/APIs. The key idea is controller-to-device control: pushing intent/policy into concrete device behavior.

Key Features / Best Practices:
- Device programming: install flows, ACL-like rules, QoS, segmentation constructs, or service chaining instructions depending on the architecture.
- State retrieval: collect operational state and counters to validate intent and support closed-loop automation.
- Model-driven management: NETCONF/RESTCONF with YANG provides structured, transactional configuration and consistent state representation.
- Security considerations: authenticate/authorize controller-to-device sessions (TLS, SSH), apply least privilege, and audit changes; this matters in SCOR because SDN control channels are high-value targets.

Common Misconceptions: Many learners confuse southbound with REST. REST is typically associated with northbound APIs (controller exposing RESTful endpoints to apps) or with RESTCONF (which is southbound but is not “the” defining function). Another confusion is focusing on “control plane applications” configuration; southbound is about programming devices/data plane behavior, not configuring apps.

Exam Tips: Remember: Northbound = apps talk to controller (often REST). Southbound = controller talks to devices (OpenFlow, NETCONF/RESTCONF, etc.) to program and retrieve state. If an option describes “controller making changes to the network devices,” it aligns with southbound functionality.
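On platforms that support model-driven management (for example, Cisco IOS XE), the southbound interfaces a controller talks to can be enabled with a few device-side commands. This is a minimal sketch, not a complete controller integration:

```
! Enable model-driven (southbound) management interfaces on an IOS XE device
netconf-yang            ! NETCONF over SSH (default port 830), YANG-modeled config/state
restconf                ! RESTCONF over HTTPS, same YANG models via REST semantics
ip http secure-server   ! HTTPS transport required by RESTCONF
```

Once enabled, a controller authenticates to the device and pushes or retrieves YANG-modeled configuration and operational state, which is the controller-to-device direction the question describes.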
An engineer needs behavioral analysis to detect malicious activity on the hosts, and is configuring the organization's public cloud to send telemetry using the cloud provider's mechanisms to a security device. Which mechanism should the engineer configure to accomplish this goal?
sFlow is a sampling-based telemetry protocol exported by capable network devices (switches/routers) to collectors for traffic visibility. It is common in campus/data center networks but is not typically something you can enable on a public cloud provider’s virtual network fabric as a native service. Because the question emphasizes using the cloud provider’s mechanism, sFlow is not the best fit.
NetFlow (and IPFIX) exports flow records from routers, switches, or firewalls to a collector and is widely used for network behavior analysis on-prem. In public cloud, you usually cannot configure NetFlow on the underlying virtual switches/routers managed by the provider. Instead, you use provider-native flow logging (for example, VPC Flow Logs), making NetFlow an unlikely answer here.
A mirror port (SPAN) is a switch feature that copies packets from one or more ports/VLANs to a monitoring port for packet capture/IDS. Public cloud environments generally do not provide traditional switch ports to configure SPAN. Some clouds offer packet mirroring services, but the option says “mirror port,” which is an on-prem construct and not the typical cloud-provider telemetry mechanism referenced in SCOR.
VPC Flow Logs are the AWS-native mechanism to capture network flow metadata for interfaces, subnets, or entire VPCs and export it to logging/streaming services for analysis. This aligns with “public cloud,” “cloud provider’s mechanisms,” and “send telemetry to a security device” for behavioral analytics and malicious activity detection. It provides scalable visibility without requiring access to underlying network devices.
Core concept: This question tests cloud-native network telemetry mechanisms used to provide visibility and support behavioral analytics/detection by exporting traffic metadata from a public cloud environment to a security analytics platform (for example, Cisco Secure Network Analytics/Stealthwatch or a SIEM).

Why the answer is correct: In public cloud, you typically cannot rely on traditional on-prem mechanisms like SPAN/mirror ports on physical switches, and you often cannot enable classic NetFlow/sFlow on the cloud provider’s virtual switching fabric. Instead, cloud providers expose their own telemetry services. In AWS, the canonical mechanism is VPC Flow Logs, which records IP traffic metadata (5-tuple, bytes, packets, accept/reject, interface/ENI, etc.) for VPCs, subnets, or ENIs and exports it to CloudWatch Logs, S3, or Kinesis. Security tools can ingest these logs to perform behavioral analysis (east-west and north-south patterns, unusual connections, beaconing indicators, policy violations) and detect malicious activity affecting cloud-hosted workloads.

Key features / configuration points:
- Scope: enable at VPC, subnet, or ENI level depending on required granularity.
- Filtering: capture accepted, rejected, or all traffic; rejected traffic is valuable for threat hunting and misconfiguration detection.
- Delivery: choose CloudWatch Logs/S3/Kinesis; Kinesis is common for near-real-time streaming to analytics platforms.
- Limitations: flow logs are metadata, not full packets; they support behavior analytics rather than deep packet inspection.

Common misconceptions:
- “Behavioral analysis on hosts” can mislead candidates into thinking endpoint/EDR telemetry is required. However, the question explicitly says “configure the organization’s public cloud to send telemetry using the cloud provider’s mechanisms,” pointing to cloud-native flow logging rather than endpoint agents.
- NetFlow/sFlow are well-known telemetry protocols, but they generally require network devices that can export those records; public cloud virtual networks usually do not expose that capability directly.

Exam tips: When you see “public cloud” + “cloud provider’s mechanisms” + “telemetry to a security device,” think: AWS VPC Flow Logs, Azure NSG Flow Logs, or GCP VPC Flow Logs. If the option list includes “VPC flow logs,” it is the best match. Mirror/SPAN is typically on-prem or limited to specific cloud packet mirroring features (and would be called out explicitly as packet mirroring, not a generic mirror port).
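To show why flow logs are metadata rather than packets, here is the shape of a default-format (version 2) VPC Flow Logs record. The field names follow the AWS default format; the second line is an illustrative record with placeholder values:

```
version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
2 123456789010 eni-0a1b2c3d 10.0.1.5 203.0.113.12 49152 443 6 10 8400 1418530010 1418530070 ACCEPT OK
```

Each record summarizes one flow (5-tuple, counters, timestamps, and the accept/reject decision), which is exactly the kind of input a behavioral analytics engine correlates to spot beaconing, lateral movement, or policy violations.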
In which form of attack is alternate encoding, such as hexadecimal representation, most often observed?
Smurf is a classic ICMP amplification/reflection DoS attack using directed broadcasts and spoofed source IPs. Its effectiveness depends on network misconfiguration (allowing directed broadcast) and traffic amplification, not on hiding payloads with alternate encodings. You might see spoofing and ICMP patterns, but hexadecimal/URL encoding is not a typical characteristic of smurf attacks.
Distributed denial of service (DDoS) focuses on overwhelming a target with traffic volume or exhausting resources (SYN floods, UDP floods, HTTP floods). While HTTP-layer DDoS can include varied URIs, alternate encoding is not the defining or most commonly tested trait. The key indicators are scale, botnet distribution, and traffic patterns rather than encoded payload obfuscation.
Cross-site scripting (XSS) commonly uses alternate encoding (hex, Unicode, URL encoding, HTML entities) to evade filters and WAF signatures. Attackers encode special characters and script-related strings so input validation misses them, but the browser decodes them during rendering, enabling script execution. This encoding/decoding mismatch is a frequent technique in reflected and stored XSS payloads.
Rootkit exploits aim for stealth and persistence on endpoints, often through kernel/user-mode hooking, driver manipulation, or hiding processes/files/registry keys. Although malware may use packing or obfuscation, the exam context of “alternate encoding such as hexadecimal representation” is much more aligned with web injection evasion (like XSS) than with rootkit installation or concealment techniques.
Core Concept: Alternate encoding (for example, hexadecimal, Unicode, URL encoding, double-encoding) is a common obfuscation technique used in application-layer attacks to bypass input validation, web application firewalls (WAFs), and signature-based detection. It is especially associated with attacks that inject script or markup into web pages.

Why the Answer is Correct: Cross-site scripting (XSS) frequently uses alternate encodings to disguise dangerous characters and keywords such as <, >, ", ', /, and strings like "script" or event handlers (onerror, onclick). For example, an attacker may encode characters as %3Cscript%3E or use HTML entities (&lt;script&gt;) so the payload passes through filters that only look for literal patterns. When the browser decodes the content during rendering, the malicious script executes in the victim's context. This "encode to evade, decode to execute" behavior is a hallmark of XSS and other web injection attacks.

Key Features / Best Practices: Effective defenses focus on canonicalization and context-aware output encoding. Canonicalization means normalizing input (decoding/transforming it to a single standard form) before validation so that encoded variants are still caught. Use allow-list input validation where possible, and apply output encoding based on context (HTML, attribute, JavaScript, URL). Deploy a WAF with normalization enabled, and use modern browser protections (CSP, HttpOnly/Secure cookies) to reduce impact.

Common Misconceptions: Candidates may associate "hex encoding" with low-level malware or rootkits, but rootkits more often rely on privilege escalation, persistence, and stealth techniques (hooking, kernel manipulation) rather than web-style encoding evasion. DDoS and smurf attacks are volumetric/reflection attacks; encoding tricks are not central to their operation.

Exam Tips: When you see "alternate encoding," "hex/Unicode/URL encoding," "obfuscation of payload," or "bypass filters," think application-layer injection: XSS (and often SQLi). For SCOR, map the technique to the layer: encoding evasion is typically content security / web application security rather than network flooding or endpoint persistence.
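The "encode to evade, decode to execute" problem and the canonicalize-before-validate defense can be sketched in Python. The naive filter and the `canonicalize` helper below are illustrative toys, not production WAF logic; they only show why a filter that inspects raw input misses the %3Cscript%3E variant that a browser would happily decode.

```python
import html
from urllib.parse import unquote

def canonicalize(value: str, max_rounds: int = 3) -> str:
    """Repeatedly URL-decode and HTML-entity-decode until the value
    stabilizes, so double-encoded variants are also normalized.
    Bounded to avoid pathological decode loops."""
    for _ in range(max_rounds):
        decoded = html.unescape(unquote(value))
        if decoded == value:
            break
        value = decoded
    return value

def naive_filter(value: str) -> bool:
    """Accepts input unless it contains a literal '<script' -- this
    misses encoded variants entirely."""
    return "<script" not in value.lower()

def canonical_filter(value: str) -> bool:
    """Normalize to canonical form first, then apply the same check."""
    return "<script" not in canonicalize(value).lower()

payload = "%3Cscript%3Ealert(1)%3C%2Fscript%3E"  # hex/URL-encoded <script> payload
print(naive_filter(payload))      # True  -- the encoded payload slips past
print(canonical_filter(payload))  # False -- caught after canonicalization
```

The same canonical filter also catches double-encoded input such as %253Cscript%253E, which decodes to %3Cscript%3E on the first pass and to <script> on the second; this is exactly the class of bypass that single-pass validation misses.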
Why is it important to implement MFA inside of an organization?
Correct. MFA significantly reduces the effectiveness of brute-force and password-spraying attacks because a guessed password alone is insufficient to authenticate. Attackers may still discover valid credentials, but they typically cannot complete login without the second factor. This is why MFA is a core identity control in Zero Trust and is commonly required for VPN, admin access, and cloud applications.
Partially true in some cases, but not the best general answer. MFA can reduce the impact of phishing when attackers only steal passwords; however, many phishing campaigns now use adversary-in-the-middle techniques to capture OTPs or hijack sessions, and “push fatigue” can trick users into approving prompts. Only phishing-resistant MFA (FIDO2/WebAuthn, cert-based) strongly addresses phishing.
Incorrect. MFA does not prevent Denial of Service (DoS) attacks. DoS targets availability by overwhelming resources (bandwidth, CPU, state tables, application threads). MFA may even add authentication overhead that could be abused in some scenarios. DoS mitigation typically involves rate limiting, DDoS scrubbing, WAF/CDN protections, and resilient architecture.
Incorrect as a general statement. MFA alone does not inherently prevent man-in-the-middle (MITM) attacks. A MITM can intercept traffic or proxy authentication flows; some MFA methods (OTP/push) can be relayed in real time. Preventing MITM relies on strong TLS, certificate validation, mutual authentication, secure DNS, and phishing-resistant MFA methods like FIDO2 that bind authentication to the origin.
Core Concept: Multi-Factor Authentication (MFA) strengthens identity security by requiring two or more independent factors (something you know, have, or are) before granting access. In SCOR, MFA is a foundational control aligned with Zero Trust principles: never trust, always verify, and reduce reliance on passwords as a single point of failure.

Why the Answer is Correct: Option A is the best answer because MFA directly reduces the success rate of brute-force and password-spraying attacks. Even if an attacker guesses or cracks a password through repeated attempts, they typically cannot complete authentication without the additional factor (for example, a push approval, OTP, or hardware key). MFA therefore breaks the attacker's ability to turn "credential discovery" into "account takeover," which is the primary goal of brute-force techniques.

Key Features / Best Practices: Effective MFA implementations enforce MFA for privileged/admin accounts first, then expand to all users; integrate with centralized identity providers (IdPs) and AAA (RADIUS/TACACS+); use conditional access (risk-based policies, device posture, geolocation, impossible travel); and prefer phishing-resistant methods (FIDO2/WebAuthn, hardware keys, certificate-based authentication) over SMS OTP. Also pair MFA with account lockout/rate limiting, strong password policies, and monitoring (SIEM) for repeated failures.

Common Misconceptions: MFA is often marketed as "stopping phishing," which can be partially true but is not universal. Many MFA methods (push-to-approve, OTP) can be bypassed via real-time phishing proxies, MFA fatigue, or session token theft. Similarly, MFA does not inherently stop DoS attacks, and it does not automatically prevent man-in-the-middle attacks unless the MFA method resists interception and the channel is protected (for example, mutual TLS or FIDO2).

Exam Tips: For SCOR-style questions, map each control to the attack it most directly mitigates. MFA most directly mitigates credential-based compromise (brute force/spraying/reuse). If the question asked specifically about phishing-resistant MFA, then phishing might be the best fit. Watch for wording like "prevent phishing from being successful" versus "reduce account takeover from stolen passwords."
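To make concrete why a guessed password alone cannot complete authentication, here is a minimal stdlib-only sketch of TOTP second-factor verification per RFC 6238 (HMAC-SHA1 variant, 30-second step). Real deployments add clock-skew windows, rate limiting, and replay protection on top of this; the shared secret below is the RFC test-vector key, used purely for illustration.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, for_time=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter,
    dynamically truncated (RFC 4226) to a short numeric code."""
    now = time.time() if for_time is None else for_time
    counter = int(now // timestep)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret: bytes, submitted: str) -> bool:
    """A guessed password is useless here: the attacker must also produce
    the current code derived from the shared secret they do not hold."""
    return hmac.compare_digest(totp(secret), submitted)

# RFC 6238 test-vector secret (ASCII "12345678901234567890"), illustrative only.
shared_secret = b"12345678901234567890"
print(totp(shared_secret))  # changes every 30 seconds
```

This also illustrates the misconception section above: the code is bound only to time and the shared secret, not to the site the user is visiting, which is why OTP-style factors can be relayed in real time by a phishing proxy, while FIDO2/WebAuthn binds the authentication to the origin.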