Cisco 350-701: Implementing and Operating Cisco Security Core Technologies (SCOR)
Practice questions with verified answers, per-option explanations, and in-depth question analysis.
Interface MAC Address Method Domain Status Fg Session ID
Gi4/15 0050.b6d4.8a60 dot1x DATA Auth 0A02198200001
Gi8/43 0024.c4fe.1832 dot1x VOICE Auth 0A02198200000
Gi10/25 0026.7391.bbd1 dot1x DATA Auth 0A02198200001
Gi8/28 0026.0b5e.51d5 dot1x VOICE Auth 0A02198200000
Gi4/13 0025.4593.e575 dot1x VOICE Auth 0A02198200000
Gi10/23 0025.8418.217f dot1x VOICE Auth 0A02198200000
Gi7/4 0025.8418.1bc7 dot1x VOICE Auth 0A02198200000
Gi7/7 0026.0b5e.50fb dot1x VOICE Auth 0A02198200000
Gi8/14 c85b.7604.fa1d dot1x DATA Auth 0A02198200001
Gi10/29 0026.0b5e.528a dot1x VOICE Auth 0A02198200000
Gi4/2 0026.0b5e.4f9f dot1x VOICE Auth 0A02198200000
Gi10/30 0025.4593.e5ac dot1x VOICE Auth 0A02198200000
Gi8/29 68bd.aba5.2e44 dot1x VOICE Auth 0A02198200000
Gi7/4 54ee.75db.d766 dot1x DATA Auth 0A02198200001
Gi2/34 e804.62eb.a658 dot1x VOICE Auth 0A02198200000
Gi10/22 482a.e307.d9c8 dot1x DATA Auth 0A02198200001
Gi9/22 0007.b00c.8c35 mab DATA Auth 0A02198200000
Refer to the exhibit. Which command was used to generate this output and to show which ports are authenticating with dot1x or mab?
"show authentication registrations" is not the standard IOS/IOS-XE command used to display 802.1X/MAB session state in the format shown. The exhibit’s columns (Method, Domain, Status, Session ID) align with the authentication session table, not a “registrations” view. This option can look plausible because “registration” is a common term in NAC contexts, but it is not the typical command for per-session dot1x/mab visibility.
"show authentication method" is not a common Catalyst operational command that produces a per-interface/per-MAC table showing dot1x vs mab sessions. While “method” appears as a column in the exhibit, IOS-XE uses that as part of the session output, not as a standalone command to list all authenticated endpoints and their domains/status/session IDs.
"show dot1x all" is dot1x-focused and may provide 802.1X state information, but it typically does not present the combined 802.1X and MAB session table with VOICE/DATA domains and session IDs in the exact format shown. The exhibit includes a MAB entry, which strongly indicates the broader authentication session command rather than a dot1x-only command.
"show authentication sessions" is the correct command because it produces the table listing Interface, MAC Address, Method (dot1x or mab), Domain (DATA/VOICE), Status (Auth), and a Session ID. It is the primary operational command to confirm which ports/endpoints are authenticating via 802.1X versus MAB and to troubleshoot NAC/ISE deployments.
Core concept: This question tests visibility and verification of IEEE 802.1X and MAB (MAC Authentication Bypass) on access switch ports, key components of Cisco Identity Services Engine (ISE) / AAA-based Network Access Control (NAC). On Catalyst switches, the primary operational command to see per-port/per-MAC authentication state, method used (dot1x vs mab), authorization status, and session identifiers is the authentication session table.
Why the answer is correct: The exhibit shows a tabular output with columns: Interface, MAC Address, Method, Domain, Status, and Session ID. This exact layout matches the common IOS/IOS-XE output of "show authentication sessions" (often with optional keywords like "interface" or "details"). It enumerates active authentication sessions and explicitly indicates the method used (dot1x or mab) and the domain (DATA or VOICE), which is precisely what the question asks: "show which ports are authenticating with dot1x or mab."
Key features / best practices:
- "show authentication sessions" is used to validate 802.1X/MAB operation, including multi-domain authentication (separate VOICE and DATA domains on the same physical port), common in IP phone + PC deployments.
- A Status of "Auth" indicates successful authorization; "Method" indicates whether the endpoint used 802.1X (supplicant-based) or MAB (fallback for non-802.1X devices).
- The session ID shown is used for troubleshooting with RADIUS/ISE logs (correlating switch sessions to ISE Live Logs).
Common misconceptions:
- Engineers sometimes confuse dot1x-specific commands (e.g., "show dot1x") with the broader authentication session table. However, dot1x-only commands may not show MAB sessions or may present different fields.
- "registrations" is more commonly associated with device tracking/registration tables or other features and is not the standard command for 802.1X/MAB session visibility.
Exam tips: For SCOR, memorize the operational triad for NAC troubleshooting: (1) switch session view ("show authentication sessions"), (2) RADIUS/ISE live logs, and (3) endpoint supplicant status. Also remember that multi-domain (VOICE/DATA) outputs strongly suggest the authentication session table rather than a dot1x-only view.
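To make the session-table layout concrete, here is a minimal Python sketch that tallies dot1x versus mab sessions from output like the exhibit. It assumes the six-column row format shown above (Interface, MAC, Method, Domain, Status, Session ID, with the Fg column empty); a real parser would need to handle additional flags and wrapped lines.

```python
# Sketch: tally dot1x vs mab sessions from "show authentication sessions"
# output, assuming the column layout seen in the exhibit (Fg column empty).
from collections import Counter

def parse_auth_sessions(output):
    sessions = []
    for line in output.strip().splitlines():
        parts = line.split()
        # Skip the header row and anything that is not a six-field data row.
        if len(parts) < 6 or parts[0] == "Interface":
            continue
        sessions.append({
            "interface": parts[0], "mac": parts[1], "method": parts[2],
            "domain": parts[3], "status": parts[4], "session_id": parts[-1],
        })
    return sessions

def methods_by_count(sessions):
    return Counter(s["method"] for s in sessions)

sample = """\
Interface MAC Address Method Domain Status Fg Session ID
Gi4/15 0050.b6d4.8a60 dot1x DATA Auth 0A02198200001
Gi9/22 0007.b00c.8c35 mab DATA Auth 0A02198200000
"""
print(methods_by_count(parse_auth_sessions(sample)))
```

Counting by method quickly answers the question the exhibit poses: which ports authenticated with 802.1X versus MAB.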
Which two capabilities does TAXII support? (Choose two.)
Exchange is the fundamental purpose and capability of TAXII. The acronym itself stands for Trusted Automated eXchange of Indicator Information, which directly reflects its role in sharing cyber threat intelligence between organizations and tools. TAXII standardizes this exchange so that producers and consumers can interoperate without relying on proprietary mechanisms. In exam terms, 'exchange' is absolutely aligned with what TAXII is built to do.
Pull messaging is a recognized TAXII capability in which a client requests threat intelligence from a TAXII service or server. This model allows consumers to poll for new indicators, observables, or other CTI objects when needed rather than requiring unsolicited delivery. It is especially useful in controlled environments where clients initiate outbound connections through firewalls or proxies. Pull-based retrieval is one of the classic TAXII interaction patterns and is a strong exam keyword.
Binding is not typically treated as one of the core TAXII capabilities in certification-style questions. While TAXII can be implemented over specific transports such as HTTPS and may rely on protocol mappings, that is more of an implementation detail than a primary functional capability. The question asks what TAXII supports, and the expected focus is on intelligence exchange and messaging patterns. Therefore, binding is not the best answer choice here.
Correlation is not performed by TAXII itself. Correlation is an analytical function usually handled by SIEM, XDR, TIP, or other security analytics platforms that compare threat intelligence with logs, events, or telemetry. TAXII only provides a standardized way to move CTI data between systems. It does not define detection logic or event matching behavior.
Mitigating is not a TAXII protocol capability. Mitigation refers to taking defensive action, such as blocking malicious IPs, updating firewall rules, or triggering response workflows in other security tools. TAXII may deliver the intelligence that informs those actions, but it does not perform the mitigation itself. This makes mitigation an outcome of CTI usage rather than a TAXII-supported capability.
Core concept: TAXII (Trusted Automated eXchange of Indicator Information) is an application-layer protocol used to exchange cyber threat intelligence between systems, typically carrying STIX-formatted data.
Why correct: TAXII explicitly exists to enable the exchange of CTI and supports pull-based messaging patterns for retrieving intelligence from a server.
Key features: it standardizes how threat intelligence is requested, shared, and transported between producers and consumers, often over HTTPS.
Common misconceptions: candidates often confuse TAXII transport capabilities with analytical functions like correlation or operational outcomes like mitigation, and may also overemphasize implementation details such as bindings.
Exam tips: remember STIX defines the intelligence content, while TAXII defines how that intelligence is exchanged and accessed, especially through request/response and collection retrieval workflows.
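As a sketch of the pull pattern, the snippet below assembles the pieces of a TAXII 2.1 client request (an HTTPS GET against a collection's objects endpoint). The server URL and collection ID are hypothetical placeholders, no network call is made, and exact endpoint paths should be confirmed against the server's discovery response.

```python
# Sketch of the client side of a TAXII 2.1 pull: the consumer GETs a
# collection's objects endpoint. API root and collection ID below are
# hypothetical placeholders, not a real server.
TAXII_MEDIA_TYPE = "application/taxii+json;version=2.1"

def build_pull_request(api_root, collection_id, added_after=None):
    base = api_root.rstrip("/")
    url = "{}/collections/{}/objects/".format(base, collection_id)
    headers = {"Accept": TAXII_MEDIA_TYPE}  # TAXII 2.1 media type
    # added_after lets the client poll only for intelligence newer
    # than its last pull (incremental retrieval).
    params = {"added_after": added_after} if added_after else {}
    return url, headers, params

url, headers, params = build_pull_request(
    "https://cti.example.com/api1/",          # hypothetical API root
    "91a7b528-80eb-42ed-a74d-c6fbd5a26116",   # hypothetical collection ID
    added_after="2024-01-01T00:00:00Z",
)
print(url)
```

The client-initiated direction of this request is why pull works well through outbound-only firewalls and proxies, as noted above.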
Which flaw does an attacker leverage when exploiting SQL injection vulnerabilities?
Correct. SQL injection exploits improper handling of user-supplied input in a web page/application, especially when input is concatenated into SQL statements or not safely parameterized. The attacker injects SQL syntax through fields, parameters, cookies, or headers to change query behavior. The core flaw is failure to separate code (SQL) from data (user input) via prepared statements and robust server-side validation.
Incorrect. SQL injection is not primarily a vulnerability in Linux or Windows. While the underlying OS can influence post-exploitation impact (e.g., file access, command execution via DB features), SQLi occurs at the application/database query layer. The same SQLi flaw can exist regardless of whether the server runs Linux or Windows, so OS choice is not the leveraged flaw.
Incorrect. The database is the target and executes the resulting query, but the vulnerability is typically introduced by the application that constructs SQL unsafely. Databases can have their own vulnerabilities, yet classic SQL injection is caused by the application failing to parameterize queries and validate input. Treat “database” as a common distractor because it confuses the victim component with the root cause.
Incorrect. Web page images are unrelated to SQL injection. SQLi involves injecting SQL commands through input vectors that reach query construction (forms, URL parameters, API payloads). Images might be relevant to other web issues (e.g., XSS via SVG, content spoofing, or file upload vulnerabilities), but they are not the flaw leveraged in SQL injection.
Core Concept: SQL injection (SQLi) is an application-layer injection attack where an attacker supplies crafted input that is interpreted as part of a SQL query. The vulnerability exists when an application builds SQL statements by concatenating untrusted user input (or otherwise fails to safely parameterize queries), allowing the attacker to alter query logic.
Why the Answer is Correct: Attackers leverage flaws in user input handling, specifically insufficient input validation/sanitization and unsafe query construction. If a web page or web application accepts input (login fields, search boxes, URL parameters, cookies, headers) and inserts it into SQL statements without proper controls (e.g., parameterized queries), the attacker can inject SQL syntax (like quotes, UNION, OR 1=1, stacked queries where supported) to bypass authentication, extract data, modify records, or potentially execute administrative database actions.
Key Features / Best Practices: The primary defenses are:
(1) parameterized queries (prepared statements) and safe ORM usage,
(2) server-side allow-list validation for expected formats,
(3) least-privilege database accounts (the application account should not be DBA),
(4) proper error handling (avoid verbose SQL errors), and
(5) compensating controls like WAF rules/signatures and database activity monitoring.
In Cisco security contexts, this maps to application security fundamentals and content security controls (e.g., WAF/IPS), but the root flaw is still in the application's input handling.
Common Misconceptions: Many assume SQLi is a "database flaw" because the database is impacted. In reality, the database is doing what it is told; the application is the component that incorrectly mixes code and data. Others blame the OS (Linux/Windows), but SQLi is largely OS-agnostic and depends on the application and database interaction.
Exam Tips: When you see “SQL injection,” immediately think “untrusted input + dynamic SQL.” The correct choice usually references improper input validation, lack of parameterized queries, or failure to separate code from data. If an option says “database vulnerability” without mentioning application input handling, it’s typically a distractor.
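The "untrusted input + dynamic SQL" flaw can be shown in a few lines. This is a minimal, self-contained sketch using an in-memory SQLite database and a hypothetical users table; it contrasts unsafe string concatenation with a parameterized query.

```python
# Minimal SQLi demonstration: same login check, built two ways.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(user, pw):
    # BAD: user input is concatenated straight into the SQL text, so
    # injected quotes and operators become part of the query logic.
    query = "SELECT COUNT(*) FROM users WHERE name='{}' AND password='{}'".format(user, pw)
    return conn.execute(query).fetchone()[0] > 0

def login_safe(user, pw):
    # GOOD: placeholders keep the input as data; the driver never
    # interprets it as SQL syntax.
    query = "SELECT COUNT(*) FROM users WHERE name=? AND password=?"
    return conn.execute(query, (user, pw)).fetchone()[0] > 0

payload = "' OR '1'='1"                    # classic auth-bypass input
print(login_vulnerable("alice", payload))  # True: the query logic was rewritten
print(login_safe("alice", payload))        # False: payload stays a string literal
```

The only difference between the two functions is whether code (SQL) and data (input) are separated, which is exactly the flaw the question asks about.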
Sysauthcontrol Enabled
Dot1x Protocol Version 3
Dot1x Info for GigabitEthernet1/0/12
-------------------------------------
PAE = AUTHENTICATOR
PortControl = FORCE_AUTHORIZED
ControlDirection = Both
HostMode = SINGLE_HOST
QuietPeriod = 60
ServerTimeout = 0
SuppTimeout = 30
ReAuthMax = 2
MaxReq = 2
TxPeriod = 30
Refer to the exhibit. Which command was used to display this output?
'show dot1x all' is intended to provide a more exhaustive display across all interfaces and often includes more extensive systemwide details than what is shown here. The exhibit presents a standard global header and one interface detail block, not an obvious all-interfaces exhaustive listing. While the command name sounds plausible, it is not the canonical match for this concise mixed global-plus-interface output. On Cisco IOS, the simpler 'show dot1x' is the expected command for this format.
The command 'show dot1x' displays the global 802.1X operational status first, including fields such as 'Sysauthcontrol Enabled' and 'Dot1x Protocol Version 3'. It also includes detailed per-interface sections like 'Dot1x Info for GigabitEthernet1/0/12', showing PAE role, PortControl, ControlDirection, HostMode, and timer values. That exact combination of global information plus interface-specific details is what appears in the exhibit. Because the output is not limited to only the interface block, the broader 'show dot1x' command is the best match.
'show dot1x all summary' would produce a summarized view rather than detailed operational parameters. Summary commands typically show condensed status information in tabular or abbreviated form, not individual timer values like QuietPeriod, SuppTimeout, MaxReq, and TxPeriod. The exhibit clearly contains detailed per-interface operational settings rather than a summary. Therefore this option does not fit the level of detail shown.
'show dot1x interface gi1/0/12' would be expected to focus on the specified interface only. The exhibit, however, begins with global 802.1X information such as 'Sysauthcontrol Enabled' and 'Dot1x Protocol Version 3' before the interface section. That inclusion of global status indicates the command is broader than an interface-only query. For that reason, this option is not the best match for the displayed output.
Core concept: This question tests recognition of Cisco IOS 802.1X verification commands and the difference between global, summary, and interface-specific output. The correct command is the one whose output includes both global 802.1X settings and a detailed interface section.
Key features: the exhibit contains the global lines 'Sysauthcontrol Enabled' and 'Dot1x Protocol Version 3' followed by 'Dot1x Info for GigabitEthernet1/0/12' with operational parameters such as PAE role, PortControl, HostMode, and timers.
Common misconception: assuming that any output mentioning a specific interface must come from an interface-qualified command; Cisco show commands often include interface sections in broader output.
Exam tip: when you see both global 802.1X status and interface details in one display, prefer 'show dot1x' over a narrower interface-only or summary command.
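The mixed global-plus-interface layout can be made explicit with a small parser. This is a sketch only, assuming output shaped like the exhibit (global lines first, then 'Dot1x Info for ...' sections of 'key = value' pairs):

```python
# Sketch: split "show dot1x" output into its global header and
# per-interface key/value sections, mirroring the exhibit's layout.
def split_dot1x_output(text):
    global_lines, interfaces, current = [], {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Dot1x Info for "):
            current = stripped.split()[-1]       # interface name
            interfaces[current] = {}
        elif current and "=" in stripped:
            key, value = (p.strip() for p in stripped.split("=", 1))
            interfaces[current][key] = value
        elif current is None and stripped:
            global_lines.append(stripped)        # lines before any section
    return global_lines, interfaces

sample = """\
Sysauthcontrol Enabled
Dot1x Protocol Version 3
Dot1x Info for GigabitEthernet1/0/12
-------------------------------------
PAE = AUTHENTICATOR
PortControl = FORCE_AUTHORIZED
"""
globals_, per_intf = split_dot1x_output(sample)
print(globals_)
```

That the parse yields both a non-empty global section and an interface section is exactly the structural cue that points to 'show dot1x' rather than an interface-only or summary command.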
What provides visibility and awareness into what is currently occurring on the network?
CMX (Cisco Connected Mobile Experiences) is primarily a wireless location and analytics solution. It provides visibility into Wi-Fi client presence, movement, and engagement (e.g., location-based services), not comprehensive awareness of overall network operations and security events across routing, switching, and security infrastructure. It’s valuable for wireless analytics use cases, but it is not the general mechanism for “what is currently occurring on the network.”
WMI (Windows Management Instrumentation) is a Microsoft framework for managing and monitoring Windows endpoints and servers. It can provide visibility into host processes, services, and system metrics, but it is not a network visibility technology for Cisco network devices. In SCOR context, WMI is more aligned with endpoint/host management rather than network-wide operational awareness.
Cisco Prime Infrastructure is a network management platform (especially for enterprise wired/wireless) that can monitor devices, configurations, and performance. While it can provide dashboards and reports, it is a specific product and often relies on underlying data collection methods (SNMP, syslog, NetFlow, telemetry). The question asks for what provides visibility conceptually; telemetry is the more direct and general answer.
Telemetry is the mechanism for exporting operational and security data from network devices to collectors/analytics tools, often in a streaming, near-real-time fashion (model-driven telemetry using YANG/gRPC/gNMI, or related exports). This continuous data feed enables rapid visibility and awareness of current network conditions, anomalies, and events, making it the best match for the question’s intent.
Core Concept: This question tests network visibility and situational awareness: how security and operations teams understand what is happening on the network in near real time. In Cisco security architectures, this is commonly achieved through streaming data (metrics, events, flow records, logs) from network devices to collectors/analytics platforms.
Why the Answer is Correct: Telemetry provides visibility and awareness into what is currently occurring on the network by continuously exporting operational and security-relevant data from devices (routers, switches, firewalls, wireless controllers) to monitoring and analytics systems. Unlike periodic polling, modern telemetry is typically model-driven and streaming (for example, using YANG models over gRPC/gNMI), enabling faster detection of anomalies, performance issues, and security events. This "what is happening now" aspect aligns directly with telemetry's purpose: timely, high-fidelity observability.
Key Features / Best Practices: Telemetry can include interface statistics, CPU/memory, routing state, NetFlow/IPFIX flow data, security events, and application performance indicators. Model-driven telemetry reduces overhead versus frequent SNMP polling and supports structured data. Best practices include selecting the right data sources (flows + device health + security logs), tuning export intervals, ensuring secure transport (TLS), and integrating with SIEM/NDR tools (e.g., Secure Network Analytics/Stealthwatch, Splunk) for correlation and alerting.
Common Misconceptions: Cisco Prime Infrastructure and similar management platforms provide monitoring and reporting, but they are products that may consume telemetry rather than being the foundational concept. CMX is focused on location analytics for wireless clients, not broad network-wide operational awareness. WMI is a Windows management interface, relevant to endpoint monitoring, not network device visibility.
Exam Tips: When you see wording like “visibility and awareness of what is currently occurring,” think “observability” and “streaming/real-time data.” On SCOR, telemetry is a key enabler for visibility (often paired with NetFlow/IPFIX, syslog, and analytics). If the option list includes a general concept (Telemetry) versus specific tools (Prime, CMX), the concept is usually the best match unless the question explicitly names a platform.
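As a concrete illustration of model-driven telemetry, here is a hedged sketch of an IOS-XE subscription that streams CPU utilization to a collector over gRPC. The subscription ID and receiver address are hypothetical, and exact keywords and XPath filters vary by platform and release, so verify against your device's YANG models before use.

```
telemetry ietf subscription 101
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds
 stream yang-push
 update-policy periodic 6000
 receiver ip address 10.0.0.5 57500 protocol grpc-tcp
```

Here the periodic interval is expressed in centiseconds (6000 = 60 seconds), and the device pushes structured, YANG-modeled data to the collector rather than waiting to be polled, which is the "streaming, near-real-time" property the explanation above emphasizes.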
For which two conditions can an endpoint be checked using ISE posture assessment? (Choose two.)
Computer identity is generally used for authentication/authorization decisions (for example, machine authentication with EAP-TLS or AD machine account lookups) and for policy matching in ISE. It is not a typical “posture assessment” condition because posture is about endpoint security health/compliance (firewall, AV, services, patches), not who/what the endpoint is from an identity perspective.
ISE posture can check Windows service state as part of compliance. This is useful to ensure required security components are active (for example, an endpoint protection/EDR service, Windows Update service, or a corporate VPN/security agent service). Service checks are classic posture conditions because they directly indicate whether a security control is running and enforceable.
User identity is evaluated during authentication (802.1X, VPN, etc.) and then used in authorization rules (for example, mapping AD groups to access levels). Posture assessment does not typically “check” user identity as a health requirement; instead, posture checks endpoint configuration and security status, and the posture result can be combined with user identity to decide access.
Windows firewall status is a common ISE posture requirement. ISE can assess whether the firewall is enabled and treat disabled firewall as noncompliant, triggering remediation or restricted access. This aligns with NAC best practices: verify baseline endpoint protections before granting full network access, especially on untrusted networks or for BYOD/corporate endpoints.
Default browser is not a standard ISE posture compliance check in the SCOR/NAC context. Posture focuses on security posture indicators (firewall, AV/EDR, disk encryption, patch level, services/processes, registry/files). While endpoint configuration can be broad, “default browser” is not typically used as a security compliance condition in ISE posture policies.
Core Concept: Cisco ISE Posture Assessment evaluates an endpoint's security compliance (posture) before granting access or assigning an authorization profile. It is part of Network Access Control (NAC) and is commonly enforced with 802.1X, MAB, VPN, or wireless access. Posture uses an agent (Cisco Secure Client/AnyConnect Posture module or ISE Posture agent) and posture policies to check endpoint attributes such as OS settings, security applications, and specific services/processes.
Why the Answer is Correct: ISE posture can validate host conditions that indicate security hygiene and readiness. Two classic posture checks are:
1) Windows service state (Option B): ISE can verify whether a required Windows service is running/stopped (for example, ensuring an endpoint protection service is running). This is a common compliance requirement because many security controls depend on services being enabled.
2) Windows firewall status (Option D): ISE can check whether the Windows Firewall is enabled/disabled and potentially validate profile/state depending on the posture configuration. An enabled firewall is a foundational endpoint security requirement and is frequently used as a posture condition.
Key Features / How It Works:
- Posture policy is built from conditions (requirements) and remediation actions.
- Conditions can include registry keys, file existence/version, process/service state, and security product presence/status.
- Noncompliant endpoints can be placed into a remediation VLAN/SGT/dACL or given limited access until they remediate.
- Best practice is to combine posture with profiling and authentication/authorization (e.g., 802.1X + posture + SGT) for layered control.
Common Misconceptions:
- Identity attributes (user identity/computer identity) are primarily authentication/authorization inputs, not posture "health" checks. They are evaluated via AD/LDAP/certificates and used in policy sets, but they are not posture compliance conditions.
- "Default browser" is a configuration preference and not a typical ISE posture condition in the SCOR context; posture focuses on security controls (firewall, AV/EDR, disk encryption, patch level, services).
Exam Tips:
- For SCOR, remember: ISE posture = endpoint compliance/health (firewall, AV/EDR, services, processes, registry/files), while ISE authentication/authorization = identity (user/machine); posture results can then drive authorization outcomes (dACL, VLAN, SGT, quarantine/remediation).
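The compliant/noncompliant decision logic can be sketched in a few lines. This is purely illustrative: the field names and the "CorpEDR" service are hypothetical, not the actual ISE posture schema or agent API.

```python
# Hypothetical sketch of the two posture conditions discussed above:
# a required service must be running and the Windows firewall enabled.
# Field names ("firewall_enabled", "CorpEDR") are illustrative only.
def evaluate_posture(endpoint):
    checks = {
        "firewall_enabled": endpoint.get("firewall_enabled") is True,
        "required_service_running":
            endpoint.get("services", {}).get("CorpEDR") == "running",
    }
    status = "COMPLIANT" if all(checks.values()) else "NONCOMPLIANT"
    return status, checks

status, checks = evaluate_posture({
    "firewall_enabled": True,
    "services": {"CorpEDR": "stopped"},   # security agent not running
})
print(status)  # NONCOMPLIANT
```

In a real deployment the NONCOMPLIANT result would map to a restricted authorization outcome (remediation VLAN, dACL, or SGT) until the endpoint remediates.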
Which policy represents a shared set of features or parameters that define the aspects of a managed device that are likely to be similar to other managed devices in a deployment?
Group policy is not the correct FMC policy construct for this definition. While the phrase sounds generic and could imply settings shared by a group of devices, in Cisco products "group policy" is a remote-access VPN construct (ASA/Secure Firewall), not an FMC device policy type. Choosing group policy here reflects a terminology trap rather than an actual FMC policy name. On the exam, be careful to distinguish intuitive wording from official FMC policy taxonomy.
An access control policy defines how traffic is evaluated and handled, including permit, block, trust, and inspection actions. It is centered on security rule logic for network flows rather than on shared device-level operational parameters. Although an ACP can be deployed to multiple devices, that does not make it the policy type described in the question. The prompt is about common aspects of managed devices, not traffic enforcement behavior.
"Device management" in Cisco FMC refers to registering devices and editing their individual properties (interfaces, licensing, high availability), not to a named, reusable policy object. The FMC policy taxonomy does not include a shared "device management policy" that is deployed across similar devices; that role belongs to the platform settings policy. The option is attractive because the question repeats the phrase "managed device," but it does not match the product's actual policy types.
A platform settings policy is the correct answer. Cisco documentation defines it as a shared set of features or parameters that define the aspects of a managed device that are likely to be similar to other managed devices in a deployment, such as time settings, external authentication, SNMP, syslog, and SSH access. Because a single platform settings policy can be applied to many devices, administrators configure these common device-level services once and keep them consistent across the deployment. The question's wording matches this documented definition almost verbatim.
Core concept: This question tests knowledge of Cisco Secure Firewall Management Center (FMC) policy types and their intended scope. In FMC, a platform settings policy defines a shared set of device-level features and parameters for managed devices that have similar operational characteristics.
Why correct: The wording in the question closely matches Cisco's description of the platform settings policy as a reusable collection of settings for aspects of managed devices that are likely to be similar across a deployment.
Key features: it centralizes common device configuration (time, external authentication, SNMP, syslog, SSH), improves consistency, reduces repetitive per-device setup, and supports scalable administration because one policy can be shared by many devices.
Common misconceptions: access control policy governs traffic handling; "device management" in FMC is an administrative task (registering and maintaining devices), not a shared policy object; group policy is a VPN construct, not an FMC policy type.
Exam tips: when the prompt refers to shared device aspects or reusable device-level parameters across similar managed devices, think platform settings policy rather than traffic policy.
Which ID store requires that a shadow user be created on Cisco ISE for the admin login to work?
RSA SecurID is primarily an external authentication and token-based verification system rather than a directory used for ISE administrative role mapping. It can validate credentials or second-factor tokens, but it is not the identity store classically associated with requiring a local shadow user for ISE admin login. The question is focused on the store type that needs a local ISE representation for RBAC. That requirement is tied to LDAP, not RSA SecurID.
The Internal Database does not require a shadow user because the administrator account already exists locally on Cisco ISE. Since the user is native to ISE, both authentication and authorization can be handled directly without creating any additional representation. Admin roles and permissions are assigned straight to the local account. Therefore, there is no need for a shadow user when using the Internal Database.
Active Directory is a common external identity source for Cisco ISE, but it is not the one that requires manually created shadow users for admin login in the way LDAP does. ISE has native AD integration that supports authentication and group-based authorization workflows without the same shadow-user requirement emphasized for LDAP. This makes AD a plausible distractor, especially because AD uses LDAP-related technologies, but it is not the best answer to this specific question. The exam distinction is that LDAP requires the shadow user for admin access.
LDAP is the correct answer because Cisco ISE requires a shadow user to be created locally when an administrator authenticates through an LDAP identity store. LDAP can validate the administrator's credentials, but ISE still needs a local account object to associate that identity with admin roles and permissions. This local shadow user enables ISE to perform authorization, auditing, and RBAC enforcement for the administrative session. Without the shadow user, LDAP authentication alone is not sufficient for successful ISE admin access.
Core concept: This question tests Cisco ISE administrative authentication with external identity stores, and specifically which store requires a locally created shadow user for administrator access. A shadow user is a local ISE representation of an externally authenticated administrator, used so ISE can associate that identity with administrative RBAC settings.
Why correct: LDAP is the identity store that requires a shadow user for Cisco ISE admin login to work. LDAP can authenticate the administrator externally, but ISE still needs a local shadow account to bind that external identity to ISE admin roles, permissions, and authorization behavior. Without that local shadow user, the LDAP-authenticated admin cannot be properly authorized for ISE administrative access.
Key features:
- LDAP is supported as an external identity source for ISE administrator authentication.
- ISE administrative access requires both authentication and local authorization through ISE RBAC.
- Shadow users provide the local object needed to assign admin groups/roles to externally authenticated LDAP users.
- Internal Database users do not need shadow users because they already exist locally.
Common misconceptions:
- Active Directory is often confused with LDAP because AD uses LDAP as a protocol, but ISE has tighter native AD integration and does not rely on manually created shadow users in the same way for admin access.
- RSA SecurID is an authentication mechanism, not the canonical store associated with shadow-user requirements.
- External authentication sources do not automatically imply shadow users; this behavior is specific to certain integrations.
Exam tips:
- Distinguish between authentication and authorization in ISE admin access questions.
- If the question asks specifically about a shadow user for admin login, think of LDAP-based admin authentication.
- Remember that ISE RBAC is always local, even when authentication is external.
Which two kinds of attacks are prevented by multifactor authentication? (Choose two.)
Phishing commonly targets usernames and passwords. MFA mitigates phishing because captured credentials alone are not enough to authenticate; the attacker must also provide a second factor (token/push/FIDO key). Note that some advanced real-time phishing can still bypass OTP-based MFA by relaying codes, so phishing-resistant MFA (FIDO2/WebAuthn) is best practice.
Brute-force (online password guessing) is mitigated by MFA because even a successfully guessed password does not grant access without the additional factor. MFA increases attacker effort and typically triggers more detectable events (repeated failures). It should still be paired with rate limiting, lockout policies, and monitoring to reduce guessing attempts.
Man-in-the-middle attacks are not reliably prevented by MFA. An attacker can proxy the login flow and relay credentials and one-time codes in real time, or steal session cookies after authentication. Only specific MFA implementations that provide origin binding and channel protection (for example, FIDO2/WebAuthn) significantly reduce MITM risk; generic MFA alone is not a guarantee.
DDoS is an availability attack that overwhelms services or network links with traffic. MFA is an authentication/identity control and does not reduce the volume of malicious traffic. DDoS mitigation typically involves rate limiting, scrubbing centers, CDNs/Anycast, WAF protections, and upstream provider cooperation, not stronger user authentication.
Teardrop is a legacy denial-of-service attack exploiting IP fragmentation reassembly issues by sending malformed overlapping fragments. MFA has no relationship to packet fragmentation handling or OS network stack vulnerabilities. Prevention is achieved through patching, modern OS/network stack hardening, and network security controls (IPS/ACLs), not authentication mechanisms.
Core Concept: Multifactor authentication (MFA) strengthens identity verification by requiring two or more independent factors: something you know (password/PIN), something you have (token, authenticator app, push approval, FIDO2 key), and/or something you are (biometrics). The security goal is to reduce the impact of password compromise and make account takeover significantly harder.

Why the Answer is Correct: A (phishing) is mitigated by MFA because stolen credentials alone are insufficient to authenticate. Many phishing campaigns aim to capture usernames/passwords; MFA blocks the attacker unless they can also satisfy the second factor. Stronger MFA methods (FIDO2/WebAuthn, certificate-based, number matching) are particularly effective against credential phishing. B (brute force) is mitigated because even if an attacker guesses or cracks the password through online guessing, they still cannot log in without the additional factor. MFA therefore reduces the success rate of password-guessing attacks and increases attacker cost/time.

Key Features / Best Practices:
- Prefer phishing-resistant MFA (FIDO2/WebAuthn security keys, certificate-based auth) over SMS OTP.
- Use conditional access/risk-based policies (geo-velocity, device posture) to step up to MFA.
- Combine MFA with account lockout/rate limiting, strong password policy, and monitoring of failed logins.
- Educate users about MFA push fatigue; enable number matching or challenge-response to reduce approval spoofing.

Common Misconceptions:
- MFA does not inherently prevent man-in-the-middle (MITM). Real-time phishing proxies can relay credentials and OTPs, and some MITM can hijack sessions/cookies after MFA. Only phishing-resistant methods with origin binding (e.g., FIDO2) significantly reduce this.
- MFA does not stop volumetric attacks like DDoS, nor does it address low-level protocol attacks like teardrop.

Exam Tips: For SCOR-style questions, map the control to the attack type: MFA is an identity control that primarily mitigates credential theft and password guessing. It is not a network availability control (DDoS) and not a protocol stack hardening control (teardrop). Be cautious with MITM: basic MFA helps but is not a guaranteed prevention unless the question specifies phishing-resistant MFA (FIDO2/WebAuthn) and proper session protections.
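To ground what "the additional factor" actually is, here is a minimal stdlib implementation of the OTP algorithms behind most authenticator apps (HOTP, RFC 4226; TOTP, RFC 6238). The possession factor is the shared secret held by the token/app; in TOTP, time is only an input used to derive the moving counter, not a factor itself:

```python
# HOTP/TOTP one-time codes (RFC 4226 / RFC 6238), stdlib only.
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # Same HOTP computation, with the counter derived from wall-clock time
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code changes every interval and requires the secret, a phished or brute-forced password alone never reproduces it; this is the mechanism behind answers A and B above.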
What can be integrated with Cisco Threat Intelligence Director to provide information about security threats, which allows the SOC to proactively automate responses to those threats?
Cisco Umbrella is primarily a cloud-delivered security platform focused on DNS-layer protection, secure web gateway, and internet access control. Although it uses threat intelligence and can participate in a broader Cisco security ecosystem, it is not the canonical product described as feeding threat intelligence into Threat Intelligence Director for this purpose. The question is asking for the integrated source of threat information used by TID, and Umbrella is not the best match. On the exam, Umbrella is more commonly associated with DNS enforcement and web protection than with serving as TID’s primary intelligence integration in this context.
External Threat Feeds can generally provide indicators of compromise to many threat intelligence platforms, so this option may seem plausible at first glance. However, the question is framed around a Cisco integration and asks what can be integrated with Cisco Threat Intelligence Director to provide threat information for proactive automation. In Cisco product-specific exam wording, Cisco Threat Grid is the expected answer because it is the Cisco-native malware intelligence source integrated with TID. Therefore, while external feeds are conceptually related to threat intelligence, they are not the best answer to this Cisco-focused question.
Cisco Threat Grid is Cisco’s malware analysis and threat intelligence platform, and it integrates with Threat Intelligence Director to provide actionable threat data. Threat Grid analyzes suspicious files and URLs, extracts indicators of compromise such as hashes, domains, IPs, and behavioral traits, and makes that intelligence available for operational use. TID can then curate and distribute those indicators to security controls so the SOC can proactively automate blocking, alerting, or other response actions. This directly matches the question’s focus on integrating a source of threat information that enables proactive automated responses.
Cisco Stealthwatch, now known as Secure Network Analytics, is focused on network telemetry, behavioral analytics, and anomaly detection. It helps identify suspicious activity in network traffic, but it is not primarily the threat intelligence source integrated with TID for supplying curated malware indicators and intelligence. The question emphasizes providing information about security threats in a way that enables proactive automated response through TID, which aligns more directly with Threat Grid. Stealthwatch is better understood as a network detection and visibility tool rather than the intended TID integration here.
Core concept: Cisco Threat Intelligence Director (TID) is used to aggregate, curate, and operationalize threat intelligence so security controls can automatically act on indicators of compromise. In Cisco security architecture, one of the key integrations that provides threat information into TID is Cisco Threat Grid, which analyzes malware and produces actionable indicators and intelligence.

Why correct: Threat Grid supplies detailed malware intelligence, file reputation, and extracted indicators that TID can consume and distribute to enforcement devices, enabling the SOC to automate response actions.

Key features: TID centralizes intelligence, deduplicates and scores indicators, and shares them with tools such as firewalls and other security controls; Threat Grid contributes dynamic malware analysis and IOC generation.

Common misconceptions: External threat feeds are generally a valid source of threat intelligence in many platforms, but this question asks what can be integrated with Cisco TID in the Cisco product context, and the expected Cisco-specific answer is Threat Grid.

Exam tips: On SCOR exams, when a Cisco product is asked about as an integration point for malware intelligence and proactive automated response, Threat Grid is the strongest match because it is Cisco's malware analysis and threat intelligence platform.
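TID-style pipelines consume indicators (hashes, domains, IPs) from sources such as Threat Grid and distribute them to enforcement points, commonly exchanged in STIX format. The sketch below parses a simplified STIX 2.x-style indicator object and pulls out the observable; the JSON structure is illustrative, not the exact TID ingest format:

```python
# Minimal sketch: extract an IOC from a simplified STIX 2.x-style indicator.
# The record content and field subset here are illustrative only.
import json
import re

raw = '''{
  "type": "indicator",
  "pattern": "[domain-name:value = 'malicious.example.com']",
  "valid_from": "2024-01-01T00:00:00Z"
}'''

def extract_observable(indicator_json: str) -> tuple:
    obj = json.loads(indicator_json)
    # Pull "object-path = 'value'" out of the STIX pattern string
    m = re.search(r"\[([\w:.-]+)\s*=\s*'([^']+)'\]", obj["pattern"])
    if not m:
        raise ValueError("unsupported pattern")
    return m.group(1), m.group(2)

ioc_type, ioc_value = extract_observable(raw)
print(ioc_type, ioc_value)  # domain-name:value malicious.example.com
```

Once extracted, an indicator like this is what TID scores, deduplicates, and pushes to firewalls or other controls for automated blocking or alerting.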
Want to practice every question anywhere?
Download the free Cloud Pass - includes practice tests, progress tracking & more.
What are the two most commonly used authentication factors in multifactor authentication? (Choose two.)
Biometric factor is a legitimate authentication factor category representing something you are, such as a fingerprint or facial scan. However, the question specifically asks for the two most commonly used factors in MFA, and in practice the most common pair is knowledge plus possession, not knowledge plus biometric. Because possession is not offered as an option, selecting biometric reflects the flawed option set rather than the true industry-standard answer. Therefore this option is valid as a factor type but not as one of the two most commonly used factors in the usual MFA context.
Time factor is not a recognized authentication factor category in standard MFA models. Time may be used as an input to generate a one-time password in TOTP systems, but the factor there is possession of the token or authenticator application, not time itself. Time can also be used in access policy decisions such as restricting logins to business hours. It does not independently prove identity as a factor.
Confidentiality factor is not an authentication factor category. Confidentiality is one of the core security objectives in the CIA triad and refers to preventing unauthorized disclosure of information. It is achieved through controls such as encryption, access restrictions, and data classification. None of those make it a factor used to authenticate a user.
Knowledge factor is a valid authentication factor category and represents something the user knows, such as a password, PIN, or passphrase. It is one of the most widely deployed factors because it is easy to implement and integrates with nearly every identity system. In real-world MFA, it is commonly paired with a possession factor like a token or authenticator app. Among the listed options, this is unquestionably one of the standard factor types.
Encryption factor is not an authentication factor category. Encryption is a security mechanism used to protect data at rest or in transit and may support authentication protocols, but it is not itself evidence of identity. Authentication factors are based on what a user knows, has, or is. This option confuses a cryptographic control with an identity-verification category.
Core concept: Multifactor authentication uses two or more different categories of evidence to verify identity. The standard factor categories are knowledge (something you know), possession (something you have), and inherence/biometric (something you are).

Why correct: The question asks for the two most commonly used factors in MFA, which in practice are knowledge and possession. Because possession is not offered, the intended answers are knowledge factor and biometric factor, the only two legitimate authentication factor categories among the options provided.

Key features: Knowledge includes passwords and PINs, biometric includes fingerprints and facial recognition, while possession includes tokens, smart cards, and authenticator apps.

Common misconceptions: Time, confidentiality, and encryption are not authentication factor categories; also note that biometric is not generally considered one of the two most commonly used MFA factors compared with possession.

Exam tips: On certification exams, if asked for the most common MFA factors, expect knowledge plus possession unless the available options force selection of the only valid factor categories listed.
Which algorithm provides encryption and authentication for data plane communication?
AES-GCM is an AEAD construction that provides both encryption (confidentiality) and authentication/integrity via an authentication tag. It is commonly used for data plane protection in protocols like IPsec ESP and TLS AEAD cipher suites. Because it combines encryption and integrity in one algorithm suite, it is a standard answer when a question asks for both encryption and authentication.
SHA-96 is not an encryption algorithm; it refers to a truncated hash output often associated with older IPsec integrity checks (e.g., HMAC-SHA-1-96). Hashes (even when truncated) provide integrity/authentication when used as HMAC, but they do not provide confidentiality. Therefore it cannot satisfy “encryption and authentication” by itself.
AES-256 indicates AES with a 256-bit key size, which provides strong encryption. However, AES-256 alone does not provide authentication/integrity unless used in an authenticated mode (like GCM/CCM) or combined with a separate integrity mechanism (like HMAC-SHA). The question asks for both encryption and authentication, so AES-256 alone is incomplete.
SHA-384 is a cryptographic hash function (SHA-2 family) used for integrity and as a building block for HMAC and digital signatures. It does not encrypt data and therefore cannot provide confidentiality. While it can support authentication when used in HMAC, it still would not meet the requirement for both encryption and authentication for data plane communication.
Core Concept: This question tests understanding of algorithms that provide both confidentiality (encryption) and integrity/authentication for data plane traffic. In Cisco security contexts (e.g., IPsec/ESP, MACsec, TLS AEAD ciphers), "encryption and authentication" typically refers to an authenticated encryption with associated data (AEAD) mode, where a single construction provides encryption plus an integrity check (authentication tag).

Why the Answer is Correct: AES-GCM (Advanced Encryption Standard in Galois/Counter Mode) is an AEAD algorithm. It encrypts data using AES in counter mode and simultaneously computes an authentication tag using the Galois field hash (GHASH). The receiver verifies the tag to ensure the ciphertext (and optional associated data such as headers) was not modified and that it came from a party with the correct key. This is exactly "encryption and authentication" for data plane communication.

Key Features / Best Practices:
- Provides confidentiality + integrity in one algorithm suite, reducing configuration complexity compared to "AES + separate HMAC."
- Widely used in IPsec ESP (e.g., ESP AES-GCM), TLS (AEAD cipher suites), and other high-performance secure transport designs.
- Requires unique nonces/IVs per key; nonce reuse with GCM is catastrophic (can reveal plaintext and compromise integrity). Operationally, ensure correct replay protection/sequence handling and proper key rotation.
- Efficient in hardware and software; common in modern Cisco platforms due to performance and security properties.

Common Misconceptions:
- SHA variants (SHA-96, SHA-384) are hashing/integrity primitives, not encryption. They can support authentication when used in HMAC, but they do not provide confidentiality.
- AES-256 is an encryption algorithm/key size, but by itself it does not define an authenticated mode. Without an AEAD mode (like GCM) or a separate integrity mechanism (like HMAC-SHA), it provides no built-in authentication.

Exam Tips: When you see "encryption and authentication" together, look for AEAD modes (AES-GCM, AES-CCM, ChaCha20-Poly1305). If the option is only a hash (SHA) it cannot encrypt. If it's only "AES-xxx" without a mode, it's encryption-only unless paired with an integrity/authentication method.
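The "encryption plus integrity" pairing can be demonstrated with a toy encrypt-then-MAC sketch using only stdlib primitives. A hash-based XOR keystream stands in for AES-CTR here; this is pedagogical only, NOT a real cipher, and in practice you would use AES-GCM from a vetted crypto library, which fuses both steps into one AEAD primitive with a single key:

```python
# Toy encrypt-then-MAC: confidentiality (keystream XOR) + authentication
# (HMAC tag). Pedagogical only -- use AES-GCM in production.
import hmac
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # authentication tag
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")  # tampering detected
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ct, tag = seal(b"k1", b"k2", b"nonce0", b"data plane payload")
assert open_(b"k1", b"k2", b"nonce0", ct, tag) == b"data plane payload"
```

Notice that a SHA-based HMAC alone (like the SHA-96 or SHA-384 options) would give only the tag step with no confidentiality, while the keystream alone (like bare AES-256) would give only confidentiality with no tamper detection.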
Which two endpoint measures are used to minimize the chances of falling victim to phishing and social engineering attacks? (Choose two.)
Patching for cross-site scripting (XSS) is a web application security measure. It helps prevent attackers from injecting scripts into web pages and stealing sessions/credentials, but it is not an endpoint control aimed at reducing phishing and social engineering attempts. In exam terms, XSS mitigation belongs to application security/secure coding rather than user endpoint anti-phishing defenses.
Backups to a private cloud improve recovery and business continuity after an incident (for example, ransomware triggered by a phishing email). However, backups do not minimize the chance of becoming a victim of phishing or social engineering; they reduce downtime and data loss after compromise. This is a resilience control, not a preventive endpoint measure.
Input validation and character escaping are secure development practices used to prevent injection flaws (XSS, SQL injection) in applications. While these controls reduce certain web-based attacks, they are not typical endpoint measures for phishing/social engineering. The question focuses on user endpoints and email-borne threats, where filtering and endpoint security agents are the primary mitigations.
A spam and virus email filter is a direct anti-phishing measure because it blocks or quarantines malicious emails, attachments, and links before users interact with them. Effective solutions include reputation filtering, attachment scanning/sandboxing, and URL inspection. This reduces exposure to phishing lures and lowers the probability that social engineering attempts reach the endpoint user.
Up-to-date antimalware (often part of EPP/EDR) is an endpoint control that detects and blocks malicious files, scripts, and behaviors that may result from phishing clicks or attachment execution. Keeping engines/signatures current and enabling real-time and behavioral protection helps stop malware that bypasses email filters, reducing successful compromise from phishing-driven payloads.
Core Concept: This question tests endpoint-focused controls that reduce the likelihood and impact of phishing and social engineering. Phishing commonly arrives via email (links, attachments, credential-harvest pages) and succeeds when endpoints execute malware or users are not protected from malicious content. Effective endpoint measures include filtering malicious email content before it reaches users and using endpoint antimalware to detect/block malicious files and behaviors.

Why the Answer is Correct: D (install a spam and virus email filter) directly reduces exposure by preventing or quarantining phishing emails, malicious attachments, and known-bad URLs before users interact with them. This is a primary control for phishing because email is the dominant delivery vector. E (protect systems with an up-to-date antimalware program) mitigates what gets through filtering or arrives via other channels (web downloads, removable media). Modern antimalware/EDR uses signatures plus behavioral detection to block droppers, ransomware, and malicious scripts that may be launched after a user clicks a link or opens an attachment.

Key Features / Best Practices:
- Email security: anti-spam, anti-malware scanning, URL rewriting/time-of-click protection, attachment sandboxing/detonation, and impersonation/BEC detection. Pair with domain protections like SPF/DKIM/DMARC to reduce spoofing (often tested conceptually even if not listed).
- Endpoint protection: keep antimalware/EDR agents current, enable real-time scanning, cloud-delivered protection, exploit prevention, and automatic isolation/quarantine. Ensure frequent signature/engine updates and centralized policy enforcement.

Common Misconceptions:
- A and C relate to secure application development (XSS, input validation) rather than endpoint anti-phishing controls. They are important, but they don't directly minimize phishing/social engineering success on user endpoints.
- B (backups) is resilience/recovery, not prevention. Backups help after ransomware or destructive events, but they do not reduce the chance of a user being phished.

Exam Tips: For SCOR, map the threat to the control plane: phishing is primarily "content delivery" (email/web) plus "endpoint execution." Look for answers that reduce exposure (filtering) and reduce execution/impact (antimalware/EDR). If an option sounds like secure coding or disaster recovery, it's usually not the best fit for phishing prevention questions.
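One concrete input to email filtering is the DMARC policy a domain publishes in DNS (RFC 7489): the `p` tag tells receivers what to do with mail that fails SPF/DKIM alignment. A minimal sketch of parsing such a TXT record (toy parser, record content illustrative):

```python
# Toy DMARC TXT record parser (RFC 7489 tag=value pairs separated by ";").
def parse_dmarc(txt: str) -> dict:
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            k, _, v = part.partition("=")
            tags[k.strip()] = v.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

A filter honoring this policy would quarantine spoofed mail claiming to be from the domain, which is exactly the "stop the lure before the user sees it" layer described above.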
Which feature is configured for managed devices in the device platform settings of the Firepower Management Center?
Quality of service (QoS) is not the primary or most commonly tested configuration item under FMC Device Platform Settings for managed devices. QoS is generally a traffic-handling feature and, depending on platform and software, may be configured elsewhere or not centrally managed in the same way as core platform parameters. In SCOR context, Device Platform Settings is more strongly associated with system services like NTP/time.
Time synchronization (typically via NTP) is a platform-level configuration that FMC can push to managed devices using Device Platform Settings. Accurate time is essential for correct event timestamps, cross-device correlation, SIEM investigations, certificate validation, and troubleshooting. This aligns with the purpose of platform settings: foundational device behavior rather than inspection or traffic policy logic.
Network address translation (NAT) is configured using a NAT Policy in FMC and then associated with an Access Control Policy/device deployment. NAT is a traffic policy function (how addresses/ports are translated) rather than a platform/system setting. Therefore it is not configured under Device Platform Settings.
An intrusion policy (Snort-based IPS settings) is configured as an Intrusion Policy and applied through the Access Control Policy (ACP) or related policy assignment mechanisms. It governs traffic inspection and detection/prevention behavior, not device platform operations. As such, it is not part of Device Platform Settings.
Core Concept: In Cisco Firepower Management Center (FMC), "Device Platform Settings" is used to configure platform-level (system) behavior that applies to managed devices (FTD/ASA with FirePOWER services) rather than traffic inspection policy. These settings cover foundational device operations such as time/NTP, DNS, logging transport details, and other device-level parameters that are not part of Access Control, NAT, or Intrusion policies.

Why the Answer is Correct: Time synchronization is a classic platform-level requirement and is configured under Device Platform Settings for managed devices. Correct time is critical for security operations: correlation of events across systems, accurate timestamps in connection and intrusion events, certificate validation, and reliable troubleshooting. FMC can push NTP configuration (and related time settings) to managed devices so they maintain consistent time with the organization's time sources.

Key Features / Best Practices:
- Use NTP (preferably multiple redundant NTP servers) and ensure consistent time zones across the environment.
- Accurate time improves event correlation in FMC, SIEM integrations, and incident response timelines.
- Many security functions depend on time: TLS certificate validity, authentication token lifetimes, and log integrity.
- In exam context, remember that "platform settings" = device system settings, not inspection policies.

Common Misconceptions:
- NAT and intrusion are often assumed to be "device settings," but in FMC they are configured as policies (NAT Policy, Intrusion Policy) and attached via Access Control Policy or policy assignment workflows.
- QoS can exist on some platforms, but in FMC the commonly tested "Device Platform Settings" item is time/NTP; QoS is not the canonical answer for this menu in typical SCOR-level questions.

Exam Tips: When you see "Device Platform Settings," think baseline device operations (NTP/time, DNS, syslog/logging parameters, SNMP, etc.). When you see "policy," think Access Control, Intrusion, File/Malware, and NAT, which are configured and applied through policy assignment, not platform settings.
An MDM provides which two advantages to an organization with regards to device management? (Choose two.)
Correct. MDM provides centralized asset inventory for enrolled endpoints, including device model, serial/IMEI, OS version, ownership, compliance status, encryption state, and installed applications. This visibility supports lifecycle operations (procurement to retirement), auditing, and faster incident response by identifying affected devices and their posture.
Correct. MDM commonly enforces application control through allowlists/denylists, managed app deployment, and restrictions on app installation or data sharing. It can push required apps, remove prohibited apps, and apply managed app configurations (and sometimes per-app VPN). This reduces risk from unapproved apps and helps prevent data leakage.
Incorrect. AD Group Policy management is a Windows Active Directory domain feature (GPO) used primarily for domain-joined Windows systems. While modern Windows can be managed via MDM and can integrate with identity providers, MDM does not “manage AD GPO.” The mechanisms and policy delivery models are different.
Incorrect. Network device management refers to managing infrastructure devices like routers, switches, and firewalls using tools such as NMS/NSM, controllers, or vendor management platforms. MDM targets endpoint devices (phones/tablets/laptops) and their apps/configurations, not network infrastructure configuration and monitoring.
Incorrect. “Critical device management” is not a standard, recognized MDM advantage or feature category in typical enterprise endpoint management frameworks. MDM can apply different policies to different device groups (e.g., executives vs standard users), but the term itself is not a core MDM capability like inventory or app management.
Core Concept: Mobile Device Management (MDM), often delivered as part of Unified Endpoint Management (UEM), centrally administers mobile endpoints (iOS/iPadOS, Android, sometimes macOS/Windows) to enforce security posture and compliance. In SCOR terms, MDM is an endpoint control plane that supports secure access by ensuring devices meet policy before they connect to corporate resources.

Why the Answer is Correct: A (asset inventory management) is a fundamental MDM advantage because the platform maintains a real-time inventory of enrolled devices: device identifiers, OS versions, ownership (BYOD vs corporate), compliance state, encryption status, and installed apps. This improves visibility, lifecycle management, and incident response (knowing "what devices exist and what they run"). B (allowed application management) is also a core MDM capability. MDM can enforce application policies such as allowlists/denylists, managed app deployment, app configuration, and restrictions (e.g., blocking unknown sources on Android or preventing unmanaged apps from accessing corporate data). This reduces malware risk and data leakage.

Key Features / Best Practices:
- Enrollment and device identity: supervised/managed modes, certificates, and device attestation where supported.
- Compliance policies: minimum OS version, passcode/biometric requirements, encryption, jailbreak/root detection.
- App management: managed app catalogs, required apps, app configuration, per-app VPN, and data separation (managed/unmanaged).
- Reporting and automation: inventory reports, compliance dashboards, and conditional access integration (e.g., only compliant devices can access email/VPN).

Common Misconceptions:
- Confusing MDM with AD Group Policy (GPO): GPO is primarily Windows domain management; MDM uses profiles and device management APIs, not AD GPO.
- Assuming MDM manages network infrastructure: routers/switches/firewalls are managed by NMS/NSM tools, not MDM.
- "Critical device management" is not a standard MDM advantage/category; MDM focuses on endpoints, not a special class called "critical devices."

Exam Tips: For SCOR, remember that MDM/UEM advantages map to endpoint visibility (inventory) and endpoint control (policy/app restrictions). If an option sounds like traditional Windows domain administration (GPO) or network infrastructure management, it's likely not MDM. Look for keywords like enrollment, compliance, profiles, app allowlisting, remote wipe, and posture/conditional access.
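The two correct advantages, inventory and policy enforcement, come together in compliance evaluation. A minimal sketch (field names and thresholds are hypothetical, not any vendor's schema) of checking an enrolled fleet against a policy, the kind of check that feeds conditional access:

```python
# Toy MDM compliance check over an inventory of enrolled devices.
# Field names and policy values are illustrative only.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    os_version: tuple   # e.g. (17, 2) for comparable major.minor
    encrypted: bool
    jailbroken: bool

POLICY = {"min_os": (17, 0), "require_encryption": True}

def compliant(d: Device) -> bool:
    return (d.os_version >= POLICY["min_os"]
            and (d.encrypted or not POLICY["require_encryption"])
            and not d.jailbroken)

fleet = [
    Device("ipad-01", (17, 2), True, False),
    Device("phone-07", (16, 5), True, False),   # OS below minimum
    Device("phone-12", (17, 3), False, False),  # not encrypted
]
print([d.name for d in fleet if compliant(d)])  # ['ipad-01']
```

In a conditional-access integration, only the compliant subset would be allowed to reach email or VPN; the rest would be quarantined until remediated.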
Under which two circumstances is a CoA issued? (Choose two.)
Incorrect. Adding a new authentication rule on the Policy Service Node changes how future authentication requests are evaluated, but it does not by itself identify specific active sessions that must be reauthorized. Existing sessions generally continue until normal reauthentication, session timeout, or an explicit administrator action occurs. CoA is not automatically issued simply because a new authentication rule was added.
Correct. Deleting an endpoint on the ISE server changes the identity information ISE has for that device and can invalidate the basis on which the current session was authorized. To ensure the active session is re-evaluated or removed rather than continuing indefinitely under stale endpoint data, ISE can issue a CoA. This is consistent with CoA being used when endpoint state in the ISE database changes and immediate enforcement is needed.
Incorrect. Creating a new Identity Source Sequence and referencing it in authentication policy affects how ISE will process subsequent identity lookups. This is a policy configuration change for future authentications rather than an endpoint state change on an active session. By itself, it does not trigger CoA for already connected devices.
Correct. When an endpoint is profiled for the first time, ISE gains new device classification attributes that may map to a different authorization policy. The endpoint may need a different VLAN, dACL, SGT, or other access result than it originally received as an unknown device. CoA allows ISE to force reauthorization immediately so the new profile-based policy is applied to the live session.
Incorrect. Adding a new ISE server with the Administration persona is a deployment management change and has no direct relationship to active RADIUS sessions on NADs. Administration nodes do not perform the dynamic session control function that would cause endpoint reauthorization. Therefore, this event does not constitute a circumstance under which CoA is issued.
Core concept: A Change of Authorization (CoA) in Cisco ISE is used to modify an already active session on the network access device without waiting for the endpoint to reconnect. It is commonly triggered when ISE learns new information about an endpoint or when endpoint state changes in a way that should immediately affect authorization. Typical examples include profiling, posture transitions, guest lifecycle events, or endpoint database changes that require the NAD to re-evaluate the session.

Why correct: The correct answers are B and D because both events can change how ISE should treat an already connected endpoint. If an endpoint is deleted from the ISE database, ISE may need the NAD to reauthenticate or remove the current session so the device is no longer treated according to the old stored identity. If an endpoint is profiled for the first time, ISE has new classification data that can alter authorization policy, so CoA is used to apply the updated result immediately.

Key features: CoA relies on RFC 5176 dynamic authorization and requires NAD support, proper shared secrets, and reachability to the ISE Policy Service Node. It is often used with dynamic VLAN assignment, downloadable ACLs, SGT changes, guest redirection, and posture state transitions. Cisco ISE uses CoA to avoid waiting for periodic reauthentication timers when endpoint context changes mid-session.

Common misconceptions: Not every policy or deployment configuration change causes CoA. Administrative changes such as adding an Administration persona node or creating an Identity Source Sequence affect future processing but do not inherently target active sessions. Likewise, adding a new authentication rule does not automatically mean ISE will issue CoA to all currently connected endpoints.

Exam tips: On the exam, associate CoA with events that change the endpoint's live authorization context rather than backend configuration alone. If ISE learns something new about the endpoint or its identity record changes in a way that should affect access now, CoA is likely. If the change is only to infrastructure or policy definitions for future authentications, CoA is usually not the answer.
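On the wire, CoA is an RFC 5176 RADIUS message (CoA-Request, code 43) sent from the dynamic authorization client to the NAD. A minimal sketch of building such a packet with stdlib only; the session ID and shared secret are illustrative, and a real exchange also needs UDP transport (conventionally port 3799), response handling, and error cases:

```python
# Build a minimal RFC 5176 CoA-Request packet (RADIUS wire format).
# Values are illustrative; no transport or response handling is shown.
import hashlib
import struct

def radius_attr(attr_type: int, value: bytes) -> bytes:
    # RADIUS attribute: Type (1 byte), Length (1 byte, incl. header), Value
    return struct.pack("!BB", attr_type, len(value) + 2) + value

def build_coa_request(identifier: int, secret: bytes, attrs: bytes) -> bytes:
    code = 43                                    # CoA-Request (RFC 5176)
    length = 20 + len(attrs)                     # 20-byte RADIUS header
    header = struct.pack("!BBH", code, identifier, length)
    # Request Authenticator: MD5 over header + 16 zero octets + attrs + secret
    authenticator = hashlib.md5(header + b"\x00" * 16 + attrs + secret).digest()
    return header + authenticator + attrs

# Identify the target session with an Acct-Session-Id (type 44) attribute
attrs = radius_attr(44, b"0A02198200001")
pkt = build_coa_request(1, b"shared-secret", attrs)
print(len(pkt), pkt[0])  # 35 43
```

The session-identifying attribute is the key point: CoA targets a specific live session (like the ones in the exhibit's Session ID column), which is why configuration-only changes that name no session do not trigger it.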
What is the difference between deceptive phishing and spear phishing?
Incorrect. This describes a high-value target (C-level), which aligns with “whaling,” a form of spear phishing focused on executives. Deceptive phishing is generally broad and non-targeted, using generic messages sent to many recipients. The defining factor for deceptive phishing is lack of personalization and scale, not the victim’s job title.
Correct. Spear phishing is a targeted phishing attack that is crafted for a specific individual or a narrowly defined set of victims using personalized details and relevant context. Deceptive phishing is generally broader and more generic, with attackers sending the same or similar fraudulent message to many users in hopes that some will respond. While this option is not perfectly worded because spear phishing can also target a small group, it is still the best choice because it reflects the core difference of targeted versus broad phishing. On certification exams, spear phishing is typically contrasted with mass phishing by its personalization and selectivity.
Incorrect. Targeting C-level executives is typically called “whaling,” which is a subset of spear phishing. Spear phishing is broader than executive targeting; it can target any role (HR, finance, IT admins) as long as the attack is tailored and directed. Therefore, this option is too narrow and mislabels the concept.
Incorrect. Hijacking or manipulating DNS to redirect a user to a fake site is “pharming,” not deceptive phishing. Deceptive phishing relies on tricking users via fraudulent messages/links, whereas pharming can redirect users even when they enter a legitimate URL. The mechanism described is DNS-based redirection, not a phishing subtype.
Core Concept: This question tests the distinction between deceptive phishing and spear phishing. Deceptive phishing is the classic broad phishing model in which attackers send generic fraudulent messages to many recipients, hoping some will respond. Spear phishing is more targeted and personalized, using information about a specific individual or a narrowly defined group to make the message more convincing. A related but separate term is whaling, which refers to spear phishing aimed at senior executives.
Why the Answer is Correct: Option B is the best answer because it captures the targeted nature of spear phishing compared with broader phishing campaigns. Although spear phishing is not limited strictly to one person and may also target a small, specific group, it is still far more personalized and selective than deceptive phishing. Deceptive phishing generally uses generic lures and is sent at scale, while spear phishing relies on tailoring the message to the intended victim.
Key Features: Spear phishing often uses reconnaissance such as social media, company websites, breached data, or business context to craft believable emails. Deceptive phishing usually relies on common themes like password resets, invoices, or delivery notices and is distributed widely with little customization. Whaling is simply a specialized form of spear phishing focused on executives, and pharming is a different attack type involving DNS or redirection manipulation.
Common Misconceptions: A frequent mistake is confusing spear phishing with whaling and assuming all targeted phishing against executives defines spear phishing as a whole. Another common error is mixing phishing with pharming, which redirects users through DNS or host manipulation rather than persuading them through a fraudulent message. The key distinction here is targeted personalization versus broad generic messaging.
Exam Tips: For SCOR-style questions, map the terminology carefully: deceptive phishing means generic mass phishing, spear phishing means targeted and personalized phishing, whaling means executive-focused spear phishing, and pharming means DNS-based redirection. If an option mentions C-level executives specifically, think whaling. If an option mentions DNS hijacking or redirection, think pharming rather than a phishing subtype.
An administrator wants to ensure that all endpoints are compliant before users are allowed access on the corporate network. The endpoints must have the corporate antivirus application installed and be running the latest build of Windows 10. What must the administrator implement to ensure that all devices are compliant before they are allowed on the network?
Cisco ISE with the AnyConnect Posture module is the canonical Cisco NAC posture solution. AnyConnect collects endpoint posture data (AV presence/state, OS version/build, patches) and reports to ISE. ISE then enforces access via authorization results (VLAN/dACL/SGT/redirect) to permit, quarantine, or remediate endpoints before full network access is granted.
Stealthwatch (Cisco Secure Network Analytics) integrated with ISE improves visibility and can trigger responses based on observed network behavior, but it does not perform endpoint posture checks like verifying AV installation or Windows build level prior to admission. It is primarily for detection/analytics rather than pre-access compliance enforcement.
ASA with Dynamic Access Policies (DAP) can enforce posture-like checks for remote-access VPN sessions (often using AnyConnect attributes) and apply per-session policies. However, it is not the best answer for ensuring all endpoints are compliant before access to the corporate network in general (wired/wireless campus). ISE posture is the enterprise NAC approach.
pxGrid is a context-sharing framework that allows ISE to publish identity and session context to other security tools (EDR, SIEM, firewalls) and consume external context. Enabling pxGrid alone does not implement posture assessment or enforce compliance checks; it is an integration mechanism, not the posture engine.
Core Concept: This question tests Network Access Control (NAC) with posture assessment—verifying endpoint compliance (AV present/running and OS version/patch level) before granting network access. In Cisco architectures, this is primarily delivered by Cisco Identity Services Engine (ISE) posture services with an endpoint agent (Cisco AnyConnect Posture / ISE Posture module).
Why the Answer is Correct: Cisco ISE provides policy-based access control (typically 802.1X, MAB, or VPN authentication) and can enforce a "pre-admission" posture check. The AnyConnect Posture module (formerly NAC Agent functionality) collects endpoint attributes such as installed security software, running processes/services, registry keys, and OS version/build. ISE evaluates these attributes against posture policies (e.g., "Corporate AV installed and up-to-date" and "Windows 10 build >= required") and then authorizes access accordingly. Noncompliant endpoints can be quarantined, redirected to remediation resources, or denied.
Key Features / How It Works:
- ISE Posture Policy: Defines requirements (AV presence, OS build, patch level) and compliance states.
- AnyConnect Posture Module: Performs endpoint-side checks and reports posture status to ISE.
- Authorization Profiles: ISE can assign VLANs, downloadable ACLs (dACLs), Security Group Tags (SGT), or redirect ACLs for remediation.
- Remediation: Web redirection to patch/AV portals, limited network access until compliant.
- Best practice: Use 802.1X for wired/wireless, integrate with AD, and apply least-privilege access for "unknown/noncompliant" states.
Common Misconceptions:
- Stealthwatch (Secure Network Analytics) provides visibility and anomaly detection, not pre-connect posture compliance.
- ASA Dynamic Access Policies can check some endpoint attributes for VPN sessions, but it is not the standard enterprise-wide NAC posture solution for all corporate network access (wired/wireless) compared to ISE posture.
- pxGrid enables context sharing between ISE and other systems; it does not itself perform posture assessment.
Exam Tips: When you see "ensure endpoints are compliant before allowed on the network" and requirements like "AV installed" and "latest Windows build," think "ISE Posture + AnyConnect (Posture module)" for pre-admission/post-admission enforcement. pxGrid and telemetry tools are complementary but not the primary mechanism for posture compliance enforcement.
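On the switch side, an endpoint in the posture-unknown state is typically steered to the client-provisioning portal by a redirect ACL that the ISE authorization profile references. A minimal sketch, assuming a hypothetical PSN at 10.10.10.10 (the ACL name and addresses are illustrative, not from the question):

```
! Sketch: redirect ACL for posture/client provisioning (IOS/IOS-XE)
! In a redirect ACL, "deny" = bypass redirection (DNS and traffic to the
! ISE PSN must flow normally); "permit" = redirect toward the portal
ip access-list extended ACL-POSTURE-REDIRECT
 deny udp any any eq domain
 deny ip any host 10.10.10.10
 permit tcp any any eq www
 permit tcp any any eq 443
```

ISE pushes the ACL name in the authorization result, and the switch applies it until the endpoint reports compliant and a CoA re-authorizes the session.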
An engineer used a posture check on a Microsoft Windows endpoint and discovered that the MS17-010 patch was not installed, which left the endpoint vulnerable to WannaCry ransomware. Which two solutions mitigate the risk of this ransomware infection? (Choose two.)
Incorrect. Cisco ISE posture can evaluate compliance and can redirect or restrict a noncompliant endpoint, but it does not natively install the MS17-010 patch itself. Patch deployment is normally handled by tools such as WSUS, SCCM, Intune, or other endpoint management platforms. The wording makes this option too absolute and assigns remediation capability to ISE that it does not directly provide.
Incorrect. Profiling in Cisco ISE is used to identify and classify endpoints based on observed attributes and network behavior, not to validate whether a specific Microsoft patch or hotfix is installed. Patch-level and security-state validation are posture functions, not profiling functions. Therefore this option confuses two distinct ISE capabilities.
Incorrect in the context of choosing the best two mitigations. ISE posture can check whether an endpoint meets a required patch level and can restrict access if it does not, which is useful as an access-control measure. However, it does not directly remove the vulnerability or block the exploit path as effectively as patching the host or stopping SMB exploit traffic with firewall policies. Since only two answers are allowed, D and E are the more direct mitigations to ransomware infection risk.
Correct. WannaCry spreads by exploiting SMB traffic associated with MS17-010, so endpoint firewall policies that block or tightly restrict SMB ports such as TCP 445 materially reduce the ability of the ransomware to execute and propagate. This is a direct technical mitigation because it interrupts the exploit path and limits lateral movement between hosts. Even if a host is still vulnerable, preventing the exploit traffic from reaching it lowers the immediate infection risk significantly.
Correct. A well-defined endpoint patching strategy is the most fundamental mitigation because MS17-010 fixes the underlying vulnerability that WannaCry exploits. Ensuring critical patches are deployed quickly and consistently across Windows systems removes the attack vector rather than merely containing it. Strong patch governance, prioritization, deployment SLAs, and verification are standard best practices for preventing outbreaks based on known vulnerabilities.
Core concept: WannaCry exploited the MS17-010/EternalBlue vulnerability over SMB, so effective mitigations focus on removing the vulnerability through patching and preventing the exploit traffic from reaching hosts.
Why correct: The strongest controls are timely patch deployment and endpoint/network firewall restrictions that block SMB-based exploitation and lateral movement.
Key features: Patch management eliminates the known vulnerability, while firewall policies can block the TCP 445/SMB traffic the worm uses to spread.
Common misconceptions: Cisco ISE posture can assess compliance and restrict access, but it is not itself the remediation mechanism and is less direct than patching or blocking exploit traffic.
Exam tips: When asked for mitigations to a specific malware outbreak, prefer controls that directly prevent exploitation or propagation over controls that only assess or classify endpoints.
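The SMB-blocking mitigation can be sketched as a simple ASA access policy. The ACL name, interface, and direction below are assumptions for illustration; in practice the equivalent block is often applied on host firewalls and internal segmentation points as well:

```
! Sketch: drop SMB (TCP 445) crossing the inside interface to blunt
! EternalBlue/WannaCry lateral movement, while permitting other traffic
access-list BLOCK-SMB extended deny tcp any any eq 445
access-list BLOCK-SMB extended permit ip any any
access-group BLOCK-SMB in interface inside
```

This contains propagation even on hosts that have not yet received MS17-010, but it complements patching rather than replacing it.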
Gateway of last resort is 1.1.1.1 to network 0.0.0.0
S* 0.0.0.0 0.0.0.0 [1/0] via 1.1.1.1, outside
C 1.1.1.0 255.255.255.0 is directly connected, outside
S 172.16.0.0 255.255.0.0 [1/0] via 192.168.100.1, inside
C 192.168.100.0 255.255.255.0 is directly connected, inside
C 172.16.10.0 255.255.255.0 is directly connected, dmz
S 10.10.10.0 255.255.255.0 [1/0] via 172.16.10.1, dmz
access-list redirect-acl permit ip 192.168.100.0 255.255.255.0 any
access-list redirect-acl permit ip 172.16.0.0 255.255.0.0 any
class-map redirect-class
match access-list redirect-acl
policy-map inside-policy
class redirect-class
sfr fail-open
service-policy inside-policy global
Refer to the exhibit. What is a result of the configuration?
Incorrect. It is true that DMZ traffic is redirected, because the DMZ subnet `172.16.10.0/24` falls within the ACL entry `172.16.0.0 255.255.0.0`. However, this option is incomplete because the ACL also explicitly matches the inside network `192.168.100.0/24`. Since both source networks are included in the redirect ACL, DMZ-only is not the best answer.
Incorrect. The inside network `192.168.100.0/24` is explicitly matched by the ACL, so inside traffic is indeed redirected. But this option is also incomplete because the ACL additionally matches `172.16.0.0/16`, which includes the DMZ subnet `172.16.10.0/24`. With a global service-policy, both sets of matching traffic are redirected, not just inside traffic.
Incorrect. The ACL uses `permit ip`, which matches all IP traffic, not just TCP. That means TCP, UDP, ICMP, and other IP-based protocols from the matched source networks are eligible for redirection. Also, the policy does not redirect all TCP traffic universally; it redirects only traffic that matches the ACL/class-map criteria.
Correct. The ACL `redirect-acl` matches traffic sourced from `192.168.100.0/24` and from `172.16.0.0/16` to any destination. The DMZ network shown in the routing table is `172.16.10.0/24`, which is a subnet of `172.16.0.0/16`, so DMZ-sourced traffic also matches the ACL. Because the policy is attached with `service-policy inside-policy global`, matching traffic from any interface is subject to SFR redirection. Therefore, both inside and DMZ traffic that matches these source networks is redirected to the SFR module.
Core concept: This question tests Cisco ASA Modular Policy Framework (MPF) redirection to the FirePOWER/SFR module using an ACL-based class-map. The class-map references an ACL that matches traffic from specific source networks, and the policy-map action `sfr fail-open` redirects those matching flows to the SFR module for inspection.
Why correct: Traffic from both the inside and DMZ networks is redirected, because the ACL includes 192.168.100.0/24 and the broader 172.16.0.0/16 network, which contains the DMZ subnet 172.16.10.0/24.
Key features: The policy is applied globally with `service-policy inside-policy global`, the ACL matches all IP traffic rather than TCP only, and `fail-open` allows traffic to pass if the SFR module is unavailable.
Common misconceptions: It is easy to assume that only traffic from the interface named in the policy, or only directly connected routes, are affected; in reality, the global policy applies to all traversing traffic that matches the ACL.
Exam tip: Always evaluate the ACL contents carefully, especially when a summarized network includes a directly connected subnet shown elsewhere in the routing table.
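By contrast, if the intent had been to redirect only inside-sourced traffic, the ACL could be narrowed and the policy attached to one interface instead of globally. A hypothetical sketch (names are illustrative; this is not the exhibit's configuration):

```
! Sketch: redirect only inside-subnet traffic, applied per interface
access-list redirect-acl-inside permit ip 192.168.100.0 255.255.255.0 any
class-map redirect-class-inside
 match access-list redirect-acl-inside
policy-map inside-only-policy
 class redirect-class-inside
  sfr fail-open
service-policy inside-only-policy interface inside
```

An interface service-policy overrides the global policy for that interface, so DMZ-sourced traffic would no longer match and would bypass SFR inspection.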







