
What is the result of specifying a QoS rule that has a rate limit greater than the maximum throughput of an interface?
Incorrect. Cisco firewall QoS configurations generally are not automatically disabled just because the configured rate exceeds the interface’s maximum throughput. The device typically accepts the policy, but the policer never becomes the limiting factor. Disabling would imply validation logic that treats the value as invalid, which is not the typical behavior tested in SNCF-style questions.
Correct. A policer/rate limit only constrains traffic when the traffic rate exceeds the configured threshold. If the threshold is above the interface’s maximum throughput, the interface capacity (and normal queuing/congestion) limits the traffic first. As a result, the QoS rule does not actively rate-limit matching traffic because the configured limit is unattainable on that interface.
Incorrect. QoS rate limiting is applied to the traffic that matches the specific QoS class/rule, not to all traffic system-wide. Even if a policer were mis-sized, it would not cause the firewall to start rate-limiting unrelated flows. Global rate limiting would require a separate configuration or platform-wide resource constraint, not merely an oversized policer value.
Incorrect. While some platforms may log certain QoS-related issues, configuring a rate limit above interface throughput is not typically treated as an error condition that generates repeated warnings. It is simply an ineffective policy parameter. The more common operational symptom is “no observable policing drops,” not continuous warning messages.
Core Concept: This question tests how Cisco firewall QoS policing/rate-limiting behaves when the configured policing rate exceeds the physical/negotiated capacity of the egress interface. In firewall QoS, a “rate limit” (policer) is an upper bound applied to matching traffic; it cannot increase throughput beyond what the interface can transmit.
Why the Answer is Correct: If you configure a QoS rule with a rate limit higher than the interface’s maximum throughput (for example, a 2 Gbps policer on a 1 Gbps interface), the policer never becomes the constraining factor. The interface itself is the bottleneck, so the traffic is effectively limited by the interface speed and normal queuing/congestion behavior, not by the policer. Therefore, matching traffic is not rate limited by that QoS rule in any meaningful way; the configured limit is above what can be achieved, so the rule does not actively drop/mark traffic due to exceeding the policer rate.
Key Features / Best Practices:
- Policing enforces a maximum rate; shaping/queuing manages bursts and congestion. A policer set above line rate is functionally inert.
- Effective QoS design requires setting policers below the physical rate (and often below the real usable throughput after overhead) to create predictable enforcement.
- On Cisco firewalls, QoS rules typically apply to specific classes; only those classes are affected, and only when they exceed the configured threshold.
Common Misconceptions:
- It may seem like the firewall would “disable” an invalid rule (Option A), but most platforms accept the configuration and simply never trigger the policer.
- It may seem like the system would rate-limit everything (Option C), but QoS classification is per-rule/per-class, not global unless explicitly configured.
- Some expect warnings (Option D), but exceeding interface capacity is not inherently an error; it’s just an ineffective policer value.
Exam Tips: When you see “rate limit greater than interface throughput,” think: the interface is already the limiter. The policer won’t engage, so the configured QoS rate limit has no practical effect on matching traffic. For exam questions, distinguish between “configuration accepted but ineffective” versus “configuration rejected/disabled.”
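The concept above can be illustrated with an ASA-style Modular Policy Framework sketch (on FTD, QoS is configured through FMC policies, but the underlying policer behavior is the same). All names and the access list are hypothetical; the point is that a 2 Gbps policer on a 1 Gbps interface is accepted by the device but never engages:

```
! Illustrative ASA-style policing sketch (class/policy names are hypothetical)
access-list BULK-TRAFFIC extended permit ip any any
class-map BULK-CLASS
 match access-list BULK-TRAFFIC
policy-map OUTSIDE-POLICY
 class BULK-CLASS
  ! 2,000,000,000 bps policer on a 1 Gbps link: the configuration is
  ! accepted, but the interface saturates first, so the policer is
  ! functionally inert and never drops traffic
  police output 2000000000 1500000
service-policy OUTSIDE-POLICY interface outside
```

A practical symptom of this misconfiguration is that `show service-policy police` reports no exceeded/dropped packets for the class even under sustained congestion.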
What is the benefit of selecting the trace option for packet capture?
Correct. The trace option is beneficial because it provides disposition/processing context for the captured traffic—helping you determine whether the firewall allowed the packet through or dropped it (and often correlating to the decision path). This is crucial when packets are observed entering an interface but the flow fails, because it distinguishes policy/inspection drops from upstream/downstream network issues.
Incorrect. This describes a routing/path asymmetry or traceroute-like behavior (destination responding via a different path). Packet capture trace does not test network paths or validate return-path symmetry; it focuses on how the firewall processes the packet it sees. Path validation is typically done with routing tables, traceroute, or flow/connection diagnostics, not capture trace.
Incorrect. Limiting the number of packets captured is handled by capture parameters such as packet count, buffer size, file size, duration, or ring buffer settings. The trace option does not primarily control capture volume; it adds processing/decision metadata. While enabling trace may affect performance considerations, it is not a mechanism to limit packet quantity.
Incorrect. Packet capture inherently captures the packet contents (headers and payload up to snap length). “Trace” is not simply “more details of each packet” in the sense of deeper decode; it adds firewall decision/disposition context (allowed/dropped and related processing information). Detailed decoding is typically done by analyzing the pcap in tools like Wireshark.
Core Concept: This question tests Cisco firewall packet capture troubleshooting, specifically the “trace” capability associated with captures on Cisco Firepower/FTD (and conceptually similar to ASA capture with additional metadata). Packet capture normally records raw frames/packets, but troubleshooting on a firewall often requires knowing what the firewall did with the packet (forwarded, dropped, or modified) and why.
Why the Answer is Correct: Selecting the trace option adds firewall processing context to the capture so you can determine whether the packet was permitted and forwarded or dropped during inspection/policy evaluation. In practice, “trace” ties the captured packet to the firewall’s decision path (for example, access control/prefilter decisions, inspection outcomes, or drop reasons). This is highly valuable because a packet capture alone may show the packet arriving on an interface, but without trace you may not know if it was subsequently dropped by policy, inspection, routing, NAT, or another feature.
Key Features / Best Practices: Trace is used when you need decision visibility, not just packet visibility. It complements other troubleshooting tools such as connection events, intrusion/file/malware events, and (depending on platform) packet-tracer style simulations. Best practice is to enable trace selectively (narrow capture filters, short duration) because adding decision metadata can increase overhead and generate more diagnostic output. Use it when investigating “packet seen inbound but not reaching destination” scenarios.
Common Misconceptions: Many assume trace means “more packet details” (like deeper decode) or “more packets,” but packet capture already records packet bytes; trace is about the firewall’s handling/decision. Others confuse it with path testing (traceroute) or with capture limits (packet count/ring buffer), which are separate capture settings.
Exam Tips: For SNCF, remember the distinction between (1) capturing traffic and (2) understanding firewall disposition. If the question asks what benefit trace provides, think “drop/allow visibility and decision context.” If it asks about limiting capture size, look for options like packet count, buffer size, or duration—not trace.
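On the FTD (or ASA) diagnostic CLI, the trace option is enabled directly on the capture itself. A minimal sketch, assuming a hypothetical failing host 10.1.1.100 and an interface named `inside`:

```
! Capture ingress traffic and record per-packet processing trace metadata
capture CAP interface inside trace detail match ip host 10.1.1.100 any
! List the captured packets
show capture CAP
! Show the firewall's decision path for a specific packet
! (policy lookup phases and the final allow/drop result)
show capture CAP packet-number 1 trace
```

The trace output is what distinguishes “packet arrived but was dropped by policy/inspection” from “packet arrived and was forwarded,” which a plain pcap cannot tell you.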
What are the minimum requirements to deploy a managed device inline?
Correct. Inline deployment requires defining inline interfaces (pair/set) so traffic can traverse the device, assigning those interfaces to security zones so ACP rules can match zone-to-zone flows, validating MTU to prevent unexpected drops/fragmentation during inspection, and selecting the correct mode (inline) to enable prevention/blocking behavior. These are the minimum practical prerequisites for a functional managed inline deployment.
Incorrect. A passive interface applies to tap/SPAN-style monitoring where the sensor receives a copy of traffic and cannot enforce blocking. Inline deployment specifically needs an inline interface pair/set, not a passive interface. Also, omitting security zones makes policy scoping difficult or ineffective in FMC, so this does not meet minimum inline requirements.
Incorrect. While inline interfaces, MTU, and mode are essential, leaving out security zones is a key gap for a managed FMC deployment because ACP rules typically rely on source/destination zones to match and enforce policy. Without zones, traffic classification and rule design become limited and can lead to unintended default handling.
Incorrect. This mixes passive interface with inline requirements. Passive interfaces are for monitoring-only deployments and do not create a bridging path through the device. Inline deployments require inline interface pairs/sets. Although security zone, MTU, and mode are relevant concepts, the presence of “passive interface” makes this option wrong for an inline deployment question.
Core Concept: This question tests Cisco Firepower managed device deployment prerequisites for inline (Layer 2) operation. Inline mode means traffic physically traverses the sensor/FTD/NGIPS, so the device must be able to bridge/inspect frames between two ports and enforce policy without acting as a routed hop.
Why the Answer is Correct: To deploy inline, you must (1) define an inline interface pair (or inline set) so the device knows which two physical interfaces will pass traffic, (2) place those interfaces into security zones so Access Control Policy (ACP) rules can match and enforce based on zone-to-zone traffic, (3) ensure MTU is appropriate so frames are not dropped/fragmented unexpectedly during inspection, and (4) select the correct mode (inline vs passive/tap/inline tap) because the inspection and fail-open/fail-closed behavior depends on it. These are the minimum building blocks to get traffic flowing and policy applied in an inline deployment.
Key Features / Best Practices:
- Inline interface pair: creates a transparent “bridge” path; commonly used for NGIPS/IPS or FTD transparent deployments.
- Security zones: required for meaningful policy; ACP rules typically reference source/destination zones. Without zones, you cannot properly scope rules and may end up with overly permissive defaults or unmatched traffic.
- MTU: must align with the connected network (including VLAN tags, QinQ, or jumbo frames). Mismatched MTU is a common cause of drops that look like “policy” issues.
- Mode: inline vs passive determines whether the device can block traffic. Inline is required for prevention.
Common Misconceptions: Many candidates confuse passive (tap/SPAN) requirements with inline. Passive deployments do not require inline pairs and cannot block, so “passive interface” options are incorrect. Another misconception is thinking zones are optional; while interfaces can exist without zones, zone-less policy is impractical and often fails to meet “minimum requirements” for a functional managed inline deployment in FMC.
Exam Tips:
- If the question says “inline,” look for “inline interface pair/set” and “security zones.”
- If it says “passive/tap,” look for “passive interface” and note that blocking is not possible.
- Remember: zones are how FMC policies commonly bind to traffic direction; MTU mismatches frequently cause troubleshooting scenarios in deployment questions.
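Inline sets themselves are defined in FMC (under Devices > Device Management, in the device's interface/inline-set configuration), so there is no device-side configuration to type by hand. A hedged verification sketch from the FTD CLI after deployment (exact output and command availability can vary by software version):

```
! Verify the deployed inline set: member interface pairs, MTU,
! and failsafe/fail-open behavior
> show inline-set
! Confirm the member interfaces are physically up
> show interface ip brief
```

Checking the reported MTU here against the adjacent switch/router ports is a quick way to rule out the MTU-mismatch drops mentioned above before blaming policy.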
While configuring FTD, a network engineer wants to ensure that traffic passing through the appliance does not require routing or VLAN rewriting. Which interface mode should the engineer implement to accomplish this task?
Inline set refers to pairing interfaces for inline traffic inspection/IPS-style deployment where the device is in the forwarding path and can drop traffic. While it can be used to insert inspection without changing IP addressing in some designs, it is not the primary FTD interface mode that explicitly avoids routing and VLAN rewriting. The exam cue about routing/VLAN points to transparent (L2 bridge) mode instead.
Passive mode means the sensor/firewall receives a copy of traffic (SPAN/TAP) and analyzes it without being able to enforce blocking on the live traffic path. It certainly avoids routing changes because it is not forwarding traffic at all, but it also does not meet the typical intent of “traffic passing through the appliance” being inspected and controlled. It’s monitoring-only, not an in-path forwarding mode.
Transparent mode is the correct choice because it makes the firewall operate as a Layer 2 bridge using bridge groups/BGIs. Traffic passes through without the firewall acting as a routed hop, so you do not need to change routing or default gateways. It also avoids the need for VLAN rewriting in the common case of bridging within the same VLAN, enabling low-disruption insertion into an existing network.
Inline tap is a deployment where the device is connected in a way that it can see traffic (often both directions) similarly to a tap, but typically it is used for visibility/inspection rather than classic L2 bridging as a firewall mode. It is not the standard FTD mode described by Cisco for eliminating routing requirements; transparent mode is the canonical answer for “no routing/VLAN changes” while still forwarding traffic.
Core Concept: This question tests FTD interface/deployment modes and when you can pass traffic without participating in Layer 3 routing or changing VLAN tags. In Cisco firewall terminology, this is the distinction between routed mode (L3 hop) and transparent mode (L2 bridge).
Why the Answer is Correct: Transparent mode is designed to forward traffic at Layer 2 like a “bump-in-the-wire” bridge. The firewall does not act as the default gateway and does not perform routing between subnets. Because it is bridging, it can pass frames without requiring routing changes on adjacent devices. In typical transparent deployments, the firewall also does not require VLAN rewriting; it can bridge within the same VLAN (or between VLANs if you explicitly configure bridging groups/subinterfaces), but the key point is that you are not forced to redesign IP addressing or routing to insert the device.
Key Features / Configuration Notes:
- Uses Bridge Group Interfaces (BGIs): member interfaces (or subinterfaces) are placed into a bridge group; the BGI provides the management IP for that bridge domain.
- Traffic is inspected with Access Control Policies and can use features like IPS, URL filtering, and malware inspection depending on licensing.
- Best practice: use transparent mode when you must insert the firewall into an existing network with minimal disruption (no gateway changes), such as between a switch and upstream router/core.
Common Misconceptions:
- “Inline set” and “inline tap” are intrusion policy deployment concepts (inline vs passive/tap) commonly associated with IPS behavior and interface pairing, not the fundamental firewall forwarding mode that eliminates routing requirements.
- “Passive” suggests no impact on traffic flow, but passive/tap means the device observes a copy of traffic and cannot enforce blocking in the forwarding path.
Exam Tips: If the question emphasizes “no routing changes,” “no default gateway change,” “bump-in-the-wire,” or “no VLAN rewriting,” think transparent mode (L2). If it emphasizes “acts as a gateway,” “inter-VLAN routing,” or “NAT at L3,” think routed mode. If it emphasizes “monitor only” vs “block,” think passive/tap vs inline IPS concepts.
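A transparent-mode sketch in ASA-style syntax (on FTD the mode is set with `configure firewall transparent` from the device CLI and bridge groups are then built in the manager). Interface names and the BVI address below are illustrative:

```
firewall transparent
!
interface GigabitEthernet0/0
 nameif inside
 bridge-group 1
!
interface GigabitEthernet0/1
 nameif outside
 bridge-group 1
!
! The BVI carries the management IP for the bridge domain. The firewall
! is not a routed hop, so adjacent devices keep their existing default
! gateways and no routing or VLAN redesign is required to insert it.
interface BVI1
 ip address 10.1.1.5 255.255.255.0
```

Note the bump-in-the-wire property: both member interfaces sit in the same subnet/VLAN, and hosts on either side are unaware the firewall exists.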
With Cisco FTD software, which interface mode must be configured to passively receive traffic that passes through the appliance?
ERSPAN is a method to mirror traffic over an IP network using GRE encapsulation (remote SPAN). It describes how mirrored packets are transported from a switch to a collector, not the FTD’s interface mode. FTD can potentially consume mirrored traffic, but the interface mode that defines passive reception on the appliance is TAP, not ERSPAN.
Firewall mode on FTD is an inline deployment where the device is in the forwarding path and can enforce Access Control Policy actions (allow, block, trust), perform NAT, and apply advanced inspections. Because it forwards traffic between interfaces, it is not a passive receive-only mode and does not match the requirement.
TAP mode is designed for passive monitoring. The FTD interfaces receive a copy of traffic from a network tap or SPAN/mirror port, allowing inspection and event generation without being inline. This matches “passively receive traffic that passes through the appliance,” since the original traffic flow continues independently of the FTD.
IPS-only mode is intended for intrusion prevention focus, typically deployed inline so it can take prevention actions (drop/reset) based on intrusion policy. While it may reduce firewall features compared to full firewall mode, it is still generally in-path rather than passive. Therefore it does not meet the “passively receive” requirement.
Core Concept: This question tests Cisco FTD interface/deployment modes that allow the appliance to observe traffic without being in the forwarding path. In FTD terminology, this is the passive monitoring use case, where the device receives a copy of traffic (SPAN/TAP) for inspection, logging, and detection, but does not forward or block the original flow.
Why the Answer is Correct: To passively receive traffic that passes through the appliance, FTD must be configured in TAP (passive) mode. In TAP mode, the sensor interfaces are connected to a network tap or switch SPAN/mirror source so the FTD sees a copy of packets. Because the appliance is not inline, it cannot enforce blocking actions on the live traffic path; it primarily provides visibility and detection (for example, intrusion events, file/malware events depending on licensing and policy). This aligns exactly with “passively receive traffic.”
Key Features / Configuration Notes:
- TAP mode is used when you want monitoring without introducing latency or risk of inline failure.
- You typically connect the FTD interfaces to a physical network TAP device or to SPAN/mirror ports.
- Policies still apply for inspection and event generation, but enforcement is limited because the device is not forwarding the original traffic.
- Best practice: use TAP mode for initial baselining, proof-of-concept deployments, or environments where inline insertion is not permitted.
Common Misconceptions:
- “IPS-only” sounds like it might be passive, but IPS-only on FTD is generally an inline deployment focused on intrusion prevention (still in the traffic path) rather than passive monitoring.
- “ERSPAN” is a traffic mirroring transport mechanism (encapsulated SPAN over IP), not the FTD interface mode itself.
- “Firewall” mode is explicitly inline and enforces access control/NAT; it is not passive.
Exam Tips:
- Remember: TAP = out-of-band/passive copy of traffic; Inline/Firewall/IPS-only = in-path.
- If the question emphasizes “passively receive” or “not in the forwarding path,” choose TAP.
- If it emphasizes “block/drop/prevent,” it’s an inline mode (firewall or IPS-only).
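The passive/TAP interface itself is configured in the FTD manager, but the traffic copy it receives is typically produced by a switch SPAN session. An illustrative Cisco IOS local SPAN configuration (interface numbers are placeholders) feeding the port where the FTD passive interface is connected:

```
! Mirror both directions of the uplink traffic to the port
! connected to the FTD passive/TAP interface
monitor session 1 source interface GigabitEthernet1/0/1 both
monitor session 1 destination interface GigabitEthernet1/0/24
```

Because the FTD only sees this mirrored copy, the original flow continues regardless of what the sensor detects, which is exactly why TAP mode is visibility-only.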
Which two routing options are valid with Cisco FTD? (Choose two.)
BGPv6 is not considered a valid supported routing option for Cisco FTD in this exam context. Although candidates may assume IPv6 support follows IPv4 BGP support, FTD feature support is more limited and exam questions often test that distinction. The platform supports BGPv4, but not BGPv6 as a selectable valid answer here. For that reason, this option is incorrect.
Cisco FTD supports ECMP with up to three equal-cost paths across multiple interfaces. This allows the firewall to install multiple routes with the same metric and distribute traffic across separate egress links for redundancy and load sharing. The wording in this option matches the documented and commonly tested FTD ECMP capability. Because the question asks for valid routing options, this is one of the correct selections.
This option is incorrect because the tested FTD ECMP capability is specifically up to three equal-cost paths across multiple interfaces. The phrase 'across a single interface' does not match the supported feature statement used in Cisco exam objectives. While multiple next hops may conceptually exist behind one interface in some platforms, that is not the valid FTD routing option being tested here. Therefore this choice is a distractor.
BGPv4 in transparent firewall mode is not valid because transparent mode operates as a Layer 2 bridge rather than a Layer 3 routed device. In transparent mode, the firewall can forward BGP packets between other devices, but it does not establish BGP neighbor relationships itself. Dynamic routing protocol participation is associated with routed mode, not transparent mode. Therefore this option is incorrect.
Cisco FTD supports BGPv4, and BGP nonstop forwarding is a valid supported capability in the exam context. This allows forwarding to continue while the control plane recovers, improving resiliency during certain failover or restart scenarios. The key point is that the feature is tied to BGPv4, which FTD does support in routed mode. Therefore this option is a valid routing feature on Cisco FTD.
Core concept: This question tests knowledge of Cisco FTD dynamic routing and ECMP feature support. On the SNCF exam, you must know which routing protocols and advanced routing behaviors are actually supported by FTD, as well as the limitations around deployment modes and path selection.
Why correct: Cisco FTD supports ECMP with up to three equal-cost paths across multiple interfaces, which enables redundancy and load sharing across separate links. It also supports BGPv4 with nonstop forwarding, allowing routing continuity during control-plane events in supported designs. These are recognized routing capabilities of FTD in routed deployments.
Key features:
- ECMP on FTD can install and use up to three equal-cost routes when they exit different interfaces.
- BGPv4 is supported on FTD in routed mode for dynamic route exchange with upstream routers.
- Nonstop forwarding is supported with BGPv4 to help preserve forwarding during route processor or adjacency disruptions.
- Transparent mode does not support the firewall itself participating in dynamic routing protocols.
Common misconceptions:
- Candidates often assume BGPv6 is supported anywhere BGPv4 is supported, but FTD exam questions typically distinguish supported IPv4 routing features from unsupported IPv6 BGP capabilities.
- Another trap is assuming transparent mode can run routing protocols because it can pass routed traffic; in reality, the firewall does not form routing adjacencies in transparent mode.
- ECMP across a single interface is not the tested supported statement; the supported feature is across multiple interfaces.
Exam tips:
- Remember that routed mode is where FTD participates in dynamic routing.
- Watch for transparent mode distractors, since FTD in transparent mode does not act as a Layer 3 routing peer.
- For ECMP, focus on the supported path count and whether the paths are across multiple interfaces.
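ECMP on FTD is driven by equal-cost routes. In ASA-style route syntax, two equal-cost default routes out of different egress interfaces would look like the sketch below (interface names and next-hop addresses are illustrative; depending on the FTD version, the interfaces may also need to be grouped into a traffic/ECMP zone in the manager for cross-interface load sharing to be permitted):

```
! Two default routes with the same metric (1) via different
! egress interfaces -- candidates for ECMP load sharing
route outside1 0.0.0.0 0.0.0.0 203.0.113.1 1
route outside2 0.0.0.0 0.0.0.0 198.51.100.1 1
```

If the metrics differed, only the lower-metric route would be installed and no load sharing would occur, which is why "equal cost" is the operative phrase in the exam wording.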
After using Firepower for some time and learning about how it interacts with the network, an administrator is trying to correlate malicious activity with a user. Which widget should be configured to provide this visibility on the Cisco Firepower dashboards?
Current Sessions is used to view active traffic sessions and connection-oriented information in near real time. Although it may show details about ongoing flows, it is not intended to correlate malicious activity over multiple events or provide higher-level investigative linkage to a user. Its purpose is operational monitoring rather than security correlation. Therefore, it does not best satisfy the requirement in the question.
Correlation Events is the dashboard widget designed to display events generated through Firepower correlation logic and policies. These events are intended to connect suspicious or malicious activity with contextual information, which can include user identity when identity data is available in the system. Because the question asks specifically about correlating malicious activity with a user, this widget is the most appropriate and product-aligned answer. It provides focused security visibility rather than a generic or operational dashboard view.
Current Status provides summary or health-oriented information about the system, devices, or overall environment. It is useful for understanding whether components are functioning properly, but it does not provide the event-level investigative context needed to associate malicious activity with a user. The widget lacks the correlation focus implied by the question. As a result, it is not the correct choice.
Custom Analysis is a flexible reporting-style widget that can be tailored to display selected event data and filters. However, the question asks which widget should be configured specifically to provide visibility for correlating malicious activity with a user, and in Firepower terminology that function is more directly associated with Correlation Events. Custom Analysis is broader and more generic, while Correlation Events is the purpose-built option for this use case. For an exam question focused on named dashboard widgets, the explicit correlation widget is the better answer.
Core concept: This question asks which Cisco Firepower dashboard widget provides visibility to correlate malicious activity with a user. In Firepower Management Center, correlation is the mechanism used to tie together security events and contextual information, including user identity when available. The dashboard widget specifically designed to surface these correlated security findings is the Correlation Events widget.
Why correct: Correlation Events displays the output of correlation policies and correlated security activity, which is the feature intended to help administrators connect malicious behavior to contextual data such as hosts, users, and related events. When an administrator wants visibility into malicious activity associated with a user, this widget is the most direct fit because it highlights security-relevant correlations rather than generic traffic or health data.
Key features: Correlation Events focuses on security event relationships, policy-driven correlation, and contextualized alerts. It is designed for threat visibility and investigation, unlike operational widgets that show status or sessions. It helps surface higher-value findings derived from multiple event sources.
Common misconceptions: Custom Analysis is flexible for reporting and ad hoc event views, but it is not the named widget specifically associated with correlation of malicious activity to users on Firepower dashboards. Current Sessions may show active flows, and Current Status shows health or summary information, but neither is intended for this use case.
Exam tips: On Cisco security exams, pay attention to product terminology. If the question asks which widget should be configured to provide correlation visibility, the option that explicitly matches the correlation function in FMC is usually the correct choice.
What is a feature of Cisco AMP private cloud?
Incorrect. Avoiding or restricting direct connections to the public cloud may be a deployment reason for choosing AMP Private Cloud, but it is not the best description of a product feature. The option is framed as a blanket disabling behavior, which is more of an architectural policy outcome than a core capability. Exam questions asking for a feature typically target what the platform does, such as malware analysis, rather than the connectivity policy around it.
Incorrect. Security Intelligence filtering is a separate Cisco Firepower feature used to block or monitor traffic based on reputation feeds such as IP addresses, URLs, and domains. It is not a defining feature of Cisco AMP Private Cloud itself. Although both can exist in the same security architecture, Security Intelligence belongs to access-control and reputation filtering rather than private-cloud malware analysis.
Incorrect. Anonymized retrieval of threat intelligence is not the primary feature associated with Cisco AMP Private Cloud. That wording refers more to privacy-preserving cloud lookups or intelligence-query mechanisms than to the private-cloud malware analysis platform. AMP Private Cloud is focused on keeping analysis and disposition functions local, not on anonymized external intelligence retrieval as its defining capability.
Correct. Cisco AMP Private Cloud provides malware analysis capabilities within an on-premises or isolated deployment, and dynamic analysis is a key feature of that environment. This allows suspicious files to be executed and observed in a controlled setting so their behavior can be evaluated without sending them to a public cloud service. In secure or regulated environments, this is valuable because it preserves local control over samples and analysis results while still enabling advanced threat detection.
Core Concept: Cisco AMP Private Cloud provides an on-premises malware analysis and disposition environment for organizations that need AMP capabilities within controlled or isolated networks. Its distinguishing feature is that it can analyze suspicious files locally, including sandbox-style dynamic analysis, without relying on Cisco’s public cloud for that function.
Why Correct: The best answer is D because Cisco AMP Private Cloud is designed to bring advanced malware analysis capabilities into private environments. This includes dynamic analysis of files in a controlled environment, which is a core malware-defense function associated with the private cloud deployment.
Key Features: AMP Private Cloud supports local file dispositioning and malware analysis for environments with strict data residency or connectivity requirements. It is used when organizations want to keep samples and analysis internal while still benefiting from AMP-style detection workflows. This makes it especially suitable for regulated, classified, or highly restricted networks.
Common Misconceptions: Candidates often confuse AMP Private Cloud with general connectivity restrictions and assume its main feature is simply blocking public-cloud access. While private deployments reduce or avoid dependence on public cloud services, that is an architectural outcome rather than the primary feature being tested here. Security Intelligence filtering is a separate Firepower capability, and anonymized retrieval is not the defining function of AMP Private Cloud.
Exam Tips: When you see AMP Private Cloud, think local malware analysis and private deployment of AMP services. If an option mentions dynamic analysis, that aligns closely with malware analysis capabilities. If an option mentions Security Intelligence, think reputation-based filtering instead of AMP Private Cloud.
A network administrator notices that inspection has been interrupted on all non-managed interfaces of a device. What is the cause of this?
Incorrect. MTU changes can affect packet forwarding, fragmentation, and path behavior, but they are not the documented trigger for inspection interruption on all non-management interfaces in this context. The question is specifically about a platform behavior tied to inspection being interrupted globally, which aligns with a change to the highest configured MSS value. MTU is a Layer 3 forwarding characteristic, whereas the described event is tied to the inspection engine's TCP stream processing settings.
Correct. In Cisco Firepower/FTD, the highest MSS value configured on any non-management interface is used by the inspection engine as a global TCP stream handling parameter. When that highest MSS value changes, the Snort inspection process must restart or reinitialize so the new value is applied consistently across inspected traffic. Because Snort handles inspection for all non-management interfaces, this causes inspection to be interrupted on all of them, not just on the interface where the change was made.
Incorrect. Associating a passive interface with a security zone changes policy classification and zone-based rule applicability, but it does not trigger a global restart of the inspection engine. Any effect would be limited to traffic matching and access-control behavior for that interface or zone relationships. It would not explain why inspection stopped on all non-management interfaces simultaneously.
Incorrect. Adding multiple inline interface pairs to the same inline interface would be a configuration problem affecting those specific inline deployments. Such a mistake could cause interface-pair conflicts or deployment errors, but it would not produce the specific platform-wide symptom of inspection interruption across all non-management interfaces. The question points to a global inspection-engine parameter change rather than an inline topology issue.
Core Concept: This question tests Cisco Firepower Threat Defense behavior when interface TCP MSS settings are changed. In FTD, the maximum TCP MSS configured on any non-management interface is treated as a global inspection-related parameter for the Snort inspection process rather than as a purely local interface tweak.

Why Correct: If the highest MSS value on any non-management interface changes, the system restarts or reinitializes inspection so the new TCP stream handling limits are applied consistently, which interrupts inspection on all non-management interfaces.

Key Features: MSS affects TCP normalization and stream reassembly behavior, while management interfaces are excluded because they do not participate in normal traffic inspection.

Common Misconceptions: MTU changes are often assumed to be the trigger because they affect packet size, but the documented platform-wide inspection interruption is tied to the highest configured MSS value, not MTU.

Exam Tips: When a question mentions interruption on all non-management interfaces after an interface parameter change, look for a globally applied inspection parameter such as MSS rather than a local zoning or inline-pair configuration issue.
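The rule described above, that the highest MSS across non-management interfaces acts as one global inspection parameter, can be sketched as a toy model. This is an illustration only, not FTD code; the interface names, MSS values, and function names are all hypothetical.

```python
# Toy model: the highest TCP MSS configured on any NON-management
# interface acts as a single global parameter for the inspection engine.
# Changing that maximum forces an inspection restart, interrupting
# inspection on ALL non-management interfaces at once.
# All interface names and values below are hypothetical.

def highest_mss(interfaces):
    """Return the maximum MSS across non-management interfaces."""
    return max(cfg["mss"] for cfg in interfaces.values()
               if not cfg["management"])

def apply_mss_change(interfaces, name, new_mss):
    """Change one interface's MSS; return True if inspection must restart."""
    before = highest_mss(interfaces)
    interfaces[name]["mss"] = new_mss
    after = highest_mss(interfaces)
    # A restart (inspection interruption) happens only when the
    # global maximum actually changes, not on every per-interface edit.
    return after != before

interfaces = {
    "GigabitEthernet0/0": {"mss": 1380, "management": False},
    "GigabitEthernet0/1": {"mss": 1460, "management": False},
    "Management0/0":      {"mss": 1460, "management": True},
}

# Raising the highest non-management MSS changes the global value:
print(apply_mss_change(interfaces, "GigabitEthernet0/1", 1500))  # True
# Adjusting another interface while it stays below the maximum does not:
print(apply_mss_change(interfaces, "GigabitEthernet0/0", 1400))  # False
```

The second call is the key point for the exam scenario: only a change to the highest configured MSS value is globally disruptive.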
An engineer configures a network discovery policy on Cisco FMC. Upon configuration, it is noticed that excessive and misleading events are filling the database and overloading the Cisco FMC. A monitored NAT device is executing multiple updates of its operating system in a short period of time. What configuration change must be made to alleviate this issue?
Correct. NAT devices and load balancers cause many hosts to appear as a small set of IPs, breaking FMC’s assumption that an IP represents a single endpoint. During frequent OS updates, fingerprint changes and service fluctuations can trigger repeated rediscovery events, rapidly growing the database. Excluding NAT/load balancer devices (and/or their translated networks) is a standard tuning step to reduce misleading discovery data and FMC load.
Incorrect. Leaving default networks does not mitigate the fundamental problem: FMC is collecting discovery data from addresses that are not stable endpoint identifiers due to NAT. Defaults may even worsen the issue if they include broad ranges that capture translated or shared infrastructure IPs. The fix requires narrowing scope and excluding NAT/load balancer components, not keeping the default discovery targets unchanged.
Incorrect. Increasing the number of entries on the NAT device (NAT table size) is unrelated to FMC’s discovery event storm. FMC overload is caused by repeated, misleading endpoint/OS attribution from translated/shared IPs and changing fingerprints during updates. A larger NAT table might improve NAT performance, but it will not reduce FMC database churn or false discovery events.
Incorrect. Changing the discovery method to TCP/SYN may alter how active discovery probes are performed, but it does not solve the core issue of identity ambiguity created by NAT. Even with different probing, FMC will still see multiple internal endpoints (or changing device characteristics during updates) behind the same translated IPs, continuing to generate excessive and misleading discovery events.
Core Concept: This question tests Cisco FMC Network Discovery policy behavior and how active discovery can generate excessive “churn” in host/OS/user identity data when traffic is seen through NAT or load balancers. FMC correlates discovery events (hosts, operating systems, applications, users) based on observed network attributes. When many endpoints are represented by a small set of translated IP addresses, FMC can misattribute changes and repeatedly “rediscover” different OS fingerprints for the same IP, creating misleading events and database load.

Why the Answer is Correct: A monitored NAT device performing multiple OS updates in a short time can cause frequent changes in observed OS fingerprints and related discovery attributes. If FMC is monitoring the NAT device’s translated address space (or the NAT device itself as a discovery target), many internal hosts may appear to FMC as a few external IPs. As the NAT device changes behavior during updates (different TCP/IP stack characteristics, services, banners), FMC generates repeated discovery updates and events against the same observed IPs, filling the database and overloading FMC. Best practice is to exclude NAT devices and load balancers from discovery (or exclude the translated networks) so FMC does not attempt to build endpoint identity/OS profiling from addresses that do not uniquely represent endpoints.

Key Features / Best Practices: In Network Discovery, use exclusions for infrastructure devices that aggregate or translate traffic (NAT, proxies, load balancers) and avoid monitoring NAT pools as if they were true endpoint networks. Focus discovery on internal, non-translated address ranges where IP-to-host identity is stable. This reduces false positives, event storms, and database growth.

Common Misconceptions: It’s tempting to “tune” the discovery method (for example, TCP/SYN) to reduce noise, but the root issue is identity ambiguity caused by NAT, not the probe type. Similarly, changing NAT table size or leaving defaults does not address FMC’s correlation problem.

Exam Tips: When you see “excessive/misleading discovery events” plus NAT/load balancer/proxy, think “exclude those devices/networks from discovery” to prevent IP reuse/translation from corrupting endpoint attribution and overwhelming FMC.
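The exclusion idea can be pictured as a small pre-storage filter. This is a conceptual sketch in Python, not FMC behavior or its API; the network ranges, event fields, and function names are invented for illustration.

```python
# Sketch of the tuning step: drop discovery events sourced from
# NAT/load-balancer address space before they reach the host database,
# so unstable translated IPs cannot generate rediscovery churn.
# The excluded ranges and event fields below are hypothetical examples.
import ipaddress

EXCLUDED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # example NAT pool
    ipaddress.ip_network("198.51.100.10/32"),  # example load-balancer VIP
]

def keep_event(event):
    """Return True if the discovery event maps to a stable endpoint IP."""
    ip = ipaddress.ip_address(event["src_ip"])
    return not any(ip in net for net in EXCLUDED_NETWORKS)

events = [
    {"src_ip": "10.1.1.25",   "os": "Windows 10"},  # stable internal host
    {"src_ip": "203.0.113.7", "os": "Linux 5.x"},   # behind NAT: misleading
    {"src_ip": "203.0.113.7", "os": "Windows 11"},  # same IP, new fingerprint
]

stored = [e for e in events if keep_event(e)]
print(len(stored))  # 1 -- only the stable internal endpoint is recorded
```

The two conflicting fingerprints for 203.0.113.7 are exactly the kind of churn the question describes; excluding the translated range keeps them out of the database entirely.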

