
Which two OSPF routing features are configured in Cisco FMC and propagated to Cisco FTD? (Choose two.)
Incorrect. OSPFv2 is an IPv4 routing protocol; IPv6 uses OSPFv3. The phrase “OSPFv2 with IPv6 capabilities” is a common distractor that conflates the two protocol versions. In FMC/FTD, IPv6 routing would be handled via OSPFv3 configuration rather than an “IPv6-capable OSPFv2” feature.
Correct. Virtual links are supported OSPF constructs used to logically extend Area 0 across a transit area when the backbone is not contiguous. FMC can configure OSPF areas and virtual links and then deploy that configuration to FTD. On the exam, virtual links are a recognizable, standard OSPF feature that is commonly included in FMC-managed OSPF capabilities.
Incorrect. While SHA-based authentication exists in some routing/security designs, OSPF authentication in FMC-managed FTD is typically implemented using simple password or MD5 for OSPFv2. SHA authentication is not a commonly exposed/supported OSPF authentication option in FMC for FTD, making it a likely distractor.
Incorrect. Type 1 LSAs are router LSAs and are fundamental within an area; “Type 1 LSA filtering at an ABR” is not a typical, broadly supported or exposed feature in FMC for FTD OSPF. FMC generally provides core OSPF configuration (areas, interfaces, authentication, summarization where applicable) rather than granular LSA filtering controls.
Correct. MD5 authentication is a standard OSPFv2 security feature used to protect OSPF adjacencies and prevent unauthorized route exchange. FMC supports configuring MD5 authentication parameters and propagates them to FTD during deployment. For exam purposes, MD5 is the most commonly referenced OSPF authentication method in FMC/FTD contexts.
Core concept: This question tests which OSPF features are supported when OSPF is configured centrally in Cisco FMC (Firepower Management Center) and then deployed (propagated) to Cisco FTD devices. In the FMC/FTD architecture, FMC is the policy/configuration authority, but only a subset of “classic IOS/ASA OSPF” capabilities are exposed in the FMC UI/API for FTD routing.

Why the answers are correct: Virtual links and MD5 authentication are two OSPF features that FMC can configure and push to FTD. Virtual links are an OSPF mechanism used to logically connect an area to the backbone (Area 0) through a transit area, typically to repair a noncontiguous Area 0 design. FMC supports defining OSPF areas and can include virtual-link configuration where needed, which is then rendered into the FTD’s underlying routing configuration. MD5 authentication is a commonly supported OSPFv2 authentication method for securing OSPF adjacencies. FMC allows you to configure OSPF authentication parameters (including MD5) on interfaces/areas, and those settings are deployed to FTD so that OSPF neighbors must match keys to form an adjacency.

Key features, configuration, and best practices:
- Use OSPF authentication (MD5 in this context) to prevent unauthorized neighbors and route injection. Ensure consistent key IDs/strings and plan key rotation.
- Prefer a correct Area 0 physical design; use virtual links only as a temporary or last-resort fix because they add operational complexity and can be fragile if the transit area becomes unstable.
- Validate adjacency formation and authentication mismatches using FTD show/monitoring tools after deployment.

Common misconceptions:
- “OSPFv2 with IPv6 capabilities” is misleading: OSPFv2 is for IPv4; IPv6 uses OSPFv3. FMC supports OSPFv3 separately, but it is not “OSPFv2 with IPv6.”
- SHA authentication is widely used in other routing/security contexts, but OSPF authentication in many FMC/FTD implementations is centered on MD5 (and/or simple password), not SHA-based OSPF authentication.
- Advanced LSA filtering at ABRs (for example, Type 1 LSA filtering) is not typically exposed/supported in FMC for FTD; FMC focuses on core OSPF constructs rather than deep LSA manipulation.

Exam tips: For SNCF, remember that FMC-managed routing on FTD supports common, foundational OSPF features (areas, neighbors, authentication such as MD5, and certain topology tools like virtual links), but not every IOS/ASA OSPF knob. Watch for distractors that mix OSPFv2/v3 terminology or assume full IOS feature parity.
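The “neighbors must match keys to form an adjacency” requirement behind MD5 authentication can be illustrated with a small model. This is a hypothetical Python sketch, not Cisco code: the class and field names are invented for illustration, and it simplifies OSPF adjacency formation down to the parameters this question cares about (area and authentication settings).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OspfInterface:
    """Simplified, invented model of an OSPFv2 interface's auth settings."""
    area: int
    auth_type: str  # "null", "simple", or "md5"
    key_id: int     # only meaningful for MD5
    key: str        # password (simple) or MD5 key string

def can_form_adjacency(a: OspfInterface, b: OspfInterface) -> bool:
    """Neighbors must agree on area and on every authentication parameter."""
    if a.area != b.area or a.auth_type != b.auth_type:
        return False
    if a.auth_type == "null":
        return True
    if a.key != b.key:
        return False  # password/key string mismatch blocks the adjacency
    return a.auth_type != "md5" or a.key_id == b.key_id

ftd = OspfInterface(area=0, auth_type="md5", key_id=1, key="s3cret")
peer_ok = OspfInterface(area=0, auth_type="md5", key_id=1, key="s3cret")
peer_bad = OspfInterface(area=0, auth_type="md5", key_id=2, key="s3cret")

print(can_form_adjacency(ftd, peer_ok))   # True: key and key ID match
print(can_form_adjacency(ftd, peer_bad))  # False: key ID mismatch
```

The key-ID check mirrors why a deployed FMC configuration must use consistent key IDs and strings on both neighbors, as noted in the best practices above.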
Network traffic coming from an organization's CEO must never be denied. Which access control policy configuration option should be used if the deployment engineer is not permitted to create a rule to allow all traffic?
Changing the intrusion policy from Security to Balanced only alters IPS inspection behavior and signature tuning. It may reduce false positives, but it does not stop the access control policy from denying traffic, nor does it guarantee IPS will never drop a packet. The requirement is absolute, so a tuning change is insufficient. This option addresses inspection sensitivity, not exemption from enforcement.
A Trust policy is not the same as Firewall Bypass and does not provide the strongest guarantee that traffic can never be denied. Trust is used to fastpath or reduce inspection for selected traffic, but it is still not the exam-best answer when the requirement explicitly says traffic must never be denied. Cisco uses bypass for traffic that must be exempt from the firewall decision path entirely. Therefore, Trust is too weak and too ambiguous for this requirement.
Firewall Bypass is the option used when selected traffic must not be subject to normal firewall enforcement. Matching traffic bypasses the access control policy and associated inspection engines, so it is not denied by later ACP, IPS, or file-policy decisions. That directly satisfies the requirement that the CEO’s traffic must never be denied without creating a broad allow-all rule. Because bypass removes security visibility and control, it should be limited to a very specific host or flow definition.
A NAT policy only changes packet addressing information such as source or destination IP values. It does not create authorization to pass traffic through the firewall, and translated traffic can still be blocked by ACP, IPS, Security Intelligence, or other controls. NAT is a connectivity and translation feature, not a guarantee against denial. Therefore, it cannot satisfy the stated business requirement.
Core concept: This question tests knowledge of Cisco Secure Firewall access control policy options that exempt traffic from normal inspection and enforcement. The requirement is absolute: the CEO’s traffic must never be denied, and the engineer is specifically not allowed to create a rule that simply allows all traffic. In that scenario, the correct mechanism is Firewall Bypass, which causes matching traffic to bypass the access control policy and associated inspection engines entirely.

Why correct: Firewall Bypass is designed for traffic that must not be blocked by the firewall policy path. Because bypassed traffic is not evaluated against the normal ACP decision logic, it avoids later denies from access control, IPS, file, or other security inspections. That makes it the only option here that directly satisfies the strict business requirement.

Key features: Firewall Bypass is configured for narrowly defined traffic and is intended for exceptional cases where uninterrupted forwarding is more important than inspection. It is stronger than tuning an intrusion policy or using NAT because it removes the traffic from the normal enforcement path. It should be scoped as tightly as possible because it creates a visibility and security blind spot.

Common misconceptions: Many candidates confuse Trust with Bypass. Trust can fastpath traffic and reduce inspection overhead, but it is still a policy action and does not provide the same absolute guarantee as bypassing the firewall path. NAT also does not permit traffic; it only translates addresses.

Exam tips: Watch for phrases like "must never be denied" or "cannot create an allow rule." Those usually indicate a bypass-style feature rather than a tuning or translation feature. On Cisco firewall exams, distinguish carefully between Trust, Allow, and Bypass because they affect different stages of packet processing and enforcement.
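The ordering that makes bypass stronger than an ordinary allow rule can be shown with a toy decision function. This is a hypothetical Python sketch (the host address, function name, and string values are invented; this is not FMC packet-processing logic): the bypass match is evaluated before any enforcement stage, so later deny decisions can never apply to that traffic.

```python
# Invented bypass list: traffic matching it skips ACP/IPS evaluation entirely.
BYPASS_SOURCES = {"10.1.1.50"}  # hypothetical CEO workstation address

def packet_fate(src_ip: str, acp_action: str, ips_verdict: str) -> str:
    """Bypass is checked before enforcement, so later denies never apply."""
    if src_ip in BYPASS_SOURCES:
        return "forwarded (bypassed, uninspected)"
    if acp_action == "block" or ips_verdict == "drop":
        return "denied"
    return "forwarded (inspected)"

# Bypassed traffic is forwarded even when ACP and IPS would both deny it:
print(packet_fate("10.1.1.50", "block", "drop"))
# Other traffic still follows the normal decision path:
print(packet_fate("10.2.2.20", "block", "pass"))  # denied by ACP
```

The sketch also shows the trade-off called out above: the bypassed flow is uninspected, which is why the match criteria should be scoped as tightly as possible.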
An engineer is tasked with deploying an internal perimeter firewall that will support multiple DMZs. Each DMZ has a unique private IP subnet range. How is this requirement satisfied?
Transparent mode with access control policies can inspect traffic without acting as the primary Layer 3 hop, so it is often used when you want to insert a firewall without changing the existing IP addressing or routing design. However, for an internal perimeter firewall that must support multiple DMZs, the more typical and scalable design is to use routed mode so the firewall terminates each DMZ segment directly. The issue is not that transparent mode cannot ever carry multiple subnets, but that it is not the best fit for this stated deployment requirement.
Routed mode with access control policies is the correct fit for multiple DMZs that each use a unique private subnet. The firewall can have an interface/subinterface in each DMZ VLAN/subnet, route between inside and DMZ networks, and enforce stateful policies per zone. This is the standard multi-DMZ internal perimeter design and does not require NAT unless translation is specifically needed.
Routed mode with NAT configured is not required by the stated requirement. NAT is used for overlapping address spaces, hiding addresses, or publishing/egress scenarios, but multiple DMZs with unique private subnets can be routed and filtered without translation. Adding NAT could complicate troubleshooting and policy design and is only justified if the question explicitly calls for translation.
Transparent mode with NAT configured is not the best answer because NAT is not required by the scenario, and transparent mode is generally chosen to avoid making the firewall the routed boundary. While some Cisco platforms support NAT in transparent mode, that does not make it the preferred design for multiple DMZ segments with unique private subnets. The cleaner and more standard approach is routed mode with policy enforcement, adding NAT only if a separate translation requirement exists.
Core concept: This question tests the difference between routed mode and transparent mode on a Cisco firewall, and when NAT is actually needed. Multiple DMZs with different private IP subnet ranges are most commonly implemented by making the firewall the Layer 3 boundary for each segment and enforcing access control between them.

Why the answer is correct: Deploying the firewall in routed mode with access control policies is the best answer because the firewall can terminate each DMZ on its own interface or subinterface, participate in routing for those subnets, and apply security policy between inside, DMZ, and other zones. This is the standard design for an internal perimeter firewall with multiple DMZs. NAT is not required simply because the DMZs use private addressing; unique private subnets can be routed directly as long as routing is in place.

Key features:
- Routed mode gives the firewall an IP presence on each connected subnet or VLAN.
- Multiple DMZs can be supported with physical interfaces or 802.1Q subinterfaces.
- Access control policies can be applied per interface, zone, network, application, and service.
- NAT is optional and only needed when translation is a stated requirement, such as overlapping addresses or address hiding/publishing.

Common misconceptions: A common mistake is assuming NAT is required whenever DMZs use private IP space. In reality, private-to-private segmentation works fine without NAT if the networks are unique and routable. Another misconception is that transparent mode is automatically invalid for multiple subnets; transparent mode can pass traffic for multiple IP networks, but it is not the usual choice when the firewall is intended to serve as the routed boundary for several DMZs.

Exam tips:
- If the question describes several distinct network segments or DMZs and asks how to deploy the firewall, routed mode is usually the expected answer.
- Only choose NAT when the prompt explicitly indicates translation, overlap, or public/private address mapping requirements.
- Transparent mode is generally selected when you want to insert the firewall with minimal routing changes, not when you are designing the primary Layer 3 separation point.
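The routed-mode design rests on each DMZ having a unique, non-overlapping subnet. That property is easy to sanity-check with Python's standard ipaddress module; the zone names and subnet values below are made up for illustration.

```python
import ipaddress

# Hypothetical per-DMZ subnets, each terminated on its own firewall
# (sub)interface in a routed-mode design.
dmz_subnets = {
    "dmz-web":  ipaddress.ip_network("172.16.10.0/24"),
    "dmz-mail": ipaddress.ip_network("172.16.20.0/24"),
    "dmz-dns":  ipaddress.ip_network("172.16.30.0/24"),
}

def overlapping_pairs(subnets):
    """Return zone-name pairs whose subnets overlap; routed mode needs none."""
    names = list(subnets)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if subnets[a].overlaps(subnets[b])]

print(overlapping_pairs(dmz_subnets))  # [] -> unique subnets, NAT not required
```

If this check ever returned a pair, that overlap (not the use of private addressing) would be the kind of condition that justifies adding NAT, per the exam tips above.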
When deploying a Cisco ASA Firepower module, an organization wants to evaluate the contents of the traffic without affecting the network. It is currently configured to have more than one instance of the same device on the physical appliance. Which deployment mode meets the needs of the organization?
Inline tap monitor-only mode is not the best answer because it still relies on an inline set rather than a purely out-of-band monitoring design. Although inline tap is less disruptive than full inline enforcement and is often used to observe traffic without dropping it, it remains an inline deployment model conceptually tied to the traffic path. The question emphasizes evaluating traffic without affecting the network, which is more precisely satisfied by passive monitor-only mode. On exams, passive monitor-only is the cleaner answer whenever the requirement is zero forwarding impact.
Passive monitor-only mode is the correct deployment when the organization wants to inspect traffic contents without affecting production forwarding. In passive mode, the Firepower module receives a copy of the traffic, typically from a SPAN, mirror, or similar mechanism, so it can analyze sessions without being part of the packet-forwarding decision. Because it is out of band, the module cannot block or delay traffic, which matches the requirement to evaluate traffic safely. This makes it the best fit for assessment, baselining, and visibility use cases on appliances hosting multiple logical instances.
Passive tap monitor-only mode is not standard Cisco ASA Firepower deployment terminology. Cisco commonly distinguishes between passive deployments and inline or inline tap deployments, but this option combines terms in a way that does not represent the recognized mode name. Because the exam expects official deployment mode terminology, this option should be eliminated. Even though it sounds non-intrusive, the wording itself is a clue that it is not the correct Cisco-defined answer.
Inline mode is used when the Firepower module is intended to enforce policy on live traffic. In this mode, packets are subject to inspection decisions that can permit, block, or otherwise affect forwarding, which introduces the possibility of latency, drops, or operational impact. That directly conflicts with the requirement to evaluate traffic contents without affecting the network. Therefore, inline mode is not appropriate for a non-disruptive assessment deployment.
Core concept: This question is about choosing a Firepower deployment mode on Cisco ASA that allows traffic inspection without introducing any forwarding dependency or enforcement impact. The phrase "evaluate the contents of the traffic without affecting the network" points directly to a monitor-only deployment, and the mention of more than one instance on the same physical appliance refers to ASA multiple-context operation.

Why correct: Passive monitor-only mode is the right choice because it observes copied traffic out of band rather than sitting in the live forwarding path.

Key features: Analysis without blocking, no risk of traffic interruption from the inspection engine, and suitability for visibility or evaluation use cases.

Common misconceptions: It is tempting to equate any mode containing the word "tap" with being fully out of band, but inline tap still uses inline sets and is not the same as passive monitoring.

Exam tip: When the requirement explicitly says no network effect, prefer passive monitor-only over inline or inline tap unless the question specifically describes an inline set with fail-open behavior.
An organization has a Cisco FTD that uses bridge groups to pass traffic from the inside interfaces to the outside interfaces. They are unable to gather information about neighboring Cisco devices or use multicast in their environment. What must be done to resolve this issue?
Incorrect. Allowing CDP with a firewall rule addresses only one specific protocol and does not resolve the underlying design issue if the firewall is not operating in transparent mode. The question also mentions multicast generally, which points to a broader Layer 2 forwarding requirement rather than a single ACP exception. In Cisco FTD, proper bridge-group behavior is tied to transparent mode. A rule alone is not the primary fix for this scenario.
Incorrect. The question already states that the Cisco FTD uses bridge groups to pass traffic between inside and outside interfaces, so bridge groups are already present. Creating a bridge group again does not address the inability to use neighbor discovery or multicast. The problem is not the absence of a bridge group but the need for the correct firewall operating mode. This option repeats part of the existing configuration rather than fixing the issue.
Correct. Cisco FTD uses bridge groups as part of a transparent firewall deployment, where interfaces forward Layer 2 frames instead of routing packets at Layer 3. Protocols such as CDP depend on Layer 2 adjacency and are not routable, so the firewall must operate transparently for this design to work properly. Multicast and other non-routed traffic types also align with transparent forwarding behavior. Therefore, changing the firewall mode to transparent is the required action to support the intended bridged environment.
Incorrect. Routed mode operates at Layer 3 and does not forward Layer 2 discovery protocols such as CDP between interfaces. It is the opposite of what is needed when the requirement is to preserve Layer 2 adjacency and multicast behavior across firewall interfaces. Moving to routed mode would further separate the segments and prevent non-routable protocols from traversing. Therefore, routed mode would not solve the problem.
Core concept: This question tests the relationship between Cisco FTD bridge groups and firewall operating mode. Bridge groups are a Layer 2 construct associated with transparent firewall deployments, where the firewall forwards frames between interfaces without routing them. If an organization expects Layer 2 adjacency behaviors such as neighbor discovery and multicast handling across bridged interfaces, the firewall must be operating in transparent mode.

Why correct: The symptom described (traffic passing via bridge groups but an inability to gather information about neighboring Cisco devices or use multicast) indicates the deployment needs true Layer 2 transparent forwarding behavior. Cisco Discovery Protocol and many multicast/broadcast-dependent functions rely on transparent bridging rather than Layer 3 routing. Changing the firewall mode to transparent aligns the FTD behavior with the intended bridge-group design.

Key features:
- Transparent mode allows the firewall to act as a Layer 2 device while still enforcing security policy.
- Bridge groups are configured to bridge interfaces together in transparent deployments.
- Layer 2 protocols such as CDP are not routed, so they require transparent forwarding behavior to traverse the firewall path.
- Multicast and broadcast traffic handling is tied to the firewall operating as a bridge rather than a router.

Common misconceptions:
- Simply creating an ACL or firewall rule for CDP does not solve a mode/design mismatch.
- Bridge groups are not a substitute for transparent mode; they are part of that deployment model.
- Routed mode does not forward non-routable Layer 2 discovery protocols like CDP.

Exam tips: When a question mentions bridge groups, Layer 2 forwarding, CDP neighbor visibility, or multicast across firewall interfaces, think transparent mode first. If the issue is architectural rather than policy-specific, changing the operating mode is usually the correct answer rather than adding a rule for one protocol.
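The mode/traffic relationship at the heart of this question can be sketched as a tiny lookup. This is a deliberately oversimplified, hypothetical Python model (routed mode can forward multicast when multicast routing is configured; the sketch ignores that and only captures the exam-level distinction that link-local Layer 2 protocols like CDP are bridged, never routed).

```python
# Invented classification: traffic types that only a bridged (transparent)
# path carries in this simplified model.
L2_ONLY = {"cdp", "link-local-multicast"}

def traverses(mode: str, traffic: str) -> bool:
    """Transparent mode bridges L2 frames; routed mode only routes packets."""
    if mode == "transparent":
        return True  # bridged end to end (still subject to security policy)
    return traffic not in L2_ONLY  # routed mode drops non-routable L2 traffic

print(traverses("routed", "cdp"))       # False: CDP is not routable
print(traverses("transparent", "cdp"))  # True: bridged through the FTD
print(traverses("routed", "http"))      # True: ordinary IP traffic is routed
```

The False result for CDP in routed mode is exactly the symptom the question describes, and why the fix is an operating-mode change rather than a firewall rule.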
Which two conditions must be met to enable high availability between two Cisco FTD devices? (Choose two.)
Same flash memory size is not a prerequisite for Cisco FTD HA. The devices need to be supported for HA and run compatible software, but identical storage capacity is not what determines whether failover can be configured. Flash size may matter for image storage or local logging capacity, yet it does not affect HA pairing requirements directly. Therefore this option describes a hardware similarity that is not mandated by the HA feature.
Same NTP configuration is a best practice, not a mandatory condition to enable HA between two Cisco FTD devices. Time synchronization helps with event correlation, certificate validation, and troubleshooting, but HA formation itself does not depend on identical NTP settings. Cisco exams often distinguish between recommended operational settings and actual prerequisites. Because the question asks what must be met, NTP should not be selected.
Same DHCP/PPPoE configuration is required because corresponding interfaces on both FTD devices must behave the same way when the active unit fails over to the standby unit. If one peer uses DHCP or PPPoE and the other does not, the replicated interface configuration will not be equivalent and failover behavior will be inconsistent. Cisco HA prerequisites emphasize matching interface configuration characteristics on both units. This is especially important for outside interfaces that obtain addressing dynamically, because the standby device must be able to assume the same operational role after failover.
Same host name is not required for FTD HA. In fact, unique hostnames are commonly used so administrators can distinguish the two appliances in management tools, logs, and troubleshooting output. HA uses failover roles, interface mapping, and replicated configuration rather than hostname matching. As a result, hostname equality is not a condition for enabling HA.
Same number of interfaces is required because FTD HA relies on one-to-one interface mapping between the two peers. Each monitored data interface, as well as failover and state links, must have an equivalent interface on the partner device. If one device has fewer interfaces, some logical roles cannot be mirrored correctly and the HA pair cannot be built consistently. This is why HA deployments typically use identical models or platforms with equivalent interface layouts.
Core concept: Cisco FTD high availability requires the two devices to be closely matched so configuration replication and interface failover work predictably.

Why correct: The peers must have equivalent interface layouts and matching interface addressing behavior, including DHCP or PPPoE usage where applicable, so the standby unit can assume traffic handling correctly after failover.

Key features: FTD HA depends on compatible hardware/software, one-to-one interface mapping, and consistent interface role behavior across both units.

Common misconceptions: NTP synchronization is strongly recommended for logging and troubleshooting, but it is not a hard prerequisite to enable HA; hostname and flash size are also not required to match.

Exam tips: When evaluating HA prerequisites, focus on items that directly affect interface pairing and replicated network behavior rather than operational best practices like time sync.
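The two correct conditions can be captured as a pre-check over a toy device model. This is an illustrative Python sketch under stated assumptions (the class, field names, and addressing-mode strings are invented; it is not a Cisco API): it flags an interface-count mismatch and any interface whose addressing mode (static vs. DHCP vs. PPPoE) differs between the peers, while deliberately ignoring hostnames, which need not match.

```python
from dataclasses import dataclass

@dataclass
class FtdUnit:
    """Toy model of one FTD appliance for HA pre-checks (invented fields)."""
    hostname: str
    interfaces: dict  # name -> addressing mode: "static" | "dhcp" | "pppoe"

def ha_precheck(primary: FtdUnit, secondary: FtdUnit) -> list:
    """Return problems that would violate the two exam HA conditions."""
    problems = []
    if len(primary.interfaces) != len(secondary.interfaces):
        problems.append("interface count mismatch")  # no one-to-one mapping
    for name, mode in primary.interfaces.items():
        peer_mode = secondary.interfaces.get(name)
        if peer_mode is not None and peer_mode != mode:
            problems.append(f"{name}: addressing mode {mode} vs {peer_mode}")
    return problems

# Hostnames differ on purpose: that is allowed and even recommended.
a = FtdUnit("ftd-1", {"outside": "dhcp", "inside": "static"})
b = FtdUnit("ftd-2", {"outside": "static", "inside": "static"})
print(ha_precheck(a, b))  # ['outside: addressing mode dhcp vs static']
```

Note what the check does not look at: hostnames, NTP settings, or flash size, matching the distractor analysis above.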
An engineer is setting up a new Firepower deployment and is looking at the default FMC policies to start the implementation. During the initial trial phase, the organization wants to test some common Snort rules while still allowing the majority of network traffic to pass. Which default policy should be used?
Balanced Security and Connectivity is the best default starting point when you need to evaluate common Snort rules while keeping most legitimate traffic flowing. It enables a practical set of signatures with a reasonable tradeoff between detection and false positives. This aligns with a trial phase where the organization wants meaningful IPS testing without widespread disruption or heavy tuning upfront.
Security Over Connectivity prioritizes blocking and stricter enforcement, which can be appropriate for high-risk environments but is more likely to impact legitimate traffic during an initial rollout. It may enable more aggressive rule actions or broader coverage that increases false positives. This conflicts with the requirement to allow the majority of network traffic to pass during a trial.
Maximum Detection is geared toward the highest possible detection coverage and visibility, often resulting in a large volume of events and increased noise. While it can be useful for threat hunting or lab evaluation, it commonly requires significant tuning to avoid false positives and operational disruption. It is not the best choice when the goal is to test common rules but keep traffic impact low.
Connectivity Over Security is optimized to minimize traffic disruption, but it typically does so by enabling fewer intrusion rules or using less aggressive coverage. That means it may not provide enough “common Snort rule” testing value for an evaluation phase. It’s better suited when uptime and avoiding false positives are the primary goals, not when you want meaningful IPS validation.
Core Concept: This question tests knowledge of Firepower Management Center (FMC) default intrusion policies (Snort IPS policies) and how they balance detection coverage against operational impact (false positives, blocked traffic). Default policies are commonly used as starting points during initial deployments and trials.

Why the Answer is Correct: “Balanced Security and Connectivity” is designed specifically to provide meaningful, commonly relevant Snort rule coverage while minimizing disruption to legitimate traffic. In an initial trial phase, organizations typically want to validate visibility and prevention value (triggering on prevalent threats) without causing widespread blocking due to aggressive signatures or strict tuning requirements. This policy aims for a practical middle ground: it enables a curated set of rules that are effective against common attacks and malware activity, while avoiding overly noisy or high false-positive categories.

Key Features / Best Practices: Balanced policies are a recommended baseline when you need to “turn on IPS” but keep business traffic flowing. In practice, you would:
1) Deploy Balanced Security and Connectivity first.
2) Run in intrusion “alert” mode initially (or carefully choose “drop” only for high-confidence categories), then transition to more blocking as you validate.
3) Use FMC intrusion events, impact flags, and rule documentation to tune: suppress noisy rules, set network variables correctly, and apply policy layers or exceptions for known-good applications.
4) Combine with the access control policy (ACP) and file/malware policies for layered defense.

Common Misconceptions: “Connectivity Over Security” sounds like it would allow most traffic (true), but it often enables fewer rules and provides less meaningful testing of common Snort detections, contrary to the requirement to “test some common Snort rules.” “Maximum Detection” and “Security Over Connectivity” can appear attractive for security validation, but they tend to generate more events and potential false positives, increasing the chance of disrupting normal traffic during a trial.

Exam Tips: Map the policy name to the business goal:
- Want broad testing/coverage but minimal disruption? Choose Balanced.
- Want the least disruption and minimal IPS coverage? Connectivity Over Security.
- Want aggressive blocking/strictness? Security Over Connectivity.
- Want the most signatures/events for hunting/visibility (often noisy)? Maximum Detection.
Also remember: default intrusion policies are starting points; real deployments require tuning based on environment, applications, and observed events.
Which feature within the Cisco FMC web interface allows for detecting, analyzing, and blocking malware in network traffic?
Intrusion and file events are FMC monitoring views used to review what the system has detected (intrusion alerts, file downloads, malware dispositions, etc.). They help with analysis and troubleshooting, but they are not the feature that performs malware detection/blocking. The underlying enforcement comes from policies (ACP/file policy) and AMP capabilities, not from the event viewer itself.
Cisco AMP for Networks is the network-based malware protection capability in FMC/FTD. It inspects transferred files, performs cloud-based disposition lookups, can block malicious files, and supports retrospective detection if a file’s reputation changes later. This directly matches “detecting, analyzing, and blocking malware in network traffic,” which is the hallmark of AMP integrated into Firepower.
File policies are configuration objects that define which file types to inspect and what actions to take (block, allow, detect) and whether to perform malware lookup. They are essential to enable file inspection, but by themselves they are not the malware analysis feature. They rely on AMP for Networks to provide file disposition and malware intelligence.
Cisco AMP for Endpoints (now Cisco Secure Endpoint) is endpoint/host-based protection installed on servers and workstations. It provides local detection, EDR-style telemetry, and response on the endpoint, not inline inspection of network traffic through the firewall. While FMC can integrate with endpoint products, this option does not describe the FMC feature for blocking malware in transit.
Core Concept: This question tests Cisco Firepower malware defense capabilities as presented in the Firepower Management Center (FMC) web UI. In Firepower, malware detection/blocking in network traffic is provided by Cisco Advanced Malware Protection (AMP), now commonly referred to as Cisco Secure Malware Analytics/Threat Grid and Secure Endpoint integrations, but in the FMC feature set it is still widely labeled “AMP for Networks.” Why the Answer is Correct: Cisco AMP for Networks is the FMC/FTD capability that inspects files traversing the firewall/NGIPS, computes file disposition (clean/malicious/unknown) using cloud lookups, and can block or allow files based on that disposition. It also supports retrospective security: if a file initially seen as unknown is later convicted as malicious, FMC can generate events and enable response actions. This is the feature explicitly aimed at detecting, analyzing, and blocking malware in network traffic. Key Features / How It’s Implemented: In practice, you enable file inspection in an Access Control Policy (ACP) using File Policies and AMP settings. File Policies define what file types to inspect (by category/MIME), whether to block, and whether to perform malware cloud lookup. AMP for Networks provides the malware intelligence, file trajectory, and disposition-based enforcement. Best practice is to apply file inspection to high-risk protocols (HTTP, SMTP, FTP) and tune by file type to manage performance and false positives. Common Misconceptions: “Intrusion and file events” are event views (monitoring) rather than the protection feature itself. “File policies” are necessary configuration objects, but they are not the malware analysis engine; they are the rule container that leverages AMP. “AMP for Endpoints” is host-based (Secure Endpoint connector) and does not directly represent network traffic inspection within FMC. 
Exam Tips: For SNCF, map terms to layers: network-based malware control in FMC/FTD = AMP for Networks (configured via File Policy + ACP). Endpoint malware control = AMP for Endpoints/Secure Endpoint. Event pages show results, not the enforcement mechanism. When a question says “detecting, analyzing, and blocking malware in network traffic,” think AMP for Networks tied to file inspection and cloud disposition.
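The disposition-based enforcement described above can be modeled with a short sketch. This is purely illustrative: the function names, rule shape, and defaults are hypothetical, not Cisco's implementation or the FMC API; only the disposition values and the allow/block behavior mirror the AMP for Networks model described in the explanation.

```python
# Illustrative sketch of disposition-based file enforcement in a file policy.
# Dispositions (clean/malicious/unknown/unavailable) follow the AMP model;
# everything else here is a hypothetical simplification.

def file_policy_action(disposition: str, block_malware: bool = True) -> str:
    """Return the enforcement action for a file based on its cloud disposition."""
    if disposition == "malicious" and block_malware:
        return "block"   # convicted by cloud lookup -> drop the file
    # clean, unknown, and unavailable files pass (no conviction, or no verdict)
    return "allow"

def retrospective_event(first_seen: str, later_verdict: str) -> bool:
    """Retrospective security: flag a file that was initially unconvicted
    but is later determined to be malicious by the cloud."""
    return first_seen in ("unknown", "unavailable") and later_verdict == "malicious"
```

The second function captures the retrospective behavior mentioned above: a file allowed through as "unknown" can still trigger a malware event later, once the cloud convicts it.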
A network engineer is tasked with minimizing traffic interruption during peak traffic times. When the SNORT inspection engine is overwhelmed, what must be configured to alleviate this issue?
IPS inline link state propagation propagates physical/link status changes (for example, bringing an interface down) to help upstream devices detect failure and reroute. It is useful for certain failure scenarios, but it does not alleviate SNORT CPU/inspection overload. In fact, triggering link changes can cause convergence events and interruption, which conflicts with the goal of minimizing disruption during peak traffic.
Prefilter policies can reduce the amount of traffic sent to SNORT by fast-pathing certain flows (for example, trusted tunnels or traffic you choose not to deeply inspect). This can improve performance, but it is not the primary feature intended to keep traffic flowing when SNORT is already overwhelmed. The question asks what must be configured for overload continuity, which is better matched by Automatic Application Bypass.
A Trust ALL access control policy effectively permits traffic broadly and may reduce some inspection decisions, but it is not a proper mechanism for handling SNORT overload and is not a best-practice approach. It can severely weaken security posture and still may not prevent SNORT-related bottlenecks if intrusion/file/malware inspection remains enabled. It’s an unsafe and imprecise answer for the stated requirement.
Automatic Application Bypass is specifically intended to maintain availability when the SNORT inspection engine is under heavy load or cannot keep up. By temporarily bypassing certain deep inspection for eligible traffic, the firewall can continue forwarding and minimize user-visible interruption during peak periods. This directly addresses the scenario of an overwhelmed SNORT engine and aligns with high-availability/continuity goals.
Core concept: This question tests resiliency and performance controls in Cisco Firepower Threat Defense (FTD) when the SNORT inspection engine (the “analysis” path for intrusion, file, malware, and many application detections) becomes resource constrained. Under high load, deep inspection can become a bottleneck, risking packet drops, increased latency, or session resets—exactly what you want to avoid during peak traffic.
Why the answer is correct: Automatic Application Bypass (AAB) is designed to minimize traffic interruption when the SNORT process cannot keep up. When enabled, the device can temporarily bypass certain deep inspection functions for eligible flows so traffic continues to pass rather than being dropped or excessively delayed. This is a deliberate “fail-open for performance” behavior for application/inspection processing, helping maintain availability during transient overload conditions.
Key features / configuration points:
- AAB is a performance and continuity feature: it allows traffic to continue when inspection is overwhelmed, trading some visibility/security depth for uptime.
- It is most relevant for inline deployments where the firewall is in the forwarding path and packet loss would directly impact users.
- Best practice is to pair AAB with capacity planning (CPU, memory, rule optimization), targeted intrusion policies, and prefiltering to reduce unnecessary SNORT work, but the question asks what “must be configured” to alleviate interruption during overload—AAB is the explicit mechanism.
Common misconceptions:
- Prefiltering (option B) can reduce SNORT load, but it is not the specific “overwhelmed engine” continuity mechanism; it is a tuning/optimization step and may not address sudden overload.
- Inline link state propagation (option A) is about propagating link status on failure to trigger upstream rerouting; it does not directly relieve SNORT overload and can still cause interruption.
- Trust All (option C) is an insecure workaround and not a recognized best-practice control for overload; it also does not specifically address SNORT engine saturation behavior.
Exam tips: Look for keywords like “SNORT overwhelmed,” “avoid traffic interruption,” and “during peak times.” Those point to bypass/fail-open style features (AAB) rather than policy tuning or link-state behaviors. Also distinguish between reducing inspection load (prefilter, policy optimization) and maintaining forwarding when inspection cannot keep up (AAB).
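The fail-open behavior described above can be sketched in a few lines. This is a conceptual model only: the function name, return values, and threshold handling are hypothetical, not FTD's actual implementation (AAB exposes a bypass threshold in its device settings; the value used below is an assumed example).

```python
# Conceptual sketch of Automatic Application Bypass (AAB): when Snort's
# packet-processing latency exceeds a configured threshold, eligible flows
# skip deep inspection and are forwarded anyway (fail-open for performance).
# Names and values here are illustrative assumptions, not Cisco code.

BYPASS_THRESHOLD_MS = 3000  # assumed example value for the AAB bypass threshold

def forward_packet(snort_latency_ms: float, aab_enabled: bool) -> str:
    """Decide the forwarding path for a packet given current Snort latency."""
    if aab_enabled and snort_latency_ms > BYPASS_THRESHOLD_MS:
        return "forwarded-bypassed"    # overload: skip deep inspection, keep traffic flowing
    return "forwarded-inspected"       # normal path: full Snort inspection
```

The key contrast with prefiltering is visible here: prefiltering decides up front which flows never reach Snort, while AAB reacts at runtime to an already-overloaded engine.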
In a Cisco AMP for Networks deployment, which disposition is returned if the cloud cannot be reached?
“Unavailable” is the disposition returned when the device cannot obtain a verdict because the AMP cloud is not reachable. It indicates a lookup failure due to connectivity/service reachability issues (DNS, routing, proxy, firewall rules, TLS interception, or temporary cloud outage). This is the correct mapping for “cloud cannot be reached.”
“Unknown” means the cloud was reached and queried successfully, but the cloud does not have a definitive disposition for the file at that time. It is a verdict state about lack of intelligence, not a connectivity failure. Therefore it does not match the condition “cloud cannot be reached.”
“Clean” is a definitive benign verdict returned when the cloud (or local intelligence) determines the file is not malicious. It requires a successful evaluation/lookup and is not used as a fallback when the cloud is unreachable. Choosing “clean” would incorrectly imply the file is safe.
“Disconnected” is not the standard AMP for Networks disposition used to represent an unreachable cloud verdict lookup. While the word suggests connectivity problems, the disposition used for failed cloud reachability is “unavailable.” “Disconnected” may appear in other product status contexts, which can mislead test-takers.
Core concept: Cisco AMP for Networks (often delivered via Firepower/FTD as “File and Malware” inspection) queries Cisco’s cloud (Threat Grid/AMP cloud) to determine a file’s disposition (verdict). Dispositions are the verdict categories returned by the cloud or derived locally, such as clean, malicious, unknown, and states indicating the system cannot obtain a verdict.
Why the answer is correct: If the AMP cloud cannot be reached at the time the device attempts to query for a file’s disposition, the returned disposition is “unavailable.” This specifically indicates a connectivity/reachability problem to the cloud service (for example, DNS failure, routing issue, proxy/TLS inspection interference, or the cloud service being temporarily unreachable). In other words, the system is capable of doing the lookup, but the lookup cannot be completed because the cloud is not reachable.
Key features, configuration, and best practices: AMP for Networks relies on outbound connectivity from the sensor/FTD to Cisco cloud endpoints over HTTPS (and proper DNS resolution). Best practice is to verify:
- DNS and default route from the management interface (or data interface, depending on design)
- Any required proxy configuration (if used)
- Allowlisting of Cisco AMP/Threat Grid cloud FQDNs and certificate trust (avoid breaking TLS with interception)
Operationally, “unavailable” dispositions should trigger troubleshooting of cloud reachability rather than an assumption that the file is benign.
Common misconceptions: “Unknown” is often confused with “unavailable.” Unknown means the cloud was reached, but there is no known verdict for that file (no intelligence yet, or not enough data). “Disconnected” sounds like a connectivity issue, but it is not the standard disposition returned for a failed cloud lookup in AMP for Networks; it may be used in other contexts (for example, device status) but not as the disposition verdict here. “Clean” is a positive verdict and is never used to represent a failed lookup.
Exam tips: On SNCF-style questions, map each disposition to its meaning: clean/malicious = verdict known; unknown = verdict not known but lookup succeeded; unavailable = lookup could not be performed due to cloud reachability. When you see wording like “cloud cannot be reached,” immediately think “unavailable,” and then pivot to connectivity checks (DNS, routing, proxy, TLS inspection, and allowlists).
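The disposition mapping above can be sketched as a small lookup function. This is an illustrative model, not Cisco's code: the function signature and the in-memory verdict table are assumptions made for the example; only the four disposition values and their meanings come from the explanation.

```python
# Sketch of how a cloud disposition lookup degrades: a successful query
# returns clean/malicious (verdict known) or unknown (no intelligence yet),
# while a query that cannot reach the cloud yields "unavailable".
# Function and variable names are hypothetical.

def lookup_disposition(sha256: str, cloud_reachable: bool, cloud_verdicts: dict) -> str:
    """Return the AMP-style disposition for a file hash."""
    if not cloud_reachable:
        return "unavailable"                       # lookup could not be performed at all
    return cloud_verdicts.get(sha256, "unknown")   # cloud reached, but it may have no verdict

# Hypothetical cloud intelligence table keyed by SHA-256.
verdicts = {"abc123": "malicious", "def456": "clean"}
```

Note the distinction the question hinges on: "unknown" requires a successful round trip to the cloud, whereas "unavailable" means the round trip itself failed.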

