A | B |
3 parts needed to implement CNA | 1. PFC - Priority Flow Control - used for lossless (no-drop) flow control on Ethernet 2. DCB - Data Center Bridging - used for feature negotiation and exchange among devices building a unified fabric 3. FCoE Initialization Protocol (FIP) - used during FCoE initialization |
Cisco FabricPath | using NX-OS, builds scalable Layer 2 multipath networks without the Spanning Tree Protocol (STP) |
Cisco OTV | Overlay Transport Virtualization - extends layer 2 applications across distributed data centers |
Cisco FEX-Link | Cisco Fabric Extender Link - technology enables data center architects to gain new design flexibility while simplifying cabling infrastructure and management complexity - uses the Cisco VNTag architecture |
VNTag | virtual network tag provides advanced hypervisor switching as well as high-performance hardware switching - the VNTag architecture provides virtualization-aware networking and policy control |
Data Center Bridging (DCB) and FCoE | Unified Fabric provides the flexibility to run Fibre Channel, IP-based storage such as network attached storage (NAS) and Internet Small Computer System Interface (iSCSI), FCoE, or a combination of these technologies |
vPC or Virtual Port Channel | enables the deployment of a link aggregation from a generic downstream network device to individual and independent Cisco NX-OS devices (vPC peers) - provides both link redundancy and high-performance active-active link throughput scaling |
Cisco VM-FEX | extends FEX technology to the VM |
3 capabilities of VM-FEX | 1. each VM gets a dedicated interface on the parent switch 2. all VM traffic is sent directly to the dedicated interface on the switch 3. the software-based switch in the hypervisor is eliminated |
4 capabilities of UCS P81E | 1. FCoE PCI Express (PCIe) 2.0 x8 10-Gb adapter 2. designed for use with UCS C-Series 3. can support up to 128 PCIe virtual interfaces, configured so that both their interface type (NIC or HBA) and identity (MAC address and world wide name (WWN)) are established using just-in-time provisioning 4. supports network interface virtualization and VM-FEX technology |
UCS 6100 and 6200 | are fabric interconnects with Nexus 2200 support |
Nexus 4000 | blade server switch (the 4000i is the IBM version) 1. supports 1 G and 10 G auto-negotiation 2. supports FCoE 3. IEEE DCB |
3 WAAS Benefits for Data Center Environment | 1. comprehensive WAN optimization from data center to branches 2. 5x the performance with up to 2 Gbps optimized throughput 3. 3x the scale with 150,000 TCP connections |
4 main areas WAAS is focused on | 1. advanced compression 2. transport flow optimizations 3. Common Internet File System (CIFS) caching services 4. printer services |
Nexus 1000V | a VM switch - intelligent software operating inside the VMware ESX hypervisor that supports Cisco Virtual Network Link (Cisco VN-Link) |
Nexus 1000V provides these 3 services | 1. policy-based VM connectivity 2. mobile VM security and network policy 3. non-disruptive operational model for server virtualization and networking teams |
Nexus 1010 6 characteristics | Virtual Services Appliance 1. supports the 1000V VSM 2. supports the 1000V NAM VSB (virtual service blade) 3. no need for an ESX license 4. can manage Cisco Virtual Security Gateway (VSG) 5. Virtual Wide Area Application Services (vWAAS) 6. Virtual Service Blades (VSBs) |
12 Nexus 5500 switch offerings | 1. Unified Port Technology 2. High availability and high density 3. Non-blocking line-rate performance 4. Low latency 5. Single-stage fabric 6. Congestion management 7. FCoE 8. NIV architecture 9. IEEE PTP (Precision Time Protocol) 10. Cisco FabricPath and TRILL 11. Layer 3 12. Hardware I/O consolidation |
4 types of 5500 Congestion Management | 1. Virtual Output Queues (VOQs) 2. separate egress queues for unicast and multicast (8 for each) 3. lossless Ethernet with Priority Flow Control (PFC) 4. Explicit Congestion Notification (ECN) marking |
4 Nexus 5000 expansion modules | 1. Ethernet module providing six 10 G Ethernet and FCoE ports using SFP+ 2. Fibre Channel plus Ethernet module providing four 10 G Ethernet and FCoE ports using SFP+ and four ports of 4/2/1 G native Fibre Channel connectivity using SFP interfaces 3. Fibre Channel module that provides eight ports of 4/2/1 G native Fibre Channel using SFP interfaces for transparent connectivity to existing Fibre Channel networks 4. Fibre Channel module that provides six ports of 8/4/2/1 G native Fibre Channel using SFP+ interfaces for transparent connectivity with existing Fibre Channel networks |
3 7K Supervisor Features | 1. Continuous System Operation 2. Upgradeable Architecture 3. Superior Operational Efficiency |
4 parts to the continuous system operation on 7k Sup | 1. Active/Standby Operation 2. Segmented and Redundant OOB provisioning and management paths 3. Virtualization of the management plane 4. Integrated Diagnostics and Protocol Decoding with an embedded control plane packet analyzer |
4 parts to the upgradeable architecture of 7K Supervisor | 1. Fully decoupled control and data plane with no hardware forwarding on the module 2. Distributed Forwarding Architecture allowing independent upgrades of the Supervisor and the Fabric 3. Cisco Unified Fabric-ready 4. Transparent Upgrade capacity and capability, designed to support 40 and 100 G Ethernet |
2 parts to Superior Operational Efficiency | 1. system locator and beacon LEDs for simplified operations 2. dedicated OOB management for lights-out management |
Internal EOBC (Ethernet out-of-band channel) | Ethernet connectivity is provided via an onboard 24-port Ethernet switch-on-a-chip, with a 1 G link from each supervisor to each I/O module, to each switch fabric module (up to 5), and between the two supervisors - 2 G links connect to each supervisor as well |
M2-Series with XL Option Features | 1. comprehensive L2 and L3 functionality including L3 routing protocols 2. MPLS (Multiprotocol Label Switching) forwarding in hardware 3. OIR (online insertion and removal) 4. Cisco TrustSec solution, in particular Security Group Access Lists (SGACLs) and MAC Security (IEEE 802.1AE) using the Advanced Encryption Standard (AES) |
6 Nexus OS Features | 1. Flexibility and Scalability 2. Availability 3. Serviceability 4. Manageability 5. Traffic Routing, Forwarding and Management 6. Network Security |
5 Parts of Flexibility and Scalability Feature of NX-OS | 1. Software Compatibility 2. Common software throughout the Data Center 3. Modular Software Design 4. VDCs 5. Support for Cisco Nexus 2248TP GE Fabric Extender |
8 Parts of Availability for Nexus OS Features | 1. Continuous System Operation 2. Cisco In-Service Software Upgrade 3. Quick Development of Enhancements and Problem Fixes 4. Process Survivability 5. Stateful Supervisor Failover 6. Reliable interprocess communication 7. Redundant Switched EOBCs 8. network-based availability |
7 Aspects to Serviceability for Nexus OS Features | 1. Troubleshooting and Diagnostics 2. Switch Port Analyzer (SPAN) 3. Ethanalyzer 4. Smart Call Home 5. Cisco GOLD 6. Cisco IOS Embedded Event Manager (Cisco IOS EEM) 7. Cisco IOS Netflow |
7 Aspects for Manageability for Nexus OS | 1. Programmatic XML Interface 2. SNMP 3. Configuration Verification and Rollback 4. Port Profiles 5. Role-Based Access Control (RBAC) 6. Cisco Data Center Network Manager (Cisco DCNM) 7. CMP Support |
8 Aspects for Traffic routing, forwarding and management | 1. Ethernet Switching 2. Cisco Overlay Transport Virtualization (Cisco OTV) 3. Ethernet Enhancement 4. Cisco FabricPath 5. IP Routing 6. IP Multicast 7. QoS 8. Traffic Redirection |
12 Aspects for the Network Security for Nexus OS | 1. Cisco TrustSec solution 2. data path intrusion detection system (IDS) for protocol conformance checks 3. Control Plane Policing (CoPP) 4. Dynamic ARP Inspection 5. DHCP snooping 6. IP Source Guard 7. authentication, authorization and accounting (AAA) and TACACS+ 8. SSHv2 9. SNMPv3 10. port security 11. IEEE 802.1X authentication and RADIUS support 12. Layer 2 Cisco Network Admission Control (NAC) LAN Port IP |
3 Critical Core Infrastructure Services to Provide overall High Availability | 1. System Manager 2. Persistent Storage Service (PSS) 3. Message and transaction Service |
4 NX-OS Extensibility Features | 1. vPC - supports load balancing and redundancy 2. OTV - allows MAC-transparent LAN extension to other enterprise sites 3. Cisco FabricPath - implements an IS-IS-based solution in the Layer 2 domain, removing the need for STP and offering load balancing over parallel paths 4. LISP - mobile access based on mapping a local address to a globally reachable address |
3 NX-OS security features | 1. TrustSec link layer cryptography 2. classic L2 and L3 features 3. TrustSec |
TrustSec Data Link Layer Cryptography | encrypts packets on egress and decrypts on ingress |
10 Parts to L2 and L3 Security | 1. uRPF 2. Packet Sanity Checks 3. DHCP snooping 4. DAI 5. IP Source Guard 6. Port Security 7. Control Plane Protection 8. CoPP 9. Control and Data Plane Separation 10. Authenticated Control Protocols |
3 Efficiency Mechanisms of NX-OS | 1. Cisco NPV 2. Cisco Fabric Extender Link (FEX-Link) 3. Built in Protocol Analyzer |
3 Virtualization Features | 1. VDCs 2. FEXs 3. MPLS vrf (only on 7K) |
5 Layer 2 High Availability Nexus | 1. STP and its enhancements - BPDU guard, loop guard, root guard, BPDU filters and Bridge Assurance 2. UDLD 3. LACP (802.3ad) 4. vPC 5. Cisco FabricPath - provides ECMP and path convergence without the use of STP - requires a separate license for Cisco FabricPath on F-Series modules |
3 Layer 2/3 High Availability | 1. HSRP 2. VRRP 3. GLBP |
3 Layer 3 High Availability | 1. BFD (bidirectional forwarding detection) 2. graceful restart, providing NSF 3. SPF optimizations such as LSA pacing and incremental SPF |
5 Layer 3 Routing Protocols Supported by Graceful Restart Extensions | 1. OSPFv2 2. OSPFv3 3. IS-IS 4. EIGRP 5. BGP |
IS-IS Provides these benefits to Fabricpath | 1. no ip dependency 2. easily extensible 3. provides SPF routing |
VRRP | 1. defined in RFC 2338, updated by RFC 3768 (2004) 2. the VRRP virtual router representing a group of routers is known as a VRRP group 3. the active router is referred to as the master virtual router 4. the master virtual router may have the same IP as the virtual router group 5. multiple routers can function as backup routers 6. multicast 224.0.0.18 7. VRRP can track objects but cannot directly track interface status 8. default timers are shorter for VRRP 9. VRRP supported authentication in RFC 2338 but not in the current RFC 3768 |
3 GLBP functions | 1. Active Virtual Gateway (AVG) - members of a GLBP group elect one gateway to be the AVG for the group - the AVG assigns a VMAC to each member of the GLBP group 2. Active Virtual Forwarder (AVF) - each gateway assumes responsibility for forwarding packets sent to the VMAC address that is assigned to that specific gateway by the AVG 3. GLBP communication - members communicate with each other through hello messages sent every 3 seconds to 224.0.0.102, UDP port 3222 |
4 Features of GLBP | 1. Load Sharing 2. Multiple Virtual Routers 3. Pre-emption 4. Efficient resource utilization |
NSF or Graceful Restart 3 restart scenarios | 1. stateless restart 2. graceful restart on switchover 3. graceful restart on routing process failure |
4 Traits of High Availability Switchover | 1. it is stateful (nondisruptive) control traffic is not affected 2. does not disrupt data traffic because the switching modules are not affected 3. switching modules are not reset 4. does not reload CMP |
command to ensure the system is ready to accept a SWITCHOVER | show system redundancy status |
Features of 7K power supply | 1. multiple inputs providing redundancy if one fails 2. universal input flexibility 3. compatibility with future 7K chassis 4. hot swappable 5. temperature sensors and instruments that shut down the supply when thresholds are reached 6. internal fault monitoring so the supply shuts down in the event of a short circuit 7. ability to power cycle remotely using the CLI 8. real-time power draw showing actual power consumption 9. variable fan speed allowing reduction in fan speed for lower power usage in well-controlled environments |
7K Power redundancy | 1. combined - no redundancy 2. power supply redundancy (N+1) - guards against the failure of one of the power supplies - power available to the system is the sum of the two least-rated power supplies 3. input source redundancy (grid redundancy) - guards against the failure of one input circuit (grid) - each input on the power supply is attached to an independent AC feed, and power available to the system is the minimum power from either of the input sources (grids) 4. power supply and input source redundancy (complete redundancy) - the system default - guards against failure of either one power supply or one AC grid - the power available is always the minimum of input source and power supply redundancy |
4 Ways the CMP delivers remote control | 1. a dedicated processor 2. its own memory 3. its own bootflash 4. a separate Ethernet management port |
8 Features of CMP | 1. dedicated OS 2. Monitoring of Supervisor status and initiation of resets 3. system resets while retaining out of band ethernet connectivity 4. capability to initiate a complete system power shutdown and restart 5. login authentication 6. access to supervisor logs 7. control capability 8. dedicated front panel LEDs |
3 Availability Management Tools | 1. Cisco Generic Online Diagnostics (GOLD) 2. Cisco IOS Embedded Event Manager (EEM) 3. Smart Call Home |
2 parts to Cisco GOLD | 1. facilitates stateful failover on detection of critical errors, service restart errors, kernel errors or hardware failure 2. subsystem and additional monitoring processes on the supervisor |
4 Parts to Cisco IOS Embedded Event Manager (EEM) | 1. consists of event detectors, the event manager and an event manager policy engine 2. takes specific actions when the system software recognizes certain events through the event detectors 3. set of tools to automate many network management tasks 4. can improve availability, event collection and notification |
3 Parts to Cisco Smart Call Home | 1. combines Cisco GOLD and Cisco EEM capabilities 2. provides email based notification of critical system events 3. method variety - pager, email, XML and direct case to Cisco TAC |
Cisco ISSU performs these 6 tasks | 1. verifies the location and integrity of the new software image files 2. verifies the operational status and current software versions of both supervisors and all switching modules to ensure that the system is capable of an ISSU 3. forces a supervisor switchover 4. brings up the originally active supervisor with the new image 5. performs a non-disruptive upgrade of each switching module, one at a time 6. upgrades the CMP |
5 Virtualization Types of VDC | 1. control plane 2. data or forwarding plane 3. management plane 4. software partitioning - modular software can be grouped in partitions 5. hardware components |
3 VDC Deployment Scenarios | 1. split-core or dual-core topology - good for migrations - used to split two data centers 2. multiple aggregation blocks - separate by business unit or function 3. service insertion - VRFs are used to create an L3 hop that separates servers in the access network from services in the service chain as well as the aggregation layer |
4 VDC Critical Roles | 1. systemwide parameters - CoPP policy - VDC resource allocation - NTP 2. licensing of the switch for software features 3. software installation occurs here - all VDCs must run the same software version 4. reloads of the entire switch |
3 Types of Resource Allocation | 1. Global Resources - set used or configured globally for all VDCs - boot image, switch name, NTP Servers, CoPP configuration, SPAN sessions 2. Dedicated Resources are allocated to a particular VDC such as physical switch ports 3. shared resources - resources shared between VDCs such as OOB management port |
4 RBAC Roles | 1. Network-admin - complete control over default VDC 2. Network-operator - read-only rights 3. vdc-admin - complete control over THAT VDC 4. vdc-operator - read only of that VDC |
3 steps to creating a new VDC | 1. vdc RED - creates the VDC 2. allocate interface e2/1 - puts the interface in that VDC 3. switchto vdc RED - switches you to that VDC |
3 Verification or Show commands for VDC | 1. show vdc [detail] (in admin) shows all VDCs 2. show vdc membership 3. show vdc resource [detail] |
vdc limit configuration | 1. vdc red 2. limit-resource vlan minimum 32 maximum 4094 - NOTE: F1 and M1 only set in this way |
vdc resource template configuration | 1. vdc resource template PRODUCTION 2. limit-resource vlan minimum 32 maximum 256 - limit-resource vrf minimum 32 maximum 64 |
applying vdc resource template | 1. vdc red 2. template PRODUCTION |
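Example: creating and resourcing a VDC | a minimal sketch combining the preceding VDC cards; the names (RED, PRODUCTION) and interface e2/1 come from those cards, other values are illustrative:
  vdc RED
    allocate interface ethernet 2/1
  vdc resource template PRODUCTION
    limit-resource vlan minimum 32 maximum 256
    limit-resource vrf minimum 32 maximum 64
  vdc RED
    template PRODUCTION
  switchto vdc RED  (exec mode, not config mode) |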
Single Supervisor VDC HA Config | 1. Bringdown - put VDC in failed state - to recover you must reload the VDC or physical device 2. Reload - reloads the supervisor module 3. Restart - deletes the VDC and recreates it by using startup config |
Dual Supervisor VDC HA Config | 1. Bringdown - put VDC in failed state - to recover you must reload the VDC or physical device 2. Switchover - initiates supervisor module switchover 3. Restart - deletes the VDC and recreates it by using the startup config |
Types of Interfaces in 7K | 1. Physical - Ethernet (10/100/1000/10G) 2. Logical - PC, Loopback, Null, SVI, tunnel, subinterface 3. In-Band - Sup-eth0, Sup-core0 4. Management - management, Connectivity Management Processor (CMP) |
Dedicated Port | the first port in the port group is dedicated; the other ports are disabled |
Shared | all ports in the port group share the 10 G of throughput |
udld config | 1. feature udld 2. udld [enable|aggressive|disable] |
port profile verification | 1. show port-profile name WEB-SERVERS 2. show port-profile expand-interface 3. show port-profile usage |
CNA | Converged Network Adapter - Converges LAN and SAN networks |
3 Types of FEX Configs | 1. Straight-through using static pinning 2. straight-through with dynamic pinning 3. Active-Active |
Static Pinning | each downlink server port on the FEX is statically pinned to one of the uplinks between the FEX and the switch - traffic to and from the switch always uses that uplink |
Dynamic Pinning | a port channel is used and the uplink is determined on the fly - the only FEX mode supported on the 7K |
Active-Active using vPC | the FEX is dual-homed to 2 Nexus switches - vPC is used on the links between the FEX and the switches |
fex verification command | show fex 111 |
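Example: FEX straight-through with static pinning | a hedged sketch on a Nexus 5500; FEX number 111 matches the verification card above, the interface range and max-links value are illustrative:
  feature fex
  fex 111
    pinning max-links 4
  interface ethernet 1/1-4
    switchport mode fex-fabric
    fex associate 111 |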
fex configuration options | 1. L2 access interface 2. L2 Trunk Interface 3. L3 interface 4. L3 Subinterface |
what command enables bridge assurance | 1. it is on by default 2. spanning-tree port type network |
port channel load balancing config | 1. port-channel load-balance source-dest-port 2. port-channel load-balance source-dest-ip-port-vlan module 4 |
verifying port channel load balancing | show port-channel load-balance |
command to allow L3 vPC peers to allow arp synchronization | ip arp synchronize |
vPC domain configuration | 1. feature vpc 2. vpc domain <domain-id> - must be unique in a contiguous L2 domain (1-1000) |
2 vpc verification commands | 1. show vpc role 2. show vpc peer-keepalive |
vpc peer keepalive configuration | peer-keepalive destination <ip-address> source <ip-address> vrf <name> (the management VRF is the default) |
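Example: basic vPC configuration | a minimal sketch for one vPC peer, mirrored on the other peer; the domain ID, keepalive IPs and port-channel numbers are illustrative:
  feature vpc
  vpc domain 10
    peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
  interface port-channel 10
    switchport mode trunk
    vpc peer-link
  interface port-channel 20
    vpc 20 |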
enhanced vpc supported topologies | 1. single home to single fex 2. dual-home server connected by a port-channel to single fex 3. dual-homed server that is connected by a port channel to a pair of FEXs. This topology allows connection to any 2 FEXs that are connected to the same pair of switches in the vPC domain - static port channel and LACP based port channel are both supported 4. dual-homed server that is connected by active/standby NIC teaming to pair of FEXs |
non-recommended VPC topologies | 1. dual homed server that is connected to a pair of FEX that connect to single switch 2. multihomed server connected by a port channel to more than 2 FEXs |
4 steps to enhanced vpc configuration | 1. enable and configure vPC on both switches - domain - peer keepalive link - peer link 2. configure port channels from the first FEX - fex fabric mode on ports connecting to the FEX - vPC number - associate ports with the channel group (see the sketch below) 3. configure port channels from the 2nd FEX (as above) 4. configure a host port channel on each FEX |
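Example: FEX fabric port channel for enhanced vPC | a hedged sketch of steps 2-3 above on one switch; the same configuration is applied on the vPC peer switch, and all interface, FEX and vPC numbers are illustrative:
  interface ethernet 1/10-11
    channel-group 101
  interface port-channel 101
    switchport mode fex-fabric
    fex associate 101
    vpc 101 |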
5 main benefits of IS-IS | 1. replaces STP as the control-plane protocol - link-state protocol with support for Layer 2 ECMP 2. exchanges reachability of switch IDs and builds forwarding trees using SPF routing 3. no IP dependency - no need for IP reachability to form adjacencies 4. easily extensible - custom TLVs can exchange various information 5. minimal IS-IS knowledge required - no user configuration - maintains the plug-and-play nature of Layer 2 |
IS-IS Extensions for Cisco FabricPath | 1. Cisco FabricPath has a single IS-IS area with no hierarchical L1/L2 routing as prescribed within the IS-IS standard - all devices within Cisco FabricPath are in a single L1 area 2. the system uses a MAC address different from the MAC address used for L3 IS-IS 3. the system adds a new sub-TLV that carries switch ID information, which is not standard IS-IS - this feature allows L2 information to be exchanged through the existing IS-IS implementation 4. within each FabricPath IS-IS instance, each device computes its shortest path to every other device using SPF and populates up to 16 ECMP links 5. FabricPath IS-IS introduces modifications to support broadcast and multicast trees (identified by Forwarding Tags, or FTags) - the system constructs 2 loop-free trees for forwarding multidestination traffic |
Spine and Leaf relationship in FabricPath | 1. edge (or leaf) devices - have ports connected to classic Ethernet devices and ports connected to the FabricPath cloud (FabricPath switches) - edge devices map a MAC address to a destination switch ID 2. spine devices - exclusively interconnect edge devices - spine devices switch exclusively based on the destination switch ID |
4 MAC Learning Rules | 1. for Ethernet frames received from a directly connected access or trunk port, the switch unconditionally learns the source MAC address as a local MAC entry, like a normal switch 2. for unicast frames received with Cisco FabricPath encapsulation, the switch learns the source MAC address as a remote MAC address entry only if the destination MAC address matches an already learned local MAC entry - it only learns bidirectional unicast - unknown unicast frames may not trigger learning on edge switches 3. broadcasts do not trigger learning on edge switches - however, broadcasts are used to update existing MAC address entries in the table 4. multicast frames trigger learning on edge switches, since several critical LAN protocols use them |
8 Steps to FabricPath Configuration (last 2 are optional) | 1. install the Cisco FabricPath feature set in the default VDC - needs the Enhanced Layer 2 license 2. enable the feature set in any VDC, including the default 3. set the FabricPath switch ID 4. configure STP priorities on FabricPath edge devices - set to a lower value so they become root for the L2 network - priorities should match - recommended value is 8192 5. configure FabricPath interfaces - on the 7K, FP and CE edge ports must be on an F module 6. define FabricPath VLANs 7. configure a virtual switch ID for vPC+ (optional) 8. tune load balancing hash functions (optional) |
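Example: FabricPath configuration | a minimal sketch of steps 1-6 above; the switch ID, VLAN range and interface are illustrative (Enhanced Layer 2 license assumed):
  install feature-set fabricpath
  feature-set fabricpath
  fabricpath switch-id 11
  spanning-tree vlan 10-20 priority 8192
  vlan 10-20
    mode fabricpath
  interface ethernet 1/1
    switchport mode fabricpath |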
4 Steps to FabricPath Verification | 1. Verify basic FabricPath Parameters - FabricPath feature set - component services - fabricpath switch ID - FabricPath VLANs 2. Examine FabricPath MAC address table 3. View FabricPath routing table - FabricPath IS-IS routes - FabricPath routes 4. Verify VPC+ - MAC address table - FabricPath routing |
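Example: FabricPath verification commands | show commands matching the verification steps above; a hedged sketch, not an exhaustive list:
  show feature-set
  show fabricpath switch-id
  show mac address-table dynamic
  show fabricpath isis adjacency
  show fabricpath route |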
5 traits Nexus 5500 and 7000 Routing Protocol Support | 1. the 5500 supports all except IS-IS, which the 7K supports 2. graceful restart is available and enabled by default for OSPF, EIGRP, IS-IS and BGP 3. routing process support for both IPv4 and IPv6 4. you should not have an external device form an adjacency with a 7K or 5K over the vPC peer link 5. BFD for fast convergence for OSPF, EIGRP, IS-IS and BGP |
Enabling Routing Protocols and Licensing for 7K and 5K | 1. 7K - no license: RIPv2 only - Enterprise Services License: unlimited L3 2. 5500 - Layer 3 Base (included): connected, static, RIPv2, OSPF (restricted), EIGRP stub, HSRP, VRRP, IGMPv2/3, PIMv2, RACLs and uRPF - Unlimited Layer 3: all base plus full EIGRP, unrestricted OSPF, BGP and VRF-lite |
5 traits of BFD | Bidirectional Forwarding Detection 1. uses frequent link hellos 2. provides fast, reliable detection of link failure 3. useful for link failures not detectable through L1 4. BFD can be tied to L3 protocols - BGP, OSPF, EIGRP, IS-IS, HSRP and PIM 5. more efficient than hellos in the individual protocols |
2 Traits of BFD on 7K | 1. runs in a distributed manner 2. offloads BFD processing to the CPUs on the I/O modules |
4 BFD configuration steps | 1. disable the identical-address IDS check - allows the switch to accept BFD echo packets, which use the local IP address as both source and destination 2. enable the BFD feature 3. disable ICMP redirects on any interfaces that use BFD 4. enable BFD for the required L3 protocol - OSPF, EIGRP, all HSRP groups on an interface (see the sketch below) |
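Example: BFD for OSPF | a minimal sketch of the four steps above; the OSPF tag and interface are illustrative, and the identical-address check command is the Nexus 7000 form:
  no hardware ip verify address identical
  feature bfd
  interface ethernet 1/1
    no ip redirects
  router ospf 1
    bfd |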
VRF | Virtual Routing and Forwarding is a L3 virtualization mechanism - virtualizes IP routing control and data plane functions and separates logical entities inside a router or L3 switch - is used to build L3 VPNs |
VRF consists of the following 4 items | 1. Subset of router interfaces 2. a routing table or RIB 3. Associated forwarding data structures or FIB 4. associated routing protocol instances |
Nexus 5500 and VRF | supports VRF-lite as of release 5.0(3)N1(1B) - requires the L3 LAN Enterprise License - VRF interfaces can be physical or logical |
Nexus 7000 and VRF | supports full VRF functionality - Enterprise Services License - can run MPLS on it, for example |
4 Steps VRF configuration procedure (2 optional) | 1. create the VRF - in the default or a nondefault VDC (Cisco 7K) - VRFs in different VDCs are completely independent 2. assign an L3 interface to the VRF 3. configure VRF static routes (optional) in VRF config mode 4. enable a routing process for the VRF (optional) - associate the VRF with a routing protocol |
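Example: VRF configuration | a minimal sketch of the four steps above; the VRF name, addresses and OSPF tag are illustrative:
  vrf context RED
    ip route 0.0.0.0/0 10.1.1.1
  interface ethernet 2/1
    vrf member RED
    ip address 10.1.1.2/24
  router ospf 1
    vrf RED |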
Policy Based Routing (PBR) | 1. Normal unicast routing is destination-based 2. PBR allows routing decisions to be based on different characteristics of the packets such as - source ip address - tcp or udp port numbers - packet length 3. PBR can be used to implement routing policies 4. available on Cisco Nexus 5500 and 7000 switches |
IPv6 and Nexus Switches | 1. data plane - distributed forwarding of IPv6 packets through the forwarding engines on I/O modules, including access list and QoS processing 2. control plane - support for static routing, OSPFv3, EIGRP, BGP and PBR for IPv6, including VRF support 3. management plane - support for IPv6-based management services such as SSH, syslog, SNMP, AAA and NetFlow |
5 parts to IP multicast on Nexus | 1. IP multicast distributes data from multicast sources to a group of multicast receivers 2. sources are unaware of the client population 3. clients announce their interest in multicast groups to first-hop routers using IGMP (IPv4) or MLD (IPv6) 4. routers build a distribution tree to deliver the multicast data from the source to the receivers 5. switches can snoop IGMP/MLD messages to optimize L2 forwarding of IP multicast traffic |
Multicast Group Membership | 1. IGMPv2/MLDv1 2. IGMPv3/MLDv2 |
multicast intradomain routing | 1. PIM Sparse Mode 2. PIM BiDIR 3. PIM SSM |
multicast interdomain routing | 1. MSDP 2. MBGP (7000 only) |
Multicast Licensing for 5500 and 7K | 1. 5500 Layer 3 Base License 2. 7K Enterprise Services Licenses |
4 7K Hardware Considerations | 1. the F-Series module is L2 only 2. F-Series requires an M-Series module in the same VDC 3. packets entering through an F-Series module are automatically forwarded to an M-Series module 4. interfaces on M-Series modules perform egress replication for L3 multicast packets |
VPC and multicast | VPC supports only Any Source Multicast (ASM) PIM |
4 PIM and PIM6 Scenarios | 1. PIM and PIM6 with static RP - ASM with manually configured RP 2. PIM and PIM6 bootstrap router - ASM with dynamically distributed RP address using BSR mechanism - standards based 3. PIM with auto-RP (IPv4 only) - ASM with dynamically distributed RP address using auto-RP - Cisco Proprietary 4. Source Specific Multicast (SSM) - SSM groups do not use RP |
In all scenarios for Multicast consider these 4 things | 1. ensure the required licenses are installed - 5500: L3 Base, 7K: Enterprise Services 2. enable PIM sparse mode on all IP interfaces with connected receivers 3. on IP interfaces with connected sources 4. on IP interfaces between L3 devices |
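Example: PIM sparse mode with static RP | a minimal sketch for the static-RP scenario; the RP address and interface are illustrative:
  feature pim
  ip pim rp-address 10.0.0.100 group-list 224.0.0.0/4
  interface ethernet 1/1
    ip pim sparse-mode |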
PIM Bootstrap Router (BSR) 4 traits | 1. single BSR elected out of multiple candidate BSRs 2. Candidate RPs send candidacy announcements to BSR - unicast transport - BSR stores all candidate-RP announcements in RP set 3. BSR periodically sends BSR messages to all routers - with entire RP set and IP address of BSR - flooded hop by hop throughout the network 4. all routers select the RP from the RP set |
Auto-RP 4 traits | 1. IPv4-only mechanism 2. candidate RPs advertise themselves using announcements 3. mapping agents act as relays - receive RP announcements - store the group-to-RP mapping in a cache - elect the highest C-RP address as RP for a group range - advertise RP discovery messages 4. all Cisco routers join the discovery group and receive discovery messages |
Source Specific Multicast 4 traits | 1. solution for well-known sources 2. also referred to as Single Source Multicast 3. immediate shortest path from source to receivers - the last-hop router sends an (S,G) join directly to the source - the first-hop router responds to receiver-initiated join requests - no shared tree 4. typically combined with IGMPv3/MLDv2 |
IGMP Snooping 3 traits | 1. the switch examines the content of IGMP messages to determine which ports need to receive multicast traffic for the group - examines IGMP membership reports to determine interested receivers - examines IGMP leave messages to remove receivers 2. the switch forwards multicast traffic more efficiently at Layer 2 3. does not require L3 multicast routing on the switch - also supported on the Nexus 5000 |
Distributed Data Center 4 Goals | 1. Seamless workload mobility between multiple data centers 2. distributed applications closer to end users 3. pool and maximize global compute resources 4. ensure business continuity with workload mobility and distributed deployments |
3 Traditional Data Center Infrastructure Solutions | 1. Ethernet over MPLS (EoMPLS) 2. Virtual Private LAN Services (VPLS) 3. Dark Fiber |
4 Limitations of Traditional Data Center Solutions | 1. complex deployment and management 2. transport dependency 3. inefficient use of bandwidth 4. a failure in one can affect other data centers |
Overlay Transport Virtualization (OTV) | definition: a MAC-in-IP method that extends L2 connectivity across a transport network infrastructure 1. Overlay - a technique independent of the infrastructure technology and services 2. Transport - transporting services for L2 Ethernet and IP traffic 3. provides virtual stateless multi-access connections |
2 Benefits of OTV over Traditional DCI | 1. Dynamic encapsulation 2. Protocol Learning |
4 Parts to the Dynamic Encapsulation benefit of OTV over traditional DCI | 1. no pseudowire maintenance 2. optimal multicast replication 3. multipoint connectivity 4. point-to-cloud model |
4 Parts to Protocol Learning benefit of OTV over traditional DCI | 1. preserve failure boundary 2. built-in loop prevention 3. automated multihoming 4. site independent |
OTV Edge Device | Encapsulation/Decapsulation between L2 and OTV and all OTV functions |
OTV internal interfaces | connects to the VLANS that are to be extended |
Join Interfaces | joins an overlay network |
Overlay Interface | encapsulates L2 frames in IP packets |
OTV and STP | OTV is transparent to each site's STP 1. each site maintains its own STP topology 2. an OTV edge device only sends and receives BPDUs on internal interfaces 3. this mechanism is built into OTV and requires no additional config |
OTV and SVI | OTV and a Layer 3 VLAN interface (SVI) cannot be configured for the same VLAN - solutions: 1. configure OTV in a separate VDC (recommended) 2. move the SVI to another device (increases complexity) |
AED Tasks | 1. Forwarding L2 unicast, multicast, and broadcast traffic between the site and overlay and vice versa 2. advertising MAC address reachability information to remote edge devices |
4 steps to configuring OTV | 1. configure OTV join interface - IP routing and IGMP 2. configure internal interfaces - L2 configuration 3. enable OTV and configure site VLAN 4. Configure the Overlay Interface |
Cisco OTV join interface | a routed interface that connects to the Layer 3 transport network - the link must be point-to-point - IGMPv3 needs to be configured on the join interface |
internal OTV interface | are regular L2 interfaces |
otv site-vlan [vlan-id#] | command defines the OTV site VLAN, used between Cisco OTV edge devices in the same site |
otv site-identifier [id] | command configures the site identifier - same site identifier should be configured on all local Cisco OTV edge devices - site id should be different across different sites |
7K License Needed for OTV | Transport Services |
join interface | interface used to join the overlay - the join interface must connect to the transport network - there can be only one join interface per overlay, but you can use a port channel |
multicast control and data groups | 1. control group - ASM multicast group used to communicate with remote Cisco OTV edge devices - should be the same on all edge devices participating in the overlay 2. data groups - SSM multicast groups used to encapsulate multicast data from the site and transport it across the overlay to the remote edge devices |
extended vlans | these are vlans extended across the overlays |
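Example: OTV overlay configuration | a minimal sketch tying the preceding OTV cards together; all addresses, VLANs and interface numbers are illustrative (Transport Services license assumed, per the card above):
  feature otv
  otv site-vlan 99
  otv site-identifier 0x1
  interface ethernet 1/1
    ip address 192.168.1.1/24
    ip igmp version 3
  interface overlay 1
    otv join-interface ethernet 1/1
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/28
    otv extend-vlan 100-110
    no shutdown |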
3 MPLS VPNs | 1. allows separation of customers/departments into VPNs 2. similar to virtual circuits in the Frame Relay world 3. allows L2 or L3 VPNs |
3 Functional areas of the architecture for network virtualization | 1. access control - authenticating, authorizing and gaining access to network and server resources 2. transport - carrying traffic over isolated paths from access VLANs and resources to the service edge 3. service edge - providing isolated application environments and access to isolated and shared services |
6 Critical Factors compelling business to consider virtualization | 1. cost reductions and consolidation 2. simpler operation, administration and maintenance (OAM) 3. end-to-end compliance and security 4. high availability 5. service provisioning 6. data center interconnect connecting L2 and L3 across data centers |
MPLS and LSR | Label Switch Router a device that forwards packets primarily based on labels |
MPLS and Edge LSR | a device that primarily labels packets or forwards IP packets out of an MPLS domain |
LDP definition | Label Distribution Protocol - runs in the router control plane and is responsible for label allocation, distribution and storage - responsible for advertisement and redistribution of MPLS labels between MPLS routers |
LSP definition | label-switched path - LSPs are unidirectional that means that return traffic uses different LSPs |
FIB definition | forwarding information base consists of destination networks, next hops, outgoing interfaces and pointers to L2 addresses |
LFIB definition | label forwarding information base - contains incoming (locally assigned) labels and outgoing labels (received from the next hop) - LDP is responsible for exchanging labels and storing them in the LFIB |
FIB is populated by | 1. Routing table which is populated by routing protocol 2. MPLS label is added to the FIB by LDP |
LFIB populated by | LDP |
MPLS Layer 3 VPNs | 1. customers connect to the service provider via IP 2. the service provider uses MPLS to forward packets between edge routers 3. the service provider enables any-to-any connectivity between sites belonging to the same VPN 4. the service provider uses virtual routers to isolate routing information 5. customers can use any addressing inside their VPN |
VRF and MPLS | 1. representation of a VPN customer inside the MPLS network 2. each VPN is associated with at least one VRF 3. VRF configured on each PE and associated with PE-CE interface(s) 4. VRF-aware routing protocols (static, RIP, BGP, EIGRP, IS-IS, OSPF) 5. no changes needed at the CE |
The VRF table data structure encompasses the following | 1. an IP routing table identical in function to the global IP routing table in Cisco IOS software 2. a Cisco Express Forwarding table identical in function to the global Cisco Express Forwarding table (FIB) 3. specifications for the routing protocols running inside the VRF instance |
Other MPLS VPN attributes associated with a VRF table are as follows | 1. Route Distinguisher (RD) which is prepended (for example, RD + IP address) to all routes exported from the VRF into the global VPNv4 (also called VPN IPv4) BGP table 2. a set of export route targets RTs attached to any route exported from the VRF 3. set of import RTs, which is used to select VPNv4 routes that are to be imported into VRF |
MP-BGP 2 labels | 1. top label or outer label that points to the egress router 2. Inner label that identifies the egress VRF |
MPLS L3 VPN in Data Center 5 traits | 1. Scalable segmentation for IPv4 and V6 traffic 2. simpler security management 3. Dedicated paths between Data center locations 4. consolidation and access to shared services 5. predictable QoS Provisioning through the network |
4 benefits of using LISP | 1. reduction of the BGP table 2. efficient multihoming 3. ease of renumbering 4. mobility |
LISP Header | outer IPv4 or Ipv6 header |
ITR | ingress tunnel router - router that accepts IP packets and sends LISP encapsulated packets |
ETR | egress tunnel router - receives LISP packet - de-encapsulates and delivers local EID at the site |
PROXY ITR/ETR | ITR/ETR between lisp and non-lisp sites |
Map Resolver (MR) | receives map request from ITR and forwards to ALT topology |
MS (Map Server) | receives map requests via ALT and encapsulates map requests to registered ETRs |
ALT | Alternate Topology - advertises EID prefixes |
EID | endpoint identifier - used in the source or destination address fields in the inner LISP header of the packet |
RLOC | routing locator - the IPv4 or IPv6 address of the customer router that de-encapsulates the LISP packet leaving the network - considered the location address of the ETR to which the packet is destined |
PI | Provider Independent address - an address block assigned by a registry independently of any service provider |
PA | Provider Assigned address - a block of IP addresses assigned to a site by each service provider to which the site is connected - typically aggregated in the routing system; LISP uses only aggregatable addresses for RLOCs |
4 QoS traits on Nexus switch | 1. the goal is a desirable flow of traffic through the network 2. QoS features are enforced using QoS policies 3. QoS policies are defined and applied using the MQC method 4. QoS policies are applied on interfaces in the inbound or outbound direction |
4 goals of QoS | 1. bandwidth 2. delay 3. Jitter 4. packet loss |
Networks must provide | 1. enough bandwidth for critical apps - some networks must guarantee it 2. low and constant delay for delay-sensitive applications - important delay-sensitive packets must be sent first, before best-effort traffic 3. low or no packet loss - some applications can adapt to packet loss by slowing the transmit rate, some do not care, and some require no packet loss |
5 QoS Features in NX-OS software | 1. classification 2. marking - marks based on classification, metering or both 3. mutation - changing of packet header QoS fields on incoming or outgoing packets 4. policing - used to enforce a rate limit by dropping or marking down packets 5. queuing and scheduling - queuing and congestion mechanisms allow control of the bandwidth allocated to traffic classes |
5 Sequences of QoS actions on inbound | 1. queuing and scheduling 2. mutation 3. classification 4. Marking 5. Policing |
5 Sequences of QoS actions on outbound | 1. classification 2. marking 3. policing 4. mutation 5. queuing and scheduling |
MQC and Nexus 3 benefits | 1. great scalability of this method 2. uniform CLI 3. separates the classification engine from the policy |
3 Steps to configure QoS using MQC | 1. define traffic classes using class-maps 2. define policies for traffic classes using policy-maps 3. apply service policy on interface (inbound or outbound) using service-policy command |
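Example: QoS with MQC | a minimal sketch of the three steps above; the class name, DSCP value and qos-group are illustrative:
  class-map type qos match-any CRITICAL
    match dscp 46
  policy-map type qos MARK-IN
    class CRITICAL
      set qos-group 4
  interface ethernet 1/1
    service-policy type qos input MARK-IN |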
MQC 3 command types defining classes and policies | 1. class-map - defines a class of traffic based on match criteria 2. table-map - maps one set of packet field values to another set of field values 3. policy-map - defines the set of policies to be applied to class-maps |
Class-map and policy-map 3 object types | 1. network-qos - defines CoS properties across switches and VDCs 2. qos - used for marking, mutation, ingress port trust state and policing 3. queuing - used for queuing, scheduling and shaping |
3 parts to Classification | 1. identifying and categorizing traffic into different classes 2. without classification all packets are treated the same 3. should be performed close to traffic edge |
3 parts to Marking | 1. "coloring" packets using different descriptors 2. easily distinguish the marked packet belonging to specific class 3. commonly used markers - CoS, DSCP, QoS group |
8 Levels of Classification (3 bits) | 1. 7 - network control 2. 6 - internetwork control 3. 5 - critical 4. 4 - flash override 5. 3 - flash 6. 2 - immediate 7. 1 - priority 8. 0 - routine |
5 Parts to Policing | 1. incoming and outgoing directions 2. out-of-profile packets are dropped 3. dropping causes TCP retransmits 4. supports packet marking or re-marking 5. less buffer usage (shaping requires an additional shaping queuing system) |
5 parts to Shaping | 1. Outgoing direction only 2. out-of-profile packets are queued until buffer is full 3. buffering minimizes TCP retransmits 4. marking or re-marking not supported 5. shaping supports interaction with Frame Relay congestion indication |
2 Classes of QoS tools used to manage the BW and load | 1. congestion management - prioritizing certain traffic on the interface 2. congestion avoidance - monitoring network traffic loads and drop packets in order to avoid congestion |
RED | Random Early Detection - mechanism that randomly drops packets before the queue is full - increases drop rate as the average queue size increases |
RED 3 Results | 1. TCP sessions slow down to the approximate rate of the output-link bandwidth 2. Average queue size is small (much less than the max queue size) 3. TCP sessions are desynchronized by random drops |
RED Modes | 1. No drop - when the average queue size is between 0 and minimum threshold 2. random drop - when the average queue size is between the minimum and maximum threshold 3. tail drop - when the average queue size is at the max threshold or above |
WRED | Weighted Random Early detection - drops less important packets more aggressively than important packets - each profile is identified by minimum and max threshold |
3 Congestion Management Techniques | 1. bandwidth - allocate part of the interface BW to queues - a queue is reserved for each class, with minimum BW allocated to the class during congestion 2. priority - allows delay-sensitive data to be dequeued and sent first - a strict priority queue is configured - only one level of priority on an egress policy queue 3. shaping - configured on an egress queue to limit traffic to a maximum rate - queues packets as opposed to policing - requires some buffer space |
3 characteristics of DHCP snooping | 1. Feature that filters untrusted DHCP messages 2. acts like a firewall between untrusted hosts and trusted DHCP servers 3. builds DHCP snooping database with information of untrusted hosts with leased ip addresses |
3 Functions of DHCP Snooping on Nexus | 1. assigns level of trust to switch port 2. Validates messages that are received from untrusted sources and discards invalid messages 3. builds and maintains a DHCP snooping binding database of untrusted hosts with leased IP addresses and uses this database to validate messages received from untrusted hosts |
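Example: DHCP snooping configuration | a minimal sketch; the VLAN and the trusted (server-facing) interface are illustrative:
  feature dhcp
  ip dhcp snooping
  ip dhcp snooping vlan 10
  interface ethernet 1/1
    ip dhcp snooping trust |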
Dynamic ARP Inspection | the DAI function ensures that only valid ARP requests and responses are relayed |
3 characteristics of IP Source Guard | 1. per-interface security mechanism 2. permits traffic when both IP and MAC source addresses match - DHCP snooping binding table - static configuration entries 3. configured on Nexus per interface |
IP source guard and DHCP Binding 5 traits | 1. DHCP snooping binding database is fully supported in hardware without using ACL TCAM (ternary content addressable memory) 2. database supports dynamic and static DHCP binding 3. hardware-based MAC-IP binding check has a higher priority over the ACL result 4. binding of the same IP address to multiple MAC addresses is not supported 5. ip source-guard doesn't work on trunks |
Unicast Reverse Path forwarding 3 traits | urpf - 1. reduces problems caused by malformed IPv4 and v6 packets 2. forwards only packets consistent with routing table 3. ensures that source IP and source interface appear in FIB table |
Strict uRPF | checks that the source address and the ingress interface through which the packet was received match one of the uRPF interfaces in the FIB result - if the check fails, the packet is dropped - packet flows must be symmetrical |
Loose uRPF | this mode looks up the packet source address in the FIB - if the FIB lookup result indicates that the source is reachable through at least one real interface, the packet is forwarded - the ingress interface through which the packet was received is not required to match any interface in the FIB result |
Traffic Storm control | a traffic storm occurs when packets flood the LAN, creating excessive traffic and degrading network performance - traffic storm control guards against this |
What does traffic storm control monitor | 1. unicast 2. broadcast 3. multicast |
Port Security 3 ways a MAC address can be learned | 1. statically 2. dynamically - learned from traffic 3. sticky - enables the switch to learn the MAC-and-port combination and store it in NVRAM |
3 Actions to take when violation of Port Security occurs | 1. shut down 2. restrict - drop the ingress packet and increment the violation counter 3. protect - drop the ingress packet |
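Example: port security configuration | a minimal sketch using sticky learning and the restrict violation action; the interface and maximum are illustrative:
  feature port-security
  interface ethernet 1/5
    switchport port-security
    switchport port-security maximum 2
    switchport port-security mac-address sticky
    switchport port-security violation restrict |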
7 CoPP enhancements on the 7K | 1. multicast traffic 2. ARP packets 3. L2 broadcast packets 4. IP unicast with DMAC 5. certain packets redirected to the CPU 6. matching of packets generating exceptions and redirections 7. configurable policy map for packets per second |
TrustSec | establishes cloud of trusted network devices - every device in the cloud is authenticated by its neighbors |
4 major TrustSec components | 1. authentication - verification of a device before permitting it to join the Cisco TrustSec network 2. authorization - the level of access the device has to Cisco TrustSec network resources 3. access control - access policies applied on a per-packet basis using the source tags on each packet 4. secure communication - encryption, integrity and data-path replay protection for packets flowing over links within the Cisco TrustSec network |
3 entities with Cisco TrustSec | 1. supplicants - devices attempting to join the TrustSec environment 2. authenticators - devices already part of the TrustSec network 3. authorization server - a device that provides authentication information, authorization information, or both |
SGT | security group tag - allows the network to enforce acls by enabling the endpoint device to act on the SGT for filtering purposes |
3 pieces of environmental data RADIUS downloads from the authentication server | 1. server list - list of servers that the client can use for future RADIUS requests 2. device SGT - the security group to which the device belongs 3. expiry timeout - how often the Cisco TrustSec device should refresh environmental data |
3 steps between supplicant and authenticator | 1. authentication using 802.1X 2. authorization, in which each side obtains policies such as the SGT and ACLs applied to the link 3. Security Association Protocol (SAP) negotiation, where the Extensible Authentication Protocol over LAN (EAPOL) key exchange is used between supplicant and authenticator to negotiate a cipher suite, exchange security parameter indexes and manage keys - NOTE: when these 3 steps succeed, a security association (SA) is established |
SCSI Multidrop Topology has the following 5 characteristics | 1. data bits are sent in parallel on separate wires 2. control signals are sent on a separate set of wires 3. only one device at a time can transmit - the device must have exclusive rights to the bus 4. a special circuit called a terminator must be installed at the end of the cable - the cable must be terminated to prevent unwanted electrical effects from corrupting the signal 5. multidrop has limitations - parallel transmission of data bits allows more data to be moved in a given time period but complicates receiver-transmitter sync - incorrect device termination can cause issues |
SCSI design 3 traits | 1. each device has a series of jumpers to determine the SCSI ID, or it can be software configurable 2. each device must have a unique ID 3. the ID determines priority on the bus |
Fibre Channel | industry standard used for storage networking |
3 protocols mapped onto Fibre Channel | 1. Fibre Channel Protocol 2. SCSI 3. FICON |
Fibre Channel Ports are intelligent interface points 3 types of devices | 1. Embedded in I/O adapter 2. Embedded in an array or tape controller 3. embedded in a Fabric switch |
N Port | Fibre Channel Node Port connects a node to the Fabric |
F Port | Fibre Channel Fabric Port Switch Port to which the N Port Attaches |
E Port | Fibre Channel Expansion Port Connection between two Fabric Switches |
FL Port | Fibre Channel Fabric Loop Port |
NL Port | Fibre Channel Node Loop Port |
FC-AL | Fibre Channel Arbitrated Loop - enables devices to be connected in a one-way ring (loop) topology |
Word | Fibre Channel framing - the smallest unit of data - 32 bits encoded into a 40-bit form (8b/10b encoding) |
Frame | Fibre Channel framing - words are packaged into frames, roughly equivalent to an IP packet |
Sequence | Fibre Channel Framing - unidirectional series of frames |
Exchange | Fibre Channel Framing - series of sequences between 2 nodes |
FLOGI 3 parts to it | 1. the N Port must log in to its attached F Port - fabric login (FLOGI) 2. the N Port must log in to its target N Port - port login (PLOGI) 3. the N Port must exchange ULP support information with its target N Port to ensure that the initiator and target processes can communicate - known as process login (PRLI) |
Fibre Channel Flow Control | Uses a credit based strategy - transmitter does not send a frame until the receiver tells the transmitter the receiver can accept another frame - receiver is always in control |
Benefits to Fibre Channel Flow Control | 1. prevents loss of frames due to buffer overruns 2. maximizes performance under high loads |
2 Types of Fibre Channel Flow Control | 1. Buffer to Buffer - port to port 2. End - to End (source to destination) |
FC WWNs | 1. every FC device has a hard-coded address called a world wide name 2. allocated to the manufacturer by the IEEE and coded into each device when manufactured 3. 64 or 128 bits (128 most common today) |
nWWN | uniquely identify devices - every HBA, array controller, switch, gateway and Fibre Channel Disk has one |
pWWN | uniquely identifies each port on a device - a dual-ported HBA has one nWWN and a pWWN for each port |
Fibre Channel Address Format | 1. Domain ID - 8 bits, max 239 domains (Cisco max is 80) 2. Area ID - may be used to identify groups of ports within a domain - areas can group ports within a switch - they may also uniquely identify fabric-attached arbitrated loops - each fabric-attached loop has a unique area ID 3. Port ID - identifies the individual device on the port |
3 traits of Virtual SAN | VSAN 1. include ports to create isolated virtual fabrics 2. Isolate Fabric Services 3. Limit fabric disruption |
VSAN 6 features | 1. dynamic provisioning and resizing 2. improved port utilization 3. non-disruptive reassignment 4. shared EISL bandwidth through trunking 5. Statistics gathered per VSAN 6. port are added and removed non-disruptively to and from VSANs |
Each VSAN provides its own Fibre Channel fabric services, which include these 5 items | 1. FLOGI server 2. distributed name server 3. distributed zoning 4. Fabric Shortest Path First (FSPF) routing protocol 5. management server - services run, are managed and are configured independently |
7 traits of TE Ports | 1. carry tagged frames from multiple VSANs 2. trunk all VSANs (1-4093) by default 3. the VSAN allowed list defines which frames are allowed 4. can optionally be disabled for E Port operation 5. have a native VSAN assignment for E Port operation 6. do not confuse with port channels 7. the assigned native VSAN defaults to VSAN 1 |
EISL | 1. link created by connecting two TE ports 2. enhanced ISL functionality 3. also carries per-VSAN control protocol information 4. e.g., FSPF, distributed name server and zoning updates |
4 Port-based VSANs | 1. VSAN membership based on the physical switch port 2. switch-wide configuration 3. reconfiguration required when a server or storage moves to another switch 4. the switch port belongs to the VSAN |
4 WWN-based VSANs | 1. VSAN membership based on pWWN of server or storage 2. Fabric-wide distribution of configuration using Cisco Fabric Services 3. No reconfiguration required when a host or storage moves 4. device belongs to VSAN |
4 VSAN Tagging | 1. traffic isolation - control over each incoming and outgoing port 2. each frame in the fabric is uniquely identified - labelled with a VSAN_ID header on the ingress port - the VSAN ID is stripped away across E Ports and maintained across TE Ports 3. VSAN ID and priority in the header support QoS 4. the FC-ID can be reused across multiple VSANs |
4 Inter VSAN Routing | IVR 1.IVR allows selective routing between specific members from 2 or more VSANs 2. IVR process is stateful 3. Most Fibre Channel control traffic is blocked and cannot pass the VSAN boundary 4. Route (Domain ID) redistribution from one VSAN to the other enables IVR. |
7 Zone Rules | 1. a zone set can be activated or deactivated as a single entity across all switches in the fabric 2. only one zone set can be activated at any time 3. a zone can be a member of more than one zone set 4. a zone consists of multiple zone members - members in a zone can access each other; members in different zones cannot 5. zones can overlap (permitted by Fibre Channel standards) 6. zones typically do not cross the VSAN boundary - zones are contained within a VSAN - IVR zones can cross the VSAN boundary 7. zones are per-VSAN significant - zone A in VSAN 2 is different and separate from zone A in VSAN 3 |
Soft Zoning | implemented in switch software and enforced by name server - name server responds to discovery queries only with devices found in the zone or zones of requester |
Hard Zoning | enforced by ACLs in port ASIC - applied to all data path traffic |
Zone Membership types include | 1. pWWN, fWWN, FCID, interface with sWWN 2. Domain ID and port number 3. Ip address 4. symbolic node name (such as iSCSI-qualified name) 5. Fibre Channel or device alias |
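Example: zone and zone set configuration | a minimal sketch using pWWN members; the VSAN, names and WWNs are illustrative:
  zone name WEB-ZONE vsan 2
    member pwwn 21:00:00:e0:8b:00:00:01
    member pwwn 50:06:01:60:00:00:00:02
  zoneset name FABRIC-A vsan 2
    member WEB-ZONE
  zoneset activate name FABRIC-A vsan 2 |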
N-Port Virtualization | an extension to NPIV - allows a blade switch or ToR fabric device to behave as an NPIV-based HBA to the core Fibre Channel switch - the NPV device aggregates the locally connected host ports (N Ports) into one or more uplinks (pseudo-interswitch links) to the core switches |
Minimum FCoE Requirements | 1. jumbo frames - enabled by default 2. Fibre Channel IDs must be mapped to Ethernet MAC addresses 3. lossless delivery of Fibre Channel frames 4. a minimum of a 10 Gb/s Ethernet platform |
3 Types of links and traffic in this scenario | 1. the link between the server CNA and the F1 module port is an FCoE link and carries 2 types of traffic: network and Fibre Channel 2. the link between the F1 module port and the MDS switch is an FCoE link but carries only Fibre Channel traffic - the Cisco Nexus 7000 does not provide any modules with native Fibre Channel ports 3. the link between the Cisco MDS switch and the disk array is a native Fibre Channel link that carries only Fibre Channel traffic |
FIP Process 5 Parts | 1. Host solicitation 2. Switch provides the fabric unique FC-MAP 3. Host performs FLOGI 4. FCF provides FC-ID 5. Host uses the FPMA for subsequent transmissions |
Single Hop FCoE 6 | 1. direct attached 2. FCoE with FEX 2232 3. remote attached 4. FIP snooping 5. vPC 6. FCoE NPV |
Multihop FCoE | same options for the link between host and FCF as with single-hop FCoE |
DCB Ethernet Enhancements 4 | IEEE-based enhancements to classical Ethernet 1. priority groups - virtualize links and allocate resources per traffic class 2. priority flow control by traffic class 3. end-to-end congestion management and notification 4. Layer 2 multipathing |
DCB Benefits 4 | 1. Eliminates transient and persistent congestion 2. lossless fabric - no drop storage links 3. deterministic latency for HPC clusters 4. enables a converged ethernet fabric for reduced cost and complexity |
Priority Flow Control 5 | 1. Defined as 802.1Qbb for FCoE 2. Enables lossless Ethernet using PAUSE based on 802.1p CoS 3. When the link is congested, CoS values assigned to "no-drop" are paused 4. Other traffic continues and relies on upper-layer protocol retransmission 5. The solution is not limited to FCoE traffic |
DCBX Protocol 5 | 1. Defined in IEEE 802.1Qaz 2. Point-to-point, per-link discovery (runs over LLDP) 3. Negotiates capabilities - PFC, ETS, applications (FCoE) 4. Enables distribution of parameters from one node to another 5. Responsible for logical link up/down signaling of Ethernet and Fibre Channel interfaces |
DCBx Negotiation 4 | 1. Discovery of peer DCB capabilities 2. Misconfiguration detection 3. Peer configuration a. administered parameters - provisioned to the peer device b. operational parameters - informational purposes only c. local parameters - not exchanged 4. DCBX negotiation failures result in a. per-priority pause not enabled on CoS values b. vFC not coming up when DCBX is used in an FCoE environment |
7 steps Cisco Nexus 5K FCoE Config Procedure | 1. make sure that the correct license is installed 2. enable FCoE 3. configure FCoE interfaces for trunking and flow control (Nexus 5K only) - PortFast (port type edge trunk) - FCoE VLAN allowed 4. disable LAN traffic on an FCoE link (optional) - the switch sends a DCBX LAN logical link status (LLS) message to the CNA - brings down all VLANs on the interface that are not enabled for FCoE 5. configure the FCoE MAC address prefix (FC-MAP) (optional) - provided to hosts in FIP advertisements - the switch discards MAC addresses that are not part of the current fabric 6. configure the fabric priority advertised by the switch (optional) - used by the CNA to determine the best switch to connect to 7. set the advertisement interval (optional) |
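A hedged sketch of the Nexus 5000 side of this procedure; the interface, VLAN, and FC-MAP values are hypothetical and the optional FIP parameters can be left at their defaults.

```
feature fcoe                           ! step 2: enable FCoE (license required)
interface ethernet 1/4                 ! step 3: server-facing trunk
  switchport mode trunk
  switchport trunk allowed vlan 1,100  ! 100 = FCoE VLAN (hypothetical)
  spanning-tree port type edge trunk   ! "PortFast" equivalent for trunks
fcoe fcmap 0x0efc2a                    ! step 5 (optional): FC-MAP prefix
fcoe fcf-priority 42                   ! step 6 (optional): advertised fabric priority
fcoe fka-adv-period 8                  ! step 7 (optional): FIP advertisement interval, seconds
```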
7 parts to Cisco Nexus 7k FCoE Config Procedure | (Default VDC) 1. License each module used for FCoE 2. Install the FCoE feature set and enable features - requires LLDP - LACP optional 3. Enable FCoE QoS 4. Configure FCoE interfaces for trunking 5. Configure the storage VDC and allocate interfaces (storage VDC) 6. Enable features in the storage VDC 7. Configure optional FCoE parameters a. configure the FC-MAP b. configure the fabric priority c. set the advertisement interval d. disable LAN traffic on selected non-shared FCoE links |
5 parts to Cisco Nexus 5000 FCoE VLAN and Virtual Interfaces Configuration Procedure | 1. Configure a dedicated VLAN for each virtual fabric (VSAN) 2. Map the VLAN to the specified VSAN 3. Configure a virtual Fibre Channel interface - switchport modes - virtual F (VF), virtual E (VE), N-Port Virtualization (NP) - default mode is virtual fabric (VF) 4. Bind the virtual Fibre Channel interface to a physical interface - to an interface - or to a MAC address for FIP devices 5. Associate the virtual Fibre Channel interface with the appropriate VSAN |
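Steps 1-5 of the Nexus 5000 VLAN/VFC procedure in CLI form, assuming a hypothetical VSAN 10, VLAN 100, and interface numbers.

```
vsan database
  vsan 10                        ! step 1: dedicated VSAN
vlan 100
  fcoe vsan 10                   ! step 2: map VLAN 100 to VSAN 10
interface vfc 4                  ! step 3: virtual Fibre Channel interface (VF mode by default)
  bind interface ethernet 1/4    ! step 4: bind to a physical interface (or a MAC address for FIP devices)
  no shutdown
vsan database
  vsan 10 interface vfc 4        ! step 5: place the vfc into the VSAN
```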
9 Parts to Cisco Nexus 7k FCoE VLAN and Virtual Interfaces Config Procedure | 1. Make sure the configuration is in the storage VDC 2. Configure a dedicated VLAN for each VSAN 3. Map the VLAN to the VSAN 4. Configure a virtual Fibre Channel interface 5. Bind the virtual Fibre Channel interface to a physical interface 6. Configure a virtual FC port channel (optional) 7. Place the virtual Fibre Channel interface into a VSAN 8. Place the virtual Fibre Channel port channel into a VSAN (optional) 9. Configure VE loopback (optional) - by default a VFID check verifies that the VSAN configuration is correct on both ends of a VE link - VE loopback turns off the VFID check for VE Ports |
FCoE Interfaces | 1. Can be Ethernet or port channel interfaces 2. can be connected to FIP snooping bridges 3. must be configured as PortFast (port type edge trunk) 4. FCoE is not supported on private VLANs |
FCoE VLAN mapped to VSAN | 1. Must be in the allowed VLAN list 2. cannot be the native VLAN of the trunk port 3. should carry FCoE traffic only 4. should not be default VLAN (VLAN1) |
Cisco Nexus 7000 FCoE Guidelines 4 parts | 1. Nexus 7000 supports Gen-2 or newer CNAs only 2. the QoS policy must be the same on all Cisco FCoE switches in the network 3. storage VDC 4. shared interfaces |
5 parts of Storage VDC | 1. should provide only storage-related features 2. the FCoE feature set can be enabled in only one VDC 3. FCoE VLANs are configured in the FCoE-allocated VLAN range 4. uses resources from an F-Series module 5. does not support rollback |
2 items of shared interfaces | 1. can be shared with only one other VDC 2. do not support certain features, such as SPAN, private VLANs, port channels, access mode, and mac-packet-classify |
4 Parts to Cisco Adapter Fabric Extender FCoE Channel | 1. Identified by a unique channel number 2. channel scope is limited to the physical switch 3. connects a server vNIC with a switch vEthernet interface 4. uses tagging with VNTag identifiers |
FCoE Switch Side 2 | 1. Nexus 5500 2. Nexus 5500 connected to a Nexus 2232 |
FCoE server-side 2 | 1. UCS P81E virtual interface card for UCS C-Series 2. 3rd-party adapters that support the VNTag technology - example: Broadcom BCM57712 convergence NIC |
Cisco MDS 9000 8-Port FCoE Module 5 Parts | 1. 8-port line-rate FCoE module 2. multihop FCoE 3. supports 10 GE SFP+ SR/LR 4. bridges the unified fabric to the FC SAN 5. enables storage services to the unified fabric |
Fibre Channel Interface Config Procedure 6 Parts to it | 1. Configure port mode 2. configure interface speed (1 G, 2 G, 4 G, 8 G, or auto) 3. configure max receive buffer size - default is 2112 bytes - can be 256 to 2112 4. Configure BB credits - defaults assigned per port capabilities, 1 - 64 5. Configure bit error handling - the bit error threshold is reached when 15 error bursts occur in a 5-minute period - by default the switch disables the interface when the threshold is reached 6. configure global attributes for Fibre Channel interfaces |
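The same six steps on an MDS Fibre Channel port might look like the following sketch; the interface and values are illustrative.

```
interface fc 1/1
  switchport mode F              ! step 1: port mode
  switchport speed 4000          ! step 2: speed in Mb/s (4 Gb/s here; auto also valid)
  switchport fcrxbufsize 2112    ! step 3: max receive buffer size, 256-2112 bytes
  switchport fcrxbbcredit 16     ! step 4: BB credits
  switchport ignore bit-errors   ! step 5: do not shut the port at the bit-error threshold
  no shutdown
```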
VSANs 6 Parts | 1. Each VSAN has its own principal switch and domain ID allocation policy 2. principal switches for different VSANs can reside on different physical switches 3. Each switch has a separate domain ID for each active VSAN 4. Domain IDs can overlap between VSANs 5. Domain ID and FC-ID allocation policy can be static or dynamic 6. All ports are originally in VSAN 1 |
VSAN configuration procedure 6 parts | 1. Create a VSAN and specify its ID (VSAN 1 is the default) 2. Configure the VSAN name (optional) 3. Configure the load-balancing method (optional) - based on source and destination ID - based on source, destination, and originator exchange ID (default) 4. Configure static VSAN interface membership 5. Assign VSANs based on the device WWN - referred to as dynamic port VSAN membership (DPVM) - Cisco Nexus 5000 switches do not support DPVM 6. Suspend, activate, or delete a VSAN (optional) |
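A short sketch of the VSAN procedure; VSAN 20, its name, and the interface are hypothetical.

```
vsan database
  vsan 20 name ENGINEERING            ! steps 1-2: create the VSAN and name it
  vsan 20 loadbalancing src-dst-id    ! step 3 (optional): exchange-based is the default
  vsan 20 interface fc 1/3            ! step 4: static interface membership
  vsan 20 suspend                     ! step 6 (optional)
show vsan 20
```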
Show VSAN 6 parts | 1. VSANs created 2. VSAN name 3. Administrative State (active and suspended) 4. Interoperability Setting 5. Load-balancing scheme 6. Operational State (up or down) |
vsan trunking 4 parts | 1. no special support required by end nodes 2. VSAN tagged header (VSAN_ID) is added at ingress point indicating membership 3. EISL trunk carries tagged traffic from multiple VSANs 4. VSAN header is removed at egress point |
san port channels 5 parts | 1. increase the aggregate bandwidth 2. balance traffic across multiple links 3. maintain optimum bandwidth utilization 4. provide fault tolerance on an ISL 5. can include up to 16 ISLs |
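As a sketch, bundling two ISLs into a SAN port channel (channel and interface numbers hypothetical):

```
interface san-port-channel 1
  channel mode active       ! use the port channel protocol for negotiation
interface fc 1/1 - 2
  channel-group 1 force     ! add both ISLs to the bundle
  no shutdown
show interface san-port-channel 1
```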
Cisco Fabric Services on MDS 4 parts | 1. Cisco Fabric Services is a facility for synchronizing and distributing application configurations to all switches in a fabric 2. both in-band and out-of-band protocol 3. single point of configuration ensures fabric-wide consistency 4. discovery of cisco fabric services-capable switches and applications |
Cisco Fabric Services Regions 3 parts | 1. SANs may have several administrators with different application requirements - example: apply different Call Home profiles on different sets of switches to alert the correct administrator 2. Cisco Fabric Services regions provide the ability to support distribution islands within a physical fabric 3. Cisco Fabric Services regions apply only to applications that work in the physical scope |
Logical | distribution is limited to a VSAN |
Physical | distribution spans the physical fabric |
6 Parts to Cisco Fabric Services Implementation | 1. Ensure Cisco Fabric Services distribution in global configuration mode - enabled by default - can be disabled and re-enabled 2. configure a given application for Cisco Fabric Services - on all switches in the fabric - commit the pending database changes 3. configure distribution over IPv4 or IPv6 via one of 2 methods (optional) - IP multicast - static peer address 4. configure regions (optional) 5. view applications supported by Cisco Fabric Services (optional) 6. Display Cisco Fabric Services-capable switches in the fabric - independent of individual application registrations - the local switch is indicated as (local) |
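A few representative CFS commands for this procedure; the region number and the application placed in it are illustrative.

```
show cfs status         ! distribution state on the local switch
cfs distribute          ! step 1: enabled by default; re-enable if needed
cfs ipv4 distribute     ! step 3 (optional): CFS over IPv4
cfs region 2            ! step 4 (optional): scope an application to a region
  callhome
show cfs application    ! step 5: applications registered with CFS
show cfs peers          ! step 6: CFS-capable switches in the fabric
```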
NPV Mode 3 Parts | 1. NPV is an extension to NPIV 2. the NPV edge switch aggregates the locally connected host ports (N Ports) into one or more uplinks to the core switches 3. allows blade and top-of-rack switches to behave as an NPIV-based HBA to the core Fibre Channel switch |
NPV Initialization 4 parts | 1. the NP port becomes operational 2. the edge switch logs itself into the core switch using a FLOGI request - using the pWWN of the NP port 3. the edge switch registers itself with the name server on the core switch - using the symbolic port name of the NP port and the IP address of the edge switch 4. subsequent FLOGIs from servers are converted to fabric discovery (FDISC) messages |
NPV Traffic Distribution 3 parts | 1. different NP ports can be connected to different core switches 2. traffic automatically distributed from servers to NP port uplinks 3. it can be manually distributed using traffic maps |
The NPV traffic map feature provides these 2 benefits | 1. facilitates traffic engineering by allowing configuration of a fixed set of NP port uplinks for a specific server interface (or range of server interfaces) 2. Ensures correct operation of the persistent FC ID feature, because a server interface will always connect to the same NP port uplink (or one of a specified set of NP port uplinks) after an interface reinitialization or switch reboot |
When you deploy NPV traffic management these 4 apply | 1. use NPV traffic management only when automatic traffic engineering does not meet your network requirements 2. Server interfaces configured to use a set of NP port uplink interfaces cannot use any other available NP port uplink interfaces, even if none of the configured interfaces are available 3. when disruptive load balancing is enabled, a server interface may be moved from one NP port uplink to another - moving between NP port uplink interfaces requires NPV to relogin to the core switch, causing traffic disruption 4. to link a set of servers to a specific core switch, associate the server interfaces with a set of NP port uplink interfaces that all connect to that core switch |
The regular Fibre Channel switch mode has these 5 characteristics | 1. all Fibre Channel services are provided - FLOGI, name server, zoning, domain server, FSPF, management - FSPF, zoning, and name server databases are distributed among connected switches 2. local switching is enabled 3. the interswitch link (ISL) between switches becomes a path within the FSPF routing table 4. up to 16 ISLs may belong to a single port channel 5. each switch consumes a domain ID |
The NPV mode has these 5 features | 1. most Fibre Channel services are switched off 2. the NPV-enabled switch now becomes a multiplexer 3. the NPV-enabled switch does not use a domain ID, which works around the domain ID limitation 4. there are a smaller number of switches to manage 5. it eliminates the need for server administrators to manage the SAN |
NPV Configuration 6 parts to procedure | 1. Enable NPV mode 2. Configure VSAN assignment 3. configure F and NP port interfaces 4. configure traffic maps (optional) 5. enable disruptive load balancing (optional) 6. enable NPIV on the core switch |
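The corresponding CLI, assuming hypothetical interface numbers; note that enabling NPV is disruptive (the switch reboots and the configuration is erased).

```
! core switch
feature npiv                     ! step 6: NPIV must be enabled on the core

! edge switch
feature npv                      ! step 1: disruptive - switch reboots
npv traffic-map server-interface fc 1/1 external-interface fc 1/5   ! step 4 (optional)
npv auto-load-balance disruptive                                    ! step 5 (optional)
```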
FCoE NPV 5 Features | 1. Extension to NPV 2. Enhanced form of FCoE Initialization Protocol (FIP) snooping 3. Secure connection of FCoE hosts to the FCoE forwarder switch 4. Can work together with NPV on the same edge NPV switch 5. Supported on Cisco Nexus 5000 Series in NX-OS Release 5.0(3)N2(1) and later |
FCoE NPV has the following 4 Benefits | 1. FCoE NPV does not have the management and troubleshooting issues that are inherent to managing hosts remotely at the FCoE forwarder 2. FCoE NPV implements FIP snooping as an extension to the NPV function while retaining the traffic-engineering, VSAN-management, administration, and troubleshooting aspects of NPV 3. FCoE NPV and NPV together allow communication through Fibre Channel and FCoE ports at the same time - this provides a smooth transition when moving from Fibre Channel to FCoE topologies 4. when you enable FCoE NPV, the switch does not reload - this feature requires an extra license |
FCoE NPV Edge Switch 2 notes | 1. Performs proxy functions to load balance logins from the hosts evenly across the available FCoE forwarder uplink ports 2. on the VNP port, an FCoE NPV bridge emulates an FCoE capable host with multiple ENodes each with a unique end node MAC address. |
VSAN to VLAN 4 parts to mapping | 1. each server VSAN needs a dedicated VLAN 2. VLAN carries FIP and FCoE traffic from the mapped VSAN 3. VLAN-VSAN mapping must be consistent in entire fabric 4. Cisco Nexus 5000 Series switches support 32 VSANs. |
4 things of VF Ports of FCoE interfaces | 1. must be bound to a VLAN trunk Ethernet interface or a port channel 2. the FCoE VLAN must not be the native VLAN 3. a port VSAN must be configured for the VF port 4. the interface must be administratively up |
6 Parts to VNP Ports of FCoE interfaces | 1. must be point-to-point links 2. individual Ethernet interfaces or members of an Ethernet port channel 3. for each FCF, a vFC interface must be bound to the Ethernet interface 4. binding to a MAC address is not supported 5. by default the VNP port is enabled in trunk mode and carries multiple VSANs 6. STP is automatically disabled in the FCoE VLAN |
FCoE NPV feature parity 6 parts | 1. Retains the traffic engineering, VSAN management, administration, and troubleshooting aspects of NPV 2. automatic traffic mapping 3. static traffic mapping 4. disruptive load balancing 5. FCF in the FCoE NPV bridge 6. FCoE frames received over VNP ports are forwarded only if the L2_DA matches one of the FCoE MAC addresses assigned to hosts on the VF ports |
4 Limitations to VNP ports configured as vPC topologies between FCoE NPV bridge and an FCF | 1. vPC spanning multiple FCFs in the same SAN Fabric is not supported 2. For LAN traffic, dedicated links must be used for FCoE VLAN between the FCoE NPV bridge and the FCF that is connected over a vPC 3. FCoE VLANs must not be configured on the interswitch vPC interfaces 4. VF Port binding to a vPC member port is not supported for an interswitch vPC. |
4 Unsupported FCoE over FEX topologies | 1. Cisco Nexus 5010 or Nexus 5020 switch as an FCF connecting to the same FCoE NPV bridge over multiple VF ports 2. a 10-Gb fabric extender connecting to the same FCoE NPV bridge over multiple VF ports 3. Cisco Nexus 5000 Series switch as an FCoE NPV bridge connecting to a FIP snooping bridge or another FCoE NPV switch 4. VF port trunk to hosts in FCoE NPV mode |
FCoE NPV feature upgrade and downgrade 3 additional limitations | 1. cannot perform a Cisco ISSD to Cisco NX-OS release 5.0(3)N1(1) or earlier if FCoE is enabled and VNP ports are configured 2. Warning is displayed if an ISSD is performed to NX-OS release 5.0(3)N1(1) or earlier when FCoE NPV is enabled but VNP ports are not configured 3. before performing a Cisco ISSU on an FCoE NPV bridge, use the disable-fka command to disable the timeout value check (FKA check) on the core switch |
FCoE NPV configuration 7 step Procedure 2 optional | 1. Enable FCoE NPV mode using one of two methods - enable FCoE then enable NPV, or enable FCoE NPV directly 2. Enable the default FCoE QoS policy (best practice) 3. Configure VSAN-to-VLAN mapping 4. Configure server interfaces 5. Configure uplinks 6. Configure traffic maps (optional) 7. Enable disruptive load balancing (optional) |
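A minimal FCoE NPV sketch on a Nexus 5500, assuming a hypothetical VLAN/VSAN 100 and interface numbers.

```
feature fcoe-npv                 ! step 1: enable FCoE NPV directly (no reload)
vsan database
  vsan 100
vlan 100
  fcoe vsan 100                  ! step 3: VSAN-to-VLAN mapping
interface vfc 2                  ! step 4: server-facing VF port
  bind interface ethernet 1/2
  no shutdown
interface vfc 10                 ! step 5: VNP uplink toward the FCF
  bind interface ethernet 1/10
  switchport mode NP
  no shutdown
```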
Cisco Prime DCNM for SAN 6 Main Features | 1. Real-time fabric and network health monitoring 2. VM-aware automated discovery and VM-path analysis 3. VM-aware performance monitoring 4. Detailed fabric topology views of data center infrastructure 5. Comprehensive Fibre Channel over Ethernet (FCoE) management, including provisioning, discovery, and operation monitoring 6. custom reporting |
Prime DCNM for SAN 8 Basic Components | 1. DCNM-SAN Server 2. DCNM-SAN Client 3. DCNM-SAN web client 4. Device manager 5. Performance Manager 6. Cisco Traffic Analyzer 7. Network Monitoring 8. Performance Monitoring |
DCNM SAN Scope 7 Items | 1. Cisco MDS 9500 Series Switches 2. Cisco MDS 9200 Series Switches 3. Cisco MDS 9100 Series Switches 4. Cisco Nexus 7000 Series Switches 5. Nexus 5000 Series Switches 6. Nexus 3000 series switches 7. UCS 6100/6200 Fabric Interconnects |
DCNM SAN 2 Editions | 1. Essentials 2. Advanced edition |
8 Key Features of CMP | 1. Dedicated operating environment 2. Monitoring of Supervisor Status and initiation of resets 3. system reset while retaining OOB ethernet connectivity 4. Capability to initiate a complete system power shutdown and restart 5. Login authentication 6. Access to supervisor logs 7. Control Capability 8. Dedicated Front Panel LEDs |
4 reasons why CMP can deliver remote control | 1. Dedicated processor 2. its own memory 3. its own bootflash memory 4. Separate Ethernet management port |
7 Best Practices for Dual Supervisors | 1. connect 4 Ethernet cables to the system 2. each supervisor requires 2 cables - one for the CMP port and the other for the management port 3. the CMP ports should be connected to the OOB management network 4. a Cisco Nexus 7000 Series system requires 3 IP addresses 5. assign a unique IP address to each CMP 6. one IP address is shared by the 2 supervisors 7. the active supervisor owns the shared IP address |
CMP access methods 3 | 1. Control Processor 2. SSH (enabled by default) 3. Telnet |
AAA Service 7 Configuration Options | 1. User telnet or ssh login authentication 2. console login authentication 3. Cisco Trustsec authentication 4. 802.1X authentication 5. EAPoUDP authentication for Network Access Control (NAC) 6. User management session accounting 7. 802.1X accounting |
4 parts to Configuring AAA | 1. to use remote RADIUS, TACACS+, or LDAP servers for authentication, configure the hosts on the NX-OS switch 2. configure console login authentication methods 3. configure default login authentication methods for user logins 4. configure default AAA accounting methods |
6 Steps to RADIUS Configuration | 1. enable Cisco Fabric Services distribution for RADIUS 2. enable RADIUS server connections to the NX-OS switch 3. Configure RADIUS secret keys for the RADIUS servers 4. If needed, configure RADIUS server groups with subsets of the RADIUS servers for AAA authentication methods 5. If needed, configure any of the following optional parameters a. dead-time interval b. RADIUS server specification allowed at user login c. timeout interval d. UDP port 6. If RADIUS distribution is enabled, commit the RADIUS configuration to the fabric (optional) |
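A condensed example of the RADIUS and AAA steps; the server address, key, and group name are hypothetical.

```
radius-server host 192.0.2.10 key 0 MyS3cretKey   ! steps 2-3: server and secret key
radius-server timeout 10                          ! optional parameters
radius-server deadtime 5
aaa group server radius RadGroup                  ! step 4: server group
  server 192.0.2.10
  use-vrf management
aaa authentication login default group RadGroup   ! default login method
aaa authentication login console local            ! console falls back to the local database
```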
4 Login and Console Authentication methods | 1. Global Pool of Radius Servers 2. Named subset of Radius, TACACS+ or LDAP Servers 3. Local Database on NX-OS (default method) 4. Username only |
SSH 4 traits | 1. a Nexus switch can be either an SSH server or an SSH client 2. Supports SSHv2 key algorithms - RSA and DSA 3. Supports digital certificates 4. SSH configuration and operation are local to the VDC |
General information about user accounts 2 things | 1. max of 256 user accounts 2. reserved words cannot be used to configure users |
7 Characteristics of Strong Passwords | 1. Minimum of 8 characters 2. Doesn't contain many consecutive characters 3. does not contain many repeating characters 4. does not contain dictionary words 5. does not contain proper names 6. contains both upper and lower case characters 7. contains numbers |
7 RBAC config guidelines | 1. Can have 64 user-defined roles per VDC (in addition to the 4 default roles in the default VDC and 2 in nondefault VDCs) 2. can add up to 256 rules to a user role 3. can add up to 64 user-defined feature groups to a VDC in addition to the default feature group 4. Can configure up to 256 users in a VDC 5. can assign a maximum of 64 user roles to a user account 6. a local account takes precedence over an AAA account 7. RBAC is not supported for traffic between F1 and M1 module ports in the same VLAN |
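To illustrate RBAC, a sketch of a custom role and user; the role name, rules, username, and password are hypothetical.

```
role name monitor-ops
  description read-only operations role
  rule 1 permit read                          ! read access to all features
  rule 2 permit command show running-config   ! explicit command permission
username jdoe password S3curePass99 role monitor-ops
```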
User Accounts have 4 attributes | 1. Username 2. Password 3. Expiry Date 4. User Roles |
Cisco Fabric Services 8 Parts | 1. Cisco Fabric Services is a Cisco Proprietary feature that distributes data, including configuration changes, to all Cisco NX-OS devices in a Network 2. Fabric Services distributes configuration changes for applications 3. Radius 4. TACACS+ 5. User and Admin Roles 6. Call Home 7. NTP 8. Cisco Fabric Services region is a user-defined subset of devices for a given feature or application |
4 Default Fabric Services Parameters | 1. Cisco Fabric Services distribution on the device is enabled 2. Cisco Fabric Services over IP is disabled 3. IPv4 multicast address 239.255.70.83 4. IPv6 multicast address is ff15::efff:4653 |
3 Traits of NTP | 1. NTP synchronizes the time of day among a set of distributed time servers and clients 2. NTP must be configured in the default VDC on the Cisco Nexus 7000 3. NTP uses the default VRF if you do not configure a specific VRF for the NTP server and NTP peer |
9 Guidelines and Limitations of NTP | 1. you should have a peer association with another device only when you are sure your clock is reliable (you have a reliable NTP server) 2. a peer configured alone takes on the role of a server and should be used as a backup - if you have 2 servers, you can configure several devices to point to one server and the remaining devices to point to the other, then configure a peer association between the 2 servers to create a more reliable NTP configuration 3. if you have only one server, you should configure all the devices as clients to that server 4. you can configure up to 64 NTP entities (servers and peers) 5. if you configure NTP in a VRF, ensure that the NTP servers and peers can reach each other through the configured VRFs 6. if Cisco Fabric Services is disabled for NTP, NTP does not distribute any configuration and does not accept a distribution from other devices in the network 7. after Cisco Fabric Services distribution is enabled for NTP, entering an NTP configuration command locks the network for NTP configuration until the commit command is entered - during the lock, no one except the lock initiator can change the NTP configuration 8. if you use Cisco Fabric Services to distribute NTP, all devices in the network should have the same VRFs configured as you use for NTP 9. you must manually distribute NTP authentication keys on the NTP server and Cisco NX-OS devices across the network |
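A small NTP sketch reflecting these guidelines (addresses hypothetical); the distribute/commit pair applies only when CFS distribution for NTP is used.

```
ntp distribute                                   ! enable CFS distribution for NTP (optional)
ntp server 192.0.2.20 prefer use-vrf management
ntp peer 192.0.2.21 use-vrf management           ! peer only with a reliable clock
ntp commit                                       ! distribute and release the CFS lock
show ntp peers
```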
PTP 6 items | Precision Time Protocol 1. PTP is a time synchronization protocol for distribution of time information throughout the entire network 2. provides greater accuracy than other time synchronization protocols 3. PTP can work on both PTP and non-PTP devices 4. PTP devices are - ordinary clock - boundary clock - transparent clock 5. non-PTP devices are - ordinary network switches - routers - any other infrastructure equipment 6. NX-OS supports multiple instances of PTP, one instance per VDC |
7 Guidelines and limitations | 1. NX-OS 5.2 operates in boundary clock mode only 2. only one PTP process can control all of the port clocks through the clock manager 3. PTP supports only multicast communication - negotiated unicast communication is not supported 4. PTP is limited to a single domain per network 5. PTP can be enabled only on F1 and F2 Series module ports 6. all management messages are forwarded on ports on which PTP is enabled - handling management messages is not supported 7. PTP supports transport over UDP - transport over Ethernet is not supported |
3 EEM Overview | 1. In-box monitoring of different components of the system via a set of software agents (event detectors) 2. event detectors notify EEM when an event of interest occurs; based on this an action can be taken 3. EEM policy consists of an event statement and one or more action statements |
3 Advantages of EEM | 1. ability to take proactive actions based on configurable events 2. build automation directly into the device 3. reduce network bandwidth by doing local event monitoring |
EEM 3 major components | 1. event statements 2. action statements 3. policies |
8 Guidelines and Limitations | 1. Max number of configurable EEM policies is 500 2. Action statements within your user policy or overriding policy should not negate each other or adversely affect the associated system policy 3. an override policy that consists of an event statement and no action statement triggers no action and no notification of failures 4. an override policy without an event statement overrides all possible events in the system policy 5. EEM event correlation is supported only on the supervisor module, not on individual line cards 6. EEM event correlation is supported 7. EEM event correlation does not override the system default policies 8. default action execution is not supported for policies that are configured with tagged events |
10 EEM Actions | 1. Execute CLI commands 2. Update a counter 3. Log an exception 4. Force shutdown of any module 5. reload the device 6. Shut down specified modules because power is over budget 7. generate a syslog message 8. generate a Call Home event 9. Generate an SNMP notification 10. use the default action for the system policy |
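A minimal EEM applet combining an event statement with an action; the applet name, CLI pattern, and message text are hypothetical.

```
event manager applet IF-SHUT-GUARD
  event cli match "conf t ; interface * ; shutdown"
  action 1.0 syslog priority critical msg "interface shutdown attempted"
```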
Netflow Overview | 1. NetFlow uses flows to provide statistics for accounting, network monitoring, and network planning 2. Cisco NX-OS supports the Flexible NetFlow feature, which enables enhanced network anomaly and security detection 3. you can export the data that NetFlow gathers for your flows by using an exporter, sending the data to a remote NetFlow collector 4. a flow is a unidirectional sequence of packets that share 7 values |
7 values NetFlow flows share | 1. ingress interface (SNMP ifIndex) 2. source IP address 3. destination IP address 4. IP protocol 5. source port for UDP or TCP, 0 for other protocols 6. destination port for UDP or TCP, type and code for ICMP, or 0 for other protocols 7. IP type of service |
Netflow v9 - 3 benefits | 1. variable field specification format 2. support for IPv6, Layer 2, and MPLS 3. more efficient network utilization |
Netflow v5 - 3 limitations | 1. fixed field specifications 2. 16-bit representation of the 32-bit interface index used in Cisco NX-OS 3. No support for IPv6, Layer 2, or MPLS |
9 guidelines and limitations of Netflow | 1. you must configure a valid record name for every flow monitor 2. Use v9 export to see the full 32-bit SNMP ifIndex values at the NetFlow collector 3. Max number of supported NetFlow entries is 512K 4. Cisco Nexus 2000 FEX supports bridged NetFlow (for flows within a VLAN) 5. F1 Series ports do not support bridged NetFlow 6. NetFlow is not supported on F2 Series modules 7. Only Layer 2 NetFlow is applied on Layer 2 interfaces and only Layer 3 NetFlow is applied on Layer 3 interfaces 8. a rollback will fail if you try to modify a record that is programmed in the hardware during the rollback 9. you must configure a source interface - if you do not, the exporter remains in a disabled state |
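Putting the record, exporter, and monitor pieces together on NX-OS; the names, collector address, and port are hypothetical.

```
feature netflow
flow record REC-IPV4
  match ipv4 source address
  match ipv4 destination address
  match transport destination-port
  collect counter bytes
  collect counter packets
flow exporter EXP-COLLECTOR
  destination 192.0.2.50 use-vrf management
  transport udp 9995
  source mgmt0                     ! required source interface
  version 9                        ! v9 preserves 32-bit ifIndex values
flow monitor MON-IPV4
  record REC-IPV4
  exporter EXP-COLLECTOR
interface ethernet 1/1
  ip flow monitor MON-IPV4 input   ! apply the monitor to ingress traffic
```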
Cisco Smart Call Home | 1. Automatic execution and attachment of relevant CLI command output 2. multiple message format options such as the following - short text - suitable for pagers or printed reports - full text - fully formatted for human reading - XML - machine-readable format that uses XML and Adaptive Messaging Language (AML) XML schema definitions (XSD) 3. multiple concurrent message destinations - you can configure up to 50 email destination addresses for each destination profile |
Cisco Smart Call Home Destination Profiles | 1. One or more alert groups - the group of alerts that trigger a specific Cisco Smart Call Home message if the alert occurs 2. one or more e-mail destinations - the list of recipients for the Cisco Smart Call Home messages generated by alert groups assigned to this destination profile 3. message format - format for the Cisco Smart Call Home message (short text, full text, or XML) 4. message severity level - the severity level that an alert must reach before a Call Home message is sent |
Supported Alert groups | 1. Cisco TAC 2. Configuration 3. Diagnostic 4. EEM 5. Environmental 6. Inventory 7. License 8. Line Module Hardware 9. Supervisor Hardware 10. Syslog port group 11. System 12. Test |
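A representative Smart Call Home profile configuration; the contact details, SMTP server, and e-mail addresses are hypothetical.

```
snmp-server contact admin@example.com
callhome
  email-contact admin@example.com
  phone-contact +1-555-555-0100
  streetaddress 123 Example Way, Anytown
  destination-profile full-txt-destination email-addr noc@example.com
  transport email from n7k-callhome@example.com
  transport email smtp-server 192.0.2.25 use-vrf management
  enable
callhome test            ! send a test Call Home message
```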
Scheduler 4 Periodic Modes plus One-Time Mode | 1. Daily 2. Weekly 3. Monthly 4. Delta - begins at a specified start time and then repeats at specified intervals 5. One-time mode - runs a job once |
3 Reasons why scheduler would fail | 1. if the license has expired for a feature at the time the job for that feature is scheduled 2. if a feature is disabled at the time the job for that feature is scheduled 3. if a module is removed from the slot and a job for that slot is configured |
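A scheduler sketch that backs up the configuration nightly; the job name, TFTP URL, and time are hypothetical.

```
feature scheduler
scheduler job name BACKUP-CONFIG
  copy running-config tftp://192.0.2.30/n5k-backup.cfg vrf management
  exit
scheduler schedule name NIGHTLY
  job name BACKUP-CONFIG
  time daily 23:00
```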
SPAN definition | Analyzes traffic among source ports by directing the SPAN session traffic to a destination port with an external analyzer attached to it - you can define the sources and destinations to monitor in a SPAN session on the local device. |
Source of SPAN session can be | 1. Ethernet ports 2. port channels 3. inband interface to the control plane CPU - you can monitor the inband interface only from the default VDC 4. VLANs - when a VLAN is specified as a SPAN source, all supported interfaces in the VLAN are SPAN sources 5. Fabric port channels connected to the Nexus 2000 FEX |
Destination of SPAN session can be | any port to which an external analyzer is attached |
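A local SPAN sketch following the source/destination rules above (interface and VLAN numbers hypothetical); note that Nexus SPAN sessions are created in the shut state.

```
interface ethernet 1/2
  switchport monitor                   ! destination port for the analyzer
monitor session 1
  source interface ethernet 1/1 both   ! port source, both directions
  source vlan 100 rx                   ! VLAN source, ingress only
  destination interface ethernet 1/2
  no shut                              ! sessions are shut by default
show monitor session 1
```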
5 traits of ERSPAN | 1. ERSPAN source session 2. Routable GRE-encapsulated traffic 3. ERSPAN destination session 4. Can be configured on different switches 5. Cannot look at information from the supervisor |
ERSPAN Sources can be one of the following 4 | 1. Ethernet ports and port channels 2. inband interface to the control plane CPU - only from the default VDC, but monitors all inband traffic 3. VLANs 4. Fabric port channels connected to the Nexus 2000 FEX |
ERSPAN Destinations can be one of these 6 | 1. Ethernet Ports or port-channel interfaces in either access or trunk mode 2. a port configured as a destination port cannot also be configured as a source port 3. Destination port can be configured in only one ERSPAN session at a time 4. Destination ports do not participate in any spanning tree instance or any L3 Protocols 5. Ingress and ingress learning options are not supported on monitor destination ports 6. F1 and F2 series module core ports, Fabric extender host interface (HIF) ports, HIF port channels |
XML based network Config Protocol | NETCONF protocol allows you to manage devices and communicate over the interface using an XML management tool |
4 Configuration Methods | 1. CLI 2. XML API management interface 3. Cisco DCNM client 4. User defined GUI |
DCNM for LAN 3 things | 1. Operational monitoring of data center infrastructure - proactive monitoring - performance and capacity - topologic views 2. Data Center resource management - automated discovery - configuration and change management - template based provisioning 3. image management - integration with enterprise systems - web services APIs - event forwarding |
DCNM for LAN supported on these 5 Nexus platforms | 1. 7000 2. 5000 3. 4000 4. 2000 5. 1000V and 1010 |
9 Licensed Features | 1. vPC 2. VDC 3. 802.1X 4. GLBP, object tracking, and key chain 5. HSRP 6. Cisco integrated security features 7. Port security, tunnel interface 8. Configuration change control (archive, rollback, and diff) 9. operating system image management |
Fault Management 5 things on DCNM for LAN | 1. Industry standard event browser 2. Event collection and normalization 3. Per network feature correlation 4. Noise filtering for root cause isolation 5. event propagation |
Cisco NX-OS image management 6 things on DCNM for LAN | 1. Wizard-based installation of Cisco NX-OS images on multiple devices simultaneously 2. Performs validation before installation - verifies the switch's flash memory space availability - verifies compatibility between currently running network services and the new image 3. allows for time-based deployment 4. Fully leverages NX-OS ISSU transparent software upgrade that has no impact on network traffic (no service disruptions - zero packet loss) 5. detects installation failure and automatically initiates recovery action 6. images can be installed using FTP/TFTP/SFTP |
VDC Management DCNM for LAN | 1. VDC are transparently handled through the application wizard-based configuration - interface allocation across VDC - resource limit enforcement with templates - resource consumption monitoring - ipv4 and ipv6 capable 2. VDC aware fault and performance monitoring 3. VDC-aware RBAC 4. Topology representation 5. VDC per chassis 6. VDC-to-VDC connectivity 7. Real-time or delayed discovery |
Cisco VIC Adapter | 1. CNA designed for both single-OS and VM-based deployments 2. replaces software-based switching on the server with hardware switching in the fabric interconnect 3. supports static or dynamic virtual interfaces 4. offers up to 128 vNICs - e.g., M81KR VIC |
VM-Fex High Performance Mode and 3 benefits | traffic to and from the VM bypasses the DVS and hypervisor - traffic travels directly between the VMs and the VIC adapter - benefits of this mode are 1. increases I/O performance and throughput 2. decreases I/O latency 3. Improves CPU utilization for virtualized I/O intensive applications |
High-performance mode operation involves these 4 steps | 1. two VMs are attached to a VIC in high-performance mode 2. when vMotion migration begins on one VM, that VM transitions to standard mode 3. the VM migrates to the other host and standard mode is established 4. the VM transitions back to high-performance mode |
vCenter High-Level Configuration Overview | 1. Configure datacenter attributes 2. configure a distributed virtual switch 3. Add ESX hosts to the DVS 4. Migrate ESX hosts to pass-through switching 5. set up virtual machines (VMs) on the server 6. configure hosts with common shared storage (datastore) for vMotion (optional) 7. Reserve all guest memory on the VMs (optional for VM-FEX in high-performance mode) 8. specify the port profiles and VMwarePassThrough Ethernet adapter policy - these must reference the elements that you have previously configured in Cisco UCS Manager (optional for VM-FEX in high-performance mode) |
EPLD Upgrade Procedures | 1. Determine whether to upgrade EPLD images 2. download images 3. upgrade EPLD images using one of these 2 methods - all of the modules installed in your switch - specific modules in switch 4. verify the EPLD upgrade |
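The EPLD procedure translates to commands like these on a Nexus 7000; the module number and image filename are hypothetical.

```
show version module 3 epld                                ! step 1: current EPLD versions
install all epld bootflash:n7000-s1-epld.6.1.1.img        ! step 3, method 1: all installed modules
install module 3 epld bootflash:n7000-s1-epld.6.1.1.img   ! step 3, method 2: a specific module
show version module 3 epld                                ! step 4: verify the upgrade
```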