Understanding Cisco Nexus 1000v deployment and other components – Part 1
Please read this part if you want a short overview of the components and to see my test environment; if you do not need such information, please skip to Part 2.
Cisco Nexus 1000v is NX-OS software that runs on the hypervisor and supports VN-Link server virtualization technology. Cisco Nexus 1000v provides L2 switching, advanced networking features, and a common management model.
Each ESXi node (it can also be a Windows Server 2012 node) becomes a line card. Two components together make up the Cisco Nexus 1000v switch (the VSM can be installed independently of the VEM; the VEM itself provides the packet-forwarding capability):
VSM (Virtual Supervisor Module)
* Runs Cisco NX-OS software
* Controls all VEMs
VEM (Virtual Ethernet Module)
* Alternative to the VMware Distributed Switch
* Brings advanced network capabilities to the hypervisor
* Provides dedicated switch ports to VMs
* Each ESXi node can have only one VEM installation, no more
Cisco Nexus 1000v uses a virtual chassis model: slots 1 and 2 are always assigned to the VSMs, and VEMs occupy slots 3 through 66.
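On a running system you can inspect the virtual chassis with show module; an illustrative, abridged sketch of what the output could look like (slot numbers, port counts, and states are assumptions, not taken from my setup):

Switch# show module
Mod  Ports  Module-Type                      Model        Status
---  -----  -------------------------------  -----------  ----------
1    0      Virtual Supervisor Module        Nexus1000V   active *
2    0      Virtual Supervisor Module        Nexus1000V   ha-standby
3    248    Virtual Ethernet Module          NA           ok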
A port profile is the virtual boundary between server and networking, and Cisco Nexus 1000v provides network policy management through port profiles:
Switch# show port-profile name Basic-VM
  switchport mode access
  switchport access vlan 53
The VSM provides the management plane, much like the Supervisor module of a Cisco Nexus 7000 series switch: a single point of management that coordinates configuration and functions across the VEMs. The difference is that the VSM is software, not hardware. It is installable via OVF and supports an HA installation.
The VSM does not deal with data flow; all data moves via the VEMs.
VSM Interfaces;
SVS (Software Virtual Switch)
The VSM is a virtual machine and requires 3 vNICs, which should use the Intel e1000 driver.
* The first interface is named mgmt0. It is generally not used for communication between VSM and VEM; it provides connectivity between vCenter and the VSM. In SVS L3 mode, it can serve as the communication interface between VSM and VEM. Generally this interface should be set as Network Adapter 2 in the VSM VM configuration.
* The control interface is used for communication between VSM and VEM when the SVS mode is set to L2, and also for communication between the VSMs when HA mode is enabled. Even when the SVS mode is set to L3, VSM HA still uses this interface, so it is very important and you may want to prioritize its traffic. Generally this interface should be set as Network Adapter 1 in the VSM VM configuration. It is also used for internal VEM communication, such as the CDP and IGMP protocols.
* The last vNIC is the packet interface; in an L3 deployment, all packet communication between VSM and VEM happens via the L3 interface instead.
The domain ID is a kind of tag used for communication between VSM and VEM; if a VEM receives a request from a VSM with a different domain ID, it ignores it.
After the VSM is integrated with vCenter, to enable communication between VSM and VEM, the VSM sends some data, named opaque data, like below.
Among other content, this opaque data contains:
• Switch domain ID
• Switch name
• Control and packet VLAN IDs
• System port profiles
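As a sketch of where these values come from, the domain and vCenter connection configuration on the VSM could look like the following (the domain ID, VLAN IDs, connection name, IP address, and datacenter name are all illustrative assumptions, not from my setup):

svs-domain
  domain id 100
  control vlan 770
  packet vlan 770
  svs mode L2
svs connection vcenter
  protocol vmware-vim
  remote ip address 10.10.73.10
  vmware dvs datacenter-name DC1
  connect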
VEMs are like line cards, but each one is an independent switch from a forwarding perspective; the VEM is tightly integrated with ESXi as a kernel component.
A maximum of 64 VEMs can be managed by one VSM pair.
Switch Port Interfaces; vEth (Virtual Ethernet), Eth (Ethernet), Po (Port Channel)
vEth is a new concept and describes a switch port connected to a VM vNIC or a vmknic/vswif. From another point of view, a vEth is like a physical switch port that a server is wired into 😀 . It is unique, causes no problem at vMotion time, and is statically bound when the VM is created; it is not unassigned when the VM is suspended or powered off, but when the VM is deleted the related vEth waits to be assigned to the next provisioned VM.
Eth is the link to a pNIC of the ESXi host.
Po is used for link aggregation.
Each VEM keeps its own MAC table and knows nothing about the other VEMs; it simply forwards upstream if the destination MAC is not in its table.
There is no spanning tree; the switch uses a loop-prevention strategy instead.
VEM to VSM Communication;
L2 mode: VSM and VEMs must be in the same L2 domain.
svs mode L2
L3 mode: VSM and VEMs can be in any IP network; as long as they can reach each other they can communicate, and connectivity is easy to verify with a simple ping.
svs mode L3 interface (mgmt0 or control0)
Layer 3 mode encapsulates control and packet frames in UDP. A new or existing port group with l3control enabled is required, and the vmkernel interface on each ESXi node should be moved to this port group.
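A minimal sketch of such an l3control port profile, assuming the vmkernel traffic rides on VLAN 770 (the profile name and VLAN ID are illustrative):

port-profile type vethernet L3-Control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 770
  system vlan 770
  no shutdown
  state enabled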
A node restart or similar event does not affect the slot number of a VEM.
The VSM maintains a heartbeat with its assigned VEMs. Every second the VSM sends a request, and if it gets no response from a VEM, it removes that VEM from its line card view. The VEM keeps working with its last known state and continues forwarding. When connectivity comes back and the configuration has changed, the VEM needs to be reprogrammed, and this causes 1-15 seconds of downtime on the related VEM or ESXi node.
VSM to VEM communication is encrypted.
To deploy the VSM;
- You can use the ESXi management vmkernel interface for VSM-to-VEM communication; then no additional interface is needed on the node.
- Or you can separate the ESXi management and VSM-to-VEM communication interfaces.
- And you can use the control interface for it, but the ESXi management VMkernel VLAN and the control VLAN should be different VLANs.
A port profile is needed to apply network policy. After it is provisioned via the VSM, it appears in vCenter as a port group, and VMs can then connect their vNICs to it. Port profiles are not static and can be changed at any time while the system is running.
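Creating a VM-facing port profile like the Basic-VM example shown earlier could look like this (a sketch; the name and VLAN follow the earlier show output, the rest is assumed):

port-profile type vethernet Basic-VM
  vmware port-group
  switchport mode access
  switchport access vlan 53
  no shutdown
  state enabled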
When a newly created VM is powered on, a vEth is created on the Nexus 1000v switch and assigned to the port profile. The related vEth number is not deleted or reused by another VM; until the VM is deleted, its vEth will always exist.
A note for VMware users: the icons for port profiles and ethernet/uplink profiles are not changed for the Nexus 1000v in vCenter; the same illustrations are used.
System VLANs have a very special meaning. They act like a read-ahead or quick-start mechanism, bringing VSM and VEM communication up as quickly as possible when ESXi starts. The control and packet VLANs must be defined as system VLANs. The service console interface and the VMware vmkernel iSCSI or NFS interfaces should also be defined as system VLANs.
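An uplink profile carrying such system VLANs could be sketched as follows (the profile name and VLAN IDs are assumptions for illustration):

port-profile type ethernet System-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 770,773
  system vlan 770,773
  no shutdown
  state enabled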
My Test Environment
A standalone vSphere 5.1 ESXi server that will host the vCenter and Update Manager servers; the VSM will also run on it.
For the guest machines, the vSphere Standard Switch will be used:
* A single VMkernel port for ESXi management
* Two virtual switches, vSwitch0 and vSwitch1
* Two VLANs, '770' and '773'
* Two port groups, 'VM Network' and 'V770'
I planned to keep the VSM on a different network; the VSM guest machine will be on VLAN 770, with vCenter and Update Manager on VLAN 773.
We also need two additional vSphere 5.1 ESXi servers that have already joined the cluster under the related datacenter; these nodes have:
* Two VMkernel ports, for ESXi management and for iSCSI (for now)
* Two virtual switches, vSwitch0 and vSwitch1
* Two VLANs, for ESXi management and for iSCSI traffic
* One or more additional interfaces; in my configuration only one extra Ethernet interface is needed to add to the Cisco Nexus 1000v switch, for the uplink needs of VM traffic (between VMs or to the Internet)
Before the installations, please get ready;
Please go to the link, type 1000v in the search box, select the NX-OS System Software link, and download it. You also need to install Java on the machine where the VSM Installer will run 😀 ; download it from this link, choosing the relevant version for your supported OS. I saved all the downloads and installed the Java JRE on the vCenter VM.
Posted on 12/01/2013, in Cisco Nexus 1000v.