http://searchnetworking.techtarget.com/tip/vSphere-vSwitch-primer-Design-considerations
Virtual switches (vSwitches) are the core networking component on a vSphere host, connecting the physical NICs (pNICs) in the host server to the virtual NICs (vNICs) in virtual machines. As a managed Layer 2 switch, the vSphere vSwitch emulates the traits of a traditional Ethernet switch, performing similar functions such as VLAN segmentation. The vSwitch has no routing function, however, and must rely on either virtual routers or physical Layer 3 routers to move traffic between networks.
With this in mind, there are many ways to design vSwitches in vSphere. In planning vSwitch architecture, engineers must decide how they will use pNICs and assign port groups to ensure redundancy, segmentation and security. The more pNICs a host has, the more options there are for segregation, load balancing and failover. With fewer pNICs, options are limited, and balancing security, performance and redundancy among vSwitches becomes difficult.
Three kinds of vSphere vSwitches
Engineers must start by choosing the right kind of vSwitch for their environment. Three types of vSwitches can be used with vSphere: vNetwork Standard vSwitch (vSS), vNetwork Distributed vSwitch (vDS) and the Cisco Nexus 1000v.
Standard vSwitches are easy to use and work well for smaller environments. A vSphere host may have up to 248 vSSs configured, with up to 512 port groups per vSwitch, for a maximum of 4,088 total vSwitch ports per host. Because they must be configured individually on each host, maintaining them in larger environments can be time-consuming. A vSS also lacks the advanced networking features found in the vDS and 1000v.
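To get a sense of what per-host configuration means in practice, here is a minimal scripted sketch using pyVmomi, VMware's Python SDK for the vSphere API; the vCenter address, credentials, vSwitch name and NIC names below are placeholder assumptions, not values from this article.

```python
# Sketch: because a vSS is per host, the same switch must be created on every
# host, one at a time (here in a loop). All connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab use only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)

for host in view.view:
    net = host.configManager.networkSystem
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"]))
    net.AddVirtualSwitch(vswitchName="vSwitch1", spec=spec)  # repeated per host

view.DestroyView()
Disconnect(si)
```

A vDS removes exactly this repetition: the switch is defined once in vCenter Server and pushed to every member host.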
Distributed vSwitches are very similar to standard vSwitches, but where standard vSwitches are configured individually on each host, vDSs are configured centrally through vCenter Server. You may have up to 32 vDSs per vCenter Server, and each host may connect to up to 16 of these switches. While vDSs are created and maintained through vCenter Server, they do not depend on vCenter Server to keep operating.
The Cisco Nexus 1000v is a hybrid distributed vSwitch developed jointly by Cisco and VMware. It adds further intelligence and management features for protecting virtual machine traffic.
Which vSphere vSwitch should you choose? Without a vSphere Enterprise Plus license the answer is simple: Standard vSwitches are your only choice. If you do have an Enterprise Plus license, the vDS or Nexus 1000v become options. However, while the vDS is included with that license, the 1000v carries an additional per-host license cost.
Using a vSphere vSwitch for 802.1Q VLAN tagging
Virtual switches support 802.1Q VLAN tagging, which allows for multiple VLANs to be used on a single physical switch port. This capability can greatly reduce the number of pNICs needed in a host. Instead of needing a separate pNIC for each VLAN on a host, you can use a single NIC to connect to multiple VLANs.
Tagging works by applying tags to all network frames to identify them as belonging to a particular VLAN. There are several methods for doing this in vSphere, with the main difference between the modes being where the tags are applied. Virtual Machine Guest Tagging (VGT) mode does this at the guest operating system layer; External Switch Tagging (EST) mode does this on the external physical switch; and Virtual Switch Tagging (VST) mode does this inside the VMkernel. The VST mode is the one that is most commonly used with vSphere.
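In VST mode, the only setting involved on the vSphere side is the VLAN ID on the port group; the physical switch port simply has to trunk the relevant VLANs. As a hedged illustration, the pyVmomi sketch below adds a VM port group with VLAN ID 100 to an existing vSwitch; the port group name, vSwitch name and VLAN number are assumptions, and `host` is assumed to be a vim.HostSystem object obtained as in the earlier sketch.

```python
# Sketch: VST-style tagging -- the vSwitch tags and untags frames based on the
# VLAN ID configured on the port group. Names and VLAN number are placeholders.
from pyVmomi import vim

def add_vm_portgroup(host, vswitch="vSwitch1", name="VM-VLAN100", vlan_id=100):
    """Add a VM port group with an 802.1Q VLAN ID (VST mode) to one host."""
    pg_spec = vim.host.PortGroup.Specification(
        name=name,
        vlanId=vlan_id,                      # 1-4094 = VST; 0 = none; 4095 = VGT
        vswitchName=vswitch,
        policy=vim.host.NetworkPolicy())     # inherit the vSwitch-level policies
    host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)
```

Setting the VLAN ID to 4095 passes tagged frames straight through to the guest, which is how VGT mode is selected; leaving it at 0 disables tagging on the vSwitch, corresponding to EST mode, where the physical switch handles the tags.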
vSphere vSwitch design considerations
The main influence on vSwitch design is the number of pNICs in a host. This number determines how much redundancy and separation of traffic types can be implemented across vSwitches.
- Redundancy: There must be enough pNICs so a vSwitch can survive a pNIC failure.
- Load balancing: There must be enough pNICs so traffic can be spread across multiple pNICs.
- Separation: Sensitive VM traffic types must be physically separated.
- Function: Different host traffic types must be physically separated.
Assigning ports and port groups in vSwitch design
For security and performance reasons, it is not recommended to lump all ports and port groups together on a single vSwitch with every pNIC assigned to it. When creating vSwitches, there are several types of ports and port groups that can be added for separation, security and management:
- Service Console: This is a critical port that is unique to ESX hosts. It is the management interface for the Service Console VM, also known as a VSWIF interface. Every ESX host must have a service console, and a second can be created on another vSwitch for redundancy.
- VMkernel: With ESXi, this port serves as the interface for the management console. For both ESX and ESXi hosts, this port is also used for vMotion, Fault Tolerance logging and connections to NFS and iSCSI datastores. VMkernel ports can be created on multiple vSwitches for redundancy and to isolate the different traffic types onto their own vSwitches.
- Virtual Machine: These are the port groups connected to the vNICs of VMs. Multiple VM port groups can be created, one for each VLAN supported on a vSwitch, and VLAN IDs can be set so traffic is delivered to the correct port group.
Service Console and VMkernel traffic is critical to the proper operation of the host and should always be separated from VM traffic. This traffic also contains sensitive data, and not all of it (e.g., vMotion) is encrypted. Therefore, split your pNICs across multiple vSwitches that are dedicated to specific functions and types of traffic.
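As a hedged illustration of that separation, the pyVmomi sketch below builds a dedicated vSwitch for vMotion and attaches a VMkernel port to it; the pNIC names, port group name and IP settings are placeholder assumptions, and `host` is again assumed to be a vim.HostSystem object.

```python
# Sketch: dedicate a vSwitch (and its own pNICs) to VMkernel vMotion traffic,
# keeping it off the VM-facing vSwitches. Names and addresses are placeholders.
from pyVmomi import vim

def create_vmotion_vswitch(host, pnics=("vmnic4", "vmnic5")):
    net = host.configManager.networkSystem

    # Dedicated vSwitch backed by its own pair of pNICs
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=list(pnics)))
    net.AddVirtualSwitch(vswitchName="vSwitch2", spec=vss_spec)

    # Port group the VMkernel interface will attach to
    pg_spec = vim.host.PortGroup.Specification(
        name="vMotion", vlanId=0, vswitchName="vSwitch2",
        policy=vim.host.NetworkPolicy())
    net.AddPortGroup(portgrp=pg_spec)

    # VMkernel (vmk) interface with a static IP on the vMotion network
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False,
                             ipAddress="192.168.50.11",
                             subnetMask="255.255.255.0"))
    return net.AddVirtualNic(portgroup="vMotion", nic=nic_spec)
```

Enabling vMotion on the new vmk interface is a separate step, done through the host's virtual NIC manager, and is omitted here for brevity.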
vSwitch design for redundancy
If only one pNIC is used in a vSwitch and that pNIC fails, all the VMs on the vSwitch lose network connectivity. This is especially important for the Service Console and VMkernel ports. Each host continually sends heartbeats via its Service Console (ESX) or VMkernel (ESXi) ports, and the loss of those heartbeats is what triggers the High Availability (HA) feature. If those ports become unavailable for longer than 12 seconds because of a network failure, HA is triggered and the VMs on the host are shut down and restarted on other hosts. Redundancy is therefore important to prevent false alarms caused by a single network port failure.
To achieve redundancy in a vSwitch, at least two pNICs must be assigned to it; this provides redundancy at the host level but not at the path level. For further resilience, connect each pNIC to a different physical switch so that even a complete switch failure leaves a path to the network intact. pNICs can be set up in Active or Standby mode. Active means the pNIC actively carries traffic for the vSwitch; Standby means the pNIC does not participate until an active pNIC fails. In most cases pNICs are left in Active mode so they are fully used. A pNIC can also be Active for one port group and Standby for another port group on the same vSwitch, which is useful when pNICs are limited and engineers want to dedicate certain traffic types to each pNIC while still having redundancy.
The following example shows a vSwitch that contains both a Service Console and VMkernel port group. Instead of creating a vSwitch for each, which would require four pNICs total for redundancy on each vSwitch, one vSwitch can be created for both port groups with two pNICs operating in Active/Standby mode:
- vSwitch0 – Service Console port group – vmnic0 (active) – vmnic1 (standby)
- vSwitch0 – VMkernel port group – vmnic1 (active) – vmnic0 (standby)
In this configuration, each of the critical port groups has a dedicated pNIC, but if a failure occurs, the other pNIC can stand in and serve both port groups.
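If you want to script that layout, the pyVmomi sketch below applies the same Active/Standby split by overriding the NIC teaming order on each port group; it assumes vSwitch0 is already backed by vmnic0 and vmnic1, that port groups named "Service Console" and "VMkernel" already exist, and that `host` is a vim.HostSystem object as in the earlier sketches.

```python
# Sketch: per-port-group Active/Standby overrides on a shared vSwitch0.
# Assumes vSwitch0 uses vmnic0 + vmnic1 and both port groups already exist.
from pyVmomi import vim

def set_active_standby(host, pg_name, active, standby):
    """Override the NIC teaming order for a single port group."""
    net = host.configManager.networkSystem
    # Reuse the existing port group spec so its other settings are preserved
    current = next(pg.spec for pg in host.config.network.portgroup
                   if pg.spec.name == pg_name)
    current.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",          # honor the explicit order below
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=[active], standbyNic=[standby]))
    net.UpdatePortGroup(pgName=pg_name, portgrp=current)

# Service Console: vmnic0 active, vmnic1 standby; VMkernel: the reverse
set_active_standby(host, "Service Console", "vmnic0", "vmnic1")
set_active_standby(host, "VMkernel", "vmnic1", "vmnic0")
```

Because each port group prefers a different pNIC, the two traffic types stay separated during normal operation, yet either pNIC can carry both if the other fails.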
Source: https://www.cnblogs.com/jjkv3/archive/2013/05/08/3066468.html