Excellent VMware documentation for the data center

This documentation is well worth a look!
It is the VMware design guide for the Cisco data center!
Download the full PDF: VMware Infrastructure 3 in a Cisco Network Environment

See a preview below:

ESX Server Network and Storage Connectivity

VMware networking is defined per ESX host and is configured via the VMware VirtualCenter Management Server, the tool used to manage an entire virtual infrastructure implementation. An ESX Server host can run multiple virtual machines (VMs) and perform some switching internally on the host's virtual network before sending traffic out to the physical LAN switching network.




ESX Server Networking Components


Figure 1 VMware Networking is Defined per ESX Host


vmnics, vNICs and Virtual Ports

The term “NIC” has two meanings in a VMware virtualized environment: it can refer to a physical network adapter (vmnic) of the host server hardware, or to a virtual NIC (vNIC), a virtual hardware device presented to the virtual machine by VMware’s hardware abstraction layer. While a vNIC is solely a virtual device, it can leverage the hardware acceleration features offered by the physical NIC.

Through VirtualCenter, you can see the networking configuration by
highlighting the ESX host of interest (on the left of the interface,
see
Figure 1). Within the Configuration tab (on the right side of the interface), you can find the association between the VM’s vNICs (VM_LUN_0007 and VM_LUN_0005 in Figure 1)
and the physical NICs (vmnic0 and vmnic1). The virtual and physical
NICs are connected through a virtual switch (vSwitch). A vSwitch
forwards the traffic between a vNIC and a vmnic, and the connection
point between the vNIC and the vSwitch is called a virtual port.

Clicking the Add Networking button opens the Add Network Wizard, which guides you through the creation of new vSwitches or new Port Groups, a feature used to partition an existing vSwitch.
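While the document describes the Add Network Wizard in VirtualCenter, the same kind of provisioning can also be sketched from the ESX Server service console with the esxcfg-vswitch utility; the vSwitch, Port Group, and VLAN values below are illustrative, not taken from the document.

    # Create a vSwitch, attach a physical uplink, and carve out a Port Group
    esxcfg-vswitch -a vSwitch1                       # add a new virtual switch
    esxcfg-vswitch -L vmnic1 vSwitch1                # link physical NIC vmnic1 as an uplink
    esxcfg-vswitch -A "Production" vSwitch1          # add a Port Group named "Production"
    esxcfg-vswitch -v 100 -p "Production" vSwitch1   # tag the Port Group with VLAN 100 (802.1Q)
    esxcfg-vswitch -l                                # list vSwitches, Port Groups, and uplinks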

Figure 2 shows the provisioning of physical and VM adapters in an ESX host.


Figure 2 ESX Server Interfaces


.
.
.

Figure 58 VMotion Process

VMotion is not a full copy of a virtual disk from one ESX host to another but rather a copy of “state”. The .vmdk
file resides in the SAN on a VMFS partition and is stationary; the ESX
source and target servers simply swap control of the file lock after
the VM state information synchronizes.

Deploying a VMotion-enabled ESX Server farm requires the following:


VirtualCenter management software with the VMotion module.

ESX farm/datacenter (VMotion only works with ESX hosts that are part of the same data center in the VirtualCenter configuration). Each host in the farm should have nearly identical hardware processors to avoid errors after migration (check the compatibility information from VMware).

Shared SAN, granting access to the same VMFS volumes (.vmdk file) for source and target ESX hosts.

Volume names used when referencing VMFS volumes to avoid WWN issues between ESX hosts.

VMotion “connectivity” (i.e., reachability of the VMkernel from the originating ESX host to the target ESX host and vice versa); see the sketch after this list. It may be desirable to have a Gigabit Ethernet network for state information exchange, although VMotion will work just fine on a VLAN.

The ESX originating host and the ESX target host need to have the same Network Label configured with the same Security Policy configuration.
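As a sketch of the VMkernel connectivity requirement above, a VMotion Port Group and VMkernel interface might be created from the service console as follows; the Port Group name, VLAN, and IP addresses are illustrative, and enabling VMotion on the interface is then done per host in VirtualCenter.

    # On each ESX host that takes part in VMotion (values are illustrative)
    esxcfg-vswitch -A "VMkernel-VMotion" vSwitch1              # Port Group for VMkernel traffic
    esxcfg-vswitch -v 200 -p "VMkernel-VMotion" vSwitch1       # keep the VMkernel VLAN separate
    esxcfg-vmknic -a -i 10.0.50.11 -n 255.255.255.0 "VMkernel-VMotion"   # VMkernel IP interface
    esxcfg-route 10.0.50.1                                     # VMkernel default gateway, if routed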

Note Regular migration (i.e., non-VMotion migration) is the migration of a powered-off VM. This type of migration does not present any special challenge in that there is no memory replication, and if a SAN is present the VMFS volumes are already visible to the ESX hosts in the same data center. If a relocation is involved (i.e., if the .vmdk disk file needs to be moved to a different datastore), the MAC address of the VM may change when it is powered on again.

VMotion Migration on the same Subnet (Flat Networks)

The most common deployment of VM migration requires Layer 2 adjacency between the machines involved (see Figure 59). The scope of the Layer 2 domain is for the most part limited to the access layer hanging off the same pair of aggregation switches; in other words, it is typically within the same facility (building) or, at most, across buildings in a campus. It typically involves no more than 10 to 20 ESX hosts, because the hosts must be part of the same data center for migration purposes and of the same cluster for DRS purposes.

A Layer 2 solution for a VMware cluster satisfies the requirements of being able to power on a machine anywhere within the cluster, as well as migrating an active machine from one ESX Server to another without noticeable disruption to the user (VMotion).

Figure 59 VMware Layer 2 Domain Requirements

VMotion is best explained starting from a real example. Imagine a server farm deployment such as the one shown in Figure 60. ESX Server Host 1 is in Rack 1 in the data center. ESX Server Host 2 is in Rack 10 in the same data center. Each rack provides Layer 2 connectivity to the servers (a design approach referred to as top-of-the-rack design). A pair of Layer 3 switches interconnects the racks, which may very well be several rows away from each other. The goal of the implementation is to be able to move VM4 from ESX Server Host 2 in Rack 10 to ESX Server Host 1 in Rack 1.

Figure 60 VMotion Migration on a Layer 2 Network

For this to happen, you need to provision the network to carry VMkernel traffic from ESX Server Host 2 to ESX Server Host 1, and you need to make sure that VM4 can be reached by clients when it is running on ESX Server Host 1.

A solution that meets these requirements is the following:

Provisioning a VLAN for the VMkernel

Trunking this VLAN from ESX Server Host 2 all across the LAN network to ESX Server Host 1

Provisioning a VLAN for VM public access

Trunking this VLAN from ESX Server Host 2 all across the LAN network to ESX Server Host 1

Making sure that the VMkernel VLAN and the VM VLANs are separate (although they may share the same physical links)

The ESX host configuration would look like Figure 61. The ESX host would have a vSwitch with its own dedicated NIC for the VMkernel. The VMkernel VLAN would be trunked from the aggregation switch to the access switches in Rack 1 all the way to the vSwitch in ESX Server Host 2.

Similarly, the VM4 VLAN and the Network Label would be identical on vSwitch2 of ESX Server Host 2 and vSwitch2 of ESX Server Host 1 (see the sketch after Figure 61).

Figure 61 VM Mobility and VLAN Assignment
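As a sketch of the matching host-side configuration, the same Network Labels and VLAN tags would be created on both ESX hosts, while the trunking of those VLANs across the Cisco switches is configured on the physical network; the VLAN numbers and Port Group names below are illustrative, not from the document.

    # Identical on ESX Server Host 1 and ESX Server Host 2 (values are illustrative)
    esxcfg-vswitch -A "VM-Production" vSwitch2
    esxcfg-vswitch -v 100 -p "VM-Production" vSwitch2   # same Network Label and VLAN on both hosts
    esxcfg-vswitch -A "VMkernel" vSwitch1
    esxcfg-vswitch -v 200 -p "VMkernel" vSwitch1        # VMkernel VLAN separate from the VM VLAN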

ESX HA Cluster

As previously explained, a VMware ESX HA cluster differs from regular HA clusters: it does not provide application availability, but rather the capability to restart VMs that were previously running on a failed ESX host on a different ESX host.

An HA agent runs in each ESX host and is used to monitor the availability of the other ESX hosts that are part of the same VMware HA cluster. ESX host network monitoring is achieved with unicast UDP frames that are exchanged on the Service Console VLAN, or VLANs if multiple Service Consoles are configured. The agents use four different UDP ports (in the ~8042 range), and UDP heartbeats are sent every second. No heartbeats are exchanged on the production VLANs. Note that ESX HA requires the Domain Name Service (DNS) for initial configuration.

When an ESX host loses heartbeat connectivity to the other ESX hosts, it starts pinging the gateway to verify whether it still has access to the network or has become isolated. In principle, if an ESX host finds out that it is isolated, it powers down the VMs so that the lock on the .vmdk files is released and the VMs can be restarted on other ESX hosts.

Note that the default settings shut down VMs to avoid split-brain scenarios. ESX assumes a highly available and redundant network that makes network isolation very rare. HA recommendations are as follows:

Step 1 Configure two Service Consoles on different networks (see the sketch after these steps).

Step 2 Configure each Service Console with two teamed vmnics with Rolling Failover = Yes (ESX 3.0.x) or Failback = No (ESX 3.5).

Step 3 Ensure the teamed vmnics are attached to different physical switches.

Step 4 Ensure
there is a completely redundant physical path between the ESX hosts
with no single points of failure that could cause a split-brain
scenario.
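As an illustration of Steps 1 and 2, a second Service Console on a separate network can be added from the ESX service console as follows; the interface name, Port Group name, and addresses are illustrative, and the teaming and failback policies themselves are set in the VI Client.

    # Add a second Service Console Port Group and interface (values are illustrative)
    esxcfg-vswitch -A "Service Console 2" vSwitch1
    esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.0.3.11 -n 255.255.255.0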

Just as with VMotion, for a VM to restart on a different ESX host there needs to be a Network Label for the VM to connect to when it is powered on. The following failure scenarios help explain the behavior of the VMware HA cluster.

For more information on VMware ESX HA, refer to the VMware HA: Concepts and Best Practices technical paper at:

http://www.vmware.com/resources/techresources/402

Maintenance Mode

In this first example, esxserver1 is put into Maintenance Mode. The VM is migrated with VMotion to esxserver2, as shown in Figure 62. Note that from a network perspective, the only requirement for a VM to move to esxserver2 is that the same Network Label exists there. For clients to keep working with the migrating VM, this Network Label must map to the same VLAN as on esxserver1, and there must be vmnics trunking this VLAN to the Cisco Catalyst switches.

Note Apart from the naming of the Network Label, VirtualCenter and the HA cluster software do not verify either the VLAN number or the presence of vmnics on the vSwitch. It is up to the administrator to ensure that VLANs and vmnics are correctly configured beforehand.

Figure 62 HA Cluster and ESX Host Maintenance Mode



ESX Host Disconnected from Production, Management and Storage

In this example, ESX Server Host 1 is connected via a single vmnic to the LAN switching network (see Figure 63). One VLAN provides network management access (10.0.2.0), one VLAN provides access to the production network (10.0.100.0), and one VLAN provides access to the SAN via iSCSI (10.0.200.0).

Figure 63 HA Cluster and NIC Failure
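A sketch of how such a single-uplink host might be configured from the service console, using the subnets from Figure 63; the host addresses and Port Group names are illustrative, not from the document.

    # One physical uplink carrying management, production, and iSCSI traffic
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "Production" vSwitch0
    esxcfg-vswitch -A "iSCSI" vSwitch0
    esxcfg-vswif -a vswif0 -p "Service Console" -i 10.0.2.11 -n 255.255.255.0
    esxcfg-vmknic -a -i 10.0.200.11 -n 255.255.255.0 "iSCSI"   # VMkernel interface for iSCSI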

When NIC vmnic0 gets disconnected, ESX1 connectivity is lost on all VLANs. The HA agent on ESX2 determines that ESX1 is not reachable via the Service Console network, and it tries to reserve the VMFS volumes to be able to start VM1 and VM2. Because ESX1 is not only isolated but has also lost access to the iSCSI disks, the lock eventually times out, and ESX2 can reserve the disks and restart VM1 and VM2. The total time for the VMs to be restarted is approximately 18 to 20 seconds.

Note During this process, ESX2 tries to reach ESX1 several times before declaring it failed, and it also verifies that it can reach the gateway.


Note If you happen to watch the failure of ESX1 from VirtualCenter, do not expect to see vmnic0 marked as failed. Losing vmnic0 means that you have also lost network management connectivity, so VirtualCenter can no longer collect status information from ESX1.



Overall,
having two vmnics configured for NIC teaming provides a faster
convergence time than having to restart VMs in a different ESX host.

Lost Connectivity on Service Console Only

Imagine now a network similar to Figure 63, modified so that the vmnic for the Service Console is separate from the other vmnics used for production, VMkernel, and iSCSI traffic. If the Service Console vmnic fails on ESX Host 1, the host appears as disconnected from VirtualCenter, due to the lack of management communication, but ESX Host 1 is still up and running and the VMs are powered on. If ESX Host 2 can still communicate with ESX Host 1 over a redundant Service Console (which may be non-routed), the VMs do not need to be powered off and restarted on ESX Host 2.


A redundant Service Console is recommended. The software iSCSI initiator is implemented using the VMkernel address, but iSCSI uses the Service Console for authentication; therefore, the Service Console must have Layer 3 or Layer 2 connectivity to the iSCSI VLAN or network.


Lost Connectivity on Production Only

Consider Figure 63 and imagine that the NIC on ESX1 is not disconnected, but that for some reason the path on the production network is not available (most likely due to a misconfiguration). The HA cluster is not going to shut down the VM on ESX1 and restart it on ESX2, because ESX1 and ESX2 can still communicate on the Service Console network. The result is that the VMs on ESX1 are isolated.

Similarly, assume that ESX1 has two vmnics, one used for the production network and one used for the Service Console, VMkernel, and iSCSI. If the first vmnic fails, this does not cause the VMs to be restarted on ESX2. The result, again, is that the VMs on ESX1 are isolated.

To avoid this scenario, the best solution is not to dedicate a single vmnic to production traffic alone, but to use vmnics configured for NIC teaming, as sketched below.
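As a sketch, teaming two uplinks on the vSwitch that carries production traffic simply means linking both vmnics to it; the load-balancing and failback policies are then set per vSwitch or per Port Group in the VI Client (the vSwitch name below is illustrative).

    # Attach two physical uplinks to the same vSwitch for NIC teaming
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0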


