Archive for November 2008 | Monthly archive page

Script to create multiple users in Active Directory.

I once wrote a VBS script full of dirty tricks to do this, importing the users from an Excel spreadsheet.
Dude, this is much simpler. See below how to create the users with dsadd inside a for loop.
This is the bat file:
for /F "eol=; tokens=1,2,3,4* delims=," %%i in (CadastroGeral.txt) do dsadd user "cn=%%i,ou=Temp,dc=meudominio,dc=local" -samid %%j -display %%k -pwd %%l -disabled yes -mustchpwd yes
See the image:

Then create a text file with one user per line, with the columns holding the information you want to insert separated by commas, as below:

My Name,mylogin,"My Full Name",mypassword

Here is how Windows interprets the parameters:

The file must have the same name as the one between parentheses in the bat file; in my case it was CadastroGeral.txt.

Very cool!
Note that you can add more dsadd parameters.
This script was passed to me by Ronil.
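As the post notes, the loop can be extended with more dsadd parameters. Here is a sketch with a description column and group membership added; the fifth column, the group DN ("cn=Alunos,…") and the OU are made-up examples, so adjust them to your environment:

```bat
for /F "eol=; tokens=1,2,3,4,5* delims=," %%i in (CadastroGeral.txt) do dsadd user "cn=%%i,ou=Temp,dc=meudominio,dc=local" -samid %%j -display %%k -pwd %%l -desc %%m -memberof "cn=Alunos,ou=Temp,dc=meudominio,dc=local" -disabled yes -mustchpwd yes
```

Each line of CadastroGeral.txt would then need a fifth comma-separated column with the user's description.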

SonicWALL Phishing and Spam IQ Quiz

Very cool!
It looks like a silly thing, but some of the questions will leave you in doubt!
http://www.sonicwall.com/phishing/index.html

This tip came from Fabio Renault.

How to present a LUN to a virtual machine

Gentlemen, to use a cluster with VMs you have two options:

1 – Create a quorum disk and edit it, setting its property to shared.

2 – Use a LUN from the storage array and present the disk to the VM.

We will use the second option (once I have tested the first, I will post it on the blog):

– Create a LUN and present it to the physical server where the HBA is installed.

– Open the VI Client.
– Check whether the LUN has already been detected by the OS by browsing the HBAs under the Configuration tab > Storage Adapters.

– If it does not show up, click Rescan…
– Once the LUN is detected, go to the Storage option.
– In my case I already have two datastores presented by the storage array and a third one on the local disk, but the 500 GB LUN does not show up.
– Just out of curiosity, click "Add Storage".
– Select the "Disk/LUN" option > click Next.

– You will see that the LUN shows up, ready to be configured. Click "Cancel", because we do not want this LUN to be formatted with the VMFS file system.

– Go to the VM you want to present the LUN to > right-click > Edit Settings.
– Click "Add".

– Select "Hard Disk" > click "Next".

– Choose the "Raw Device Mappings" option.

– Choose the LUN you prefer (in my case there is only one).

– Select the "Store with virtual machine" option.

– Select the "Physical" compatibility option.

– Leave the defaults.

– Click "Finish".

– Click OK.

– Start your VM and check the installed disks:

– Remember that for a quorum disk Microsoft recommends a disk of at most 500 MB.
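As a side note, once the LUN is visible to the host, the same physical-mode RDM can also be created from the ESX service console with vmkfstools; the device path, datastore and file names below are illustrative, not taken from this setup:

```shell
# -z creates a physical compatibility (passthrough) RDM mapping file for the raw LUN
vmkfstools -z /vmfs/devices/disks/vmhba1:0:2:0 /vmfs/volumes/datastore1/minhavm/quorum-rdm.vmdk
```

The resulting .vmdk mapping file can then be attached to the VM as an existing disk.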

Enabling SSH on ESXi

By default SSH comes disabled on VMware ESXi. To enable this service you must:
1 – Log on to the ESXi console and press Alt + F1.
2 – On the screen that appears, type "unsupported" (nothing will be shown) and press Enter.
3 – Enter the root password.
4 – Edit the "inetd.conf" file in /etc/ (full path: /etc/inetd.conf).
5 – On the line containing #ssh, remove the #.
6 – Restart the host, or try the command kill -HUP `ps | grep inetd` (it did not work for me; I had to restart).

Source: VMware Communities

Thanks, Fabão (German from hell)!

My friend klOnes showed me other options for avoiding a reboot of the OS.

kill -HUP `ps | grep inetd | cut -c1-6`

Kl0nes – The HUP will tell the inetd process to reload its configuration files (in this case the inetd.conf you just edited)

Kl0nes – Even so, if it does not work, you can change the signal sent to the process to KILL, giving:

kill -9 `ps | grep inetd | cut -c1-6`

Kl0nes – And then run inetd again

# inetd

Kl0nes – ahhh

Kl0nes – there is an even nicer way

kill -HUP `cat /var/run/inetd.pid`

# cat /var/run/inetd.pid
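Putting the tips above together, the edit-and-reload can be scripted. The sketch below demonstrates the uncommenting step (step 5) on a throwaway copy of inetd.conf so it is safe to run anywhere; the sample ssh line is an assumption, and the actual reload (step 6) is shown as a comment since it only makes sense on the ESXi host itself:

```shell
# Work on a scratch copy so this demo does not touch a real config file.
conf=/tmp/inetd.conf.demo
printf '%s\n' '#ssh stream tcp nowait root /sbin/sshd sshd -i' > "$conf"

# Step 5: strip the leading "#" from the ssh line.
sed -i 's/^#ssh/ssh/' "$conf"
grep '^ssh' "$conf"

# Step 6, on the ESXi host only: ask inetd to reload its configuration:
#   kill -HUP `cat /var/run/inetd.pid`
```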


Virtualization for Dummies – special edition

Source: NTPRO.NL

Virtualization for Dummies – Sun and AMD Special Edition is now available!
Published by the same folks who create all the “Dummies” books – this
special edition version showcases Sun and AMD virtualization offerings,
how they work together, and how they can benefit businesses. Learn
about the latest virtualization technologies with this brief and easy-to-read booklet.
Via Virtualization.info.

Download

Renaming vmdk files on ESXi

Gentlemen,
It is good practice in virtualization to create machine templates with only the OS installed, so we are spared the work of installing from scratch.

When I generated the template (installed the OS) and saved it in another folder, everything was fine. But when I created a VM on ESXi and tried to rename the vmdk file so that it matched the machine name, I got the following warning: "At the moment, VI client does not support the renaming of virtual disks as the machines that use this disk may loose access to the disk."

See the image:

Here is the solution found on the VMware forum:

1 – Open the ESXi console (locally or via SSH).

2 – Navigate to the folder structure where the virtual machines are located.
I used "df -h" to find out which partition holds the VMs.

3 – Go to the folder of the machine whose vmdk you want to rename.

4 – Type the following command: vmkfstools -E originalname.vmdk newname.vmdk.

5 – Exit the console.
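Putting the steps together, a console session might look like this (the datastore and file names are illustrative):

```shell
cd /vmfs/volumes/datastore1/minhavm
# -E renames the descriptor and its -flat data file and updates the internal references
vmkfstools -E originalname.vmdk newname.vmdk
```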

Here is the command syntax of vmkfstools:

Free virtualization tools for tough economic times


Fonte: Eric Siebert Blog

Free hypervisors:
VMware Server – Version 2.0 has lots of new features and can be installed on several versions of Windows, Linux and almost any hardware.

VMware ESXi
– The entry-level edition of VMware’s enterprise-class hypervisor; the
installable version installs bare metal on a variety of supported and
unsupported hardware.

VMware Player – A great tool for starting up virtual machines without installing a full hypervisor on your system.

Free appliances:
The VMware appliance marketplace has hundreds of free appliances that span a variety of categories. Appliances range from simple firewalls to enterprise monitoring systems to full-blown Web and database packages (LAMP). You can run these appliances with VMware Player or import them into ESX/Server/Workstation and run them there.

Free management and reporting tools:
Embotics v-Scout – A free, agentless tool for tracking and reporting on virtual machines in VMware VirtualCenter-enabled environments.

Hyper-9
– This soon-to-be-released free search-based reporting tool is a great
addition to every administrator’s toolbox. Watch for its release around
the end of the year. If you are interested in participating in a beta
version of this tool, drop me an email. Not all beta requests will be approved and the company is looking for feedback if you do participate.

RVTools – A handy little tool that displays a multitude of information about your virtual machines.

Solarwinds VM Monitor – A free management tool that monitors ESX hosts and virtual machines.

Snaphunter and Snapalert
– Utilities that can report all running snapshots on ESX hosts,
including name, size and date. They can also automatically email
reports and optionally commit snapshots.

Visio Stencils – Some free Visio stencils from Veeam, VMGuru and the Visio Café to help you document your environment.

VMotion Info
– A free utility that gathers system and CPU information from your
hosts and puts it in a single overview to check for VMotion
compatibility.

VM Explorer – A management tool that eases management, backup and disaster recovery tasks in your VMware ESX Server environment.

MCS StorageView – A utility that displays all of the logical partitions, operating systems, capacity, free space and percent free of all virtual machines on ESX 3.x or VirtualCenter 2.x.

ESX HealthCheck – A script that collects configuration information and other data for ESX hosts and generates a report in HTML format.

Free administration tools:
Putty – A must-have utility for every administrator to remotely SSH into their ESX hosts.

Veeam FastSCP – A great SSH file transfer utility application.

WinSCP – Another speedy SSH file transfer utility application.

KS QuickConfig
– Designed to reduce the time needed to deploy and configure VMware ESX
servers as well as eliminate inconsistencies that can arise with manual
operations.

VP Snapper – A free utility that lets you revert to multiple VM snapshots at once rather than one-by-one.

VMware Converter – VMware’s free application that lets you perform physical-to-virtual and virtual-to-virtual operations.

vmCDconnected
– A handy utility that scans all virtual machines in your
infrastructure and shows if they have a CD connected to any of them.
After scanning you can disconnect all of the CDs with a click of a
button.

CPU Identification Utility – VMware’s free utility that displays CPU features for VMotion compatibility, EVC and 64-bit VMware support.

VMTS Patch Manager – A great ESX host-patching application for those who don’t have Update Manager.

Free backup utilities:
VISBU
– A free backup utility that runs from the Service Console and provides
VMDK-level backups of any VM in storage that is accessible by the host.

VM Backup Script – A backup script to perform hot backups of your virtual machines.

Free storage utilities:
Openfiler – A free, open
source, browser-based storage appliance that supports NFS and iSCSI. It
can be downloaded as an ISO file to install on a server or as a VMware
appliance to import to an ESX host. A great way to get more shared disk
in your environment by turning physical servers into network-attached
storage servers or turning the local disk on your ESX hosts into shared
disk when using the appliance.

Xtravirt Virtual SAN
– A free solution that turns local disk space on your ESX hosts into
shared VMFS volumes to avoid purchasing costly storage area network
disk space.

Free security tools:
Tripwire ConfigCheck
– A free utility that rapidly assesses the security of VMware ESX 3.0
and 3.5 hypervisor configurations compared to the VMware Infrastructure
3 Security Hardening guidelines.

Configuresoft Compliance Checker
– A free tool that provides a real-time compliance check that can
analyze multiple VMware ESX host servers at a time. Also provides
detailed compliance checks against both the VMware Hardening Guidelines
and the CIS benchmarks for ESX.

Look how cool

Even though I am an administrator of Microsoft environments, it is always good to see things done well, such as the migration of 40,000 Active Directory users to OpenLDAP.
Source: OpenLDAP Brasil

Talk summary

Since May 2007, the more than 1,300 APS (Social Security Agencies) have been using the Debian/Samba combo to provide login access for more than 50,000 Windows desktops. The objective is to eliminate the user/password administration done on each of these APS servers, through centralized authentication in OpenLDAP. Because we consider the Directory a strategic service within the company, we use the AD (Active Directory) Multi-Master concept, available in the 2.4 series of OpenLDAP through Mirror Mode. This implementation, together with a content switch, guarantees its high availability.

The administration of this database stored in OpenLDAP is done through a web tool developed in-house (WeMOiP – Web Directory Manager), since none of the tools tested (web or client/server) incorporates the internal "business rules" of the Social Security administration.

Another problem area was the Windows user profiles, which had to be converted to the new single domain. A script that performs the automatic conversion of these profiles was developed.

In parallel, we deployed wpkg – Windows Packager, an add-on that helps distribute software, fixes and patches to many clients in a centralized way.

The talk can be downloaded at this link: Palestra LatinoWare

Thanks for the tip, Renault

Excellent VMware documentation for the datacenter

This documentation is well worth a look!
It is VMware's design for the Cisco datacenter!
Download the full PDF: VMware Infrastructure 3 in a Cisco Network Environment

See a preview below:

ESX Server Network and Storage Connectivity

VMware networking is defined per ESX host, and is configured via the
VMware VirtualCenter Management Server, the tool used to manage an
entire virtual infrastructure implementation. An ESX Server host can
run multiple virtual machines (VMs) and perform some switching internal
to the host’s virtual network prior to sending traffic out to the
physical LAN switching network.




ESX Server Networking Components


Figure 1 VMware Networking is Defined per ESX Host


vmnics, vNICs and Virtual Ports

The term "NIC" has two meanings in a VMware virtualized environment; it can refer to a physical network adapter (vmnic) of the host server hardware, and it can also refer to a virtual NIC (vNIC), a virtual hardware device presented to the virtual machine by VMware's hardware abstraction layer. While a vNIC is solely a virtual device, it can leverage the hardware acceleration features offered by the physical NIC.

Through VirtualCenter, you can see the networking configuration by
highlighting the ESX host of interest (on the left of the interface,
see
Figure 1). Within the Configuration tab (on the right side of the interface), you can find the association between the VM’s vNICs (VM_LUN_0007 and VM_LUN_0005 in Figure 1)
and the physical NICs (vmnic0 and vmnic1). The virtual and physical
NICs are connected through a virtual switch (vSwitch). A vSwitch
forwards the traffic between a vNIC and a vmnic, and the connection
point between the vNIC and the vSwitch is called a virtual port.

Clicking the Add Networking button opens the Add Network Wizard, which guides you through the creation of new vSwitches or new Port Groups, a feature used to partition an existing vSwitch.

Figure 2 shows the provisioning of physical and VM adapters in an ESX host.


Figure 2 ESX Server Interfaces


.
.
.

Figure 58 VMotion Process

VMotion is not a full copy of a virtual disk from one ESX host to another but rather a copy of “state”. The .vmdk
file resides in the SAN on a VMFS partition and is stationary; the ESX
source and target servers simply swap control of the file lock after
the VM state information synchronizes.

Deploying a VMotion-enabled ESX Server farm requires the following:


VirtualCenter management software with the VMotion module.

ESX farm/datacenter (VMotion only works with ESX hosts that are part of the same data center in
the VirtualCenter configuration). Each host in the farm should have
almost-identical hardware processors to avoid errors after migration
(check the compatibility information from VMware)

Shared SAN, granting access to the same VMFS volumes (.vmdk file) for source and target ESX hosts.

Volume names used when referencing VMFS volumes to avoid WWN issues between ESX hosts.

VMotion “connectivity” (i.e., reachability of VMkernels from originating to
target ESX host and vice versa). It may be desirable to have Gigabit
Ethernet network for state information exchange, although VMotion will
work just fine on a VLAN.

The ESX originating host and the ESX target host need to have the same Network Label configured with the same Security Policy configuration.
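On ESX 3.x, the VMkernel connectivity requirement above can be sketched from the service console roughly as follows; the port group name, vSwitch and IP addressing are assumptions, not values from this document:

```shell
# Add a port group for VMotion traffic to an existing vSwitch
esxcfg-vswitch -A VMotion vSwitch1
# Create a VMkernel NIC on that port group
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 VMotion
# VMotion itself is then enabled on this interface through the VI Client
```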

Note Regular migration (i.e., non-VMotion migration) is the migration of a powered-off VM. This type of migration does not present any special challenge in that there is no memory replication, and if a SAN is present the VMFS volumes are already visible to the ESX hosts in the same data center. In case a relocation is involved (i.e., where the .vmdk disk file needs to be moved to a different datastore), the MAC address of the powered-on VM may change.

VMotion Migration on the same Subnet (Flat Networks)

The most common deployment of VM migration requires Layer 2 adjacency between the machines involved (see Figure 59).
The scope of the Layer 2 domain is for the most part limited to the
access layer hanging off of the same pair of aggregation switches, or
in other words typically either within the same facility (building) or
at a maximum across buildings in a campus, and typically involves 10 to
20 ESX hosts at a maximum due to the requirements of the host to be
part of the same data center for migration purposes and of the same cluster for DRS purposes.

A Layer 2 solution for a VMware cluster satisfies the requirements of
being able to turn on a machine anywhere within the cluster as well as
migrating an active machine from an ESX Server to a different one
without noticeable disruption from the user (VMotion).

Figure 59 VMware Layer 2 Domain Requirements

VMotion is better explained starting from a real example. Imagine a server farm deployment such as the one shown in Figure 60.
ESX Server Host 1 is in Rack 1 in the data center. ESX Server Host 2 is
in Rack 10 in the same data center. Each rack provides Layer 2
connectivity to the servers (design approach referred to as top of the
rack design). A pair of Layer 3 switches interconnects the racks which
may very well be several rows away from each other. The goal of the
implementation is to be able to move VM4 from ESX Server Host 2 in Rack
10 to ESX Server Host 1 in Rack 1.

Figure 60 VMotion Migration on a Layer 2 Network

For
this to happen, you need to provision the network to carry VMkernel
traffic from ESX Server Host 2 to ESX Server Host 1 and you need to
make sure that VM4 can be reached by clients when running in ESX Server
Host 1.

A solution that meets these requirements is the following:

Provisioning a VLAN for the VMkernel

Trunking this VLAN from ESX Server Host 2 all across the LAN network to ESX Server Host 1

Provisioning a VLAN for VM public access

Trunking this VLAN from ESX Server Host 2 all across the LAN network to ESX Server Host 1

Making sure that the VMkernel VLAN and the VM VLANs are separate (although they may share the same physical links)

The ESX host configuration would look like Figure 61.
The ESX host would have a vSwitch with its own dedicated NIC for the
VMkernel. The VMkernel VLAN would be trunked from the aggregation
switch to the access switches in Rack 1 all the way to the vSwitch in
ESX Server Host 2.

Similarly, the VM4 VLAN and the Network Label would be identical on vSwitch2/ESX Server Host 2 and on vSwitch2/ESX Server Host 1.

Figure 61 VM Mobility and VLAN Assignment

ESX HA Cluster

As previously explained, a
VMware ESX HA cluster differs from regular HA clusters—it does not
provide application availability, but the capability to restart VMs
that were previously running on a failed ESX host onto a different ESX
host.

An HA agent runs in each ESX host and is used to monitor the availability of the other ESX hosts that are part of the same VMware HA cluster. ESX host network monitoring is achieved with unicast UDP frames that are exchanged on the Service Console VLAN, or VLANs if multiple Service Consoles are configured. The agents use four different UDP ports, such as ~8042. UDP heartbeats are sent every second. No heartbeats are exchanged on the production VLANs. Note that ESX HA requires the Domain Name Service (DNS) for initial configuration.

When an ESX host loses connectivity via the heartbeats to other ESX hosts, it starts pinging the gateway to verify whether it still has access to the network or whether it has become isolated. In principle, if an ESX host finds out that it is isolated, it will power down the VMs so that the lock on the .vmdk files is released and the VMs can be restarted on other ESX hosts.

Note that the default settings will shut down VMs to avoid split-brain scenarios. ESX assumes a highly available and redundant network that makes network isolation very rare. HA recommendations are as follows:

Step 1 Configure two Service consoles on different networks.

Step 2 Configure each Service Console with two teamed vmnics with Rolling Failover = Yes (ESX 3.0.x) or Failback = No (ESX 3.5).

Step 3 Ensure the teamed vmnics are attached to different physical switches.

Step 4 Ensure
there is a completely redundant physical path between the ESX hosts
with no single points of failure that could cause a split-brain
scenario.

Just as with VMotion, for a VM to restart on a different ESX host there needs to be a Network Label for the VM to connect to when it is powered on. The following failure scenarios help explain the behavior of the VMware HA cluster.

For more information on VMware ESX HA, refer to the VMware HA: Concepts and Best Practices technical paper at:

http://www.vmware.com/resources/techresources/402

Maintenance Mode

In this first example, esxserver1 is put into Maintenance Mode. The VM is VMotion migrated to esxserver2 as it is shown in Figure 62. Note that from a network’s perspective, the only requirement for a VM to move to esxserver2 is that the same Network Label exists. For the clients to keep working on the migrating VM, this Network Label must be the same VLAN as esxserver1 and there must be vmnics trunking this VLAN to the Cisco Catalyst switches.

Note Apart from the naming of the Network Label, VirtualCenter and the HA cluster software do not verify either the VLAN number or the presence of vmnics on the vSwitch. It is up to the administrator to ensure that VLAN configurations and vmnics are correctly configured beforehand.

Figure 62 HA Cluster and ESX Host Maintenance Mode



ESX Host Disconnected from Production, Management and Storage

In this example, the ESX Server Host 1 is connected via a single vmnic to the LAN switching network (see Figure 63).
One VLAN provides network management access (10.0.2.0), one VLAN
provides access to the production network (10.0.100.0), and one VLAN
provides access to the SAN via iSCSI (10.0.200.0).

Figure 63 HA Cluster and NIC Failure

When the NIC vmnic0 gets disconnected, ESX1 connectivity is lost on all VLANs. The HA agent on ESX2 determines that ESX1 is not reachable via the Service Console network, and it tries to reserve the VMFS volumes to be able to start VM1 and VM2. Because ESX1 is not only isolated but has also lost control of the iSCSI disks, the lock eventually times out and ESX2 can reserve the disks and restart VM1 and VM2. The total time for the VMs to be restarted is approximately 18 to 20 seconds.

Note During this process, ESX2 tries to reach ESX1 several times before declaring it failed, and it also verifies that it can reach the gateway.


Note If
you happen to watch the failure of ESX1 from the VirtualCenter, do not
expect to see that vmnic0 is marked as failed. Losing vmnic0 means that
you also lost network management connectivity, so VirtualCenter cannot
collect status information from ESX1 any longer.



Overall,
having two vmnics configured for NIC teaming provides a faster
convergence time than having to restart VMs in a different ESX host.

Lost Connectivity on Service Console Only

Imagine now a network similar to Figure 63, modified so that the vmnic for the Service Console is separate from the other vmnics used for production, VMkernel, and iSCSI traffic. If the Service Console vmnic fails on ESX Host 1, ESX Host 1 appears as disconnected from the VirtualCenter, due to the lack of management communication, but ESX Host 1 is still up and running and the VMs are powered on. If ESX Host 2 can still communicate with ESX Host 1 over a redundant Service Console (which need not be routed), the VMs do not need to be powered off and restarted on ESX Host 2.


A redundant Service Console is recommended. The software iSCSI
initiator is implemented using the VMkernel address. The iSCSI uses the
Service Console for authentication; therefore, it must have Layer 3 or
Layer 2 connectivity to the iSCSI VLAN or network.


Lost Connectivity on Production Only

Consider Figure 63
and imagine that the NIC on ESX1 is not disconnected, but that for some
reason the path on the production network is not available (most likely
due to a misconfiguration). The HA cluster is not going to shutdown the
VM on ESX1 and restart it on ESX2, because ESX1 and ESX2 can still
communicate on the Service Console network. The result is that VMs on
ESX1 are isolated.

Similarly, assume that ESX1
has two vmnics, one used for the production network and one used for
the Service Console, VMkernel and iSCSI. If the first vmnic fails, this
is not going to cause VMs to be restarted on ESX2. The result is that
VMs on ESX1 are isolated.

In order to avoid this
scenario, the best solution is to not dedicate a single vmnic to
production traffic alone, but to use vmnics configured for NIC teaming.