Monday, July 25, 2011

VMware vSphere 5: The BIG feature list

VMware vSphere 5.0

  • ESXi Convergence – No more ESX, only ESXi (they said they would do it, they meant it)
  • New VM Hardware: Version 8 – new hardware support (VS5 still supports VM Hardware 4 & 7 as well, in case you still need to migrate VMs back to older hosts)
    • 3D graphics Support for Windows Aero
    • Support for USB 3.0 devices
  • Platform Enhancements (several of the items below require Hardware v8)
    • 32 vCPUs per VM
    • 1TB of RAM per VM
    • 3D Graphics Support
    • Client-connected USB devices
    • USB 3.0 Devices
    • Smart-card Readers for VM Console Access
    • EFI BIOS
    • UI for Multi-core vCPUs
    • VM BIOS boot order config API and PowerCLI interface (see the PowerCLI sketch after this list)
  • vSphere Auto Deploy – mechanism for having hosts deploy quickly when needed
  • Support for Apple Products – Support for running OS X 10.6 Server (Snow Leopard) on Apple Xserve hardware. (Although I'm betting that, technically, you can get it to run on any hardware; you just won't be compliant with your license.)
  • Storage DRS – Just like DRS does for CPU and Memory, now for storage
    • Initial Placement – Places new VMs on the storage with the most space and least latency
    • Load Balancing – migrates VMs if the storage cluster (group of datastores) gets too full or the latency goes too high
    • Datastore Maintenance Mode – allows you to evacuate VMs from a datastore to work on it (does not support Templates or non-registered VMs yet…)
    • Affinity & Anti-Affinity – Allows you to make sure a group of VMs do not end up on the same datastore (for performance or Business Continuity reasons) or VMs that should always be on the same datastore. Can be at the VM or down to the individual VMDK level.
    • Support for scheduled disabling of Storage DRS – perhaps during backups for instance.
  • Profile-Driven Storage – Creating pools of storage in tiers and selecting the correct tier for a given VM. vSphere will make sure the VM stays on the correct tier (pool) of storage. (Not a fan of this just yet. What if just 1GB of the VM needs high-tier storage? This makes you put the whole VM there.)
  • vSphere File System – VMFS5 is now available. (Yes, this is a non-disruptive upgrade; however, I would still create a new datastore and Storage vMotion the VMs over.)
    • Support for a single extent datastore up to 64TB
    • Support for >2TB Physical Raw Disk Mappings
    • Better VAAI (vStorage APIs for Array Integration) locking, now used for more tasks
    • Space reclamation on thin provisioned LUNs
    • Unified block size (1MB) (no more choosing between 1,2,4 or 8)
    • Sub-blocks for space efficiency (8KB vs. 64KB in VS4)
  • VAAI now a T10 standard – All 3 primitives (Write Same, ATS and Full Copy) are now T10 standard compliant.
    • Also now added support for VAAI NAS primitives, including Full File Clone (to have the NAS copy the VMDK files for vSphere) and Reserve Space (to have the NAS create thick VMDK files on NAS storage)
  • VAAI Thin Provisioning – The storage does the thin provisioning, and vSphere tells the storage which blocks can be reclaimed to shrink the space used on the array
  • Storage vMotion Enhancements
    • Now supports storage vMotion with VMs that have snapshots
    • Now supports moving linked clones
    • Now supports Storage DRS (mentioned above)
    • Now uses mirroring to migrate vs. the changed block tracking in VS4. Results in faster migration times and greater migration success.
  • Storage IO Control for NAS – allows you to throttle the storage performance of “badly-behaving” VMs and prevents them from stealing storage bandwidth from high-priority VMs. (Support for iSCSI and FC was added in VS4.)
  • Support for VASA (vStorage APIs for Storage Awareness) – Allows storage to integrate more tightly with vCenter for management. Provides a mechanism for storage arrays to report their capabilities, topology, and current state. Also helps Storage DRS make more educated decisions when moving VMs.
  • Support for Software FCoE Adapters – Requires a compatible NIC and allows you to run FCoE over that NIC without the need for a CNA Adapter.
  • vMotion Enhancements
    • Support for multiple NICs. Up to 4 x 10GbE or 16 x 1GbE NICs
    • A single vMotion can span multiple NICs (this is huge for 1GbE shops; see the config sketch after this list)
    • Allows for higher number of concurrent vMotions
    • SDPS support (Stun During Page Send) – throttles busy VMs to reduce timeouts and improve success.
    • Ensures less than 1 second switchover in almost all cases
    • Support for higher latency networks (up to ~10ms)
    • Improved error reporting – better, more detailed logging (thank you, VMware!)
    • Improved Resource Pool Integration – now puts VMs in the proper resource pool
  • Distributed Resource Scheduling/Dynamic Power Management Enhancements
    • Support for “Agent VMs” – These are VMs that work per host (currently mostly VMware services: vShield Edge, App, Endpoint, etc.). DRS will not migrate these VMs.
    • “Agents” do not need to be migrated for maintenance mode
  • Resource pool enhancements – now more consistent between clustered and non-clustered hosts. You can no longer modify resource pool settings on the host itself when it is managed by vCenter, although changes are allowed if the host gets disconnected from vCenter.
  • Support for LLDP Network Protocol – Standards based vendor-neutral discovery protocol
  • Support for NetFlow – Allows collection of IP traffic information to send to collectors (CA, NetScout, etc.) for bandwidth statistics, irregularities, etc. Provides complete visibility into traffic between VMs or from a VM to the outside.
  • Network I/O Control (NETIOC) – allows creation of network resource pools, QoS Tagging, Shares and Limits to traffic types, Guaranteed Service Levels for certain traffic types
  • Support for QoS (802.1p) tagging – provides the ability to QoS-tag any traffic flowing out of the vSphere infrastructure.
  • Network Performance Improvements
    • Multiple VMs receiving multicast traffic from the same source will see improved throughput and CPU efficiency
    • VMkernel NICs will see higher throughput with small messages and better IOPS scaling for iSCSI traffic
  • Command Line Enhancements
    • Remote commands and local commands will now be the same (new esxcli commands are not backwards compatible)
    • Output from commands can now be formatted automatically (xml, CSV, etc)
  • ESXi 5.0 Firewall Enhancements
    • New engine not based on iptables
    • New engine is service-oriented and is a stateless firewall
    • Users can restrict specific services based on IP address and Subnet Mask
    • Firewall has host-profile support
  • Support for Image Builder – can now create customized ESXi CDs with the drivers and OEM add-ins that you need. (Like slip-streaming for Windows CDs) Can also be used for PXE installs.
  • Host Profiles Enhancements
    • Allows use of an answer file to complete the profile for an automated deployment
    • Greatly expands the config options, including iSCSI, FCoE, native multipathing, device claiming, kernel module settings, and more. (I don’t think Nexus is supported yet.)
  • Update Manager Enhancements
    • Can now patch multiple hosts in a cluster at a time. It will analyze how many hosts can be patched simultaneously and patch the cluster in groups instead of one host at a time. You can still do one at a time if you prefer.
    • VMware Tools upgrades can now be scheduled for the next VM reboot
    • Can now configure multiple download URLs and restrict downloads to only the specific versions of ESX you are running
    • More management capabilities: update certificates, change DB password, proxy authentication, reconfigure setup, etc.
  • High Availability Enhancements
    • No more Primary/Secondary concept, one host is elected master and all others are slaves
    • Can now use storage-level communications – hosts can use “heartbeat datastores” in the event that network communication is lost between the hosts.
    • HA protected state is now reported on a per-VM basis. Certain operations, such as power-on, no longer wait for confirmation of protection. The result is that VMs power on faster.
    • HA Logging has been consolidated into one log file
    • HA now pushes the HA agent to all hosts in a cluster at once instead of one at a time. Result: HA config time drops to ~1 minute total instead of ~1 minute per host in the cluster.
    • HA User Interface now shows who the Master is, VMs Protected and Un-protected, any configuration issues, datastore heartbeat configuration and better controls on failover hosts.
  • vCenter Web Interface – Admins can now use a robust web interface to control the infrastructure instead of the GUI client.
    • Includes VM management functions (provisioning, edit VM, power controls, snapshots, migrations)
    • Can view all objects (hosts, clusters, datastores, folders, etc.)
    • Basic Health Monitoring
    • View the VM Console
    • Search Capabilities
    • vApp Management functions (Provisioning, editing, power operations)
  • vCenter Server Appliance – Customers no longer need a Windows license to run vCenter. vCenter can come as a self-contained appliance (This has been a major request in the community for years)
    • 64-bit appliance running SLES 11
    • Distributed as a 3.6GB download; deployment takes from 5GB to 80GB of storage
    • Includes an embedded database good for 5 hosts or 50 VMs (same limits as SQL Express in VS4)
    • Support for Oracle as the full DB (Twitter said that DB2 was also supported, but I cannot confirm that in my materials)
    • Authentication thru AD and NIS
    • Web-based configuration
    • Supports the vSphere Web Client
    • It does not support: Linked Mode vCenters, IPv6, SQL, or vCenter heartbeat (HA is provided thru vSphere HA)
  • vCenter Heartbeat 6.4 Enhancements
    • Allows the active and standby nodes to be reachable at the same time, so both can be patched and managed
    • Now has a plug-in to the vSphere client to manage and monitor Heartbeat
    • Events will register in the vSphere Recent Tasks and Events
    • Alerts will register in the alarms and display in the client
    • Supports vCenter 5.0 and SQL 2008 R2
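
Since the VM BIOS boot order is now exposed through an API, here is a rough PowerCLI sketch of what setting it could look like. This is my own example worked up against the vSphere 5 API reference, not an official VMware sample, and the VM name is a placeholder. It reconfigures a VM to boot from its first disk and then the CD-ROM:

$vm = Get-VM -Name "MyVM" | Get-View

# Build a config spec with an explicit boot order: first virtual disk, then CD-ROM.
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions

$bootDisk = New-Object VMware.Vim.VirtualMachineBootOptionsBootableDiskDevice
$bootDisk.DeviceKey = ($vm.Config.Hardware.Device |
    Where-Object { $_ -is [VMware.Vim.VirtualDisk] } |
    Select-Object -First 1).Key
$bootCd = New-Object VMware.Vim.VirtualMachineBootOptionsBootableCdromDevice

$spec.BootOptions.BootOrder = @($bootDisk, $bootCd)
$vm.ReconfigVM($spec)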
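
And for the multi-NIC vMotion support, the setup is simply multiple vMotion-enabled VMkernel ports. A minimal PowerCLI sketch, assuming a host with a vSwitch0 that has two uplinks; the host name, port group names, and IPs are placeholders. For the full benefit, each port group should also be pinned to a different active uplink in its NIC teaming settings:

$esx = Get-VMHost "esx01.lab.local"

# Two vMotion-enabled VMkernel ports; vSphere 5 can use both for a single migration.
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch "vSwitch0" -PortGroup "vMotion-1" -IP "10.0.50.11" -SubnetMask "255.255.255.0" -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch "vSwitch0" -PortGroup "vMotion-2" -IP "10.0.51.11" -SubnetMask "255.255.255.0" -VMotionEnabled:$true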

That’s what I have on vSphere 5; next up are SRM 5, vShield 5, the vSphere Storage Appliance, and vCloud Director 1.5.

Thursday, May 19, 2011

ESXi Disk Full.

Symptom:

After restarting the host, I could make configuration changes for 3–5 minutes from when the machine started before update errors began. The errors indicated that the hard disk was full and the host was unable to write configuration changes to /etc/vmware/esx.conf.


My particular lab runs a non-standard (and unsupported) motherboard (a SuperMicro X8SIL) that seems to have some issues with VMware's CIM implementation. The Common Information Model is used to gather information from the actual hardware sensors. Since my board did not play nice with VMware, it flooded the log files, and within a couple of minutes of boot-up the disk would fill. A quick fix was to disable the Sensor Dashboard in advanced settings.

Workaround:


After restarting the host and BEFORE the disk filled, quickly change the Advanced Host value UserVars.CIMEnabled from 1 to 0. This corrected the issue immediately. Of course, I sacrificed the ability to view the board sensors from within Virtual Center.
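
Since you only get a few minutes before the logs fill the disk again, it can be easier to flip the setting from PowerCLI than to race through the GUI. A quick sketch of the same change, assuming PowerCLI is connected to the host or vCenter; the host name is a placeholder for my lab box:

# Hypothetical host name; substitute your own.
$esx = Get-VMHost "esx01.lab.local"

# Turn off CIM so the sensor polling stops flooding the logs.
Set-VMHostAdvancedConfiguration -VMHost $esx -Name "UserVars.CIMEnabled" -Value 0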


Monday, August 2, 2010

Krystaltek: DRS/Fault Tolerance Placement Restrictions

DRS/Fault Tolerance Placement Restrictions: "Today I came across something in doing some tests of the VMware Fault Tolerance feature in vSphere 4.1. I was attempting to migrate some of ..."

Thursday, March 11, 2010

PowerCLI: Reconfiguring NTP Servers on ESX Hosts

I’ve been creating PowerCLI scripts to help configure various aspects of our ESX environment. We’ve just implemented a new NTP server on our network, so I was given the job of updating all of our ESX hosts.

$Cluster = "XXX"
$Hosts = Get-Cluster $Cluster | Get-VMHost
# Note: $Host is a reserved automatic variable in PowerShell, so the loop uses $ESXHost instead.
ForEach ($ESXHost in $Hosts)
{
    # Fill in the old NTP server to remove and the new one to add.
    Remove-VMHostNtpServer -NtpServer "" -VMHost $ESXHost -Confirm:$false | Out-Null
    Add-VMHostNtpServer -NtpServer "" -VMHost $ESXHost | Out-Null
    # Restart the NTP daemon so the change takes effect.
    Get-VMHostService -VMHost $ESXHost | Where-Object {$_.Key -eq "ntpd"} | Restart-VMHostService -Confirm:$false | Out-Null
    Write-Host "NTP Server was changed on $ESXHost"
}
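
To confirm the change landed everywhere, here is a quick follow-up check that reuses the same $Cluster variable and lists each host's configured NTP servers:

Get-Cluster $Cluster | Get-VMHost | ForEach-Object {
    "{0}: {1}" -f $_.Name, ((Get-VMHostNtpServer -VMHost $_) -join ", ")
}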


Thursday, February 11, 2010

How to Install vSphere Converter

How to install vSphere Converter 4.0


1. Installing and Upgrading vSphere Converter from 3.0 to 4.0

VMware vCenter Converter quickly, easily and affordably converts Microsoft Windows and Linux physical machines and third party image formats to VMware virtual machines. It also converts virtual machines between VMware platforms.

2. VMware vCenter Converter is available in two different versions:

Standalone Converter

Converter integrated with VMware vCenter Server

Here the integrated VMware vCenter Converter Server is being upgraded. The process for the install and upgrade is the same, but for the upgrade there is an additional prompt to let you know it’s being upgraded.

Download VMware vCenter Server 4 from the VMware download area.

Insert the DVD (it will autorun) or run the exe from the extracted ZIP file.

Click "vCenter Converter.

Choose the language. Click OK.

If upgrading, confirm you want to continue by clicking Yes.

Click Next.

Read and accept the license. Click Next.

Enter the installation path. Click Next.

Choose the typical/custom installation. Click Next.

If the custom installation was chosen, notice that the converter agent is not installed, but the converter server and CLI are.

We do not need the converter agent installed on this server. Click Next.

Enter the vCenter Server details. Click Next.

Confirm or modify the ports to be used. Click Next.

Choose the vCenter address. Click Next.

Click Next to install.

Converter will now install; wait for it to complete.

Installation is complete. Click Finish.

Start the vSphere client and connect to the vCenter Server.

Go to the vCenter plugin manager.

Right click "VMware Converter Enterprise" plugin and install the client plugin.

The client plugin will install.

Confirm the plugin shows under "Installed Plug-ins".

Choose the "Import Machine..." option in the inventory to use Converter to P2V an existing server.


Use the Converter Import Wizard to convert the server to a virtual machine.