Tag Archives: vsphere

VSAN 6.0 – What’s New

Among other things, VMware released Virtual SAN 6.0 earlier this month, in conjunction with vSphere 6.0. Realistically, this is a “2.0” release, but I am guessing they are calling it “6.0” to conform with the vSphere releases. I think that is a bad idea because it can put undue pressure on the developers to keep up with the cadence of vSphere, vRealize and everything else that is taking on the “6.0” versioning. But the marketing people feel it makes compatibility questions easier to manage.
Let’s face it: whether it is called 2.0 or 6.0, it is still a “.0” release with plenty of new things to go awry.

Read more »

VMware PEX 2015: New Stuff With vSphere 6

With version 6.0 of VMware’s flagship product comes plenty of enhancements. According to VMware’s press release, there are more than 650 improvements, but I have not seen a master list yet. The maximums of vSphere are leapfrogging the maximums of Hyper-V. Unless you are planning on running SAP HANA in a virtualized environment, you could probably not give a crap about some of the scalability enhancements. They may be nice to have, but how often will you use them? Here are some of the improvements to vSphere 6.0:

Read more »

VMware PEX 2015: The Big Announcements

As expected, several announcements were made at VMware Partner Exchange 2015. The most anticipated announcements involved vSphere 6, VSAN 6 and EVO:RAIL. As an old dog, I’ve become fairly jaded in reaction to many of these announcements. However, there are some significant features in vSphere 6 and VSAN 6. There are also some interesting things surrounding EVO:RAIL.
The message from VMware to its 4000 attending Partners was “One Cloud, Any Application, Any Device.” Oh, and no more PEX in the future. The plan is to have all technical sessions available at VMworld, which is a horrible idea. EMC does this at EMC World, and all of the technical people end up shuffling back and forth between SE-focused sessions and customer-focused sessions.

Read more »

vCloud Trick – Joining a Domain and Specifying a Machine OU

NOTE: This is no longer required in vCD 5.1 & above!

This is one of those situations where I really start to hate computers! I was working with vCloud Director with the goal of having a winders VM run through guest customization, change its name, get a fixed IP from the network pool, join an Active Directory domain and move to a specific OU in the AD.

The Problem

There is a spot in the VM properties to specify a domain to join. You can use the settings specified in the organization or enter the domain information directly. Read more »

Big vCloud Director Security Gotchas That I Have Found

This post includes an important security “gotcha” that I recently uncovered with vCloud Director 1.5 running on vSphere 5. If you are using vCloud Director, you should check your settings.

The BIG Security Issue

Read more »

PAVMUG Session – Optimal Designs for vSphere 5 Licensing

Of the PAVMUG sessions on Sept 22nd, 2011, the one that seemed to have the second most active audience was the session where I discussed vSphere 5 licensing and some of the related design considerations. There were several good questions that I would like to re-address here, along with some helpful links that I promised during the session. There is also a great PowerCLI script and a tool that VMware themselves offer.

Read more »

PAVMUG Session – Virtualizing Business Critical Apps

For the September 2011 PAVMUG all-day meeting, I participated in four sessions. To me, the session with the most audience participation was the one about virtualizing business critical applications. My session dug deep into Microsoft Exchange but also covered some basics around SQL and Oracle. I wanted to expand on some of the ideas that were discussed during the session and post the presentation slides.

Read more »

Maybe VMware Needs a Quality Oversight Department…

I was doing some research for a session I am presenting at an upcoming PAVMUG meeting about vSphere remote management when I came across an apology from one of the PowerGUI guys. Essentially, he was apologizing for something that VMware changed in the functionality of PowerCLI that affects how the PowerGUI Virtualization Powerpack interacts with it.

Read more »

A Few Gotchas With vSphere 4.1! Updated

Since everyone else in the world is heralding the release of vSphere 4.1, I figured I would post some bad news: the stuff you may want to know BEFORE you jump into upgrading to vSphere 4.1. Before I start, I want to make it clear that vSphere 4.1 is a great product overall. And I have already been leaning toward ESXi, so the announcement that this will be the last release with the “traditional” ESX was expected. I will talk about ESXi and its improvements in a later post. I just want you to be aware of these rather significant Gotchas.

Gotcha #1 – Read Only Role allows members to add VMKernel NICs

From the release notes (You actually READ these, right?):

  • Newly added users with read-only role can add VMkernel NICs to ESX/ESXi hosts
    Newly added users with a read-only role cannot make changes to the ESX/ESXi host setup with the exception of adding VMkernel NICs, which is currently possible.

    Workaround: None. Do not rely on this behavior because read-only users will not be able to add VMkernel NICs in the future.

This is a fairly big security issue. I just LOVE the workaround notes. To be fair, I have found only one installation in my experience that uses the Read-Only role. In my opinion, if users don’t have access to the physical data center, they don’t need any access to vCenter. But this is just something that should have been corrected before release.
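If you are not sure whether anyone in your environment even holds that role, a quick PowerCLI audit will tell you. This is a minimal sketch, assuming you connect with Connect-VIServer first; the vCenter name is a placeholder:

```powershell
# List every permission entry granting the built-in ReadOnly role
Connect-VIServer -Server 'vcenter.lab.local'
Get-VIPermission | Where-Object { $_.Role -eq 'ReadOnly' } |
    Select-Object Principal, Entity, Propagate
```

If that comes back empty, nobody is in a position to exploit the VMkernel NIC loophole in the first place.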

Gotcha #2 – ESX/ESXi installations on HP systems require the HP NMI driver

  • ESX installations on HP systems require the HP NMI driver
    ESX 4.1 instances on HP systems require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems with ESX.

    CAUTION: Failure to install this driver might result in silent data corruption.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609.

It seems that every time HP releases a new set of SIM agents for ESX, something breaks. Is this VMware’s way of putting it on HP? Or was this an “OOPS”? If you search for “HP VMware NMI Driver,” you come up with nothing. No download. It was nowhere to be found on Monday, but I did find it today on the HP support site.
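Once you do track the bundle down, the install can at least be scripted. A hedged PowerCLI sketch; the host name and bundle path are assumptions, and vihostupdate or Update Manager would work just as well:

```powershell
# Put the host in maintenance mode and push the HP NMI offline bundle to it
$esx = Get-VMHost -Name 'esx01.lab.local'
Set-VMHost -VMHost $esx -State Maintenance
Install-VMHostPatch -VMHost $esx -LocalPath 'C:\bundles\hp-nmi-bundle.zip'
Set-VMHost -VMHost $esx -State Connected
```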

Gotcha #3 – VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1 managed environment

The basic issue here is that vCenter 4.1 only works on a 64-bit system, while View Composer only works on a 32-bit system. From the KB Article:

“VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1 managed environment as vSphere vCenter Server 4.1 requires a 64 bit operating system and VMware View Composer does not support 64 bit operating systems.
“VMware View 4.0.x customers who use View Composer should not upgrade to vSphere vCenter Server 4.1 at this time. Our upcoming VMware View 4.5 will be supported on VMware vSphere 4.1.”

Don’t these guys talk to each other? Didn’t they learn their lesson with the PCoIP issues? And why can’t they just admit it in the release notes instead of putting a link to the KB article? I completely missed this Monday morning.

Gotcha #4 – vCenter Installer SILENTLY Changes SQL Server Settings to Allow Named Pipes

  • vCenter Server installation or upgrade silently changes Microsoft SQL Server settings to enable named pipes
    When you install vCenter Server 4.1 or upgrade vCenter Server 4.0.x to vCenter Server 4.1 on a host that uses Microsoft SQL Server with a setting of “Using TCP/IP only,” the installer changes that setting to “Using TCP/IP and named pipes” and does not present a notification of the change.

    Workaround: The change in setting to “Using TCP/IP and named pipes” does not interfere with the correct operation of vCenter Server. However, you can use the following steps to restore the setting to the default of “Using TCP/IP only.”
  1. Select Start > Programs > Microsoft SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration.
  2. Select Surface Area Configuration for Services and Connections.
  3. Under the SQL Server instance you are using for vCenter Server, select Remote Connections.
  4. Change the option under Local and Remote Connections and click Apply.
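If you would rather script the fix than click through the Surface Area Configuration tool, the same change can be made with the SQL Server SMO WMI provider from PowerShell. A sketch under assumptions: the SQL host and the default instance name are placeholders, and the SQL Server service must be restarted for the change to take effect:

```powershell
# Disable the Named Pipes protocol on the default SQL Server instance
[void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SqlWmiManagement')
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer 'sqlhost'
$np  = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Np']
$np.IsEnabled = $false
$np.Alter()   # commit the change; restart the SQL Server service afterward
```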

Can you hear the DBAs pissing and moaning?

Gotcha #4a – SQL Database is changed to Bulk Recovery Model (updated 10/27)

This one is funny. I just found out about it on 10/27/2010. When it comes to SQL for the vCenter database, VMware recommends using the simple recovery model. So, with their attention to detail, the upgrade process changes the database to the bulk-logged recovery model. In this model, the transaction log keeps growing until a backup purges it. No good.

Transaction log for vCenter Server database grows large after upgrading to vCenter Server 4.1 – http://kb.vmware.com/kb/1026430
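The fix itself is a one-line ALTER DATABASE. A hedged sketch, assuming the common default database name VCDB and that the SQL Server PowerShell snap-in (Invoke-Sqlcmd) is available; plain T-SQL in Management Studio works just as well:

```powershell
# Put the vCenter database back into the simple recovery model
Invoke-Sqlcmd -ServerInstance 'sqlhost' -Query 'ALTER DATABASE [VCDB] SET RECOVERY SIMPLE;'
```

After that, a one-time shrink of the transaction log reclaims the space; the KB article above covers the supported steps.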

Conclusion

Again, vSphere 4.1 brings some great improvements and some welcome changes. As the product matures and more vendors work with the APIs, we will see some nice features that will help you in your journey to the private cloud. The Gotchas listed above might not have existed if quality assurance were tighter. I think I would rather hear that a release was delayed because of pending bug fixes. How long will we need to wait for these to be fixed? In any case, if the Read-Only role or the View Composer gotchas don’t apply to you, then jump right in and install or upgrade to vSphere 4.1. Just make sure you install the NMI driver and fix the SQL settings.

Update 2010-07-16

I got a tweet from William Lam last night. It looks like versions are hard-coded in Capacity-IQ, making it incompatible with vSphere 4.1. William also explains two ways to make it work.

vShield Zones – Some Serious Gotchas

OK, I’ll admit it: I am spoiled by the capabilities of vSphere. What other platform lets you schedule system updates that will occur unattended and without outages of the applications being used? I don’t mean the winders patches; they require a monthly reboot. I am talking about the hypervisor updates. VMware Update Manager coordinates all of this for you. Then along comes vShield Zones to break it all.

First, let me explain what I am trying to do. To simplify things, vShield Zones is a firewall for vSphere virtual machines. Rather than regurgitate how it works, take a look at Rodney’s excellent post. A customer has decided to use vShield Zones to help with PCI compliance. The desire is that only certain VMs will be allowed to communicate with certain other VMs using specific network ports, and that the traffic be audited. ’nuff said.

vShield Zones seems to be the perfect solution for this. It works almost seamlessly with vCenter and the underlying ESXi hosts. It provides hardened Linux virtual appliances (vShield Agents) to do the firewalling. It provides a fairly nice management interface to create the firewall rules and distribute them to the vShield Agents. Best of all, IT’S FREE! At least for vSphere Advanced versions and above. Keep in mind that this is still considered a 1.x release and some things need to be worked out.

Now, on to the gotchas.

Gotcha #1 – Networking

When it comes to networking, the vShield Agent is designed to sit between a vSwitch that is externally connected via physical NICs (pNICs) and a vSwitch that is isolated from the outside world. The vShield Agent installation wizard will prompt you to select a vSwitch to protect. This is illustrated below. The red line indicates network traffic flow.

[Diagram: vShield Agent placed between the externally connected vSwitch and the protected, isolated vSwitch; the red line shows the traffic flow]

This works like a champ in this configuration: a vSwitch for management (which is naturally on an isolated network to begin with), a vSwitch for VMs to connect to the vShield Agent, and a vSwitch to connect everything to the outside world. This can also be deployed with limited downtime. If you are lucky enough to have the Enterprise Plus version, you may want to use a vNetwork Distributed Switch or even a Cisco 1000v. You will need to make some manual configurations to make this work, as outlined in the admin guide.

The gotcha is with blade servers or “pizza box” servers that have limited I/O slots. If all of the VM traffic must flow through the same physical NICs and you use a vSwitch, then you need the vShield Agent to protect a port group rather than an entire vSwitch. You will need to create a vSwitch with a protected port group and connect it to the pNICs. Then you can install the vShield Agent. Once the vShield Agent is installed, you will need to go back to the vSwitch attached to the pNICs and add an unprotected port group. This is illustrated below. The red line is the protected traffic and the blue line is the unprotected traffic.

[Diagram: protected port group and unprotected “ORIGINAL Network” port group sharing the pNIC-attached vSwitch; the red line is protected traffic, the blue line is unprotected traffic]

As you can see, there is an unprotected port group (ORIGINAL Network). This needs to be added to the vSwitch AFTER the vShield Agent is installed. If the ORIGINAL Network is already a part of the vSwitch, it will need to be removed BEFORE installing the vShield Agent. In order to avoid an outage, you will need to disable DRS and manually vMotion all VMs off of the ESX/ESXi host before installing the vShield Agent and modifying the port groups.
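Here is a minimal PowerCLI sketch of that sequence. Every name (cluster, hosts, vSwitch, port group) is an assumption taken from the diagrams, and the vShield Agent installation itself still happens through its own installer between the two halves:

```powershell
# Before the agent install: keep DRS from moving VMs back, then evacuate the host
Get-Cluster -Name 'Prod' | Set-Cluster -DrsAutomationLevel Manual -Confirm:$false
$esx = Get-VMHost -Name 'esx01.lab.local'
Get-VM -Location $esx | Move-VM -Destination (Get-VMHost -Name 'esx02.lab.local')

# ... install the vShield Agent against the protected vSwitch here ...

# After the agent install: re-add the unprotected port group to the pNIC-attached vSwitch
$vsw = Get-VirtualSwitch -VMHost $esx -Name 'vSwitch1'
New-VirtualPortGroup -VirtualSwitch $vsw -Name 'ORIGINAL Network'
```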

Gotcha #2 – DRS/HA Settings

The vShield Agents attach to isolated vSwitches with no pNIC connection. As you should already know, using DRS and vMotion with an isolated vSwitch could cause inter-connectivity between VMs to fail. By default, you cannot vMotion a VM that is attached to an isolated vSwitch. You will need to enable this by editing the vpxd.cfg file. You will also need to disable HA and DRS for the vShield Agents so they stay on the hosts where they are installed. Both are well documented. Obviously, you will need to install a vShield Agent on every ESX/ESXi host in the cluster.
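Pinning the agents in place is scriptable, too. A sketch that assumes the appliances share a “vShield” naming prefix:

```powershell
# Disable DRS migration and HA restarts for every vShield Agent VM
Get-VM -Name 'vShield*' |
    Set-VM -DrsAutomationLevel Disabled -HARestartPriority Disabled -Confirm:$false
```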

The gotcha here is that, with HA disabled for the vShield Agent, there is no facility for automatic startup. There is an automatic startup setting in the Startup/Shutdown section of the host configuration settings. First, this is an all-or-nothing setting. Second, according to the Availability Guide:

“NOTE The Virtual Machine Startup and Shutdown (automatic startup) feature is disabled for all virtual machines residing on hosts that are in (or moved into) a VMware HA cluster. VMware recommends that you do not manually re-enable this setting for any of the virtual machines. Doing so could interfere with the actions of cluster features such as VMware HA or Fault Tolerance.”

So, if a host fails, HA will restart all protected VMs on different hosts. If the host comes back online, you risk having DRS migrate protected VMs back to that host. This will cause those VMs to become disconnected, because the vShield Agent will not automatically start. If a host fails, hope that it fails hard enough that it won’t restart.

Gotcha #3 – Maintenance Mode

At the beginning of this post, I mentioned how VMware Update Manager has spoiled me. VUM can be scheduled to patch VMs and hosts. When host patching is scheduled, VUM will place one host in Maintenance Mode, which will evacuate all VMs. Then it will apply whatever patches are scheduled, reboot, and exit Maintenance Mode. It will repeat this for each host in a cluster. This works great unless there are running VMs that have DRS disabled, like the vShield Agent.

In the test environment, when a host was manually set to enter Maintenance Mode, it would stall at 2% without moving the test VMs. I am not sure of the order in which VMs are migrated off, but none were migrated in the test environment. This could vary in different installations. Here’s the gotcha: you cannot power the vShield Agent off, because the protected VMs would become disconnected. You cannot migrate it to a different host, because that would cause a serious conflict and also cause protected VMs to become disconnected. The only thing you can do is place the host in Maintenance Mode, then MANUALLY (*GASP*) migrate all of the protected VMs and then power the vShield Agent off. So much for automated patch management. We’re back to the “oughts.”
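If it has to be manual anyway, PowerCLI at least takes the clicking out of it. The same naming assumptions as before apply, and Shutdown-VMGuest assumes the appliance has VMware Tools running (Stop-VM is the blunt alternative):

```powershell
# Evacuate the protected VMs by hand, then stop the agent so maintenance mode can finish
$esx = Get-VMHost -Name 'esx01.lab.local'
Set-VMHost -VMHost $esx -State Maintenance -RunAsync   # this is the part that stalls at 2%
Get-VM -Location $esx |
    Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Name -notlike 'vShield*' } |
    Move-VM -Destination (Get-VMHost -Name 'esx02.lab.local')
Get-VM -Location $esx -Name 'vShield*' | Shutdown-VMGuest -Confirm:$false
```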

Conclusion

I said already that vShield Zones is a 1.x product. It’s a great firewall, but it has a few gotchas that you need to consider. The benefits may outweigh the negatives. But vSphere is a 4.0 product. Some of this should be addressable by tweaking vCenter or host settings.

vShield Zones should be smart enough to allow us to select specific port groups to protect rather than an entire vSwitch. I guess whatever scripting is being done in the background will need to be changed for this. Maybe we need a Ghetto vShield?

One of the REALLY smart people at VMware should be able to tell us the “order of migration” when a host is placed in Maintenance Mode. Once that is determined, there is probably a configuration file somewhere that we could tweak to change it.

There should be a way to set up automatic startup and shutdown of individual VMs. The Startup/Shutdown settings were sort of deprecated once DRS was introduced; the only time they are useful is with a stand-alone server or in a non-DRS cluster. I guess the only thing that could be done is to add a script somewhere in rc.d or rc.local to start up these VMs, but how can that be done in a “supported” fashion with ESXi, and is it supported in either ESX or ESXi?
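For what it is worth, the per-VM entries in that Startup/Shutdown list can be set from PowerCLI, though the Availability Guide note quoted above still applies in an HA cluster. A sketch, with the agent name again assumed:

```powershell
# Flag just the vShield Agent for automatic startup, first in line
Get-VM -Name 'vShield-esx01' | Get-VMStartPolicy |
    Set-VMStartPolicy -StartAction PowerOn -StartOrder 1
```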

I brought these issues up with some VMware engineers, and they assure me that they are working on them. Hopefully they will figure it out soon. I hate doing things manually. It seems so anti-cloud.

Creating an Automated ESXi Installer

Back in the summer, I saw Stu’s post about automating the installation of ESXi. I was reminded again by Duncan’s post. Then I found myself in a situation where a customer bought 160 blades for VMware ESXi. With this many systems, it would be almost impossible to do this manually without mistakes. I took the ideas from Stu and Duncan and created an ESXi automated installer that works from a PXE deployment server, like the Ultimate Deployment Appliance. I took it a step further and added the ability to use a USB stick or a CD for those times when PXE is not allowed. The document below is the result.

This is a little different from the idea of a stateless ESXi server, where the hypervisor actually boots from PXE. Here, it is the installer that boots from PXE so that the hypervisor can be installed on local disk, an internal USB stick or an SD card. You could also use it for a “boot from SAN” situation, but extreme care should be taken so you don’t accidentally format a VMFS disk.

As always, if anyone has comments, corrections, etc., please feel free to post a comment below.

The document can be found here -> Creating an Automated ESXi Installer

Summary

The ability to use an automated, unattended installation routine for a hypervisor is necessary whenever it is deployed to multiple systems or deployed frequently. Automated installations help avoid misconfigurations caused by human error, which become common when repetitive tasks are performed. Because the “traditional” version of VMware ESX Server contains a Red Hat Linux based console operating system, we have been able to leverage kickstart scripts for automated installation. With the ESXi hypervisor, much of this functionality is not available because of the smaller footprint.

This document explains how to set up ESXi with little intervention. The modifications explained here can be used to deploy ESXi using a PXE server. In our examples, we will use the Ultimate Deployment Appliance, but these methods will also transfer to commercial packages such as HP Rapid Deployment Pack, Altiris, or even a home-grown PXE server. The modifications can also be used for deploying ESXi using a USB stick or a customized CD.

Requirements

  • ESXi Server Installable: The ESXi CD image can be downloaded from the VMware site; however, using a systems management and monitoring server, such as HP SIM or Dell OpenManage, is highly recommended. Since there are usually vendor-specific CIM providers to enhance the monitoring capabilities, some vendors will provide a customized CD image with the CIM providers. These additional CIM providers will also allow for more information to be displayed in the hardware sections of the vSphere Client. A search for “ESXi” on the HP and Dell sites produced links to the latest customized images.
  • Deployment Server: A deployment server will allow for a controlled, automated installation of the ESXi Server software. The ability to handle multiple operating system installations is also desired, and the ability to provide PXE and DHCP services is required. Most times, the deployment server will be running PXE services and TFTP. The DHCP services may be running on a different server in an enterprise; this document does not explain how to set up a separate DHCP server. For this document, we will be using the Ultimate Deployment Appliance (UDA) version 2.0 (beta).
  • Virtualization Software: The UDA runs as a “Virtual Appliance,” which is a pre-configured virtual machine. It will run under VMware ESXi (available as a free or licensed instance), VMware Workstation (available for purchase), VMware Player (free) or VMware Server (free). In this document, VMware Workstation is used.
  • Optional Software: Although no additional software is required when using the UDA, you will need additional software if you plan on using a USB stick or creating a customized CD image:
    • VMware Converter: If you plan on using ESXi or Server to host the UDA, VMware Converter can be used to import the virtual appliance.
    • Syslinux: In order to make a bootable USB stick, you will need the syslinux utility. This utility is available for Linux and Windows. The UDA does not include it. As an alternative, you can use the unetbootin utility.
    • CD Imaging and Burning: In order to create a bootable CD image, you will need software to create the CD image (mkisofs) and software to burn the image to the CD media (cdrecord). The cdrtools project includes versions for Linux and Windows. Most Debian-based versions of Linux, such as Ubuntu, come with cdrkit, which uses genisoimage for imaging and wodim for burning.
    • Linux Desktop: If you look at the contents of the ESXi CDs using Windows (Windows 7 was used), you may see all of the files listed in capital letters. Since the ESXi software is based on Linux, all file operations are case sensitive and expect the files to be all lower case. This may cause errors when attempting to create the automated installer. For this reason, a Linux desktop is recommended. For most of the operations, the UDA may be used; the only missing software on the UDA is syslinux. For a feature-rich Linux desktop, Ubuntu is recommended. A few pre-configured Ubuntu Desktop virtual appliances are also available.

Conclusion

Once you have a hypervisor installed, you will need to configure the server and add it to vCenter in an automated fashion. Look for a future doc covering this. For now, check out these resources for post-install configurations:

http://communities.vmware.com/docs/DOC-7364

http://communities.vmware.com/docs/DOC-7511

http://communities.vmware.com/docs/DOC-8170

Is Your Blade Ready for Virtualization? Part 2 – Real Numbers

OK, so my last post brought on a blizzard of remarks questioning the validity of the data presented. I used what I was told during a presentation was a “Gartner recommended” configuration for a VM. My error was that I could not find this recommendation anywhere, but the sizing seemed fairly valid, so I went with it. I went back to some of the assessments I have done and took data from about 2,000 servers to come up with some more real-world averages, which I wanted to post tonight. Remember what I said previously: this is just a set of numbers. You must ASSESS and DESIGN your virtual infrastructure properly. This is only a small piece of it.

I apologize for the images instead of tables, but I spent way too long trying to get tables to lay out properly in WordPress. Click on the images for larger views. I can post the raw data if someone wants to look at it, but I have to work on stripping away proprietary data first. So, here we go:

Data Summary

If you have ever done a Virtualization Assessment, you will recognize this from the summary page of the workbook. We are going to look at data from 1956 servers. Average RAM usage is about 2069MB. Average CPU utilization is about 5.2%. Average network throughput is about 31KB/s.

Performance Summary

This chart is from the same page in the workbook. From it, we see that the average ALLOCATED RAM is about 4342MB and the average FREE RAM is about 2273MB; the difference is where the average RAM usage above comes from.

Raw Data Averages

These are the averages calculated for each row in the raw data summary.

Storage Summary Report

This final chart is from a storage summary report. Average disk read bytes per sec (442,000) plus average write bytes per sec (200,000) is about 600,000 bytes per sec. So, total I/O is about 632,000 bytes per sec (600,000 storage + 32,000 network). I used Google to convert this to gigabits: 632 000 bytes = 0.00470876694 gigabits. This is WAY less than the 0.3Gb recommended. So, here is my calculated AVERAGE VM sizing (a quick scripted check follows the list):

  • RAM = 2GB
  • I/O = 0.005Gb
  • Network I/O = 0.0002 Gb
  • Storage I/O = 0.004 Gb
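As a sanity check on that Google conversion, here is the same math in PowerShell, using binary gigabits as Google’s converter does:

```powershell
$storageBps = 600000   # average storage bytes/sec from the chart
$networkBps = 32000    # average network bytes/sec
$gigabits   = ($storageBps + $networkBps) * 8 / [math]::Pow(2, 30)
'{0:N4} Gb/s' -f $gigabits   # 0.0047 Gb/s, versus the 0.3Gb/s per-VM rule of thumb
```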

I am not going to claim that this is my recommendation for a VM configuration, because it isn’t. My recommendation is still, and will always be, to ASSESS YOUR UNIQUE ENVIRONMENT and come up with your own data. I am not going to redo my previous post with these numbers because it would be pointless. The intent of the previous post was to come up with a number of VMs in a chassis or rack based on a set of criteria. I also wanted to show a comparison of the capabilities of each blade. If I used the numbers from this post, it would only show that each blade in question is capable of hosting even more VMs.

Is Your Blade Ready for Virtualization? A Math Lesson.

I attended the second day of the HP Converged Infrastructure Roadshow in NYC last week. Most of the day was spent watching PowerPoints and demos for the HP Matrix stuff and Virtual Connect. Then came lunch. I finished my appetizer and realized that the buffet being set up was for someone else. My appetizer was actually lunch! Thank God there was cheesecake on the way…

There was a session on unified storage, which mostly covered the LeftHand line. At one point, I asked if the data de-dupe was source based or destination based. The “engineer” looked like a deer in the headlights and promptly answered, “It’s hash based.” ‘Nuff said… The session covering the G6 servers was OK, but “been there, done that.”

Other than the cheesecake, the best part of the day was the final presentation, which covered the differences between the various blade servers from several manufacturers. Even though I work for a company that sells HP, EMC and Cisco gear, I believe that x64 servers, from a hardware perspective, are really generic for the most part. Many will argue why their choice is the best, but most people choose a brand based on relationships with their supplier, the manufacturer or the dreaded “preferred vendor” status. Obviously, this was an HP-biased presentation, but some of the math the BladeSystem engineer (I forgot to get his name) presented really makes you think.

Let’s start with a typical configuration for VMs. He mentioned that this was a “Gartner recommended” configuration for VMs, but I could not find anything about this anywhere online. Even so, it’s a pretty fair portrayal of a typical VM.

Typical Virtual Machine Configuration:

  • 3-4 GB Memory
  • 300 Mbps I/O
    • 100 Mbps Ethernet (0.1Gb)
    • 200 Mbps Storage (0.2Gb)

Processor count was not discussed, but you will see that it may not be a big deal, since most processors are overpowered for today’s applications (I said MOST). IOPS are not a factor in these comparisons either; that would be a function of the storage system.

So, let’s take a look at the typical server configuration. In this article, we are comparing blade servers, but this is typical even for a “2U” rack server. He called this an “eightieth percentile” server, meaning it will meet 80% of the requirements for a server.

Typical Server Configuration:

  • 2 Sockets
    • 4-6 cores per socket
  • 12 DIMM slots
  • 2 Hot-plug Drives
  • 2 LAN on Motherboard (LOM) ports
  • 2 Mezzanine Slots (Or PCI-e slots)

Now, say we take this typical server and load it with 4GB or 8GB DIMMs. This is not a real stretch of the imagination. With 12 slots, 4GB DIMMs give us 48GB of RAM. Now it’s time for some math:

Calculations for a server with 4GB DIMMs:

  • 48GB Total RAM ÷ 3GB Memory per VM = 16 VMs
  • 16 VMs ÷ 8 cores = 2 VMs per core
  • 16 VMs * 0.3Gb per VM = 4.8 Gb I/O needed (x2 for redundancy)
  • 16 VMs * 0.1Gb per VM = 1.6Gb Ethernet needed (x2 for redundancy)
  • 16 VMs * 0.2Gb per VM = 3.2Gb Storage needed (x2 for redundancy)

Calculations for a server with 8GB DIMMs:

  • 96GB Total RAM ÷ 3GB Memory per VM = 32 VMs
  • 32 VMs ÷ 8 cores = 4 VMs per core
  • 32 VMs * 0.3Gb per VM = 9.6Gb I/O needed (x2 for redundancy)
  • 32 VMs * 0.1Gb per VM = 3.2Gb Ethernet needed (x2 for redundancy)
  • 32 VMs * 0.2Gb per VM = 6.4Gb Storage needed (x2 for redundancy)

Are you with me so far? I see nothing wrong with any of these yet.

Now, we need to look at the different attributes of the blades:

[Table image: blade attribute comparison (DIMM slots, LOMs, mezzanine slots) across the Cisco, Dell, IBM and HP models discussed below]

* The IBM LS42 and HP BL490c each have 2 internal non-hot-plug drive slots

The “dings” against each:

  • Cisco B200M1 has no LOM and only 1 mezzanine slot
  • Cisco B250M1 has no LOM
  • Cisco chassis only has one pair of I/O modules
  • Cisco chassis only has four power supplies – may cause issues using 3-phase power
  • Dell M710 and M905 have only 1GbE LOMs (Allegedly, the chassis midplane connecting the LOMs cannot support 10GbE because they lack a “back drill.”)
  • IBM LS42 has only 1GbE LOMs
  • IBM chassis only has four power supplies – may cause issues using 3-phase power

Now, from here, the engineer made comparisons based on loading each blade with 4GB or 8GB DIMMs. Basically, some of the blades would not support a full complement of VMs based on a full load of DIMMs. What does this mean? Don’t rush out and buy blades loaded with DIMMs, or your memory utilization could be lower than expected. What it really means is that you need to ASSESS your needs and DESIGN an infrastructure based on those needs.

What I will do is give you a maximum number of VMs per blade and per chassis. It seems to me that it would make more sense to consider this in the design stage so that you can come up with some TCO numbers based on vendors. So, we will take a look at the maximum number of VMs for each blade based on total RAM capability and total I/O capability; the lower number becomes the total possible VMs per blade based on overall configuration. What I did here to simplify things was take the total possible RAM, subtract 6GB for the hypervisor and overhead, then divide by 3 to come up with the number of 3GB VMs I could host. I also took the size specs for each chassis and calculated the maximum possible chassis per rack, and then calculated the number of VMs per rack. The number of chassis per rack does not account for top-of-rack switches; if these are needed, you may lose one chassis per rack, though most of the systems will allow for an end-of-row or core switching configuration.

Blade Calculations

One thing to remember: this is a quick calculation. It estimates the amount of RAM required for overhead and the hypervisor to be 6GB. It is by no means based on any calculations coming from a real assessment. The reason the Cisco B250M1 blade is capped at 66 VMs is the amount of I/O it is capable of supporting: 20Gb redundant I/O ÷ 0.3Gb I/O per VM = 66 VMs. (The full table of calculations appears at the end of this post.)
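That lower-number-wins logic is easy to capture in a few lines of PowerShell if you want to plug in your own blades. This is a sketch of the quick method described above, not a sizing tool; the 6GB overhead and the 3GB/0.3Gb per-VM figures are the stated assumptions:

```powershell
function Get-MaxVMsPerBlade {
    param(
        [int]$TotalRamGB,          # blade RAM at a given DIMM size
        [double]$RedundantIOGb,    # total redundant I/O bandwidth in Gb
        [double]$VmRamGB = 3,      # RAM per VM
        [double]$VmIOGb  = 0.3,    # I/O per VM
        [int]$OverheadGB = 6       # hypervisor plus overhead
    )
    $byRam = [math]::Floor(($TotalRamGB - $OverheadGB) / $VmRamGB)
    $byIO  = [math]::Floor($RedundantIOGb / $VmIOGb)
    [math]::Min($byRam, $byIO)     # the lower number wins
}

Get-MaxVMsPerBlade -TotalRamGB 384 -RedundantIOGb 20   # B250M1 with 8GB DIMMs: 66, I/O-bound
```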

I set out on this journey with the purpose of taking the ideas from an HP engineer and attempting, as best I could, to be fair in my version of this presentation. I did not even know what the outcome would be, but I am pleased to find that HP blades offer the highest VMs-per-rack numbers.

The final part of the HP presentation dealt with cooling and power comparisons. One thing that I was surprised to hear, but have not confirmed, is that the Cisco blades want to draw more air (in CFM) than one perforated tile will allow. I will not even get into the “CFM per VM” or “Watt per VM” numbers, but they also favored HP blades.

Please, by all means challenge my numbers. But back them up with numbers yourself.

                                 Cisco    Cisco    Dell   Dell   IBM    HP      HP      HP
                                 B200M1   B250M1   M710   M905   LS42   BL460c  BL490c  BL685c
Max RAM, 4GB DIMMs (GB)          48       192      72     96     64     48      72      128
  Total VMs possible             16       64       24     32     21     16      24      42
Max RAM, 8GB DIMMs (GB)          96       384      144    192    128    96      144     256
  Total VMs possible             32       128      48     64     42     32      48      85
Max total redundant I/O (Gb)     10       20       22     22     22     30      30      60
  Total VMs possible             33       66       72     73     73     100     100     200
Max VMs per blade (4GB DIMMs)    16       64       24     32     21     16      24      42
Max VMs per chassis (4GB DIMMs)  128      256      192    256    147    256     384     336
Max VMs per blade (8GB DIMMs)    32       66       48     64     42     32      48      85
Max VMs per chassis (8GB DIMMs)  256      264      384    512    294    512     768     680

vSphere 4.0 Quick Start Guide Released

The vSphere 4.0 Quick Start Guide: Shortcuts down the path of Virtualization has finally arrived!

I received a pre-release edition of the book at VMworld 2009. This guide has a great selection of shortcuts, tips and best practices for setting up and maintaining vSphere 4. It would be an excellent addition to any VMware administrator’s bookshelf. The book’s size also makes it a great reference for consultants; it will easily fit into your backpack.

It was authored by a group of geniuses from the community.

Show these guys some love and pick up a copy to support their efforts.

ESX vs. ESXi Which is Better? Revisited.

For over a year now, I have started off telling customers in Plan and Design engagements that they would be using ESXi unless we uncovered a compelling reason NOT to. The “which do I use” argument is still going strong. Our blog post “ESX vs. ESXi which is better?” was posted in April and is still the most popular. It seems to be a struggle for many people to let go of the service console. VMware is trying to go in the direction of the thinner ESXi hypervisor and is working to provide alternatives to using the service console.

VMware has provided a comparison of ESX vs. ESXi for version 3.5 for a while. Well, last night VMware posted the same comparison for version 4. It’s a great reference.

What is Cloud Computing? I Don't Care!!

So, today I sat in a seminar hosted by VMware, EMC, Cisco and SunGard called “Take the Risk Out of Cloud Computing”. It was the same old mantra: create your internal cloud now in preparation for the coming of the external cloud. SunGard puts an availability twist on things: “Let us be your hosted cloud and/or your DR cloud.” The sessions seemed to be designed to inform someone who knows about virtualization but may not understand cloud computing. I was there to see what SunGard’s take on it was. In the cloud realm, they do two things and they do them well: hosting and DR. (I have to admit, I served a five-year sentence with SunGard…)

When Clair Roberts got up to speak, the first thing he did was read the official VMware definition of Cloud Computing. Then he gave his own definition: “I don’t care!” Later, I spoke with him and he admitted that he borrowed it from someone else at VMware, so I am going to borrow it from them, too.

Think about it. “I don’t care!” I don’t care where it is. I don’t care about the hardware. I don’t care how it got there. I don’t care how it is cooled. I don’t care how it is powered. I DO care that it is there when I need it and is reasonably responsive from anywhere at any time. That’s it. That’s what cloud computing should be. Plain and simple: “I don’t care!”

Later, David Freund from EMC gave another good analogy for how cloud computing should be. He compared it to Intermodal Freight Transport. You buy or rent a STANDARDIZED CONTAINER and put stuff in it. You don’t care how it gets to the destination, only that it gets there.

Today’s assignment is to put your stuff in the standardized container. That way we can put it somewhere later.

vSphere Install and Upgrade Best Practices KB Articles and Links

So, I use NewsGator to aggregate a BAZILLION feeds from several sources: blogs like this one, actual news feeds and a bunch of VMware feeds. The VMware feeds are from the VI:OPS and VMTN forums. The VMTN forums allow you to create a custom feed by selecting the RSS link at the bottom right of each page, or you can get a feed from a specific section of the forum by clicking the link on the bottom left of a list. One of the custom feed options is a feed of the new KB articles.

VMware has released quite a lot of new KB articles surrounding vSphere. They just released nice best-practice guidelines for installing or upgrading to ESX 4 and vCenter 4. They are short and to the point. There is also a nice article covering best practices for upgrading an ESX 3.x virtual machine to ESX 4.0. One thing I noticed, but never thought about, is this:

“Note: If you are using dynamic DNS, some Windows versions require ipconfig/reregister to be run.”

Eric Siebert over at vSphere-Land posted a nice set of “missing links” for everything vSphere. This is a nice, comprehensive set of links to everything you need for vSphere upgrades or installs. So, go check that out as well.