Category Archives: assessment

Is Your Blade Ready for Virtualization? Part 2 – Real Numbers

OK, so my last post brought on a blizzard of remarks questioning the validity of some of the data presented. I used what I was told during a presentation was a “Gartner recommended” configuration for a VM. My error was that I could not find this recommendation anywhere, but the sizing seemed fairly valid, so I went with it. I went back to some of the assessments I have done and took data from about 2,000 servers to come up with some more real-world averages, which I am posting tonight. Remember what I said previously: this is just a set of numbers. You must ASSESS and DESIGN your virtual infrastructure properly. This is only a small piece of it.

I apologize for the images instead of tables, but I spent way too long trying to get tables to lay out properly in WordPress. Click on the images for larger views. I can post the raw data if someone wants to look at it, but I have to work on stripping away proprietary data first.  So, here we go:

Data Summary

If you have ever done a Virtualization Assessment, you will recognize this from the summary page of the workbook. We are going to look at data from 1956 servers. Average RAM usage is about 2069MB. Average CPU utilization is about 5.2%. Average network is about 31KB/s.

Performance Summary

From the same page in the workbook. From this chart, we see that the average ALLOCATED RAM is about 4342MB and the average FREE RAM is about 2273MB. This is where we get the average RAM usage from above.

Raw Data Averages

This is the averages calculated for each row in the raw data summary.

Storage Summary Report

This final chart is from a storage summary report. Average disk read bytes per second (442,000) plus average write bytes per second (200,000) comes to roughly 600,000 bytes. Adding the network traffic gives a total I/O of about 632,000 bytes per second (600,000 storage + 32,000 network). I used Google to convert this to gigabits: 632,000 bytes = 0.00470876694 gigabits. This is WAY less than the 0.3Gb recommended. So, here is my calculated AVERAGE VM sizing:

  • RAM = 2GB
  • I/O = 0.005Gb
  • Network I/O = 0.0002 Gb
  • Storage I/O = 0.004 Gb
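The unit conversion above is easy to sanity-check in a few lines of Python. Note that Google's converter works in binary gigabits (2^30 bits); a decimal conversion (10^9 bits) gives a slightly different number, but either way it is roughly 0.005 Gb/s:

```python
# Reproduce the average-VM I/O conversion from the assessment data.
storage_bytes = 600_000   # avg read + write bytes/sec (rounded)
network_bytes = 32_000    # avg network bytes/sec
total_bytes = storage_bytes + network_bytes   # 632,000 bytes/sec

bits = total_bytes * 8
decimal_gb = bits / 1e9    # gigabits, decimal (10^9 bits)
binary_gb = bits / 2**30   # gigabits, binary (2^30 bits) -- what Google reports

print(f"{decimal_gb:.6f} Gb/s decimal, {binary_gb:.6f} Gb/s binary")
```

Both figures are two orders of magnitude below the 0.3Gb-per-VM rule of thumb.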

I am not going to claim that this is my recommendation for a VM configuration, because it isn’t. My recommendation is still and will always be to ASSESS YOUR UNIQUE ENVIRONMENT and come up with your own data. I am not going to redo my previous post with these numbers because it is pointless. The intent of the previous post was to come up with a number of VMs in a chassis or rack based on a set of criteria. I also wanted to show a comparison of capabilities of each blade. If I use the numbers from this post, it will only show that each blade in question is capable of hosting even more VMs.

Is Your Blade Ready for Virtualization? A Math Lesson.

I attended the second day of the HP Converged Infrastructure Roadshow in NYC last week. Most of the day was spent watching PowerPoints and demos for the HP Matrix stuff and Virtual Connect. Then came lunch. I finished my appetizer and realized that the buffet being set up was for someone else. My appetizer was actually lunch! Thank God there was cheesecake on the way…

There was a session on unified storage, which mostly covered the LeftHand line. At one point, I asked if the data de-dupe was source based or destination based. The “engineer” looked like a deer in the headlights and promptly answered “It’s hash based.” ‘Nuff said… The session covering the G6 servers was OK, but “been there done that.”

Other than the cheesecake, the best part of the day was the final presentation. The last session covered the differences in the various blade servers from several manufacturers. Even though I work for a company that sells HP, EMC and Cisco gear, I believe that x64 servers, from a hardware perspective, are really generic for the most part. Many will argue why their choice is the best, but most people choose a brand based on relationships with their supplier, the manufacturer or the dreaded “preferred vendor” status. Obviously, this was an HP-biased presentation, but some of the math the Bladesystem engineer (I forgot to get his name) presented really makes you think.

Let's start with a typical configuration for VMs. He mentioned that this was a “Gartner recommended” configuration for VMs, but I could not find anything about this anywhere online. Even so, it's a pretty fair portrayal of a typical VM.

Typical Virtual Machine Configuration:

  • 3-4 GB Memory
  • 300 Mbps I/O
    • 100 Mbps Ethernet (0.1Gb)
    • 200 Mbps Storage (0.2Gb)

Processor count was not discussed, but you will see that it may not be a big deal, since most processors are overpowered for today's applications (I said MOST). IOPS is not a factor in these comparisons either; that would be a function of the storage system.

So, let’s take a look at the typical server configuration. In this article, we are comparing blade servers. But this is even typical for a “2U” rack server. He called this an “eightieth percentile” server, meaning it will meet 80% of the requirements for a server.

Typical Server Configuration:

  • 2 Sockets
    • 4-6 cores per socket
  • 12 DIMM slots
  • 2 Hot-plug Drives
  • 2 Lan on Motherboard (LOM)
  • 2 Mezzanine Slots (Or PCI-e slots)

Now, say we take this typical server and fill its 12 DIMM slots with 4GB DIMMs, giving us 48GB of RAM (or 96GB with 8GB DIMMs). This is not a real stretch of the imagination. Now it's time for some math:

Calculations for a server with 4GB DIMMs:

  • 48GB Total RAM ÷ 3GB Memory per VM = 16 VMs
  • 16 VMs ÷ 8 cores = 2 VMs per core
  • 16 VMs * 0.3Gb per VM = 4.8 Gb I/O needed (x2 for redundancy)
  • 16 VMs * 0.1Gb per VM = 1.6Gb Ethernet needed (x2 for redundancy)
  • 16 VMs * 0.2Gb per VM = 3.2Gb Storage needed (x2 for redundancy)

Calculations for a server with 8GB DIMMs:

  • 96GB Total RAM ÷ 3GB Memory per VM = 32 VMs
  • 32 VMs ÷ 8 cores = 4 VMs per core
  • 32 VMs * 0.3Gb per VM = 9.6Gb I/O needed (x2 for redundancy)
  • 32 VMs * 0.1Gb per VM = 3.2Gb Ethernet needed (x2 for redundancy)
  • 32 VMs * 0.2Gb per VM = 6.4Gb Storage needed (x2 for redundancy)
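Both scenarios follow the same arithmetic, so they can be rolled into one small helper. This is just a sketch of the math above; the per-VM figures (3GB RAM, 0.3Gb total I/O split 0.1Gb Ethernet / 0.2Gb storage) and the 8-core count are the presentation's assumptions:

```python
def vm_capacity(total_ram_gb, cores=8, ram_per_vm_gb=3,
                io_per_vm_gb=0.3, eth_per_vm_gb=0.1, stor_per_vm_gb=0.2):
    """Back-of-the-envelope VM count and bandwidth needs for one host."""
    vms = total_ram_gb // ram_per_vm_gb          # RAM is the limiting factor here
    return {
        "vms": vms,
        "vms_per_core": vms / cores,
        "total_io_gb": vms * io_per_vm_gb,       # double each figure for redundancy
        "ethernet_gb": vms * eth_per_vm_gb,
        "storage_gb": vms * stor_per_vm_gb,
    }

print(vm_capacity(48))   # 12 x 4GB DIMMs
print(vm_capacity(96))   # 12 x 8GB DIMMs
```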

Are you with me so far? I see nothing wrong with any of these yet.

Now, we need to look at the different attributes of the blades:


* The IBM LS42 and HP BL490c each have 2 internal non-hot-plug drive slots

The “dings” against each:

  • Cisco B200M1 has no LOM and only 1 mezzanine slot
  • Cisco B250M1 has no LOM
  • Cisco chassis only has one pair of I/O modules
  • Cisco chassis only has four power supplies – may cause issues using 3-phase power
  • Dell M710 and M905 have only 1GbE LOMs (Allegedly, the chassis midplane connecting the LOMs cannot support 10GbE because they lack a “back drill.”)
  • IBM LS42 has only 1GbE LOMs
  • IBM chassis only has four power supplies – may cause issues using 3-phase power

Now, from here, the engineer made comparisons based on loading each blade with 4GB or 8GB DIMMs. Basically, some of the blades would not support a full complement of VMs based on a full load of DIMMs. What does this mean? Don't rush out and buy blades loaded with DIMMs, or your memory utilization could be lower than expected. What it really means is that you need to ASSESS your needs and DESIGN an infrastructure based on those needs.

What I will do is give you a maximum number of VMs per blade and per chassis. It seems to me that it makes more sense to consider this at the design stage so that you can come up with some TCO numbers for each vendor. We will look at the maximum number of VMs for each blade based on total RAM capability and total I/O capability; the lower number becomes the total possible VMs per blade for that configuration. To simplify things, I took the total possible RAM, subtracted 6GB for the hypervisor and overhead, then divided by 3 to come up with the number of 3GB VMs each blade could host. I also took the size specs for each chassis, calculated the maximum possible chassis per rack, and then calculated the number of VMs per rack. The chassis-per-rack figure does not account for top-of-rack switches; if those are needed, you may lose one chassis per rack, although most of these systems allow for an end-of-row or core switching configuration.

Blade Calculations

One thing to remember is that this is a quick calculation. It estimates the RAM required for the hypervisor and overhead at 6GB. It is by no means based on calculations coming from a real assessment. The Cisco B250M1 blade is capped at 66 VMs because of the amount of I/O it can support: 20Gb redundant I/O ÷ 0.3Gb per VM ≈ 66 VMs.
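That cap is simply the lesser of the RAM-limited and I/O-limited VM counts. A minimal sketch, using the B250M1's 384GB max RAM and 20Gb redundant I/O from the table below (the table's own VM counts divide RAM by 3GB without the 6GB overhead deduction, so that is what this sketch does too):

```python
import math

def max_vms(ram_gb, redundant_io_gb, ram_per_vm_gb=3, io_per_vm_gb=0.3):
    """A blade hosts the lesser of what its RAM and its I/O can carry."""
    ram_limited = ram_gb // ram_per_vm_gb
    io_limited = math.floor(redundant_io_gb / io_per_vm_gb)
    return min(ram_limited, io_limited)

print(max_vms(384, 20))  # Cisco B250M1 with 8GB DIMMs: capped at 66 by I/O
print(max_vms(96, 30))   # HP BL460c with 8GB DIMMs: capped at 32 by RAM
```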

I set out on this journey to take the ideas from an HP engineer and, as best I could, be fair in my version of this presentation. I did not even know what the outcome would be, but I am pleased to find that HP blades offer the highest VM-per-rack numbers.

The final part of the HP presentation dealt with cooling and power comparisons. One thing that I was surprised to hear, but have not confirmed, is that the Cisco blades want to draw more air (in CFM) than one perforated tile will allow. I will not even get into the “CFM per VM” or “Watt per VM” numbers, but they also favored HP blades.

Please, by all means challenge my numbers. But back them up with numbers yourself.

|                                 | Cisco B200M1 | Cisco B250M1 | Dell M710 | Dell M905 | IBM LS42 | HP BL460c | HP BL490c | HP BL685c |
|---------------------------------|--------------|--------------|-----------|-----------|----------|-----------|-----------|-----------|
| Max RAM, 4GB DIMMs (GB)         | 48           | 192          | 72        | 96        | 64       | 48        | 72        | 128       |
| Total VMs possible              | 16           | 64           | 24        | 32        | 21       | 16        | 24        | 42        |
| Max RAM, 8GB DIMMs (GB)         | 96           | 384          | 144       | 192       | 128      | 96        | 144       | 256       |
| Total VMs possible              | 32           | 128          | 48        | 64        | 42       | 32        | 48        | 85        |
| Max total redundant I/O (Gb)    | 10           | 20           | 22        | 22        | 22       | 30        | 30        | 60        |
| Total VMs possible              | 33           | 66           | 72        | 73        | 73       | 100       | 100       | 200       |
| Max VMs per blade (4GB DIMMs)   | 16           | 64           | 24        | 32        | 21       | 16        | 24        | 42        |
| Max VMs per chassis (4GB DIMMs) | 128          | 256          | 192       | 256       | 147      | 256       | 384       | 336       |
| Max VMs per blade (8GB DIMMs)   | 32           | 66           | 48        | 64        | 42       | 32        | 48        | 85        |
| Max VMs per chassis (8GB DIMMs) | 256          | 264          | 384       | 512       | 294      | 512       | 768       | 680       |
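The per-chassis rows are just the per-blade numbers multiplied by the blade slots each enclosure provides. As a rough cross-check (the slots-per-chassis counts are my assumptions about each vendor's enclosure and form factor, not figures from the presentation):

```python
# Assumed enclosure capacities: 8 half-width slots in a Cisco UCS 5108
# (4 full-width), 16 half-height / 8 full-height in an HP c7000 or Dell
# M1000e, and 14 half-width (7 full-width) bays in an IBM BladeCenter H.
blades = {
    # name: (max VMs per blade with 8GB DIMMs, blades per chassis)
    "Cisco B200M1": (32, 8),   # half-width
    "Cisco B250M1": (66, 4),   # full-width
    "Dell M710":    (48, 8),   # full-height
    "HP BL490c":    (48, 16),  # half-height
    "IBM LS42":     (42, 7),   # full-width
}

for name, (vms_per_blade, slots) in blades.items():
    print(f"{name}: {vms_per_blade * slots} VMs per chassis")
```

The products match the per-chassis row of the table above.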

VMware Capacity Planner 2.7 – Lipstick on a Pig

Last week’s upgrade / outage of the VMware Capacity Planner Dashboard was a COMPLETE surprise to me. I was trying to access data on Friday with no success. Why? Because I just don’t pay attention to the notices on the front page of the dashboard. Lo and behold, the Capacity Planner Dashboard is now available! It has been upgraded to version 2.7 with perty colors and everything!

Capacity Planner 2.7 Dashboard

Actually, I use “Lipstick on a Pig” lovingly. Capacity Planner is huge – thus the “Pig” part. It collects and analyzes monster amounts of data rather well, and I use it frequently. I am currently involved in an assessment of about 1,300 systems. I have learned from experience to “scale” the Capacity Planner “Data Collectors” by using multiple collector machines, limiting each to about 200-250 systems. All of the inventory and performance data gets collected efficiently and is uploaded to the mother ship in multiple, but smaller, chunks. All of the heavy lifting is done at the mother ship, so you don’t need a beefy machine for data collection. Creating reports and Optimization Scenarios (formerly known as Consolidation Scenarios) in the Dashboard is fairly straightforward, and the reports generate in about 10-15 minutes for larger assessments. Far better than some of the competing products that I have used.

The new version brings some nice new features. It makes it easier to perform desktop virtualization assessments, and it looks like they are gearing up to provide application virtualization assessments as well. They have also tweaked users, groups, access and permissions. Although it works fine on my Linux desktop running Firefox 3, sadly, VMware only officially supports Internet Exploser 5.5 and above.

So what is the difference between CP and the “competing” products? Why are people still paying for something that they can get for free from VMware or a VAC partner? The first is access to the data. You need a login to access the CP Dashboard; other products run locally. I say “So what?!” You can get your VMware guru to collect your data and then generate optimization scenarios and reports for you. They will give you some nice stuff with plenty of information. All you have to do is ask.

The other thing at issue with CP is the ability to generate graphs and charts for the corner office people. The CP Dashboard has a few graphs mixed in, but there are many other things you may want to put into a graph. In order to do this with CP, you need to dump the data into a spreadsheet and generate graphs and charts with the spreadsheet software. This can sometimes be a daunting task to some.
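If the spreadsheet route feels daunting, a few lines of Python can do the same aggregation on a CSV export before you chart it. This is only a sketch: the file name and the `avg_cpu_pct` column are hypothetical, so match them to whatever columns your export actually contains:

```python
import csv
from collections import Counter

def cpu_utilization_buckets(csv_path, column="avg_cpu_pct", width=10):
    """Count servers per CPU-utilization bucket (0-10%, 10-20%, ...)."""
    buckets = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            pct = float(row[column])
            buckets[int(pct // width) * width] += 1
    return dict(sorted(buckets.items()))

# Feed the result to your charting tool of choice, or just eyeball it:
# for low, count in cpu_utilization_buckets("cp_export.csv").items():
#     print(f"{low:>3}-{low + 10}%: {'#' * count}")
```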

One of the few useful graphs in the CP Dashboard

Just an aside: As you can see from the screenshot above, even with a ton of servers, the vast majority of systems only show 10% or less processor utilization. This is typical for an assessment.

The final reason why you might NOT want to use Capacity Planner is that the Optimization Scenarios are locked into VMware ESX or VMware Server. You cannot run a scenario against Xen, KVM or *GASP* Hyper-V… But that doesn’t matter because you really WANT to use VMware anyway. So, what are you waiting for? Go forth and virtualize!

Below are the release notes:

VMware Capacity Planner Release Notes

Current Version 2.7 Build 32117
Last Updated 5/20/09

VMware Capacity Planner Version 2.7 is an upgrade from Version 2.6.x.  The purpose of this upgrade is to release new features.

What’s New in Capacity Planner 2.7

Capacity Planner 2.7 has a new look and feel. Many of the menu options have changed, and reports have been enhanced. The following items are the main changes in this version.

  • Desktop Virtualization. VDI assessments enable you to virtualize desktops using software profiles and base images.
  • Software Profiles. Software profiles replace application profiles and can now be edited by Partners. Software Profiles allow tags to describe the software. Software Profiles can represent applications and operating systems. They keep track of individual process utilization as well as system-wide use. More computing resource utilization dimensions are shown for each profile.
  • Base Image Creation. System Software Cluster analysis is used to build a few images that maximize software usage.
  • VM Template Sizing. You can create VM Templates, based on various base images, during an Optimization Scenario Analysis.
  • Reporting. Optimization reports now include new reports, formerly known as the Consolidation Estimator Reports. The new report is a complete assessment report. The controls for the output are located in the Assessment Global Settings; the link to the Global Settings is at the bottom of the Optimization Report page. This report is currently the only place that contains the following information: VM-to-VM Template mapping, VM Template sizes, and the Base Image Report. For the Custom Report, the display limit is set to 10,000; any data that exceeds this limit is not displayed.
  • Scenario. The scenario now includes the ability to select by system attributes. It also has a Base Image selection page. Selecting Base Images is required to include the Base Image, VM Template Size, and VM-to-VM Template mapping sections in the Assessment Report.
  • User Groups. You can now create a user group to give users access to a company, template, report, or scenario.
  • Access and Permissions. The security model that has been used by company roles is now extended to templates. This allows individual access to templates by a single user or a group of users. Partners and VMware can create templates that are meant only for a certain group of users. This will remove the need to create multiple companies to manage users and templates.
  • Date Range Selection. Users can now select a range of dates to be used for the assessment.
  • Alerts and Anomalies. The behavior of alerts and anomalies has changed in this release.
  • User Self-provisioning. A Partner company (only a Partner) can adjust the security settings in their company to allow users whose email suffix matches the one supplied in the company information to request and automatically receive an approved login account. The Partner will need to create a suffix and adjust the Security Policy to allow self-provisioning in order to enable this feature.
  • Collector SSH Port setting. The collector now allows the user to change the SSH port to something other than 22. This is a global setting and will not allow per system port settings for now.
  • Collector/Dashboard Inventory Additions. The Collector and the Dashboard now collect desktop inventory and show Video Card, PnP Devices, Pagefile, and Printers for the purpose of doing desktop assessments.
  • Create new CE users. You can now create a user within a CE assessment.
  • Multiple Assessments. More than one assessment per company is now supported.
  • Sudo support. Sudo support has been added in this release.

Redesigned Interface

This release introduces a new look for the Dashboard. Many of the menus have changed. Online help is now available from the Help menu. The Online Library containing the Installation Guide, Getting Started with Capacity Planner, the Troubleshooting Guide, and the Reference Guide is available from the Portal. In addition, the Installation Guide and Getting Started with Capacity Planner are available as PDF files in the Portal.

The major changes for the Dashboard include:

  • Style changes
    • Style. The background and logo have changed.
    • Labels, Titles, Menus. The menu structure and labeling have changed. For example, Consolidation has changed to Optimization. The Roles label has changed to Access and Permissions.
    • Forms and Wizards. Several have been improved: New Assessment, New User, Access and Permissions
  • Feature changes
    • Notifications. Notification creation is now simpler.
    • Architecture. An analysis engine and a reporting engine have been added.
    • Software Profiles. Application Profiles is replaced with an improved Software Profile and Report feature.
    • Software Profile Templates are created and managed from Dashboard > Assessment > Assessment Tools > Software Profile Templates.

    • Online Help is a new addition available from the Help link. Other online documentation is available from the Portal link.
    • Reports. Reports have been enhanced. A storage report has been promoted to first class from the custom reports. It is located under Performance and does not include all the columns of the custom report.
    • Application Analysis. Application Analysis allows you to analyze application usage and create Base Images.
    • Base Images are created and managed from Dashboard > Analyze > Base Images.

All of the documentation is provided in HTML format. We would like to know what you think. Please take a moment to do our survey: