Category Archives: Cloud Computing

Q&A: Data Center Expert Shares What’s Next for Higher Education IT

Check out my interview with Nicci Fagan here:

Q&A: Data Center Expert Shares What’s Next for Higher Education IT

vCloud Trick – Joining a Domain and Specifying a Machine OU

NOTE: This is no longer required in vCD 5.1 & above!

This is one of those situations where I really start to hate computers!  I was working with vCloud Director with the goal of having a winders VM run through guest customization: change its name, get a fixed IP from the network pool, join an Active Directory domain, and move to a specific OU in AD.

The Problem

There is a spot in the VM properties to specify a domain to join. You can use the settings specified in the organization or enter the domain information directly. Read more »

Big vCloud Director Security Gotchas That I Have Found

This post includes an important security “gotcha” that I recently uncovered with vCloud Director 1.5 running on vSphere 5. If you are using vCloud Director, you should check your settings.

The BIG Security Issue

Read more »

Creating an Automated ESXi Installer

Back in the summer, I saw Stu's Post about automating the installation of ESXi. I was reminded again by Duncan's Post. Then I found myself in a situation where a customer bought 160 blades for VMware ESXi. With this many systems, it would be almost impossible to do this by hand without mistakes. I took the ideas from Stu and Duncan and created an ESXi automated installer that works from a PXE deployment server, like the Ultimate Deployment Appliance. I took it a step further and added the ability to use a USB stick or a CD for those times when PXE is not allowed. The document below is the result.

This is a little different than the idea of a stateless ESXi server, where the hypervisor actually boots from PXE. This is the installer booting from PXE so that the hypervisor can be installed on local disk, an internal USB stick or SD card. You could also use it for a “boot from SAN” situation, but extreme care should be taken so you don’t accidentally format a VMFS disk.

As always, if anyone has comments, corrections, etc., please feel free to post a comment below.

The document can be found here -> Creating an Automated ESXi Installer

Summary

The ability to use an automated, unattended installation routine for a hypervisor is necessary whenever it is deployed to multiple systems or deployed frequently. Automated installations help avoid the misconfigurations caused by human error, which become common when repetitive tasks are performed. Because the "traditional" version of VMware ESX Server contains a Red Hat Linux based console operating system, we have been able to leverage kickstart scripts for automated installation. With the ESXi hypervisor, much of this functionality is not available because of the smaller footprint.

This document explains how to set up ESXi with little intervention. The modifications explained here can be used to deploy ESXi using a PXE server. In our examples we will use the Ultimate Deployment Appliance, but these methods will also transfer to such commercial packages as HP Rapid Deployment Pack, Altiris, or even a home grown PXE server. The modifications can also be used for deploying ESXi using a USB stick or a customized CD.

Requirements

  • ESXi Server Installable: The ESXi CD image can be downloaded from the VMware site; however, using a systems management and monitoring server, such as HP SIM or Dell OpenManage, is highly recommended. Since there are usually vendor-specific CIM providers to enhance the monitoring capabilities, some vendors will provide a customized CD image that includes them. These additional CIM providers also allow more information to be displayed in the hardware sections of the vSphere Client. A search for "ESXi" on the HP and Dell sites produced links to the latest customized images.
  • Deployment Server: A deployment server will allow for a controlled, automated installation of the ESXi Server software. The ability to handle multiple operating system installations is also desired, and the ability to provide PXE and DHCP services is required. Most times, the deployment server will be running the PXE and TFTP services, while DHCP may be running on a different server in an enterprise. This document does not explain how to set up a separate DHCP server. For this document, we will be using the Ultimate Deployment Appliance (UDA) version 2.0 (beta).
  • Virtualization Software: The UDA runs as a "Virtual Appliance," which is a pre-configured virtual machine. It will run under VMware ESXi (available as a free or licensed instance), VMware Workstation (available for purchase), VMware Player (free) or VMware Server (free). In this document, VMware Workstation is used.
  • Optional Software: Although no additional software is required when using the UDA, you will need additional software if you plan on using a USB stick or if you plan on creating a customized CD image:
    • VMware Converter: If you plan on using ESXi or VMware Server to host the UDA, VMware Converter can be used to import the virtual appliance.
    • Syslinux: In order to make a bootable USB stick, you will need the syslinux utility. This utility is available for Linux and Windows. The UDA does not include it. As an alternative, you can use the unetbootin utility.
    • CD Imaging and Burning: In order to create a bootable CD image, you will need software to create the CD image (mkisofs) and software to burn the image to the CD media (cdrecord). The cdrtools project includes versions for Linux and Windows. Most Debian-based Linux distributions, such as Ubuntu, come with cdrkit instead, which uses genisoimage for imaging and wodim for burning. (A rough sketch of a typical invocation follows this list.)
    • Linux Desktop: If you look at the contents of the ESXi CDs using Windows (Windows 7 was used), you may see all of the files listed in capital letters. Since the ESXi software is based on Linux, all file operations are case sensitive and expect the files to be all lower case. This may cause errors when attempting to create the automated installer. For this reason, a Linux desktop is recommended. For most of the operations, the UDA may be used; the only missing software on the UDA is syslinux. For a feature-rich Linux desktop, Ubuntu is recommended. A few pre-configured Ubuntu Desktop virtual appliances are also available.
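To make the CD imaging and burning step above a bit more concrete, here is a rough sketch (wrapped in Python just to keep it copy-and-paste friendly) of how the tools are typically invoked. The directory, boot file locations and device name are examples only, not taken from the document; adjust them to match your copied CD contents and your hardware.

```python
# Rough sketch only: rebuild a bootable ISO from a modified copy of the ESXi CD
# contents, then burn it. All paths and device names below are examples.
import subprocess

SRC_DIR = "./esxi-cd"        # working copy of the (modified) CD contents
ISO_OUT = "esxi-auto.iso"

# genisoimage (mkisofs on non-Debian systems) with the usual isolinux boot
# options. The boot image path is relative to SRC_DIR; adjust it to wherever
# isolinux.bin actually lives in your copy.
subprocess.run([
    "genisoimage", "-o", ISO_OUT,
    "-b", "isolinux.bin", "-c", "boot.cat",
    "-no-emul-boot", "-boot-load-size", "4", "-boot-info-table",
    "-R",                      # Rock Ridge keeps the lower-case file names intact
    SRC_DIR,
], check=True)

# Burn the image with wodim (cdrecord on non-Debian systems).
subprocess.run(["wodim", "-v", "dev=/dev/sr0", ISO_OUT], check=True)
```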

Conclusion

Once you have a hypervisor installed, you will need to configure the server and add it to vCenter in an automated fashion. Look for a future doc covering this. For now, check out these resources for post-install configurations:

http://communities.vmware.com/docs/DOC-7364

http://communities.vmware.com/docs/DOC-7511

http://communities.vmware.com/docs/DOC-8170

Is Your Blade Ready for Virtualization? Part 2 – Real Numbers

OK, so my last post brought on a blizzard of remarks questioning the validity of some of the data presented. I used what I was told during a presentation was a "Gartner recommended" configuration for a VM. My error was that I could not find this recommendation anywhere, but the sizing seemed fairly reasonable, so I went with it. I went back to some of the assessments I have done and took data from about 2,000 servers to come up with some more real-world averages. I wanted to post those averages tonight. Remember what I said previously: this is just a set of numbers. You must ASSESS and DESIGN your virtual infrastructure properly. This is only a small piece of it.

I apologize for the images instead of tables, but I spent way too long trying to get tables to lay out properly in WordPress. Click on the images for larger views. I can post the raw data if someone wants to look at it, but I have to work on stripping away proprietary data first.  So, here we go:

Data Summary

If you have ever done a Virtualization Assessment, you will recognize this from the summary page of the workbook. We are going to look at data from 1956 servers. Average RAM usage is about 2069MB. Average CPU utilization is about 5.2%. Average network is about 31KB/s.

Performance Summary

From the same page in the workbook. From this chart, we see that the average ALLOCATED RAM is about 4342MB and the average FREE RAM is about 2273MB. This is where the average RAM usage above comes from: 4342MB allocated minus 2273MB free is about 2069MB used.

Raw Data Averages

These are the averages calculated for each row in the raw data summary.

Storage Summary Report

This final chart is from a storage summary report. Average disk read bytes per second (442,000) plus average write bytes per second (200,000) is about 600,000 bytes per second. So, total I/O is about 632,000 bytes per second (600,000 storage + 32,000 network). I used Google to convert this to gigabits: 632,000 bytes = 0.00470876694 gigabits. This is WAY less than the 0.3Gb recommended (there is a quick sketch of this conversion after the list below). So, here is my calculated AVERAGE VM sizing:

  • RAM = 2GB
  • I/O = 0.005Gb
  • Network I/O = 0.0002 Gb
  • Storage I/O = 0.004 Gb
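If you want to double-check the conversion yourself, here is the same arithmetic as a tiny Python snippet. The inputs are the rounded averages from the workbook above; nothing else is assumed.

```python
# The average-VM I/O conversion from above, using the same rounded averages.
storage_bps = 600_000      # average read + write bytes/sec, rounded
network_bps =  32_000      # average network bytes/sec (about 31KB/s)
total_bps   = storage_bps + network_bps    # ~632,000 bytes/sec

# bytes/sec -> bits/sec -> gigabits/sec (binary gigabits, matching the Google conversion)
gigabits = total_bps * 8 / 1024**3
print(round(gigabits, 5))                  # ~0.00471 Gb -- nowhere near 0.3Gb
```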

I am not going to claim that this is my recommendation for a VM configuration, because it isn’t. My recommendation is still and will always be to ASSESS YOUR UNIQUE ENVIRONMENT and come up with your own data. I am not going to redo my previous post with these numbers because it is pointless. The intent of the previous post was to come up with a number of VMs in a chassis or rack based on a set of criteria. I also wanted to show a comparison of capabilities of each blade. If I use the numbers from this post, it will only show that each blade in question is capable of hosting even more VMs.

Is Your Blade Ready for Virtualization? A Math Lesson.

I attended the second day of the HP Converged Infrastructure Roadshow in NYC last week. Most of the day was spent watching PowerPoints and demos for the HP Matrix stuff and Virtual Connect. Then came lunch. I finished my appetizer and realized that the buffet being set up was for someone else. My appetizer was actually lunch! Thank God there was cheesecake on the way…

There was a session on unified storage, which mostly covered the LeftHand line. At one point, I asked if the data de-dupe was source based or destination based. The “engineer” looked like a deer in the headlights and promptly answered “It’s hash based.” ‘Nuff said… The session covering the G6 servers was OK, but “been there done that.”

Other than the cheesecake, the best part of the day was the final presentation. The last session covered the differences in the various blade servers from several manufacturers. Even though I work for a company that sells HP, EMC and Cisco gear, I believe that x64 servers, from a hardware perspective, are really generic for the most part. Many will argue why their choice is the best, but most people choose a brand based on relationships with their supplier, the manufacturer or the dreaded "preferred vendor" status. Obviously, this was an HP-biased presentation, but some of the math the BladeSystem engineer (I forgot to get his name) presented really makes you think.

Let's start with a typical configuration for VMs. He mentioned that this was a "Gartner recommended" configuration for VMs, but I could not find anything about this anywhere online. Even so, it's a pretty fair portrayal of a typical VM.

Typical Virtual Machine Configuration:

  • 3-4 GB Memory
  • 300 Mbps I/O
    • 100 Mbps Ethernet (0.1Gb)
    • 200 Mbps Storage (0.2Gb)

Processor count was not discussed, but you will see that it may not be a big deal, since most processors are overpowered for today's applications (I said MOST). IOPS is not a factor in these comparisons either; that would be a function of the storage system.

So, let’s take a look at the typical server configuration. In this article, we are comparing blade servers. But this is even typical for a “2U” rack server. He called this an “eightieth percentile” server, meaning it will meet 80% of the requirements for a server.

Typical Server Configuration:

  • 2 Sockets
    • 4-6 cores per socket
  • 12 DIMM slots
  • 2 Hot-plug Drives
  • 2 LAN on Motherboard (LOM) ports
  • 2 Mezzanine Slots (Or PCI-e slots)

Now, say we take this typical server and load it with 4GB or 8GB DIMMs; neither is a real stretch of the imagination. With 4GB DIMMs, that gives us 48GB of RAM. Now it's time for some math (there is a quick sketch of these calculations after the lists below):

Calculations for a server with 4GB DIMMs:

  • 48GB Total RAM ÷ 3GB Memory per VM = 16 VMs
  • 16 VMs ÷ 8 cores = 2 VMs per core
  • 16 VMs * 0.3Gb per VM = 4.8 Gb I/O needed (x2 for redundancy)
  • 16 VMs * 0.1Gb per VM = 1.6Gb Ethernet needed (x2 for redundancy)
  • 16 VMs * 0.2Gb per VM = 3.2Gb Storage needed (x2 for redundancy)

Calculations for a server with 8GB DIMMs:

  • 96GB Total RAM ÷ 3GB Memory per VM = 32 VMs
  • 32 VMs ÷ 8 cores = 4 VMs per core
  • 32 VMs * 0.3Gb per VM = 9.6Gb I/O needed (x2 for redundancy)
  • 32 VMs * 0.1Gb per VM = 3.2Gb Ethernet needed (x2 for redundancy)
  • 32 VMs * 0.2Gb per VM = 6.4Gb Storage needed (x2 for redundancy)
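If you want to plug in your own numbers, here is a quick Python sketch of the math above. The per-VM figures are the "typical VM" numbers from earlier in the post; swap in your own assessment data.

```python
# The per-server math from above. Per-VM figures are the "typical VM" numbers
# from earlier in the post -- substitute your own assessment data.
RAM_PER_VM_GB     = 3      # GB of memory per VM
IO_PER_VM_GB      = 0.3    # total I/O per VM (Gb)
ETH_PER_VM_GB     = 0.1    # Ethernet portion (Gb)
STORAGE_PER_VM_GB = 0.2    # storage portion (Gb)

def server_math(dimm_size_gb, dimm_slots=12, cores=8):
    total_ram = dimm_slots * dimm_size_gb
    vms = total_ram // RAM_PER_VM_GB
    print(f"{dimm_size_gb}GB DIMMs: {total_ram}GB RAM -> {vms} VMs "
          f"({vms / cores} VMs per core), "
          f"{vms * IO_PER_VM_GB:.1f}Gb I/O, "
          f"{vms * ETH_PER_VM_GB:.1f}Gb Ethernet, "
          f"{vms * STORAGE_PER_VM_GB:.1f}Gb storage (each x2 for redundancy)")

server_math(4)   # 48GB RAM -> 16 VMs, 4.8Gb I/O, 1.6Gb Ethernet, 3.2Gb storage
server_math(8)   # 96GB RAM -> 32 VMs, 9.6Gb I/O, 3.2Gb Ethernet, 6.4Gb storage
```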

Are you with me so far? I see nothing wrong with any of these yet.

Now, we need to look at the different attributes of the blades:

[Image: blade attribute comparison table]

* The IBM LS42 and HP BL490c each have 2 internal non-hot-plug drive slots

The “dings” against each:

  • Cisco B200M1 has no LOM and only 1 mezzanine slot
  • Cisco B250M1 has no LOM
  • Cisco chassis only has one pair of I/O modules
  • Cisco chassis only has four power supplies – may cause issues using 3-phase power
  • Dell M710 and M905 have only 1GbE LOMs (Allegedly, the chassis midplane connecting the LOMs cannot support 10GbE because it lacks a "back drill.")
  • IBM LS42 has only 1GbE LOMs
  • IBM chassis only has four power supplies – may cause issues using 3-phase power

Now, from here, the engineer made comparisons based on loading each blade with 4GB or 8GB DIMMs. Basically, some of the blades would not support a full complement of VMs based on a full load of DIMMs. What does this mean? Don't rush out and buy blades loaded with DIMMs, or your memory utilization could be lower than expected. What it really means is that you need to ASSESS your needs and DESIGN an infrastructure based on those needs. What I will do is give you a maximum number of VMs per blade and per chassis. It seems to me that it would make more sense to consider this in the design stage so that you can come up with some TCO numbers based on vendors.

So, we will take a look at the maximum number of VMs for each blade based on total RAM capability and total I/O capability. The lower number becomes the total possible VMs per blade for the overall configuration. What I did here to simplify things was take the total possible RAM, subtract 6GB for the hypervisor and overhead, then divide by 3 to come up with the number of 3GB VMs I could host. I also took the size specs for each chassis, calculated the maximum possible chassis per rack, and then calculated the number of VMs per rack. The number of chassis per rack does not account for top-of-rack switches; if these are needed, you may lose one chassis per rack, but most of the systems will allow for an end-of-row or core switching configuration.

Blade Calculations

One thing to remember is that this is a quick calculation. It estimates the amount of RAM required for overhead and the hypervisor at 6GB and is by no means based on calculations coming from a real assessment. The reason the Cisco B250M1 blade is capped at 66 VMs is the amount of I/O it is capable of supporting: 20Gb redundant I/O ÷ 0.3Gb I/O per VM ≈ 66 VMs.
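Here is a small Python sketch of the capping logic, for anyone who wants to reproduce the table at the end of this post. For simplicity it divides total RAM straight by 3GB per VM (which is how the table rows work out); in a full design you would also set aside RAM for the hypervisor and overhead, as mentioned above.

```python
# A blade can host the lesser of what its RAM allows and what its redundant
# I/O allows. Straight RAM / 3GB here; reserve hypervisor overhead in a real design.
def max_vms_per_blade(total_ram_gb, redundant_io_gb, ram_per_vm=3, io_per_vm=0.3):
    ram_limit = int(total_ram_gb // ram_per_vm)
    io_limit  = int(redundant_io_gb // io_per_vm)
    return min(ram_limit, io_limit)

# Cisco B250M1 with 8GB DIMMs: 384GB RAM but only 20Gb of redundant I/O
print(max_vms_per_blade(384, 20))   # 66 -- capped by I/O, not RAM
# HP BL685c with 8GB DIMMs: 256GB RAM and 60Gb of redundant I/O
print(max_vms_per_blade(256, 60))   # 85 -- capped by RAM
```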

I set out on this journey with the purpose of taking the ideas from an HP engineer and attempting, as best I could, to be fair in my version of the presentation. I did not even know what the outcome would be, but I am pleased to find that HP blades offer the highest VM-per-rack numbers.

The final part of the HP presentation dealt with cooling and power comparisons. One thing that I was surprised to hear, but have not confirmed, is that the Cisco blades want to draw more air (in CFM) than one perforated tile will allow. I will not even get into the "CFM per VM" or "Watt per VM" numbers, but they also favored HP blades.

Please, by all means challenge my numbers. But back them up with numbers yourself.

                                  Cisco    Cisco    Dell   Dell   IBM    HP       HP       HP
                                  B200M1   B250M1   M710   M905   LS42   BL460c   BL490c   BL685c
Max RAM, 4GB DIMMs (GB)               48      192     72     96     64       48       72      128
Total VMs possible                    16       64     24     32     21       16       24       42
Max RAM, 8GB DIMMs (GB)               96      384    144    192    128       96      144      256
Total VMs possible                    32      128     48     64     42       32       48       85
Max total redundant I/O (Gb)          10       20     22     22     22       30       30       60
Total VMs possible                    33       66     72     73     73      100      100      200
Max VMs per blade (4GB DIMMs)         16       64     24     32     21       16       24       42
Max VMs per chassis (4GB DIMMs)      128      256    192    256    147      256      384      336
Max VMs per blade (8GB DIMMs)         32       66     48     64     42       32       48       85
Max VMs per chassis (8GB DIMMs)      256      264    384    512    294      512      768      680

What is Cloud Computing? I Don’t Care!! Part Two

As an update to yesterday’s “I don’t care” post, Mike DiPetrillo has claimed ownership of the original “I don’t care” quote.

And just so everyone understands completely, the “I don’t care” is for the users. The administrators should absolutely care. Users should not need to care. The design of the cloud should be such that the user doesn’t need to care.

Did you put all your stuff in containers yet?

What is Cloud Computing? I Don't Care!!

So, today I sat in a seminar hosted by VMware, EMC, Cisco and SunGard. It was called "Take the Risk Out of Cloud Computing". It was the same old mantra… create your internal cloud now in preparation for the coming of the external cloud. SunGard puts an availability twist on things with its view: "Let us be your hosted cloud and/or your DR cloud." The sessions seemed to be designed to inform someone who knows about virtualization but may not understand cloud computing. I was there to see what SunGard's take on it was. In the cloud realm, they do two things and they do them well: hosting and DR. (I have to admit, I served a five-year sentence with SunGard…)

When Clair Roberts got up to speak, the first thing he did was read the official VMware definition of Cloud Computing. Then he gave his own definition: “I don’t care!” Later, I spoke with him and he admitted that he borrowed it from someone else at VMware, so I am going to borrow it from them, too.

Think about it. "I don't care!" I don't care where it is. I don't care about the hardware. I don't care how it got there. I don't care how it is cooled. I don't care how it is powered. I DO care that it is there when I need it and is reasonably responsive from anywhere at any time. That's it. That's what cloud computing should be. Plain and simple: "I Don't Care!"

Later, David Freund from EMC gave another good analogy of how Cloud Computing should be. He compared it to Intermodal Freight Transport.  You buy or rent a STANDARDIZED CONTAINER and put stuff in it. You don’t care how it gets to the destination, only that it gets there.

Today’s assignment is to put your stuff in the standardized container. That way we can put it somewhere later.

More on Cisco UCS, HP Matrix and ITaaS

I just finished reading Project California: a Data Center Virtualization Server – UCS (Unified Computing System) from Cisco. It gave an excellent take on Cisco's view of how UCS benefits a datacenter. It also explains how new technologies from Intel, QLogic and Emulex all complement the Cisco gear. As a matter of fact, the first four chapters are all about the complementing technologies. Obviously, it is all twisted into a nice package that Cisco offers as their Unified Computing System. It's a great, educational geek book.

The UCS depends on several enabling technologies, like FCoE. FCoE allows you to take your existing Fibre Channel investment and send it down an ethernet channel. A big FAT 10GbE channel. The benefit here is that you can have eight cables feed everything to eight blades and have a nice neat rack. But Scott Lowe points out some limitations on his blog. Right now, it appears that Cisco’s FCoE will terminate at the top of the rack with the Nexus 5000. The book explains how iSCSI is a great alternative and you don’t even need a CNA to make that work, but you need an iSCSI interface on the storage system. So the UCS requires change at some point in the data path.

The HP Matrix is really just the C-Class blade offering coupled with the software to enable management and orchestration, both important aspects that I believe will assist in making cloud computing a reality. The beauty part of the C-Class blades is that you can keep using Fibre Channel and Ethernet as separate entities, so you don't really need to make a change in order to use them. The problem is that HP doesn't seem to have a mezzanine available to provide FCoE or some of the newer virtualization technologies, like SR-IOV, VNTag, etc. So, if you want to jump on the FCoE bandwagon or start using some of the neat new networking toys, you will need to wait a bit.

So, that's some of the storage story; what about networking? Well, the UCS uses what is termed a Fabric Interconnect, which is described as a multiplexer that funnels the sixteen 10GbE ports from the blades down to eight 10GbE uplink ports. I take this to be their version of HP's Virtual Connect, with the added benefit of carrying all of those little Cisco features right up to the Nexus 1000V dvSwitch. This returns control of the complete network path to the network admins and gives them the ability to set things like policies at the VM level. These settings will follow the VM during VMotion activities, which should allow for a more efficient network.

HP only offers Virtual Connect if you want 10GbE switching within the chassis. Don’t misunderstand me, there is absolutely nothing wrong with Virtual Connect. I have even set them up in (traditionally) Cisco networks. But there are also politics involved when choosing the networking. If HP wants to tout flexibility with interconnects, they may want to make nice with Cisco and come up with a Nexus offering. Or is this a case of Cisco taking their ball and going home to try to force people to buy UCS? I don’t know a lot about Dell Blades, but I don’t see a Cisco 10GbE there either. I used to hear the quip that all of the winders, Linux and Unix boxes are just I/O attached to the Mainframe. Is this a case of the x64 boxes being just I/O attached to the network?

As for ITaaS, both Cisco UCS and HP Matrix offer some pretty software to allow for management and orchestration. Both have their plusses and minuses (C'mon Cisco… Java? Really?!?). This could be where HP has a big leg up on Cisco. With all of their management software having the same look and feel, on-the-fly dynamic changes can take place with less administrator interaction. I'm not so sure Cisco can let you provision server, network and storage from the OS to the LUN. Like I said, cloud computing won't become a commonplace reality until all of the moving parts can be managed, monitored and provisioned (orchestration). I'm still not convinced that HP software will allow me to create a RAID/disk group and provision storage on an EMC box. I'm not so sure that Cisco will play nice in a Brocade fabric and allow for all of the Brocade-specific features. And what about someone who chooses to (*GASP*) install an OS directly on the blade? I know that I can provision any hypervisor or winders or Linux on an HP blade. Can Cisco provide an interface to provision an OS directly on the hardware? How about the ability to have VMware running on a blade today, Xen next week and Linux the following week? All without an administrator mounting a CD or interacting with the installer? And how about having that VMware or Xen or Linux OS jump over to a different blade, with or without service interruption, but without manual intervention? That's ITaaS. That's cloud computing.

DISCLAIMER: I work for a company that is both an HP Partner and a Cisco Partner. These are my opinions, not theirs. Also, I did not pay for the book, but that did not influence this post either.

Stevie's Unified Event Management, My Cloud Shangri-La

If you know Steve Chambers, you know he just moved to Cisco. Before that, he was with VMware and has been a pillar of the VI:OPS boards. He is now working on a document about Unified Event Management and, in the spirit of community, he is looking for comments, suggestions, etc. He called my attention to the post via Twitter as we were discussing Splunk and its capabilities for "Centralized Event Aggregation" (Steve's term). Take a look at his post when you get a chance and make some comments. You know that I have heralded the benefits of a centralized logging server. Steve just plain gets it.

And since I mentioned Cisco, I also discovered that Cisco put out a whitepaper on the Virtualization Blueprint for the Datacenter. It's their take on how virtualization will benefit your business. The chart shows how a business' agility will increase as we climb the lifecycle from consolidation to virtualization and then on to automation.

It doesn't matter what you are using underneath it all – VMware, Xen, Hyper-V – UCS, Matrix. It just matters that you have methods to provide centralized monitoring and centralized automation. Although centralized event monitoring and centralized automation are two different things, they are both necessary if you wish to properly monitor and manage your piece of the cloud. I've already said my piece on the need for centralized event monitoring, and Steve lays out a sample blueprint.

Automation is the new big thing when it comes to the cloud. VMware saw that way back when, and they bought Dunes almost two years ago. VMware Orchestrator (VMO) was a big buzz for a little while, but great big VMware couldn't pull off what teeny little Dunes could when it comes to customizing the Orchestrator. They left it in a fairly decent state for smaller businesses with VMware Lifecycle Manager, but it was a hobbled state and didn't scale very well. You can customize VMO, but you need to be good at the Dunes interface and have a decent knowledge of JavaScript and that kind of stuff. Even being free, it's not for me. The standard release of VMO allows you to set up a facility to request, approve, provision and archive VMs. A great start, but not quite enough.

A quick search for data center orchestration reveals Cisco at the top of the list. But there are others from Novell PlateSpin, Egenera, and DynamicOps that appear to do more. What we REALLY need is a way to orchestrate/automate the entire data center: physical servers, VMs, storage and networking can all be provisioned, monitored and managed, but can they all be managed from a common platform? Once you have a seamless process for provisioning, managing and monitoring every component of the data center, you will see cloud computing really take off. A user (consumer/customer) who needs an application should not care whether it is deployed on a physical or virtual machine, what storage devices hold the data or what network connects it. The user should know the basic requirements for the application, and the ORCHESTRATOR should make the decisions about all of these things. The orchestrator will take a request, ask for approval and make sure the application gets deployed without mistakes. The orchestrator will interface with the monitoring facility and change management to make sure the application is accounted for. The orchestrator will hand off to the backup facility. The orchestrator will notify you when the application has reached end of life. That's when we will have "Cloud Shangri-La" (my term).

VMware Partner Exchange Notes – Keynotes

Here are some brief notes from VMware Partner Exchange. I will also post about some of the technical sessions. I am not going to regurgitate the keynotes; the content will be available soon on Partner Central, and there are several blogs that have plenty of information from the keynotes. I will, however, provide some highlights:

Partner Central and Partner University will soon be revamped. The accreditations will be changing and will require a certain number of accredited VCPs before a company can earn an accreditation. The categories will be similar to our practices, such as infrastructure virtualization, desktop virtualization and BCDR. If you go to Partner Central now and click on the Partner University link, you will see a little bit of what the changes will be. There is also plenty of web-based, self-paced training. Online tests are available so you can receive accreditations for many different products; most are jumpstart and plan-and-design related.

VMware’s obvious desire is “100% virtualized.” Their primary focus will center around cloud computing with an initial push for the internal cloud as many see challenges with getting acceptance for the external cloud. Private clouds will eventually bridge the gaps between the internal and external clouds. Much of this information is already available on VMware’s main site.

The software surrounding VI4 took around 3 million engineering hours to develop. It includes great improvements on resources that will be available to the VMs. The resources will be increased to 8 vCPUs per VM, 256GB RAM per VM, 40 Gbps network throughput per VM, and 200,000 storage IOPS per VM. vCenter maximums will increase to 3000 VMs / 300 Hosts. There will also be a capability for linking up to 10 vCenter servers with a centralized search function.

A new function centers around host profiles, which work similarly to VUM. A host profile establishes configuration baselines for an ESX/ESXi host that include such things as network, security, storage and NTP settings. A host can be scanned for compliance and remediated against the baseline. The BIG "however" is that it will require "Enterprise Plus" licensing to enable host configuration controls and distributed switches. This will carry a $600 price tag and is not a la carte.

Using ESX4 allowed for 85% of native performance on 8-way RHEL/Oracle servers in spec performance tests. The number of transactions (I forget how many) was 8x Visa's current transaction volume.

Out of the gate, vSphere will offer optional components surrounding security, BCDR and networking. Additional vSphere components will become available "over the summer."

Questioning SaaS

I was torn on whether or not to post this rant, but then I read a post that made my head spin….

First, there was the “Great Gmail Outage of February 2009“. There are constant Twitter outages as it grows in popularity and the servers struggle to keep up. Just last week, Yahoo Mail and Hotmail users were suffering through outages. I read on one site “Although the timing of the incident means that UK customers are unlikely to have been affected, the news will add to those doubts some users have over the software-as-a-service model.” This is the post that nudged me into posting this rant. I have had a few Hotmail accounts since 1998 and have had occasional access issues through the years, before I even knew what SaaS meant. My question is this: So What?!?

How can you doubt SaaS because your free email is down? Free is free. You get what you pay for. I read that Google has offered credits to paying Gmail customers, and that is the proper thing to do. But how can executives whine because their Gmail/Hotmail/Yahoo is offline when they don't pay for it? Why are they not paying for a business email service? I have worked for a few companies that have used "outsourced" paid email services – the REAL model for SaaS. I have had scheduled outages during hours when I am sleeping.

The fact is that SaaS is here to stay, and it is increasing in value and popularity. Yes, Google is leading the way with their free apps. SaaS is a piece of cloud computing. Check out this video explaining Cloud Computing in Plain English:

The Open Cloud Manifesto

The Open Cloud Manifesto was released the other day. The list of supporters is pretty impressive, and the names that are absent are typical. I actually read the manifesto last night on my Blackberry during my daughter's piano lesson. It was a nice read, even though the site does not have a mobile format.

The idea of cloud computing is upon us. Are you ready for it?