Monthly Archives: November 2009

VMware Workstation 7, VMware Player and Microsoft Virtual PC

A little over a week ago, I was pleasantly surprised by an email from VMware announcing the release of VMware Workstation 7. Since I actively participated in the beta, they gave me a free license key for the new version. That’s reason enough to love it in itself! But, to be honest, I have been using VMware Workstation for quite some time now. I vaguely remember Y2K testing with it back when I was an IT pup. Since I got the fresh copy, I decided to completely redo my laptop with a fresh install of Winders 7 and all of my handy convenience programs (Office, TortoiseSVN, TweetDeck, FeedDemon, Firefox, Pandora, etc.). Since Winders 7 and IE8 have some compatibility issues with some things, I decided to create a hybrid of what I did when I ran Ubuntu as the host OS. Since I was making things fresh, I created a Winders 2003 template, then spawned a VM to host all of my favorite tools for VMware. I will most likely create spawns of the template for other things, like SAN tools. This gives me modules to do the job of the day and portability in case the host crashes.

So, let’s say I didn’t get a free copy of Workstation. What are the options? Would I be able to justify the $189 for it? Let’s look at some of the differences, starting with the free stuff:

Windows Virtual PC

The newest version of Windows Virtual PC is available as a free download. I should say up front that I did NOT download or install it, so my comparison is based on marketing materials from the Evil Empire’s site.

Pros:
  • It’s free
  • “Access your Windows 7 Known Folders.” I think this compares to shared folders, but it looks like it may be limited to the folders in the “Libraries.”
  • USB Support
  • Clipboard Sharing
  • Seamless Applications. It sounds like their version of Unity, which I almost never use anyway.
  • It supports Windows XP mode in Windows 7

Cons:
  • Requires AMD-V or Intel-VT CPU feature. They list this as a feature…
  • It only runs on specific versions of Winders (the newest version only runs on Win7)
  • It only runs Winders guests
  • No VM Teams
  • No snapshots

VMware Player

The newest version of VMware Player is also available as a free download. It also installs when you install Workstation. Unlike previous versions, this new version allows you to create VMs.

Pros:
  • It’s free
  • Easily share ANY folder
  • USB Support
  • Clipboard Sharing
  • Unity mode
  • Supports many versions of Windows and Linux as host and guest operating systems
  • It supports Windows XP mode in Windows 7

Cons:
  • No VM Teams
  • No Snapshots
  • No Clones

VMware Workstation 7

The newest version of VMware Workstation is available as a free download for a 30-day evaluation.

Pros:
  • Easily share ANY folder
  • USB Support
  • Clipboard Sharing
  • Unity mode
  • Supports many versions of Windows and Linux as host and guest operating systems
  • VM Teams
  • Multiple Snapshots
  • Automatic VM Backups

Cons:
  • Not free

As you can see, even VMware Player offers much more than Windows Virtual PC. It supports Windows XP Mode for Windows 7 users, and it does it even better than Virtual PC. You do, however, need to use VMware Converter to change the Windows XP VHD to a VMDK. It also supports many more operating systems as hosts and guests. It even supports more versions of Windows. This MAY be good enough, but not for me. Here are the features I most like about VMware Workstation:
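If you would rather script that VHD-to-VMDK conversion than click through a GUI, one open-source route is qemu-img, which can read VHD (QEMU calls the format "vpc") and write VMDK. Here is a minimal sketch; it assumes qemu-img is installed and on your PATH, and the filename is hypothetical:

```python
import subprocess
from pathlib import Path

def vmdk_target(vhd_path: str) -> str:
    """Derive the output .vmdk filename from a .vhd input."""
    return str(Path(vhd_path).with_suffix(".vmdk"))

def build_convert_cmd(vhd_path: str) -> list:
    """Build the qemu-img command that rewrites a VHD as a VMDK.
    -f vpc  : source format (VHD is QEMU's 'vpc' format)
    -O vmdk : output format readable by VMware Player/Workstation
    """
    return ["qemu-img", "convert", "-f", "vpc", "-O", "vmdk",
            vhd_path, vmdk_target(vhd_path)]

if __name__ == "__main__":
    cmd = build_convert_cmd("WindowsXPMode.vhd")
    print(" ".join(cmd))
    # Uncomment to actually run the conversion (requires qemu-img):
    # subprocess.run(cmd, check=True)
```

VMware Converter remains the supported route; this is just the command-line alternative for those of us who like automation.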

VM Teams

The idea of a VM Team is similar to the new vApp found in vSphere, but it has been in Workstation for quite some time. I guess I should say that the vSphere vApp is similar to a VM Team. It allows you to create a set of VMs that work with each other. You can set startup delays, bandwidth throttling, etc. It offers you a thumbnail view of all of the VMs in the team as well.

[Screenshot: VM Team]

Multiple Snapshots

The ability to take multiple snapshots has been around for a while, too. It allows you to take snapshots on the fly and revert to a point in time if needed. This comes in handy for developers testing code. I use it for a few things. I have a “Virtual Data Center” set up with an ESX server, an ESXi server and a vCenter Server. I have it set up with snapshots at certain states of the installation process. If I need to create a script for a certain task or create a document, I can create a linked clone of the team based on a certain point in the process.
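If you prefer to drive those snapshots from a script, Workstation ships with the vmrun command-line tool, which has snapshot, revertToSnapshot and listSnapshots subcommands. A minimal sketch (the .vmx path and snapshot name are hypothetical, and vmrun must be on your PATH):

```python
VMRUN = "vmrun"  # ships with Workstation; exact install path varies

def vmrun_cmd(action: str, vmx: str, *args: str) -> list:
    """Build a vmrun command line targeting Workstation ('-T ws')."""
    return [VMRUN, "-T", "ws", action, vmx] + list(args)

# Take a named snapshot before an install step, revert if it goes wrong.
vmx = "VirtualDC/vcenter/vcenter.vmx"  # hypothetical VM path
take    = vmrun_cmd("snapshot", vmx, "pre-vcenter-install")
revert  = vmrun_cmd("revertToSnapshot", vmx, "pre-vcenter-install")
listing = vmrun_cmd("listSnapshots", vmx)

# Hand any of these lists to subprocess.run() to execute them.
```

Building the argument lists this way keeps paths with spaces safe, since nothing is shell-quoted by hand.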

[Screenshot: Snapshot Manager]


The new AutoProtect feature is my favorite. It automatically creates snapshots to back up your VM. You can set it to create restore points every half-hour, hour or day, and choose how many snapshots to retain. It will tell you how many hourly, daily and weekly snapshots it will keep and how much additional disk space it expects to use. It’s great for me because I sometimes forget to take a snapshot before installing something.
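To picture the trade-off AutoProtect presents, here is a rough back-of-the-envelope model — my own simplification, not VMware’s actual sizing math — of how far back your restore points reach and how much extra disk they might consume:

```python
def autoprotect_estimate(interval_hours: float, max_snapshots: int,
                         avg_delta_gb: float) -> dict:
    """Rough model (NOT VMware's exact algorithm) of AutoProtect overhead:
    how far back the oldest restore point reaches, and the extra disk
    consumed if each snapshot delta averages avg_delta_gb."""
    return {
        "coverage_hours": interval_hours * max_snapshots,
        "extra_disk_gb": max_snapshots * avg_delta_gb,
    }

# Half-hourly restore points, keep 3, each delta averaging ~2 GB:
est = autoprotect_estimate(0.5, 3, 2.0)
```

The real delta size depends entirely on how much the guest writes between snapshots, which is why the dialog’s own estimate is the number to trust.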


Free VMware Workstation Training!

VMware is offering a self-paced online course to introduce you to VMware Workstation. It covers many of the important features.


As a technical professional, you really cannot do without the features provided by VMware Workstation. There are so many things that you can do with it that you can’t do with Virtual PC or VMware Player that it should justify the cost. Looking for a free copy? If you are in the Philadelphia area on November 19th, consider attending the PAVMUG meeting; you may just win a copy. My next post will cover what is planned for this day.

A Different Take on CEE and FCoE

Last month, I attended a Brocade Net.Ed Session that covered Converged Enhanced Ethernet (CEE) and Fibre Channel over Ethernet (FCoE) and the idea of Server I/O Consolidation. If you missed the Net.Ed sessions, you can learn about it at Brocade’s Training Portal. Once you register / login, click on Self-Paced Training and search or browse for FCoE 101 Introduction to Fibre Channel over Ethernet (FCoE). It’s free. Here is an unabridged report about the Net.Ed session with some of my opinions wrapped in:


With cloud computing, the consolidation of servers, storage and I/O is becoming popular. Once upon a time, server consolidation ratios were bound by processor and RAM counts. With the introduction of servers with higher core counts, faster processors and higher RAM capacities, the new boundary is becoming I/O. And the I/O stack is answering the call for faster speeds. If you look at the trends, Fibre Channel speed has gone from 1Gb to 2Gb to 4Gb and now 8Gb. Soon, 16Gb FC will be the norm. Ethernet has gone from 10Mb to 100Mb to 1Gb and now 10Gb. The next chapter will bring 40Gb or 100Gb or both.

Fibre Channel and Ethernet have been in a leap frog contest since Fibre Channel was introduced. And there are plenty of arguments about which is “better” and why. Remember how iSCSI was going to take over the world with storage I/O? Why? Because people think they can implement it on the cheap. If it is implemented properly, it may not be that much cheaper than FC. I see too many instances where admins will implement iSCSI over their existing network, without thought of available bandwidth, security, I/O, etc. Then they complain how iSCSI sucks because of poor performance. Consolidation magnifies this. To top it off, iSCSI doesn’t help when dealing with things like FICON or the many tape drives that need faster throughput than what iSCSI can offer.

Hardware consolidation is also popular, and sometimes occurs during the server consolidation project. Blade servers are becoming more popular for many reasons. Less rack space, fewer cables, centralized management, etc. are all good reasons for blade servers. I just LOVE walking into a data center and looking at the spaghetti mess behind the racks! Even with blade servers, the number of cables is still crazy. Some people still have Top of Rack switches, even with blades. More enlightened people have End of Row or Middle of Row switches. But there is still that mess in the back of the rack. I especially love when some genius decides to weave cables through the handles on a power supply…

Consolidate Your I/O

Enter I/O consolidation. Brocade calls it Unified I/O. This is supposed to reduce cabling even more. I say “maybe.” In order to consolidate I/O, different protocols, adapters and switches are necessary. OH MY GAWD! New technology! This means the dreaded “C” word… Change. In a nutshell, it reduces the connections. You go from two to four NICs and two to four FC HBAs to two Converged Network Adapters (CNAs). It is supposed to reduce cabling and complexity. It’s supposed to help with OpEx and CapEx by enabling more airflow/cooling, and saving money on admin costs and cable costs, blah blah blah… Didn’t we hear this about blades too?
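To put some rough numbers on the cabling claim, here is the arithmetic for a hypothetical rack (the server count and adapter counts are my own illustration, not vendor figures):

```python
def cables_per_server(nics: int, fc_hbas: int) -> int:
    """Conventional cabling: one cable per NIC plus one per FC HBA."""
    return nics + fc_hbas

servers = 16  # hypothetical rack of rack-mount servers

# Worst case from the text: four NICs and four FC HBAs per server.
conventional = servers * cables_per_server(4, 4)

# Converged: two CNAs per server, so two cables.
converged = servers * 2

saved = conventional - converged
```

With these assumptions you drop from 128 cables to 32, which is the kind of reduction the marketing slides are built on.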

The Protocols (Alphabet Soup)

In order to make all of this work and become accepted, you need to worry about things like low latency, flow control and lossless quality. This needs to be addressed with standards. The results are CEE and FCoE. The issue arises with CEE: not all of the components have been finalized. Things like Priority-based Flow Control (IEEE 802.1Qbb), Enhanced Transmission Selection (IEEE 802.1Qaz) and Congestion Management (IEEE 802.1Qau) are still working their way through the standards process. The IETF is still working on Transparent Interconnection of Lots of Links (TRILL), which will enable Layer 2 multipathing without STP.


  • Priority Flow Control (PFC), IEEE 802.1Qbb: Helps enable a lossless network, allowing storage and networking traffic types to share a common network link
  • Enhanced Transmission Selection (Bandwidth Management), IEEE 802.1Qaz: Enables bandwidth management by assigning bandwidth segments to different traffic flows
  • Congestion Management, IEEE 802.1Qau: Provides end-to-end congestion management for Layer 2 networks
  • Data Center Bridging Exchange Protocol (DCBX): Provides the management protocol for CEE
  • L2 Multipathing (TRILL, in the IETF): Recovers bandwidth with multiple active paths; no spanning tree
  • FCoE/FC awareness: Preserves SAN management practices

Source: Brocade Data Center Convergence Overview Net.Ed Session

My Two Cents

So, without fully functioning CEE, FCoE traffic cannot traverse the network. This stuff is all supposed to be ratified soon. Until these components are ratified, the dream of true FCoE is just a dream. The bridging can’t be done close to the core yet. So people who decide to start using CNAs and Data Center Bridges will need to place the DCBs close to the server (no hops!) and terminate their FC at the DCB. In the case of the UCS, this is the Top of Rack or End/Middle of Row switch. In the case of an HP chassis, it’s the chassis, and they don’t even have this stuff yet.

My question is this: Why adopt a technology that is not completely ratified? Like I said before, all of this requires change. You may be in the middle of a consolidation project and you are looking at I/O consolidation. Do you really want to design your data center infrastructure to support part of a protocol? Are you willing to make changes now and then make new changes in six months to bring the storage closer to the core?

So, let’s assume everything is ratified. You have decided to consolidate your I/O. How many connections do you really save? Based on typical blade chassis configurations, it may be four to eight FC cables. But look at it another way: You are losing that bandwidth. A pair of 10Gb CNAs will give you a total of about 20Gb of bandwidth. A pair of 10GbE Adapters and a pair of 8Gb FC adapters gives you about 36Gb. So, sure, you save a few cables. But you give away bandwidth. When you think about available bandwidth, is a pair of 10Gb CNAs or NICs enough? I remember when 100Mb was plenty. If consolidation is becoming I/O bound, do you want to limit yourself?  How about politics? Will your network team and storage team play nice together? Where is the demarcation between SAN and LAN?
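The back-of-the-envelope bandwidth math above looks like this (nominal link speeds only; 8b/10b encoding overhead and the like are ignored for simplicity):

```python
def aggregate_gbps(*links_gbps: float) -> float:
    """Sum nominal link speeds to get total available bandwidth."""
    return sum(links_gbps)

# Converged: a pair of 10Gb CNAs.
converged = aggregate_gbps(10, 10)

# Separate fabrics: a pair of 10GbE NICs plus a pair of 8Gb FC HBAs.
separate = aggregate_gbps(10, 10, 8, 8)

# The headroom you give up by converging.
lost = separate - converged
```

Twenty gigabits versus thirty-six: the cables you save come straight out of your bandwidth ceiling.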

I first saw the UCS Blades almost a year ago and I was excited about the new technology. Their time is coming soon. The HP Blades have always impressed me since they were introduced. They will never go away. I have used the IBM and Dell blades. My mother always said that if I didn’t have anything nice to say about something, don’t say anything at all…

When I take a look at the server hardware available to me now (HP and Cisco), I see pluses and minuses to both. The UCS blades have no provisions for FC, so you need to drink the FCoE Kool-Aid or use iSCSI. The HP blades allow for more I/O connections and can support FC, but not FCoE. If you want to make the playing field similar, you should compare UCS to the HP blades with Flex-10. This will make the back-end I/O modules similar. Both act as a sort of matrix to map internal I/O to external I/O. Both will pass VLAN tags for VST and both will accommodate the Nexus 1000V dvSwitch. The thing about Flex-10 is that it requires a different management interface if you are already a Cisco shop.

There’s a fast moving freight train called CHANGE on the track. It never stops. You need to decide when you have the guts to jump on and when you have the guts to jump off.