Fail Uncategorized vmware

#VMware #vSA appliance, the stupidest pricing decision I’ve seen in a long time…

On the face of it the VMware Storage Appliance is a really good idea.

Many virtualisation installations have a bunch of servers, but no separately installed network storage on which the VMs can be stored.  This means that VMs are tied to the host on which they are running.  Amongst other disadvantages, it means that if the host fails, the VMs go with it.  It’s a bit like the old physical days: lose the server, lose the service.

In a decently configured SAN setup, HA will cause any guest servers to be restarted on other hosts, subject to certain conditions – but in principle, provided you have both a) the capacity and b) configured it correctly, your network servers will be back quite quickly.

If you factor in Fault Tolerance (or guest-server-level resilience like Exchange DAGs) then users might not even notice an outage. Perfect.

The vSA gives the owner of servers without a SAN those benefits.  Internal storage on the host servers is consolidated into a single space available to all hosts.  In the event of a host failure, the other hosts still have copies of the VM guests and can bring them back quickly.

But the conditions/requirements attached to this are somewhat, ahem, interesting:
1. You must have RAID10 configuration for the internal storage.
2. Each server must have four Gigabit Ethernet ports to provide triangulated connections to the other two servers (the vSA is aligned with the SMB editions and only runs on three servers).
3. Best practice is that vCentre should not run on the vSA.  VMware staff at VMworld suggested it run on a separate box outside the cluster – how 2008!!

The consequences of this:
1. To provide (say) 3TB of usable storage the installation will need 12TB of raw disk space.
2. You need to re-use an old box (hardware support contract, anyone? RAID support? Driver support?) to run the vCentre server. And don’t forget, this “old” box has to be 64-bit!!
3. You need to invest in 6 dual-port NICs (you could get quads, but it’s better to spread the physical risk across two cards per server).
4. You should have a separate Gigabit switch to link up the vSA so that there is no LAN traffic impacting performance, and your SAN traffic is secure.
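The capacity maths behind point 1 is worth spelling out.  A quick sketch (on my reading of how the overheads stack: local RAID10 halves the raw capacity, and the vSA’s mirroring across hosts halves it again):

```powershell
# Assumed overheads: RAID10 mirroring (x0.5), then vSA replication (x0.5)
$raw = 12TB
$afterRaid10 = $raw / 2      # local RAID10 mirroring on each host
$usable = $afterRaid10 / 2   # vSA keeps a network copy on another host
"{0}TB raw gives {1}TB usable" -f ($raw / 1TB), ($usable / 1TB)
```

So a quarter of what you paid for is what you get to use.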

You then get an under-the-covers SAN running across all the hosts, provisioning storage for your VM guests.
Let’s say £100 for each dual NIC card, and £200 for each of twelve 1TB drives.  That’s £3,000 in total.

The alternative: say, a NetGear ReadyNAS 3200 (other SANs are available!) with 6TB of raw disk space, providing about 3.5TB usable in a RAID6-style configuration.  This can be had for around £3,000.  I’d put a second dual NIC card in the SAN to give resilience for the SAN connections, and another two resilient ports for a network management interface; say £175 (it’s special, it’s for a SAN).  You’d still need the switch, and I’d certainly consider two NIC cards in each server for physical resilience, so let’s say we still get the 6 dual NIC cards for £600 in total.  You might also want a pair of disks in each server to provide a RAID1 mirrored boot drive, but as you can boot ESXi from USB I’m going to say no (we are on an economy drive, after all).

This means the SAN is going to set you back about £775 more than the vSA cost (or about 25%).

Oh, but wait, I forgot something. The vSA licence costs money. A shade under $8,000, or say (and I’m being generous) about £5,000.  But hold on, if you’re a new customer buying VMware for the project, they’ll give you a whacking 40% discount.  So let’s call it £3,000.

Your 25% saving by not buying the SAN has just turned into roughly a 60% premium (£6,000 against £3,775).

What the %^]{ were they smoking when they came up with that idea???

Not only are you paying more but:
1. Your ESX servers are spending valuable computing resources managing a virtual SAN across themselves.
2. Your ESX servers are also spending valuable computing resources handling data from the virtual SAN.
3. The setup is so intertwined (the vSA is managed by vCentre, as are the ESX hosts themselves) that VMware recommend you host vCentre off the cluster – so the vCentre server is more exposed to risk, and is an additional cost and burden (which I’ve not costed).
4. By recommending a physical vCentre server VMware are exposing you to all the problems of a physical server – which they would normally rubbish.
5. If you hosted vCentre on the VMware cluster, then if everything was shut down you might not be able to start your servers up again.  No risk there then 🙂

I am appalled.

If the licence was a factor of 10 cheaper then it might be worth considering. But for any business looking at new kit for a virtualisation project, steer well clear.

If (as VMware said in targeting the product) you are worried about managing another box, then a) you have to in this model anyway – the vCentre server – and b) get some training or good support for the SAN.  If you truly think managing the SAN is going to be a problem, then so will managing the ESX farm.  So get someone in to do it for you.

VMware – I expressed concerns directly to you this week about your perception and targeting of SMBs.  This proves it to me.


PS: all numbers in this article are top-of-the-head recollections, not the latest figures from an Internet search. But they serve to prove the point.

Uncategorized vmware

Later that same night… #vmworld

Uncategorized vmware

#VMworld party, they’re still channelling #TechEd…

If you were in Nice in 1996 for Microsoft\’s 3rd European TechEd then you may remember the indoor funfair (and Elton, ahem, Jack performing).

I’ve already tweeted that VMware seem to be channelling the fun, focus, excitement and energy of those events. So blow me if they didn’t get an indoor funfair at the Carlsberg centre in Copenhagen too!

Fifteen years ago, and it feels just the same. Except I\’m
• greyer (and there\’s less of it)
• wiser
• grumpier

hey ho!


PS. I might remember more of this one too. I recall some very cheap vino… The only time I did so at a conference party.

PPS. Just realised, and this is really weird – I think I’m wearing the same shirt: my (still very proud to have and wear) official Windows 95 technical beta tester’s golf shirt.

Uncategorized vmware

I could have sworn that said muggle spray…

Fair Succeed Uncategorized vmware VMworld

#Success Yesterday I got angry with #VMware, at #VMworld but #congratulations are now deserved

Yesterday I blogged

Later that day, VMware proved they can be nimble and take out-of-scope decisions quickly. I have tired over recent years of large corporates telling me I have a good idea, but that their policy/budget/manager/exec does not allow it, and that they are sorry they cannot execute the good idea. So…

After the potential PR disaster of mistakenly telling a few hundred people at VMworld Europe they had won an iPod, VMware’s initial response was a simple “sorry”. Later that was upgraded to a free marketing t-shirt. Ho hum. I was not impressed.

So I wrote to them suggesting that for a few hundred quid (probably not even detectable in the budget for the conference!) they could have one extra iPod and hold a random draw for all those who thought they had already won one. It would not fix things, but it would at least give everyone a chance, and demonstrate that VMware understood the impact they’d had.

It’s NOT about “compensating”; it’s about recognising the excitement and then disappointment that people will have experienced.

To my amazement, VMware not only agreed, but said they’d give me an iPod to say thanks for the idea. I don’t often get the chance to praise big companies, but I am happy to do so here.

But, I stress, even if they’d not got a second one for me, I’d still have written this post.

Begin forwarded message:

Subject: RE: Suggestion, was: Re: Congratulations, You are a VMworld Survey Prize Winner


Thank you for the suggestion.

Since you came up with this suggestion, we will provide you with one.

Please come by meeting room xx in the Bella Center to receive your iPod.


Name removed

Subject: Suggestion, was: Re: Congratulations, You are a VMworld Survey Prize Winner

How about you put an iPod Touch into a draw for all the people who got the email?

Cost you a few hundred quid/dollars/euros and everyone who thought they had one, would at least have had a bite at one.

Not trying to be troublesome, just making a suggestion to overcome the loss of goodwill and the major disappointment felt all round. It wasn’t *inconvenient*, it was exciting, and then massively disappointing.

Uncategorized vmware

#Fail #VMware "Congratulations, You are a #VMworld Survey Prize Winner"

Or in this instance. Not.
Someone pushed the wrong button and this email went to ‘several hundred’ attendees.
We all make mistakes, but having reported record Q3 results last night, and upped the expectations for Q4, maybe VMware could do something a bit more than say they\’re really sorry.

Begin forwarded message:

From: The VMworld Team <>
Subject: Congratulations, You are a VMworld Survey Prize Winner

Thank you for completing a VMworld session survey. You have been randomly chosen to win an iPod Touch for your participation.

Please stop by the Registration Queries desk during the following hours to claim your prize.

07.30 – 20.00 Tuesday
07.30 – 18.30 Wednesday
07.30 – 16.30 Thursday


The VMworld Team

© 2011 The Active Network, Inc.

Fail Licencing Uncategorized vmware

In case you missed my tweets yesterday from the #VMware Licencing session…

Message from #VMware ref licencing 6. Look, honestly, did you *really* think you’d bought it?

Message from #VMware ref licencing 5. It’s so complex, we’ve written a plugin for it!

Message from #VMware ref licencing 4. We\’ve introduced a paradigm shift where software can alert you to the need to send us lots of money…

Message from #VMware ref licencing 3. We really thought hard about making it easy, but thought you should have to think hard too.

Message from #VMware ref licencing 2. You really need to reduce the RAM assigned to your VMs until the pips in the guest squeak.

Message from #VMware ref licencing 1. We really want you to pay for your test labs/spare VMs that you spin up. Best minimise your VM farm.

PowerCLI PowerShell Uncategorized vmware

#LFMF VMware datastores are case sensitive!

Working in a Microsoft world with only brief forays into Unix and Apple server technologies, I tend to forget some lessons from those alternatives 🙂

So, today when working on some PowerShell scripts to copy datastore folders around for backup purposes I was a bit stumped by a copy failing as no object was found. The essential components of the script are:

Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server <FQDN of host or vCentre> -Protocol https
$datastore = Get-Datastore Test
New-PSDrive -Location $datastore -Name TT -PSProvider VimDatastore -Root '\'
Copy-DatastoreItem 'TT:\sage\*' 'J:\esx\test\sage'
Start-VM -VM 'Sage'

Add-PSSnapin puts the VMware-supplied PowerCLI snapins in place to manage the ESX/vCentre architecture.
Connect-VIServer does what it says on the tin.
New-PSDrive creates a PowerShell drive mapping to the datastore in question so that it can be manipulated, and Copy-DatastoreItem with those parameters copies the entire folder over (you can recurse through folders and so on if you wish; this is a simple copy).

Can you see the mistake? No, I couldn’t either!

The script would fail on the Copy-DatastoreItem command and jump on to the Start-VM. Now I know there should be error handling and all that stuff, but this was a quick one-off to sort something out.

So I browsed the datastore through the vCentre interface; all was there, the target folders were there…

In the end, the Unix issue of capitalisation rang a distant echo. The Sage folder on the datastore was precisely that: “Sage”, not “sage”.

Quick edit, and all is running.
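For what it’s worth, a more defensive version of the copy could resolve the folder name case-insensitively first.  This is only a sketch, using the same illustrative ‘TT’ drive and paths as the script above, and needs a live PowerCLI session to run:

```powershell
# Look the folder up case-insensitively on the datastore drive first
# ('TT', 'sage' and the target path are the same illustrative names as above)
$folder = Get-ChildItem 'TT:\' | Where-Object { $_.Name -ieq 'sage' }
if ($folder) {
    # Use the folder's actual name, whatever its capitalisation
    Copy-DatastoreItem "TT:\$($folder.Name)\*" 'J:\esx\test\sage'
} else {
    Write-Warning "No folder matching 'sage' (in any case) on the datastore"
}
```

That way a “Sage”/“sage” mismatch degrades to a warning rather than a silent failure.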


’scuse the inappropriate word wraps in the code.

Fail King Crimson PowerCLI PowerShell Succeed Uncategorized vmware

#LFMF #PowerCLI Get-Folder contents #PowerShell

Because a “copy folder from the Datastore browser” backup of VM files is so inefficient, I’m writing a PowerShell process to improve my backups of the virtualised world.  Because I can move VMs around onto different storage locations, a hard-coded “go to this datastore, download these VMs” is going to need rewriting every time I do this.*

So I resolved to use the Get-Folder command as a starting point (and spawn a generic process for each folder that I have).

So I started to look at a folder (from the VMs and Templates view, not Hosts and Clusters) to do some testing on.  As the only completely non-active folder is Templates, I thought I’d start with that.

So the line of code I was looking at was something like:

Get-VM -Location (Get-Folder Templates) | Sort-Object Name

However I was getting nothing back; the code would run (there’s a lot more, but I won’t bore you with it until it’s all working), and there was a null result.  I didn’t quite spend days and days looking at it (see King Crimson – Indiscipline, lyrics here), but I did spend quite a while thinking I’d got something wrong.

Then I had a thought – isn’t there a Get-Template command too?

Coded like this:
Get-Template -Location (Get-Folder Templates) | Sort-Object Name

I get some results.  Stupid of me to test a folder with wholly atypical contents!
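As a note to self, one way round this (a sketch only, assuming a connected PowerCLI session and that the folder is really called Templates) is to ask for both kinds of object in the folder:

```powershell
# A folder can contain both VMs and templates, so query for both
$folder = Get-Folder -Name Templates
$contents = @(Get-VM -Location $folder -ErrorAction SilentlyContinue) +
            @(Get-Template -Location $folder -ErrorAction SilentlyContinue)
$contents | Sort-Object Name
```

Wrapping each call in @() means an empty result on either side still concatenates cleanly.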

More later!

*I know some will wonder why I take flat-file backups of VMs.  It’s because I’m paranoid, OK?  I copy them to external USB/FireWire drives for complete recoverability.  It’s not like I do it every day or anything.

Backdoor Fail Fixed Problem Succeed Uncategorized vmware

ESXi 4.1 Update 1 travail – lessons learned.

I’ve been biding my time over the last few months to migrate to ESXi.  Knowing that ESX 4.1 is the last edition of the “full fat” VMware, I knew my next move would have to be to ESXi; so rather than make a bigger job of it whenever (cough) 5.0 is launched, I thought I’d change over the long weekend when I knew clients would be closed.
It was entertaining.
Building a boot and install USB stick rather than using a DVD burned with an ISO image was an important part of the test.
This is going to come in useful next month as I have some client work then, where the dirty nature of the computer room (a breeze-block room in the corner of the warehouse) means that DVD drives become unusable within a few months – I dread to think of (and am not responsible for!) the state of the servers and SAN…  So anyway, I want to be able to boot and install from USB if necessary. The first guide I tried didn’t really work for me, but proved to be a good source of a procedure on how to do this.  However, there are some caveats to the process:
  • Syslinux 4.0.4 (the latest) does not work (or at least did not for me) – stick to 4.0.3!!
  • When modifying the contents of the stick remember to do everything!
  • Whilst the storage in my instance is software iSCSI, IT IS IMMENSELY PRUDENT TO DISCONNECT STORAGE.  As this install process initialises some storage, you do not want to accidentally wipe a LUN.  My recommendation is always to build ESX(i) hosts disconnected from storage.  It prevents an easily avoidable mistake.  Likewise I avoid “Boot from SAN” setups.
  • Make sure you follow all the steps. I managed to miss one or two a few times before I got it right.
  • Don’t forget that the KS.CFG is YOUR INSTALL SCRIPT.  It’s easy to forget this and take the sample content and run with it.  If you do, you’ll get an ESX box with the sample address as its IP, VMware01 as the root password, and ESXi-01.beerens.local as its full name, connected to a domain “beerens.local”.  I could be wrong, but I think this is unlikely to work in your world 🙂
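For illustration, a minimal KS.CFG for an ESXi 4.1 scripted install looks something like the fragment below.  Every value here is a placeholder of my own, not a working configuration – the point is just which settings you must make yours:

```
# Hypothetical KS.CFG - all values below are placeholders, change them
accepteula
rootpw ChangeMe1!
autopart --firstdisk --overwrite
install usb
network --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 --gateway=192.0.2.1 --nameserver=192.0.2.1 --hostname=esxi01.example.local
reboot
```

Note that --overwrite on autopart is exactly the sort of line that makes disconnecting storage first so prudent.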
So once the stick is done:
  • Check for any anti-affinity rules in DRS – this will make sure your VMs can have maximum mobility around the farm during the change.  You may want to weaken them.
  • Move any non-running servers off local storage (if there is any) to SAN or other shared storage – cut-and-paste or storage-migrate them.  If you storage-migrate, you can change the host as well, to unregister them from the server.
  • Storage-migrate all running VMs on local storage off the server to shared storage (no downtime here).
  • Put the ESX host in maintenance mode (and take the option to migrate all paused and stopped machines off the host).  All running guests will migrate off.
This will leave you with a host doing no work, and with no VMs stored on its local storage.
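In PowerCLI the evacuation steps above can be sketched roughly like this.  The host and datastore names are made up, and a real run needs more care (and a DRS-enabled cluster for the running guests to move):

```powershell
# Move everything registered on the host's local datastore to shared storage
# ('local-esx01' and 'shared-lun1' are illustrative datastore names)
Get-VM -Datastore (Get-Datastore 'local-esx01') |
    Move-VM -Datastore (Get-Datastore 'shared-lun1')

# Enter maintenance mode; -Evacuate also moves powered-off/suspended VMs,
# and DRS will vMotion the running guests away ('esx01' is illustrative)
Get-VMHost 'esx01' | Set-VMHost -State Maintenance -Evacuate
```

Once the host shows no registered VMs and no running guests, it is safe to rebuild.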
Now, and this is optional, but I highly recommend it.
  • Document the server setup – including network settings, iSCSI paths, vSwitch names and configs.  In fact everything you can!!!  If you are licenced for it, then consider Host Profiles as a means to the end.
  • Disconnect all external storage connections, and verify this by checking via vCentre.
Now you can start: insert the USB stick, boot the server, select boot from USB if required, and watch it install.  If you have boot from USB as the default, then at the end of the install you should remove the stick before the server boots again.
Your KS.CFG will do the initial configuration and you have a new ESXi server.
This is where some of my fun started.  Now please bear with me – some of this was done late at night over a bank holiday, so I did not do my normal thorough investigation, and I do not have answers to all the questions, but rather a list of issues encountered and some observations.
  1. vCentre
I thought my vCentre was up to date.  I was lazy; it was not.  I discovered on adding the new host to my network that there were some management issues from VC to ESX, so I needed to upgrade vCentre.  I also discovered that some VMs would not start when running on the new host – it seems they were mostly VM Version 4; but also (to make things harder) VMware Tools needs to be updated too!
  2. vCentre upgrade ISO
This is a 2.2GB download.  You do not want to do this on a 512Kb ADSL connection.  I hoiked out my 3G MiFi unit and downloaded it over the air to the laptop instead, achieving a ten-fold performance benefit.  Fortunately I had 3.5GB left on the monthly allowance, so all was well.
  3. vCentre upgrade action
Sadly this is a lengthy process, but by using full documentation from the installation (you do have this, don’t you?) I was able to breeze through the dialog boxes and get everything up to date except Update Manager.  For some reason that part of the ISO is corrupt.  I am downloading it again as I type.
For prudence I snapshotted the VM that is the VC before starting.  At times later on, I would be tempted to restore to this, put ESX4.1 back on the host and give up.
Oh, and don’t forget to take the in place upgrade option – if you go for a new database your whole farm is screwed! (no, I didn’t)
  4. vCentre Client upgrade
On starting the vCentre Client, the new VC edition wants an upgrade before I can connect to it.  This install fails…
Now this was fun… My main management server (still physical – for good historical reasons) is where I do most of the work.   However, this is now 6 years old and has had a large number of VMware components go through it.  Unfortunately… some old MST file was hanging around and the VI Client upgrade failed.  By now it was late at night, so after a quick burst of investigation I decided on a more radical approach.  I stopped all VMware services, hacked out all the VMware stuff from the registry, killed the VMware folders in Program Files, and rebooted the machine.  This did not completely fix the install, so I hunted down a few more VMware folders in the Documents and Settings tree; they went too.
  5. DNS and AD failure
Yes, you read that right.  When this box came back, DNS was down, and AD was not working as a consequence.  Fearing I’d ripped something out I hadn’t meant to, I was tempted to hit the backup tape (you do take backups, don’t you?) but waited a bit…
This being more a test lab than a production network, the primary physical box on which I was working is the original DC of the network.   The other DCs are virtual, and it turned out that neither had started properly when I had restarted the ESX hosts a bit earlier.  We had had a power cut earlier in the day, and whilst the kit had all stayed up, it seemed (only with hindsight) that although I have UPSs all round, a slight barf on one UPS had impacted a network switch and the virtual world was not talking to the physical world properly.  Taking the IT Crowd “turn it off and on again” philosophy to its logical limit… I shut down all the VM guests (you do have a PowerShell script for this, don’t you?!) and shut down the hosts.  I then power-cycled the switches and waited for them to come back.  I then booted the ESX boxes, and then the physical server, and all was well.  A quick check round logs and events proved this was the case.
I’m not going to try to work out why, as this was now 1am…
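The shutdown script mentioned in passing above can be as simple as this sketch.  A clean guest shutdown needs VMware Tools in each VM, the vCentre name is made up, and the timings would need tuning for your farm:

```powershell
# Politely shut down every running guest, then the hosts themselves
# ('vcentre.example.local' is an illustrative name)
Connect-VIServer -Server vcentre.example.local
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Shutdown-VMGuest -Confirm:$false

# Allow time for the guests to finish stopping, then stop the hosts
Start-Sleep -Seconds 300
Get-VMHost | Stop-VMHost -Force -Confirm:$false
```

Stop-VMHost normally wants the host in maintenance mode first; -Force is the blunt instrument for a whole-farm power-down like this.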
  6. vCentre Client now installed properly, and I can connect to vCentre Server again.
A quick bit of configuration of vSwitches, and all seemed to be well except…
  7. iSCSI connections
One of the iSCSI connections relies on decent security on the SAN side – and with the new ESXi installation the IQNs of the software iSCSI initiator had changed, so the SAN had to be told it was allowed to connect!  A quick fix there, and the new ESXi box can see all storage and works a treat.
  8. Finally, all was well.
  9. So I just need that good ISO for the Update Manager installation so that I can manage updates across the VMs (VM Version and VMware Tools for now).
  • Well, you can see from the above that Douglas Adams was right when he wrote “Don’t Panic” – I could have given up, fallen back on the backups, snapshots and original ESX 4.1 that I had, and gone back to square one.
  • Document your setup, NOW.  You never know when it might come in useful
  • In ESXi the Service Console no longer exists – look for the Management Network in your ESXi networking setup
  • IQNs can change.
  • Check your VM version – some of your older VMs may be Version 4 instead of 7.  In my experience, a VM at Version 4 had some issues starting and seeing network hardware on a new host.
  • Anti-affinity rules – keep an eye on them, and restore them when done.
  • If you use ESXTOP on ESX, don’t forget: without the Service Console, you won’t get this on an ESXi host (the remote resxtop is the alternative).
  • iLO – if you have it, make sure you know the password; it saves a lot of hassle connecting to the host.
  • Lastly, NEVER FORGET that you can use the VI Client directly against a host.  If the VC goes down, you can still start and stop guests, enter/exit maintenance mode, and reboot or shut down an ESX box.  This can be your friend.  A lot.
Oh, and very lastly – if you finish work at nearly 3am after problems like this, the early-morning Radio 4 news on the day Osama Bin Laden is killed makes for a pretty good wake-up call.