Tom Handy is a writer and philosopher who has popularized the Frog Tactic. The idea is that if you want to cook a frog (or the World) alive, all you have to do is slowly turn up the heat from warm to warmer until the legs (the countries of the World) are cooked and done exactly the way you want. Well, the software vendors are performing the Frog Tactic on the broad IT community. Slowly and stealthily they have added little constraints of ever larger and wider impact to their EULAs and other licensing agreements, limiting how and whether benchmarks of their software can be done and published. Tucked away in some hidden corner of the EULA-that-is-never-read-BUT-STILL-APPLIES-IN-A-COURT-OF-LAW are benign and equitable-sounding snippets such as: “benchmarks on this software cannot be published without prior consent from Redmond” (or Armonk or Cupertino or Redwood Shores or Palo Alto or whomever; the practice is very widespread and not just at Microsoft). This semi-innocuous restriction starts to morph into: “any benchmarks, performance tests, or reviews of the software are subject to review by the vendor to establish that they have been conducted properly.” And the most recent level of heat is: “no comparisons, benchmarks or performance tests of this software can be published.”
Now the IT press and media are reeling for a number of reasons: migration of readers to other venues, the consequent migration of ads to other waters, commodity pricing in major markets, and massive changes to how the IT business is covered. One of those changes has been the EULA restrictions and the question of how much they are relaxed when benchmarking is done by the IT trade press. Not a lot, apparently.
Stellar example number one: Microsoft Vista and its bloat factor. Elsewhere we have been following the Vista reviews, waiting to see performance benchmarks, because they are of the essence in assessing whether Microsoft has finally succeeded in delivering an enterprise-caliber desktop OS (with the server side to appear shortly) offering good performance and security as well as features. To date we have seen performance benchmarks in NONE of the Vista reviews.
For example, InformationWeek has dared to produce one of the most complete reviews of Vista. They have compared it to Mac OS X and Linux. They have looked not just at the UI and the set of “new” features but also at the old utilities, across a fairly broad set of usage cases. There are many screen shots of the Vista UI versus the Mac OS X UI. There are basic pricing and packaging comparisons. But is there any word on the most important factor of all – price/performance based on measured benchmarks of how fast apps and utilities start up, run a suite of benchmarks, or pass a gauntlet test of known security and reliability risks? NADA.
So how do you make intelligent “price/performance” decisions when the IT vendors have virtually prohibited measuring and benchmarking performance? Well of course, you use Stephen Colbert’s newest neologism – “factiness”. Factiness occurs when only the vendor gets to package and structure the tests and facts for you. You go to “Get the Facts – the only ones allowed” from carefully scaffolded and constructed company-sponsored tests and benchmarks, and hope they are at least truthy. Any dissenting tests and opinions may be branded as liberal and libellous – and therefore not allowed.
Some people scoff at the TPC set of benchmarks – hopelessly rigged for maximum possible performance or where-can-you-get-that-pricing, known to have included cheating via non-standard software and/or hardware, and even the occasional ringer routine running deep in the bowels of the test set-up. But in the current desert that is benchmarking, TPC begins to look like a lavish oasis.