Some "Rank" Behavior Regarding Outsourcing Listings
Steve Hamm penned a blog piece and an article for BusinessWeek (see: "Outsourcing's Dubious Kingmakers", www.businessweek.com, July 14 & 21, 2008) re: Brown & Wilson's annual Black Book of Outsourcing. The article questioned the methodology the firm used in creating its annual listing of outsourcers, as well as the movement in the rankings and the location of the company's office.
I also saw Ed Nair's editor's note in this month's Global Services magazine (see: "The Outsourcing Lists", July 2008, www.globalservicesmedia.com). He took offense at a document that purportedly touts the Black Book list and disparages others. Since I cannot find this document, I can't comment on it, but I was certainly intrigued by its alleged comments.
That a list could generate so much passion and print was interesting to me, as I generally can't get many people to care that much about the relative ranking of two outsourcers. People seem much more interested in People magazine's Sexiest Man pick than in a list of outsourcers.
Today, Wachovia Capital LLC hosted a conference call with Scott Wilson and Doug Brown of Brown & Wilson. Mr. Brown and Mr. Wilson stepped through a litany of Facts and Fictions re: their study and its methodology. Just based on this, it would appear that many of the stories attributed to their study are not true.
On that conference call, we learned that:
- vendors are evaluated by customers across 26 factors, each receiving a score of 1 to 10, for a maximum of 260 points (a rough sketch of this arithmetic follows the list)
- each vendor is ranked based on the mean score across all submitted ballots
- customer ballots are carefully screened to ensure that only the customer, and no one else, completed the ballot
- at least ten ballots per company are needed for the company to be included in the rankings
- outsourcers who try to cajole more of their clients into voting tend, on average, to hurt their rankings
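For those who want to see the arithmetic, here is a minimal sketch of the scoring math as described on the call. It is purely illustrative: the function names and data layout are my own assumptions, not Brown & Wilson's actual process or code.

```python
# Illustrative sketch only -- not Brown & Wilson's actual code.
# Each ballot scores a vendor on 26 factors, 1-10 each (max 260 points);
# a vendor's published score is the mean ballot total across all accepted
# ballots, and vendors with fewer than ten accepted ballots are excluded.
from statistics import mean

NUM_FACTORS = 26
MIN_BALLOTS = 10

def ballot_total(factor_scores):
    """Sum one customer ballot: 26 factors, each scored 1 to 10."""
    assert len(factor_scores) == NUM_FACTORS
    assert all(1 <= s <= 10 for s in factor_scores)
    return sum(factor_scores)  # at most 26 * 10 = 260 points

def vendor_score(ballots):
    """Mean ballot total across accepted ballots, or None if too few ballots."""
    if len(ballots) < MIN_BALLOTS:
        return None
    return mean(ballot_total(b) for b in ballots)
```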
What appears to be the biggest point of confusion in the market is that there are now 4,200 outsourcing vendors being evaluated, and the difference in scores between one vendor and the next on the top 50 list can be a mere few thousandths of a point. For example, consider the differences in scores in the example Brown & Wilson provided for the Wachovia call.
When the differences are a mere few thousandths of a point, they are not statistically significant. What we should be seeing are the standard deviations for these scores; potential users of this data would then see that meaningful differences may not exist between large groups of the outsourcers evaluated. In other words, the numerical difference between two outsourcers with similar scores may not be a solid basis for including or excluding either from a selection short-list.
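To make that concrete, here is a rough sketch, using made-up ballot totals, of the kind of comparison I would like to see published: the difference between two vendors' mean scores set against the standard error of that difference. The vendors and numbers below are entirely hypothetical.

```python
# Hypothetical example: two vendors whose mean scores are nearly identical.
# Compare the gap between their means to the standard error of that gap.
from statistics import mean, stdev
from math import sqrt

def standard_error(scores):
    return stdev(scores) / sqrt(len(scores))

# made-up ballot totals (out of 260) for two vendors
vendor_a = [243, 251, 238, 249, 246, 252, 240, 247, 244, 250]
vendor_b = [244, 248, 241, 250, 245, 249, 242, 246, 243, 251]

diff = mean(vendor_a) - mean(vendor_b)
se_diff = sqrt(standard_error(vendor_a) ** 2 + standard_error(vendor_b) ** 2)

print(f"difference in mean scores: {diff:.3f}")            # ~0.1 points
print(f"standard error of the difference: {se_diff:.3f}")  # ~1.9 points
# A gap far smaller than roughly 2x the standard error is indistinguishable
# from ballot-to-ballot noise.
```

With numbers like these, a gap of a tenth of a point, let alone a few thousandths, is swamped by the ballot-to-ballot variation, which is exactly why standard deviations (or standard errors) belong alongside the published scores.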
Moreover, when the top 50 vendors are selected from a pool of 4,200, you can bet that their scores will be close. In fact, the top 50 represent just over the top 1% of the outsourcers evaluated (50 of 4,200 is roughly 1.2%). Comparing these scores is like trying to delineate the difference between your salutatorian and your valedictorian. They're both great.
As for the Brown & Wilson report, I am satisfied that they have adequately calculated the results and protected the integrity of the inputs. People who rely on this data need to recognize that:
- a statistical score is no substitute for doing your own due diligence re: outsourcers
- narrow score differences are not reasons for including or excluding outsourcers from consideration
- with outsourcers, as with mutual funds, past performance is no guarantee of future results
I do not have, nor have I seen, the document that supposedly came from CIO magazine that alleged some other points.
If you'd like to see the Top 50 list, click on this Wall Street Journal link.