Wednesday, August 19, 2009

Sherman Meter Blends New Hotel-Review Cocktail

The travel industry is grappling with a fix for sometimes-suspect hotel reviews, and the latest entity to offer a solution is Shermans Travel, a deal-publisher and media company.

Shermans Travel believes it has found a credible formula in its new Sherman Meter Rating.

This new beta test of the Sherman Meter Rating closely follows the move by professional-review website Oyster Hotel Reviews to require consumers to use Facebook Connect if they want to pen hotel reviews. Oyster feels having consumers shed their anonymity will lead to more reliable reviews.

And the context for the Oyster and Shermans Travel review experiments is months of controversy about the usefulness of user-generated content in general, and hotel reviews in particular.

In its announcement, Shermans Travel gets a little carried away in its hype for the Sherman Meter, billing it as a "proprietary meta-search tool."

It may be proprietary, but it is not a metasearch tool in the usual sense: you cannot compare reviews from various sources side by side in grid-like fashion.

Still, the Sherman Meter does offer a hedge against review manipulation by hotels, and it is a useful contribution to the debate.

The Sherman Meter rates hotels by aggregating consumers' reviews from sites like TripAdvisor, Yelp and Yahoo Travel; tosses in "professional" critiques from Concierge, Frommer's and Fodor's; sprinkles it all with its own expert assessments; and comes up with a percentage of positive reviews with a corresponding color code.

If the Sherman Meter is green, then the reviews generally are positive. Yellow means the reviews are mixed, and red points to negative reviews.
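The blending and color-coding described above can be sketched roughly as follows. This is a hypothetical illustration only: the actual algorithm is proprietary, and the weights and color thresholds used here are assumptions, not Shermans' disclosed cutoffs.

```python
# Hypothetical sketch of a Sherman Meter-style blended rating.
# The real algorithm, weights, and color thresholds are proprietary;
# the numbers below are illustrative assumptions only.

def blended_positive_pct(consumer_pct, professional_pct, expert_pct):
    """Blend three sources with roughly equal weight, per Shermans'
    description of its approach."""
    return (consumer_pct + professional_pct + expert_pct) / 3

def meter_color(pct, green_at=70, yellow_at=50):
    """Map a positive-review percentage to a color code.
    Thresholds are invented for illustration."""
    if pct >= green_at:
        return "green"
    if pct >= yellow_at:
        return "yellow"
    return "red"

score = blended_positive_pct(72, 75, 80)  # made-up inputs for one property
print(round(score), meter_color(score))   # prints: 76 green
```

The point of the equal weighting is that no single source, whether a flood of consumer reviews or one guidebook critique, can swing the meter on its own.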

Darren Frei, editorial director for Shermans Travel, tells me that the aggregated user reviews, the professional reviews and Shermans' own expert reviews get "roughly equal weight in our proprietary algorithm."

Hence Shermans has figured out a way -- however imperfect -- to hedge against hotel industry PR and marketing gamesmanship.

That's because Shermans utilizes the "wisdom of the masses" found in sometimes hundreds -- or even more than 1,000 -- consumer reviews from a variety of websites for a particular property; the opinions of the professional or guidebook crowd; and Shermans' own staff-formulated reviews, with the latter being "a qualitative evaluation independent" of the Sherman Expert Review displayed on the website, Frei says.

Given that Shermans Travel -- which publishes deals, provides a travel search engine and has a print magazine -- likely makes most of its money in the media/advertising business, is there an advertising bias in the Sherman Expert reviews?

Frei says: "All of Sherman's Expert Reviews are selected and managed by our in-house editorial staff, which evaluates and edits hotel reviews without input from the advertising department."

So, what are the results?

I just spent the last eight hours or so checking out the Sherman Meter for a variety of properties and found similar patterns.

Let's consider, for example, the Waldorf-Astoria in New York City.

The Sherman Meter assesses 1,310 user reviews from a variety of websites, "professional reviews" [more on this later] from Fodor's and Frommer's [Concierge doesn't rate the Waldorf], and the opinions of Shermans' own experts, and gives the Waldorf a 76 percent, or green, rating.

Incidentally, the Sherman Expert Review for the Waldorf is descriptive but neutral, and doesn't really tell me whether the wallpaper is peeling or whether I am guaranteed to get a Brangelina sighting.

Independent of Shermans Travel, TripAdvisor, using 1,051 traveler reviews of the Waldorf, gives the property four stars and a 72 percent TripAdvisor Traveler Rating, in the same ballpark as the Shermans Meter.

Then we arrive at the so-called professional reviews.

It turns out that Concierge doesn't rate the Waldorf.

Fodor's rating actually isn't a professional rating. Instead, Fodor's provides a 3.4 "member rating," meaning the Fodor's rating turns out to be another consumer-review compilation.

However, I found the Fodor's Review of the Waldorf valuable because, unlike Sherman's Expert Review, at least the Fodor's Review offered pros and cons about the property, including Fodor's opinion that the "rooms [are] not contemporary; very public lobby includes loads of tourists hoping for Brangelina sighting."

And, I have problems with the Frommer's review methodology and the fact that the Shermans Meter gives it substantial weight.

Someone correct me if I'm wrong, but it appears that Frommer's, which features Arthur Frommer periodically railing against user-generated content, doesn't post negative reviews. Its hotel ratings are: 0 stars (recommended), 1 star (highly recommended), 2 stars (very highly recommended) and 3 stars (exceptional).

Thus, there are no reviews of hotels that are not recommended. Where is the credibility in that?

Frommer's rates the Waldorf three stars [its highest rating], or exceptional, and is alone among the above review sites in doing so.

And the actual Frommer's Review of the property, like Sherman's Expert Review, is descriptive but neutral, and not very insightful.

Meanwhile, Oyster, which relies on journalist reviews and just added consumer reviews using Facebook Connect, gives the Waldorf four stars out of five, and doesn't yet offer a consumer comment about the hotel.

Like Fodor's -- and unlike Shermans and Frommer's -- Oyster at least serves up some pros and cons about the Waldorf, including its opinion that the property has a "top-notch gym and spa (among New York's best)."

So, where does this leave us?

Shermans, with its mix of consumer reviews, professional reviews and its own expert reviews, has the same overall view of the Waldorf as TripAdvisor does, despite all the allegations about TripAdvisor's suspect consumer reviews.

TripAdvisor doesn't come out looking so bad, although I know I am discussing merely one, large, well-known property. (And, it is easier to manipulate rankings for smaller hotels.)

The expert reviews offered by Fodor's and Oyster are cool because at a minimum they are unafraid to ruffle some feathers.

And, Frommer's review stands out as the questionable one in the crowd because Frommer's stands alone in its assessment that the Waldorf is exceptional.

Meanwhile, there's raveable, which slices and dices 1,469 user reviews of the Waldorf, analyzing what might seem like a morass of complexity and transforming it into some clear and effective metrics.

raveable outlines its methodology here.

Instead of having to wade through 1,469 reviews and murky star ratings, raveable gives the Waldorf an overall rating, plus ranks the property on its service, the rooms and value. It also outlines where the Waldorf stands not only when measured against all other New York City hotels, but also evaluates its stature amongst the other four-star hotels (49th of 62).
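A peer-group ranking like raveable's "49th of 62 four-star hotels" can be sketched as below. This is a hypothetical illustration of the general technique, not raveable's actual methodology; the hotel names and ratings are invented.

```python
# Hypothetical sketch of raveable-style peer ranking: place a hotel
# among other hotels of the same star class, ordered by overall rating.
# All names and ratings below are invented for illustration.

def rank_within_class(hotels, name, stars):
    """Return (rank, class_size) for `name` among `stars`-star hotels,
    ranked by descending overall rating."""
    peers = sorted(
        (h for h in hotels if h["stars"] == stars),
        key=lambda h: h["rating"],
        reverse=True,
    )
    rank = next(i for i, h in enumerate(peers, start=1) if h["name"] == name)
    return rank, len(peers)

hotels = [
    {"name": "Hotel A", "stars": 4, "rating": 91},
    {"name": "Waldorf-Astoria", "stars": 4, "rating": 76},
    {"name": "Hotel B", "stars": 4, "rating": 84},
    {"name": "Hotel C", "stars": 3, "rating": 88},
]
print(rank_within_class(hotels, "Waldorf-Astoria", 4))  # prints: (3, 3)
```

Comparing a hotel only against its own star class is what makes the metric useful: a 76 percent rating means something different among four-star properties than among budget motels.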

The question then becomes: Would travelers rather rely on the opinions of admittedly seasoned experts on Frommer's, unvetted consumer reviews displayed on TripAdvisor, concise analysis of consumer reviews on raveable, journalists' critiques on Oyster, or Shermans' sometimes-flawed mix of user reviews, professional reviews and its own experts?

The jury is out.

And, consumers likely will vote with their feet -- or their browsers.


TripAdvisor Launches Family Vacation Critic in its Own Image

TripAdvisor: The Beat (of Hawaii) Goes On

Hotel-Review Dilemma: Expedia's TripAdvisor Gives Notice, Mum, But...

Update and Flashback: The Truth About TripAdvisor and Professional Reviews

On TripAdvisor, Oyster and Professional Hotel Reviews: My Totally Reasonable Solution

TripAdvisor vs. Kayak: Paid Reviews Don't Float My Boat


Daniele Beccari said...

Hi Dennis,

I think the problem of hotel industry gamesmanship was solved years ago, when hotel booking sites from Venere to Booking, and now everyone else, started taking reviews exclusively from people who have actually booked with the property reviewed.

What the Sherman Meter and other such indexes are trying to do is rather provide a reliability score for TripAdvisor, mainly, as it is the main repository of free reviews that are open to manipulation.

Josh Steinitz said...

I like the Sherman's concept, but I agree that there's no great way to create a single blended score that incorporates consumer and expert reviews effectively, since they tend to be on different scales and have different goals and audiences. Small tweaks in the algorithm can have dramatic impacts. And perhaps most importantly, when review sources are weighted by number of reviews, TripAdvisor will always drive the bulk of the score.