No stars among agent-rating systems

Know the flaws in measurement tools

There’s always a lot of talk about agent ratings and how that’s supposed to make life great for responsible agents and for consumers. It’s supposed to be a big holy grail of greatness in helping match up home sellers, homebuyers and real estate professionals.

I don’t really get it, though.

Sure, on the surface it makes some sense. Agent reviews are a little like testimonials. And there is also the whole concept of “social proof” in our socially networked world.

But when I really get down to examining it, I’m not sure I see how agent ratings help anyone. Except maybe the site or service that is hosting the ratings. And even then it’s a little questionable.

I’ll throw out some of my thoughts on this and hopefully the readership here can school me on where I’m off track.

The mechanics and patterns of rating systems

I’ll start off with the issues that are inherent in current rating systems, regardless of the industry. These issues present fundamental problems to rating agents online.

There are a variety of standard patterns for rating systems:

  • Thumbs up or thumbs down (binary).
  • Star systems (usually a five-position switch).
  • Open text (qualitative feedback).

None of these provides the granularity needed for a complex transaction or business problem. Sure, they're good enough for movie or book reviews — at least for books and movies that aren't very complex.
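To make the granularity point concrete, here's a minimal sketch of what each pattern actually captures as data. The class and field names are illustrative assumptions, not from any real ratings API:

```python
# Illustrative sketch: the three standard rating patterns, and how
# little structured information each one carries.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    thumbs_up: Optional[bool] = None   # binary: 1 bit of information
    stars: Optional[int] = None        # five-position switch: ~2.3 bits
    text: Optional[str] = None         # open text: rich but unstructured

# A complex transaction collapses into one of these tiny signals:
good_agent = Review(thumbs_up=True)
okay_agent = Review(stars=3)
nuanced = Review(text="Great with negotiation, slow to return calls.")
```

Everything about why the transaction went well or badly has to squeeze through one of those three fields.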

But my friends and clients in real estate constantly remind me that the problems that real estate professionals overcome in their efforts to match people with places to live are complex and can’t be digitized very easily.

Either the rating systems are inadequate or my friends and clients are wrong. If the success of a real estate transaction can be accurately portrayed digitally — either in a binary system or in a 1-to-5 system — then we should be able to start modeling real estate agents in software.

The qualitative, open-text-field rating system is an improvement but offers its own challenges. Because the specific needs and situation of the person providing the text review aren't always embedded in the review itself, there is little basis to judge the applicability of what he has to say.

The nature of rating things

For a review to have real value, the reviewer needs a variety of experiences to draw on when making a judgment. Saying that you like something or don't like something, or even that you had a good or bad experience, is not very meaningful in a void.

For example, I have had shrimp and grits at exactly one restaurant, ever. It is in Duluth, Ga., and it was the absolute most awesome thing I have ever eaten. I’m a complete junkie for this dish.

I started a hashtag about it. A friend of mine, Rivers Pearce, who works in Charleston, S.C., respectfully disagrees with my assessment of Duluth’s shrimp and grits prowess.

The reason is that Rivers has a vast quantity of experience regarding shrimp, grits and the combining thereof. I have one experience (well … many, many experiences at one restaurant).

My review of the greatest shrimp and grits would not add value to anyone’s decision-making process about where to eat shrimp and grits. Rivers’ review would add value.

Rating systems, out of politeness, would show both of our reviews as if they were equally valid.
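The equal-weighting problem has an obvious alternative: weight each rating by the reviewer's relevant experience. Here's a minimal sketch, assuming each reviewer reports how many comparable experiences they've had; the weighting scheme is hypothetical, not taken from any real system:

```python
# Hypothetical experience-weighted average rating.
def weighted_rating(reviews):
    """reviews: list of (stars, n_prior_experiences) tuples."""
    total = sum(stars * n for stars, n in reviews)
    weight = sum(n for _, n in reviews)
    return total / weight if weight else None

# My one-restaurant rave (5 stars, 1 place tried) vs. Rivers' measured
# 3 stars backed by 9 comparable experiences:
print(weighted_rating([(5, 1), (3, 9)]))  # 3.2
```

Even this toy version shows why the polite equal-weight approach is a design choice, not a necessity — though getting honest experience counts out of reviewers is its own hard problem.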

If it’s true, as I’m told by real estate professionals and the sales consultants who love them, that people have a homebuying/selling experience about once every seven years, then the base level of experience in the general population isn’t sufficient for much of a review.

And if the person does have a lot of experience, then chances are he is a specific type of homebuyer/seller who may have needs that don’t align well with the typical homebuyer/seller.

People buy or sell houses for a variety of reasons, and these reasons have a direct relationship to whether their experience is positive or negative — even when the outcome (a bought or sold house) is the same for everyone.

Given the lack of experience by people leaving the ratings and reviews, there will be a natural bias toward extremely negative experiences and mind-blowingly good experiences.

The merely competent experiences will not be adequately represented in the bell curve, because those reviewers know they don’t have enough experience to tell whether what they received was good, bad or merely competent.
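This self-selection effect can be sketched with a toy simulation. The thresholds and the uniform quality model here are assumptions for illustration, not data:

```python
# Toy simulation of the claimed bias: inexperienced reviewers only
# bother to post about extreme experiences, so the middle of the
# bell curve goes unreported.
import random

random.seed(42)

def leaves_review(quality):
    # hypothetical: only a very bad (<2) or very good (>4) experience
    # motivates a first-time reviewer to post at all
    return quality < 2 or quality > 4

experiences = [random.uniform(1, 5) for _ in range(1000)]
posted = [q for q in experiences if leaves_review(q)]

# What gets posted is bimodal: nothing between 2 and 4 survives.
print(len(posted), "of", len(experiences), "experiences become reviews")
```

Half the experiences vanish, and the published ratings cluster at the two extremes — which matches what the ratings pages actually look like.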

Reliability, transparency and social

To help get around this issue of weighting all reviews equally — even when someone lacks experience to leave a qualified review — ratings systems have all sorts of bolted-on junk:

  • Authenticated identity of reviewers.
  • Strict no-takedown policies.
  • Social layers to help people identify whether they know the reviewer or not.
  • Clearly marked paid reviews, and so on.

None of these really helps. They just highlight the lack of reliability in online rating systems.

Furthermore, the gaming of the ratings systems is fairly straightforward. It’s much, much easier than gaming search engine results (usually because the ratings systems are making money directly from the people who want to game the system).

This results in a transparency problem that visitors either choose to ignore or self-filter in some way.

As long as the system providing the rating is being paid for by the people who are being rated, the whole thing is a simple con, and most people will understand this.

The system will be used if there’s no other reliable way of making an informed decision, but it won’t be relied upon. Advertising business models are at risk in this, and so are membership business models.

Adding social layers helps this a little, but mainly to encourage the person reading the rating to ask the rater what the “real” situation is. If the person leaving the rating is truly a valuable social contact for the person reading the rating, then he would have talked about this already.

Ratings as entertainment

There is value in current ratings systems. But it’s entertainment value — not value in decision-making.

The entertainment value for the person leaving the rating is in expressing his status as an expert — someone worthy of rating and reviewing something, someone important.

This is great fun and may be important to the self-worth of the individual leaving the review or rating. But it shouldn’t be misconstrued as being useful for the person reading the rating.

The entertainment value for the person reading the rating is to see what sort of funny things people get all jacked up about. It’s sort of like listening to shock radio or viewing a horror movie.

Because the bias of ratings will be at the extremes of the experience spectrum (horrible and great) and because people’s emotional self-worth is tied up in their own rating or review, the reader will have access to a wealth of exciting commentary. It’s sort of like reading pulp fiction, I guess.

Is buying or selling a house the same as buying a product on Amazon or going out to dinner?

There is a high cost for making an error in a homebuying/selling experience. There is also a correspondingly higher personal-status issue at stake.

People don’t buy or sell houses as often as they go out to dinner. They don’t do business with as many real estate professionals as they do with book authors.

The idea of a rating system makes sense. It’s desperately needed and wanted by people who are looking to buy or sell their house. But unless the issues outlined above are navigated clearly, we’ll be stuck with entertainment value rather than decision-making value.

Gahlord Dewald is the president and janitor of Thoughtfaucet, a strategic creative services company in Burlington, Vt.
