  • Cognitive computing is coming to real estate, not to displace real estate professionals, but to make them more valuable.
  • MLSs have focused their efforts on processing listing data and are not set up to acquire, store and process the data necessary for cognitive computing applications at the local level.
  • Upstream or the MLSs have a chance to be leaders in bringing cognitive computing applications to the industry, but it's unlikely that both will be needed to fill this role.

The MLS has served the industry well. For decades, it has provided a basis for all broker participants and their agents — large or small, franchise or indie — to share listings while simultaneously competing for business under a common set of local rules. It’s the great equalizer regarding access to available inventory.

It is, by and large, the single essential utility one needs to be in the business of residential real estate. You know this by asking the simple question: how would you or your firm function today without easy access to all of the listings and the corresponding offer of compensation?

Going deeper than listing data

But what would happen if brokers and their agents needed to share more than local property listing data to compete at the most basic level? That day is coming.

If you follow tech trends, you certainly have come across the term artificial intelligence. More accurately, AI is part of a larger trend called cognitive computing. It sounds scary, and some dismiss the term as too science-fiction-y or simply absurd, at least for the real estate industry, where machines, as we hear, could never replace the function of the trusted agent.

So what is cognitive computing? It is a form of data analytics that goes beyond telling us what happened or what might happen. It can actually learn from all the data it has at its disposal.

Through this learning process, the system makes inferences about the data it is analyzing. In other words, it starts to think, and the more it thinks, the more it learns. And the result is that it can help us decide on what to do next. Not to displace humans, but to make human decisions even better by analyzing more data than a human could ever do.
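To make that "learns as it sees more data" idea concrete, here is a deliberately toy sketch, not a real cognitive system, of an estimator that improves its home-price guesses as more comparable sales arrive. All numbers and names here are made up for illustration.

```python
# Toy illustration: a price estimator that "learns" from comparable sales.
# As more comps accumulate, its price-per-square-foot estimate stabilizes.

def estimate_price(comps, sqft):
    """Predict a price from the average price-per-square-foot of comps seen so far."""
    if not comps:
        return None  # no data yet -> no prediction
    avg_ppsf = sum(price / size for size, price in comps) / len(comps)
    return avg_ppsf * sqft

# Hypothetical sales stream the system observes over time (size in sqft, sale price).
comps = []
for sale in [(1000, 310_000), (2000, 590_000), (1500, 452_000)]:
    comps.append(sale)

print(round(estimate_price(comps, 1800)))  # estimate for an 1,800 sqft home
```

Real cognitive systems do this at vastly larger scale and over messier inputs, but the core loop is the same: every new observation refines the next recommendation.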

For you skeptics, according to Bloomberg, IDC predicts that in just four short years, 50 percent of all business analytics software will include “decision-technology” built on cognitive computing. In that same period, Deloitte predicts that nearly all of the top 100 companies in the world will be using cognitive computing applications, and revenue is forecasted to reach nearly $14 billion. That’s billion with a “B.”

You may have read about IBM Watson beating (no, crushing) the Jeopardy champions in 2011. And just a few weeks ago, Google DeepMind’s AlphaGo beat (no, slaughtered) the world champion in the ancient Chinese game of Go, which apparently is a really big deal in terms of mimicking human intuition and decision-making.

Here is a great video showing how doctors are using it today. Other professional service industries such as accounting and law are jumping on board. Although these examples all reference IBM’s Watson, keep in mind that Microsoft, Google, Apple, Facebook and HP are all in the fray as well. It’s here; it’s real, and it’s coming to Main Street.

Why cognitive computing?

What makes cognitive computing so special is its ability to analyze large amounts of unstructured data such as text, images, voice, sensor readings and video.

Unlike the data that lives in tidy fields in your MLS database, unstructured data (what some call “dark data”) is all around us, in the nooks and crannies of human interaction with each other and machines. And it is super important because it makes up most of the data in the world, and it can provide insight not obtainable from structured data.
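The contrast is easy to show. Below is an illustrative sketch (the record fields and the note are invented examples): the same kind of fact lives in a tidy, queryable MLS-style record and buried in free-form text, where extra work is needed to pull out a usable signal.

```python
import re

# Structured data: lives in tidy, named fields and is trivially queryable.
listing = {"beds": 3, "baths": 2, "list_price": 450_000}

# Unstructured data: the same kind of fact buried in a free-form agent note.
note = "Buyer loved the kitchen light but said anything over $450k is a stretch."

# Extracting a usable signal from the note takes parsing, not a simple lookup.
match = re.search(r"\$(\d+)k", note)
budget_ceiling = int(match.group(1)) * 1000 if match else None

print(listing["list_price"], budget_ceiling)
```

Multiply that note by every showing, call and email in a market, and you have the "dark data" cognitive systems are built to mine.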

For example, Redfin is doing some fascinating things with beacon technology in its app. This is the type of data that could provide real-time insight into consumer home shopping behavior. It’s also unstructured data — generated in real time — that does not live in the MLS. There are even virtual brokerage firms trying to use this to run their entire organization.

The future of service

So here’s the question. What if cognitive computing enables agents to be better professionals and make better recommendations to their clients? What if access to cognitive computing power, and the data necessary to power it, becomes the 21st century equivalent of the MLS utility?

Step aside from your business for a moment. If you could choose between a doctor who had access to a Watson-like system that provided that doctor with insight into data that he or she could never assimilate on his or her own or one that didn’t, which would you choose? What about your lawyer or accountant? In a cognitive computing world of real estate haves and have-nots, who will be positioned to deliver a better value proposition to their buyers and sellers?

So here’s the issue. Cognitive computing takes a boatload of data. Multiple types from various sources. The more, the better. Today, MLSs are not equipped to acquire, store and provide access to unstructured data. Remember, the “L” in MLS stands for listings.

On the other hand, it appears that Upstream intends to go beyond listing data. In a recently published marketing document, here is what Upstream says about the type of data it intends to store:

“Storage of the ‘Full Spectrum’ of Data. MLSs only store and manage listing content, which is only part of the spectrum of data your company handles. Participating Upstream firms will enter all kinds of other information, from customer data to agent rosters, to vendor information to historical data on properties and many other categories that are not currently part of MLS and never will be.

“Any data generally entered more than once is especially appropriate to enter into Upstream, but this database can be each broker’s single point of data entry, and every use of data can be fed by accurate, up-to-date data from Upstream.”

Upstream or the MLS?

So who is better positioned to fill this upcoming need? Upstream or the MLS?

It would seem that Upstream has the advantage here. Upstream will be a national data warehouse that is apparently being designed to handle all types of data. It has large national brokerages and franchise organizations behind it that have access to vast amounts of dark data, either themselves or through their affiliated brokers and agents.

It carries neither the legacy of vendor-driven reliance on listing-centric technology nor the institutional mindset that data starts and ends with the listing. Those things are very, very hard to break away from.

The big problem for Upstream, however, is something that is often repeated in brokerage circles: real estate is local. And the benefits of cognitive computing in real estate will also be applied at the local level. What is going on in my hometown of Seattle’s real estate market is very different from what is happening in Phoenix, Atlanta or D.C.

The present advantage that MLSs have is that all of the local players already participate. The fact that the MLS publishes all of the available listings is its power. Although the national brokerages and franchise organizations participating in Upstream might have data at scale on a national level, I don’t see any that have data at scale on a local level.

So I see one of three things happening:

1. Upstream builds its vast data warehouse and all (or enough) brokers participate so that it will have the data necessary to power cognitive computing applications at a local level.

If all the brokers are participating in Upstream, then it will surely follow that the MLS, as we presently know it, will either go away or be replaced by a new form of MLS that is not a data organization. There would simply be no need for redundant data services.

2. MLSs recognize the train that is coming at them and quickly move their mindset from a model focused on listing data to all of the data necessary for cognitive computing applications in their local market.

In essence, they compete with Upstream by leveraging their local broker participation asset. All (or enough) local brokers play, and the multiple listing service becomes the multiple data service. Upstream might still exist but have a more limited scope.

3. Neither No. 1 nor No. 2 happens. That won’t stop cognitive computing from coming to real estate. It just won’t happen from within (hint: we’ve seen this before).

I sincerely hope we don’t end up with the third option. Cognitive computing has the potential to add massive value to the real estate brokerage value proposition and do for agent professionalism what no other initiative could touch. And it would be a shame if outside entities led the charge.

Formerly Senior Vice President of Industry Relations at realtor.com, Russ Cofano is an industry consultant and speaker with 25 years of executive-level experience in technology, association, MLS, brokerage and law.  You can find him at Cofano Consulting or LinkedIn.
