One point of the Edina Realty Inc. "Syndication Thesis" can be handled simply, for the benefit of all.
The thesis, which addresses what a data owner needs from syndication (I regret to say I can’t find it on the company site, even though it’s been quoted across many sources), reads as follows:
"Our agents don’t lose future business opportunities because a nonlisting competitor pays to present themselves as the contact for your listing.
"Our agents don’t have to pay — directly or indirectly — for leads on their own listings.
"Our sellers can be assured that leads on their listing are being handled by an expert.
"The quality and accuracy of your listings data is assured.
"Potential buyers are provided with fast, knowledgeable responses via the listing agent or our seven-day-a-week customer service department."
Amid all the hubbub of discussion and hunting for secret motives, it might be easy to miss that this is not a half-bad list for starting some honest conversations about strengthening industry relationships and about the nature of service within the real estate industry.
Tackling all of the items in the thesis would require a bit more space than we have in this column. But I very much want to address one of the requirements:
"The quality and accuracy of your listings data is assured."
This issue of data quality has come up a lot in a variety of discussions I’ve had with vendors, aggregators, consultants, brokers and agents. Pretty much every position in the real estate industry has had some sort of take on data quality and whether it was important, or to whom, and so on.
Why don’t we just fix the data quality problem so that other more interesting and challenging aspects of the Edina thesis can be addressed?
Fixing syndication: Admitting there’s a problem
First off, let’s be sure that everyone agrees that syndication of real estate data is completely and utterly broken. It’s so broken. It’s broken in a way that really doesn’t help anyone. Not even the people that you, personally, think benefit from the brokenness of syndication are really benefiting.
- Tech vendors have to integrate with a wide spectrum of data formats and distribution methods, dramatically increasing the cost of developing products.
- Brokers feel that their business model is being threatened by others who use their data.
- Third-party aggregators have the same data-integration issue as tech vendors, plus they have to negotiate a variety of uneasy feelings at the policy level because their data suppliers don’t feel in control.
- Sellers are wondering why data published on some sites is out of date.
- Buyers are wondering why the house they just bought is still listed for sale all over the Internet.
Let’s be clear: Syndication is broken.
I could be totally wrong on what I’m about to say because I wasn’t in the room for all the decisions getting made, but here goes:
- I think real estate syndication is broken at a technical level because everyone went the path of least resistance.
- Also, it appears industry stakeholders weren’t looking for better solutions and weren’t willing to make the technical and cultural changes necessary to adopt new, better solutions as they came along.
In short, I don’t see some sort of secret cabal behind the data syndication mess. I see humanity at work.
Fixing syndication: The technical solution
"API," or application programming interface, is a technical term referencing how one piece of software can talk to another piece of software.
For example, if you want to pull photos from the popular Flickr photo site into Facebook, that happens via an API. The owner of the Flickr account lets Facebook grab the photos and post them over on Facebook. Facebook gets a little API "key" that allows this to happen.
Possession and control of that key, combined with some sound policy, is what can help the real estate industry get beyond the data quality issue and on to talking about the other aspects of the Edina thesis.
One could set the controls of a real estate data API such that every time data is to be shown for a particular property, the API key needs to be verified and the data is delivered — fresh from the source — only after verification.
One could set the policy so that storage of that data is not permitted outside of the API system. In this way, the data will always be fresh, and should some sort of infraction on other policies occur, access could be shut off. Also, once the property is no longer on the market, the data would no longer be syndicated.
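The verify-then-deliver arrangement described above can be sketched in a few lines of code. This is a minimal, in-memory illustration, not any real MLS or vendor API; all the names here (LISTINGS, API_KEYS, get_listing, revoke_key) are hypothetical.

```python
# Hypothetical sketch: key-verified, source-fresh listing delivery.
# The data owner controls API_KEYS; consumers never store the data,
# so every display requires a fresh, verified call to the source.

LISTINGS = {
    "MLS-1001": {"address": "123 Main St", "price": 350000, "status": "active"},
    "MLS-1002": {"address": "456 Oak Ave", "price": 425000, "status": "sold"},
}

API_KEYS = {"key-aggregator-a"}  # keys currently granted by the data owner

def get_listing(api_key, mls_number):
    """Verify the key on every request, then deliver data fresh from the source."""
    if api_key not in API_KEYS:
        raise PermissionError("API key not recognized or revoked")
    listing = LISTINGS.get(mls_number)
    if listing is None or listing["status"] != "active":
        return None  # off the market: no longer syndicated
    return dict(listing)  # a fresh snapshot straight from the source

def revoke_key(api_key):
    """Policy enforcement: cut off access after an infraction."""
    API_KEYS.discard(api_key)
```

Because nothing is stored outside the system, the moment a key is revoked or a listing goes off-market, the stale copy simply stops existing; there is no out-of-date data left lying around on third-party sites.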
By using an API for syndication of real estate data, the data owner can be in the maximum amount of control of data to do with as the owner sees fit. If the owner doesn’t want the data to appear somewhere, then the owner doesn’t grant access.
Those who believe withholding access is bad for customers are free to grant access to whomever they choose. Each data owner would have possession of an individual API key for the data owned.
Perhaps this could be a source of competitive advantage for one party or another. The important thing is that one data owner’s business model would not be bound by the other data owner’s business model. So both could operate as they see fit. Let the market decide.
Technology vendors would be able to work with their customers more directly, since customers could give them direct access to the relevant data instead of making them negotiate with multiple listing service/Internet Data Exchange (IDX) managers. The result: more vendors entering the space and lower costs for growing services.
Aggregators would be able to compete based on the value they bring to the industry in terms of developing and capturing the attention of an audience. Like the tech vendors, their data integration burden would lessen. As would their data storage costs.
Data quality solutions
Quality of data is always an issue in any reasonably large pile of data that gets transferred around. For data to have "quality" it needs to be authenticated.
The best way to handle authentication is to have one copy of the data (instead of many), and consciously control distribution of that data only to people and entities agreeing not to do things to it that would reduce the quality of the data.
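One common way to make "one authoritative copy" checkable in practice is a cryptographic fingerprint: the data owner publishes a hash of the authoritative record, and anyone displaying a copy can verify it matches exactly. The sketch below is illustrative only; the function names are my own invention.

```python
# Hypothetical sketch: authenticating a displayed copy against the
# data owner's single authoritative record via a SHA-256 fingerprint.
import hashlib
import json

def fingerprint(record):
    """Hash a canonical serialization of a listing record.

    The data owner publishes this fingerprint alongside the single
    authoritative copy of the listing.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_authentic(displayed_record, published_fingerprint):
    """True only if the displayed copy matches the owner's copy exactly."""
    return fingerprint(displayed_record) == published_fingerprint
```

Any alteration, even a one-dollar change to the price, produces a different fingerprint, so a consumer site (or the owner auditing it) can detect a degraded copy immediately. The technical part is that simple; the rest is deciding who is allowed a key and what they agree to.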
From there on out, it’s a policy thing, not a technical thing.