Over the weekend, as all those new iPhone 4S’s got booted up and people started playing with Siri, a built-in tool that assists in using the smartphone’s features via voice commands, a realization began to take hold.

If we can use our voice to tell a machine what we’re looking for, what will become of search as we know it?

I’ve been playing with Siri for several years now (it was a free app on the iPhone before Apple bought the technology) and while not perfect, it is amazing at what it points toward.

Siri is Apple’s voice-activated "intelligent agent." It is software that can understand questions asked in English, and then use that understanding to find answers.

Siri was developed by defense research group SRI International in Menlo Park, Calif.; it had been in development for a long while before Apple bought it in April 2010.

Apple has a long history in working with voice technologies (anyone else nerdy enough to have custom-scripted some MacInTalk stuff?), but the intelligence that Siri brings to the table is a real game-changer.

Siri isn’t just about hearing and understanding human language. Siri is about knowing what to do with that understanding.

In this column I’m going to talk a bit about a fun, geeky topic called "human-computer interaction" (often abbreviated as HCI; one of its even geekier subfields is haptics, the study of touch-based interaction), how a speech-capable intelligent agent like Siri intersects with that field, what it means for search, and then end with a few thoughts specific to real estate.

A brief overview of human-computer interaction

Since the dawn of computing there has been one persistent question: "How do we give instructions to this machine?" We’ve had levers and switches. We’ve had Hollerith cards (punched cards). We’ve had keyboards and cursors. We’ve had the mouse. We’ve had trackpads. We’ve had touch.

What all of these things have in common is that they help us get our ideas into a machine so that the machine can then do something with our ideas.

Typically, we need to be very specific about how we format our ideas so that the machine can understand them. Earlier forms of human-computer interaction were stricter about this than later ones.

In the progression of ways of getting thoughts into a machine, more and more of that specificity gets handled in the software layer, so it no longer needs to be enforced at the hardware layer.

This is the difference between needing to line your mouse up exactly on some text to click on a link vs. a touch device like the iPad learning over time how skinny or fat your finger is and determining how close is "close enough." The software, the machine, is learning so we don’t have to.

But in all of these modes of interacting with the computer, the computer has a very specific advantage: context. We get to move cursors and pointers around on a specific screen of a computer program that has only a limited number of interaction points.

The computer software knows, for example, that we’re on the "search for a house" page and is expecting us to touch, type, poke and prod in ways that relate to searching for a house.

In short, the computer can use the context of the interface to know in a very direct and scripted way what it is we want.

Voice interface

Siri doesn’t have that advantage. When we fire up Siri, the only thing that software knows is that we’re very likely to ask a question. There is no immediate clue what sort of question it will be, what sort of urgency the question carries, or how specific or picky we’re likely to be about the answers.

Being able to function in a useful capacity in that ambiguous environment is the real value of the technology that is driving Siri. Sure, understanding human voice is cool, but knowing what to do with it is the breakthrough here.

Once Siri is able to parse the question we’re asking of it, then it has a great deal of resources at its disposal to help formulate a response. All of those sensors and radios in an iPhone help Siri place us in the real world.

All of the data stored in our address books and profiles help Siri know who we know and provides some data on our tastes.

And then, beyond that, Siri needs to make sense of all the data available to it from sources like the entire Internet.

This is where Siri and other intelligent agents (which, at this point, don’t really exist yet, but let’s pretend they do so we don’t seem overly Apple-centric) start to collide with "traditional search" functionality.

The intelligent agent and the search engine business model

Intelligent agents — software that comprehends our wishes and helps us fulfill them — perform the service of answering questions. They can help us find things or places or people. Just like search engines.

However, if the interface we’re using to communicate our questions to an intelligent agent is our voice and ears, we’re not likely to tolerate much advertising.

Contemporary search engines are funded by all those ads sprinkled among the results. Where would Siri put those ads? "I’ll tell you when the next train leaves just after this commercial break." No, I don’t think that’s how it will work.

Development of Siri is, presumably, funded by the sale of the hardware device that runs the software. So it doesn’t need advertising to get in the way of the human-computer interaction.

But what happens to the search engine business model if it’s forced to compete with voice-driven intelligent agents? I think we’re seeing a classic gathering of three technology trends setting the stage for disruption:

  • Ubiquitous interface (aka "mobile" and "post-PC"): One of the challenges with mobile technology is in the haptics — how the device fits us as humans. Typing on mobile devices is a primary pain point, and voice is one of the solutions.
  • Ubiquitous access (aka "the cloud"): The availability of large, scalable clusters of computers and memory creates the technological capability for software to work through the ambiguous, contextless nature of the random questions humans can ask their phones.
  • Ubiquitous data (aka "big data" and "social"): The volunteering and gathering of data and opinions on every facet of human endeavor via social and business systems creates a source of information that a well-crafted, intelligent agent can comb for answers.

The contemporary search engines can certainly thrive on these three trends as well. In addition, they have far more experience with the core concepts behind ubiquitous data and ubiquitous access.

But the interface — the part where humans make a decision to use one thing vs. another thing — has been unchallenged for more than 10 years. And the potential disruptor — voice — doesn’t leave an easy hook for the advertising, which has supported the contemporary search engine business model for more than 10 years.

Instead of competing with search engines on comprehensive or fast results, Apple’s Siri competes in a realm in which Apple has extensive experience: human-computer interaction. The interface.

OK, Mr. Conehead, what about real estate?

Assuming that voice-driven intelligent agents like Siri turn out to be a success, the thing for real estate professionals to watch is whether people prefer to speak their wishes into a phone rather than use Web-based search interfaces.

If it turns out that customers would rather speak their wishes into a phone, there are a couple of possible courses of action:

  • Where do intelligent agents learn about houses for sale? Make sure your properties are listed there.
  • Where do intelligent agents learn about who is a real estate professional? Make sure you are listed there.
  • Is your site currently heavily reliant on mobile-based organic search for traffic? This may change suddenly, so keep an eye on it and start figuring out how Siri answers human questions about property.
  • Is your site now heavily reliant on search engine advertising? There are entire real estate business models based on running efficient pay-per-click (PPC) campaigns. If intelligent agents do disrupt the traditional search engine model, then the related real estate business model would need to adapt to something different.
  • If you have the resources, you could perhaps build a voice-driven intelligent agent that was vertically focused on real estate, thereby owning the platform. Pricey, I know.

I’m not saying Google is going out of business tomorrow. Or even ever, really (you can still search at AltaVista.com).

But it’s a worthy exercise to follow the voice-driven intelligent agent concept all the way through to see how your business might be affected and how you can position yourself for success.
