Why it’s hard to sell me on the Semantic Web – Part 3

This is the third in a series. Part 1 covered the basics of the Semantic Web vision. Part 2 gave a brief overview of three problems standing in the way – all of them technical in nature. This post looks at a problem that is not just technical – Trust.

When it comes to computer agents answering questions for me, trust is a fundamental problem, not merely a technical one. Whenever I ask a question and get an answer, I’m outsourcing trust. I’m believing in the answer and in the source of that answer. If I’m asking a computer, I’m trusting both the computer and the results it returns.

What’s Good to Eat Around Here?
If I’m asking a simple question like “where’s the nearest stop for the 17 bus”, there’s not much room for mistrust, but if the question is any more complex, trust becomes a serious issue. Let’s say I’m asking the question – “Where is the nearest place I can get a good sandwich at a decent price?” Of course there are issues of ontology, markup, and reasoning involved here (What qualifies as a sandwich? Am I talking about food or construction supplies? How does one determine ‘decent price’? How does one define ‘nearest’?). But let’s look at the one word that begs the trust question – good.

Nowadays, to find out if a restaurant has a good sandwich, I can hit a whole bunch of websites looking for reviews. For each piece of information I see, I make a judgment about whether to trust it. I use all sorts of subtle and not-so-subtle clues to decide: what site it’s on, what else the person has posted, how they express themselves, whether the review is balanced, whether it uses criteria I value – ultimately, there’s an element of intuition to it. When I ask my computer the question and the computer comes back with an answer, those decisions of trust are left to the computer.

Is there a Doctor Nearby?
The word “good” begs the trust question directly, but the question comes up even in less opinion-oriented questions.   The computer’s entire concept of reality is taught to it by people.  Who do you trust to teach your computer about what exists?  To teach it what is consequential and what is not, what is worthy of mention and what is not, what is part of reality and what is not?    

Let’s keep it simple. If I own a restaurant that serves wraps, and I know that most of the world searches for “sandwiches”, not “wraps”, I’ll publish an ontology that says “A wrap is a sandwich (a really valuable sandwich)”. My competitor down the street, a standard deli, will publish an ontology that says “Wraps aren’t sandwiches, people looking for sandwiches don’t want wraps, and wraps aren’t worth anything.” Which one does the computer trust? Similar questions will come up in every domain – politics, economics, news, medicine, nutrition, and so on.
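To make the conflict concrete, here is a minimal sketch – my illustration, not anything from this post – of how those two competing claims might be published as machine-readable data. It uses Python’s rdflib library; the example.org namespace and the class names are hypothetical.

    # Two publishers, two incompatible ontologies about the same term.
    # The example.org namespace and class names below are hypothetical.
    from rdflib import Graph, Namespace, RDFS
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/food#")

    # The wrap shop's claim: a wrap is a kind of sandwich.
    wrap_shop = Graph()
    wrap_shop.add((EX.Wrap, RDFS.subClassOf, EX.Sandwich))

    # The deli's claim: wraps and sandwiches are entirely separate things.
    deli = Graph()
    deli.add((EX.Wrap, OWL.disjointWith, EX.Sandwich))

    # An agent that naively merges both publishers' data inherits the
    # contradiction; nothing in the triples says which source to believe.
    merged = Graph()
    merged += wrap_shop
    merged += deli
    for subject, predicate, obj in merged:
        print(subject, predicate, obj)

Both graphs are perfectly valid Semantic Web data; choosing between them is a trust decision, not a parsing one.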

If businesses know that I am searching through semantic agents, they’ll do everything they can to optimize their business to be discovered by those agents. This includes, of course, declaring themselves a match in as many ways as they possibly can. With computer agents returning information, we can expect this to be standard practice for any business looking to attract customers.

As soon as we farm out our question answering to an outside agent, we can’t avoid this problem. The definitions of everything will still be up for great debate – only we will have abdicated our right to answer the question and entrusted it to our computers.

Who do you Trust?
The first glimmer of a solution to this question may lie in the social network. The social network provides an explicit declaration of who I trust. The computer can tell me, “You can believe this review, because someone you trust (or someone they trust) posted it.”
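As a toy illustration (mine, not the author’s) of that “someone you trust, or someone they trust” rule: the check reduces to a short walk over an explicitly declared trust graph, capped at two hops. All of the names and trust lists below are invented.

    # Toy trust check: is a reviewer within two hops of me in my declared
    # trust graph? All names and trust lists here are invented.
    from collections import deque

    TRUSTS = {
        "me": ["alice", "bob"],
        "alice": ["carol"],
        "bob": [],
        "carol": ["dave"],
    }

    def trusted_reviewers(start, max_hops=2):
        """Everyone reachable from `start` within `max_hops` trust links."""
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            person, hops = queue.popleft()
            if hops == max_hops:
                continue
            for friend in TRUSTS.get(person, []):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
        return seen - {start}

    # carol is trusted at two hops (via alice); dave, at three hops, is not.
    print(trusted_reviewers("me"))  # {'alice', 'bob', 'carol'}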

The current networks are far too limited to cover the broad range of issues that will come up. I may be interested in something that none of my friends know anything about. To broaden the footprint of trust, we may see the formation of societies of mutual trust. They will collectively form a vision of reality and self-police to keep out misleading information. There would have to be many of these, since my conception of reality may not jibe with yours. The same question will have different answers depending on the differing underlying assumptions and networks of trust.

In Summary
So that’s a capsule of my thoughts on the Semantic Web.  We’re making slow progress on each of these questions, but the questions are big and the progress is incremental.  The “Semantic Web” is growing organically – don’t buy it when the next start-up tells you they are delivering it to your door.  

2 thoughts on “Why it’s hard to sell me on the Semantic Web – Part 3”

  1. Great post. My two cents is to argue that any semantic web that we’re likely to see will be a social web. It will be an extension of the way Netflix can refer a “good” drama movie to me. Netflix has a database of what I like and an algorithm to compare that to the likes of others, and thus can make an accurate recommendation to me. The Sandwich question will be answered in this way.

    Other classifications and “teaching the computer” will take place in a similar way, through millions of individuals interpreting, tagging, rating, sorting, and linking. This is the way of Google and Flickr, etc., and is perhaps too obvious and simple. But it seems to me to be the way the semantic web will evolve.

    I agree that there is not going to be an application, SemantikWeb 1.0, that will drop down and deliver it to us. Rather, it will be an evolution. At some point, perhaps in our lifetime, we will realize that the semantic web has arrived.

  2. One thing I learned in my journalism program that I see proof of all the time: media won’t necessarily tell you what to think, but it will define the issues up for discussion. I’m still not sure if social media reinforces this notion or obliterates it, but I’m inclined to think it’s the former. Online communities create their own “mainstream” cultures, deciding collectively what’s “in” and what’s “out.” So do we ever arrive at a set of external elements that totally reflects us as individuals? If not, then there can’t be absolute trust.
