How do we judge truth in the Twitter age?

Massive messaging markets (3M?) are changing the game in news reporting. The generation of news is already a distributed, complex interweaving of parsed and recombined news streams – we can expect that only to grow and take on new forms. It will be staccato and rapid-fire.

Living in what is arguably the most scrutinized city in the world, I am well acquainted with how even reputable news outlets routinely slant their stories. With tens of millions of news providers, all of whom adjust and filter what they report, consciously or subconsciously, how will we judge what is true? How do we judge truth in the Twitter age?

Instinctively, we judge the reality of a message by how distributed and consistent the corroboration is from multiple sources.  This already is, and will continue to be, gamed by groups with targeted agendas.  Any group with a semblance of organization is busy flooding relevant forums with their message.

When the same message comes from varied quarters – people from many different backgrounds – it starts to earn believability.  But this too can be gamed. 

‘Witnessing’ has a certain power and weight – one who claims to have seen an event with their own eyes. Yet the claim is easy to make; how do I know it’s true?

A rare, but convincing, argument for the truth of a story is when it is propagated by someone with an explicitly contrary agenda – a story which is injurious to the teller. To even come to this evaluation, though, I need to be acquainted with the teller’s true leanings.

So tell me, how do you know when what you read is true?

Questions for Social Media Man

We teeter hysterically on the consequences of rumor
about President Eisenhower’s viscera.

These are Marshall McLuhan’s words (circa 1955) about the impact of electronic media on the human psyche and society.  Substitute ‘twitter’ for ‘teeter’ and virtually anything for Eisenhower and you have a compelling picture of the present age. Information moving instantaneously to all parts of the globe, he writes, is explosive. 

We can, and do, have world events pouring through us like electricity.  I can easily become a twitching, twittering nerve cell in a massive identity-robbing global network.

What are the emotional impacts of this for the individual?

Do we have a moral obligation to be present to all of this information – to feel it?

How can I live if I do not put up walls or selectively empathize?

Who do I become if I do put up walls and selectively empathize?

Why it’s hard to sell me on the Semantic Web – Part 3

This is the third in a series. Part 1 covered the basics of the Semantic Web vision. Part 2 gave a brief overview of three problems in the way, all of them technical. This post looks at a problem that is not just technical – trust.

When it comes to computer agents answering questions for me, trust is an essential problem, not a technical one. Whenever I ask a question and get an answer, I’m outsourcing trust: I’m believing in the answer and in the source of that answer. If I’m asking a computer, I’m trusting the computer and the results it returns.

What’s Good to Eat Around Here?
If I’m asking a simple question like “Where’s the nearest stop for the 17 bus?”, there’s not much room for mistrust, but if the question is any more complex, trust becomes a serious issue. Let’s say I’m asking, “Where is the nearest place I can get a good sandwich at a decent price?” Of course there are issues of ontology, markup, and reasoning involved here (What qualifies as a sandwich? Am I talking about food or construction supplies? How does one determine ‘decent price’? How does one define ‘nearest’?). But let’s look at the one word that raises the trust question – good.

Nowadays, to find out whether a restaurant has a good sandwich, I can hit a whole bunch of websites looking for reviews. For each piece of information I see, I make a judgment about whether to trust it. I’ll use all sorts of subtle and not-so-subtle clues to decide: what site it’s on, what else the person has posted, how they express themselves, whether the review is balanced, whether it uses criteria I value – ultimately, there’s an element of intuition to it. When I ask my computer the question and the computer comes back with an answer, those decisions of trust are left to the computer.

Is there a Doctor Nearby?
The word “good” raises the trust question directly, but the question comes up even in less opinion-oriented queries. The computer’s entire concept of reality is taught to it by people. Who do you trust to teach your computer about what exists? To teach it what is consequential and what is not, what is worthy of mention and what is not, what is part of reality and what is not?

Let’s keep it simple. If I own a restaurant that serves wraps, and I know that most of the world searches for “sandwiches”, not “wraps”, I’ll publish an ontology that says “A wrap is a sandwich (a really valuable sandwich).” My competitor down the street, a standard deli, will publish an ontology that says “Wraps aren’t sandwiches, people looking for sandwiches don’t want wraps, and wraps aren’t worth anything.” Which one does the computer trust? Similar questions will come up in every domain – politics, economics, news, medicine, nutrition, and so on.
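To make the conflict concrete, here is a minimal sketch of the two competing claims, written with Python’s rdflib library; the example.org vocabulary and the class names are assumptions made up for illustration, not any published ontology.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

FOOD = Namespace("http://example.org/food#")  # hypothetical vocabulary for this sketch

# The wrap shop's ontology: "a wrap is a sandwich".
wrap_shop = Graph()
wrap_shop.add((FOOD.Wrap, RDFS.subClassOf, FOOD.Sandwich))

# The deli's ontology: "wraps aren't sandwiches" (the two classes share no members).
deli = Graph()
deli.add((FOOD.Wrap, OWL.disjointWith, FOOD.Sandwich))

# The agent's view after crawling both sites: both claims survive the merge.
merged = Graph()
for source in (wrap_shop, deli):
    for triple in source:
        merged.add(triple)

for subject, predicate, obj in merged:
    print(subject, predicate, obj)
```

Nothing in the merged graph says which claim wins; the tie has to be broken by something outside the data, which is exactly the trust problem.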

If businesses know that I am searching through semantic agents, they’ll do everything they can to optimize their presence to be discovered by those agents. That includes, of course, declaring themselves a fit in as many ways as they possibly can. With computer agents returning information, we can expect this to be standard practice for any business looking to attract customers.

As soon as we farm out our question answering to an outside agent, we can’t avoid this problem. The definitions of everything will still be up for great debate – only we will have abdicated our right to answer the question and entrusted it to our computers.

Who do you Trust?
The first glimmer of a solution to this question may lie in the social network. The social network provides an explicit declaration of who I trust. The computer can tell me, “You can believe this review, because someone you trust (or someone who they trust) posted it.”
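Here is a minimal sketch of what that check might look like, assuming the agent can read my trust declarations as a plain adjacency list; the names and the two-hop radius are arbitrary assumptions, not any particular network’s API.

```python
from collections import deque

# Hypothetical trust declarations: each person lists who they explicitly trust.
trusts = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   [],
    "carol": ["dave"],
}

def trust_distance(start, reviewer, max_hops=2):
    """Return the number of trust hops from `start` to `reviewer`,
    or None if the reviewer falls outside the allowed radius."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if seen[person] >= max_hops:
            continue  # don't extend trust beyond the chosen radius
        for friend in trusts.get(person, []):
            if friend not in seen:
                seen[friend] = seen[person] + 1
                if friend == reviewer:
                    return seen[friend]
                queue.append(friend)
    return None

print(trust_distance("me", "carol"))  # 2 -> someone a trusted friend trusts wrote it
print(trust_distance("me", "dave"))   # None -> outside my two-hop circle of trust
```

The radius is the interesting design choice: the wider I cast the net, the more questions the agent can answer, and the less the word “trust” actually means.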

The current networks are far too limited to cover the broad range of issues that will come up. I may be interested in something that none of my friends knows anything about. To broaden the footprint of trust, we may see the formation of societies of mutual trust. They would collectively form a vision of reality and self-police to ensure the absence of misleading information. There would have to be many of these, as my conception of reality may not jibe with yours. The same question will have different answers depending on the underlying assumptions and the network of trust.

In Summary
So that’s a capsule of my thoughts on the Semantic Web.  We’re making slow progress on each of these questions, but the questions are big and the progress is incremental.  The “Semantic Web” is growing organically – don’t buy it when the next start-up tells you they are delivering it to your door.