People of the Book

Nicholas Carr’s latest ponders the life of the book as it moves into its online incarnation.

I appreciate, as an observant Jew, that no matter how deep technology gets into our lives, there will always be one day a week when we will turn off the phones and the computers, get out of the cars, turn off the Kindle, power down the bionic retinal enhancers, and appreciate the simple world.  Barring major unrest, I’ll always have books on my shelves.

Getting My Geek Head Around Twitter

My brain balks at fully understanding Twitter. I feel as if I can understand other media – newsprint, TV, wikis, whatever. Twitter feels to me as if it’s half-born – as if we haven’t yet seen what this baby is going to do.

Global Social Message Bus

In essence, it’s the pure social network – no frills, hobbies, books I’ve read, pokes, movie reviews. It’s just nodes, directed connections, and the ability to pass messages along those connections.

Massive Messaging Anything Market

The power is multiplied when combined with URLs. Nodes, directed connections, and the ability to pass ANYTHING along those connections.
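To make that concrete, here’s a minimal sketch in Python (the names and structure are my own invention, not anything from Twitter’s actual API) of nodes, directed connections, and arbitrary payloads moving along them:

from collections import defaultdict

class MessageBus:
    """Toy model of a social message bus: nodes, directed edges, payloads."""

    def __init__(self):
        self.followers = defaultdict(set)  # author -> set of followers
        self.inbox = defaultdict(list)     # node -> received payloads

    def follow(self, follower, author):
        # Create a directed connection: follower subscribes to author.
        self.followers[author].add(follower)

    def post(self, author, payload):
        # Pass anything (text, a URL, ...) along the author's connections.
        for follower in self.followers[author]:
            self.inbox[follower].append((author, payload))

# Usage: a URL travels the graph exactly like a plain message.
bus = MessageBus()
bus.follow("alice", "bob")
bus.post("bob", "http://example.com/article")
print(bus.inbox["alice"])  # [('bob', 'http://example.com/article')]

Swap the string payload for a URL, an image, or a command and the mechanics don’t change – which is the point.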

Neurons and Synapses – Rewire at Will

We’ve spoken for decades about electronic communication becoming the nervous system of the planet. This baby is laying it bare and bringing it out to the edges.

Questions for Social Media Man

We teeter hysterically on the consequences of rumor
about President Eisenhower’s viscera.

These are Marshall McLuhan’s words (circa 1955) about the impact of electronic media on the human psyche and society.  Substitute ‘twitter’ for ‘teeter’ and virtually anything for Eisenhower and you have a compelling picture of the present age. Information moving instantaneously to all parts of the globe, he writes, is explosive. 

We can, and do, have world events pouring through us like electricity.  I can easily become a twitching, twittering nerve cell in a massive identity-robbing global network.

What are the emotional impacts of this for the individual?

Do we have a moral obligation to be present to all of this information – to feel it?

How can I live if I do not put up walls or selectively empathize?

Who do I become if I do put up walls and selectively empathize?

Why it’s hard to sell me on the Semantic Web – Part 3

This is the third in a series.  Part 1 covered the basics of the Semantic Web vision. Part 2 gave a brief overview of 3 problems in the way – all of them of a technical nature.  This post looks at a problem that is not just technical – Trust.

When it comes to computer agents answering questions for me, trust is an essential problem, not a technical one.  Whenever I ask a question and get an answer, I’m outsourcing trust.  I’m believing in the answer and in the source of that answer.  If I’m asking a computer, I’m trusting both the computer and the results it returns.

What’s Good to Eat Around Here?
If I’m asking a simple question like “where’s the nearest stop for the 17 bus”, there’s not much room for mistrust, but if the question is any more complex, trust becomes a serious issue.  Let’s say I’m asking the question – “Where is the nearest place I can get a good sandwich at a decent price?”  Of course there are issues of ontology, markup, and reasoning involved here (What qualifies as a sandwich?  Am I talking about food or construction supplies?  How does one determine ‘decent price’?  How does one define ‘nearest’?)  But let’s look at the one word that raises the trust question – good.

Nowadays, to find out if a restaurant has a good sandwich, I can hit a whole bunch of websites looking for reviews.  For each piece of information I see, I make a judgment about whether to trust it.  I’ll use all sorts of subtle and not-so-subtle clues to decide.  I look at what site it’s on, what else the person there has posted, how they express themselves, whether it’s balanced, whether it uses criteria I value – ultimately, there’s an element of intuition to it.  When I ask my computer the question and the computer comes back with an answer, those decisions of trust are left to the computer.

Is there a Doctor Nearby?
The word “good” raises the trust question directly, but the question comes up even in less opinion-oriented queries.  The computer’s entire concept of reality is taught to it by people.  Who do you trust to teach your computer about what exists?  To teach it what is consequential and what is not, what is worthy of mention and what is not, what is part of reality and what is not?

Let’s keep it simple.  If I own a restaurant that serves wraps, and I know that most of the world searches for “sandwiches”, not “wraps”, I’ll publish an ontology that says “A wrap is a sandwich (a really valuable sandwich)”. My competitor down the street, a standard deli, will publish an ontology that says “Wraps aren’t sandwiches, people looking for sandwiches don’t want wraps, and wraps aren’t worth anything.”  Which one does the computer trust?  Similar questions will come up in all domains – politics, economics, news, medicine, nutrition, etc.
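As a hypothetical illustration (the class names and the tiny Python model are mine, not any published ontology language), here’s how the two publishers’ claims collide the moment an agent tries to answer a sandwich query:

# Two publishers assert incompatible relationships between the same terms.
wrap_shop_ontology = {("Wrap", "subClassOf", "Sandwich")}
deli_ontology      = {("Wrap", "disjointWith", "Sandwich")}

def counts_as_sandwich(item, ontology):
    # Does 'item' qualify as a Sandwich under this publisher's ontology?
    return item == "Sandwich" or (item, "subClassOf", "Sandwich") in ontology

print(counts_as_sandwich("Wrap", wrap_shop_ontology))  # True
print(counts_as_sandwich("Wrap", deli_ontology))       # False

Same question, same facts on the ground, opposite answers – the difference is purely a matter of whose declarations the agent has decided to believe.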

If businesses know that I am searching through semantic agents, they’ll do everything they can to optimize their business to be discovered by semantic agents.  This includes, of course, declaring themselves a good fit in as many ways as they possibly can.  With computer agents returning information, we can expect this to be standard practice for any business looking to attract customers.

As soon as we farm out our question answering to an outside agent, we can’t avoid this problem.  The definitions of everything will still be up for great debate – only we will have abdicated our right to answer the question and entrusted it to our computers.

Who do you Trust?
There may be a first light of a solution to this question in the social network.  The social network provides an explicit declaration of who I trust.  The computer can tell me “You can believe this review, because someone you trust (or someone they trust) posted it.”
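A minimal sketch of that idea, assuming a toy trust graph rather than any real network’s API: accept a review only if its author sits within a hop or two of the people I’ve explicitly marked as trusted.

def trusted_authors(my_trust, trust_graph, max_hops=2):
    # People I trust directly, plus the people they trust, out to max_hops.
    trusted, frontier = set(my_trust), set(my_trust)
    for _ in range(max_hops - 1):
        frontier = {t for person in frontier for t in trust_graph.get(person, set())}
        trusted |= frontier
    return trusted

# Toy data: I trust Dana; Dana trusts Lee; nobody vouches for the stranger.
trust_graph = {"me": {"dana"}, "dana": {"lee"}}
reviews = [("lee", "Great sandwich"), ("stranger", "Best sandwich on earth")]

believable = trusted_authors(trust_graph["me"], trust_graph)
print([text for author, text in reviews if author in believable])  # ['Great sandwich']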

The current networks are far too limited to cover the broad range of issues that will come up.  I may be interested in something that none of my friends know anything about.  To broaden the footprint of trust, we may see the formation of societies of mutual trust.  They would collectively form a vision of reality and self-police to keep misleading information out.  There would have to be many of these, as my conception of reality may not jibe with yours.  The same question will have different answers depending on the underlying assumptions and the network of trust behind them.

In Summary
So that’s a capsule of my thoughts on the Semantic Web.  We’re making slow progress on each of these questions, but the questions are big and the progress is incremental.  The “Semantic Web” is growing organically – don’t buy it when the next start-up tells you they are delivering it to your door.  

Why it’s hard to sell me on the Semantic Web – Part 2

The first post in this series gave a background to the Semantic Web, as traditionally conceived.  This post gives an overview of three of the problems we’re facing in making that vision a reality.  I went longer on these three than I had thought I would.  I’ll delve into the fourth problem – trust – in the next post.

As for the title – let me clarify myself a bit.  I like the Semantic Web vision – it has poetry.  We’ll likely continue to see incrementally closer and closer implementations of it.  What I have a hard time swallowing are the claims (usually from software vendors) that they are delivering it today.  There are just too many tough problems between us and the goal to imagine that it’s all been solved by one software vendor.

If the vision of the Semantic Web is creating a distributed world-wide library of facts that your computer can use to answer all sorts of questions for you – what makes it so hard?   Let’s take a look at three of the major problems.

The first problem – creating an ontology – is hard.  An ontology is an explicit, computer-readable declaration of what exists in the world and how all of those things are related.  If it’s to really include all of the myriad things that people care about, it’s a monstrously complex task.

The task of classifying… all the ideas that seek expression is the most stupendous of logical tasks.  Anybody but the most accomplished logician must break down in it utterly; and even for the strongest man, it is the severest possible tax on the logical equipment and faculty.
– Charles Sanders Peirce

One way to tame this beast is to settle for an ontology that only covers the most popular items (e.g. food, travel, popular entertainment and consumer merchandise).  We’ll be able to ask our computers about mass-market things, but anything more unusual (Burmese culture, history of organized crime, vacuum repair) would be outside of the system’s depth.  It looks like Headup, among others, is tackling the problem from this angle.
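For a sense of what even a narrow, mass-market ontology has to pin down, here’s a hypothetical fragment in Python (invented names, and nowhere near the expressiveness of a real ontology language) covering just the food corner of the world:

# A tiny slice of a consumer ontology: classes, a subclass hierarchy, and
# the properties that relate them.  Every term is a decision somebody made
# about what exists and what is worth modeling.
subclass_of = {"Food": "Thing", "Sandwich": "Food", "Restaurant": "Thing"}
properties  = {"sellsGoods": ("Restaurant", "Food"),   # (domain, range)
               "hasPrice":   ("Food", "Price")}

def is_a(cls, ancestor):
    # Walk the subclass hierarchy: is cls a kind of ancestor?
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

print(is_a("Sandwich", "Thing"))  # True

Multiply this by every domain people ask about – travel, medicine, Burmese culture, vacuum repair – and Peirce’s warning starts to feel understated.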

The second problem – marking up semantic content – is hard.  Beyond the very simple cases, creating documents that effectively tell computers interesting facts is a job for experts; it’s not at all as easy as HTML/CSS.  OWL, the language that the W3C has recommended for doing this work, is terrifically complex.  A person needs to breathe first-order logic in order to use it in any interesting way.  The general public is outclassed on this one.

Some less rigorous and less arduous ways to mark up content are showing up (e.g. Microformats).  These provide a simpler syntax for marking up very common items like places and people.  Some companies are also marking up major storehouses of information (like IMDB) by hand in order to provide the core information for the mass-market audience.  In either case, the long tail of human knowledge is left out of the picture.

Even if we were to have a good model of what exists in the world and gobs of documents all marked up beautifully, we’d still have our third problem – the reasoning problem.  It is by no means simple to get a computer to do acts of logic in the wild.  Getting these reasoners rolling to the point where you can ask them a question and have them come back with an answer sometime before the heat-death of the universe is not a simple task.  Some questions are simply not answerable, but those are considered the nice ones.  There are other questions that are not answerable in such a way that the computer will never know that they are not answerable – those are a bit nastier.  There are all sorts of people working on their doctorates on just small subsets of this problem.
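To give a flavor of what “reasoning” means here, a deliberately tiny forward-chaining sketch in Python (toy rules of my own, nothing like a production OWL reasoner): it keeps applying rules to the known facts until nothing new can be derived, and the practical trouble is that real fact and rule sets explode far beyond this.

def forward_chain(facts, rules):
    # Apply if-then rules to the fact set until a fixed point is reached.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {("Bobs", "sells", "wraps"), ("wrap", "isA", "sandwich")}
rules = [({("Bobs", "sells", "wraps"), ("wrap", "isA", "sandwich")},
          ("Bobs", "sells", "sandwiches"))]

print(("Bobs", "sells", "sandwiches") in forward_chain(facts, rules))  # True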

That’s three of the barriers – in the next post I’ll tackle the trust issue.

Why it’s hard to sell me on the Semantic Web – Part 1

A good friend of mine works as a social media editor.  We periodically get together for long lunches where the freewheeling conversation hits all the topics of note in the current communication scene.  I was surprised today when he brought up the question of the Semantic Web.  After a half-decade stint in the business of semantic technologies, I’ve basically written off the Semantic Web.  After ten years of failed promise, I’m always a bit surprised to hear another rumor of its pending existence.

In short – the Semantic Web promises to turn all of the text found on the web into machine-readable facts, and to provide programs that can use those facts to answer questions for you.  So, for example, a restaurant website may say “We’re located at 518 Chestnut Street, have a wide variety of sandwiches, and are open on Saturday.”  The website may give a full menu, driving directions, a list of daily specials, etc.  To a computer this looks like just a bunch of text – blah blah blah blah.  A semantically marked-up document would put a formal representation of this information in place along with the text.  Very loosely speaking, it would look something like this:

<Organization type="Restaurant" name="Bob's Restaurant" id="1"/> <isLocatedAt/> <Address text="518 Chestnut Street"/>

<Organization id="1"/> <sellsGoods/> <Food type="sandwiches"/>

<Organization id="1"/> <isOpen/> <RecurringDay value="7"/>

Once beautiful documents like this are in place, you can ask your computer a question like “Where can I get a sandwich on Saturday?”, and the computer would come back with Bob’s Restaurant.  You could even give your computer quite complex tasks and have it come back with good answers – “I have to pick up toothpaste, a watermelon, and a large camelhair coat, meet with the mayor, my fiancee, and my lawyer, and I want to get a good sandwich around lunchtime.  Please plan out a course of travel and schedule that takes into account expected traffic and the hours of the shops I have to visit.  Also, let me know if I’m passing any place that’s having a going-out-of-business sale.”  The computer would hit tens of websites, communicate with other agents, and put together the schedule and information for you.
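Very roughly, here’s what the computer would be doing behind the scenes with facts like the ones above – a toy in-memory query in Python, not any real SPARQL engine or agent framework:

# Triples lifted (loosely) from the markup above.
triples = [
    ("Bob's Restaurant", "isLocatedAt", "518 Chestnut Street"),
    ("Bob's Restaurant", "sellsGoods", "sandwiches"),
    ("Bob's Restaurant", "isOpenOn", "Saturday"),
]

def ask(triples, predicate, obj):
    # Return every subject that has the given predicate/object fact.
    return {s for s, p, o in triples if p == predicate and o == obj}

# "Where can I get a sandwich on Saturday?"
answer = ask(triples, "sellsGoods", "sandwiches") & ask(triples, "isOpenOn", "Saturday")
print(answer)  # {"Bob's Restaurant"}

The toothpaste-watermelon-mayor itinerary is the same operation scaled up across many sites and many cooperating agents.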

That’s the dream.  No less a figure than Tim Berners-Lee, the father of the web, has been championing this for years.  The seminal article on the topic was published in 2001.

There are a few major roadblocks.  Teaching computers about common sense is hard – that’s the ontology problem.  Creating those beautiful documents above is hard – that’s the markup problem.  Teaching computers to reason through all those facts is hard – that’s the reasoning problem.  The one I’d like to really focus on, though, is the trust problem.  I’ll post on that one in the coming days.

Change Afoot

For the next 10 days or so, the second round of voting is happening at change.org.  Ninety issues passed the first round, and the top ten vote-getters will be presented to the Obama administration on January 16th.

One that gets my attention, but hasn’t yet worked its way to the top ten, is Lawrence Lessig’s proposal for publicly funded elections.  See his presentation and vote here.  It may well be the best idea I’ve ever seen in American politics.  What’s more, without this or a similar measure, it’s easy to see America suffering greatly as corruption eats away at the heart of its political system.

The 7 minute presentation is well worth watching, and is, I believe, a cause for hope.

This is your Brain on New Media

There’s been a firestorm of late about the number of repetitive stories on RSS, particularly in the technical blogs.  Michael Arrington declared open war on embargoes, which touched off an insightful article from Louis Gray.  (Thanks to this article from Smoothspan for sending me over.)

Louis writes:

While I look forward to banging through my Google Reader feeds every day, I can pretty much bank on seeing the same story, spun a different way, a good dozen or two dozen times by every single tech blog – even if it’s clear that they are just reporting that someone else reported the news. If you see a story has been covered already and you have nothing to add – leave it alone.

What is most interesting to me here is the personal and societal side.  We’re the guinea pigs in a new media reality.  I would really love to hear a voice as incisive as Marshall McLuhan’s to help me understand what that is doing to my brain.  We have here a medium that can be treated either as hot or as cold.  It is neither entirely overwhelming nor intensely participatory.  Nor is it somewhere in between – it’s something other than the media we’ve seen up until now.  Its character is entirely dependent on the reader.

This medium calls to the forefront each person’s ability to choose, and it’s likely for this reason that it’s becoming the arena for a brilliant hashing out of interpersonal ethics – When do I speak and when am I silent?  What obligations do I have to the people who listen to me?  What obligations do I have to myself when I participate in this?  How much responsibility do I bear for the overall state of the media?

Still cooking these ideas…any insight welcome.

How Powerful are the People?

Lawrence Lessig just won me over as a new fan.  I feel like I can breathe better after listening to this interview (below).
(For those of you reading via syndication, click through to the original post to see the video.)

Topics include Professor Lessig’s relationship with Obama, national emergencies, transitional government, trust, the virtues of amateur creativity, hybrid economies, copyright (the entrenched policy, the dangerous reaction, and a more reasonable reform), remix as fair use, Creative Commons, his shift into focusing on corruption as the core underlying problem, the influence of money on politics, how to break the political dependency on money, and getting congress to put their reform chips on the table.

Favorite Quote:
“These are not the hard things that congress is getting wrong; these are the easy things that congress is getting wrong.”

Update: Here’s the powerful presentation on changing congress that he refers to in the video.