A highly recommended and very long post from Edge on Danny Hillis and his view of the future evolution of the web, with interesting comments from leading thinkers like Stewart Brand (GBN), Jaron Lanier, Douglas Rushkoff, Marc Hauser, Bruce Sterling (Wired, WorldChanging), Esther Dyson, Freeman Dyson and Howard Gardner. It touches on many emergent web trends.
The Semantic Web, or Web 3.0, is about a World Wide Database rather than a World Wide Web: structured, more machine-readable data and information on the web. It promises advanced and accelerating eLearning, the next phase of the web after the current entertainment and community phase of Web 2.0, focusing on factual and procedural knowledge. There are many interesting current case studies integrating aspects or technologies of the semantic web. Examples: FreeBase, Hakia, Radar Networks, MetaWeb, Joost and RealTravel. While I am not 100% sure about this at the moment, I do believe Google Base can be included in this space as well: it combines structured data with bottom-up, collective tagging systems.
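To make "structured, machine-readable data" concrete, here is a minimal sketch, in plain Python rather than any particular Semantic Web library, of the core idea behind RDF: facts stored as subject-predicate-object triples that software can query directly. The vocabulary below is made up for illustration.

```python
# Illustrative only: facts as (subject, predicate, object) triples,
# the data model at the heart of RDF and the Semantic Web.
# The predicate names here are invented, not a real vocabulary.

triples = [
    ("Kepler", "discovered", "Kepler's third law"),
    ("Kepler's third law", "relates", "orbital period"),
    ("Kepler's third law", "relates", "orbital distance"),
    ("FreeBase", "is_a", "structured knowledge base"),
]

def facts_about(subject, store):
    """Return all (predicate, object) pairs recorded for a subject."""
    return [(p, o) for s, p, o in store if s == subject]

print(facts_about("Kepler's third law", triples))
```

A machine can follow such links between facts without having to parse human-oriented prose, which is exactly what a World Wide Database enables.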
What does this all mean?
- Machine learning -> more outsourcing of (factual) tasks to bots and agents -> people will devote more time to local low-end services, higher-level (symbolic) thinking and other human-specific skills/talents such as soft skills (intuitive creativity and communicative/emotional/social skills).
- Better search engine results/experiences -> higher productivity and more innovation
- Online articles/posts become more self-aware and self-correcting by dynamically integrating feedback loops on the predictions in their texts. This makes it easier to decipher the true track record of experts making future claims in their fields, which is a boost for reputation systems. Think of a more structured version of Wikipedia's self-regulation.
- Better data remixes/mash-ups -> higher productivity and more innovation
- Less impact of SEO (spam) tactics, due to the rise of structured and verified (!) data formats like PICS, Content Labels and microformats. The content of (commercial) websites will be indexed more authentically, bringing back a better search engine experience for end users.
- Deepens the impact, breadth and relevance of Mixed and Augmented Reality (AR) applications
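The microformats mentioned above embed machine-readable fields inside ordinary HTML via agreed-upon class names. As a hedged sketch, here is how a name could be pulled out of an hCard-style snippet using only Python's standard library; real microformat parsers handle many more cases and nesting rules.

```python
# Hedged sketch: extracting structured data from HTML marked up with
# the hCard microformat (class="vcard" wrapping class="fn" for the
# formatted name), using only the standard library's HTMLParser.
from html.parser import HTMLParser

class HCardNames(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self.in_fn = True  # we are inside a formatted-name element

    def handle_endtag(self, tag):
        self.in_fn = False

    def handle_data(self, data):
        if self.in_fn:
            self.names.append(data.strip())

html = '<div class="vcard"><span class="fn">Danny Hillis</span></div>'
parser = HCardNames()
parser.feed(html)
print(parser.names)  # ['Danny Hillis']
```

The point is that the same page remains readable by humans while exposing its data fields to machines, which is the property L8's verified formats rely on.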
Most important in my view is that a Knowledge Web has to take into account the mental, evolutionary state of the recipient if it is to be truly effective. Communication and learning are a two-way street. How does the Knowledge Web learn about this mental state? Through personalization? Behavioral, contextual, profiled, social-network history? Through emotional sensing? MIT and DARPA (Pentagon) are working on these (recipient) items as well (Affective Computing and Cognitive Augmentation).
"As useful as the Web is, it still falls far short of Alexander's tutor or even Vannevar Bush's Memex. For one thing, the Web knows very little about you (except maybe your credit card number). It has no model of how you learn, or what you do and do not know—or, for that matter, what it does and does not know. The information in the Web is disorganized, inconsistent, and often incorrect. Yet for all its faults, the Web is good enough to give us a hint of what is possible.
It is changing the way we learn. For example, one topic in the knowledge web might be Kepler's third law (that the square of a planet's orbital period is proportional to the cube of its distance from the sun). This concept would be connected to examples and demonstrations of the law, experiments showing that it is true, graphical and mathematical descriptions, stories about the history of its discovery, and explanations of the law in terms of other concepts. For instance, there might be a mathematical explanation of the law in terms of angular momentum, using calculus. Such an explanation might be perfect for a calculus-loving student who is familiar with angular momentum. Another student might prefer a picture or an interactive simulation. The database would contain information, presumably learned from experience, about which explanations would work well for which student. It would contain representations of many successful paths to understanding Kepler's law.
In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them. The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.
One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer.
Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, Esther Dyson said. "In the middle there is something that represents things as they are," she said. "Something that captures the relationships between things." That addition has long been a vision of researchers in artificial intelligence. "It's like a system for building the synapses for the global brain," said Tim O'Reilly."
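Hillis's Kepler example above can itself be checked with a few lines of code. A quick numeric sketch, using approximate published orbital elements: if the square of the period is proportional to the cube of the distance, then T²/a³ should come out as (nearly) the same constant for every planet.

```python
# Kepler's third law: T^2 is proportional to a^3, so T**2 / a**3
# should be (nearly) constant across planets.
# Periods T in Earth years, semi-major axes a in AU (approximate).
planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (T, a) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")
# Each ratio comes out close to 1.0 in these units.
```

This is exactly the kind of "experiment showing that it is true" a knowledge web could attach to the Kepler topic for a numerically inclined student, alongside the calculus derivation or the interactive simulation for others.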