“Global naming leads to global network effects.”

First, a reminder about what makes the Web, the Web…

W3C.org: Architecture of the World Wide Web, Volume One: 2. Identification:

In order to communicate internally, a community agrees (to a reasonable extent) on a set of terms and their meanings. One goal of the Web, since its inception, has been to build a global community in which any party can share information with any other party. To achieve this goal, the Web makes use of a single global identification system: the URI. URIs are a cornerstone of Web architecture, providing identification that is common across the Web. The global scope of URIs promotes large-scale “network effects”: the value of an identifier increases the more it is used consistently (for example, the more it is used in hypertext links (§4.4)).

Principle: Global Identifiers

Global naming leads to global network effects.

This principle dates back at least as far as Douglas Engelbart’s seminal work on open hypertext systems; see section Every Object Addressable in [Eng90].

What are the global – public – URIs of Facebook? What are they for any social network, for that matter?

This is an important train of thought to consider when debating how Facebook and other social networks influence our relationship with Google, and the entire Web.

Facebook’s growth devalues Google’s utility – it devalues the public Web – at least how it is described in “Small Pieces Loosely Joined” and the Web’s own architecture document.

This is why Scoble can’t be more wrong when he says “Why Mahalo, TechMeme, and Facebook are going to kick Google’s butt in four years.” Facebook and other social networks will not only affect how we use Google – they will eliminate the utility of the Mahalos and TechMemes of the world, because those services, too, rely on a robust and growing *public* URI ecosystem.

Dare: Why Google Should be Scared of Facebook:

What Jason and Jeff are inadvertently pointing out is that once you join Facebook, you immediately start getting less value out of Google’s search engine. This is a problem that Google cannot let continue indefinitely if they plan to stay relevant as the Web’s #1 search engine.

What is also interesting is that thanks to efforts of Google employees like Mark Lucovsky, I can use Google search from within Facebook but without divine intervention I can’t get Facebook content from Google’s search engine. If I was an exec at Google, I’d worry a lot more about the growing trend of users creating Web content where it cannot be accessed by Google than all the “me too” efforts coming out of competitors like Microsoft and Yahoo!.

The way you get disrupted is by focusing on competitors who are just like you instead of actually watching the marketplace. I wonder how Google will react when they eventually realize how deep this problem runs?

None of this invalidates Scott Karp’s riff on Scoble’s main point – there is a growing role for “Trusted Human Editors In Filtering The Web”. Our friends, our families, our communities. Not just machines and algorithms.

My favorite and fellow bloggers, Slashdot, Salon, the home page of the NYTimes, Philly Future, Shelley Powers, Scott himself, my news reader subscriptions, are all trusted humans, or representations of trusted humans, filtering the Web for me.

There’s nothing new about the fact that people play a direct role in how we discover what may interest us on the Web. It goes back to Yahoo!’s earliest days, back to links.net, back to the NCSA What’s New page. It goes to the heart of what blogging is all about.

People have been way too hung up on Digg’s voting algorithms and forget that what makes Digg, Digg is its community of participants.

People forget Slashdot outright. As they do Metafilter.

So it still comes down to trust – What organizations do we trust? What systems do we trust? What communities do we trust? What people do we trust?

And just how do we share that with each other?

New Looks

Leonard Witt’s blog and PJNet got a great facelift.

So did Scott Karp’s Publishing 2.0 courtesy of bokardo.

Doc Searls recently moved into new digs.

Steve Rubel is contemplating a move to WordPress.com from TypePad (I think his reasoning is flawed and so are most of the folks leaving comments for him).

Rafe recently upgraded to MT4. I need to get around to doing the same.

Speaking of Scott Karp, he’s launched an interesting new journalism service.

Simple RESTful URLs with JSPs

Bill de hOra posted an interesting question the other day, that has to do with mapping views to requests, cleanly, in a RESTful way, as Sam Ruby framed it:

it’s easy to forget that Servlets were Java’s response to CGI, way back when.

Here’s the link to Stefan’s entry:

http://www.innoq.com/blog/st/2007/08/15/java_web_frameworks.html

I’m wondering how one would produce a URL space for a blog-style archive using Servlets+JSP, and do so in a way that isn’t an explicit CGI/RPC-style call? That is, so the URLs don’t end up like this:

http://www.innoq.com/blog/entry.jsp?id=java_web_frameworks

with one constraint – “just a servlet” that pulls java_web_frameworks.html directly from a “2007/08/15” folder on the filesystem and bypasses JSP is out. All of the response is to be generated via JSP. Would we need to create a framework, however ‘micro’?

In the Django world, answering such a question is rather easy. And PHP hackers are probably saying, hey, use .htaccess to route requests. But in Java, this question becomes a bit more complicated.

A Java developer would want to solve two problems here: enable “clean” RESTful URLs, and do as little Java coding as possible by delegating responsibility for defining views to a templating language – hopefully empowering someone who knows just HTML/CSS to work their magic. The benefits of such an approach can’t be overstated. We went down such a path at Knight Ridder with the Cofax CMS, and it enabled a lot of creativity with few resources on hand (lots of folks know HTML/CSS/JS; few know Java).

Carbon Five discusses an approach that decomposes path info into parameters for Spring MVC controllers: Parameterized REST URLs with Spring MVC. This solves problem one. It still routes requests to a Controller defined in Java, and I’ve seen far too many designs stop there, leaving you with a Request that maps to a Controller that maps to a single View. But it does give you an *excellent* foundation for solving the second problem.

Sam Ruby points to URLRewriteFilter as one possible solution. This potentially solves both problems.
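To give a sense of what the URLRewriteFilter route looks like, here’s a minimal urlrewrite.xml sketch – the URL pattern, parameter names, and JSP path are my own assumptions for illustration, not anything from the posts above – mapping a clean archive URL onto the underlying JSP:

```xml
<urlrewrite>
    <rule>
        <!-- e.g. /blog/2007/08/15/java_web_frameworks -->
        <from>^/blog/(\d{4})/(\d{2})/(\d{2})/([\w-]+)$</from>
        <!-- forwarded internally; the browser never sees entry.jsp -->
        <to>/entry.jsp?year=$1&amp;month=$2&amp;day=$3&amp;id=$4</to>
    </rule>
</urlrewrite>
```

The filter intercepts the request before the container maps it, so entry.jsp still does all the rendering while the public URL stays clean.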

Stefan Tilkov explains how to decompose path info and use RequestDispatcher as a solution. In Sam Ruby’s comments, I suggested just such an approach and it’s worked great for me in previous (and current) projects. This potentially solves both problems.
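The core of that RequestDispatcher approach is just decomposing the path info and forwarding. Here’s a minimal sketch of the routing logic – the /yyyy/mm/dd/slug pattern and the /WEB-INF/views/entry.jsp path are assumptions for illustration, not anyone’s actual layout – with the servlet-specific calls shown in comments so the parsing itself stays self-contained:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ArchiveRouter {
    // Matches archive paths like /2007/08/15/java_web_frameworks
    private static final Pattern ARCHIVE =
        Pattern.compile("^/(\\d{4})/(\\d{2})/(\\d{2})/([\\w-]+)$");

    // Decompose a path into the internal JSP forward target.
    // Returns null when the path doesn't match, so the caller
    // (a servlet mapped to /*) can let the container 404.
    public static String jspFor(String pathInfo) {
        Matcher m = ARCHIVE.matcher(pathInfo);
        if (!m.matches()) {
            return null;
        }
        // In the actual servlet you would do something like:
        //   request.setAttribute("entryId", m.group(4));
        //   request.getRequestDispatcher("/WEB-INF/views/entry.jsp")
        //          .forward(request, response);
        // so the JSP generates the whole response, per the constraint.
        return "/WEB-INF/views/entry.jsp?id=" + m.group(4);
    }
}
```

Putting the JSPs under /WEB-INF keeps them from being requested directly, so the clean URL is the only public address.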

BTW, if you’re interested in a templating language outside of JSP (and who isn’t?), consider FreeMarker. A huge project I’m helping design and develop is having terrific success with it and Spring MVC. Real magic starts to happen when you decouple Requests from Views. A shortcut to this in Spring MVC is implementing a RequestToViewNameTranslator.

Yahoo! and Google Move to Squeeze Newspapers Further

Yahoo! has relaunched its local search service. It better surfaces community-driven participation and feels far more like a destination than before.

Screenwerk: Yahoo! Refreshes, Redesigns Local.

They still haven’t gone as far as I expect them to one day – integrating Flickr, del.icio.us, Groups, and Maps into a cohesive whole – but the potential is there.

On the other side is Google, which recently launched its Business Referral Representative program.

Google will now pay you, as an independent contractor, to collect information on local businesses, tell them about AdWords, and submit them to Google Maps. You can read more about it here and in a recent SearchEngineWatch article.

Social Software Rule: “Personal Value Precedes Network Value”

Bokardo: The Del.icio.us Lesson:

The one major idea behind the Del.icio.us Lesson is that personal value precedes network value. What this means is that if we are to build networks of value, then each person on the network needs to find value for themselves before they can contribute value to the network. In the case of Del.icio.us, people find value saving their personal bookmarks first and foremost. All other usage is secondary.

As people use Del.icio.us more, and in order to gain more personal value, they use tags to be able to find their bookmarks later. Tagging isn’t even the primary function of Del.icio.us. Most of the tagging done on Del.icio.us is done secondarily, and for personal use.

The social value of tags on Del.icio.us is only a happy side-effect. Even though most of the ink spilled about Del.icio.us is about the social value, it’s really not the reason why people use it.

Similar to Google aggregating links that were originally created for taking readers from one document to another, Del.icio.us can aggregate tags in order to find out how people value content. If 1,000 people save and tag the same bookmark, for example, that’s a good sign that they find value in it. But to think that people tag so that this information can be aggregated is to give people a trait of altruism they just don’t possess.
Blinded by the Aggregation Light

Unfortunately, the ability to aggregate has blinded many software developers to think that tags are a cure-all to the success of their software.

Lead in Baby Bibs!

Ya know, I’m sure this is a bit of fear mongering, but yesterday I cursed out loud “holy fuck” when I saw this headline in the New York Times: Some Baby Bibs Said to Contain Levels of Lead.

It would appear every day a new story pops up to remind us that the infrastructure we rely on – to provide us the capacity to do seemingly ordinary things in our lives, from brushing our teeth, to crossing a bridge, to hanging out on a corner with friends in safety – isn’t all that reliable anymore.

We’re No Better Informed About Our World Than In 1989

Despite the revolutionary era of information and communication we live in, Americans remain in the dark about our world.

Pew released a survey back in April detailing Americans’ knowledge of current affairs, comparing the status quo to that of 1989.

We’ve had an explosion of new media and communications services and tools come into being these past 15 years. They have completely reshaped how we get our news and how we connect with our communities.

Social Networks, Blogs, RSS, News Aggregators, Email, Email Lists, Message Boards, Websites, News portals, the Web, the Internet, 24-hour cable news, talk radio, online magazines, collaborative news filters, algorithmic news filters – the list goes on and on.

You would think with so many choices, so many avenues to get informed, we’d actually be better informed.

You’d be wrong.

On average, today’s citizens are about as able to name their leaders, and are about as aware of major news events, as was the public nearly 20 years ago. The new survey includes nine questions that are either identical or roughly comparable to questions asked in the late 1980s and early 1990s. In 2007, somewhat fewer were able to name their governor, the vice president, and the president of Russia, but more respondents than in the earlier era gave correct answers to questions pertaining to national politics.

In 1989, for example, 74% could come up with Dan Quayle’s name when asked who the vice president is. Today, somewhat fewer (69%) are able to recall Dick Cheney. However, more Americans now know that the chief justice of the Supreme Court is generally considered a conservative and that Democrats control Congress than knew these things in 1989. Some of the largest knowledge differences between the two time periods may reflect differences in the amount of press coverage of a particular issue or public figure at the time the surveys were taken. But taken as a whole the findings suggest little change in overall levels of public knowledge.

The survey provides further evidence that changing news formats are not having a great deal of impact on how much the public knows about national and international affairs.

I’m among a bunch of folks who tend to trumpet online services as a cure-all for our past lack of information awareness and communications access.

On the opposite side of the bench have been those who have sounded alarm after alarm about how our ever growing media-and-communications-scape will fragment us ever further and result in ever tightening echo chambers, making us less informed about subject matter as a whole.

Turns out both perspectives are wrong.

Here we are, with so much new technology, so much new media, transforming the way we live our lives, and yet we are as informed, as ill informed, as we were in 1989.

Related:

Newsweek: Dunce-Cap Nation

Wired: Infoporn: Despite the Web, Americans Remain Woefully Ill-Informed

Moving to a Mac?

There’s a possibility I’ll be switching to a MacBook Pro as my development machine at work. A few years ago, before OS-X, a switch like this would have made me feel a little worried. I’m productive in Windows. One of the reasons why is how I arrange my Windows environment to mirror, in a sense, the Linux and Solaris machines I typically develop software for. OS-X eliminates that distinction. As for software, I use a stack of free and open source applications that have analogs on OS-X.

Eclipse (has a Mac distribution)

Sun’s Java SE SDK (has a Mac distribution)

Python (has a Mac distribution)

ActivePerl (has a Mac distribution)

WinSCP and PuTTY (Fugu and Cyberduck)

Notepad++ (Textwrangler)

Emacs (has a Mac distribution)

wikidPad (runs on a Mac)

Cygwin (OS-X has Terminal :))

IrfanView (iPhoto)

Inkscape (has a Mac distribution)

Subversion (has a Mac distribution)

Trac (server side, browser accessed application)

Here’s a thing that’s been disturbing me about Facebook and Social Networking services…

Tim Berners-Lee, as quoted by Jon Udell in a piece that greatly influenced me back in the day, called the web “a shared information space through which people and machines could communicate.” The original piece in which Tim Berners-Lee said that is still up for all to read, titled “The World Wide Web: Past, Present and Future”. I found the piece by typing the quote into Google. Give it a try.

As we share our knowledge, collectively with one another, across blogs, message forums, email lists, and any other services that permit indexing, and reinforce that knowledge via hyperlinking, we are, collectively, building a space that benefits humanity.

It is this collective space that helped me learn what I needed to learn to build a career.

And all this happens, not because of altruistic reasons, but because the architecture of the Web empowers, via the hyperlink, a certain form of communication and collaboration.

The conversations that occur on Facebook, and on most social networking services, happen in the public-private.

In places not indexed by Google, not indexed by Yahoo!, yet are public to selected communities that have access and privilege to them. Gated communities. Islands.

Certainly, there have always been places out of reach of search engines (and there will always be a need for some), but until the last few years, the call from the digerati was to surface these databases of knowledge to the public, from behind whatever proprietary walls may have kept them out of reach – whether newspaper archives or email lists.

Don’t get me wrong – there’s a lot to celebrate when it comes to social networking services. I’m a participant in more than a few, to be sure.

But if they come to define the Web, as they are to some in the media, then I fear we are taking a great step backward.

Have you read ‘the dip’?

I’m in the process of writing a piece on Philly Future, about its future, titled “Philly Future: is it in ‘the dip’ or in a cul-de-sac?”. If you’ve read Seth Godin’s “The Dip” you’ll immediately get the reference.

The thing is, every time I start to write it, I can’t help but feel demoralized.

Depressed. Run down. Beaten up.

If I think about how things are at PF right now, it is full of unexplored and sometimes broken promise. It’s taken all the free time I’ve had just to keep it running.

It doesn’t meet my personal standards for what I expect a great service to be. And I’m never satisfied simply running in place. So things there need to change.

With my day job at full tilt (in a good way – my team is building something to be proud of, and I hope to share more sometime), and with my body wracked with pain on and off, I’ve felt stretched for time as I haven’t since maybe ten years ago, when I was still working at Sears, putting all else aside so that I could learn software engineering.

Shoot – the pain is so frustrating that I haven’t played my guitar for longer than five minutes in the last six months. I’m good at managing it. I’m functional. And I’ve improved quite a bit since I earned the herniated disk. And for that I am thankful. I’m not forced into surgery the way some are.

But sometimes I find myself spinning.

The great thing – the unbelievable thing – is that I’ve learned that it’s easy to get centered again.

Sometimes it’s simply hearing a friend’s or my brother’s voice on the phone. Sometimes, all I need to do is turn to my wife, my daughter, and even my dog on the couch and smile at my blessings as my heart fills.

As long as I have that – I have everything in the world 🙂