This looks like it could be handy – or at least a recipe to consider: PHPRESTSQL.
Tag Archives: REST
Python links for October 28th, 2009
Dive Into Python: Chapter 11. HTTP Web Services
IBM developerWorks: Working with Web server logs
ActiveState: Python recipes
PLEAC – Programming Language Examples Alike Cookbook: PLEAC – Python
Websites are RESTful resources
Jon Moore: Websites are also RESTFul Web Services
Google I/O 2008 – Apache Shindig presentation
Dare Obasanjo: “Don’t fight the Web, embrace it”
A must-read – Dare Obasanjo: Explaining REST to Damien Katz:
There are other practical things to be mindful of as well to ensure that your service is being a good participant in the Web ecosystem. These include using GET instead of POST when retrieving a resource and properly utilizing the caching related headers as needed (If-Modified-Since/Last-Modified, If-None-Match/ETag, Cache-Control), learning to utilize HTTP status codes correctly (i.e. errors shouldn’t return HTTP 200 OK), keeping your design stateless to enable it to scale more cheaply and so on. The increased costs, scalability concerns and complexity that developers face when they ignore these principles is captured in blog posts and articles all over the Web such as Session State is Evil and Cache SOAP services on the client side. You don’t have to look hard to find them. What most developers don’t realize is that the problems they are facing are because they aren’t keeping RESTful principles in mind.
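The caching machinery Dare lists – If-None-Match/ETag, If-Modified-Since/Last-Modified – can be exercised in a few lines of Python. Here is a minimal sketch of a conditional GET using only the standard library; the function names and the idea of caching the validators between calls are mine, not from the quoted post:

```python
import urllib.request
import urllib.error

def build_conditional_request(url, etag=None, last_modified=None):
    """Attach whichever cache validators we saved from a prior response."""
    request = urllib.request.Request(url)
    if etag:
        request.add_header("If-None-Match", etag)
    if last_modified:
        request.add_header("If-Modified-Since", last_modified)
    return request

def conditional_get(url, etag=None, last_modified=None):
    """Return (status, body, etag, last_modified); body is None on a 304."""
    request = build_conditional_request(url, etag, last_modified)
    try:
        with urllib.request.urlopen(request) as response:
            return (response.status, response.read(),
                    response.headers.get("ETag"),
                    response.headers.get("Last-Modified"))
    except urllib.error.HTTPError as err:
        if err.code == 304:  # Not Modified – the cached copy is still good
            return (304, None, etag, last_modified)
        raise  # real errors surface instead of masquerading as 200 OK
```

On the second fetch you pass back the ETag and Last-Modified you received on the first; a well-behaved server answers 304 with an empty body, which is exactly the cheap round trip the quote is arguing for.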
REST and Unix Pipes
http://www.linfo.org/pipe.html
http://www.xml.com/lpt/a/1644
http://www.sixapart.com/pronet/articles/shuffling_atom_.html
http://radar.oreilly.com/archives/2007/02/pipes-and-filters-for-the-inte.html
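The pipes-and-filters analogy the links above draw can be sketched with Python generators: each stage consumes and produces a stream of entries, just as shell filters consume and produce lines. The feed data below is made up for illustration:

```python
def parse(lines):
    """Filter 1: turn raw 'title | category' lines into tuples."""
    for line in lines:
        title, _, category = line.partition("|")
        yield title.strip(), category.strip()

def only(category, entries):
    """Filter 2: keep entries in one category, like grep."""
    for title, cat in entries:
        if cat == category:
            yield title

def take(n, entries):
    """Filter 3: truncate the stream, like head."""
    for i, entry in enumerate(entries):
        if i >= n:
            break
        yield entry

# A made-up feed, flattened to lines for the sketch
feed = [
    "REST explained | rest",
    "Unix at 40 | unix",
    "ETags in practice | rest",
    "Pipes and filters | unix",
]

# Composed the way a shell composes pipes:
#   cat feed | parse | grep rest | head -2
print(list(take(2, only("rest", parse(feed)))))  # → ['REST explained', 'ETags in practice']
```

Because generators are lazy, nothing downstream pulls more than it needs – the same property that lets `head` stop a long Unix pipeline early.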
“Global naming leads to global network effects.”
First, a reminder about what makes the Web the Web…
W3C.org: Architecture of the World Wide Web, Volume One: 2. Identification:
In order to communicate internally, a community agrees (to a reasonable extent) on a set of terms and their meanings. One goal of the Web, since its inception, has been to build a global community in which any party can share information with any other party. To achieve this goal, the Web makes use of a single global identification system: the URI. URIs are a cornerstone of Web architecture, providing identification that is common across the Web. The global scope of URIs promotes large-scale “network effects”: the value of an identifier increases the more it is used consistently (for example, the more it is used in hypertext links (§4.4)).
Principle: Global Identifiers
Global naming leads to global network effects.
This principle dates back at least as far as Douglas Engelbart’s seminal work on open hypertext systems; see section Every Object Addressable in [Eng90].
What are the global – public – URIs of Facebook? What are they, for that matter, of any social network?
This is an important train of thought to consider when debating how Facebook and other social networks influence our relationship with Google, and the entire Web.
Facebook’s growth devalues Google’s utility – it devalues the public Web – at least as the Web is described in “Small Pieces Loosely Joined” and in its own architecture document.
This is why Scoble couldn’t be more wrong when he says “Why Mahalo, TechMeme, and Facebook are going to kick Google’s butt in four years.” Facebook and other social networks will not only affect how we use Google – they will erode the utility of the Mahalos and TechMemes of the world, because those services too rely on a robust and growing *public* URI ecosystem.
Dare: Why Google Should be Scared of Facebook:
What Jason and Jeff are inadvertently pointing out is that once you join Facebook, you immediately start getting less value out of Google’s search engine. This is a problem that Google cannot let continue indefinitely if they plan to stay relevant as the Web’s #1 search engine.
What is also interesting is that thanks to efforts of Google employees like Mark Lucovsky, I can use Google search from within Facebook but without divine intervention I can’t get Facebook content from Google’s search engine. If I was an exec at Google, I’d worry a lot more about the growing trend of users creating Web content where it cannot be accessed by Google than all the “me too” efforts coming out of competitors like Microsoft and Yahoo!.
The way you get disrupted is by focusing on competitors who are just like you instead of actually watching the marketplace. I wonder how Google will react when they eventually realize how deep this problem runs?
None of this invalidates Scott Karp’s riff on Scoble’s main point – there is a growing role for “Trusted Human Editors In Filtering The Web”. Our friends, our families, our communities. Not just machines and algorithms.
My favorite fellow bloggers, Slashdot, Salon, the home page of the NYTimes, Philly Future, Shelley Powers, Scott himself, my news reader subscriptions – all are trusted humans, or representations of trusted humans, filtering the Web for me.
There’s nothing new about the fact that people play a direct role in how we discover what interests us on the Web. It goes back to Yahoo!’s earliest days, back to links.net, back to the NCSA What’s New page. It goes to the heart of what blogging is all about.
People have been way too hung up on Digg’s voting algorithms and forget that what makes Digg, Digg is its community of participants.
People forget Slashdot outright. As they do Metafilter.
So it still comes down to trust – What organizations do we trust? What systems do we trust? What communities do we trust? What people do we trust?
And just how do we share that with each other?