Okay, so maybe I overstated it a bit; let's look and see: Google held a programming contest to
get people to design new systems and applications related to its mission (in honor of its "more than three years" in existence).
The winner, a programmer named Dan Egnor, was awarded ten thousand dollars for his development, which allows
you "to search for web pages within a particular geographic locale" while engaging in "traditional keyword searching." "Great," you say, right? What could be better
than turning Web search engines into the Yellow Pages and searching for "mauve pillowcases" at retail establishments near me?
Well, since I'm easily frightened, at first I thought that maybe the French court was right in the Yahoo! Nazi auction case and it really was possible to block people geographically; but Ernie talked me out of that. I mean, this technology doesn't allow
the Web site to determine who the surfers are; it just lets people find what they want in the real world while messing around in Cyberspace. But according to the
Wired article discussing the story, it wasn't the functionality itself that excited Google. No,
it was the scalability of the function. Put that together with automation, and maybe we should start thinking.
What do I mean? I'm not so concerned that Egnor's development can identify where Web sites or their owners are based; you can do that already by searching on a location name in many search engines, or just by looking at the page itself. I'm more concerned with two developments reflected in this technology: 1) "automating" that process within search engines (which are, in my humble opinion, the keys to the direction network and information technology takes us in the future; if no one can find the information, does it matter that it's there? The tree falling in the woods and all that); and 2) scaling that technology to "work across the whole Web."
While the issue in Yahoo! was identifying surfer locations and this development doesn't necessarily facilitate that, the scaling makes me wonder what the next logical step is. Doesn't this create the potential for [insert name of bad anti-democratic nation here] to say: "Forget white lists and filters; block non-domestically originating sources using this new technology"?
How does it work? The technology checks addresses found on Web pages (in various formats) against a census database to turn those addresses into geographic coordinates, and it does so in "real time" (I'm assuming, since the Wired article doesn't mention delayed results). It's this last step where the implications of Moore's Law concern me. In action, this technology, combined with ever-increasing processing power, could facilitate more easily than ever the addition of potentially autonomy-inhibiting processes to fundamental network services (even down to the TCP/IP layer).
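To make the idea concrete, here is a minimal Python sketch of the kind of lookup the article describes: scan a page's text for street addresses and resolve them against a census-style table of coordinates. The regex and the tiny lookup table are my own illustrative inventions, not Egnor's code; his system obviously handles far more address formats and a real census database.

```python
import re

# Toy stand-in for a census-style geocoding database
# (address -> latitude/longitude). Purely illustrative.
GEO_DB = {
    "1600 pennsylvania ave, washington, dc": (38.8977, -77.0365),
    "1 infinite loop, cupertino, ca": (37.3318, -122.0312),
}

# Simplified pattern for one common US address format:
# number, street name + suffix, city, two-letter state.
ADDRESS_RE = re.compile(
    r"\d+\s+[A-Za-z ]+(?:Ave|St|Blvd|Loop|Rd),\s*[A-Za-z ]+,\s*[A-Z]{2}"
)

def geocode_page(text):
    """Return (address, (lat, lon)) pairs for addresses found in text."""
    results = []
    for match in ADDRESS_RE.finditer(text):
        key = match.group(0).lower()
        if key in GEO_DB:
            results.append((match.group(0), GEO_DB[key]))
    return results

page = "Visit us at 1600 Pennsylvania Ave, Washington, DC for a tour."
print(geocode_page(page))
# prints [('1600 Pennsylvania Ave, Washington, DC', (38.8977, -77.0365))]
```

The scary part, as discussed above, isn't this lookup itself; it's that each step is cheap enough to automate and run across every page a crawler sees.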
This should, by the way, sound a bit like the filtering debate. Can good technology go bad? No. There is no good technology. There is no bad technology. Technology is neutral (even nuclear science is neutral). It's only when you put the science into a bomb and blow stuff up that its bad character comes out.
What does it all mean to us? Okay, no sky falling, I've just checked. But maybe there's a crack in the ozone layer that we should watch.