Features: 'Digital Cops in a Virtual Environment' Conference Report
Posted by James Grimmelmann on Monday, April 12 @ 22:24:21 EDT | Computer Crime
The Information Society Project hosted a conference here at Yale a couple of weeks ago. Digital Cops in a Virtual Environment ran March 26-28, and it was lots of fun. The panelists and audience were a nice mix of technologists and lawyers. Every single speaker was engaging and interesting (and I've been to enough conferences to know just how improbable that is). Basically every panel had at least one good-natured but meaningful argument between two of the speakers. And the issues raised are only getting more urgent with every passing day.

Here're my thoughts on the event, cobbled together from the notes I made as the conference went on . . .

Dan Geer's Frontier Justice

Dan Geer has great mutton chops. They give him the look of a Yukon territory gold rush saloon-keeper. (I'm probably also being influenced by Robert William Service's The Shooting of Dan McGrew -- I keep on thinking of him as "Dangerous Dan Geer.") You don't want to cause trouble in his establishment: he's genial and charismatic, but he keeps a loaded shotgun behind the bar.

His presentation is entitled "The Physics of Online Law." Now, by physics, he means the study of fundamental truths: "facts that are simply facts," as he calls them. He plans to point out a bunch of realities that can't be ignored -- and then point out what happens when these hard truths meet our soft and flabby intuitions. This is metaphorical physics: he's not borrowing the actual content of physics (how fast signals propagate in a network, or how small we can shrink a computer chip), nor is he borrowing the methodology of physics (conservation laws or partial differential equations, for example). No, he's referring to the swagger of physicists: their utter conviction that all other human knowledge is soft and flabby when compared to the beefy brilliance of physics. So he also reminds me a little of Hans and Franz from Saturday Night Live.

I do have to admit, though, that he tosses off a number of great insights and easily-forgotten truths:

  • From the perspective of the outside world, there's nothing wrong with being infected by malware. What's wrong is passing along that infection. Thus, monitoring your outbound traffic -- as well as inbound attempts to break your security -- is a best practice. If you're going to criminalize something, criminalize passing an infection along, not contracting it.
  • Bankers understand security. They have the most to lose, and they have to think on long time scales.
  • There is no way to comply with every jurisdiction's computer security rules: there are too many inconsistencies among the various mandated and forbidden practices. All you can do is pick whose laws you'll ignore.
  • "On the Internet, every sociopath is your next-door neighbor." (Others will quote this line later on; most of them will mangle it horribly.)
  • In an age of zero-cost copies, our intuition that your having possession of something means depriving me of its benefits breaks down. Yeah, yeah, non-rival consumption is something people in the copyfights have been pointing out for forever, but it always bears repeating.
  • Infected machines may exhibit no symptoms. Even when a machine is spewing out spam and worms by the bushel, it may work perfectly well from its owner's perspective. (To make your head hurt, try to think through the implications of this point combined with the previous one.)
  • The goal of security professionals, most of the time, is keeping honest people honest.
  • The more complex your security architecture, the more vulnerable you are to denial of service. (The TSA springs to mind -- one person running through the carry-on screening can shut down six airports.)
  • As risk increases, the defensible perimeter decreases. This is one of those observations that I had never thought could be translated into practical computer terms. But no. Geer points out that the Internet itself used to be considered something we could "secure." Then we retreated behind firewalls. Now we're trying to create secure portions of computers (think of Palladium's secure sandbox).
  • All important attacks recruit soldiers. Your responsibility to others includes not letting your computer be recruited into someone else's attack. (Here, I start humming to myself the "Zombie Army of Evil" song from the dot-com musical a friend wrote. Yes, it really was about that sort of zombie, and yes, it has a damnably catchy tune.)
  • Reporting "incidents" in some ways misses the point. You need to scan the routine data to know what's going on; only if you compare your firewall data with someone else's can you figure out who's a target of choice and what's just Net-wide port-scanning.
  • In March 2000, the E911 virus caused computers to pick up the modem and dial 911, causing a limited DoS attack on 911 services. The Nimda worm rampaged across the Internet a week after September 11, leaving behind a back door for its creators to issue future commands. We are incredibly lucky that no one cross-bred the two and rewrote E911 to take over Nimda-infected computers and have them call 911.
  • Polymorphic computer worms -- the ones that attack multiple vulnerabilities -- are, in epidemiological terms, worst-case diseases. They infect new carriers quickly, they have infinitesimal incubation periods, they don't self-heal, and the multiple methods of spread mean that it's hard to get effective herd immunity against them. (I couldn't resist poking at this one with a toy simulation; see below.)
  • When analyzing risks, insurers usually know (but sometimes forget) that the total lack of a loss history can mean one of two things. Either the thing you're insuring is riskless, or you're in fact aggregating lots of risks so that they'll all pay out at once. Companies selling what they thought was insurance against water heaters overheating turned out to be selling insurance against Hurricane Andrew.
It's not like I have much to add to this list. Geer doesn't draw any big conclusions from it; no one else does, either. It's just a bunch of things not to forget while you're in Geer's Saloon and Honky-Tonk Hall.
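
Okay, one small addition after all. Geer's worst-case-disease point is the one item on the list that begs to be poked at quantitatively, so here's a toy epidemic simulation -- mine, not his, and every number in it is invented -- showing why patching a single vulnerability barely slows a worm that can spread through two:

    import random

    random.seed(42)

    N = 10_000        # hosts; all numbers here are invented for illustration
    PATCH_RATE = 0.7  # fraction of hosts patched against vulnerability A
    PROBES = 3        # random peers each newly infected host probes

    def outbreak(multi_vector):
        """Total infections for a worm using one vector (A) or two (A and B)."""
        patched = [random.random() < PATCH_RATE for _ in range(N)]
        infected = [False] * N
        infected[0] = True
        frontier = [0]
        while frontier:
            new = []
            for _ in frontier:
                for _ in range(PROBES):
                    t = random.randrange(N)
                    # Vector A fails against patched hosts; vector B always works.
                    if not infected[t] and (multi_vector or not patched[t]):
                        infected[t] = True
                        new.append(t)
            frontier = new
        return sum(infected)

    print("single-vector worm:", outbreak(False), "hosts")  # fizzles out early
    print("multi-vector worm: ", outbreak(True), "hosts")   # sweeps most of the net

With 70% of hosts patched against vector A, the single-vector worm's effective reproduction rate falls below one and the outbreak dies; give the worm a second vector and the herd immunity evaporates. Which is, I take it, exactly his point.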

Good-Natured Argument Number One

The first panel on Saturday is called "The New Crime Scene." The various speakers discuss the meaning of "security" in a distributed environment, where every sociopath is your neighbor and zombie armies of evil are ripe for the recruiting. (NYU's Helen Nissenbaum does a nice deconstruction of the meaning of "security" itself in this context.) Stanford heroine Jennifer Granick talks about sensible policy for disclosing vulnerabilities: people are coming to agree that some delay between notifying the vendor and going public is appropriate; there's just not yet consensus on how long. And David Aucsmith (in the form of Philip Reitinger) and Tony Rutkowski discuss basic structural, nay, infra-structural issues of security -- respectively, designing systems to resist attack, and designing systems to help you figure out whodunnit once you've been attacked.

Pretty much as soon as she can get the moderator (the ISP's Shlomit Wagman) to call on her, Yale's own Joan Feigenbaum rises to ask a question. (In fact, she rises to ask a helpful question in just about every panel.) "People keep saying the Internet wasn't designed to be secure, but that's just not the case."

Here follows a quick back-and-forth between her and Tony, over the relevant distinction between ARPANet's X.25 packet network and the Internet's IP-based system -- ARPANet was explicitly designed "securely," while IP was just thrown together, and it shows. This part involves them both talking loudly, as though arguing, but they converge very rapidly to agreement on the above distinction.

Survivability, she points out, is a security virtue. You can call it "availability" if you want, but the point is the same: since ARPANet, we've had big networks designed to stay up, no matter what. The Internet takes a licking and keeps on ticking. And that's a form of security. The network treats damage as damage and routes around it: the Internet has proven remarkably secure in its resistance to denial-of-service attacks against the network as a whole -- indeed, to denial-of-service against any non-twig node.
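
To see the routing-around in miniature, here's a toy example of my own (nothing the panelists showed; the topology is invented): knock out a core node and the mesh finds another path, but knock out the node a twig hangs from and the twig goes dark.

    from collections import deque

    # Toy topology: a meshy core (A, B, C, D) plus one "twig" hanging off D.
    net = {
        "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "D"},
        "D": {"B", "C", "twig"}, "twig": {"D"},
    }

    def reachable(src, dst, dead):
        """BFS that skips the failed node -- what rerouting amounts to here."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for peer in net[node] - seen:
                if peer != dead:
                    seen.add(peer)
                    queue.append(peer)
        return False

    print(reachable("A", "D", dead="B"))     # True: traffic shifts to A-C-D
    print(reachable("A", "twig", dead="D"))  # False: a twig has no second path

Which is the sense in which a twig node is about the only thing you can really deny service to.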

Dan Geer likes what he's hearing, so he gets up next to preach some end-to-end religion. It's a feature, not a bug, that leaf nodes can figure out their own policies. If me and my buddy go off and do our own thing, it's a good thing that nobody can stop us and that we can route around attempts to cut us apart. In fact -- and this is where the veins in Phil and Tony's necks start to bulge -- he decries the erosion of these end-to-end virtues by various functionality and security "improvements" in the infrastructure. NAT, firewalls, QoS -- these things may have security benefits for those who deploy them, but they undermine the security virtues of peer-to-peer, end-to-end networks.

Tony takes the bait. "Can you tell Best Buy that? Firewall routers are flying off the shelves."

Dan's got a snappy comeback, though: "It's the path of least resistance . . . and least knowledge."

Tony: "Do you want to expose everything in your home?" (He leaves it hanging here, with the "to every sociopath in your neighborhood" remaining implicit.)

Dan: "If you want real security, turn off the firewalls. That's what MIT does; every desktop machine is a router. That's what you do if you're serious." I would like to point out that MIT requires two semesters of physics from all undergraduates.

Tony: "This is a scaling problem. When the network becomes critical infrastructure, the name of the game is different. You don't let people take anything on the highway that they want to." Of course, the classic statement of end-to-end in the Internet context is that you can plug anything you want into the Net, which will burp happily and play nice with its new friend.

Phil finally manages to jump into the back-and-forth at this point: "Although the Net is designed for survivability from external threats--"

And whoomf! It's Joan again, clocking Phil from his blind spot, as she says, "No! From failures of all kinds!" Phil's a real brawler, though, and shrugs off the hit. He continues, "--security against threats other than routing disruptions wasn't built in."

Finally, Shlomit calls a halt to the fracas -- one of those odd brawls in which everyone is grinning giddily -- and takes a question from a polite but jet-lagged chap in from the UK.

A Nice Point from Jennifer Granick

One estimate of MyDoom's effects puts its total damage at $38 billion. Oh, really? Well, Hurricane Isabel did "only" $4 billion. These oft-quoted estimates of virus damage are, shall we say, perhaps overstated?

A Quick Trip Through the Wasteland of Criminal Law Theory

In ninety minutes, the "New Crimes" panel recapitulates two and a half centuries of development in the theory of criminal law. It's really quite stunning how standardized the moves -- the arguments and counter-arguments, the parries and double-binds -- have become.

Phil Reitinger, here played by Phil Reitinger, says that online criminal law should respond to the same concerns as regular criminal law, and he outlines a position that any law student who's just come out of the required course in criminal law will recognize. His "Ten Commandments of Online Criminal Law" could have been drafted by Jeremy Bentham. Bentham developed the first comprehensive theory of criminal law as utilitarian deterrence of harm. The right system of punishment is the one that costs least to deter the most harmful conduct, and punishments need to be chosen based on their overall pragmatic usefulness.

Phil's Commandment Five -- "Criminal sanctions should where necessary deter costly anti-social conduct." -- sounds an awful lot like Bentham's "The general object which all laws have, or ought to have, in common, is . . . to exclude mischief." Similarly, Phil's Commandment Three -- "When traditional crime presents a greater harm to society because it is committed online, that crime should entail a heavier punishment, where possible through neutral means such as measuring the actual damage done" -- has a close resemblance to Bentham's "When two offences come in competition, the punishment for the greater offence must be sufficient to induce a man to prefer the less."

Now, this is all well and good, but Dan Solove then undermines these simple utilitarian calculations, in exactly the way that two centuries of law and economics have undermined Bentham's calm confidence. It turns out that optimal deterrence is indeterminate: it doesn't spit out clear answers all the time, because you can often make good deterrence arguments for lower punishments. This is what Solove is getting at when he says that constructing identity theft as "theft" undermines the importance of building secure architecture. Dan sees creating vulnerability itself as a harm that needs to be redressed: perhaps the people at "fault" are as much the people using social security numbers as database primary keys as the crackers who steal those numbers.

This is the roadblock that the Benthamite project runs into in general. Will we have a better overall world with less identity theft and more good things if we punish identity thieves harshly, or if we punish them medium-well and put some of that freed-up energy and money into punishing people who are careless with sensitive data? Well, gosh, that totally depends on a bunch of empirical questions, and all sorts of complicated factors -- and we don't have anywhere near enough data to know the right answer. Instead, we guess, and sorta hope we got it right.

But, wait! If utilitarian theory won't save our bacon, perhaps another theory can do better. Beryl Howell tells three stories in her presentation, of which one in particular makes people near me nod thoughtfully. There's strict liability for the possession of child pr0n. It's like heroin. But with that kind of liability floating around, lawyers and forensic technicians don't want to do computer examinations if there's any chance that the prosecutors might try to go after them. In the digital world, it's easy for your computer to take you for a ride. People who are just trying to do some good are in danger of being swept up by laws designed to catch bad people.

Although she doesn't say any more than this, it's easy to recognize the theory of individual desert (aka "retributivism") in her example. Think Immanuel Kant. People have reason, which they can use for good or for ill. The criminal is the person who uses that reason to make bad choices; "bad" meaning ones that don't properly respect the lives, property, autonomy, or other legitimate interests of others. It is right to punish the bad person, because they deserve punishment. But people who make morally good choices shouldn't be punished -- even if punishing them would serve a utilitarian social goal. The accidental "possessors" of child pr0n are a good example: it may be good for society to punish them, because then everyone else will be more careful to keep track of the contents of their computers, which is probably good for overall security. But the morally innocent victims of a worm which sets up a pr0n-trading supernode don't deserve punishment, because there's no bad moral choice vis-a-vis the pr0n involved.

And if that's not enough, Alan Davidson winds up saying some things that sound an awful lot like the claims coming out of the "expressive condemnation" camp. These folks are comparatively recent entrants to the criminal-law-theory sweepstakes, although their general ideas aren't entirely new. Expressive condemnation is a Critical Legal Studies-ish approach, with a pragmatic, almost anti-theory slant. Criminal law is a tool used by a community to express its moral values: criminal law does, and perhaps should, punish those who demonstrate that they have deviant values. By condemning them, a community affirms its commitment to the values it cares about.

Alan is talking, among other things, about the dangers of overcriminalizing copyright infringement. CDT and EFF and similar folks have been saying this for a while: the RIAA is trying to make criminals out of sixty million Americans, but anything that turns a fifth of the population into criminals is mighty questionable. While one can phrase this argument in deterrence-speak ("If it's that popular, occasional prosecutions will deter no one.") or in retributivist-speak ("Noncommercial copying for personal use is not such a serious offense against individual autonomy that it deserves criminal sanction."), it really sounds most natural in the idiom of expressive condemnation: "If that many people are doing something, it isn't wrong."

The question-and-answer period then exposes how futile any attempt to apply these theories can become. Nimrod Kozlovski, for example, observes that it's not clear which way the ubiquity of a particular class of conduct cuts. He points this out as a tension between Phil's idea of punishing worse stuff worse and Alan's idea of decriminalizing really common stuff. But look at how either theory can justify either result. If zillions of people are doing it, that could mean that the harm is greater (more punishment!) or that the individuals taking part are too numerous to be deterred (less punishment!). Expressively, it could mean that common conduct isn't wrong (less punishment!) or that a dangerous wave of self-reinforcing anti-social norms needs to be stemmed (more punishment!). Nimrod's partner-in-crime Shawn Chen makes exactly this last argument.

You get the same messes with the questions about whether cyberspace crimes are "different." No one on the panel thinks that laws written to deal with offline activities are useless; they mostly agree that existing crimes like fraud and theft cover most of the bad stuff people will do online. But what is the right response to the fact that cyberspace makes these crimes easier? Well, the "right" answer depends not just on whether the crime is easier in absolute terms, but also on whether it's comparatively easier than other conduct (both legal and illegal), on whether people see it as being more wrongful, on whether the damage caused has been going up or down, and on who the people availing themselves of online targets of opportunity are, among many other factors. People have ideas, or guesses, about all of these factors, but little hard data.

But this is what you understand after a semester of criminal law from a fast-talking law professor with a cynical streak: there's never enough data to make the theory stand up on its own. It tends to give you whatever answer you want: you come to it with your preconceptions and background ideas, and then you make the assumptions you need to get the right answer. Not "right" as in objectively correct, just "right" as in matching the rest of your beliefs.

I realize this rant has gotten far afield from the conference. But, I have to say, this is what I was thinking during the "New Crimes" panel.

On the New Cops Panel

I moderated this one. The less said the better.

Okay, Just a Little About the New Cops Panel

The real fireworks come from Curtis Karnow's "Launch on Warning" paper. He doesn't actually quite advocate taking preemptive technical countermeasures against people attacking your site; he just runs through the legalities of doing so. Self-defense -- you can wallop the guy who's a-gonna wallop you -- may not cut it, because there may often be good alternatives to shooting back. (Here, the hindsight problem is especially severe: once the attack is over, you can almost always figure out what patch, applied in advance, would have prevented it. Applying that patch seems to be a better idea than striking back.)

But perhaps the old doctrine of "nuisance" might work. If someone else's cup runneth over, and it runneth over onto your land, at common law you could go onto their land to "abate the nuisance" and turn off the faucet. Built into this doctrine were ideas of necessity and proportionality, but it did authorize you to do something that would otherwise have been a trespass. There are only a zillion and a half interpretive questions -- some of which are brought up in the form of very pointed questions -- which makes it a fun hypothetical to play with. Being a prudent practicing lawyer, Karnow openly admits that going for the launch-on-warning approach is something he wouldn't tell a client to do. But his presentation certainly generates a lot of discussion, with him defending the idea against all comers, on the panel and in the audience. Just the thing for the traditionally sleepy post-lunch time slot.

My question, more generally, though, is this. Out on the Net, where zombie armies of evil do the dirty work of your sociopathic neighbors, in many cases the actual harm is done by a large number of agents, each of whom is only marginally culpable and not especially careless. Is it right to pick out a few of them and make them suffer a great deal, if by so doing we can stop the armies? It's a classic moral conundrum that comes in lots of familiar variants. Torture the terrorist to find the bomb? Condemn the shack to build the bridge? Shall one suffer for the good of all?

Online, I can see several scenarios in which it's plausible that a few might be asked to carry the burden of a very large number of equally responsible parties:

  • The total economic damage done by file-sharing -- if any -- divided by the number of file-sharers (or files shared) is small. Wee, even. Perhaps infinitesimal. The number of infringing files shared, multiplied by the statutory damages per infringing file shared, on the other hand, could be unimaginably huge. The RIAA's litigation strategy, in effect, is a reverse lottery. (See the back-of-the-envelope sketch after this list.)
  • Launch-on-warning is a bad deal for the first members of the Zombie Army to launch attacks. Instead of being participants in a DDoS flood, they get zapped, perhaps quite harshly.
  • Same thing if the Feds decide to seize a computer from the Zombie Army to figure out how the attack worked. Lots of people were sitting on material evidence of the crime; too bad for you that yours will be the hard drive introduced in court.
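
Here's the back-of-the-envelope version of that first bullet. Every input is an invented round number -- the $1 billion harm figure is pure arguendo; the $750 is the statutory-damages floor per infringed work:

    # All inputs are invented round numbers, to show the shape of the asymmetry.
    sharers            = 60_000_000     # the oft-cited sixty million Americans
    total_harm         = 1_000_000_000  # assume, arguendo, $1B in actual damage
    files_per_sharer   = 800            # hypothetical shared-folder size
    statutory_per_file = 750            # statutory-damages floor per work

    harm_per_sharer = total_harm / sharers
    exposure_per_sharer = files_per_sharer * statutory_per_file

    print(f"actual harm per sharer:  ${harm_per_sharer:,.2f}")   # $16.67
    print(f"statutory exposure each: ${exposure_per_sharer:,}")  # $600,000

Sixteen dollars and change of harm; six hundred thousand dollars of exposure. A reverse lottery indeed.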
Just something I'm pondering.

Fine, Fine, A Little More About the New Cops Panel

Nimrod, who's been asking questions all day but is finally on a panel, gives perhaps the most-referenced presentation of the conference, by talking about an accountability gap. People online just aren't properly accountable under many circumstances. Virus writers who can't be found? Unaccountable. Owners of members of the Zombie Army? Unaccountable. Manufacturers of insecure firewalls and operating systems? Unaccountable. Large networks of copyright-infringing file-sharers? Unaccountable. ISPs who read all the traffic through their pipes? Unaccountable. And governments who build databases out of datasets "given" to them by all sorts of private parties? Unaccountable.

The genius insight in this description, I think, is that it treats law enforcement symmetrically with the users and other institutional players in the online space. When things are working well, everyone is accountable to everyone else. When things aren't going so well, no one is accountable to anyone. The inability of users to feel that the police are respecting basic privacy and autonomy interests is the same kind of accountability failure as the inability of ISPs to trust that people who launch denial-of-service attacks will be brought to justice. Nimrod's take-away is that the policing institutions -- which increasingly depend on public/private partnerships and the sharing of responsibilities and data -- need to be handled in accountable ways, so that citizens can appropriately monitor and direct law enforcement.

The paper is short on specifics of institution design, law, and technical proposals. But that's okay. It's not a Big Idea That Will Fix Everything. I've been reading a lot about spam lately (no pun intended there), and that's a field that's dangerously awash in BITWFEs. Nimrod's paper clears away the underbrush and maps out the problem in a cogent way, which means we're much better prepared to sanity-check BITWFEs as they come along.

Privacy and Dissensus

The really big (but still friendly) argument comes at the last panel on Saturday, "New Tools." New Tools turns out to mean surveillance. Lots of surveillance. Pretty much no one disputes that surveillance helps authorities find people doing bad stuff. The question is how much surveillance -- and how do we keep the watchers from doing bad things themselves with the data they dig up?

Before Sonya Katyal shows up, we've been wondering about the name. She won the writing competition with a paper on private cyber-surveillance, and there's another Katyal out there who writes smart stuff on cyber-surveillance. They turn out to be brother and sister. Smart family.

She gives a great presentation about "piracy surveillance": the array of techniques that copyright holders use to look for pirates. (I'd rather call it "private copyright surveillance," but that's a small quibble.) All those lists of gangsta rap songs the RIAA swore up and down that computerless grandmothers were sharing online -- those were generated through piracy surveillance. All the stuff that ISPs and schools are doing to watch what flows on their networks, specifically to find copyrighted materials -- that's piracy surveillance. So, too, was the spider that found Professor Usher's a cappella song about the Swift gamma-ray satellite.
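
Sonya doesn't get into implementation details, and I don't know exactly how the RIAA's spiders work, but a minimal sketch of the crudest flavor -- matching the filenames a peer advertises against a watchlist -- shows where those computerless grandmothers might come from (the titles and files below are invented):

    # Toy piracy-surveillance spider: keyword-match the filenames a peer shares.
    WATCHLIST = {"shook ones", "juicy", "still d.r.e."}  # hypothetical titles

    def flag_shares(shared_filenames):
        """Return advertised files whose names contain a watched title."""
        return [name for name in shared_filenames
                if any(title in name.lower() for title in WATCHLIST)]

    shares = ["Juicy Fruit Pie Recipe.doc", "vacation.jpg", "Still D.R.E..mp3"]
    print(flag_shares(shares))
    # -> ['Juicy Fruit Pie Recipe.doc', 'Still D.R.E..mp3'] -- one hit is a recipe

No human, no judge, no cross-examination: just substring matches, at scale.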

Basically, this stuff is a bit disturbing, in several ways. First, it turns copyright into something "predatory, invasive, even panoptic." And second, it's a whole enforcement regime carried out extrajudicially, without the oversight that ought to attend mass surveillance. Her closing point -- that copyright regimes should not force a tradeoff between privacy and property -- is a neat observation. But then again, as Larry Lessig has been saying recently, when you say "property" these days, people's eyes get that red berserker mist, and nothing else -- not privacy, not freedom, not even basic common sense -- is safe.

And then, this point goes nowhere. The other panelists have government fish to fry, and Sonya's observations about private surveillance are entirely ignored. It's a shame, too, because the accountability concerns she raises are perfectly valid in the governmental context, and the tools that copyright holders are using to police the networks are, in some ways, more advanced than the tools that government is currently using. Also, in light of the "invisible handshake" of which Nimrod warns -- here, the governmental use of information gathered by private parties -- the forms of surveillance which she describes are indeed a whole other front when it comes to thinking about governmental snooping. What if the FBI subpoenaed the RIAA's complete index of things found on the P2P nets, with the claim that terrorist cells were passing information steganographically hidden in MP3 files on KaZaA?

Just a thought.

So, anyway, back in the main ring of the two-ring circus that the "New Tools" panel becomes, the first presentation on privacy as against governmental surveillance comes from Michael Froomkin. (The Froomk is a regular guest at Yale ISP conferences: he's been a speaker at the last three we've held. Total coincidence, he claims.) His presentation is about "what's at stake in the national ID card debate" and is basically a run-down of the things we're afraid They might do with a super-dossier on each of us, combined with a couple of recommended ways of resisting the dossierization.

Pace John Gilmore, Michael claims that the danger isn't the German guards barking "papers, please!" like the soldiers on the bus in The Great Escape. Nor is it a problem of totalitarian roundups: if the government is rounding up people by the zillion, we have bigger problems than a national ID database. No, the ID card is just an index into the big database where they keep our Permanent Records. But having that database at all causes problems in all sorts of ways. You get TIPS-like (or Stasi-like, for the more cynical) situations where people are encouraged to rat on each other; you get massive garbage-in/garbage-out problems, you make identity theft much worse, and your second-grade teacher really can mess you up for life by putting black marks in your Permanent Record for standing on your desk.

Basically, the standard list of centralization problems that the security guys have been warning us about for years now. Read a few back issues of CRYPTO-GRAM and you can reconstruct much of his presentation, indeed, much of the two that follow. Marc Rotenberg says some of the same things, together with a more pragmatic, trench-warfare in Washington, how-do-we-fight-this set of recommendations. And even Kim Taipale, who harshes on Marc's tactics as a form of Luddism, doesn't dispute too much the sort of thing that can go wrong if you have excessive data-gathering and poor data-handling systems and practices. No, where these guys light into each other is in their assessments of how to avoid these surveillance train wrecks.

Michael takes the technology as a given. Megabases are coming. The government will have one; private parties may well build their own. All we can do is give people reasonable protections to ask the courts for help when abuse is found. So on the one hand, people should get a property right in their personally-identifiable information (since I'm using acronyms all over the place, let's just go to PII, 'k?). Not that he thinks a property right is all that good, but asking for a liberty right is pushing it, politically. The point is that when property or liberty is implicated, due process rights attach. And due process rights are the way you're going to keep the government megabase from being used for all sorts of awful purposes by private actors. Similarly, putting restrictions on the uses that private parties can make of your United States of America PIN will keep it from being used as the index into every single database, at least.

Marc, on the other hand, sees things very much in terms of fighting data at the collection stage. Build infrastructure that doesn't use PII at all -- stamps and cash being classic examples. (Marc, I would guess, is not a fan of the reporting requirements on cash transactions over $10,000.) Similarly, no governmental gathering of PII without legal safeguards: who can see it, where it can be shared, what other data it can be linked to. These are the statements of a man who sees the technology as something less inevitable than Michael does. Not that we're going to turn the clock back to tin cans connected by string -- but the technology is still something that can be institutionally cabined. I get the feeling that Michael really sees the data as something more corrosive, something that just eats through any institutional barriers.

And Kim? Whoo boy, does he think Marc is going about it all wrong. His motto is "Don't smash frames; build labor unions," and he thinks Marc & co. are engaged in classic Luddite frame-smashing. Defunding TIA was a horrible mistake for privacy-lovers: at least with TIA, you could keep track of what was going on. Now, the pieces are being rebuilt in a dozen different agencies (including, believe-it-or-not, NASA) and will be much harder to track. A whack-a-mole assault on technology, whenever it pops up, just creates conditions under which the data collection and correlation learns to hide itself, rather than learning to play accountably. He also thinks that the institutional barriers -- those rules about who can share data with whom -- that Marc loves are mostly terrible ideas. He cites the example, which sounds to me too bad to be true, of the FBI being prohibited from using 411.com when looking for the street addresses of people (while being allowed to place a phone call to 411).

Everything I'm saying in here is a gross oversimplification of someone's argument. Nowhere is that truer than when talking about Kim's presentation. He has 15 minutes; they let him run on to about 20 or 25. In that time, he tries to run through 73 slides. He's apologetic and shameless at once, and he'd never have gotten away with it if he weren't a charismatic motormouth.

Fast-forwarding through a whole bunch of interesting points -- the man really was talking faster than I could take notes -- I'd say that Kim trusts the technology, if it's designed right. That means databases that don't themselves contain the PII, but link to external databases that do. You know, PII escrow. That means using crypto and access controls and audit trails to watch who accesses data. You know, watching the watchers. And that means giving people proper anonymity and pseudonymity at a code level. You know, proper public-key infrastructure with governmental safeguards. The technology will save us if we just build it right.
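
The slides went by too fast to copy down, so here's my own minimal sketch of the PII-escrow idea as I understood it -- all the names and structure are mine, not Kim's. The working database holds only opaque tokens, and nobody dereferences a token without leaving an attributable entry in the audit trail:

    import secrets
    from datetime import datetime, timezone

    class PIIEscrow:
        """Toy escrow: the working data sees tokens; the PII sits behind a log."""
        def __init__(self):
            self._vault = {}      # token -> PII, held apart from working data
            self._audit_log = []  # append-only: who looked up what, and why

        def tokenize(self, pii):
            token = secrets.token_hex(8)
            self._vault[token] = pii
            return token          # all the working database ever stores

        def dereference(self, token, accessor, reason):
            # Watching the watchers: no lookup without an attributable entry.
            self._audit_log.append(
                (datetime.now(timezone.utc).isoformat(), accessor, token, reason))
            return self._vault[token]

    escrow = PIIEscrow()
    record = {"suspect": escrow.tokenize("Jane Doe, 123 Elm St"), "score": 0.7}
    # Analysis runs on tokens; pulling the name back out leaves a trail.
    print(escrow.dereference(record["suspect"], "agent_42", "warrant #1234"))

The design choice doing the work: the analytic side never holds the PII itself, so the audit trail sits on the only path to it.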

(Now, this is an attractive vision. And sensitivity in the use of sensitive data is an important design virtue. But "the technology will save us" is what someone with a BITWFE says. And especially after Dan Geer's presentation and all of the discussions of vulnerabilities and the dangers of monoculture, I'm on edge when the discussion turns to fixing everything by fixing the tech. Especially if it's the government doing the fixing. The government tends to resolve the ancient "is open source or closed source software more secure?" debate by saying "They're both more secure -- more secure than our software.")

There's one great exchange near the end that I think pretty well encapsulates their different takes on the governmental privacy problem. The context: Helen Nissenbaum has just asked about the situation in which the cop stopping you on the street can just punch your name into the database, effectively end-running any Constitutional restrictions on how much the cop can ask you personally.

Michael takes this ball and says -- I think to be provocative -- that this situation is better with a national ID than without one. Because if it's some other database key -- face-recognition, say -- then the problem is diffuse, but with a national ID, then maybe there can be an effective rule on when the cop is allowed to ask you for the card or run a search against an ID. This is the kind of thing an admin law professor says: good institution design is your friend.

Marc says that Michael is being fatalistic about technological development, and that kind of fatalism leads to a bleak world. Speaking environmentally, it's like saying, "Oh, look, there's junk pouring into the lake. Guess we can't use the lake any more." Standing up to technology in small, appropriate, places can lead to much better results. This is the kind of thing a practicing lawyer says: good laws are your friend.

And Kim? Kim says that if we build technologies that only turn over necessary information (right when he says this, I think of the LOAF), things will turn out okay. This is the kind of thing a technology consultant says: good technology is your friend.

So here's the other question I don't get the chance to ask: Is the appropriate response to piracy surveillance technological, institutional, or legal?

Good Idea That'll Never Ever Happen

In his Saturday evening keynote, John Podesta suggests that the FBI headquarters -- the J. Edgar Hoover Building -- be renamed.

Fun with the Caselaw

Sunday's first panel turns out mostly to be a discussion of one case. One case that much of the audience had never heard about before. But it's a very cool case, and it leads to a great set of conversations.

We hear the story of the case from Joe Anastasi, who spins a good yarn. It's the story of two Russian crackers, Vassilli and Aleksey. Vassilli and Aleksey were pretty good at what they did: breaking into e-commerce sites and copying databases of credit card numbers. Having stolen the digits, they'd run the scam two ways. First, they'd try to use the cards to order stuff, and second, they'd blackmail their corporate victims into paying hush money. Both angles quickly attracted FBI attention.

Now, the feds are also pretty good at what they do: tricking perps into screwing up, and then slapping on the cuffs. On the other hand, the Russian authorities in charge of such things, everyone at the conference agrees, aren't all that good at what they do. Either they're running around imprisoning people for no good reason, or they're tipping criminals off to investigations to help them cover their tracks. So the FBI decided, you want a job done right, you do it yourself. They set up a sting in Seattle, creating a fake computer security company. Said fake company was very impressed by the deeds of Vassilli and Aleksey, and decided that it would like to offer these fine upstanding young men jobs. Vassilli and Aleksey quickly yielded to the flattery, the promises of legitimate fame, and the smell of cash -- they agreed to fly to Seattle to meet with the Invita Corporation. (Invita? How obvious can you get? Not obvious enough to tip off Russian crackers, apparently.)

There was only one real reason not just to slap the cuffs on the A/V Gang as soon as they stepped off the plane: gathering evidence. The FBI needed to link these two gentlemen definitively to the doings of "subssta" -- the handle under which the various companies had been e-burgled. So the whole crew went back to the Invita "offices," where Aleksey and Vassilli were invited to give the assembled folks a little show of their skills. And so, these two fellows did something that no self-respecting cracker should ever do: go for a cracking run on someone else's computer. Especially someone you've just met. The FBI was merrily logging their keystrokes and keeping a complete record of their activities. Add that to the various statements made by the pair to the Invita "executives" and the feds had tons of great Law and Order-worthy evidence.

But one thing the FBI did, in particular, opened up lots of fun questions. You see, during his last, ill-fated cracking run, Vassilli logged in to his home server back in Chelyabinsk. The server where he'd stashed the information on 500,000 credit card holders swiped from CD Universe. After a quick consultation with the assistant U.S. attorney on the case, the FBI agent downloaded the files from Chelyabinsk to make a local copy, which he then "sealed" by turning it into a tarfile.
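
The account doesn't say what "sealing" amounted to beyond making the tarfile, but a minimal sketch of the obvious version -- with hypothetical paths, and the hash step as my own addition rather than anything Joe described -- would bundle the download and fingerprint it, so the copy can later be shown unaltered:

    import hashlib
    import tarfile

    def seal(evidence_dir, out_path):
        """Bundle a directory into a tarball and return its SHA-256 digest."""
        with tarfile.open(out_path, "w:gz") as tar:
            tar.add(evidence_dir, arcname="evidence")
        sha = hashlib.sha256()
        with open(out_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha.update(chunk)
        return sha.hexdigest()  # digest goes in the case notes with the tarball

    print(seal("chelyabinsk_download", "evidence.tar.gz"))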

So here's the thing. This was a search of a computer in Russia. Conducted by Americans. From America. Without the permission of the computer owner. Or of the Russian government. So guess who can't go to Russia now? That's right, the FBI agent who carried out the search, and the AUSA who authorized it -- 'cuz they've been indicted in Russia for hacking. Enter Nicolai Seitz, who's the other winner of the conference writing competition. And his paper is on, guess what? That's right, "Transborder Search." Great timing, Nicolai.

He gives what is basically a straight-up international law analysis of the problem at stake in the Gorshkov-Ivanov case (for such indeed are Vassilli and Aleksey's last names -- you also get some other interesting links if you Google for "Gorshkov Ivanov"): what happens when law enforcement agents in one country want to search (remotely) computer systems located in another country. The easy case is when the ISP is willing to disclose the data: AOL.fr, for example, will probably be pretty good at cooperating with the gendarmes, even when the data itself is being stored in Virginia. As against an uncooperative ISP -- or one whom the first country's cops are afraid to tip off, since they can't very easily punish foreigners who violate confidentiality orders -- things are tougher. The semi-easy answer posits consent of the foreign government to the search. But that's not always present or possible -- indeed, there's some suggestion that one of the reasons the FBI didn't simply ring up someone in Chelyabinsk or Moscow was out of fear that Vassilli's ISP would get tipped off that it really ought to delete certain files. And there's also a not really all that easy traditional answer called "letters rogatory," which are basically a mixture of subpoena and diplomacy. Perhaps unsurprisingly, they seem uselessly slow for computer searches. (You know, that whole "letter" part.)

So this leaves us with the case of self-help electronic searches, like the one from three paragraphs ago. Since there are no explicit international conventions, say the international lawyers, we need to examine customary international practice. But there is basically no practice, either, since there's only one known case of an international electronic self-help search (the case of our good friends Vassilli and Aleksey, natch). And under the rules of international law, such as they are, when there is no accepted common practice for a given behavior, it's illegal. Thus, these kinds of transborder searches are presumptively illegal. Ho-kay, guess that's another one for the diplomats to hammer out the next time they meet up for sherry and treaty-signing.

Nicolai adds one provocative detail, though. Since the U.S. has engaged in a search of just this sort, it may be reasonable to say that as a general matter, the U.S. does consent to them. And since the government consents, under the semi-easy answer from above, self-help searches by other countries' law enforcement of servers in the U.S. are presumptively okay. Kind of ironic, no? Well, Paul Ohm from the DoJ shoots that one down pretty quickly. The first thing out of his mouth is, "My name is Paul Ohm, and my boss is John Ashcroft." And the second thing he says is, "I don't think the U.S. consents to such searches." (The rest of Paul's presentation is a very entertaining and informative discussion of places in which the legal system -- especially judges -- is still not really up to speed in its understanding of computer technologies.)

Now, the moderator of this panel is Yale's own Dan Kahan. He's not particularly a technology guy (though he is more comfortable using computer simulations, email, and courseware than most of his colleagues). No, he's just a straight-up law professor, which means a very smart guy who knows how to ask the tough questions. (There's also a rumor around the law school that Dan is the best Texas Hold-'Em player in Connecticut, but that one remains unconfirmed.) And he zeroes in on an aspect of the Gorshkov-Ivanov case that really is troubling when you think about it. As he puts it, "Is it the problem that the Americans didn't get the consent of the Russians, or that they invaded the privacy of Russian citizens?"

Take a moment. Think about it. Pick one. Which fact bothers you more (if at all)? That the Russian government wasn't even informed of the search until after the fact, or that this evidence from a warrantless search was usable as evidence in a U.S. court? Well, then, what if these guys were American? In that case, had it been the Libyan government searching their computers, we might be looking at an international diplomatic incident. But if it had been the U.S. government, the evidence would never have gotten anywhere near a courtroom, thanks to a little thing called the Fourth Amendment. Either way, Dan isn't a fan of Nicolai's beloved transborder searches. If you think that most governments are gangs of thugs -- and Joe has basically said that the reason we didn't go to the Russian authorities is that they were a little too thuggish for our taste -- then why should the government be the one to consent to an invasion of your privacy?

A fascinating discussion ensues.

A Nice Point from Lee Tien

The EFF's Lee Tien gives a great talk on the dangers of architectural regulation, specifically how bad architecture (both physical and electronic) can undermine our experience of privacy. Along the way, he makes one very elegant point I haven't seen put so well before.

Katz v. United States was the Supreme Court case that established a right to privacy that attaches to your conversations on pay phones. There, the FBI had attached a bug to the outside of the phone booth; in finding that the defendant had a "reasonable expectation of privacy," the Court relied on the fact that the phone booth had a door, which he shut behind him.

That is, his legally-protected privacy right didn't come from the actual soundproofing of the booth -- the FBI was able to circumvent his cone, er, booth of silence fairly easily. Nor did he have this reasonable expectation all the time -- if he'd just been walking down the street blabbing away, his words would have been fair game. No: he depended on the symbolism of shutting the door, on the sense, accurate or not, that he was entering into a private space.

Tien's point: what if the government required that phone booths not have doors? If you take away a technology that allows people to experience privacy, they may very well lose that form of privacy completely, both as a practical matter and in the eyes of the law.

Jonathan Zittrain

Is an extraordinarily funny speaker.

Oxblood Ruffin

Is a very funny name for a gentle and thoughtful man. He gives a fairly factual talk on the Cult of the Dead Cow and hacktivism, with one very interesting point about HESSLA, the Hacktivismo Enhanced-Source Software License Agreement. HESSLA is an open source license that is specifically not a "free software" license; it incorporates specific prohibitions on certain uses of the software (and of any software derived from it). Under section 10.1 of HESSLA, you can't use any HESSLA-licensed software "to violate or infringe any human rights or to deprive any person of human rights, including, without limitation, rights of privacy, security, collective action, expression, political freedom, due process of law, and individual conscience."

Also, Oxblood, despite his name, and despite his membership in the Cult of the Dead Cow, is a vegetarian.

Unified Theories of Everything

Jack Balkin, director of the ISP and head honcho of this here conference, is the last speaker of the last panel. His topic is hacktivism -- loosely, the use of electronic communications technologies for political purposes. As a scholar of constitutional law, he's inclined to look at hacktivism through the lens of the First Amendment. The problem that it raises is a very old one: the conflict between speech and property. "Your right to swing your arms ends where my nose begins" is one way of putting the tension; Eldred is another.

There are three phases, Jack claims, that accompany the rise of a new form of communication: survival, inquiry, and sophistication; he illustrates these phases by discussing the history of the legal treatment of labor unions. Are they free speech or are they criminal conspiracy destroying expected profits? There was a time in the 19th century when the answer was "criminal conspiracy"; back then, the big issue for labor unions was simply surviving. Then, mostly in the 1930s and 1940s, a series of key legal cases reconceived of union organizing as a form of protected speech, making it legal for organizers to inquire of employees about unionizing. Now, we've specialized: labor law is its own field, with its own distinctive doctrines and rules, and we can think about labor unions without reasking basic free speech questions all the time.

Hacktivism and cyberprotest, Jack says, will go through the same three phases. People are fighting over the speech-property boundary all over the place, from website defacement to the DMCA. Currently, people look at Internet issues and they see property all over the place (cf. above, to Sonya Katyal and Michael Froomkin's points), but maybe at some point society will wrap its head around the various claims being made and we'll all intuitively see the speech interests involved.

Now, actually, Jack doesn't use the words "survival," "inquiry," and "sophistication." They just popped into my head during his presentation, because his tripartite division reminded me of a similar set of three phases propounded by none other than Douglas Adams:

The history of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication, otherwise known as the How, Why and Where phases. For instance, the first phase is characterized by the question How can we eat? the second by the question Why do we eat? and the third by the question Where shall we have lunch?
The answer to that last question turns out to be Mamoun's, where various ISP fellows, affiliates, hangers-on, and fellow-travelers retire after the last panel. Truly heroic quantities of food are consumed; I will not soon forget the sight of Jonathan Zittrain sitting across from Jack Balkin, matching him pita for pita, kebab for kebab. JZ is about half Jack's size, is the key fact of note here.

Over lunch, Yale (College) alum JZ talks about his days eating at the Doodle (technically, the Yankee Doodle Coffee and Sandwich Shop), a counter-only eatery near the law school that serves up cheap and incredibly fatty fare. Burgers come with a pat of butter on the bun; donuts come grilled. JZ recalls one five-A.M. conversation with the proprietor over the nature of the Pig in a Blanket, which consists of a hot dog, stuffed with cheese and wrapped in bacon, grilled, and then served in a buttered bun. The question was whether the Pig is less kosher or less healthy; the conclusion they reached was that normally, it's less healthy, but that during Passover, the bun pushes it over the top into being less kosher.

See. I told you he's a funny guy.

Conclusion

And that was my experience of our spring conference, start to finish. I don't have any big conclusions; I'm still trying to digest everything said there.

So to speak.

UPDATED 11:00 AM April 13: Michael Froomkin goes by "Michael," not "Mike," so I changed the references to him appropriately.

UPDATED 8:25 PM April 21: Kim Taipale in his presentation actually said that the FBI couldn't use 411.com, not that they couldn't call 411. Changed above.

 

Re: 'Digital Cops in a Virtual Environment' Conference Report (Score: 0)
by Anonymous on Tuesday, April 13 @ 04:00:41 EDT
Fascinating. Nothing to comment on, as I'm still digesting the whole thing, but so far this is my favorite LawMeme article. Thanks.




Re: 'Digital Cops in a Virtual Environment' Conference Report (Score: 1)
by greglas on Tuesday, April 13 @ 20:45:45 EDT
Thanks for this!

>Over lunch, Yale (College) alum JZ talks about his days eating at the Doodle

The D00dl3 rul2!




Re: 'Digital Cops in a Virtual Environment' Conference Report (Score: 1)
by JSGranick on Thursday, April 15 @ 12:30:52 EDT
Thank you for this. I had to leave early for a friend's wedding and missed the rest of the conference. It's a much appreciated treat when a smart, interesting person reports the smart, interesting things other people said.



