Friday was the Peer to Peer at the Crossroads (PDF) symposium at Seton Hall Law School in Newark. The conference has attracted some controversy over the fact that all of the speakers were lawyers. Ed Felten and others were concerned that it would be distorting to have a law and technology conference without any technologists on the program.
I promised to do two things when I went down: blog the conference, and do what I could to make sure that it didn't go too far off the rails, technologically speaking. I'm happy to report that on the latter, not much intervention seemed necessary: whenever someone veered off from technical reality, one of the lawyers was able to set them straight. And as for the former, well, some highly blog-worthy stuff took place . . .
Empirical Questions
It's been said that there are two kinds of empirical questions in the law: those that can be answered and those worth answering. In terms of the landscape of p2p systems out there, the symposium generated a fair number of questions of the second variety.
Here's one, courtesy of conference organizer David Opderbeck: who's driving the innovation in p2p systems? Is it the users, who are demanding services with better scaling, more files, less file corruption, and better privacy? Or is it the developers, who are pumping out all sorts of applications and making it trivial to become a file-sharer?
The question has serious implications for the strategy of trader-haters. If you think that user demand is the critical factor, then almost no amount of suing developers will work, because as long as there are even a few pumping out p2p apps, we'll have wildly popular services. Whereas if you think that p2p is a Field of Dreams phenomenon (if you build it, they will come), then squeezing on the developer side will have the same kind of effect as getting tobacco manufacturers to stop marketing to children.
Lior Strahilevitz asked another great one. He's been conducting empirical studies on the behavior of p2p users: how many files do they share, how many do they download, why do they choose one service over another? In this case, he went on download.com and looked at the statistics on downloads of file-sharing software. He then compared that data with the user ratings and noticed a remarkable correlation: the more frequently a given app was downloaded, the less its users liked it (a rough sketch of that kind of comparison appears after the list below). The kicker was that the chief source of discontent in the user comments, and the one that varied with popularity, was the prevalence of spoofed files. That is: the more popular the network, the more infested it was with spoofs. His question was why. Some possible answers from the discussion:
- The RIAA hasn't got its act together enough to try to infest the smaller networks.
- The RIAA simply hits the biggest networks as hard as it can, on a principle of most-bang-for-buck.
- The RIAA waits for networks to cross a certain threshold of popularity, and then slams them, as a signal to VCs who might be thinking of investing in rising p2p stars.
- The users of small networks are more likely to be elite, technically sophisticated file sharers, and the RIAA is afraid of driving them onto darknets. For now, it'll try to drive the non-elite users of the big networks out of the file-sharing game entirely.
- The small networks are more resistant to spoofing, because they have greater norms of cooperation (a variation on Lior's charismatic code paper).
- The elite, technically-sophisticated sharers are better at purging spoofs from their shared folders; they're also the ones who are more likely to be able/willing to switch networks at the drop of a hat.
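To make the download.com comparison concrete, here's a minimal sketch in Python. The numbers are invented purely for illustration (they are not Lior's data), and the correlation function is just a plain Pearson coefficient; the point is only the shape of the analysis: line up download counts against average user ratings and see whether the two move in opposite directions.

```python
# Hypothetical numbers, invented purely for illustration -- not Lior's data.
# Each entry: (file-sharing app, downloads in millions, average user rating out of 5)
apps = [
    ("BigNetwork",    220.0, 2.9),
    ("MediumNetwork",  45.0, 3.4),
    ("SmallNetwork",    6.5, 4.1),
    ("TinyNetwork",     0.8, 4.5),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

downloads = [d for _, d, _ in apps]
ratings = [r for _, _, r in apps]

# A strongly negative number is the pattern Lior reported:
# the more downloaded the app, the lower its users rate it.
print("correlation(downloads, ratings) =", round(pearson(downloads, ratings), 3))
```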
It's an interesting puzzle.
Doctrinal Questions
Steven Tepp from the Copyright Office spoke first; UCLA's Justin Hughes spoke last. They set up a long-range dialogue on the state and direction of contributory copyright infringement doctrine.
The foundational moment for the modern doctrine is, of course, Sony v. Universal. There's no mention of contributory infringement in the 1976 Copyright Act, so the Sony Court imported the concept (roughly) from the Patent Act. Steven hit this transformation from one side, saying that copyright had a long and glorious caselaw of contributory infringement doctrine, and that there was no need to borrow from the patent world at all, especially if the borrowing had to include a limitation like the "significant noninfringing uses" test. Justin criticized the decision from the other side: by enacting the 1976 Copyright Act without a contributory infringement doctrine, Congress might have meant to eliminate it. After all, the Patent Act, passed in its modern form in 1952, had such language, so Congress certainly knew how to write in contributory infringement.
The other doctrinal difficulty created by Sony is the relationship between the familiar "noninfringing uses" test and the typical phrasing of accomplice/vicarious/contributory liability in the law more generally. If I'm going to be held liable for your wrongful actions, I typically need either to have a relationship with you controlling enough that I should have stopped you, or to have helped you out while knowing what you were going to do. The right-and-ability I-shoulda-stopped-you idea comes up in employer-employee cases: if a Wal-Mart clerk clocks you while on the job, you betcha that Wal-Mart will have to pony up. But it's not so applicable to most of the peer-to-peer cases: once the software has been downloaded, it's hard, if not impossible, for the developer to control its subsequent uses.
And when you look closely at that second strand -- that I knew what you were going to do and helped out -- there's another problem. In the technology case, I generally don't know what you're going to do with my software. Unless you personally have said to me, "I intend to copy copyrighted sound recordings" before I give you the software, I don't have knowledge of your intended actions at the time of my legally-relevant actions. By the time I actually know about all the awful stuff you're doing, it's too late. This bit isn't unique to p2p or technology; it's an application of a general sense of what it means to say that someone has legal "knowledge" of someone else's intentions. Even if the last ten users of my software have used it to infringe copyright, that doesn't mean I know what the next one will do. I may be able to guess, but I don't know.
The Sony test, then, is a kind of substitute for knowledge. If you flunk the Sony test, it's as though you had knowledge of how your technology would be used, because it's not as though there was any other plausible use for it. Fail to have significant noninfringing uses, and we'll impute knowledge to you, even if you didn't have it.
(To be really technical, it's probably closer to an imputed intent, but almost all the time, if I intend something, I'll then be considered to have knowledge of it. But we're already in the la-la-land of legal terminology. Because of the uses my technology does or doesn't have, I'm thought to "intend" that it be used in a certain way, even if I don't intend it at all. Based on that "intent," I'm then considered to have "knowledge" that it will be used that way, even though it's possible to intend something without knowing that it'll happen, or even thinking it particularly likely. Of course, that "knowledge" is itself a construct, one that's more stringent than knowledge in the everyday sense. Isn't law fun?)
Justin finished his presentation by claiming that you can see the various p2p courts -- especially the Napster and Aimster ones -- straining to apply a straightforward "knowledge" test to the p2p makers, rather than going through the Sony analysis. Napster's actions smelled awful, goes the reasoning; we should nail them on that, rather than sorting through the empirical studies of how people use the service. He thinks that the Sony test will indeed collapse -- principally because, pretty soon, if you apply it literally, every p2p service will be perfectly legal -- and that courts are going to start going back to making liability turn on whether the developer knew that its software would be used to infringe.
Now, you can debate to what extent this change will matter to results. On the one hand, I don't think the Sony test really means what it says. You see this most obviously in the Aimster case, where Judge Posner just picks up the ball and walks a few feet with it, in the hopes that the zebras won't notice. What used to be a threshold test -- are the non-infringing uses "commercially significant"? -- becomes a balancing test -- do they outweigh the infringing uses? But I think that's just an open acknowledgement of what other judges do: Napster was totally capable of non-infringing uses, just not of "non-infringing uses" in a legal sense. (The danger with Posner's reformulation is that by codifying existing practice, it licenses the next round of judges to go much further, and find the non-infringing uses always to be outweighed, even when obviously significant.) And on the other hand, "knowledge" doesn't mean what it says, either. So we might just get the same exact decisions, based on how much sympathy the judge has for the technology developers, under a different doctrinal route.
But Justin dropped in one last point to show that this change could very well matter. "Non-infringing uses" is at least nominally an objective test. You look at the technology, and you ask how it could be used. Find enough non-infringing uses, and you're golden. "Knowledge," however, is inescapably subjective: you need to take evidence on what the technology-maker knew. That means that cases decided on a knowledge test can't be decided on summary judgment. The plaintiff will allege knowledge, the defendant will deny it, and the case will have to go to trial. This shift alters the calculus of chill and the equation of infringement, by making the lawsuits slower and more expensive.
Tim Wu's Elegant Reformulation
Tim Wu has an article forthcoming in the Michigan Law Review called Copyright's Communication Policy (PDF). His talk at Seton Hall was a compressed version of that article's thesis. I wouldn't call the basic insight stunningly new, but it's a helpful clarification of issues that are often debated in unhelpful ways.
We're all familiar, by now, with the argument that expansive copyright is bad because it's destructive to innovation and allows incumbent copyright industries to prevent the birth of new competitors. Content companies tied to old distribution models are, goes this argument, strangling new technologies in their crib. We're also familiar, by now, with the argument that changes in technology are destroying old, profitable, and socially-useful businesses, without creating anything stable, profitable, or beneficial in their place. In this strain of argument, technological Boston Stranglers roam free, wrecking the enormous investments that incumbents have made and ruining the incentives for them to put the needed money into building the services and networks of the future.
Tim's insight, to do it the injustice of a sound-bite summarization, is that these are not really arguments that are rooted in copyright policy. These are communications policy arguments; it just so happens that the body of law doing the communications-policy work here is copyright law. Where in the past we'd have argued about how far to turn the "antitrust exemption for ILECs" knob, or which "spectrum auction" buttons to push, now we're arguing about where to set the "copyright" slider for optimal communications policy. That means debates about copyright are being phrased in terms of a traditional political axis in communications law: whether to favor vertically-integrated (possibly monopolist) incumbents who will invest heavily because they can capture the profits from their investments, or to favor evolutionary competition with open standards in which the pressure for investment is driven by the need to stay ahead of one's competitors.
The punch line: right now, our official direction in communications policy is moving towards the latter model. The big 1996 act embraced these principles, and the FCC is talking them up big time. Copyright, to the extent that it is currently pushing towards the former model, is pushing us to a communications model that flourished in decades past but is now out of favor.
Discuss amongst yourselves.
I'm From New Jersey Too
For those of you who are not from New Jersey, our diners are great cultural icons -- it's one of the things that make it a great place to live. They're open twenty-four hours a day, with an enormous menu, none of it great, all of it edible.
-- Marc Friedman
Definitional Questions
Bill Heller, a practicing attorney from McCarter & English, talked about the way that the use of p2p technologies in business raises issues other than just copyright. His general theme was that p2p networks have fuzzier boundaries than traditional ones, and therefore raise more difficult questions about when information enters or leaves an organization. But a lot can turn on such questions, legally speaking:
- If information that's part of a patentable invention flows to outsiders on a p2p network, that may count as disclosure for purposes of triggering the one-year "statutory bar" in patent law, after which the invention passes into the public domain if no application has been filed.
- Got a trade secret that flows through a p2p network? Maybe it's not secret for legal purposes any more.
- Shipping files around through p2p nets? Maybe one of your competitors will use the opportunity to sabotage the data.
- When consultants and temporary employees collaborate with regulars through a distributed network, have they just become joint authors for copyright purposes?
- Got disgruntled employees looking for another job? Their resumes may be more attractive if they "IM their way to better-paying jobs," sending confidential company information to their future employers using p2p technologies.
- Are you satisfying your HIPAA and other privacy obligations if your network is potentially open to unauthorized p2p traffic?
- Okay: one copyright infringement question. Are you allowed to monitor your office networks' p2p components to look for illegal file sharing? Are you legally required to do so?
- Have you thought about the technical security holes that p2p applications can punch in your firewalls?
- If you botch one of the above, and it costs you, and you didn't disclose the possibility in your SEC filings, are you hosed under the Sarbanes-Oxley corporate disclosure requirements?
A couple of minutes into this list, I started getting nervous. I hope you're getting nervous, too, reading it. The way I phrased it to myself at the time was: I have no idea what he means by "p2p technology." Tim Wu, sitting next to me, perhaps seeing my discomfort, leaned over and said, "With his assumptions, email is a p2p technology." (I said in reply that everything would be, including the entire Internet. Tim, taking my outburst of technical imprecision in stride, said with a kind smile, "Peer-to-peer is a layer seven term.")
There are a lot of different ways you can spin the significance of Tim's point. Let me try a few. One of them is that if we think only about file-sharing networks, we're missing a lot of important technologies that may raise legal questions very similar to theirs. In this sense, "the law of p2p" is very much a law of the horse (PDF). Another take might be that "peer-to-peer," taken in its broad sense, sweeps in so much that it's a misleading term to use. It's acquired such overtones of one specific set of applications that we forget about others to which it also applies, sort of, but which display very different properties. Or, we might say that those other apps display such similar properties to "core" p2p networks that using the term is a helpful reminder of the commonalities.
Or, we might take these different spins as a confirmation of a point from another Tim Wu paper: "Application-Centered Internet Analysis" argued that it's best to talk about the nature and effects of individual applications rather than reifying the legal consequences of some monolithic entity called the "Internet." Reifying "peer-to-peer" may be just as much of a mistake. Which strikes me as a pretty smart observation on Tim's part.
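Tim's layer-seven remark is worth unpacking, because it's really a claim about where "peer-ness" lives. Nothing in the wires or the routing makes a machine a peer; a peer is just an application that plays both the client role and the server role. The sketch below is my own toy illustration of that structural point, not any real file-sharing protocol -- the port number and the one-line request format are invented for the example.

```python
# Toy illustration of the "layer seven" point: a peer is just an application
# that acts as both server and client. Nothing below the application layer
# knows or cares. This is not any real file-sharing protocol; the port and
# the one-line request format are invented for illustration.
import socket
import threading
import time

PORT = 9999  # arbitrary

def serve(shared_files):
    """The 'server half' of a peer: answer one request for a file we hold."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    wanted = conn.recv(1024).decode().strip()
    conn.sendall(shared_files.get(wanted, b"NOT FOUND"))
    conn.close()
    srv.close()

def fetch(name):
    """The 'client half' of a peer: request a file from another peer."""
    for _ in range(50):                      # retry until the server is listening
        try:
            cli = socket.create_connection(("127.0.0.1", PORT))
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    cli.sendall(name.encode() + b"\n")
    data = cli.recv(65536)
    cli.close()
    return data

if __name__ == "__main__":
    # One process plays both roles -- which is the whole point.
    t = threading.Thread(target=serve, args=({"song.ogg": b"fake audio bytes"},))
    t.start()
    print(fetch("song.ogg"))  # b'fake audio bytes'
    t.join()
```

On that definition, Tim's quip about email follows naturally, and so does my overreaching reply about the entire Internet: the label attaches to how an application behaves, not to anything in the network underneath it.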
A History Lesson
Tim Casey is currently a partner at Fried Frank ("Fried, Frank, and Beans," as they called it during our law school parody show last spring), but from 1995 to 2000, he was Chief Technology Counsel at MCI. (He noted in an aside that he left in 2000 because he didn't get along with "a guy named Bernie Ebbers." Good timing, and good people judgment.) Before that, he was at SGI, where he worked under Ed McCracken.
The significance of these two prior jobs is that Tim turned out to have been very much involved with the drafting of a lil' piece of legislation called the DMCA. Ed McCracken had had a hand in drafting the infamous NII White Paper, which recommended strict liability for ISPs whose servers and wires were used for copyright infringement. So by 1995, when he jumped to MCI, Tim's eyes had been opened to the danger posed to ISPs should something like the White Paper become law.
1995, however, was a bad time to be a telecom lawyer worried about anything other than what eventually became the Telecommunications Act of 1996. MCI gave him half of a lobbyist's time, and he started making the rounds, trying to amend the legislation being drafted to implement the White Paper. That is, the legislation that would eventually become the DMCA. He wrote up language that, after being rewritten by committee staff, eventually became the 512(c) notice-and-takedown safe harbor. The point was to provide a lightweight procedure that would convert a three-way fight between copyright holder, ISP, and user into a direct battle between holder and user, with both sides on record to the tune of sworn statements, allowing the ISP to drop out of a fight not its own. In any event, the legislation went nowhere in that first session. In the 1997-98 session of Congress, it was reintroduced, reamended, and rekilled.
At this point, the copyright industries took the tactically brilliant step of going over Congress's head -- they got WIPO to write DMCA-style copyright protection into an international treaty. In fact, they showed up at the last possible WIPO session for sailing on that particular boat; taking full advantage of the element of surprise, they wrote strict liability for ISPs into the treaty draft. The ISPs, lacking the political savvy and connections of the copyright folks, then found out that only those present at a WIPO meeting are entitled to be heard. Tim flew to Finland, met with the guy drafting the treaty, who couldn't alter the agreed-upon language, but could insert a comment to flag the issue. That was enough: they got NGO status, attended the next meeting, and so on, eventually managing to get a version more sympathetic to the ISPs' perspective.
Coming back to the U.S., then, the stage was set for the DMCA proper, and the third time was the charm. The ISPs, lacking the lobbying clout of the copyright folks, decided they had no hope of blocking the DMCA, but could at least fight a rear-guard action against as much overreaching as they could stop. As they saw it, they needed a reasonable, cost-effective way to deal with complaints from the content industries. But, given the scenarios everyone around the table envisioned, compromise seemed possible. After all, the big problem was going to be Joe Pirate, who gets an ISP webhosting account and sets up a warez server.
The ISPs didn't want to be in the business of monitoring Joe's communications. The content industry didn't then and still doesn't understand why such monitoring -- nowadays they call it "filtering" -- is technically infeasible, if not outright impossible. But the content industries demanded some way to go after Joe Pirate after repeated 512(c) notice and takedowns, if Joe kept on hopping from ISP to ISP. Thus was born the 512(h) subpoena. Compared with 512(c), the 512(h) subpoena was meant to be a heavyweight process, one that was slower and with more due process checks. It was meant to hit the webhosting infringer, rather than requiring Sherlock Holmes work on transitory transmissions over the ISP's wires.
The problem was that nobody around the table expected the rise of peer-to-peer, in which the standard "infringement" scenario now involved comparatively low-volume traffic sent by Joes who were, in effect, their own content hosts. The 512(h) subpoena, Tim claimed, was simply not meant to cover that scenario -- a position that seems, for now, to be gaining ground with the various federal courts asked to consider it. The unfortunate thing, Tim claimed, is that no one ever asked the drafters of the subpoena what it was intended to do; instead, they consulted the unhelpful official legislative history. The staffer who wrote that legislative history, Mitch Glazier, now works for the RIAA.
Remember this one, folks: legislative history is written by the victors.
Tim also had a few other choice salvos for the content industries:
- The main reason Sony and other major technology decisions are such a mess is that the copyright industries don't understand the technology and present confused descriptions of it in their briefs.
- The Grokster court appealed to Congress to find a solution. Say what you want about Congressionally-imposed technologies: courts are in an even worse position to competently mandate particular technologies.
- The copyright industry, having failed to implement technology to protect its copyrights -- think of the ease of ripping a CD -- is now demanding that the law force others to implement those technologies.
- You want filters? The DMCA has an on-point discussion of the process by which filtering technology should be required. It's in section 512(i). The RIAA is suspiciously silent about 512(i), perhaps because 512(i) requires that filters be approved through "broad consensus" in an "open" and "fair" process. The DMCA reflects an anti-filtering compromise.
- Audible Magic? Trivial to get around.
A Summary for Ed
No worries. Good conference, good presentations. And some pretty tech-savvy lawyers.