Techdirt: MySpace Blamed For Stupid Kids
Techdirt has a brief piece discussing a recent flare-up of the perpetual "anonymous inappropriate message on an online site" problem. In this case, threatening messages directed at a San Antonio high school were posted on the popular site MySpace. The messages said that two students were planning to show up at the school with guns. The result was that 2,600 of the school's 3,000 students skipped school the next day. Apparently some are contemplating suing MySpace for its role in this process, and at least one school official has said that "the owners of MySpace-dot-com should be held accountable [for not monitoring the messages]."
Many of these issues involve message boards on which defamatory or critical messages are posted anonymously; the free speech issues involved often lead courts to find in favor of the message board operators. In fact, one recent case of this kind was settled (link courtesy of p2pnet), with over $100,000 going to a New Jersey high school student who had been punished by his school for operating such a message board. There are also many free speech cases in which the identity of an anonymous poster was sought unsuccessfully; see the Anonymity/Pseudonymity section of the EFF web page on recent litigation.
It is a little easier to see the harm in the San Antonio case than in the New Jersey one. After all, 86% of the school's students stayed home, and many of them almost certainly attended school in fear for some time afterward, seriously harming their ability to participate fully in the educational process. To put it another way, there isn't much normative support for a student's right to post false threatening messages of this sort, apart from concern about the potential collateral damage to other speech.
The policy question is a tricky one. It seems like a lose-lose-lose scenario. There are three basic ways to respond:
- Require MySpace to filter messages.
- Require MySpace to be able to identify posters so that they can be held accountable.
- Do nothing, and allow such messages to be posted and read.
None of these is satisfactory. The first is difficult to accomplish. Automated tools can filter profanity reasonably well, but I seriously doubt that any tool can reliably detect this type of harmful message. That means human filtering would be needed, which would cripple the service, not just by chilling free speech but also by introducing significant delays. The second isn't much better: it avoids the delays, but it has a more obvious deterrent effect on free speech. The third is great from a technology-libertarian perspective, but it doesn't solve the problem; people will continue to post damaging messages. Yes, in time, threatening messages such as the San Antonio one may come to be widely ignored, just as we have learned to put up with spam. But we cannot be certain of that, and society will incur a high cost during the interim.
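To make the filtering point concrete, here is a minimal sketch of the kind of keyword-based filter that handles profanity passably well. The word list and sample messages are hypothetical, chosen only for illustration. The key observation: a threat can be composed entirely of innocuous words, so no word list catches it.

```python
# Hypothetical blocklist of the sort a profanity filter might use.
BLOCKLIST = {"damn", "hell"}

def keyword_filter(message: str) -> bool:
    """Return True if the message contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

# A profane message is caught by simple word matching:
print(keyword_filter("That damn test was hard!"))  # True

# But a threatening message made of ordinary words slips through,
# because the harm lies in context and intent, not in any single word:
print(keyword_filter("Two students will show up at school with guns"))  # False
```

Detecting the second message requires understanding what the sentence means, which is exactly the part that automated tools can't be trusted to do; hence the fallback to human review, with the delays and chilling effects described above.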
I wonder how many of these decisions will be made by sheer historical contingency. Have we established a precedent for the protection of anonymity (through the defamation cases) that would carry over to a case such as the San Antonio one? Is this a good thing? What would have happened if a case like this one had come up before the cases involving defamation and other speech with stronger normative support?