The Electronic Frontier Foundation has reported that the Senate Commerce Committee has approved a version of SESTA, the Stop Enabling Sex Traffickers Act, S. 1693. Elliot Harmon’s article calls it “still an awful bill”. Harmon goes into the feasibility of using automated filters to detect trafficking-related material, which very large companies like Google and Facebook might be half-way OK with. We saw this debate about filtering in the COPA trial more than a decade ago (I attended one day of that trial in Philadelphia in October 2006). No doubt, automated filtering would cause a lot of false positives and implicit self-censoring.
Apparently the bill contains or uses a “manager’s amendment” (text) floated by John Thune (R-SD) which tries to deal with the degree of knowledge that a platform may have about its users. The theory seems to be that it is easy to recognize the intentions of customers of Backpage but not of a shared hosting service. Sophia Cope criticizes the amendment here.
Elliot Harmon also writes that the Internet Association (which represents large companies like Google) has given some lukewarm support to modified versions of SESTA, which would not affect large companies as much as small startups that want user-generated content. It’s important to note that SESTA (and a related House bill) could make it harder for victims of trafficking to discuss what happened to them online, an unintended consequence, perhaps. Some observers have said that the law regarding sex trafficking should be patterned after child pornography law (where the law seems to work without too much interference with users) and that the law is effectively already “there” now.
But “Law.com” has published a historical summary by Cindy Cohn and Jamie Williams that traces the history of Section 230 all the way back to a possibly libelous item on an AOL message board regarding Oklahoma City (the Zeran case). Then others wanted to punish Craigslist and other sites for allowing users to post ads that were discriminatory in a Civil Rights sense. The law needs to recognize the difference between a publisher and a distributor (and a simple utility, like a telecom company, which can migrate us toward the network neutrality debate). Facebook and Twitter are arguably a lot more involved with what their users do than are shared hosting sites like BlueHost and Verio, an observation that seems to get overlooked. It’s interesting that some observers think this puts Wikipedia at particular risk.
I don’t have much of an issue with my blogs, because the volume of comments I get is small these days (thanks to the diversion by Facebook) compared to 8 years ago. I should add that when I accept a guest post, Section 230 would not protect me, since I really have become the “publisher”; so if a guest post is controversial, I tend to fact-check some of the content (especially accusations of crimes) myself online.
I’d also say that a recent story by Mitch Stoltz about Sci-Hub, relating to the Open Access debate which, for example, Jack Andraka has stimulated in some of his Ted Talks, gets to be relevant (in the sense that the DMCA Safe Harbor is the analogy to Section 230 in the copyright law world). A federal court in Virginia recently ruled against Sci-Hub (Alexandra Elbakyan) after a complaint by a particular science journal, the American Chemical Society. But the ruling also put intermediaries (ranging from hosting companies to search engines) at unpredictable risk if they support “open access” sites like this. The case also runs some risk of conflating copyright issues with trademark, but that’s a bit peripheral to discussing 230 itself.
Again, I think we have a major break in our society over the value of personalized free speech (outside of the control of organizational hierarchy and aggregate partisan or identity politics). It’s particularly discouraging when you look at reports of surveys at campuses where students seem to believe that safe spaces are more important than open debate, and that some things should not be discussed openly (especially involving “oppressed” minorities) because debating them implies that the issues are not settled and that societal protections could be taken away again by future political changes (Trump doesn’t help). We’ve noted here a lot of the other issues besides defamation, privacy and copyright; they include bullying, stalking, hate speech, terror recruiting, fake news, and even manipulation of elections (an issue we already had an earlier run-in about in the mid 2000s over campaign finance reform, well before Russia and Trump and even Facebook). So it’s understandable that many people, maybe used to tribal values and culture, could view user-generated content as a gratuitous luxury for some (the more privileged, like me) that diverts attention from remedying inequality and protecting minorities. Many people think everyone should operate only by participating in organized social structures run top-down, but that throws us back, at least slouching toward authoritarianism (Trump is the obvious example). That is how societies like Russia, China, and, say, Singapore see things (let alone the world of radical Islam, or the hyper-communism of North Korea).
The permissive climate for user-generated content that has evolved, almost by default, since the late 1990s, seems to presume individuals can speak and act on their own, without too much concern about their group affiliations. That idea from Ayn Rand doesn’t seem to represent how real people express themselves in social media, so a lot of us (like me) seem to be preaching to our own choirs, and not “caring” personally about people outside of our own “cognitive” circles. We have our own kind of tribalism.
(Posted: Wednesday, Nov. 15, 2017 at 2 PM EST)