It is important to pause for a moment and take stock of another idea that could threaten freedom of speech and gatekeeper-free self-publication on the Internet as we know it now: “implicit content”.
This concept refers to a situation where an online speaker publishes content while able to reasonably anticipate that some other party, whom the speaker knows to be combative, un-intact, or immature (especially a legal minor), will in turn act harmfully toward others, possibly toward specific targets, or toward himself. The concept treats the identity of the speaker and the presumed motive for the speech as part of the content itself, almost as if the idea were borrowed from object-oriented programming.
The most common and relatively well-known example so far occurs when one person deliberately encourages others on social media (especially Facebook, Twitter, or Instagram) to target and harass some particular user of that platform. Twitter especially has sometimes suspended or permanently closed accounts for this behavior, and specifically spells it out as a TOS violation. Another variation comes from a recent case in which a young woman encouraged her depressed boyfriend, through smartphone text messages, to commit suicide, and was convicted of manslaughter; so this behavior can be criminal. The concept complicates the normal interpretation that free speech limitations stop where there is direct incitement of unlawful activity (like rioting).
I would be concerned, however, that even some speech normally seen as policy debate could fall into this category when conducted by “amateurs”, because of the asymmetry of the Internet: search engines can magnify anyone’s content and make it viral or famous. This can happen with content that offends members of certain groups, especially religious (radical Islam), racial, or sometimes ideological (as possibly with extreme forms of Communism). In extreme cases, this sort of situation could pose a major (asymmetric) national security risk.
A variation of this problem occurred with me when I worked as a substitute teacher in 2005 (see the pingback hyperlink here on July 19, 2016). There are a couple of important features of this problem. One is that it is really more likely to occur with conventional websites that have ample text content and are indexed by search engines in the normal way (even allowing for all the algorithms) than with social media accounts, whose internal content is usually not indexed much and which can be partially hidden by privacy settings or “whitelisting”. The same would have been true pre-social media with, for example, discussion forums (like those on AOL in the late 1990s). Another feature is that the problem may be more likely with a site that can be viewed free, without login or subscription. Such content might then be viewed as legally problematic precisely because it wasn’t paid for (ironically) but had been posted only for “provocateur” purposes, invoking possible “mens rea”.
I could suggest another example of what might seem to others to be “gratuitous publication”. I have often posted video and photos of demonstrations, from BLM marches to Trump protests, as “news”. Suppose I posted a segment from an “alt-right” march held by a specific group that I won’t name. Such a march may happen in Washington DC next weekend (following up Charlottesville). I could say that this is simply citizen journalism, reporting what I see. Others would say I’m giving specific hate groups a platform, which is where TOS problems could arise. Of course, I could also show counterdemonstrations from the other “side”. I don’t accept the idea that, among groups that use coercion or force, one is somehow more acceptable to present than another (Trump’s problem, again). But you can see the slippery slope.
When harm comes to others after “provocative” content is posted, the hosting sites or services would normally be protected by Section 230 in the US (I presume), although it sounds like litigation has been attempted in some cases. Furthermore, we know that very recently, large Internet service platforms have cut off at least one (maybe more) website associated with extreme hate speech or neo-Nazism. Service platforms, despite their understandable insistence that they need the downstream liability protections of Section 230, have become more proactive in trying to eliminate users who publish what they consider objectionable (often illegal) material. This includes, of course, child pornography and probably sex trafficking and terrorist group recruiting, but it could also include causing other parties to be harassed, and could gradually expand to subsume novel national security threats. It now seems to include “hate speech” as well, which I personally think ought to be construed as “combativeness” or lawlessness. That brings us to another point: some extreme groups would consider amateur policy discussions that take a neutral tone and try to avoid taking sides (that is, avoiding naming some groups as enemies instead of others, as with Trump’s problems after Charlottesville) to be implicitly “hateful” by default when the speaker doesn’t put his own skin in the game. This (as Cloudflare’s CEO pointed out) could put Internet companies in a serious ethical bind.
Timothy B. Lee recently published, in Ars Technica, an update on the “Backpage” bills in Congress, which would weaken Section 230 protections. Lee does seem to imply that the providers most at risk remain limited to those whose main content is advertisements rather than discussions; so far he hasn’t addressed whether shared hosting providers could be put at risk. (I asked him that on Twitter.) But some observers believe that the bills could lead states to require that sites with user logons provide adult ID verification. We all know that this was litigated before with the Child Online Protection Act (COPA), which was finally ruled unconstitutional in early 2007. I was a party to that litigation under Electronic Frontier Foundation sponsorship. Ironically, the judge mentioned “implicit content” on the day that I sat in on the arguments (in Philadelphia).
I want to add a comment here that could probably belong on either of my two previous posts. Yes, our whole civilization has become very dependent on technology, and, yes, a determined enemy could give us a very rude shock. Born in 1943, I have lived through years that have generally been stable, surviving the two most serious crises (the Vietnam military draft in the 1960s and then HIV in the 1980s) that came at me from the outside world. A sudden shock like that in NBC’s “Revolution” is possible. But I could imagine being born around 1765, living as a white landowner in the South, having experienced the American Revolution and then the Constitution as a teen, and only gradually coming to grips, before I died (if I was genetically lucky enough to live to 100 without modern medicine), with the idea that my world would be expropriated from me because of an underlying common moral evil. Yet I would have had no grasp of the idea of a technological future that could itself be put at risk because, for all its benefits in raising living standards, it still seemed to leave a lot of people behind.
(Posted: Saturday, September 9, 2017 at 9 PM EDT)