SESTA clears Senate committee; Congress seems serious about stopping trafficking, even if it requires sacrifices from Internet users, and the bill still seems superfluous

The Electronic Frontier Foundation has reported that the Senate Commerce Committee has approved a version of SESTA, the Stop Enabling Sex Traffickers Act, S. 1693.  Elliot Harmon’s article calls it “still an awful bill”.  Harmon goes into the feasibility of using automated filters to detect trafficking-related material, which very large companies like Google and Facebook might be halfway OK with. We saw this debate about filtering during the COPA trial more than a decade ago (I attended one day of that trial in Philadelphia in October 2006). No doubt, automated filtering would cause a lot of false positives and implicit self-censoring.
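To illustrate why such filtering over-blocks, here is a minimal sketch (my own illustration, not anything EFF or the platforms have described) of a naive keyword filter; the term list and the sample sentences are hypothetical, and the point is only that the same words appear in ads, in news coverage, and in victims’ own accounts:

    import re

    # Hypothetical term list a naive filter might use; any real system would be
    # far more elaborate, but the false-positive problem stays the same.
    SUSPECT_TERMS = ["escort", "massage", "discreet"]

    def naive_trafficking_filter(text: str) -> bool:
        """Flag a post if it contains any suspect term (case-insensitive)."""
        lowered = text.lower()
        return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
                   for term in SUSPECT_TERMS)

    # A survivor describing her own experience trips the same filter as an ad:
    print(naive_trafficking_filter("He advertised me as an escort when I was 16"))   # True
    print(naive_trafficking_filter("Senate committee debates trafficking bill"))     # False

The false positive in the first example is exactly the kind of “implicit self-censoring” risk mentioned above: platforms would have an incentive to remove anything the filter flags rather than review it.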

Apparently the bill contains or uses a “manager’s amendment”  (text) floated by John Thune (R-SD) which tries to deal with the degree of knowledge that a platform may have about its users.  The theory seems to be that it is easy to recognize the intentions of customers of Backpage but not of a shared hosting service. Sophia Cope criticizes the amendment here.

Elliot Harmon also writes that the Internet Association (which represents large companies like Google) has given some lukewarm support to modified versions of SESTA, which would not affect large companies as much as small startups that want user-generated content.  It’s important to note that SESTA (and a related House bill) could make it harder for victims of trafficking to discuss what happened to them online, an unintended consequence, perhaps.  Some observers have said that the law regarding sex trafficking should be patterned after child pornography law (which seems to work without too much interference with users) and that the law is effectively already “there” now.

But “Law.com” has published a historical summary by Cindy Cohn and Jamie Williams that traces the history of Section 230 all the way back to a possibly libelous item on an AOL message board regarding Oklahoma City (the Zeran case).  Then others wanted to punish Craigslist and other sites for allowing users to post ads that were discriminatory in a civil rights sense. The law needs to recognize the difference between a publisher and a distributor (and a simple utility, like a telecom company, which can migrate us toward the network neutrality debate).  Facebook and Twitter are arguably a lot more involved with what their users do than are shared hosting sites like BlueHost and Verio, an observation that seems to get overlooked.  It’s interesting that some observers think this puts Wikipedia at particular risk.

I don’t have much of an issue with my blogs, because the volume of comments I get these days is small (thanks to the diversion by Facebook) compared to eight years ago.  I should add that Section 230 would not protect me when I accept a guest post, since I really have become the “publisher”; so if a guest post is controversial, I tend to fact-check some of the content (especially accusations of crimes) myself online.

I’d also say that a recent story by Mitch Stoltz about Sci-Hub, relating to the Open Access debate which, for example, Jack Andraka has stimulated in some of his TED Talks, gets to be relevant (in the sense that DMCA Safe Harbor is the analogy to Section 230 in the copyright law world). A federal court in Virginia ruled against Sci-Hub (Alexandra Elbakyan) recently after a complaint by the American Chemical Society, a major scientific publisher.  But the ruling also put intermediaries (ranging from hosting companies to search engines) at unpredictable risk if they support “open access” sites like this. The case also runs some risk of conflating copyright issues with trademark, but that’s a bit peripheral to discussing 230 itself.

Again, I think we have a major break in our society over the value of personalized free speech (outside the control of organizational hierarchy and aggregate partisan or identity politics).  It’s particularly discouraging when you look at reports of campus surveys where students seem to believe that safe spaces are more important than open debate, and that some things should not be discussed openly (especially involving “oppressed” minorities) because debating them implies that the issues are not settled and that societal protections could be taken away again by future political changes (Trump doesn’t help). We’ve noted here a lot of the other issues besides defamation, privacy and copyright; they include bullying, stalking, hate speech, terror recruiting, fake news, and even manipulation of elections (an issue we already had a run-in about in the mid-2000s over campaign finance reform, well before Russia and Trump and even Facebook). So it’s understandable that many people, maybe used to tribal values and culture, could view user-generated content as a gratuitous luxury for some (the more privileged, like me) that diverts attention from remedying inequality and protecting minorities.  Many people think everyone should operate only by participating in organized social structures run top-down, but that throws us back, at least slouching toward authoritarianism (Trump is the obvious example). That is how societies like Russia, China, and, say, Singapore see things (let alone the world of radical Islam, or the hyper-communism of North Korea).

The permissive climate for user-generated content that has evolved, almost by default, since the late 1990s seems to presume individuals can speak and act on their own, without too much concern about their group affiliations.  That idea from Ayn Rand doesn’t seem to represent how real people express themselves in social media, so a lot of us (like me) seem to be preaching to our own choirs, and not “caring” personally about people outside our own “cognitive” circles.  We have our own kind of tribalism.

(Posted: Wednesday, Nov. 15, 2017 at 2 PM EST)

 

Cato Institute covers many First Amendment topics in day-long forum; what about downstream liability concerns?

Last Thursday, September 28, 2017, I attended a day-long event at the Cato Institute in Washington DC, “The Future of the First Amendment”.  You could also call it “the future of free speech” in the U.S.

Cato has a link for the event and has now uploaded all the presentations, which you can view here. The videos include embeds of the slides and of audience members asking questions, professionally filmed, better than I can do on my own at an event.

The “table of contents” in the link shows the topics covered as well as identifying the credentials of the many invited speakers; indeed the presentation was segmented and topical and tended to focus on many narrow, separate issues.  I’ll come back at the end of this piece to what I would like to have seen covered more explicitly.

The earliest morning session focused particularly on partisan political speech related to elections (the “Citizens United” problem) and on commercial speech, including whether companies or commercial entities are separate persons.  One concept that stuck out was that listeners or receivers of messages are entitled to First Amendment protections. I would wonder how that concept would play out given more recent reports of Russian attempts not only to influence the 2016 elections but also to spur social instability and resentment in American society, based particularly on the idea of relative collective deprivation (which is not the same idea as “systematic oppression”).  There are understandable concerns over wanting to regulate paid political ads (especially if supplied by foreign agents), but we should remember that back around 2005 there were concerns, based on a particular court interpretation of the McCain-Feingold campaign finance reform act, that even free blogs (written without compensation and without ads) could be construed as a “political contribution” if they expressed political viewpoints.  The discussion of commercial speech recognized that advertisements sometimes express points of view going beyond immediate ad content, and that valuable speech, such as well-made studio Hollywood movies about major historical events, made in good faith, can express political viewpoints while being funded through the open securities markets available to publicly traded companies.  But one auxiliary idea not explicitly mentioned was something I encounter: that speech available to the public should pay its own way.

The second segment dealt with “religious liberty in the post-Obama era”.  Here we have the dubious idea that an employee of a business open to the public is engaging in religiously connected “speech” when she sells certain products or services to a person of a different faith, or to a person who engages in certain intimate personal relationships as now recognized by law (especially same-sex marriage).  One speaker in particular (Robin Fretwell Wilson) suggested that states should carve out laws that require public accommodations to serve all customers but allow individual employees (even in government agencies, such as with Kim Davis in Kentucky) to turn over the duties to someone else.  While I would support such a solution, it can mean an unequal workplace (such as the case when some employees observe Sabbaths explicitly and others cover for them without getting any compensation in return, which I have done; an extreme extension of this idea is the “conscientious objector” problem with the past military draft).  It’s also true that sometimes “religious speech” can serve as a mask for personal moral ideas that are not really founded in recognized interpretations of scripture, for example, political aversion to working with inherited wealth.

The keynote speaker for the second-floor luncheon (well catered with deli sandwiches) was Eugene Volokh, of UCLA Law School and the Volokh Conspiracy blog.  Volokh gave a spirited presentation on how the Internet has accelerated the application of libel law (well before Donald Trump noticed), because the Internet allows speakers with no deep pockets and little formal publishing-law experience to be heard, and also because the “online reputation” damage from defamation, as propagated by search engines, is permanent, as opposed to newspaper defamation in the past.  Volokh made the interesting point that sometimes cases are settled with court injunctions that could prohibit a blogger from mentioning a particular person online again anywhere.  (That could matter to bloggers who review films or music performances, for example.) At 41:07 on this tape, I ask a question about Backpage and Section 230. Volokh’s answer was thorough and more reassuring than it might have been, as he indicated that a “knowingly” standard could be included in service provider downstream liability exposures. (He also explained the distinctions among utility transmission, distribution, and publication.) He also got into the question of whether fake news could be libel.  Usually, because it largely involves politicians, in the U.S. it does not. But it might when applied to celebrities and companies.

The afternoon session featured a presentation by Emily Ekins on the 2017 Free Speech National Survey. A number of startling conclusions were presented, showing partisan divides on what is viewed as hate speech, and also a lack of understanding that most hate speech is constitutionally protected. There is a tendency among many voters, and especially many college students, to view words as weapons, and to view speakers as morally accountable for the actions of the recipients of their speech, even when there is no direct incitement to rioting or lawless action. Many respondents showed a shocking dislike of journalists as “watchers” who don’t have their own skin in the game.  A majority seemed to take the pseudo-populist position that a heckler’s veto on speakers was morally OK, and a shockingly substantial minority thought that government should heavily sponsor speech to protect special groups.  A disturbing minority accepted the idea that hate speech should sometimes be met with political violence.

The final session talked about censorship and surveillance.  The speakers included Flemming Rose (“The Tyranny of Silence” and the cartoon controversy).  Rose mentioned, in an answer to an audience question, that in some countries speakers have been arrested for “qualification of terrorism” in public statements.  All the speakers noted a desire from the EU to force tech companies to export their rules to the US, especially the supposed “right to be forgotten”.  Danielle Keats Citron from the University of Maryland Law School mentioned the Section 230 controversy in an answer, as she talked about distinguishing “good Samaritans” from “bad Samaritans”.

At the reception afterward, a speaker from Cloudflare noted that Hollywood has been lobbying Congress heavily to force service providers to prescreen content, motivated by the Backpage controversy. Hollywood, he said, has been pressuring agents and Wilshire Blvd law firms to join in the effort. He mentioned the DMCA Safe Harbor, which embodies a similar downstream liability concept but applies to copyright, not to libel or privacy.  The tone of his remarks suggested that this goes way beyond piracy; Hollywood does not like dealing with the low-cost competition of very independent film, which is much less capital intensive and is taking up a much larger audience share than in the past.  Even Mark Cuban admitted that to me once in an email.  The Cloudflare speaker also said that the law, unchanged, would today handle sex trafficking the way it handles child pornography, with a “knowingly” standard, which seems adequate already.

All of this brings me back to what might not have been hit hard enough in the conference: the idea, as indicated in the title of my third book, of “a privilege of being listened to” (my 2005 essay), which sounds a little scary to consider and seems to lie beneath authoritarian control of speech.

I insist on managing my own speech, much of which is posted as “free content”.  I get pestered that I don’t sell more physical copies of my books than I do and don’t try to be “popular” or manipulative in order to sell. (That helps other people have jobs, I guess.)  I get told that my own skin should be in the game.  I get sent into further deployments of the subjunctive mood (“could’a, should’a, would’a”), like in high school French class: I should have children, or special-needs dependents, or be in the trenches myself before I get heard from.  (This could affect how I handle the estate that I inherited, which can get to be a Milo-Dangerous topic.)  Content should pay its own way (which, ironically, might encourage porn).  Individual speakers weaken advocacy groups by competing with them and not participating.  Before I get heard from myself, I should join somebody else’s cause against “systematic oppression” and not be above walking and shouting in their demonstrations. I should run fundraisers for other people on my webpage. I should support other publications’ fundraisers that claim (on both the right and left) to be my voice, as if I were incompetent to speak for myself, or as if that capacity will be taken away from me by force.  Even in the world of writers, I get confronted with the idea that “real writers” get hired to portray other people’s narratives rather than their own. (Okay, I might really have had a chance once to “ghost-write”, so to speak, another “don’t ask don’t tell” soldier’s story.)

One of the most serious underreported controversies is indeed the idea that speakers should be held responsible for what their readers might do, particularly because “you” are the speaker and not someone else.  This is related to the notion of “implicit content” (Sept. 10). This concept was behind my own experience in October 2005 when working as a substitute teacher (see the July 19, 2016 pingback hyperlink).  That certainly comports with the idea that Section 230 should not exist, and that people should not speak out on their own until they have a lot of accountability to a peer group (family or not).  This is far from what the First Amendment says, but it seems to be what a lot of people have been brought up to believe in their own home and community environments. It goes along with ideas of personal right-sizing, fitting into the group, and a certain truce on social justice.  In the past two or three decades (compared to when I was in high school and college), there has been a weakened presentation of the First Amendment (and the Bill of Rights in general) in the way it is taught in high schools and to undergraduates.  I could even say, based on my own substitute teaching experience from 2004-2007, that even public school staff (including administration) are poorly informed on the actual law today, so you would not expect students to be getting the proper learning on these matters.

Individuals have natural rights, just as individuals;  but people don’t have to belong to oppressed groups or claim “relative deprivation” to claim their natural rights.

(Posted: Tuesday, October 3, 2017 at 12 noon)

“Implicit content” may become the next big Internet law controversy; more on Backpage and Section 230

It is important to pause for a moment and take stock of another possible idea that can threaten freedom of speech and self-publication on the Internet without gatekeepers as we know it now, and that would be “implicit content”.

This concept refers to a situation where an online speaker publishes content when he can reasonably anticipate that some other party, whom the speaker knows to be combative, un-intact, or immature (especially a legal minor), will in turn act harmfully toward others, possibly toward specific targets, or toward the self. The concept views the identity of the speaker and the presumed motive for the speech as part of the content, almost as if borrowed from object-oriented programming.

The most common example that is relatively well known so far occurs when one person deliberately encourages others using social media (especially Facebook, Twitter or Instagram) to target and harass some particular user of that platform.  Twitter especially has sometimes suspended or permanently closed accounts for this behavior, and specifically spells this out as a TOS violation. Another variation comes from a recent case where a young woman encouraged a depressed boyfriend to commit suicide through text messages from her smartphone and was convicted of manslaughter, so this behavior can be criminal.  The concept complicates the normal interpretation of free-speech limitation as stopping where there is direct incitement of unlawful activity (like rioting).

I would be concerned, however, that even some speech normally seen as policy debate could fall under this category when conducted by “amateurs”, because of the asymmetry of the Internet and the way search engines can magnify anyone’s content and make it viral or famous.  This can happen with content that offends members of certain groups, especially religious (radical Islam), racial, or sometimes ideological (as possibly with extreme forms of Communism).  In extreme cases, this sort of situation could pose a major (asymmetric) national security risk.

A variation of this problem occurred with me when I worked as a substitute teacher in 2005 (see pingback hyperlink here on July 19, 2016).  There are a couple of important features of this problem.  One is that it is really more likely to occur with conventional websites with ample text content and indexed by search engines in a normal way (even allowing for all the algorithms) than with social media accounts, whose internal content is usually not indexed much and which can be partially hidden by privacy settings or “whitelisting”.  That would have been true pre-social media with, for example, discussion forums (like those on AOL in the late 1990s). Another feature is that it may be more likely with a site that is viewed free, without login or subscription. One problem is that such content might be viewed as legally problematic if it wasn’t paid for (ironically) but had been posted only for “provocateur” purposes, invoking possible “mens rea”.

I could suggest another example, of what might seem to others to be “gratuitous publication”.  I have often posted video and photos of demonstrations, from BLM marches to Trump protests, as “news”.  Suppose I posted a segment from an “alt-right” march, from a specific group that I won’t name.  Such a march may happen in Washington DC next weekend (following up Charlottesville).  I could say that it is simply citizen journalism, reporting what I see.  Others would say I’m giving specific hate groups a platform, which is where TOS problems could arise. Of course I could show counterdemonstrations from the other “side”. I don’t recognize the idea that, among groups that use coercion or force, one is somehow more acceptable to present than another (Trump’s problem, again).  But you can see the slippery slope.

When harm comes to others after “provocative” content is posted, the hosting sites or services would normally be protected by Section 230 in the US (I presume).  However, it sounds like there have been some cases where litigation has been attempted.  Furthermore, we know that very recently, large Internet service platforms have cut off at least one (maybe more) website associated with extreme hate speech or neo-Nazism. Service platforms, despite their understandable insistence that they need the downstream liability protections of Section 230, have become more proactive in trying to eliminate users publishing what they consider objectionable (often illegal) material.  This includes, of course, child pornography and probably sex trafficking, and terrorist group recruiting, but it also could include causing other parties to be harassed, and could gradually expand to subsume novel national security threats. It now seems to include “hate speech” as well, which I personally think ought to be construed as “combativeness” or lawlessness.  But that brings us to another point:  some extreme groups would consider amateur policy discussions that take a neutral tone and try to avoid taking sides (that is, avoiding naming some groups as enemies instead of others, as with Trump’s problems after Charlottesville) as implicitly “hateful” by default when the speaker doesn’t put his own skin in the game.  This (as Cloudflare’s CEO pointed out) could put Internet companies in a serious ethical bind.

Timothy B. Lee recently published in Ars Technica an update on the “Backpage” bills in Congress, which would weaken Section 230 protections. Lee does seem to imply that the providers most at risk remain limited to those whose main content is advertisements, rather than discussions; so far he hasn’t addressed whether shared hosting providers could be put at risk.  (I asked him that on Twitter.)  But some observers believe that the bills could lead states to require that sites with user logons provide adult ID verification.  We all know that this was litigated before with the Child Online Protection Act (COPA), which was finally ruled unconstitutional in early 2007.  I was a party to that litigation under Electronic Frontier Foundation sponsorship. Ironically, the judge mentioned “implicit content” the day that I sat in on the arguments (in Philadelphia).

I wanted to add a comment here that probably could belong on either of my two previous posts.  That is, yes, our whole civilization has become very dependent on technology, and, yes, a determined enemy could give us a very rude shock.  Born in 1943, I have lived through years that have generally been stable, surviving the two most serious crises (the Vietnam military draft in the 1960s and then HIV in the 1980s) that came from the outside world.  A sudden shock like that in NBC’s “Revolution” is possible.  But I could imagine being born around 1765, living as a white landowner in the South, having experienced the American Revolution and then the Constitution as a teen, and only gradually coming to grips with the idea that my world would be expropriated from me because of an underlying common moral evil, before I died (if I were genetically lucky enough to live to 100 without modern medicine). Yet I would have had no grasp of the idea of a technological future, which itself could be put at risk because, for all its benefits in raising living standards, it still seemed to leave a lot of people behind.

(Posted: Saturday, September 9, 2017 at 9 PM EDT)

Will user-generated public content be around forever? The sex-trafficking issue and Section 230 are just the latest problem

It used to be very difficult to “get published”.  Generally, a third party would have to be convinced that consumers would really pay to buy the content you had produced.  For most people that usually consisted of periodical articles and sometimes books.  It was a long-shot to make a living as a best-selling author, as there was only “room at the top” for so many celebrities.  Subsidy “vanity” book publishing was possible, but usually ridiculously expensive with older technologies.

That started to change particularly in the mid 1990s as desktop publishing became cheaper, as did book manufacturing, to be followed soon by POD, print on demand, by about 2000.  I certainly took advantage of these developments with my first “Do Ask Do Tell” book in 1997.

Furthermore, by the late 1990s, it had become very cheap to have one’s own domain and put up writings for the rest of the world to find with web browsers.  And the way search engine technology worked by say 1998, amateur sites with detailed and original content had a good chance of being found passively and attracting a wide audience.  In addition to owned domains, some platforms, such as Hometown AOL at first, made it very easy to FTP content for unlimited distribution.  At the same time, Amazon and other online mass retail sites made it convenient for consumers to find self-published books, music, and other content.

Social media, first with Myspace and later with the much more successful Facebook, was at first predicated on the idea of sharing content with a known, whitelisted audience of “friends” or “followers”.  In some cases (Snapchat), there was an implicit understanding that the content was not to be permanent. But over time, many social media platforms (most of all Facebook, Twitter, and Instagram) were often used to publish brief commentaries and links to provocative news stories on the Web, as well as videos and images of personal experiences.  Sometimes they could be streamed live.  Even though friends and followers were most likely to see these posts (curated by feed algorithms somewhat based on popularity, in the case of Facebook), many of them were public for all to see.  Therefore, an introverted person like me who does not like “social combat” or hierarchy, and does not like to be someone else’s voice (or to need someone else’s voice), could become effective in influencing debate.  It’s also important that modern social media were supplemented by blogging platforms, like Blogger, WordPress and Tumblr, which, although they did use the concept of “follower”, were more obviously intended generally for public availability. The same was usually true of a lot of video content on YouTube and Vimeo.

The overall climate regarding self-distribution of one’s own speech to a possibly worldwide audience seemed permissive, in western countries and especially the U.S.   In authoritarian countries, political leaders would resist.  It might seem like an admission of weakness that an amateur journalist could threaten a regime, but we saw what happened, for example, with the Arab Spring.  A permissive environment regarding distribution of speech seemed to undercut the hierarchy and social command that some politicians claimed they needed to protect “their own people.”

Gradually, challenges to self-distribution evolved.  There was an obvious concern that children could find legitimate (often sexually oriented) content aimed at cognitive adults.  The first big problem was the Communications Decency Act of 1996.  The censorship portion of this would be overturned by the Supreme Court in 1997 (I had attended the oral arguments).  Censorship would be attempted again with the Child Online Protection Act, or COPA, for which I was a sublitigant under the Electronic Frontier Foundation.  It would be overturned in 2007 after a complicated legal battle that reached the Supreme Court twice.  But the 1996 law (the CDA was actually part of the larger Telecommunications Act) also contained a desirable provision: that service providers (ranging from blogging or video-sharing platforms to telecommunications companies and shared hosting companies) would be shielded from downstream liability for user content for most legal problems (especially defamation). That is because it was not possible for a hosting company or service platform to prescreen every posting for possible legal problems (which is what book publishers do, and yet they require author indemnification!).  Web hosting and service companies were still required to report known (as reported by users) child pornography and sometimes terrorism promotion.

At the same time, in the copyright infringement area, a similar provision developed, the Safe Harbor provision of the Digital Millennium Copyright Act of 1998, which shielded service providers from secondary liability for copyright infringement as long as they took down offending content from copyright owners when notified.  Various threats have developed to the mechanism, most of all SOPA, which got shot down by user protests in early 2012 (Aaron Swartz was a major and tragic figure).

The erosion of downstream liability protections would logically become the biggest threat to whether companies can continue to offer users the ability to put up free content without gatekeepers and participate in political and social discussions on their own, without proxies to speak for them, and without throwing money at lobbyists.  (Donald Trump told supporters in 2016, “I am your voice!”  Indeed.  Well, I don’t need one as long as I have Safe Harbor and Section 230.)

So recently we have seen bills introduced in the House (ASVFOSTA, the “Allow States and Victims to Fight Online Sex Trafficking Act”) in April (my post), and in the Senate (SESTA, the “Stop Enabling Sex Traffickers Act”) on Aug. 1 (my post). These bills, supporters say, are specifically aimed at sex advertising sites, most of all Backpage.  Under current law, plaintiffs (young women or their parents) have lost suits because Backpage can claim immunity under 230.  There have been other controversies over the way some platforms use 230, especially Airbnb.  The companies maintain that they are not liable for what their users do.

Taken rather literally, the bills (especially the House bill) might be construed as meaning that any blogging platform or hosting provider runs a liability risk if a user posts a sex trafficking ad or promotion on the user’s site.  There would be no reasonable way Google or Blue Host or Godaddy or any similar party could anticipate that a particular user will do this.  Maybe some automated tools could be developed, but generally most hosting companies depend on users to report illegal content.  (It’s possible to screen images against digital fingerprints of known child pornography, and it’s possible to screen some videos and music files for possible copyright matches, and Google and other companies do some of this.)
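As a rough illustration of how that kind of image screening works, here is a minimal sketch of fingerprint matching; the hash list is hypothetical, and real systems (PhotoDNA, Content ID) use perceptual fingerprints that survive resizing and re-encoding rather than the exact cryptographic hash shown here:

    import hashlib
    from pathlib import Path

    # Hypothetical set of fingerprints of already-identified illegal images.
    # A production system would use perceptual hashes; an exact SHA-256 match
    # is shown only to illustrate the lookup step.
    KNOWN_BAD_FINGERPRINTS = {
        "0f1e2d3c4b5a69788796a5b4c3d2e1f00112233445566778899aabbccddeeff",
    }

    def fingerprint(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def should_flag_upload(path: Path) -> bool:
        """True if the uploaded file matches a known-bad fingerprint."""
        return fingerprint(path) in KNOWN_BAD_FINGERPRINTS

The catch, as the paragraph above suggests, is that this only catches material that has already been identified and catalogued somewhere; it cannot predict what a particular user is about to post.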

Rob Portman, a sponsor of the Senate bill, told CNN and other reporters that normal service and hosting companies are not affected, only sites knowing that they host sex ads.  So he thinks he can target sites like Backpage, as if they were different.  In a sense, they are:  Backpage is a personal commerce-facilitation site, not a hosting company or hosting service (which by definition has almost no predictive knowledge of what subject matter any particular user is likely to post, and whether that content may include advertising or may execute potential commercial transactions, although use of “https everywhere” could become relevant).  Maybe the language of the bills could be tweaked to make this clearer. It is true that some services, especially Facebook, have become proactive in removing or hiding content that flagrantly violates community norms, like hate speech (and that itself gets controversial).

Eric Goldman, a law professor at Santa Clara, offered analysis suggesting that states might be emboldened to try to pass laws requiring pre-screening of everything, for other problems like fake news.  The Senate bill particularly seems to encourage states to pass their own add-on laws, which could try to require pre-screening.  It’s not possible for an ISP to know whether any one of the millions of postings made by customers could contain sex trafficking before the fact, but a forum moderator or blogger monitoring comments probably could.  Offhand, it would seem that allowing a comment with unchecked links (which I often don’t navigate because of malware fears) could run legal risks (if the link was to a trafficking site under the table).  Again, a major issue should be whether the facilitator “knows”.  Backpage is much more likely to “know” than a hosting provider.  A smaller forum host might “know” (but Reddit would not).
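That asymmetry of scale shows up even in a small moderation sketch (my own hypothetical, with a made-up blocklist): a single blogger approving a handful of comments a day could check every link with a script like this, while a hosting provider handling millions of customer posts could not review them the same way:

    import re
    from urllib.parse import urlparse

    # Hypothetical blocklist a small forum moderator might maintain by hand.
    BLOCKED_DOMAINS = {"trafficking-ads.invalid", "backpage-clone.invalid"}

    URL_PATTERN = re.compile(r"https?://\S+")

    def comment_links_ok(comment: str) -> bool:
        """Hold a comment for manual review if any link points to a blocked domain."""
        for url in URL_PATTERN.findall(comment):
            host = (urlparse(url).hostname or "").lower()
            if host in BLOCKED_DOMAINS:
                return False
        return True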

From a moral perspective, we have something like the middle school problem of detention for everybody for the sins of a few.  I won’t elaborate here on the moral dimensions of the idea that some of us don’t have our own skin in the game in raising kids or in having dependents, as I’ve covered that elsewhere.  But you can see that people will perceive a moral tradeoff, that user-generated content on the web, the way the “average Joe” uses it, has more nuisance value (with risk of cyberbullying, revenge porn, etc) than genuine value in debate, which tends to come from people like me with fewer immediate personal responsibilities for others.

So, is the world of user-generated content “in trouble”?  Maybe.  It sounds like it could come down to a business-model problem.  It’s true that shared hosting providers charge annual fees for hosting domains, but they are fairly low (except for some security services).  But free content service platforms (including Blogger, WordPress, YouTube, Facebook and Twitter) do say “It’s free” now; they make their money on advertising connected to user content.  A world where people use ad blockers and “do not track” would seem grim for this business model in the future.  Furthermore, a lot of people have “moral” objections to this model, saying that only authors should get the advertising revenue, but that would destroy the social media and UGC (user-generated content) world as we know it.  Consider the POD book publishing world. POD publishers actually do perform “content evaluation” for hate speech and legal problems, and do collect hefty fees for initial publication.  But lately they have become more aggressive with authors about book sales, a sign that they wonder about their own sustainability.

There are other challenges for those whose “second careers”, like mine, are based on permissive UGC.  One is the weakening of network neutrality rules, as I have covered here before.  The second comment period ends Aug. 17.  The telecom industry, through its association, has said there is no reason for ordinary web sites to be treated any differently than they have been, but some observers fear that some day new websites could have to pay to be connected to certain providers (beyond what you pay for a domain name and hosting now).

There have also been some fears in the past which have vanished with time.  One flare-up started in 2004-2005 when some observers worried that political blogs could violate federal election laws by being construed as indirect “contributions”.  A more practically relevant problem is simply online reputation and the workplace, especially in a job where one has direct reports, underwriting authority, or the ability to affect whether a firm gets business because of perceived “partisanship”.  One point that often gets forgotten is that social media accounts can indeed be set up with full privacy settings so that they are not searchable.  Although that doesn’t prevent all mishaps (just as handwritten memos or telephone calls can get you in trouble at work in the physical world), it could prevent certain kinds of workplace conflicts.  Public access to amateur content could also be a security concern: in a situation where an otherwise obscure individual is able to become “famous” online, he could make others besides himself into targets.

Another personal flareup occurred in 2001 when I tried to buy media perils insurance and was turned down for renewal because of the lack of a third-party gatekeeper. This issue flared into debate in 2008 briefly but subsided.  But it’s conceivable that requirements could develop that sites (at least through associated businesses) pay for themselves and carry media liability insurance, as a way of helping account for the community hygiene issue of potential bad actors.

All of this said, the biggest threat to online free expression could still turn out to be national security, as in some of my recent posts.  While the mainstream media have talked about hackers and cybersecurity (most of all with elections), physical security for the power grid and for digital data could become a much bigger problem than we thought if we attract nuclear or EMP attacks, either from asymmetric terrorism or from rogue states like North Korea.  Have tech companies really provided for the physical security of their clouds and data given a threat like this?

Note the petition and the suggested Congressional content format offered by the Electronic Frontier Foundation for bills like SESTA. It would be useful to know how British Commonwealth and European countries handle the downstream liability issues, as a comparison point. It’s also important to remember that weakening a statutory downstream liability protection for a service provider does not automatically create that liability.

(Posted: Thursday, Aug. 3, 2017 at 10:30 PM EDT)

Families of San Bernardino terror attack victims sue Facebook, Twitter, Google over “propaganda” arguments that evade Section 230

Families of victims of the fall 2015 terror attack in San Bernardino, CA are suing the three biggest social media companies (those that allow unmonitored broadcast of content in public mode), that is, Facebook, Twitter, and Google. Similar suits have been filed by victims of the Pulse attack in Orlando and the 2015 terror attacks in Paris.

Station WJLA in Washington DC, a subsidiary of the “conservative” (perhaps mildly so) Sinclair Broadcast Group in Baltimore, put up a news story Tuesday morning, including a Scribd PDF copy of the legal complaint in a federal court in central California, here. I find it interesting that Sinclair released this report, as it did so last summer with stories about threats to the power grids, which WJLA and News Channel 8 in Washington announced but then provided very little coverage of to local audiences (I had to hunt it down online to a station in Wisconsin).

Normally, Section 230 protects social media companies from downstream liability for the usual personal torts, especially libel, and DMCA Safe Harbor protects them in a similar fashion from copyright liability if they remove content when notified.

However, the complaint seems to suggest that the companies are spreading propaganda and share in the advertising revenue earned from the content, particularly in some cases from news aggregation aimed at user “Likenomics”.

Companies do have a legal responsibility to remove certain content when brought to their attention, including especially child pornography and probably sex trafficking, and probably clearcut criminal plans. They might have legal duties in wartime settings regarding espionage, and they conceivably could have legal obligations regarding classified information (which is what the legal debate over Wikileaks and Russian hacking deals with).

But “propaganda” by itself is ideology. Authoritarian politicians on both the right and left (Vladimir Putin) use the word a lot, because they rule over populations that are less individualistic in their life experience than ours, where critical thinking isn’t possible, and where people have to act together. The word, which we all learn about in high school civics and government social studies classes (and I write this post on a school day – and I used to sub), has always sounded dangerous to me.

But the propagation of ideology alone would probably be protected by the First Amendment, until it is accompanied by more specific criminal or military (war) plans. A possible complication could be the idea that terror ideology regards civilians as combatants.

Facebook recently announced it would add 3000 associates to screen for terror or hate content, but mainly in conjunction with Facebook Live broadcasts of crimes or even suicide. I would probably be a good candidate for one of these positions, but I am so busy working for myself that I don’t have time (in “retirement”, which is rather like “in relief” in baseball).

Again, the Internet that we know with unfiltered user-generated content is not possible today if service companies have to pre-screen what gets published for possible legal problems. Section 230 will come under fire for other reasons soon (the Backpage scandal).

I have an earlier legacy post about Section 230 and Backpage here.

(Posted: Tuesday, May 9, 2017 at 1 PM EDT)