Why user-generated content (mine at least) seems to be near a precipice

Recently, Facebook announced it would make various changes to its newsfeed algorithms and policies to encourage people to interact more personally online and engage less in passive news-posting and news-gathering behavior. We can debate exactly what they want to accomplish and whether this policy change will reduce fake news (there are signs from overseas it might not, and other criticisms), but it is right to stop and wonder how we balance broadcasting our thoughts to others online (or in other vanity efforts like self-published books or vlogs) with real interactions.

Recently, a good friend on Facebook (whom I do see personally and whose professional career has him dealing with some of the national security questions I pose on this blog – and I don’t know any specifics) wrote an in-line post critical of the gratuitous nature of free content on the Internet.  We expect our writers to work for free, he essentially said.  We can’t expect that of plumbers or electricians or people with “real jobs”.  Oh, I can recall debates back in the 1980s as to whether (then mainframe) “data processing” gave us “real jobs”.

My friend’s post raises the question: what is a “writer” anyway?  Is he/she someone who writes what others want so that it will sell (like Joan Didion or Armistead Maupin, both the subjects of indie film biographies last year)?  Or can someone who wants to write a personal manifesto and achieve fame with it be a real writer?  Manifestos, remember, however “from on high” they seem, have a bad rap; a few authors of these screeds have gone on to do some very bad things (like with guns).

So that comes to my own content, which appears to be “free” in the most anti-competitively abusive sense.  I think of Reid Ewing’s 2012 short film “It’s Free” set in a public library (to be followed by “Free Fish”).  Most of my online content appears in four WordPress blogs (set up in 2014 and then 2016) or one of sixteen “Blogger” blogs (starting in 2006).  But there is also a lot of older legacy content on “doaskdotell.com”, all flat html, and this includes all the text of my books.  And, yes, “it’s free”. Like attending my first gay talk group in February 1973.

It’s true that I have Google Adsense on Blogger, but right now my WordPress blogs and flat sites have no advertising, no pop-ups, no donation jars, no “calls to action”, and no email lists.  (WordPress does invite the user to share on Facebook, Twitter, or Google-Plus when an individual post is brought up, with comments.)  I don’t run “other people’s” donation (or political candidacy) campaigns on my sites, and I don’t pimp causes from a partisan stance. To a lot of people, it seems, that means I won’t “play ball” with them.

Yet, I’m a fan of Australian blogging guru Ramsay Taplin’s “Blogtyrant” world, and most of his recommendations do apply to small, niche businesses that want to reach consumers, sometimes even to some “real” authors (in the sense the Authors Guild means) and musicians (who sell on Bandcamp as well as Amazon).  Aggression with mailing lists and promotions pays if you have legitimate customers whose needs you can really meet. Otherwise it falls into spam.

So that brings me to the question, how can I sustain this?  The transparent answer is that I have other money, so it hasn’t had to pay its own way. A lot of it was saved while I was working, because I was able to avoid debt.  (Not having kids means no big mortgage is necessary.)  Some of it is inherited (and that gets into the issue of my own and mom’s trusts, out of scope here).  And I got lucky in 2008; I probably even benefited from the crisis (seeing it coming, and some conservative values, helped).  So call me a rentier, an abusive capitalist, ripe for expropriation by Antifa if you like.

It’s useful for me to go back and recall how I got into self-publishing, long before the Internet became available to newbies.  I probably got my first little article published in 1974, where I argued for gay rights from a libertarian perspective, a “mind your own business” plea to the world.

In the 1980s, I did network with the medical and public health community, the Dallas Gay Alliance, and right wing elements, all by mailed letters, trying to get some sort of political compromise, during a time when Texas (in early 1983) considered passing a very draconian anti-gay law.  I was quite concerned about the shallowness of arguments sometimes put out by traditional “activists” seeming to expect to be viewed as victims merely by belonging to a “class”.  I was particularly attentive to the clinical information as it unfolded.  There was a period when the conventional way of resisting was “don’t take the test” once an HIV test was available.  I did volunteer as a “baby buddy” at the Oak Lawn Counseling Center during that time.

In the 1990s the issue of gays in the military came onto center stage.  The components of the debate at the time (such as “privacy” in the barracks, as well as “unit cohesion”, not quite the same thing) cut across many other issues in an unusual way. I began getting published in some LGBT and libertarian journals (list).  I wanted to get the arguments right at an individual level, without appeals to morally dubious claims of group oppression. Because of my own situation and personal history, I entered the debate, and in August 1994 I decided firmly, while on vacation in Colorado, to write my first DADT book, which I finally issued in July 1997.  Partly to avoid a public conflict of interest which I have explained elsewhere (as in the DADT III book), I took a convoluted corporate transfer to Minneapolis at about the same time. I actually did sell copies of the book reasonably well for the first 18 months or so, but by the middle of 1998 I had discovered I could draw a lot more attention to my work by simply placing the book text online and letting the search engines find it, which they did.  (I paid nothing to do this, other than the nominal fees for a domain – the guy operating the service was a personal friend through work – and I did not need to code metatags or secure SEO to get it found.  It seemed use of free content online for self-promotion was rather novel at the time; during the dot-com boom, not that many people really did it this way.)   The search engines proved to be effective.  On a few occasions, when I made a controversial addition to material on the site, I got email feedback the next day.  My use of the “It’s free” technique seemed very effective but came under threat from the 1998 “Child Online Protection Act”, for which I would become a sub-litigant under the Electronic Frontier Foundation’s sponsorship.

Over time, my commentary would cross over many other issues, particularly with regard to libertarianism on most social and economic issues, and expand after 9/11 into how you protect personal liberty in a world with external threats, sometimes born of populist “politics of resentment” as well as religious fundamentalism (by no means limited to radical Islam) and possibly a resurgence of communism (North Korea now). After 9/11, one of the proponents of Bill Clinton’s “Don’t Ask, Don’t Tell, Don’t Pursue”, Charles Moskos, argued publicly for resuming the military draft (to include women) and dropping the military ban altogether.  That fit into my arguments perfectly.  As personal and job circumstances changed over the years (DADT III again) I kept my material online, and my staying out there so long played a significant role in the eventual repeal of DADT in 2011, with Obama in office.

I have contemplated ideas like “opposing viewpoints” automation (book series), which sites like Kialo and Better Angels take on, and I may well look into these. Hubpages could provide another opportunity.

Over the years, there have been various threats to the sustainability of the way I work.  These include the undoing of network neutrality and the weakening of Section 230 (the Backpage controversy), as well as various efforts by established media to tighten copyright and trademark laws, not only to combat real piracy (a legitimate concern) but to undermine competition from people (like me) who could compete with them at much lower cost by staying outside the union and guild world.  Another issue, less important in the US than in Europe, is the supposed “right to be forgotten”, which my own use of search engines confounds.  This gets back to libertarian issues (right to work) and to the SOPA debate in 2011.  A critical concept behind all of this is that social media companies and hosting companies not bear undue downstream responsibility or liability exposure for the actions of their users; otherwise they could not let us create user-generated content without gatekeepers.

Another possibly grave threat could be personal targeting from (foreign) enemies, or causing others (family members) associated with a speaker like me to be targeted.  I actually was concerned about this while my mother was alive.  This has not happened to me, as I don’t seem to be as visible a target as, say, Milo Yiannopoulos (or Pam Geller or Mary Norris), even though I share and communicate some similar beliefs.  But, if you think about this with a Tom Clancy-type novelist’s mind, you can imagine this as another way an enemy could subvert American democracy.  That’s the Sony hack issue at the end of 2014 from North Korea.  Instead, Russia, in particular, noticed that speakers like me tended to be noticed by the “choir” (other academics and policy makers) but not by the “average Joes”, whose everyday needs we seemed oblivious to.  So the Russians pumped Facebook and Twitter with fake news that gullible people would believe, in such a way that Asperger-like people like me (not quite the same as schizoid), trying to influence policy with a passive search engine strategy, wouldn’t even notice or care.  For them it worked, and Trump won.

I think a fair criticism of me would be that I don’t actually have anything to sell customers that meets their needs, so no “Blogtyrant” strategy of playing ball could work. Do I have content that people would “want” and would pay for?  Well, there’s the novel (and to some extent the fiction in DADT-III, which could make a nice two-part indie film), and the music.  In fact, I have worked on my own composed music (finishing what I had started in high school and the early college years, at about the time of the William and Mary expulsion) and, because it is post-romantic, it may actually be capable of “crowd pleasing” in a way that a lot of the manipulative music from established young composers today (under 40) is not.

I do need to “stay on point” with my own work, so it is very difficult for me to respond to pleas from other parties to join their efforts, in activism and resistance.  It is also difficult to give away time in “service” unless I find niche-like service opportunities that are closer to my own skill set.   A good example could be directing chess tournaments which invite underprivileged youth, or arranging concerts for other musicians.

I do get concerned over two big questions.  One is that the permissive environment that has allowed so much user-generated content to reach readers and consumers may not be sustainable, for a combination of reasons:  rampant user abuse, security, and the ability of companies to make money legitimately without fake news, bots, intrusive ads, and all kinds of questionable techniques.  I don’t know if, for example, Google and WordPress would find it profitable to keep their free platforms forever.  And I can imagine ways it could become much harder in the future to get reasonable hosting than it has been until today.  The recent incidents where alt-right sites (at least one) were banned by most hosts over their content are part of my concern.  You can have a specific objection to, say, neo-Nazism, but then it’s a slippery slope:  radical Islam, communism (Stalinism or gulag-ism, which is where Antifa could find itself headed), and all kinds of other complaints based on “intersectionality” or “populism” threaten the whole expectation of legitimacy of free speech.  You could, for example, require that every website, by certain accounting rules, show that it pays its own freight (although that would seem to invite porn back, wouldn’t it?).   It’s hard to “pay your own way” without admitting to group preferences and “partisanship”, and showing social “loyalty” and even “community engagement”.  All of this is in tension with my insistence on looking at human rights as an individual’s property, regardless of any membership in a group that claims some sort of systematic oppression (and eventual intersectionality).  But there is no constitutional principle that guarantees that anyone has the right to distribute his own personalized speech without the cooperation of others.

This brings me back to the whole idea of a social contract between the individual and his society.  You can call it “rightsizing”, but that’s a dangerous idea that leads to authoritarianism, either on the far right (or alt-right) or far left.  (Yup, a smaller country like Singapore can get away with this, and China is trying to come up with some way to grade people’s social compatibility by 2020!)  Yet, on a personal level, there’s something wrong when we think of others as “unworthy” of being prioritized to enter our lives because they aren’t “good enough” and didn’t “make it”.  That used to be hidden more, but there is an implicit understanding that if too many of us think that way, we invite especially right-wing totalitarianism in the door (consider Logan Paul’s movie “The Thinning” as a warning).  That may be one reason why I do see so much “pimping” of “other people’s causes” with appeals for “calls to action” all the time.  On one level, I resist getting involved with all these public “knocks on the door”, but I probably can’t avoid them forever.  As Martin Fowler wrote in his 2014 book, everyone belongs “somewhere” in some group, and has to bond with people who are imperfect, far less perfect than teen Clark Kent.  Everyone’s karma, and whatever fragmentary afterlife follows (and I think there is one, however fleeting and combinatorial), is greatly affected by what they depended on – and that means groups.  I resist “joining” resistances (and marching and shouting in demonstrations for specific groups), but I know that eventually there comes a point where it is probably impossible to survive without doing so, even without coming in your shorts.

There is a political point here.  If legal or practical considerations made it impossible for businesses to allow me my own platforms, changing what has been the case since late 1996, I would be forced to work through groups, and to advocate for or personally assist people whom individually I did not approve of apart from the group.  But this could be better for a lot of people and could address some of the underlying causes of inequality.  This all relates to the “implicit content” problem with free speech, or the “skin in the game” argument.

Perhaps what I am seeing is something like an attack on introversion, a demand that every endeavor somehow relate to other people’s needs. Yet, as “The Good Doctor” shows us, even introverted people sometimes meet real needs, and save us.

Earlier legacy piece on the “free content” idea.

(Published: Sunday, January 14, 2018 at 6:30 PM EST)

Community engagement v. individualism, with authoritarians watching

I have a friend in Virginia libertarian circles, Rick Sincere, who recently has run some interesting guest posts on his blog, like this recent one on Masterpiece Cakeshop.

I do have a few guest posts on my two newer WordPress blogs (“Blogtyrant” really encourages the practice), but this one will be a pseudo-guest post, a Smerconish-like compendium of some feedback from a friend in the past twenty-four hours after a typical social in the “gay establishment” with all the usual abstract trappings about equality.

He shared with me the parable of Rebekah Mercer (think Mercer County, New Jersey, where I lived for my first job with RCA, in Princeton, starting in 1970), daughter of the hedge fund billionaire Robert Mercer, conveyed in this Washington Post article of January 5 by Kyle Swenson.  My friend’s narrative focused on the role of pollster and political operative Patrick Caddell in convincing the family that Donald Trump needed to become their Mr. Smith who would go to Washington and wreck the establishment.

The article focuses on the resentment of the elites by just part of the far right.  True, the Left had carried opposition to pipelines and drilling too far, if the nation really needs energy autarky. True, foreign competition had destroyed a lot of manufacturing jobs – and the hedge fund managers didn’t recognize the irony of watching the middle class follow them into the world of hucksterism (as I found out in many job interviews in the 2000s) when we didn’t make enough of our own stuff.  Indeed, that’s a legitimate national security concern.  Up to some point, the nationalism of Steve Bannon had to make sense to them.  And, true enough, the meddlesomeness of Obamacare hurt a lot of young adults, who were forced to pay higher premiums to take care of “other people’s problems” (like the opioid epidemic) that they might be unlikely to encounter themselves.

The Mercers probably didn’t care so much about the social issues:  they just resented the idea of people fighting for different treatment for different groups instead of fighting for themselves as individuals. (Maybe that means it’s OK to be a charismatic superhero-like cis gay man [even a comic book space alien] but not a sissy, and not an earthly immigrant.)  But Robert, like Donald, shared a personal revulsion for involvement with “losers”. A man’s real worth was his financial net worth, like a grade for one’s life.

But then something else happened. Trump carried his authoritarian streak (and need for control and self-gratification as the leader) much further than the Mercers probably wanted.  But he was the best “Mr. Smith Goes to Washington” (Frank Capra’s 1939 film for Columbia, legacy review) that they could find.

But what happened, as we know, is that Trump played to a base who see things more in terms of a strong politician taking care of them than in terms of actual policy fixes.  And as Michael Moore pointed out, a lot of people just wanted a “Blow Up”, a revolution – to disrupt the lives of the elites, even if you destroyed the country in the process.

All of this indeed leaves the country in increased danger, particularly from one particular enemy, and detracts from orderly solutions to all of our inequality problems.

Yes, it puts me on the spot.  While I leverage asymmetry online to establish myself as an individual, apart from the group, I probably invite new dangers, from combative enemies who can also leverage the same asymmetry.

There are many existential threats out there to my continuing my own style of free speech, as I’ve covered before (the gratuitousness problem).  I’ll be coming back to some of the details (probably the Section 230 issues are more important than network neutrality) soon, but I wanted to revisit the idea of “the privilege of being listened to” as in my DADT III book.  One idea is that, before someone is “heard” as an individual he (or she or “they”) needs to show some kind of community engagement.

That sounds like almost “forced” volunteerism, a step down from national service, supervised by the bureaucracy of charities and nonprofits.

Now, there are two kinds of volunteerism to start.  One is really volunteering for political activism.  A friend suggested volunteering a little at HRC or some similar group (NGLTF) to learn what “group identity” sensitivity is all about (given all my criticism of “trigger warnings”, “microaggressions”, and “intersectionality”).  Now, as in the movie “Rebirth”, I think there is something wrong with volunteering just to “look” or “spectate”.  I wouldn’t do that unless I was completely aligned with the goals of the group (as opposed to the liberty interests of individuals in the group, which Rick Sincere’s blog above deals with).  My own father used to deploy the phrase “as a group” when he talked about race (unfortunately quoting the Bible wrong). Bill Clinton had to deny that lifting the military ban would be about “group rights”.

That said, I do engage in activism of sorts with my blogs – these days, mostly on sustainability for our civilization, where, yes, I’ve focused on the EMP issue as possibly posing a singularity-type threat.  Along the lines of the work I have done (I don’t mean with a therapist), I would love to work for a news organization and have a press pass.  Then, yes, I might be able to cover HRC activism with some objectivity.  And I can see covering events regarding, for example, net neutrality or Section 230. I don’t see marching on picket lines over these issues, however.

The other kind of volunteerism is to help people – with real needs.  But that forks in a few directions.

I did this in the 1980s, and less in the 1990s, with the AIDS crisis, because it had reared up in my own life (although I didn’t get infected, because of reverse Darwinism – “The Normal Heart”).  I was a “baby buddy” for a time in 1986-87 at the Oak Lawn Counseling Center in Dallas.  I was also the pain who questioned the gay politicians for wanting to get out of some of the “extended personal responsibility” issues, which got dangerous (the “don’t take the test” crowd).  In the 1990s, I volunteered one night a month for a while at Food and Friends, counting donations, when it was located in the Navy Yard-Waterfront area of Washington.

I have spot-volunteered, like at a local church’s monthly “community assistance” dinners and handout sessions, but not found it terribly meaningful.  Some volunteer activities ask for more help than they need because they need the bodies only for a short time.

Now, as with the examples I gave, you can focus volunteerism on “groups” to which you have “belonged” (whether or not you “chose to”).  You can focus on whether giving goes to that group, or to any individuals in need.  And I can’t blow off the group idea completely.  Consider Trump’s joke about Pence’s past attitude toward “LGBT people” (as a group): “Oh, he wants to hang ‘em all”. (I remember the 1968 Clint Eastwood movie “Hang ‘em High”.)  It sounds funny even on the “gay right”.  But there’s a point where it isn’t.  You can be in the wrong group whether you chose it or not.  Imagine living in Germany in the 1930s. That does help one grasp the sensitivities surrounding Charlottesville.

The effectiveness of volunteerism depends on the skills you have. I could imagine directing chess tournaments in underprivileged areas – but it would be desirable to be as effective a chess player as possible first. I can imagine helping people not fall for phishing scams.

But a lot of times charities want volunteers to go out of their own boxes.  The Red Cross, for example, wants volunteers to install smoke detectors in low income homes.  That would make more sense if I had kept the trust house.

There is another direction that “real needs” can fork to — actually taking responsibility for supporting or hosting someone.

So, the bottom line is, I have to finish my own work, on my issues as I have laid them out, before I’m much good on “somebody else’s” problems and supervision.  I have my own goals and path and self-direction and strategy. It takes time and freedom from disruption to carry out. I can’t let it be negotiable.  Yet I realize that if I didn’t have this, I’d have to be more amenable to “groups” to “survive”. Maybe that is better for a lot of other people.

I’ve had some discussion with the friend, who tells me he cannot be open online about controversial topics. This gets back to what I’ve called “conflict of interest” over publicly available speech. I’ve covered this before with links, but it’s good to reiterate a couple of things.  If someone has direct reports on the job, or the ability to pass “underwriting” judgments on others, then off-the-job policy opinions that can easily be found by others (as by search engines or public social media pages) put the relationship between the associate and stakeholders at potential risk, even legally (as in a hostile-workplace claim). One way to handle this is for an employer to insist that the person’s only public social media presence be the official work one, and that all private social media communications be under full privacy settings. If you have certain kinds of jobs, you relinquish the right of “self-publication” (or self-distribution).

(Posted: Saturday, Jan. 6, 2018 at 9 PM EST)

SESTA clears Senate committee, and Congress seems serious about stopping trafficking, even if it requires sacrifices from Internet users — and it seems superfluous

Electronic Frontier Foundation has reported that the Senate Commerce Committee has approved a version of SESTA, the Stop Enabling Sex Traffickers Act, S. 1693.  Elliot Harmon’s article calls it “still an awful bill”.   Harmon goes into the feasibility of using automated filters to detect trafficking-related material, which very large companies like Google and Facebook might be half-way OK with. We saw this debate about filtering in the COPA trial more than a decade ago (I attended one day of that trial in Philadelphia in October 2006). No doubt, automated filtering would cause a lot of false positives and implicit self-censoring.
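Harmon’s point about false positives is easy to illustrate with a minimal sketch (the keyword list, function name, and sample texts here are my own hypothetical assumptions, not any platform’s actual filter): a simple keyword matcher flags a survivor’s own testimony just as readily as an actual solicitation, which is exactly the unintended consequence critics of the bill worry about.

```python
# A naive keyword filter: flags any text containing a watch-listed phrase.
# The keyword list and sample texts are hypothetical illustrations only.

KEYWORDS = ["sex trafficking", "escort"]

def naive_filter(text: str) -> bool:
    """Return True if simple keyword matching would block this text."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

# An actual solicitation is caught...
ad = "Escort services available, call now"
# ...but so is a survivor describing her own experience: a false positive
# of exactly the kind that could silence victims.
testimony = "I escaped sex trafficking and want to warn other victims"

assert naive_filter(ad)
assert naive_filter(testimony)  # the filter cannot distinguish intent
```

Real filters are more sophisticated than substring matching, but the underlying problem is the same: a machine sees the vocabulary, not the intent.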

Apparently the bill contains or uses a “manager’s amendment”  (text) floated by John Thune (R-SD) which tries to deal with the degree of knowledge that a platform may have about its users.  The theory seems to be that it is easy to recognize the intentions of customers of Backpage but not of a shared hosting service. Sophia Cope criticizes the amendment here.

Elliot Harmon also writes that the Internet Association (which represents large companies like Google) has given some lukewarm support to modified versions of SESTA, which would not affect large companies as much as small startups that want user-generated content.  It’s important to note that SESTA (and a related House bill) could make it harder for victims of trafficking to discuss what happened to them online, an unintended consequence, perhaps.  Some observers have said that the law regarding sex trafficking should be patterned after child pornography law (where the law seems to work without too much interference with users) and that the law is already “there” now.

But “Law.com” has published a historical summary by Cindy Cohn and Jamie Williams that traces the history of Section 230 all the way back to a possibly libelous item on an AOL message board regarding Oklahoma City (the Zeran case).  Then others wanted to punish Craigslist and other sites for allowing users to post ads that were discriminatory in a Civil Rights sense. The law needs to recognize the difference between a publisher and a distributor (and a simple utility, like a telecom company, which can migrate us toward the network neutrality debate).   Facebook and Twitter are arguably a lot more involved with what their users do than are shared hosting sites like BlueHost and Verio, an observation that seems to get overlooked.   It’s interesting that some observers think this puts Wikipedia at particular risk.

I don’t have much of an issue with my blogs, because the volume of comments I get these days is small (thanks to the diversion by Facebook) compared to 8 years ago.  When I accept a guest post, I should add, Section 230 would not protect me, since I really have become the “publisher”; so if a guest post is controversial, I tend to fact-check some of the content (especially accusations of crimes) myself online.

I’d also say that a recent story by Mitch Stoltz about Sci-Hub, relating to the Open Access debate which, for example, Jack Andraka has stimulated in some of his TED Talks, gets to be relevant (in the sense that DMCA Safe Harbor is the analogy to Section 230 in the copyright law world). A federal court in Virginia ruled against Sci-Hub (Alexandra Elbakyan) recently after a complaint by a particular science journal, the American Chemical Society.  But it also put intermediaries (ranging from hosting companies to search engines) at unpredictable risk if they support “open access” sites like this. The case also runs some risk of conflating copyright issues with trademark, but that’s a bit peripheral to discussing 230 itself.

Again, I think we have a major break in our society over the value of personalized free speech (outside of the control of organizational hierarchy and aggregate partisan or identity politics).  It’s particularly discouraging when you look at reports of surveys at campuses where students seem to believe that safe spaces are more important than open debate, and that some things should not be discussed openly (especially involving “oppressed” minorities) because debating them implies that the issues are not settled and that societal protections could be taken away again by future political changes (Trump doesn’t help). We’ve noted here a lot of the other issues besides defamation, privacy and copyright; they include bullying, stalking, hate speech, terror recruiting, fake news, and even manipulation of elections (an issue we already had an earlier run-in about in the mid-2000s over campaign finance reform, well before Russia and Trump and even Facebook). So it’s understandable that many people, maybe used to tribal values and culture, could view user-generated content as a gratuitous luxury for some (the more privileged, like me) that diverts attention from remedying inequality and protecting minorities.  Many people think everyone should operate only by participating in organized social structures run top-down, but that throws us back, at least slouching toward authoritarianism (Trump is the obvious example). That is how societies like Russia, China, and, say, Singapore see things (let alone the world of radical Islam, or the hyper-communism of North Korea).

The permissive climate for user-generated content that has evolved, almost by default, since the late 1990s seems to presume individuals can speak and act on their own, without too much concern about their group affiliations.  That idea from Ayn Rand doesn’t seem to represent how real people express themselves in social media, so a lot of us (like me) seem to be preaching to our own choirs, and not “caring” personally about people outside our own “cognitive” circles.  We have our own kind of tribalism.

(Posted: Wednesday, Nov. 15, 2017 at 2 PM EST)

Update: Monday, Nov 27, 10 AM EST

I’ve said that this doesn’t sound like a direct problem for bloggers moderating comments, but could it mean legal liability if a blogger approved a comment that linked to a site selling sex trafficking?  Normally I don’t follow links from many comments, out of fear of malware, and I don’t guarantee that commenters’ own embedded hyperlinks are “safe”.  Some comments are in foreign languages, and I generally don’t translate them (I usually insist that they use the normal alphabet).   Could this change?  I suppose, however, that this issue could exist with child pornography now.  This concern applies even though I use a webhosting partner service (Akismet) to filter spam comments.
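Since following commenters’ links blind is risky, one could at least surface where a comment’s hyperlinks point before approving it. Here is a minimal sketch of that idea (the class and function names are my own assumptions for illustration, not an Akismet or WordPress API):

```python
# Illustrative sketch: pull the link targets out of an HTML comment so a
# moderator can inspect the destination domains before approving it,
# instead of clicking through blind.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def comment_domains(comment_html):
    """Return the hostnames a comment links to, for manual review."""
    parser = LinkExtractor()
    parser.feed(comment_html)
    return [urlparse(link).netloc for link in parser.links]

comment = 'Great post! <a href="http://example.com/page">see this</a>'
assert comment_domains(comment) == ["example.com"]
```

This only tells a moderator where a link points, not whether the destination is safe or lawful, but even that is more than approving a comment on faith.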


House Judiciary Committee holds major hearing on Section 230 modifications due to sex-trafficking and Backpage

On Tuesday, October 3, 2017, the House of Representatives Judiciary Committee, chaired by Bob Goodlatte, held a two-hour hearing on a House bill, HR 1865, the Allow States and Victims to Fight Online Sex Trafficking Act of 2017.   The Senate has a similar bill, SESTA, the Stop Enabling Sex Traffickers Act, S. 1693.  Ann Wagner (R-MO) had a press release in April 2017, with this commentary. Govtrack also offers this provocative editorial.

The Electronic Frontier Foundation has a blog posting by Elliot Harmon, Sophia Cope, and India McKinney. The actual hearing starts at about 23 minutes into the video.

The hearing was chaired by Steve Chabot (R-OH). Sheila Jackson Lee (D-TX) gave a long statement.

There were four speakers.  All of them recognized that Section 230 had been essential for the growth of user-generated content by relieving service providers of much potential downstream liability that would require prescreening of content before it could be published.

Chris Cox, a former Congressman who co-authored Section 230 (and later served as SEC Chairman), explained how the provision was added to the Communications Decency Act after a court decision (Stratton Oakmont v. Prodigy) held that a service provider that did any “good Samaritan” editing of user content became a publisher of that content, liable for all user content on the facility, which would have forced pre-screening of everything. Cox explained that the law should encourage sample monitoring for content that is grossly illegal, without penalizing providers acting in good faith for content that cannot be caught.

Cox would also later explain that right now there is no “knowing” standard for most illegal content (except probably child pornography). A website operator loses Section 230 protection only when it participates in creating or curating illegal content.

U.S. Naval Academy cybersecurity professor Jeff Kosseff spoke, raising similar concerns.   He said that with the House bill as proposed now, he would advise clients not to take the risk of inviting user-generated content at all.

Catholic University Columbus School of Law professor Mary Leary testified that the sex-trafficking problem had become an emergency, extending beyond very reasonable parallel concerns about promoting terrorism or offering murder for hire.   Leary works with the National Center for Missing and Exploited Children.

Also testifying was Engine executive Evan Engstrom, who urged Congress to be very cautious.

There was mention of the long-running Roommates.com case, in which the site was sued for allowing users to post requests discriminating in what sounds like a personal choice of roommates.

There was a suggestion that sex trafficking should be handled just like child pornography, where there is a “knowing” standard.

There was incidental mention of the Las Vegas shootings, with talk of stricter laws on gun add-ons that convert them into machine guns.  There was also a suggestion that any undocumented victims would not be pursued by USCIS.


(Posted: Thursday, Oct. 5, 2017 at 1:45 PM EDT)

Cato Institute covers many First Amendment topics in day long forum; what about downstream liability concerns?

Last Thursday, September 28, 2017, I attended a day-long event at the Cato Institute in Washington DC, “The Future of the First Amendment”.  One could just as well call it “the future of free speech” in the U.S.

Cato has a link for the event and has now uploaded all the presentations, which you can view here. The videos include embeds of the slides and of audience members asking questions, professionally filmed, better than I can do on my own at an event.

The “table of contents” in the link shows the topics covered as well as the credentials of the many invited speakers; indeed, the presentation was segmented and topical and tended to focus on many narrow, separate issues.  I’ll come back at the end of this piece to what I would like to have seen covered more explicitly.

The earliest morning session focused particularly on partisan political speech related to elections (the “Citizens United” problem) and on commercial speech, including whether companies or commercial entities are separate persons.  One concept that stuck out was that listeners or receivers of messages are entitled to First Amendment protections. I wonder how that concept would play out given more recent reports of Russian attempts not only to influence the 2016 elections but also to spur social instability and resentment in American society, based particularly on the idea of relative collective deprivation (which is not the same idea as “systematic oppression”).  There are understandable concerns over wanting to regulate paid political ads (especially if supplied by foreign agents), but we should remember back around 2005, when there were concerns, based on a particular court interpretation of the McCain-Feingold Campaign Finance Reform Act, that even free blogs (written without compensation and without ads) could be construed as a “political contribution” if they expressed political viewpoints.  The discussion of commercial speech recognized that advertisements sometimes express points of view going beyond immediate ad content, and that valuable speech, such as well-made studio Hollywood movies about major historical events, made in good faith, can express political viewpoints while being funded through the open securities markets available to publicly traded companies.  But one auxiliary idea not explicitly mentioned was something I encounter: that speech available to the public should pay its own way.

The second segment dealt with “religious liberty in the post-Obama era”.  Here we have the dubious idea that an employee of a business open to the public is engaging in religiously connected “speech” when she sells certain products or services to a person of a different faith, or to one who engages in certain intimate personal relationships now recognized by law (especially same-sex marriage).  One speaker in particular (Robin Fretwell Wilson) suggested that states should carve out laws that require public accommodations to serve all customers but allow individual employees (even in government agencies, as with Kim Davis in Kentucky) to turn the duties over to someone else.  While I would support such a solution, it can mean an unequal workplace (such as the case when some employees observe Sabbaths explicitly and others cover for them without getting any compensation in return, which I have done; an extreme extension of this idea is the “conscientious objector” problem with the past military draft).  It’s also true that sometimes “religious speech” can serve as a mask for personal moral ideas that are not really founded in recognized interpretations of scripture, for example, political aversion to working with inherited wealth.

The keynote speaker for the second-floor luncheon (well catered with deli sandwiches) was Eugene Volokh, of the UCLA School of Law and the Volokh Conspiracy blog.  Volokh gave a spirited presentation on how the Internet has accelerated the application of libel law (well before Donald Trump noticed), because the Internet allows speakers with no deep pockets and little formal publishing-law experience to be heard, and also because the “online reputation” damage from defamation, as propagated by search engines, is permanent, as opposed to newspaper defamation in the past.  Volokh made the interesting point that cases are sometimes settled with court injunctions that could prohibit a blogger from mentioning a particular person online again anywhere.  (That could matter to bloggers who review films or music performances, for example.) At 41:07 on this tape, I ask a question about Backpage and Section 230. Volokh’s answer was thorough and more reassuring than it might have been, as he indicated that a “knowingly” standard could be included in service-provider downstream liability exposures. (He also explained the distinctions among utility transmission, distribution, and publication.) He also got into the question of whether fake news could be libel.  Usually, because it largely involves politicians, in the U.S. it is not. But it might be when applied to celebrities and companies.

The afternoon session featured a presentation by Emily Ekins on the 2017 Free Speech National Survey. A number of startling conclusions were presented, showing partisan divides on what is viewed as hate speech, and also a lack of understanding that most hate speech is constitutionally protected. There is a tendency among many voters, and especially many college students, to view words as weapons, and to hold speakers morally accountable for the actions of the recipients of their speech, even when there is no direct incitement to rioting or lawless action. Many respondents showed a shocking dislike of journalists as “watchers” who don’t have their own skin in the game.  A majority seemed to take the pseudo-populist position that a heckler’s veto on speakers is morally OK; a substantial minority thought that government should heavily sponsor speech to protect special groups; and a disturbing minority accepted the idea that hate speech should sometimes be met with political violence.

The final session talked about censorship and surveillance.  The speakers included Flemming Rose (“The Tyranny of Silence” and the cartoon controversy).  Rose mentioned, in answer to an audience question, that in some countries speakers have been arrested for “qualification of terrorism” in public statements.  All the speakers noted a desire from the EU to force tech companies to export its rules to the US, especially the supposed “right to be forgotten”.  Danielle Keats Citron from the University of Maryland Law School mentioned the Section 230 controversy in an answer, as she talked about distinguishing “good Samaritans” from “bad Samaritans”.

At the reception afterward, a speaker from Cloudflare noted that Hollywood has been lobbying Congress heavily to force service providers to prescreen content, motivated by the Backpage controversy. Hollywood, he said, has been pressuring agents and Wilshire Blvd. law firms to join the effort. He mentioned the DMCA Safe Harbor, which embodies a similar downstream-liability concept but applies to copyright, not to libel or privacy.  The tone of his remarks suggested that this goes way beyond piracy; Hollywood does not like dealing with the low-cost competition of very independent film, which is much less capital-intensive and is taking up a much larger audience share than in the past.  Even Mark Cuban admitted that to me once in an email.  Cloudflare also said that the law, unchanged, would today handle sex trafficking the way it handles child pornography, with a “knowingly” standard, which seems adequate already.

All of this brings me back to what might not have been hit hard enough in the conference: the idea, as indicated in the title of my third book, of “a privilege of being listened to” (my 2005 essay), which sounds a little scary to consider and seems to lie beneath authoritarian control of speech.

I insist on managing my own speech, much of which is posted as “free content”.  I get pestered that I don’t sell more physical copies of my books than I do, and that I don’t try to be “popular” or manipulative in order to sell. (That helps other people have jobs, I guess.)   I get told that my own skin should be in the game.  I get sent into further deployments of the subjunctive mood (“could’a, should’a, would’a”), like in high school French class: I should have children, or special-needs dependents, or be in the trenches myself before I get heard from.  (This could affect how I handle the estate that I inherited, which can get to be a Milo-“Dangerous” topic.)   Content should pay its own way (which, ironically, might encourage porn).  Individual speakers weaken advocacy groups by competing with them and not participating.  Before I get heard from myself, I should join somebody else’s cause against “systematic oppression” and not be above walking and shouting in their demonstrations. I should run fundraisers for other people on my webpage. I should support other publications’ fundraisers, which claim (on both the right and left) to be my voice, as if I were incompetent to speak for myself.  Or as if that capacity will be taken away from me by force.  Even from the world of writers I get confrontational ideas: that “real writers” get hired to portray other people’s narratives rather than their own. (Okay, I might really have had a chance once to “ghost-write”, so to speak, one of the other “don’t ask don’t tell” soldiers’ stories.)

One of the most serious underreported controversies is indeed the idea that speakers should be held responsible for what their readers might do, particularly because “you” are the speaker and not someone else.  This is related to the notion of “implicit content” (Sept. 10). This concept was behind my own experience in October 2005 when working as a substitute teacher (see the July 19, 2016 pingback hyperlink).  It certainly comports with the idea that Section 230 should not exist, and that people should not speak out on their own until they have a lot of accountability to a peer group (family or not).  This is far from what the First Amendment says, but it seems to be what a lot of people have been brought up to believe in their own home and community environments. It goes along with ideas of personal right-sizing, fitting in to the group, and a certain truce on social justice.  In the past two or three decades (compared to when I was in high school and college), there has been a weakened presentation of the First Amendment (and the Bill of Rights in general) in the way it is taught in high schools and to undergraduates.  I could even say, based on my own substitute-teaching experience from 2004-2007, that even public school staff (including administration) are poorly informed on the actual law today, so you would not expect students to be getting the proper learning on these matters.

Individuals have natural rights, just as individuals;  but people don’t have to belong to oppressed groups or claim “relative deprivation” to claim their natural rights.

(Posted: Tuesday, October 3, 2017 at 12 noon)

“Implicit content” may become the next big Internet law controversy; more on Backpage and Section 230

It is important to pause for a moment and take stock of another possible idea that can threaten freedom of speech and self-publication on the Internet without gatekeepers as we know it now, and that would be “implicit content”.

This concept refers to a situation where an online speaker publishes content when he can reasonably anticipate that some other party, whom the speaker knows to be combative, un-intact, or immature (especially a legal minor), will in turn act harmfully toward others, possibly toward specific targets, or toward the self. The concept views the identity of the speaker and the presumed motive for the speech as part of the content, almost as if borrowed from object-oriented programming.

The most common relatively well-known example so far occurs when one person deliberately encourages others using social media (especially Facebook, Twitter, or Instagram) to target and harass a particular user of that platform.  Twitter especially has sometimes suspended or permanently closed accounts for this behavior, and specifically spells it out as a TOS violation. Another variation comes from a recent case in which a young woman encouraged a depressed boyfriend by smartphone text to commit suicide and was convicted of manslaughter, so this behavior can be criminal.  The concept complicates the normal interpretation that free speech limitations stop where there is direct incitement of unlawful activity (like rioting).

I would be concerned however that even some speech that is normally seen as policy debate could fall under this category when conducted by “amateurs” because of the asymmetry of the Internet with the way search engines can magnify anyone’s content and make it viral or famous.  This can happen with certain content that offends others of certain groups, especially religious (radical Islam), racial, or sometimes ideological (as possibly with extreme forms of Communism).  In extreme cases, this sort of situation could cause a major (asymmetric) national security risk.

A variation of this problem occurred with me when I worked as a substitute teacher in 2005 (see the pingback hyperlink here on July 19, 2016).  There are a couple of important features of this problem.  One is that it is really more likely to occur with conventional websites with ample text content, indexed by search engines in the normal way (even allowing for all the algorithms), than with social media accounts, whose internal content is usually not indexed much and can be partially hidden by privacy settings or “whitelisting”.  That would have been true pre-social-media with, for example, discussion forums (like those on AOL in the late 1990s). Another feature is that it may be more likely with a site that is viewed free, without login or subscription. One problem is that such content might be viewed as legally problematic if it wasn’t paid for (ironically) but had been posted only for “provocateur” purposes, invoking possible “mens rea”.

I could suggest another example of what might seem to others like “gratuitous publication”.  I have often posted video and photos of demonstrations, from BLM marches to Trump protests, as “news”.  Suppose I posted a segment from an “alt-right” march, from a specific group that I won’t name.  Such a march may happen in Washington DC next weekend (following up Charlottesville).  I could say that it is simply citizen journalism, reporting what I see.  Others would say I’m giving specific hate groups a platform, which is where TOS problems could arise. Of course I could show counter-demonstrations from the other “side”. I don’t accept the idea that, among groups that use coercion or force, one is somehow more acceptable to present than another (Trump’s problem, again).  But you can see the slippery slope.

When harm comes to others after “provocative” content is posted, the hosting sites or services would normally be protected by Section 230 in the U.S. (I presume).  However, it sounds like there have been some cases where litigation has been attempted.  Furthermore, we know that very recently large Internet service platforms have cut off at least one (maybe more) website associated with extreme hate speech or neo-Nazism. Service platforms, despite their understandable insistence that they need the downstream liability protections of Section 230, have become more proactive in trying to eliminate users publishing what they consider objectionable (often illegal) material.  This includes, of course, child pornography and probably sex trafficking and terrorist recruiting, but it also could include causing other parties to be harassed, and could gradually expand to subsume novel national security threats. It now seems to include “hate speech” as well, which I personally think ought to be construed as “combativeness” or lawlessness.  But that brings us to another point:  some extreme groups would consider amateur policy discussions that take a neutral tone and try to avoid taking sides (that is, avoid naming some groups as enemies instead of others, as with Trump’s problems after Charlottesville) as implicitly “hateful” by default when the speaker doesn’t put his own skin in the game.   This (as Cloudflare’s CEO pointed out) could put Internet companies in a serious ethical bind.

Timothy B. Lee recently published in Ars Technica an update on the “Backpage” bills in Congress, which would weaken Section 230 protections. Lee does seem to imply that the providers most at risk remain those whose main content is advertisements rather than discussions; so far he hasn’t addressed whether shared hosting providers could be put at risk.  (I asked him that on Twitter.)  But some observers believe that the bills could lead states to require that sites with user logons provide adult ID verification.  We all know that this was litigated before with the Child Online Protection Act (COPA), which was finally ruled unconstitutional in early 2007.  I was a party to that litigation under Electronic Frontier Foundation sponsorship. Ironically, the judge mentioned “implicit content” the day I sat in on the arguments (in Philadelphia).

I wanted to add a comment here that probably could belong on either of my two previous posts.  That is, yes, our whole civilization has become very dependent on technology, and, yes, a determined enemy could give us a very rude shock.  Born in 1943, I have lived through years that have generally been stable, surviving the two most serious crises (the Vietnam military draft in the 1960s and then HIV in the 1980s) that came from the outside world.  A sudden shock like that in NBC’s “Revolution” is possible.  But I could imagine being born around 1765, living as a white landowner in the South, having experienced the American Revolution and then the Constitution as a teen, and only gradually coming to grips with the idea that my world would be expropriated from me, because of an underlying common moral evil, before I died (if I was genetically lucky enough to live to 100 without modern medicine). Yet I would have had no grasp of the idea of a technological future that itself could be put at risk because, for all its benefits in raising living standards, it still seemed to leave a lot of people behind.

(Posted: Saturday, September 9, 2017 at 9 PM EDT)

Cloudflare’s action against neo-Nazi site complicates debate about service provider responsibilities and capabilities

The responsibility and capability of large private companies to decide what stays on the Internet or can be accessed by ordinary users seems to be coming into focus as a real controversy.

Just recently (Aug. 4), I discussed how recent well-motivated bills in Congress aimed at inhibiting sex trafficking (usually of underage girls) could jeopardize much of the downstream liability exclusion (Section 230) that allows user-generated content to be posted on the Web (and that allows individuals to express themselves through social media, blogs, and their own shared-hosted websites) without expensive and bureaucratic third-party gatekeepers. This is tied to an undertone, not often argued openly, of controversy over whether “amateur” web content needs to be able to pay its own way. That latter-day proposition becomes dubious at the outset when you consider the observation made recently on CNN’s series “The 90s” that the first businesses to make money with websites were pornographers, who were even the first content sources to set up credit card use and merchant accounts online.

But judging from the tech community’s quick reaction of offense to the extreme-right-wing march in Charlottesville, which led to the tragic death of a peaceful counter-protester at the hands of a right-wing domestic terrorist who showed up, companies do know a lot about what is getting posted. Matthew Prince of Cloudflare wrote a disturbing op-ed in the Wall Street Journal about his second thoughts after pulling the plug on the Daily Stormer. Prince, while admitting that no service provider can possibly screen every user-generated item on its site, implies that providers do have a great deal of knowledge of what is going on and can censor offensive content (like racism) if they think they have to. Prince also makes the hyperbolic and alarming statement that almost any site with even mildly controversial content will eventually get hacked (or perhaps draw a SLAPP suit). Yet Prince’s own article would qualify the WSJ as such a site.

Prince argues that there needs to be some sort of international “due process” body for decisions about kicking sites or content off; it’s easy to imagine how a group like the Electronic Frontier Foundation would react. In fact, I see that Jeremy Malcolm, Cindy Cohn, and Danny O’Brien have a thorough discussion of the private “due process” issue and all its possible components here. Particularly important is that people understand the domain name system as standing apart from content hosting. EFF also points out that relaxing net neutrality rules could allow telecom companies to refuse connection to content that they see as politically subversive.

Indeed, there are many ways for content to be objectionable. Donald Trump, in a teleprompted speech to veterans in Reno today, mentioned the need to stop terror recruiting on the Internet. (Is this just ISIS, or would it include neo-Nazis and “anarchists”?) Twitter’s controversy over this is well known, and we should not forget that much of this process happens offshore with encrypted messaging apps, not just websites and social media. Other problems include cyberbullying (including revenge porn), fake news (and the way social media platforms can manipulate it; again, a sign that providers do know what they are doing sometimes), and possibly asymmetrically triggering foreign national security threats (hint: the Sony Pictures hack, as well as attracting steganography). “Free speech” may indeed become a very subjective concept.

(Posted: Wednesday, Aug. 23, 2017 at 7 PM EDT)

Will user-generated public content be around forever? The sex-trafficking issue and Section 230 are just the latest problem

It used to be very difficult to “get published”.  Generally, a third party would have to be convinced that consumers would really pay to buy the content you had produced.  For most people that usually consisted of periodical articles and sometimes books.  It was a long-shot to make a living as a best-selling author, as there was only “room at the top” for so many celebrities.  Subsidy “vanity” book publishing was possible, but usually ridiculously expensive with older technologies.

That started to change particularly in the mid 1990s as desktop publishing became cheaper, as did book manufacturing, to be followed soon by POD, print on demand, by about 2000.  I certainly took advantage of these developments with my first “Do Ask Do Tell” book in 1997.

Furthermore, by the late 1990s, it had become very cheap to have one’s own domain and put up writings for the rest of the world to find with web browsers.  And the way search engine technology worked by say 1998, amateur sites with detailed and original content had a good chance of being found passively and attracting a wide audience.  In addition to owned domains, some platforms, such as Hometown AOL at first, made it very easy to FTP content for unlimited distribution.  At the same time, Amazon and other online mass retail sites made it convenient for consumers to find self-published books, music, and other content.

Social media, first with Myspace and later with the much more successful Facebook, was at first predicated on the idea of sharing content with a known, whitelisted audience of “friends” or “followers”.  In some cases (Snapchat), there was an implicit understanding that the content was not permanent. But over time, many social media platforms (most of all Facebook, Twitter, and Instagram) were often used to publish brief commentaries and links to provocative news stories on the Web, as well as videos and images of personal experiences.  Sometimes these could even be streamed live.  Even though friends and followers were most likely to see them (curated by feed algorithms somewhat based on popularity, in the case of Facebook), many posts were public for all to see.  Therefore, an introverted person like me, who does not like “social combat” or hierarchy and does not like to be someone else’s voice (or to need someone else’s voice), could become effective in influencing debate.   It’s also important that modern social media were supplemented by blogging platforms, like Blogger, WordPress, and Tumblr, which, although they did use the concept of “follower”, were more obviously intended for general public availability. The same was usually true of a lot of video content on YouTube and Vimeo.

The overall climate regarding self-distribution of one’s own speech to a possibly worldwide audience seemed permissive, in western countries and especially the U.S.   In authoritarian countries, political leaders would resist.  It might seem like an admission of weakness that an amateur journalist could threaten a regime, but we saw what happened, for example, with the Arab Spring.  A permissive environment regarding distribution of speech seemed to undercut the hierarchy and social command that some politicians claimed they needed to protect “their own people.”

Gradually, challenges to self-distribution evolved.   There was an obvious concern that children could find legitimate (often sexually oriented) content aimed at cognitive adults.  The first big problem was the Communications Decency Act of 1996.  The censorship portion would be overturned by the Supreme Court in 1997 (I attended the oral arguments).  Censorship would be attempted again with the Child Online Protection Act, or COPA, for which I was a sub-litigant under the Electronic Frontier Foundation.  It would be overturned in 2007 after a complicated legal battle that reached the Supreme Court twice.  But the 1996 Communications Decency Act (part of the broader Telecommunications Act) also contained a desirable provision: that service providers (ranging from blogging or video-sharing platforms to telecommunications companies and shared hosting companies) would be shielded from downstream liability for user content for most legal problems (especially defamation). That is because it is not possible for a hosting company or service platform to prescreen every posting for possible legal problems (which is what book publishers do, and yet they still require author indemnification!).  Web hosting and service companies were required to report child pornography known to them (as reported by users) and sometimes terrorism promotion.

At the same time, in the copyright infringement area, a similar provision developed: the Safe Harbor provision of the Digital Millennium Copyright Act of 1998, which shielded service providers from secondary liability for copyright infringement as long as they took down offending content when notified by copyright owners.  Various threats to the mechanism have developed, most of all SOPA, which was shot down by user protests in early 2012 (Aaron Swartz was a major and tragic figure).

The erosion of downstream liability protections would logically become the biggest threat to whether companies can continue to offer users the ability to put up free content without gatekeepers and participate in political and social discussions on their own, without proxies to speak for them, and without throwing money at lobbyists.  (Donald Trump told supporters in 2016, “I am your voice!”  Indeed.  Well, I don’t need one as long as I have Safe Harbor and Section 230.)

So recently we have seen bills introduced in the House (the Allow States and Victims to Fight Online Sex Trafficking Act) in April (my post), and in the Senate (SESTA, the Stop Enabling Sex Traffickers Act) on Aug. 1 (my post). These bills, supporters say, are aimed specifically at sex-advertising sites, most of all Backpage.  Under current law, plaintiffs (young women or their parents) have lost suits because Backpage can claim immunity under Section 230.  There have been other controversies over the way some platforms use Section 230, especially Airbnb.  The companies maintain that they are not liable for what their users do.

Taken rather literally, the bills (especially the House bill) might be construed as meaning that any blogging platform or hosting provider runs a liability risk if a user posts a sex-trafficking ad or promotion on the user’s site.  There would be no reasonable way Google or Blue Host or Godaddy or any similar party could anticipate that a particular user will do this.  Maybe some automated tools could be developed, but generally most hosting companies depend on users to report illegal content.  (It is possible to screen images against hash databases of known child pornography, and to screen some videos and music files for possible copyright infringement, and Google and other companies do some of this.)
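As a sketch of what that kind of automated image screening looks like at its simplest: hash each upload and look the digest up in a set of known-bad hashes. This toy Python example uses an exact SHA-256 match, with the empty byte string standing in for a “known” item purely so the example is self-contained; production systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, so this sketch understates the real problem.

```python
import hashlib

# Toy stand-in for a database of hashes of known illegal images.
# The single entry is the SHA-256 of the empty byte string, used here
# only so the example is self-contained and testable.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def image_matches_known_content(image_bytes: bytes) -> bool:
    """Exact-match screening: hash the upload and look up the digest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(image_matches_known_content(b""))       # True (the toy "known" item)
print(image_matches_known_content(b"photo"))  # False
```

The key point for the policy argument: this only works for content that has already been identified and catalogued, which is exactly why a “knowing” standard fits what providers can technically do, while a general pre-screening mandate does not.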

Rob Portman, a sponsor of the Senate bill, told CNN and other reporters that normal service and hosting companies are not affected, only sites that know they host sex ads.  So he thinks he can target sites like Backpage, as if they were different.  In a sense, they are:  Backpage is a personal commerce-facilitation site, not a hosting company or hosting service (which by definition has almost no predictive knowledge of the subject matter any particular user is likely to post, of whether that content may include advertising, or of whether it may execute potential commercial transactions, although the use of “https everywhere” could become relevant).  Maybe the language of the bills could be tweaked to make this clearer. It is true that some services, especially Facebook, have become proactive in removing or hiding content that flagrantly violates community norms, like hate speech (and that itself gets controversial).

Eric Goldman, a law professor at Santa Clara University, offered analysis suggesting that states might be emboldened to pass laws requiring pre-screening of everything, for other problems like fake news; the Senate bill particularly seems to encourage states to pass their own add-on laws.  It’s not possible for an ISP to know before the fact whether any one of the millions of postings made by customers contains sex trafficking, but a forum moderator or a blogger monitoring comments probably could.  Offhand, it would seem that allowing a comment with unchecked links (which I often don’t navigate because of malware fears) could run legal risks (if a link led, under the table, to a trafficking site).  Again, a major issue should be whether the facilitator “knows”.  Backpage is much more likely to “know” than a hosting provider.  A smaller forum host might “know” (but Reddit would not).
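For the comment-moderation case, a blogger could at least screen embedded links against a domain blocklist before approving a comment. A hypothetical sketch follows; the domain names and the function are invented for illustration, and a real blocklist would come from a curated feed combined with human review.

```python
import re
from urllib.parse import urlparse

# Matches http/https URLs up to the next whitespace character.
URL_PATTERN = re.compile(r"https?://\S+")

def comment_needs_review(comment_text: str, blocked_domains: set) -> bool:
    """Flag a comment for human review if any link in it points at a
    blocked domain or a subdomain of one."""
    for url in URL_PATTERN.findall(comment_text):
        host = urlparse(url).hostname or ""
        if host in blocked_domains or any(
            host.endswith("." + d) for d in blocked_domains
        ):
            return True
    return False
```

This only flags comments for a human to inspect; automatically rejecting them would raise exactly the over-blocking concerns discussed above.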

From a moral perspective, we have something like the middle school problem of detention for everybody for the sins of a few.  I won’t elaborate here on the moral dimensions of the idea that some of us don’t have our own skin in the game in raising kids or in having dependents, as I’ve covered that elsewhere.  But you can see that people will perceive a moral tradeoff, that user-generated content on the web, the way the “average Joe” uses it, has more nuisance value (with risk of cyberbullying, revenge porn, etc) than genuine value in debate, which tends to come from people like me with fewer immediate personal responsibilities for others.

So, is the world of user-generated content “in trouble”?  Maybe.  It may come down to a business-model problem.  It’s true that shared hosting providers charge annual fees for hosting domains, but they are fairly low (except for some security services).  Free content platforms (including Blogger, WordPress, YouTube, Facebook, and Twitter) do say “it’s free” now – they make their money on advertising connected to user content.   A world where people use ad blockers and “do not track” would seem grim for this business model.  Furthermore, a lot of people have “moral” objections to the model – saying that only authors should get the advertising revenue – but that would destroy the social media and UGC (user-generated content) world as we know it.  Consider the POD (print-on-demand) book publishing world. POD publishers actually do perform “content evaluation” for hate speech and legal problems, and collect hefty fees for initial publication.  But lately they have become more aggressive with authors about book sales, a sign that they wonder about their own sustainability.

There are other challenges for those whose “second careers”, like mine, are based on permissive UGC.  One is the weakening of network neutrality rules, as I have covered here before; the second comment period ends Aug. 17.  The telecom industry, through its trade association, has said there is no reason for ordinary websites to be treated any differently than they have been, but some observers fear that someday new websites could have to pay to be connected to certain providers (beyond what you pay for a domain name and hosting now).

There have also been some fears in the past that have vanished with time.  One flare-up started in 2004-2005, when some observers warned that political blogs could violate federal election laws by being construed as indirect “contributions”.   A more practically relevant problem is simply online reputation and the workplace, especially in a job where one has direct reports, underwriting authority, or the ability to affect a firm’s chances to get business through “partisanship”.  One point that often gets forgotten is that social media accounts can be set up with full privacy settings so that they are not searchable.  Although that doesn’t prevent all mishaps (just as handwritten memos or telephone calls can get you in trouble at work in the physical world), it could prevent certain kinds of workplace conflicts.  Public access to amateur content could also be a security concern: an otherwise obscure individual who becomes “famous” online could make others besides himself into targets.

Another personal flare-up occurred in 2001, when I tried to buy media-perils insurance and was turned down for renewal because of the lack of a third-party gatekeeper. The issue flared briefly into debate in 2008 but subsided.  Still, it’s conceivable that requirements could develop that sites (at least through associated businesses) pay for themselves and carry media liability insurance, as a way of accounting for the community-hygiene problem of potential bad actors.

All of this said, the biggest threat to online free expression could still turn out to be national security, as in some of my recent posts.  While the mainstream media have talked about hackers and cybersecurity (most of all with elections), physical security for the power grid and for digital data could become a much bigger problem than we thought if we attract nuclear or EMP attacks, either from asymmetric terrorism or from rogue states like North Korea.  Have tech companies really provided for the physical security of their clouds and data given a threat like this?

Note the petition and congressional content format suggested by the Electronic Frontier Foundation for bills like SESTA. It would be useful to know how British Commonwealth and European countries handle downstream liability, as a comparison point. It’s also important to remember that weakening a service provider’s statutory downstream liability protection does not automatically create that liability.

(Posted: Thursday, Aug. 3, 2017 at 10:30 PM EDT)

Families of San Bernardino terror attack victims sue Facebook, Twitter, Google over “propaganda” arguments that evade Section 230

Families of victims of the fall 2015 terror attack in San Bernardino, CA are suing the three biggest social media companies (those that allow unmonitored broadcast of content in public mode): Facebook, Twitter, and Google. Similar suits have been filed by victims of the Pulse attack in Orlando and the 2015 terror attacks in Paris.

Station WJLA in Washington DC, a subsidiary of the “conservative” (perhaps mildly so) Sinclair Broadcast Group in Baltimore, put up a news story Tuesday morning, including a Scribd PDF copy of the legal complaint in a federal court in central California, here. I find it interesting that Sinclair released this report, as it did last summer with stories about threats to the power grids, which WJLA and News Channel 8 in Washington announced but then gave very little local coverage (I had to hunt it down online to a station in Wisconsin).

Normally, Section 230 protects social media companies from downstream liability for the usual personal torts, especially libel, and the DMCA Safe Harbor protects them similarly from copyright liability if they remove infringing content when notified.

However, the complaint seems to suggest that the companies are spreading propaganda and share in the advertising revenue earned from the content, particularly in some cases from news aggregation aimed at user “Likenomics”.

Companies do have a legal responsibility to remove certain content when brought to their attention, including especially child pornography and probably sex trafficking, and probably clearcut criminal plans. They might have legal duties in wartime settings regarding espionage, and they conceivably could have legal obligations regarding classified information (which is what the legal debate over Wikileaks and Russian hacking deals with).

But “propaganda” by itself is ideology. Authoritarian politicians on both the right and left (Vladimir Putin) use the word a lot, because they rule over populations that are less individualistic in their life experience than ours, where critical thinking isn’t possible, and where people have to act together. The word, which we all learn about in high school civics and government social studies classes (and I write this post on a school day – and I used to sub), has always sounded dangerous to me.

But the propagation of ideology alone would probably be protected by the First Amendment, until it is accompanied by more specific criminal or military (war) plans. A possible complication could be the idea that terror ideology regards civilians as combatants.

Facebook recently announced it would add 3000 associates to screen for terror or hate content, mainly in conjunction with Facebook Live broadcasts of crimes or even suicides. I would probably be a good candidate for one of these positions, but I am so busy working for myself that I don’t have time (in “retirement”, which is rather like “in relief” in baseball).

Again, the Internet that we know with unfiltered user-generated content is not possible today if service companies have to pre-screen what gets published for possible legal problems. Section 230 will come under fire for other reasons soon (the Backpage scandal).

I have an earlier legacy post about Section 230 and Backpage here.

(Posted: Tuesday, May 9, 2017 at 1 PM EDT)

Facebook, and other social media companies and publishing platforms, come under more scrutiny as “attractive nuisances” for unstable people

The New York Times has a front page story about social media perils with a blunt headline, “Video of killing casts Facebook in a harsh light”.   (Maybe, in comparison to the tort manual, it’s a “false light”).  The story, by Mike Isaac and Christopher Mele, has a more expansive title online, “A murder on Facebook provokes outrage and questions over responsibility.”

This refers to a recent brazen random shooting of a senior citizen in Cleveland on Easter Sunday (on Facebook Live), but there have been a few other such incidents, including the gunning down of two reporters at a Virginia television station during a broadcast in 2015, after which the perpetrator committed suicide. Facebook Live has also been used to record shootings by police, however (as in Minnesota).

The Wall Street Journal has a similar story today by Deepa Seetharaman (“Murder forces scrutiny at Facebook”), and Variety includes a statement by Justin Osofsky, Facebook’s VP of global operations.  Really, is it reasonable to expect that AI or some other tool can detect violent activity being filmed, prospectively?

At the outset, it’s pretty easy to ask why the assailants in these cases had weapons.  Obviously, they should not have passed background checks – except that some may have had no previous records.

As the articles point out, sometimes the possibility of public spectacle plays into the hands of “religious enemies”, that is, lone-wolf actors motivated by radical Islam or other ideologies. But at a certain psychological level, religion is a secondary contributing factor.  Persons who commit such acts publicly (or covertly) have found that this world of modernism, abstraction, and personal responsibility makes no sense to them.  So ungated social media may, in rare cases, provoke a “15 minutes of fame” motive along with a “nothing to lose” attitude (and maybe a belief in martyrdom).  This syndrome seems very personal and usually goes beyond the portrayal of an authoritarian religious or political message.

It is easy, of course, to invoke a Cato-like statistical argument (which often applies to immigration).  In a nation of over 300 million people (or a world of billions), instant communication will rarely, but perhaps predictably with some very low probability, provoke such incidents.  You can make the same arguments about the mobility offered by driving cars.

Ungated user content offers new forms of journalism, personal expression and self-promotion, and new checks on political powers, but it comes with some risks, like fake news and crazy people seeking attention.

For me, the history is augmented by the observation that most of my own “self-promotion” came through search engines on flat sites, in the late 90s and early 00’s, before modern social media offered friending and news aggregation.  As with an incident when I was substitute teaching in late 2005, the possibility of search engine discovery carried its own risks, leading to the development of the notion of “online reputation.”

Still, the development of user-generated content, which did not have to pay its own freight the way old-fashioned print publications did in the pre-Internet days, when the bottom line controlled what could be published, is remarkable in the moral dilemmas it can create.

It’s ironic how social media allows us to experience being “alone together”, but makes up for it by encouraging individuals to ask for help online by crowdfunding their own needs – something I am usually hesitant to jump into.

This is a good place to mention a new intrusion into Section 230: a bill by Ann Wagner (R-MO), the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017”, partly a response to the Backpage controversy (congressional link here). No doubt this bill will prompt more discussion of the expectations for proactive screening by social media.

There’s an additional note: the perpetrator of the Cleveland incident ended his own life after police attempted to apprehend him (Cleveland Plain Dealer story).

(Posted: Tuesday, April 18, 2017 at 1 PM EDT)

Update: Thursday, April 27, 2017 at 10:45 AM EDT

There has been a major crime deliberately filmed on Facebook Live in Thailand, story here.

Facebook has announced plans to hire 3000 more people to screen complaints about inappropriate content.  These jobs probably often require bilingual skills.