CNN columnist compares user-generated content to conventional media and cautions amateurs about freedom of the press

Brian Stelter offers a very constructive op-ed on CNN today, “Whose Freedom Is It?”, part of the series “Free Press: What’s at Stake”.

Stelter takes the practical position (as have I) that many social media users and bloggers have become quasi-establishment journalists, supplementing the major media, and helping with “keeping them honest”, as Anderson Cooper often says.  So amateurs need to take fact-checking seriously.

This freedom may well be undermined by a number of concerns explored here recently: erosion of downstream liability protections for service providers (the Backpage/Section 230 problem); increasing legal exposure of “amateur” journalists for certain kinds of hyperlinks and embeds; the fake news scandals of the past year (really, the observation that “average joe” social media users tend to follow tribal crowds rather than read critically); and, particularly, the ease with which teens and young adults seem to be recruited into violence, which includes but is by no means limited to radical Islam and gang activity.  As I’ve noted here before, these kinds of concerns can make amateur journalism seem “gratuitous” (i.e., unnecessary and capable of being shut down), although Trump seems much more concerned about the establishment (Fourth Estate) press than the newbies (Fifth Estate).

But we have to take seriously the demands made on social media platforms and search engines to “pre-censor” user output.

Consider this article by Karl McDonald, “The Daily Mail Fundamentally Misunderstands What Google Is.”  Search engines in particular are having to deal with “the right to be forgotten” outside the US (as well as “digital laundry”).

Speakers on the Internet benefit in different ways from search engines, social media sites (some, like Facebook, create more opportunity for permanent “publication” than others, like Snapchat), and shared or dedicated third-party hosting for conventional or blog sites; these hosting providers also usually provide domain name registration. Users also benefit from security services like Cloudflare and SiteLock.  Generally, social media sites are taking more “responsibility” for certain kinds of damaging speech (hate speech, bullying, or terror recruiting) than are neutral site hosts.  However, after the Daily Stormer matter (post-Charlottesville), a few providers participated in kicking at least one neo-Nazi site off its domain registration.

The “Mediator” Jim Rutenberg wrote a piece, “Terrorism Is Faster than Twitter,” Nov. 5, in which he traces how the New York City bike-path terrorist Sayfullo Saipov followed terror recipes exactly, and tries to explain where he found them.  There are supporting details in a Nov. 2 story by Rukmini Callimachi.  There is reference to the magazine Rumiyah (related to Dabiq).  A web operation called the SITE Intelligence Group tries to trace how this material is distributed on the web.  Much of it moves to the Dark Web or P2P.  Generally, it appears that material from these groups disappears quickly from better-known social media and from conventionally hosted sites and moves around a lot on offshore providers.  There are articles on the Internet Archive (the “Wayback Machine”) which require a specific logon (rather uncommon for less controversial material).  In general, it does not appear that the sort of material that the Boston Marathon or other domestic “lone wolf” or small-cell terrorists tried to use came from the more conventionally accessed and indexed parts of the Web.  Most of it seems pretty underground (after initial recruitment), moving through various encrypted apps.  We’re left to ponder what is making some of these young men (and sometimes women) tick, and have to face that modern civilization, with its individualized hypercompetitiveness, seems to offer them only failure and shame.

(Posted: Sunday, November 12, 2017 at 6:45 PM EST)

 

Cloudflare’s action against neo-Nazi site complicates debate about service provider responsibilities and capabilities

The responsibility and capability of large private companies to decide what stays on the Internet or can be accessed by ordinary users seems to be coming into focus as a real controversy.

Just recently (Aug. 4), I discussed how recent well-motivated bills in Congress aimed at inhibiting sex trafficking (usually of underage girls) could jeopardize much of the downstream liability exclusion (Section 230) that allows user-generated content to be posted on the Web (and that allows individuals to express themselves on their own through social media, blogs, and their own share-hosted websites) without expensive and bureaucratic third-party gatekeepers. This ties into an undertone, not often argued openly, of controversy over whether “amateur” web content needs to be able to pay its own way. That latter-day proposition becomes dubious at the outset when you consider the observation made recently on CNN’s series “The 90s” that the first businesses to make money with web sites were pornographers, who were even the first content sources to set up credit card use and merchant accounts online.

But judging from the tech community’s quick reaction of offense to the extreme right-wing march in Charlottesville, which led to the tragic death of a peaceful counter-protester at the hands of a right-wing domestic terrorist who showed up, companies do know a lot about what is getting posted. Matthew Prince of Cloudflare wrote a disturbing op-ed in the Wall Street Journal about his second thoughts after pulling the plug on the Daily Stormer. Prince, while admitting that no service provider can possibly screen every user-generated item on its site, implies that providers do have a great deal of knowledge of what is going on and can censor offensive content (like racism) if they think they have to. Prince also makes the hyperbolic and alarming statement that almost any site with even mildly controversial content will eventually get hacked (or perhaps draw a SLAPP suit). Yet Prince’s own article would qualify the WSJ as such a site.

Prince argues that there needs to be some sort of international “due process” body for decisions about kicking sites or content off; it’s easy to imagine how a group like the Electronic Frontier Foundation will react. In fact, I see that Jeremy Malcolm, Cindy Cohn and Danny O’Brien have a thorough discussion of the private “due process” issue and all its possible components here. Particularly important is that people understand the domain name system as standing apart from content hosting. EFF also points out that relaxing net neutrality rules could allow telecom companies to refuse connection to content that they see as politically subversive.

Indeed, there are many ways for content to be objectionable. Donald Trump, in a teleprompted speech to veterans from Reno today, mentioned the need to stop terror recruiting on the Internet. (Is this just ISIS, or would it include neo-Nazis and “anarchists”?) Twitter’s controversy over this is well known, and we should not forget that most of this process happens offshore with encrypted messaging apps, not just websites and social media. Other problems include cyberbullying (including revenge porn), fake news (and the way social media platforms can manipulate it – again a sign that providers do know what they are doing sometimes), and possibly asymmetrically triggering foreign national security threats (hint: the Sony Pictures hack, as well as attracting steganography). “Free speech” may indeed become a very subjective concept.

(Posted: Wednesday, Aug. 23, 2017 at 7 PM EDT)

Activism, watcherism, and subtle vigilantism: those just outside the “systematic oppression” zones

CNN has run an op-ed by John Blake, “White Supremacists by Default: How ordinary people made Charlottesville possible.”

Yes, to some extent, this piece is an “I am my brother’s keeper” viewpoint familiar from Sunday School. But at another level the piece has major moral implications regarding the everyday personal choices we make, and particularly the way we speak out or remain silent.

I grew up in a way in which I did not become conscious of class or race or belonging to a tribe, or people. I was not exposed to the idea of “systematic oppression” against people who belong to some recognizable group. My self-concept was pretty separated from group identity.

I gradually became aware that I would grow up “different” especially with respect to sexuality. But I believed it was incumbent on me to learn to perform in a manner commensurate with my gender, because the welfare of others in the family or community or country could depend on that capacity. My sense of inferiority was driven first by lack of that performance, which then morphed into other ideas about appearance and what makes a male (or then female) look desirable.

I remember, back in the mid-1990s, about the time I was starting to work on my first DADT book, an African-American co-worker (another mainframe computer programmer) where I worked in northern Virginia said that he was teaching his young son to grow up to deal with discrimination. Another African-American coworker, who had attended West Point, said I had no idea what real discrimination was like, because I could just pass. (That person thought I lived “at home” with my mother since I was never married.)  I would subsequently be a witness in litigation brought by a former black employee whom I had replaced through an internal transfer, and the “libertarianism” in my own deposition seemed to be noticed by the judge who dismissed the case.

Indeed, the activism in the gay community always had to deal with the “conduct” vs. “group identity” problem, particularly during the AIDS crisis of the 1990s. Libertarians and moderate conservatives like me (I didn’t formally belong to the Log Cabin Republicans but tended to like a lot of things about Reagan and personally fared well when he was in office) were focused on privacy (in the day when double lives were common) and personal responsibility, whereas more radical activists saw systematic oppression as tied to a definable gender-related class. Since I was well within the upper middle class and earned a good income with few debts and could pay my bills, both conservatives with large families and radical activists born out of disadvantage saw me as a problem.

The more radical commentators today insist that White Nationalists have an agenda of re-imposing or augmenting systematic oppression by race, even to the ultimate end of overthrowing normal civil liberties, reintroducing racial subjugation and other forms of authoritarian order. The groups on the extreme right are enemies (of people of color) as much as radical Islam has made itself an enemy of all civilization. Radicals insist that those who normally want to maintain some objectivity and personal distance must be recruited to fight actively with them to eliminate this one specific enemy.  This could lead to vigilantism (especially online) against those who speak out on their own but will not join in with them. I do get the idea of systemic oppression, but I think that meeting it has a lot more to do with the integrity of individual conduct. But this goes quite deep. Refusing to date a member of a different race could be viewed as active racism (June 26).

The possibility of including ordinary independent speakers or observers (or videographers) among the complicit indirect systematic “oppressors” should not be overlooked. Look at the comments and self-criticism of Cloudflare CEO Matthew Prince about the dangers of new forms of pro-active censorship by Internet companies. This does bear on the Backpage/Section 230 problem, and we’ll come back to it again. In a world with so many bizarre asymmetric threats, I can imagine that Internet companies could expand the list of speech content that they believe they cannot risk allowing to stay up (hint: Sony).

I want to add, I do get the idea that many left-wing activists (not just limited to Antifa) believe that Trump was elected in large part by white supremacists and that there is a more specific danger to everyone else in what he owes this part of his base. I have not taken this idea very seriously before, but now I am starting to wonder.

(Posted: Saturday, August 19, 2017 at 6:15 PM EDT)

Will user-generated public content be around forever? The sex-trafficking issue and Section 230 are just the latest problem

It used to be very difficult to “get published.”  Generally, a third party would have to be convinced that consumers would really pay for the content you had produced.  For most people that usually meant periodical articles and sometimes books.  It was a long shot to make a living as a best-selling author, as there was only “room at the top” for so many celebrities.  Subsidy “vanity” book publishing was possible, but usually ridiculously expensive with older technologies.

That started to change particularly in the mid 1990s as desktop publishing became cheaper, as did book manufacturing, to be followed soon by POD, print on demand, by about 2000.  I certainly took advantage of these developments with my first “Do Ask Do Tell” book in 1997.

Furthermore, by the late 1990s, it had become very cheap to have one’s own domain and put up writings for the rest of the world to find with web browsers.  And the way search engine technology worked by say 1998, amateur sites with detailed and original content had a good chance of being found passively and attracting a wide audience.  In addition to owned domains, some platforms, such as Hometown AOL at first, made it very easy to FTP content for unlimited distribution.  At the same time, Amazon and other online mass retail sites made it convenient for consumers to find self-published books, music, and other content.

Social media, first with Myspace and later with the much more successful Facebook, was at first predicated on the idea of sharing content with a known, whitelisted audience of “friends” or “followers.”  In some cases (Snapchat), there was an implicit understanding that the content was not to be permanent. But over time, many social media platforms (most of all Facebook, Twitter, and Instagram) were often used to publish brief commentaries and links to provocative news stories on the Web, as well as videos and images of personal experiences.  Sometimes they could be streamed live.  Even though friends and followers were most likely to see these posts (curated by feed algorithms somewhat based on popularity, in the case of Facebook), many of them were public for all to see.  Therefore, an introverted person like me, who does not like “social combat” or hierarchy and does not like to be someone else’s voice (or to need someone else’s voice), could become effective in influencing debate.   It’s also important that modern social media were supplemented by blogging platforms, like Blogger, WordPress and Tumblr, which, although they did use the concept of “follower,” were more obviously intended for general public availability. The same was usually true of a lot of video content on YouTube and Vimeo.

The overall climate regarding self-distribution of one’s own speech to a possibly worldwide audience seemed permissive, in western countries and especially the U.S.   In authoritarian countries, political leaders would resist.  It might seem like an admission of weakness that an amateur journalist could threaten a regime, but we saw what happened, for example, with the Arab Spring.  A permissive environment regarding distribution of speech seemed to undercut the hierarchy and social command that some politicians claimed they needed to protect “their own people.”

Gradually, challenges to self-distribution evolved.   There was an obvious concern that children could find legitimate (often sexually oriented) content intended for adults.  The first big problem was the Communications Decency Act of 1996.  The censorship portion of this would be overturned by the Supreme Court in 1997 (I had attended the oral arguments).  Censorship would be attempted again with the Child Online Protection Act, or COPA, for which I was a sub-litigant under the Electronic Frontier Foundation.  It would be overturned in 2007 after a complicated legal battle that reached the Supreme Court twice.  But the 1996 Communications Decency Act, more properly known as part of the Telecommunications Act, also contained a desirable provision: that service providers (ranging from blogging or video-sharing platforms to telecommunications companies and shared hosting companies) would be shielded from downstream liability for user content for most legal problems (especially defamation). That is because it was not possible for a hosting company or service platform to prescreen every posting for possible legal problems (which is what book publishers do, and yet they still require author indemnification!).  Web hosting and service companies were required to report known child pornography (as reported by users) and sometimes terrorism promotion.

At the same time, in the copyright infringement area, a similar provision developed: the Safe Harbor provision of the Digital Millennium Copyright Act of 1998, which shielded service providers from secondary liability for copyright infringement as long as they took down offending content when notified by copyright owners.  Various threats to this mechanism have developed, most of all SOPA, which got shot down by user protests in early 2012 (Aaron Swartz was a major and tragic figure).

The erosion of downstream liability protections would logically become the biggest threat to whether companies can continue to offer users the ability to put up free content without gatekeepers and participate in political and social discussions on their own, without proxies to speak for them, and without throwing money at lobbyists.  (Donald Trump told supporters in 2016, “I am your voice!”  Indeed.  Well, I don’t need one as long as I have Safe Harbor and Section 230.)

So recently we have seen bills introduced in the House (ASVFOSTA, the “Allow States and Victims to Fight Online Sex Trafficking Act”) in April (my post), and in the Senate (SESTA, the “Stop Enabling Sex Traffickers Act”) on Aug. 1 (my post). These bills, supporters say, are specifically aimed at sex advertising sites, most of all Backpage.  Under current law, plaintiffs (young women or their parents) have lost suits because Backpage can claim immunity under Section 230.  There have been other controversies over the way some platforms use Section 230, especially Airbnb.  The companies maintain that they are not liable for what their users do.

Taken rather literally, the bills (especially the House bill) might be construed as meaning that any blogging platform or hosting provider runs a liability risk if a user posts a sex-trafficking ad or promotion on the user’s site.  There would be no reasonable way Google or Blue Host or Godaddy or any similar party could anticipate that a particular user will do this.  Maybe some automated tools could be developed, but generally most hosting companies depend on users to report illegal content.  (It is possible to screen images against watermarks of known child pornography, and to screen some videos and music files for possible copyright infringement, and Google and other companies do some of this; a sketch of that kind of matching follows.)
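To make the image-screening idea concrete, here is a minimal sketch in Python of hash-based matching against a database of known-bad image fingerprints.  The hash values and file path are hypothetical, and production systems (Microsoft’s PhotoDNA, for instance) use robust perceptual hashes that survive resizing and re-encoding rather than exact cryptographic digests, so treat this only as an illustration of the matching step.

```python
import hashlib
from pathlib import Path

# Hypothetical fingerprints of known illegal images, as a clearinghouse
# might distribute to platforms; real systems use perceptual hashes,
# not exact SHA-256 digests like these placeholders.
KNOWN_BAD_HASHES = {
    "9f2d0a4b...",  # placeholder entry
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so memory use stays bounded."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def screen_upload(path: Path) -> bool:
    """Return True if the upload matches a known-bad fingerprint."""
    return sha256_of(path) in KNOWN_BAD_HASHES

upload = Path("incoming/upload.jpg")  # hypothetical upload location
if upload.exists() and screen_upload(upload):
    print("flag for review and mandatory reporting")
```

The point of the sketch is that matching against a known database is mechanical; the hard part, which no host can do reliably, is recognizing brand-new illegal content with no fingerprint on file.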

Rob Portman, a sponsor of the Senate bill, told CNN and other reporters that normal service and hosting companies are not affected, only sites that know they host sex ads.  So he thinks he can target sites like Backpage, as if they were different.  In a sense, they are:  Backpage is a personal commerce-facilitation site, not a hosting company or hosting service (which by definition has almost no predictive knowledge of what subject matter any particular user is likely to post, or whether that content may include advertising or execute potential commercial transactions, although the use of “https everywhere” could become relevant).  Maybe the language of the bills could be tweaked to make this clearer. It is true that some services, especially Facebook, have become pro-active in removing or hiding content that flagrantly violates community norms, like hate speech (and that itself gets controversial).

Eric Goldman, a law professor at Santa Clara, offered analysis suggesting that states might be emboldened to try to pass laws requiring pre-screening of everything, for other problems like fake news.  The Senate bill particularly seems to encourage states to pass their own add-on laws, and they could try to require pre-screening.  It’s not possible for an ISP to know whether any one of the millions of postings made by customers contains sex trafficking before the fact, but a forum moderator or blogger monitoring comments probably could, as in the sketch below.  Offhand, it would seem that allowing a comment with unchecked links (which I often don’t navigate because of malware fears) could run legal risks (if the link led to a trafficking site under the table).  Again, a major issue should be whether the facilitator “knows.”  Backpage is much more likely to “know” than a hosting provider.  A smaller forum host might “know” (but Reddit would not).
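Here is a minimal sketch, in Python, of the kind of link check a blogger or small forum host could apply to submitted comments: extract any URLs, and hold the comment for human review if a linked domain appears on a blocklist.  The blocklist entries are hypothetical; a real moderator would more likely subscribe to a maintained list than hard-code one.

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist a moderator might maintain or subscribe to.
BLOCKED_DOMAINS = {"known-bad.example", "trafficking-ads.example"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def linked_domains(comment: str) -> set[str]:
    """Pull the hostnames out of any URLs embedded in a comment."""
    return {urlparse(u).hostname or "" for u in URL_PATTERN.findall(comment)}

def hold_for_review(comment: str) -> bool:
    """True if any linked domain is on the blocklist."""
    return bool(linked_domains(comment) & BLOCKED_DOMAINS)

sample = "Great post! More photos at https://known-bad.example/listing"
if hold_for_review(sample):
    print("comment held for moderator review")
```

A single moderator can run a check like this over dozens of comments a day; a platform hosting millions of posts an hour cannot adjudicate the results at that scale, which is the asymmetry the bills gloss over.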

From a moral perspective, we have something like the middle school problem of detention for everybody for the sins of a few.  I won’t elaborate here on the moral dimensions of the idea that some of us don’t have our own skin in the game in raising kids or in having dependents, as I’ve covered that elsewhere.  But you can see that people will perceive a moral tradeoff, that user-generated content on the web, the way the “average Joe” uses it, has more nuisance value (with risk of cyberbullying, revenge porn, etc) than genuine value in debate, which tends to come from people like me with fewer immediate personal responsibilities for others.

So, is the world of user-generated content “in trouble”?  Maybe.  It could come down to a business model problem.  It’s true that shared hosting providers charge annual fees for hosting domains, but they are fairly low (except for some security services).  But free content service platforms (including Blogger, WordPress, YouTube, Facebook and Twitter) do say “It’s free” now – they make their money on advertising connected to user content.   A world where people use ad blockers and “do not track” would seem grim for this business model in the future.  Furthermore, a lot of people have “moral” objections to this model – saying that only authors should get the advertising revenue – but that would destroy the social media and UGC (user-generated content) world as we know it.  Consider the POD book publishing world. POD publishers actually do perform “content evaluation” for hate speech and legal problems, and do collect hefty fees for initial publication.  But lately they have become more aggressive with authors about book sales, a sign that they wonder about their own sustainability.

There are other challenges for those whose “second careers,” like mine, are based on permissive UGC.  One is the weakening of network neutrality rules, as I have covered here before.  The second comment period ends Aug. 17.  The telecom industry, through its association, has said there is no reason for ordinary web sites to be treated any differently than they have been, but some observers fear that some day new websites could have to pay to be connected to certain providers (beyond what you pay for a domain name and hosting now).

There have also been some fears in the past which have vanished with time.  One flare-up started in 2004-2005, when some observers suggested that political blogs could violate federal election laws by being construed as indirect “contributions.”   A more practically relevant problem is simply online reputation and the workplace, especially in a job where one has direct reports, underwriting authority, or the ability to affect a firm’s chances of getting business through “partisanship.”  One point that gets forgotten often is that, indeed, social media accounts can be set up with full privacy settings so that they’re not searchable.  Although that doesn’t prevent all mishaps (just as handwritten memos or telephone calls can get you in trouble at work in the physical world), it could prevent certain kinds of workplace conflicts.  Public access to amateur content could also be a security concern: in a situation where an otherwise obscure individual is able to become “famous” online, he could make others besides himself into targets.

Another personal flareup occurred in 2001 when I tried to buy media perils insurance and was turned down for renewal because of the lack of a third-party gatekeeper. This issue flared into debate in 2008 briefly but subsided.  But it’s conceivable that requirements could develop that sites (at least through associated businesses) pay for themselves and carry media liability insurance, as a way of helping account for the community hygiene issue of potential bad actors.

All of this said, the biggest threat to online free expression could still turn out to be national security, as in some of my recent posts.  While the mainstream media have talked about hackers and cybersecurity (most of all with elections), physical security for the power grid and for digital data could become a much bigger problem than we thought if we attract nuclear or EMP attacks, either from asymmetric terrorism or from rogue states like North Korea.  Have tech companies really provided for the physical security of their clouds and data given a threat like this?

Note the petition and the congressional content format suggested by the Electronic Frontier Foundation for bills like SESTA. It would be useful to know how British Commonwealth and European countries handle the downstream liability issues, as a comparison point. It’s also important to remember that weakened statutory downstream liability protection for a service provider does not automatically create that liability.

(Posted: Thursday, Aug. 3, 2017 at 10:30 PM EDT)

Ransomware attack could provoke anti-tech reaction from Trump, but this particular attack may be easier to meet than it sounds

The “Ooops” page that many workplace computer users saw, displayed by the hackers behind the WannaCry worm last Friday, seemed almost cordial, as if mocking the Brexit vote last year, or Donald Trump’s election.  It looked like a customer service page.  Can I get my data back?  Sure, if you pay up in time.

This almost looks like a hostile takeover.  Or is it a rebellion against the behavioral and personal performance norms of the civilized world in the digital age (and post)?  We’re in charge now, the welcome screen says;  you do what we tell you to do, and you’ll be OK.  The bullies win.  Might makes right, because there was no right before.

There are a lot of remarkable facts about this one.  First of all, the problem seems to have come from a leak of one of the NSA’s own tools, through Snowden and Wikileaks-like mechanisms.  The government wants its own back door, and it got left open.

Second, it seems to have affected certain kinds of businesses the most, mainly those overseas that happen to be less tech-oriented and have less incentive to keep up.  It’s remarkable that one of the most visible victims was Britain’s National Health Service, and it’s easy to imagine how libertarians can use this fact to argue against single-payer and socialized medicine systems.  The government-run system didn’t give employees a personal incentive to stay tech-current.  (Then what about intelligence services and the military?  They’re still government.)

But it is true that individuals and tech-oriented small businesses know how to keep up and do keep operating systems and security patches updated.  So do larger businesses with a core interest in tech infrastructure.  Your typical bank, insurance company, brokerage house or other financial institution usually keeps the actual consumer accounts on legacy mainframes, which are much harder for “enemies” to attack (although insider vulnerabilities are possible, as I learned in my own 30-year career).  Typically they have mid-tiers or presentation layers on Unix systems, not Windows, and these are harder to attack.  Publishing service providers and hosting companies usually put their customers’ content on Unix servers (although Windows is possible; my legacy “doaskdotell” site is still on Windows, and seems unaffected).

On the other hand, in Europe, most of all in Russia and former Soviet republics, there is a culture of cutting corners and sometimes using pirated software, which is much easier to attack.

A typical workplace infection might destroy all the data on employees’ own desktops (like Word memos) but not source code on a mainframe or Unix server, and not customer data.

This kind of ransomware cannot directly affect the power grids.  The computers that control distribution of power  run on proprietary systems (not Windows) normally not accessible to hackers.  However, in the book “Lights Out” (2015), Ted Koppel had described some ways a very determined hacker could try to corrupt power distribution and overload critical transformers.

There are other particulars in this incident.  Microsoft patched supported versions of Windows against the NSA-derived vulnerability (bulletin MS17-010) in mid-March 2017.  All modern companies and ISPs or hosts would have applied this patch.  But there could have been a risk of this worm getting unleashed before the patch.

Windows 10 does not have the vulnerability, but apparently all previous versions did.  While media reports focused on Britain’s NHS using Windows XP, it would seem that any PC with an earlier Windows operating system could be vulnerable if not patched after May 13, 2017.  Even the monthly update, applied May 12, might not have the fix.

To the best of my knowledge, Carbonite and other cloud backups are not affected.  And users who do not network their Windows machines at home and who make physical backups regularly (on Seagate drives or even thumb drives) are not in the same danger of losing data.  I haven’t seen much information on how quickly the major security companies like Trend, Webroot or Kaspersky update their detection capabilities.

The fact that the worm spread among Windows computers in a network, without action by any users after the first one, has attracted attention. It seems as though the original infection usually comes from email attachments disguised to look as if they came from inside the workplace.  But it is possible for an unprotected computer to be infected merely by visiting a fake website (the way scareware infections can take over a computer, often based on misspellings of real sites, with “System Response” pages and 800 numbers for fake support). There are also reports that infection is possible on un-networked computers that leave certain ports open (like 445) without an adequate firewall; the sketch below shows a simple way to test for that.
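As a quick self-check on that last point, here is a minimal Python sketch that tests whether a given machine answers on TCP port 445 (the SMB port the worm used).  The address is a placeholder for a machine on your own network; this only shows reachability and is no substitute for a real firewall audit.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something answered on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address: substitute a machine on your own network,
# and run the check from a different device.
if port_open("192.168.1.10", 445):
    print("Port 445 (SMB) is reachable -- confirm the firewall intends that.")
else:
    print("Port 445 appears closed or filtered.")
```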

Another problem is that, since introducing Windows 8 and later versions, Microsoft has become much more aggressive about pressuring users to replace operating systems on older hardware.  Often newer operating systems like the Windows 10 Creators Update, while loaded with the latest security, don’t run very well on older PCs.  In the interest of providing gaming and tablet capabilities, Microsoft has made its systems less stable for people with ordinary uses (like blog posts).  Microsoft’s own PCs, as compared to those with third-party hardware (HP, Dell, ASUS, Acer, Lenovo, etc.), may have fewer problems with updates inasmuch as they don’t have to deal with third-party firmware (often from China) which may not be perfect.  Stability has become a much bigger issue since about 2013, with the introduction of Microsoft’s tablet systems.   I had a Toshiba laptop fail in 2014 when going from Windows 8 to Windows 8.1 because it overheated due to inadequate engineering of the power components.

There was a stir over the weekend when CBS reported that President Trump had ordered emergency meetings at DHS, as if he intended to take some kind of action on his earlier “no computer is safe” idea.  His use of Twitter seems to contradict his previous dislike of computers as a way to get around dealing with people and salesmanship. I had wondered if he could propose liability rules for companies or individuals who leave computers unprotected and allow them to be used in conducting attacks (such as home PCs that become botnet nodes in DDoS attacks).

It was two young male programmers (each around 22), one in Britain and one in Indiana, who helped break the attack.  One of them found an unregistered domain serving as a “killswitch” and found he could stop the worm by buying the domain himself for about $11.  I started wondering if Trump would talk about a killswitch for whole portions of the Internet, as he threatened in December 2015 in early debates: “Shut down those pipes.”
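Published analyses describe the killswitch as exactly this pattern: before encrypting anything, the worm tried to reach a hard-coded, unregistered domain and stood down if the request succeeded, so registering the domain stopped new infections worldwide.  Here is a minimal Python sketch of that logic; the domain below is a placeholder, not the actual string from the malware.

```python
import urllib.request
import urllib.error

# Placeholder for the hard-coded, originally unregistered domain.
KILLSWITCH_URL = "http://example-killswitch-domain.test/"

def killswitch_live(url: str, timeout: float = 5.0) -> bool:
    """True if the domain resolves and answers -- i.e., someone registered it."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if killswitch_live(KILLSWITCH_URL):
    print("Killswitch domain answers: the worm's logic would halt here.")
else:
    print("Domain unreachable: the original code would have proceeded.")
```

The check is thought to have been an anti-sandbox trick (analysis environments often fake DNS answers), which is why an $11 domain registration could act as a global off switch.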

My other legacy coverage of this incident is here.

Wikipedia screenshot of the user greeting.

MalwareTech is one of the resources fighting the worm.

(Posted: Tuesday, May 16, 2017 at 2 PM EDT)

Families of San Bernardino terror attack victims sue Facebook, Twitter, Google over “propaganda” arguments that evade Section 230

Families of victims of the fall 2015 terror attack in San Bernardino, CA are suing the three biggest social media companies (those that allow unmonitored broadcast of content in public mode): Facebook, Twitter, and Google. Similar suits have been filed by victims of the Pulse attack in Orlando and the 2015 terror attacks in Paris.

Station WJLA in Washington DC, a subsidiary of the “conservative” (perhaps mildly so) Sinclair Broadcast Group in Baltimore, put up a news story Tuesday morning, including a Scribd PDF copy of the legal complaint filed in a federal court in central California, here. I find it interesting that Sinclair released this report, as it did last summer with stories about threats to the power grids, which WJLA and News Channel 8 in Washington announced but then provided very little local coverage of (I had to hunt it down online to a station in Wisconsin).

Normally, Section 230 protects social media companies from downstream liability for the usual personal torts, especially libel, and the DMCA Safe Harbor protects them in a similar fashion from copyright liability if they remove content when notified.

However, the complaint seems to suggest that the companies are spreading propaganda and sharing in the advertising revenue earned from the content, particularly in some cases from news aggregation aimed at user “Likenomics.”

Companies do have a legal responsibility to remove certain content when brought to their attention, including especially child pornography and probably sex trafficking, and probably clearcut criminal plans. They might have legal duties in wartime settings regarding espionage, and they conceivably could have legal obligations regarding classified information (which is what the legal debate over Wikileaks and Russian hacking deals with).

But “propaganda” by itself is ideology. Authoritarian politicians on both the right and left (Vladimir Putin) use the word a lot, because they rule over populations that are less individualistic in their life experience than ours, where critical thinking isn’t possible, and where people have to act together. The word, which we all learn about in high school civics and government social studies classes (I write this post on a school day – and I used to sub), has always sounded dangerous to me.

But the propagation of ideology alone would probably be protected by the First Amendment, until it is accompanied by more specific criminal or military (war) plans. A possible complication could be the idea that terror ideology regards civilians as combatants.

Facebook recently announced it would add 3000 associates to screen for terror or hate content, mainly in conjunction with Facebook Live broadcasts of crimes or even suicides. I would probably be a good candidate for one of these positions, but I am so busy working for myself that I don’t have time (in “retirement,” which is rather like pitching “in relief” in baseball).

Again, the Internet as we know it, with unfiltered user-generated content, would not be possible if service companies had to pre-screen everything published for possible legal problems. Section 230 will come under fire for other reasons soon (the Backpage scandal).

I have an earlier legacy post about Section 230 and Backpage here.

(Posted: Tuesday, May 9, 2017 at 1 PM EDT)

Facebook, and other social media companies and publishing platforms, come under more scrutiny as “attractive nuisances” for unstable people

The New York Times has a front-page story about social media perils with a blunt headline, “Video of Killing Casts Facebook in a Harsh Light.”   (Maybe, in comparison to the tort manual, it’s a “false light.”)  The story, by Mike Isaac and Christopher Mele, has a more expansive title online, “A murder on Facebook provokes outrage and questions over responsibility.”

This refers to a recent brazen random shooting of a senior citizen in Cleveland on Easter Sunday (on Facebook Live), but there have been a few other such incidents, including the gunning down of two reporters at a Virginia television station during a 2015 broadcast, after which the perpetrator committed suicide. Facebook Live has also been used to record shootings by police, however (as in Minnesota).

The Wall Street Journal has a similar story today by Deepa Seetharaman (“Murder Forces Scrutiny at Facebook”), and Variety includes a statement by Justin Osofsky, Facebook’s VP of global operations.  Really, is it reasonable to expect that AI or some other tool can prospectively detect violent activity being filmed?

At the outset, it’s pretty easy to ask why the assailants in these cases had weapons.  Obviously, they should not have passed background checks – except that some may have had no previous records.

As the articles point out, sometimes the possibility of public spectacle plays into the hands of “religious enemies,” that is, lone-wolf actors motivated by radical Islam or other ideologies. But at a certain psychological level, religion is a secondary contributing factor.  Persons who commit such acts publicly (or covertly) have found that this world of modernism, abstraction and personal responsibility makes no sense to them.  So ungated social media may, in rare cases, provoke a “15 minutes of fame” motive along with a “nothing to lose” attitude (and maybe a belief in martyrdom).  This syndrome seems very personal and usually goes beyond the portrayal of an authoritarian religious or political message.

It is easy, of course, to invoke a Cato-like statistical argument (which often applies to immigration).  In a nation of over 300 million people (or a world of billions), instant communication will rarely, but perhaps predictably with some very low probability, provoke such incidents.  You can make the same arguments about the mobility offered by driving cars.

Ungated user content offers new forms of journalism, personal expression and self-promotion, and new checks on political powers, but it comes with some risks, like fake news and crazy people seeking attention.

For me, the history is augmented by the observation that most of my own “self-promotion” came through search engines on flat sites, in the late 90s and early 00’s, before modern social media offered friending and news aggregation.  As with an incident when I was substitute teaching in late 2005, the possibility of search engine discovery carried its own risks, leading to the development of the notion of “online reputation.”

Still, the development of user-generated content, that did not have to pay its own freight the way old fashioned print publications did in the pre-Internet days when the bottom line controlled what could be published, is remarkable in the moral dilemmas it can create.

It’s ironic how social media allows us to experience being “alone together,” but makes up for it by encouraging individuals to ask for help online by crowdfunding the meeting of their own needs – something I am usually hesitant to jump into.

This is a good place to mention a new intrusion into Section 230, a bill by Ann Wagner (R-MO), the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017,” partly in response to the Backpage controversy; congressional link here. No doubt discussion of this bill will prompt more discussion of the expectations for proactive screening by social media.

There’s an additional note: the perpetrator of the Cleveland incident ended his own life as police attempted to apprehend him (Cleveland Plain Dealer story).

(Posted: Tuesday, April 18, 2017 at 1 PM EDT)

Update: Thursday, April 27, 2017 at 10:45 AM EDT

There has been a major crime deliberately filmed on Facebook Live in Thailand, story here.

Facebook has announced plans to hire 3000 more people to screen complaints for inappropriate content.  These jobs probably often require bilingual skills.

More follow-up on allowing guests to use your router, and on downstream liability questions

Recently (Jan. 10), I wrote a posting about the possible downstream liability that router owners could experience if they allow guests to use their networks.  This could include persons hosting refugees or asylum seekers for humanitarian reasons or to “give back”. It could also apply to the sharing economy (Airbnb and other home-sharing sites).

After talking to Electronic Frontier Foundation, I was finally guided to a website they had set up called “Open Wireless,” and here is their take on it, at this link.

Here is how I interpret this paper.

First, as I noted, it is generally pretty easy to provide guest accounts, which would separate out the log of Internet accesses made by the guest(s) for identification in any civil or criminal action.  It would always be advisable for the owner to do this and insist on the use of a guest account with a separate password (or else the guest could use her own hotspot, which might not work in all locations).

Furthermore, discussions with others (like at Geek Squad) have suggested that installation of OpenDNS is not necessarily critical for liability protection; it does not provide perfect protection against a determined criminal compromise.  Indeed, some use of TOR and hidden sites by some foreign guests could be morally legitimate (to avoid detection by autocratic home countries).

There is no law requiring router owners to protect their networks, or establishing downstream liability potential.  There is also no law protecting owners from an injured party’s ordinary claims of negligence on the part of the owner. (States could vary on this, but it doesn’t seem like they have done much about it.)

An owner who had reason to suspect that his router was being used for illegal downloads or to facilitate terror recruitment, sex trafficking, child pornography, cyberbullying, or other similar harms would seem to be at risk, as I read this.  That could leave open the question of monitoring use.

It would seem that an owner would need to behave in good faith in allowing the use of his router.  Evidence of creditworthiness or the reputation of guests might serve as evidence of good faith, as would providing a strike page requiring agreement to terms of service (which normally means no illegal use).

With personal guests (including boarders or roommates), a typical expectation is how well the host knows the guest, and whether the host can reasonably expect the guest to behave responsibly.  In the case of hosting for humanitarian reasons, I think there is something troubling here.  It may be like saying that providing foster care for children is risky (because it can be).   In Canada, the legal system recognizes the idea of private sponsorship of refugees, and that would seem to provide some presumption of good faith, because the host is privately supplying a needed service to others.  In the United States, especially now (under Trump), the legal system and culture seem to emphasize “take care of your own first” and provide no such recognition. Yet asylum seekers, to stay out of detention and homeless shelters, would probably need private sponsors to support them and take responsibility for them.  It’s not yet clear to me that a host in the US would not be viewed as intrinsically negligent in our current political climate toward immigration.  However, background checking (with former employers, etc.) or other forms of familiarity (repeated volunteering) might provide more of a presumption of good faith, as I would interpret this.

(Posted: Tuesday, January 31, 2017 at 3:30 PM EST)

Downstream liability concerns for allowing others to use your business or home WiFi connection, and how to mitigate

A rather obscure problem of liability exposure, both civil and possibly criminal, can fall on landlords, businesses, hotels, or homeowners (especially shared-economy users) who allow others to use their WiFi hubs “free” as a way to attract business.

Literature on the problem so far, even from very responsible sources, seems a bit contradictory.  The legal landscape is evolving, and it’s clear the legal system has not been prepared to deal with this kind of problem, just as is the case with many other Internet issues.

Most hotels and other venues offering free WiFi take the guest to a strike page when she opens a browser; the guest has to enter a user-id and password and agree to terms and conditions to continue.  This interception can normally be provided by router programming, with properly equipped routers (a sketch of the logic follows below).  The terms and conditions typically say that the user will not engage in any illegal behavior (especially illegal downloads, or possibly downloading child pornography or planning terror attacks).  The terms may include a legal agreement to indemnify the landlord for any litigation, which in practice has been very uncommon so far in the hotel business.  The router may be programmed to disallow peer-to-peer.
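To make the mechanics concrete, here is a heavily simplified sketch of strike-page logic in Python, using the Flask web framework.  Real hotel gateways do this in router or gateway firmware and track clients by MAC address; this toy keys on client IP and is purely illustrative, not how any particular vendor implements it.

```python
from flask import Flask, request, redirect

app = Flask(__name__)

# Real gateways keep per-device (MAC address) state in the router;
# keying on client IP here is purely for illustration.
accepted_clients: set[str] = set()

TERMS_FORM = """
<h1>Guest WiFi Terms</h1>
<p>No unlawful use, including illegal downloads. Guest indemnifies the host.</p>
<form method="post" action="/accept"><button>I Agree</button></form>
"""

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def intercept(path: str):
    # Clients that have not accepted the terms see the strike page,
    # no matter what URL they asked for.
    if request.remote_addr not in accepted_clients:
        return TERMS_FORM
    return "You are online."  # a real gateway would now pass traffic through

@app.post("/accept")
def accept():
    accepted_clients.add(request.remote_addr)
    return redirect("/")

if __name__ == "__main__":
    app.run(port=8080)
```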

There is some controversy in the literature as to whether Section 230 of the 1996 Telecommunications Act would hold hotels and businesses harmless.  But my understanding is that Section 230 has more to do with a content service provider (like a discussion forum host or a blogging service provider) being held harmless for content posted by users, usually against claims of libel or privacy invasion.  A similarly spirited provision in the Digital Millennium Copyright Act of 1998, called Safe Harbor, protects service providers against copyright infringement by users.  Even so, some providers, like Google with its YouTube platform, have instituted automated tools to flag some kinds of infringing content before posting, probably to protect their long-term business model viability. Whether Section 230 would protect a WiFi host sounds less certain, to me at least.  A similar question might be posed for web hosting companies, although it sounds as though generally they are protected.  Web hosting companies, however, all say in their AUPs that they are required to report child pornography should they happen to find it. You can make a case for saying that a telecommunications company is like a phone company, a utility, so a hotel or business is just extending a public utility. (That idea also mediates the network neutrality debate, which is likely to become more uncertain under President Trump.)

Here’s a typical reference on this problem for hotels and businesses.

A more uncertain environment would exist for the sharing economy, especially home-sharing services like Airbnb.  Most travelers probably carry their own laptops or tablets and hotspots (since most modern smartphones can work as hotspots), so hosts may not need to offer WiFi unless wireless reception is weak in their homes.  Nevertheless, some homeowners have asked about this.  These sorts of problems may be even more acute for families, where parents are not savvy enough to understand the legal problems their teen kids can cause; they could also occur in private homes where roommates share telecommunications accounts, where a landlord-homeowner takes in a boarder, or possibly even with a live-in caregiver for an elderly relative.  The problem may also occur when hosting asylum seekers (which is likely to happen in private homes or apartments), and less often with refugees (who more often are housed in their own separate apartment units).

It’s also worth noting that even individual homeowners have had problems when their routers aren’t properly secured and others are able to pick up the signal (which for some routers can carry a few hundred feet) and abuse it.  In a few cases (at least in Florida and New York State), homeowners were arrested for possession of child pornography and their computers seized, and it took some time for them to clear themselves by showing that an outside party had hijacked the connection.

Comcast, among other providers, is terminating some accounts with repeated complaints of illegal downloads through a home router.  In some countries, it is possible for a homeowner to lose the right to any Internet connection forever if this happens several times, even if others caused the problem.

Here are a couple of good articles on the problem at How-to-Geek and Huffington, talking about the Copyright Alerts System.  Some of this mechanism came out of the defeated Stop Online Piracy Act (SOPA), whose well-deserved death was engineered in part by Aaron Swartz, “The Internet’s Own Boy,” who tragically committed suicide in early 2013 after enormous legal threats from the Obama DOJ.

Along these lines, it’s well to understand that automated law-enforcement and litigation scanning tools that look for violations are becoming more common on the Internet.  It is now possible to scan cloud backups for digital watermarks of known child pornography images, and it may become more common in the future to look for some kinds of copyright infringement or illegal downloads this way (although content owners are good enough at detecting the downloading themselves when it is done through P2P).

Generally, the best advice seems to be to have a router with guest-network options, and to set up the guest account to block P2P and to use OpenDNS (a way to verify that setup appears below).  An Airbnb community forum has a useful entry here.  Curiously, Airbnb itself provides a much more cursory advisory here, including ideas like locking the router in a closet (pun intended).
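As a sanity check from a laptop on the guest network, you can confirm that the resolvers actually in use are OpenDNS’s published addresses (208.67.222.222 and 208.67.220.220).  A minimal Python sketch, assuming the third-party dnspython package; it reads /etc/resolv.conf on Unix-like systems and the registry on Windows:

```python
import dns.resolver  # pip install dnspython

# OpenDNS's published recursive resolver addresses.
OPENDNS_SERVERS = {"208.67.222.222", "208.67.220.220"}

resolver = dns.resolver.Resolver()  # picks up the system's DNS configuration
configured = set(resolver.nameservers)

if configured & OPENDNS_SERVERS:
    print("Guest network is using OpenDNS resolvers:", sorted(configured))
else:
    print("OpenDNS is not in use; configured resolvers:", sorted(configured))
```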

I have a relatively new router-and-modem combo from Comcast myself.  I don’t see any directions on how to do this in what came with it.  I will have to call them soon and check.  But here is a typical forum source on guest accounts on Xfinity routers.  One reverse concern, if hosting an asylum seeker, could be that the guest needs to use TOR to communicate secretly with others in his or her home country.

It’s important to note that this kind of problem has come some way in the past fifteen years or so.  It used to be that families often had only one “family computer,” and the main concern was illegal content that could be found on a hard drive.  Now the concern migrates to abuse of the WiFi itself, since guests are likely to have their own laptops or tablets and storage devices.  There has also been some evolution in the concept of the nature of liability.  Up until about 2007 or so, it was common to read that child pornography possession was a “strict liability offense,” holding the computer owner responsible regardless of whether a hacker or another user put it there (or whether malware did).  In more recent years, police and prosecutors have indeed sounded willing to apply the usual “mens rea” standard.  One of my legacy blogs has a trace of the history of this notion here; note the posts on Feb. 3 and Feb. 25, 2007 about a particularly horrible case in Arizona.  Still, in the worst situations, an “innocent” landlord could find himself banned from Internet accounts.  The legal climate still has to parse this idea of downstream liability (which Section 230 and Safe Harbor accomplish to some extent, while evoking considerable public criticism about the common good), along with a position on how much affirmative effort it expects from those who benefit from technology to protect those who do not.

(Posted: Monday, January 9, 2017 at 10:45 PM EST)

Update: Tuesday, Jan 24, 2017, about 5 PM EST

Check out this Computerworld article (Michael Horowitz, “Just Say No” [like Nancy Reagan], June 27, 2015) on how your “private hotspot” Xfinitywifi works.  There’s more in the comments I posted.  To me, the legal situation looks ambiguous (I’ve sent a question about this to the Electronic Frontier Foundation; see the PDF link in my comment of Jan. 24).  If you leave your router enabled, someone could sign onto it (it looks as if they would need your Xfinity account password, or another password if you changed it).  Comcast seems to think this is “usually” OK because any abuse can be attributed to the culprit.

 

Families of victims of Orlando Pulse attack sue Twitter, Google, and Facebook in federal court in Michigan, outflanking Section 230

Three families of victims in the June 12, 2016 attack on Pulse, a gay nightclub in Orlando, FL (about one mile south of downtown) have filed a federal lawsuit against three major tech companies (Twitter, Google, and Facebook) in the Eastern District of Michigan (apparently not in Florida). The complaint against Google seems to involve its wholly owned YouTube video posting service, and possibly Adsense or other similar ad network products, but probably not the search engine itself or the popular Blogger platform.

The PDF of the complaint is here.

The “Prayer for Relief” at the end of the document mentions civil liability under 18 U.S.C. 2333(a) (“civil remedies”) and 2339(a) and 2339(b) (“harboring or concealing terrorists”; see https://www.law.cornell.edu/uscode/text/18/2339).  I don’t see an amount specified, and I do see a trial by jury requested (apparently to be chosen in Michigan).

I have previously described the preliminary news about the litigation on one of my legacy blogs, here.

Points 148 and 149 in the Complaint try to establish that perpetrator Mateen was likely radicalized on these social media sites. But compared to other biographical information about Mateen now well known, it seems to many observers that social media influence on his intentions was probably small compared to many other factors in his life.

The most novel aspect of the argument seems to be the way the plaintiffs try to get around Section 230 of the 1996 Telecommunications Act (also known as the “Communications Decency Act”). Section (c)(1) says that no provider or user of an interactive computer service shall be treated as the publisher of information provided by another party.

The plaintiffs claim that the aggregation of user content (as written by a terrorist recruiter), including any text, still images, and video, is regarded in the context of the user himself or herself, and also in the context of the ads generated and shown on the web page, whether on a computer or a mobile device.  This new context, or “intersection data” (to borrow from IBM’s old database terminology of the 1980s), is regarded as new content created by the social media company.

It should be noted that all the companies do have algorithms to prevent advertisers’ content from being delivered alongside offensive content.  For example, Google AdSense will not deliver ads to pages where Google’s automated bots detect offensive content according to criteria that Google necessarily maintains as a trade secret. This would sound like a preliminary defense against this notion.

Also, as a user, I don’t particularly view the delivery of an ad to a webpage as “content” related to the page.  Since I don’t turn on “do not track”, I often see ads based on my own searches on my own pages. I am generally not influenced by the appearance of ads on web pages.

The plaintiffs give many details as to how foreign enemies (particularly those connected to ISIS, the “Islamic State of Iraq and Syria”) used their accounts on these platforms, and how, supposedly, the three companies’ attempts to close accounts when they were discovered were insufficient.  A quick reading of the complaint does not show convincingly how potential enemies could reliably be prevented from establishing new accounts, though some failures (like related user names) do seem detectable. It would sound possible (to me, at least, as colored by my own military service in the distant past) that the idea that specific foreign enemies treat US civilians at home as combatants could become legally relevant.

User-generated content as we know it today would not be possible if every item had to be approved by a “gatekeeper,” which was generally the model in print publishing before the Internet (outside of self-published books).  Even in traditional publishing, authors usually have to indemnify publishers against unexpected liabilities.

Nevertheless, there are some functional differences between what telecommunications providers (like Comcast or Verizon), hosting companies (like Verio, Godaddy, or Bluehost), and self-publishing platforms (like Blogger and WordPress, the latter of which is usually provided by a hosting company but doesn’t have to be), self-publishing companies for print-on-demand books (and e-books), and social media companies (which were originally envisioned as meetup tools but have tended to become personal news aggregation platforms) – provide for end-users. Add to this mix entities like chat rooms and discussion forums (like Reddit).   A loss by the defendants in this case (at least after appeals) could affect other kinds of providers.

Companies do have a responsibility to remove and report patently illegal content when they find it or when users report it (like child pornography).  But they don’t have a responsibility to pre-screen.  Nevertheless, companies do have some prescreening tools that compare watermarks on images and videos to databases, for possible copyright infringement and for child pornography (as maintained by the National Center for Missing and Exploited Children).  Google in particular has a lot of expertise in this area.  But it is hard to say whether this technology could screen for terror-promoting content.

Downstream liability for publishers has been assessed, or at least conceded, in the past after crimes were committed based on published material.  For example, consider the history of Paladin Press with the book “Hit Man” (Wikipedia account).

This case sounds very uncertain at this time.  More details will be provided here (in comments or future postings) as they become known.

There have been a few other downstream liability suits against social media companies in relation to the 2015 Paris attacks. Brian Fung has a story in the Washington Post, “Tech companies ‘profit from ISIS,’ allege families of Orlando shooting victims in federal lawsuit,” and notes that under Trump a GOP Congress is likely to weaken Section 230 where foreign enemy manipulation is at issue.

The pictures are from my visit to Detroit (Aug. 2012), and downtown Orlando festival and then the Pulse (July 2015).

(Posted: Wednesday, Dec. 21, 2016 at 11:45 PM EST)