Cato Institute holds forum on “Marxist Origins of Hate Speech Legislation and Political Correctness”

Today, Tuesday, November 28, 2017, the Cato Institute held a 90-minute symposium, “Marxist Origins of Hate Speech Legislation and Political Correctness”.

The basic link is here.  (Cato will presumably post the entire video at that link soon.)

The event was moderated by Marian L. Tupy, and featured Danish author Flemming Rose (author of “The Tyranny of Silence”, now a Cato fellow) and Christina Hoff Sommers, resident scholar at the American Enterprise Institute.

Rose focused at first on the UN International Covenant on Civil and Political Rights (1966), Article 20, Paragraph 2, which defined “hate speech” to include “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence…”.   That is, incitement covers more than incitement to imminent lawless action (as in the US); it includes encouraging others to discriminate. The US and most European countries voted against this at first, but most European countries have since come around to this notion in their hate speech laws today.  Authoritarian countries favored this approach, because dictators believe they can stay in power if various minority groups are placated.

Rose traced legal sanctions against both hate speech and fake news distribution to the early days of Communism, back to the Bolshevik revolution (as depicted in the 1981 movie “Reds”), when news distribution was viewed in terms of propaganda.  Fake news manipulation (as a propaganda exercise) by foreign enemies is more likely when those who view themselves as educated and elite (“Hillary-like”) have little personal contact with those who are not;  in 2016 the Russians seem to have taken advantage of conventional policy pundits’ unawareness of “populism”.  But it should be obvious that fake news runs the legal risks of libel and defamation litigation, which may be a little easier to parry in the US than in Europe.

Rose also made the point that minorities need free speech to advance themselves, rather than regard free speech as an incitement or invitation to others to continue discrimination.

The authoritarian and leftist interpretation of hate speech law tends to give very little credit to the individual’s ability to think and learn for himself, and assumes people will vote according to tribal interests, which is often true (as we found out with the election of Trump and Russian meddling). Rose included some slides on modern European fake news law, from Germany and Italy.

Sommers talked about the rapid expansion of campus speech codes, with ideas like trigger warnings and microaggressions and safe spaces, since about 2010.  This seems to have developed rather suddenly. Sommers attributed the rise of these campus speech codes to an ideology of “intersectionality”, a theory of multidimensional group oppression.

At least two questions from the audience came from undergraduate college students, one at GWU, who said that the influence of “intersectional” thinking had been quite shocking to him. Milo Yiannopoulos spent a good part of his book “Dangerous” explaining the perils of this idea.  But other writers, as in the transgender community recently, have tried to make much of it.  Again, there seems to be a loss of the idea that self-concept should come from the self (a tautology) and not from inherited group identification.

Several thoughts need reinforcement. One is that “hate speech” codes don’t draw a clear line between actual commission of acts and becoming connected to others doing bad things (like “watching” and journaling but not intervening — the “no spectators” idea).  Another is that these collectivist behavior norms regard “systematic” discrimination against identifiable groups (or “intersections” of groups) as akin to actual violence and aggression against the constituent individuals.  Still another idea is that “meta-speech”, where commentators or journalists speak about the discriminatory value systems of the past in order to impart a sense of history, sometimes may come across as an invitation or gratuitous reminder for aggressive politicians to try the same behaviors again;  speakers should be expected to put their own skin in the game.  Finally, there is a loss of interest in individualism itself, partly because “hyper-individualism” tends to leave a lot of people behind as less “valuable”. There is more emphasis on belonging to the tribe or group, or at least in meeting standards of supervised community engagement.

Many attendees had seen the breaking news of (Communist) North Korea’s missile test today on their smartphone just before the session started.

(Posted: Tuesday, November 28, 2017 at 10:30 PM EST)

Downstream liability concerns for allowing others to use your business or home WiFi connection, and how to mitigate

A rather obscure problem of liability exposure, both civil and possibly criminal, can arise for landlords, businesses, hotels, or homeowners (especially sharing economy users) who allow others to use their WiFi hubs “free” as a way to attract business.

Literature on the problem so far, even from very responsible sources, seems a bit contradictory.  The legal landscape is evolving, and it’s clear the legal system has not been prepared to deal with this kind of problem, just as is the case with many other Internet issues.

Most hotels and other venues offering free WiFi take the guest to a splash page (a captive portal) when she opens a browser; the guest has to enter a user-id and password and agree to terms and conditions to continue.  This interception can normally be provided with router programming, on properly equipped routers.  The terms and conditions typically say that the user will not engage in any illegal behavior (especially illegal downloads, or possibly downloading child pornography or planning terror attacks).  The terms may include a legal agreement to indemnify the landlord for any litigation, which in practice has been very uncommon so far in the hotel business.  The router may also be programmed to disallow peer-to-peer.
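The interception logic itself is simple in principle. Here is a rough sketch in Python, purely illustrative; real captive portals live in router firmware, and every name below (the terms path, the use of MAC addresses as the guest key) is a hypothetical simplification:

```python
# Illustrative captive-portal logic: unauthenticated guests are redirected to a
# terms-and-conditions page before any other browsing is allowed.

TERMS_PAGE = "/terms"          # hypothetical path serving the T&C acceptance form
AUTHENTICATED = set()          # device identifiers that have accepted the terms

def handle_request(device_id: str, requested_path: str):
    """Return an (http_status, location) pair for a guest's HTTP request."""
    if device_id in AUTHENTICATED:
        return (200, requested_path)        # pass traffic through normally
    if requested_path == TERMS_PAGE:
        return (200, TERMS_PAGE)            # let the guest reach the form itself
    return (302, TERMS_PAGE)                # intercept and redirect everything else

def accept_terms(device_id: str):
    """Called when the guest submits credentials and agrees to the terms."""
    AUTHENTICATED.add(device_id)
```

Once `accept_terms` has run for a device, subsequent requests pass through; until then, every page load bounces to the terms form.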

There is some controversy in the literature as to whether Section 230 of the 1996 Telecommunications Act would hold hotels and businesses harmless.  But my understanding is that Section 230 has more to do with a content service provider (like a discussion forum host or a blogging service provider) being held harmless for content posted by users, usually against claims of libel or privacy invasion.  A similarly spirited provision in the Digital Millennium Copyright Act of 1998, called Safe Harbor, protects service providers from liability for copyright infringement by users.  Even so, some providers, like Google with its YouTube platform, have instituted automated tools to flag some kinds of infringing content before posting, probably to protect their long-term business model viability. Whether Section 230 would protect a WiFi host sounds less certain, to me at least.  A similar question might be posed for web hosting companies, although it sounds as though generally they are protected.  Web hosting companies, however, all say in their AUP’s that they are required to report child pornography should they happen to find it. You can make a case for saying that a telecommunications company is like a phone company, a utility, so a hotel or business is just extending a public utility. (That idea also mediates the network neutrality debate, which is likely to become more uncertain under a President Trump.)

Here’s a typical reference on this problem for hotels and businesses.

A more uncertain environment would exist for the sharing economy, especially home sharing services like Airbnb.  Most travelers probably carry their own laptops or tablets and hotspots (since most modern smartphones can work as hotspots), so hosts may not need to offer WiFi at all, unless wireless reception is weak in their homes.  Nevertheless, some homeowners have asked about this.  These sorts of problems may be even more acute for families, where parents are not savvy enough to understand the legal problems their teen kids can cause; they could also occur in private homes where roommates share telecommunications accounts, or where a landlord-homeowner takes in a boarder, or possibly even a live-in caregiver for an elderly relative.  The problem may also occur when hosting asylum seekers (which is likely to happen in private homes or apartments), and less often with refugees (who more often are housed in their own separate apartment units).

It’s also worth noting that even individual homeowners have had problems when their routers aren’t properly secured, and others are able to pick up the signal (which for some routers can carry a few hundred feet) and abuse it.  In a few cases (at least in Florida and New York State) homeowners were arrested for possession of child pornography and computers seized, and it took some time for homeowners to clear themselves by showing that an outside source had hijacked the connection.

Comcast, among other providers, is terminating some accounts with repeated complaints of illegal downloads through a home router.  In some countries, it is possible for a homeowner to lose the right to any Internet connection permanently if this happens several times, even if others caused the problem.

Here are a couple of good articles on the problem at How-To Geek and Huffington Post, discussing the Copyright Alert System.  Some of this mechanism came out of the defeated Stop Online Piracy Act (SOPA), whose well-deserved death was engineered in part by Aaron Swartz, “The Internet’s Own Boy”, who tragically committed suicide in early 2013 after enormous legal threats from the Obama DOJ.

Along these lines, it’s well to understand that automated law enforcement and litigation scanning tools looking for violations are becoming more common on the Internet.  It is now possible to scan cloud backups for digital watermarks of known child pornography images, and it may become more common in the future to look for some kinds of copyright infringement or illegal downloads this way (although content owners are good enough at detecting the downloading themselves when it is done through P2P).
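The matching step such scanning relies on can be sketched simply. This Python sketch uses exact SHA-256 fingerprints purely for illustration; real systems like Microsoft’s PhotoDNA use robust perceptual hashes instead, and the sample digest below is just a placeholder (it happens to be the SHA-256 of an empty file):

```python
# Illustrative fingerprint lookup: a file's hash is compared against a
# database of known-illegal image hashes distributed by a clearinghouse.

import hashlib

KNOWN_BAD_HASHES = {
    # placeholder digest (SHA-256 of zero bytes), standing in for a real database
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Hex fingerprint of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def flag_if_known(data: bytes) -> bool:
    """True if the file's fingerprint matches the known-image database."""
    return sha256_of(data) in KNOWN_BAD_HASHES
```

Exact hashing like this only catches byte-identical copies, which is why production systems prefer perceptual hashes that survive re-encoding and cropping.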

Generally, the best advice seems to be to have a router with guest-network options, and to set up the guest account to block P2P and to use OpenDNS.  An Airbnb community forum has a useful entry here.  Curiously, Airbnb itself provides a much more cursory advisory here, including ideas like locking the router in a closet (pun).
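Verifying that a guest network actually points at OpenDNS amounts to checking its resolver configuration. A small illustrative Python sketch follows; the two addresses are OpenDNS’s well-known public resolvers, but the resolv.conf-style text format parsed here is an assumption, since router admin interfaces vary widely:

```python
# Check whether a resolver configuration uses only OpenDNS nameservers.

OPENDNS_SERVERS = {"208.67.222.222", "208.67.220.220"}  # OpenDNS public resolvers

def uses_opendns(resolv_conf_text: str) -> bool:
    """True if every configured nameserver is an OpenDNS resolver."""
    nameservers = {
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    }
    return bool(nameservers) and nameservers <= OPENDNS_SERVERS
```

On a Linux guest one could feed this the contents of `/etc/resolv.conf`; on a router, the equivalent check is done in the DNS settings page.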

I have a relatively new router and modem combo from Comcast myself.  I don’t see any directions as to how to do this in what came with it.  I will have to call them soon and check into this.  But here is a typical forum source on guest accounts on Xfinity routers.  One reverse concern, if hosting an asylum seeker, could be that the guest needs to use Tor to communicate secretly with others in his or her home country.

It’s important to note that this kind of problem has come some way in the past fifteen years or so.  It used to be that families often had only one “family computer”, and the main concern was illegal content that could be found on a hard drive.  Now the concern migrates to abuse of the WiFi itself, since guests are likely to have their own laptops or tablets and storage devices.  There has also been some evolution in the concept of the nature of liability.  Up until about 2007 or so, it was common to read that child pornography possession was a “strict liability offense”, which holds the computer owner responsible regardless of whether a hacker, another user, or malware put it there.  In more recent years, police and prosecutors have indeed sounded willing to look at the usual “mens rea” standard.  One of my legacy blogs has a trace of the history of this notion here; note the posts on Feb. 3 and Feb. 25, 2007 about a particularly horrible case in Arizona.  Still, in the worst situations, an “innocent” landlord could find himself banned from Internet accounts.  The legal climate still has to reconcile this idea of downstream liability (which Section 230 and Safe Harbor limit to some extent, while evoking considerable public criticism about the common good) with a position on how proactive it wants those who benefit from technology to be in protecting those who do not.

(Posted: Monday, January 9, 2017 at 10:45 PM EST)

Update: Tuesday, Jan 24, 2017, about 5 PM EST

Check out this Computerworld article (Michael Horowitz, “Just say No” [like Nancy Reagan], June 27, 2015) on how your “private hotspot” Xfinitywifi works.  There’s more in the comments I posted below.  To me, the legal situation looks ambiguous (I’ve sent a question about this to the Electronic Frontier Foundation; see the pdf link in the Jan. 24 comment).  If you leave your router enabled, someone could sign onto it (it appears they would need your Xfinity account password, or another password if you changed it).  Comcast seems to think this is “usually” OK because any abuse can be traced to the culprit.


Families of victims of Orlando Pulse attack sue Twitter, Google, and Facebook in federal court in Michigan, outflanking Section 230

Three families of victims in the June 12, 2016 attack on Pulse, a gay nightclub in Orlando, FL (about one mile south of downtown) have filed a federal lawsuit against three major tech companies (Twitter, Google, and Facebook) in the Eastern District of Michigan (apparently not in Florida). The complaint against Google seems to involve its wholly owned YouTube video posting service, and possibly Adsense or other similar ad network products, but probably not the search engine itself or the popular Blogger platform.

The PDF of the complaint is here.

The “Prayer for Relief” at the end of the document mentions civil liability under 18 U.S.C. § 2333(a) (“Civil remedies”) and §§ 2339(a) and 2339(b) (“Harboring or concealing terrorists”).  I don’t see an amount specified, and I do see a trial by jury requested (apparently chosen in Michigan).

I have previously described the preliminary news about the litigation on one of my legacy blogs, here.

Points 148 and 149 in the Complaint try to establish that perpetrator Mateen was likely radicalized on these social media sites. But compared to other biographical information about Mateen now well known, it seems to many observers that social media influence on his intentions was probably small compared to many other factors in his life.

The most novel aspect of the argument seems to be the way the plaintiffs try to get around Section 230 of the 1996 Telecommunications Act (also known as the “Communications Decency Act”).  Section (c)(1) says that no provider or user of an interactive computer service shall be treated as the publisher…

The plaintiffs claim that the aggregation of user content (as written by a terrorist recruiter), including any text, still images, and video, is regarded in the context of the user himself or herself, and also in the context of the ads generated and shown on the web page, whether on a computer or a mobile device.  This new context or “intersection data” (to borrow from IBM’s old database terminology from the 1980s) is regarded as new content created by the social media company.

It should be noted that all the companies do have algorithms to prevent advertisers’ content from being delivered alongside offensive content.  For example, Google AdSense will not deliver ads on pages when Google’s automated bots detect offensive content according to certain criteria, which Google necessarily maintains as a trade secret. This would sound like a preliminary defense against this notion.
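To illustrate only the general shape of such a gate (Google’s actual criteria are proprietary and vastly more sophisticated than anything shown here), a naive keyword screen might look like the following Python sketch; the blocklist terms are hypothetical:

```python
# Naive ad-suitability gate: refuse to serve ads on pages whose text
# contains any blocklisted term. Purely illustrative of the concept.

BLOCKLIST = {"terror", "beheading", "recruitment"}   # hypothetical terms

def page_is_ad_safe(page_text: str) -> bool:
    """True if no blocklisted word appears in the page's text."""
    words = set(page_text.lower().split())
    return not (words & BLOCKLIST)
```

Real systems would go far beyond word matching, using classifiers over the full page, images, and context, but the gating decision (serve ads or not) has this same basic shape.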

Also, as a user, I don’t particularly view the delivery of an ad to a webpage as “content” related to the page.  Since I don’t turn on “do not track”, I often see ads based on my own searches on my own pages. I am generally not influenced by the appearance of ads on web pages.

The plaintiffs give many details as to how foreign enemies (particularly those connected to ISIS, the “Islamic State of Iraq and Syria”) used their accounts on these platforms, and how, supposedly, the three companies’ attempts to close accounts when they were discovered were insufficient.  A quick reading of the complaint does not show convincingly how potential enemies could reliably be prevented from establishing new accounts, but some failures (like related user names) do seem detectable. It would sound possible (to me, at least, as colored by my own military service in the distant past) that the idea that specific foreign enemies treat US civilians at home as combatants could become legally relevant.

User-generated content, as we know it today, would not be possible if every item had to be approved by a “gatekeeper”, which was generally the model in print publishing before the Internet (outside of self-published books).  Even in traditional publishing, authors usually have to indemnify publishers against unexpected liabilities.

Nevertheless, there are some functional differences among what telecommunications providers (like Comcast or Verizon), hosting companies (like Verio, GoDaddy, or Bluehost), self-publishing platforms (like Blogger and WordPress, the latter usually provided by a hosting company but not necessarily), self-publishing companies for print-on-demand books (and e-books), and social media companies (originally envisioned as meetup tools but tending to become personal news aggregation platforms) provide for end users. Add to this mix entities like chat rooms and discussion forums (like Reddit).   A loss by the defendants in this case (at least after appeals) could affect other kinds of providers.

Companies do have a responsibility for removing and reporting patently illegal content when they find it or when users report it (like child pornography).  But they don’t have a responsibility to pre-screen.  Nevertheless, companies do have some prescreening tools that apply to images and videos, using watermarks compared against databases for possible copyright infringement, and for child pornography (as maintained by the National Center for Missing & Exploited Children).  Google in particular has a lot of expertise in this area.  But it is hard to know whether this technology could screen for terror-promoting content.
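The idea behind such image-matching tools can be sketched with a simple “difference hash”, which, unlike an exact file hash, tolerates small edits and re-encodings of an image. This Python sketch assumes the image has already been scaled down to a 9×8 grid of grayscale values; real pipelines would do that resizing first (e.g. with an image library), and production systems use far more robust hashes:

```python
# Difference-hash ("dHash") sketch: each bit records whether a pixel is
# darker than its right-hand neighbor, giving a compact 64-bit signature.

def dhash(grid):
    """Build a 64-bit hash from a 9-wide, 8-tall grayscale grid."""
    bits = 0
    for row in grid:                             # 8 rows
        for left, right in zip(row, row[1:]):    # 8 comparisons per 9-pixel row
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits; small distances suggest the same image."""
    return bin(a ^ b).count("1")
```

Matching a candidate image then becomes a nearest-neighbor search over known hashes, flagging anything within a small Hamming distance.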

Downstream liability for publishers has been assessed, or at least conceded, in the past after crimes were committed based on published material.  For example, consider the history of Paladin Press with the book “Hit Man” (Wikipedia account).

This case sounds very uncertain at this time.  More details will be provided here (in comments or future postings) as they become known.

There have been a few other downstream liability suits against social media companies in relation to the Paris attacks in 2015. Brian Fung has a story in the Washington Post, “Tech companies ‘profit from ISIS’ allege families of Orlando shooting victims in federal lawsuit“, and notes that under Trump a GOP Congress is likely to weaken Section 230 when foreign enemy manipulation is at issue.

The pictures are from my visit to Detroit (Aug. 2012), and downtown Orlando festival and then the Pulse (July 2015).

(Posted: Wednesday, Dec. 21, 2016 at 11:45 PM EST)

Wikileaks exposes a lot of private lives


Wikileaks has published private data of many ordinary citizens overseas, according to a recent AP story by Raphael Satter and Maggie Michael, Aug. 23, “Private lives are exposed as Wikileaks spills its secrets“.  One person “exposed” was someone arrested for gay sex (or perhaps merely for being known as gay).  Some say that the PII leaks come about because Wikileaks doesn’t have the staff to carefully review what it puts out.

On my legacy movie reviews blog, I’ve covered a lot of films dealing with government surveillance and its exposure (“Killer Switch”, “Citizenfour”, “Silenced”, “The Internet’s Own Boy”, “The Fifth Estate”, “We Steal Secrets”, “Underground: The Julian Assange Story”). An additional film was a 40-minute clip of US action in Iraq, “Collateral Murder”, made with the help of Chelsea Manning.

“Amateur” publishing (“The Fifth Estate” indeed) does “keep them honest”.  But a lot of ordinary people tend to wind up in the crosshairs, given the asymmetry involved.

(Published: Wednesday, Aug. 24, 2016 at 11 PM EDT)

Gawker case seems to have an ugly backdrop of silencing naughtier elements of the press


The idea of chilling speech from newbies or upstarts creates great concern for me. That’s one reason that I follow, for example, the SLAPP issue (May 30 post).

So I was a bit concerned over reports that Peter Thiel had “secretly” bankrolled the litigation costs of a destructive suit against Gawker, over a Hulk Hogan video, as I had previously explained in a legacy blog post.    Vanity Fair has a long article by Abigail Tracy on Jeff Bezos’s remark that Thiel needs a thicker skin (but so does Donald Trump, below; and Thiel is a Trump delegate).  Gawker (more or less “Estate 4.5”) is threatened with extinction, not only by the award, but by the enormous cost of defending what at first would have sounded like improbable litigation.


I know the arguments on third-party support for tort litigation.  Some have said that this action is simply the moral equivalent of the ACLU or EFF backing litigation, but that is more often class action, or cases where there are multiple “amateur” complainants.  The Washington Post has published an op-ed by Stuart Karle (North Base Media counsel), “In the lawsuits against Gawker, echoes of the racist South”.  Karle discusses the 1960 case New York Times v. Sullivan, in which elected Alabama officials sued the paper for “libel” over its handling of black student protests and won, only to be overturned by the Supreme Court. SCOTUS thus established a higher bar in libel or privacy cases affecting public figures.  (The case had started with the publication of an ad, real downstream liability to be sure.) These standards involve ideas like actual malice and reckless disregard of the truth.  I don’t have a particular personal opinion about how Gawker should have been decided on these standards, as I haven’t really looked at the offending material in any “detail”. But the Post article suggests that trial court judges were not aware of how the litigation had been funded.

Donald Trump, recall, has said that he wants to rein in the media and lower the libel standard again (even to the point that he could threaten the existence of a company like CNN).

Thiel’s action is reported to be motivated by his earlier “outing” as gay by Gawker, a story I had never actually heard.

This is all so disturbing, first because of Thiel’s hand in building major Internet companies (including PayPal and later Facebook), and also because Thiel has assisted inventor Taylor Wilson, an indication that he views the security and stability of the electric power grid as an existential issue for modern civilization, let alone for his own companies.  (The only presidential candidate to mention this issue specifically so far has been Ted Cruz.)  I had thought: wouldn’t somebody like that be preferred to Donald Trump if a business person is to become president?  Then I noticed he was born in Germany, so ineligible (no out like the one Ted Cruz has).  I also noticed his connection to the Libertarian Party.  So the reports of vindictiveness behind the Gawker suit are rather baffling. Only after all this did I learn he had become a Trump delegate for the GOP Convention in Cleveland.

I don’t write original stories exposing people, and I don’t even use review sites to complain about businesses.   But I have been viewed as a “threat” in my own way.  Others sometimes object that I “compete with myself” (and therefore unfairly with others) and offer practically all of my content “free” online, because I am “lucky” enough not to have to make a living at it.  In fact, I would like to work my way into “legitimate” Fourth Estate journalism.  So, maybe I am bad for other people’s business models (they need the ad revenue, or they need to sell hard copies of books, say, to remain legitimate publishers, or to support literacy programs).

An associated idea is that, if you want to be listened to and heard, you should have real responsibility for others.  I even wrote an op-ed on that, on my old legacy site, with a perspective (early 2005), “The Privilege of Being Listened To”, which even drew an angry “flaming angel” email reply in 2007 (middle of page).  That style of thinking assumed no one has a right to prove he’s “right” and “better” than others by becoming a chatterbox or troublemaker until he has others depending on him (tying sexuality to marriage and kids, etc.) in “real life”.

Today, I rarely get valid comments and emails on my own blog content;  most reaction comes through Twitter and Facebook postings, especially my own comments on personal news stories.  Still, my blogs and sites help “keep them honest”, even if I have no qualifying lineage to be heard.  All of this is in “reactive mode”, and I’ll come back to it soon.

(Published on Thursday, June 2, 2016 at 9:45 PM EDT)


In the Washington Post, June 5, 2016, Christine Emba writes, “We’re all implicated in the messy Gawker case.”

On June 6, a Washington Post LTE makes the point that Thiel had no “standing” to become involved in the Gawker litigation, and so could become vulnerable for abrogating Gawker’s First Amendment rights.  But this idea is worrisome.  It could open up theories of torts against bloggers who comment on a situation between another consumer and a seller not directly involving them. I am thinking of Michael Mann’s 1999 film “The Insider” and the theory of “tortious interference.”

P.S. 2: Friday, June 10, 2016

Gawker has filed for Chapter 11 bankruptcy, but it is by no means “dead”; Vox story by Timothy B. Lee today.

Mother Jones faced a massive lawsuit in Idaho from GOP donor Frank VanderSloot and his company Melaleuca, in a complicated situation connected to an earlier incident involving a gay reporter in Idaho.  The details would make a movie by themselves;  you can read it all here.  But the narrative shows the power of the super-wealthy to use what amount to SLAPP suits to control publicity that could expose them.