SESTA clears Senate committee, and Congress seems serious about stopping trafficking, even if it requires sacrifices from Internet users, and even if the bill seems superfluous

The Electronic Frontier Foundation has reported that the Senate Commerce Committee has approved a version of SESTA, the Stop Enabling Sex Traffickers Act, S. 1693.  Elliot Harmon’s article calls it “still an awful bill”.  Harmon goes into the feasibility of using automated filters to detect trafficking-related material, which very large companies like Google and Facebook might be halfway OK with.  We saw this debate over filtering in the COPA trial more than a decade ago (I attended one day of that trial in Philadelphia in October 2006).  No doubt, automated filtering would cause a lot of false positives and implicit self-censoring.

Apparently the bill contains or uses a “manager’s amendment”  (text) floated by John Thune (R-SD) which tries to deal with the degree of knowledge that a platform may have about its users.  The theory seems to be that it is easy to recognize the intentions of customers of Backpage but not of a shared hosting service. Sophia Cope criticizes the amendment here.

Elliot Harmon also writes that the Internet Association (which represents large companies like Google) has given some lukewarm support to modified versions of SESTA, which would not affect large companies as much as small startups that want user-generated content.  It’s important to note that SESTA (and a related House bill) could make it harder for victims of trafficking to discuss what happened to them online, an unintended consequence, perhaps.  Some observers have said that the law on sex trafficking should be patterned after the law on child pornography (which seems to work without too much interference with users), and that the law is already “there” now.

But EFF has published a historical summary by Cindy Cohn and Jamie Williams that traces the history of Section 230 all the way back to a possibly libelous item on an AOL message board regarding Oklahoma City (the Zeran case).  Then others wanted to punish Craigslist and other sites for allowing users to post ads that were discriminatory in a civil rights sense.  The law needs to recognize the difference between a publisher and a distributor (and a simple utility, like a telecom company, which can migrate us toward the network neutrality debate).  Facebook and Twitter are arguably a lot more involved with what their users do than are shared hosting sites like BlueHost and Verio, an observation that seems to get overlooked.  It’s interesting that some observers think this puts Wikipedia at particular risk.

I don’t have much of an issue with my blogs, because the volume of comments I get these days is small (thanks to the diversion by Facebook) compared to 8 years ago.  I should add that Section 230 would not protect me when I accept a guest post, since I really have become the “publisher”; so if a guest post is controversial, I tend to fact-check some of the content (especially accusations of crimes) myself online.

I’d also say that a recent story by Mitch Stoltz about Sci-Hub, relating to the Open Access debate which, for example, Jack Andraka has stimulated in some of his TED Talks, becomes relevant (in the sense that the DMCA Safe Harbor is the analogue of Section 230 in the copyright world).  A federal court in Virginia ruled against Sci-Hub (Alexandra Elbakyan) recently after a complaint by a particular science publisher, the American Chemical Society.  But the ruling also put intermediaries (ranging from hosting companies to search engines) at unpredictable risk if they support “open access” sites like this.  The case also runs some risk of conflating copyright issues with trademark, but that’s a bit peripheral to discussing 230 itself.

Again, I think we have a major break in our society over the value of personalized free speech (outside the control of organizational hierarchy and aggregate partisan or identity politics).  It’s particularly discouraging to read reports of campus surveys in which students seem to believe that safe spaces are more important than open debate, and that some things should not be discussed openly (especially involving “oppressed” minorities) because debating them implies that the issues are not settled and that societal protections could be taken away again by future political changes (Trump doesn’t help).  We’ve noted here a lot of the other issues besides defamation, privacy, and copyright; they include bullying, stalking, hate speech, terror recruiting, fake news, and even manipulation of elections (an issue we already had an earlier run-in about in the mid-2000s over campaign finance reform, well before Russia and Trump and even Facebook).  So it’s understandable that many people, maybe used to tribal values and culture, could view user-generated content as a gratuitous luxury for some (the more privileged, like me) that diverts attention from remedying inequality and protecting minorities.  Many people think everyone should operate only by participating in organized social structures run top-down, but that throws us back, at least slouching toward authoritarianism (Trump is the obvious example).  That is how societies like Russia, China, and even Singapore see things (let alone the world of radical Islam, or the hyper-communism of North Korea).

The permissive climate for user-generated content that has evolved, almost by default, since the late 1990s, seems to presume individuals can speak and act on their own, without too much concern about their group affiliations.  That idea from Ayn Rand doesn’t seem to represent how real people express themselves in social media, so a lot of us (like me) seem to be preaching to our own choirs, and not “caring” personally about people out of our own “cognitive”  circles.  We have our own kind of tribalism.

(Posted: Wednesday, Nov. 15, 2017 at 2 PM EST)


Will user-generated public content be around forever? The sex-trafficking issue and Section 230 are just the latest problem

It used to be very difficult to “get published”.  Generally, a third party would have to be convinced that consumers would really pay to buy the content you had produced.  For most people that usually consisted of periodical articles and sometimes books.  It was a long-shot to make a living as a best-selling author, as there was only “room at the top” for so many celebrities.  Subsidy “vanity” book publishing was possible, but usually ridiculously expensive with older technologies.

That started to change particularly in the mid 1990s as desktop publishing became cheaper, as did book manufacturing, to be followed soon by POD, print on demand, by about 2000.  I certainly took advantage of these developments with my first “Do Ask Do Tell” book in 1997.

Furthermore, by the late 1990s, it had become very cheap to have one’s own domain and put up writings for the rest of the world to find with web browsers.  And the way search engine technology worked by say 1998, amateur sites with detailed and original content had a good chance of being found passively and attracting a wide audience.  In addition to owned domains, some platforms, such as Hometown AOL at first, made it very easy to FTP content for unlimited distribution.  At the same time, Amazon and other online mass retail sites made it convenient for consumers to find self-published books, music, and other content.

Social media, first with Myspace and later with the much more successful Facebook, was at first predicated on the idea of sharing content with a known, whitelisted audience of “friends” or “followers”.  In some cases (Snapchat), there was an implicit understanding that the content was not to be permanent.  But over time, many social media platforms (most of all Facebook, Twitter, and Instagram) were often used to publish brief commentaries and links to provocative news stories on the Web, as well as videos and images of personal experiences.  Sometimes they could be streamed live.  Even though friends and followers were most likely to see these posts (curated by feed algorithms based partly on popularity, in the case of Facebook), many of them were public for all to see.  Therefore, an introverted person like me who does not like “social combat” or hierarchy, and does not like to be someone else’s voice (or to need someone else’s voice), could become effective in influencing debate.  It’s also important that modern social media were supplemented by blogging platforms, like Blogger, WordPress, and Tumblr, which, although they did use the concept of a “follower”, were more obviously intended for general public availability.  The same was usually true of a lot of video content on YouTube and Vimeo.

The overall climate regarding self-distribution of one’s own speech to a possibly worldwide audience seemed permissive, in western countries and especially the U.S.   In authoritarian countries, political leaders would resist.  It might seem like an admission of weakness that an amateur journalist could threaten a regime, but we saw what happened, for example, with the Arab Spring.  A permissive environment regarding distribution of speech seemed to undercut the hierarchy and social command that some politicians claimed they needed to protect “their own people.”

Gradually, challenges to self-distribution evolved.  There was an obvious concern that children could find legitimate (often sexually oriented) content aimed at cognitive adults.  The first big problem was the Communications Decency Act of 1996.  The censorship portion of this would be overturned by the Supreme Court in 1997 (I had attended the oral arguments).  Censorship would be attempted again with the Child Online Protection Act, or COPA, for which I was a sublitigant under the Electronic Frontier Foundation.  It would be overturned in 2007 after a complicated legal battle that reached the Supreme Court twice.  But the 1996 Communications Decency Act, more properly known as part of the Telecommunications Act, also contained a desirable provision: that service providers (ranging from blogging or video-sharing platforms to telecommunications companies and shared hosting companies) would be shielded from downstream liability for user content for most legal problems (especially defamation).  That is because it was not possible for a hosting company or service platform to prescreen every posting for possible legal problems (which is what book publishers do, and yet they still require author indemnification!).  Web hosting and service companies were required to report known (i.e., user-reported) child pornography and sometimes terrorism promotion.

At the same time, in the copyright infringement area, a similar provision developed, the Safe Harbor provision of the Digital Millennium Copyright Act of 1998, which shielded service providers from secondary liability for copyright infringement as long as they took down offending content from copyright owners when notified.  Various threats have developed to the mechanism, most of all SOPA, which got shot down by user protests in early 2012 (Aaron Swartz was a major and tragic figure).

The erosion of downstream liability protections would logically become the biggest threat to whether companies can continue to offer users the ability to put up free content without gatekeepers and participate in political and social discussions on their own, without proxies to speak for them, and without throwing money at lobbyists.  (Donald Trump told supporters in 2016, “I am your voice!”  Indeed.  Well, I don’t need one as long as I have Safe Harbor and Section 230.)

So recently we have seen bills introduced in the House (ASVFOSTA, the “Allow States and Victims to Fight Online Sex Trafficking Act”) in April (my post), and in the Senate (SESTA, the “Stop Enabling Sex Traffickers Act”) on Aug. 1 (my post).  These bills, supporters say, are specifically aimed at sex advertising sites, most of all Backpage.  Under current law, plaintiffs (young women or their parents) have lost suits because Backpage can claim immunity under 230.  There have been other controversies over the way some platforms use 230, especially Airbnb.  The companies maintain that they are not liable for what their users do.

Taken rather literally, the bills (especially the House bill) might be construed as meaning that any blogging platform or hosting provider runs a liability risk if a user posts a sex trafficking ad or promotion on the user’s site.  There would be no reasonable way Google or Blue Host or GoDaddy or any similar party could anticipate that a particular user will do this.  Maybe some automated tools could be developed, but generally most hosting companies depend on users to report illegal content.  (It is possible to screen images against digital fingerprints of known child pornography, and to screen some videos and music files for possible copyright infringement; Google and other companies do some of this.)
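For illustration, that kind of image screening amounts to a set-membership test on file fingerprints.  The sketch below is a minimal stand-in, assuming a hypothetical blocklist of SHA-256 digests; production systems (PhotoDNA, for example) use perceptual hashes that survive resizing and re-encoding, which plain cryptographic hashes do not.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known-illegal files.
# The single entry here is just the digest of empty input, used as a
# harmless placeholder for the sketch.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def flag_uploads(paths):
    """Yield the paths whose digest matches the blocklist."""
    for p in paths:
        if sha256_of(p) in BLOCKLIST:
            yield p
```

The point of the sketch is the asymmetry the bills ignore: matching against known fingerprints is cheap and automatable, but it can only catch files someone has already identified; it cannot anticipate a novel ad.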

Rob Portman, a sponsor of the Senate bill, told CNN and other reporters that normal service and hosting companies are not affected, only sites that know they host sex ads.  So he thinks he can target sites like Backpage, as if they were different.  In a sense, they are: Backpage is a personal commerce-facilitation site, not a hosting company or hosting service (which by definition has almost no predictive knowledge of what subject matter any particular user is likely to post, whether that content may include advertising, or whether it may execute commercial transactions, although the use of “HTTPS everywhere” could become relevant).  Maybe the language of the bills could be tweaked to make this clearer.  It is true that some services, especially Facebook, have become proactive in removing or hiding content that flagrantly violates community norms, like hate speech (and that itself gets controversial).

Eric Goldman, a law professor at Santa Clara, offered analysis suggesting that states might be emboldened to pass laws requiring pre-screening of everything, for other problems like fake news.  The Senate bill particularly seems to encourage states to pass their own add-on laws, which could try to require pre-screening.  It’s not possible for an ISP to know in advance whether any one of the millions of postings made by customers contains sex trafficking, but a forum moderator or blogger monitoring comments probably could.  Offhand, it would seem that allowing a comment with unchecked links (which I often don’t navigate because of malware fears) could run legal risks (if a link led to a trafficking site under the table).  Again, a major issue should be whether the facilitator “knows”.  Backpage is much more likely to “know” than a hosting provider.  A smaller forum host might “know” (but Reddit would not).
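To make that distinction concrete: a small forum host who already moderates comments could pre-screen the links in them, in a way a mass-market host handling millions of postings cannot.  A minimal sketch, assuming a hypothetical, hand-maintained domain blocklist (the domain names below are invented):

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to host trafficking ads; a
# real moderator would source something like this from a maintained feed.
BLOCKED_DOMAINS = {"ads.example-trafficking-site.com", "backpage-like.example"}

URL_RE = re.compile(r"https?://[^\s)\"'<>]+")

def suspicious_links(comment: str) -> list:
    """Return URLs in a comment whose host (or a parent domain) is blocklisted."""
    hits = []
    for url in URL_RE.findall(comment):
        host = (urlparse(url).hostname or "").lower()
        # Match the exact host or any parent domain on the list,
        # so a subdomain of a blocked domain is also caught.
        parts = host.split(".")
        candidates = {".".join(parts[i:]) for i in range(len(parts))}
        if candidates & BLOCKED_DOMAINS:
            hits.append(url)
    return hits
```

A moderator could hold any comment returning a non-empty list for manual review; the sketch only illustrates that this scales with a moderator's comment volume, not with an entire platform's.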

From a moral perspective, we have something like the middle school problem of detention for everybody for the sins of a few.  I won’t elaborate here on the moral dimensions of the idea that some of us don’t have our own skin in the game in raising kids or in having dependents, as I’ve covered that elsewhere.  But you can see that people will perceive a moral tradeoff, that user-generated content on the web, the way the “average Joe” uses it, has more nuisance value (with risk of cyberbullying, revenge porn, etc) than genuine value in debate, which tends to come from people like me with fewer immediate personal responsibilities for others.

So, is the world of user-generated content “in trouble”?  Maybe.  It could come down to a business model problem.  It’s true that shared hosting providers charge annual fees for hosting domains, but they are fairly low (except for some security services).  But free content platforms (including Blogger, WordPress, YouTube, Facebook, and Twitter) do say “It’s free” now; they make their money on advertising connected to user content.  A world where people use ad blockers and “do not track” would seem grim for this business model in the future.  Furthermore, a lot of people have “moral” objections to this model, saying that only authors should get the advertising revenue, but that would destroy the social media and UGC (user-generated content) world as we know it.  Consider the POD book publishing world.  POD publishers actually do perform “content evaluation” for hate speech and legal problems, and do collect hefty fees for initial publication.  But lately they have become more aggressive with authors about book sales, a sign that they wonder about their own sustainability.

There are other challenges for those whose “second careers”, like mine, are based on permissive UGC.  One is the weakening of network neutrality rules, as I have covered here before.  The second comment period ends Aug. 17.  The telecom industry, through its association, has said there is no reason for ordinary websites to be treated any differently than they have been, but some observers fear that someday new websites could have to pay to be connected to certain providers (beyond what you pay for a domain name and hosting now).

There have also been some fears in the past that have vanished with time.  One flare-up started in 2004-2005, when some observers suggested that political blogs could violate federal election laws by being construed as indirect “contributions”.  A more practically relevant problem is simply online reputation and the workplace, especially in a job where one has direct reports, underwriting authority, or the ability to affect a firm’s chances of getting business through “partisanship”.  One point that often gets forgotten is that social media accounts can be set up with full privacy settings so that they’re not searchable.  Although that doesn’t prevent all mishaps (just as handwritten memos or telephone calls can get you in trouble at work in the physical world), it could prevent certain kinds of workplace conflicts.  Public access to amateur content could also be a security concern: if an otherwise obscure individual is able to become “famous” online, he could make others besides himself into targets.

Another personal flare-up occurred in 2001, when I tried to buy media perils insurance and was turned down for renewal because of the lack of a third-party gatekeeper.  This issue flared into debate briefly in 2008 but subsided.  But it’s conceivable that requirements could develop that sites (at least through associated businesses) pay for themselves and carry media liability insurance, as a way of helping account for the community hygiene issue of potential bad actors.

All of this said, the biggest threat to online free expression could still turn out to be national security, as in some of my recent posts.  While the mainstream media have talked about hackers and cybersecurity (most of all with elections), physical security for the power grid and for digital data could become a much bigger problem than we thought if we attract nuclear or EMP attacks, either from asymmetric terrorism or from rogue states like North Korea.  Have tech companies really provided for the physical security of their clouds and data given a threat like this?

Note the petition and suggested format for contacting Congress offered by the Electronic Frontier Foundation for bills like SESTA.  It would be useful to know, as a comparison point, how British Commonwealth and European countries handle downstream liability issues.  It’s also important to remember that weakening a service provider’s statutory protection from downstream liability does not automatically create that liability.

(Posted: Thursday, Aug. 3, 2017 at 10:30 PM EDT)

Downstream liability concerns for allowing others to use your business or home WiFi connection, and how to mitigate

A rather obscure liability exposure, both civil and possibly criminal, can arise for landlords, businesses, hotels, or homeowners (especially sharing-economy users) who allow others to use their WiFi hubs “free” as a way to attract business.

Literature on the problem so far, even from very responsible sources, seems a bit contradictory.  The legal landscape is evolving, and it’s clear the legal system has not been prepared to deal with this kind of problem, just as is the case with many other Internet issues.

Most hotels and other venues offering free WiFi take the guest to a strike page when she opens a browser; the guest has to enter a user ID and password and agree to terms and conditions to continue.  This interception can normally be configured on properly equipped routers.  The terms and conditions typically say that the user will not engage in any illegal behavior (especially illegal downloads, or possibly downloading child pornography or planning terror attacks).  The terms may include a legal agreement to indemnify the landlord for any litigation, which in practice has been very uncommon in the hotel business so far.  The router may be programmed to disallow peer-to-peer.
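The interception sequence described above (redirect every request to a strike page until the guest accepts the terms) can be sketched as a toy HTTP handler.  This is only a model of the control flow, under my own invented names; real captive portals enforce acceptance in the router’s firewall rules, not in an application process.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Clients (by IP) that have accepted the terms; a real portal would
# track this state in the router's firewall, not in a Python set.
ACCEPTED = set()

class PortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        if self.path == "/terms":
            # The "strike page": show the terms and an accept link.
            self.respond(200, b"<h1>Guest WiFi Terms</h1>"
                              b"<a href='/accept'>I agree</a>")
        elif self.path == "/accept":
            ACCEPTED.add(client_ip)
            self.respond(200, b"Access granted.")
        elif client_ip in ACCEPTED:
            self.respond(200, b"(normal traffic would pass through here)")
        else:
            # Any other request is redirected to the strike page.
            self.send_response(302)
            self.send_header("Location", "/terms")
            self.end_headers()

    def respond(self, code, body):
        self.send_response(code)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

# To run the sketch locally:
#     HTTPServer(("0.0.0.0", 8080), PortalHandler).serve_forever()
```

The redirect-until-accept loop is the whole trick; everything legally interesting (what the terms say, who is indemnified) lives in the strike page’s text, not in the mechanism.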

There is some controversy in the literature as to whether Section 230 of the 1996 Telecommunications Act would hold hotels and businesses harmless.  But my understanding is that Section 230 has more to do with a content service provider (like a discussion forum host or a blogging service provider) being held harmless for content posted by users, usually against claims of libel or privacy invasion.  A similarly spirited provision in the Digital Millennium Copyright Act of 1998, called Safe Harbor, protects service providers against copyright infringement by users.  Even so, some providers, like Google with its YouTube platform, have instituted automated tools to flag some kinds of infringing content before posting, probably to protect their long-term business model viability.  Whether Section 230 would protect a WiFi host sounds less certain, to me at least.  A similar question might be posed for web hosting companies, although it sounds as though they are generally protected.  Web hosting companies, however, all say in their AUPs that they are required to report child pornography should they happen to find it.  You can make a case for saying that a telecommunications company is like a phone company, a utility, so a hotel or business is just extending a public utility.  (That idea also mediates the network neutrality debate, which is likely to become more uncertain under President Trump.)

Here’s a typical reference on this problem for hotels and businesses.

A more uncertain environment would exist for the sharing economy, especially home sharing services like Airbnb.  Most travelers probably carry their own laptops or tablets and hotspots (since most modern smart phones can work as hotspots) so they may not need to offer it, unless wireless reception is weak in their homes.  Nevertheless, some homeowners have asked about this.  These sorts of problems may even be more problematic for families, where parents are not savvy enough to understand the legal problems their teen kids can cause, or they could occur in private homes where roommates share telecommunications accounts, or where a landlord-homeowner takes in a boarder, or possibly even a live-in caregiver for an elderly relative.  The problem may also occur when hosting asylum seekers (which is likely to occur in private homes or apartments), and less often with refugees (who more often are housed in their own separate apartment units).

It’s also worth noting that even individual homeowners have had problems when their routers aren’t properly secured, and others are able to pick up the signal (which for some routers can carry a few hundred feet) and abuse it.  In a few cases (at least in Florida and New York State) homeowners were arrested for possession of child pornography and computers seized, and it took some time for homeowners to clear themselves by showing that an outside source had hijacked the connection.

Comcast, among other providers, is terminating some accounts after repeated complaints of illegal downloads through a home router.  In some countries, it is possible for a homeowner to lose the right to any Internet connection forever if this happens several times, even if others caused the problem.

Here are a couple of good articles on the problem at How-To Geek and Huffington Post, discussing the Copyright Alert System.  Some of this mechanism came out of the defeated Stop Online Piracy Act (SOPA), whose well-deserved death was engineered in part by Aaron Swartz, “The Internet’s Own Boy”, who tragically committed suicide in early 2013 after enormous legal threats from the Obama DOJ.

Along these lines, it’s well to understand that automated law enforcement and litigation scanning tools that look for violations are becoming more common on the Internet.  It is now possible to scan cloud backups for digital fingerprints of known child pornography images, and it may become more common in the future to look for some kinds of copyright infringement or illegal downloads this way (although content owners are good enough at detecting downloading themselves when it is done through P2P).

Generally, the best advice seems to be to have a router with guest-router options, and to set up the guest account to block P2P and also to set up OpenDNS.  An Airbnb community forum has a useful entry here.  Curiously, Airbnb itself provides a much more cursory advisory here, including ideas like locking the router in a closet (pun).

I have a relatively new router and modem combo from Comcast myself.  I don’t see any directions for how to do this in what came with it.  I will have to call them soon and check into this.  But here is a typical forum source on guest accounts on Xfinity routers.  One reverse concern, if hosting an asylum seeker, could be that the guest needs to use Tor to communicate secretly with others in his or her home country.

It’s important to note that this kind of problem has come some way in the past fifteen years or so.  It used to be that families often had only one “family computer”, and the main concern was illegal content that could be found on a hard drive.  Now the concern migrates to abuse of the WiFi itself, since guests are likely to have their own laptops or tablets and storage devices.  There has also been some evolution in the concept of the nature of liability.  Up until about 2007 or so, it was common to read that child pornography possession was a “strict liability offense”, which holds the computer owner responsible even if a hacker or another user put it there (or if malware did).  In more recent years, police and prosecutors have indeed sounded willing to apply the usual “mens rea” standard.  One of my legacy blogs has a trace of the history of this notion here; note the posts on Feb. 3 and Feb. 25, 2007 about a particularly horrible case in Arizona.  Still, in the worst situations, an “innocent” landlord could find himself banned from Internet accounts.  The legal climate still has to parse this idea of downstream liability (which Section 230 and Safe Harbor address to some extent, while evoking considerable public criticism about the common good), and decide how proactive it wants those who benefit from technology to be in protecting those who do not.

(Posted: Monday, January 9, 2017 at 10:45 PM EST)

Update: Tuesday, Jan 24, 2017, about 5 PM EST

Check out this Computerworld article (Michael Horowitz, “Just say No” [like Nancy Reagan], June 27, 2015) on how your “private hotspot” xfinitywifi works.  There’s more below in the comments I posted.  To me, the legal situation looks ambiguous (I’ve sent a question about this to the Electronic Frontier Foundation; see the PDF link in the Jan. 24 comment).  If you leave your router enabled, someone could sign onto it (it looks as if they need your Xfinity account password, or another password if you changed it).  Comcast seems to think this is “usually” OK because any abuse can be traced to the culprit.


Journalists in peril, even in the U.S. now? What about the “amateurs”?


Margaret Sullivan has an important article on the safety of journalists in the Style section of the Washington Post today, Monday, July 18, 2016: “Free speech in peril, both far and near.”  She talks about the self-defense training journalists took before going to Cleveland for the RNC this week, as well as the illogic of some of the security rules in a state that allows open carry.  She wonders if this week’s Indians baseball game (Progressive Field is nearby, replacing the old “Mistake by the Lake” of my boyhood) will end “Second Amendment 1, First Amendment 0”, very much a visiting team’s non-walkoff.


She does give a nod to the Committee to Protect Journalists and to “PEN America: The Freedom to Write”.

She also talks about the effectiveness of citizen journalism (my post here July 16).  She makes the odd comment that this development adds to the number of people in peril (which in some cases could include people connected to the citizen journalists, like family, if they encounter combative enemies).  She also credits citizen journalists with filling in the details left out by the main media, “keeping them honest” (a trademarkable phrase from Anderson Cooper on CNN).  Still, her article leaves a nagging question about people like me who might not have “paid their dues” the way even Anderson Cooper (or Sebastian Junger) did early in their careers.


Why do authoritarian regimes crack down so hard on “ordinary people” as bloggers?  Do they really fear their power bases are in real peril from what the amateurs expose?  I think it is something more basic and sinister: they imprison people (like Ai Weiwei in China) or hack them (as in some attacks in Bangladesh) “just for authority” (a phrase I used as a child to protest my father), to prove that a political hierarchy imparts real meaning (even if that meaning is “imaginary”).

There is something disturbing, sometimes, about some of my own postings, which seem gratuitous to some people.  Why would I discuss a particular casualty of a random bomb explosion in Central Park in New York on a blog post unless I was prepared “personally” to raise money for the victim?  (There is a personal sensitivity which for now I will skip, but return to later.)  Or, later, why would I present (by YouTube embed) the rant of a “deranged” man who had recently attacked police?

In the latter case, I was discussing “self-published” books, and that particular assailant had created a curious or bizarre series of “self-help” books on Amazon (taken down today).  So I was covering another wrinkle of self-publishing, a very important topic for me.  The danger would be that an impressionable or immature visitor finds the post, watches the video, doesn’t see that it is layered into a discussion of another point, and wants to act on what the video says, out of context.  Am I responsible for that?  (The New Testament might say, well, yes; I must become my brother’s keeper.)  Actually, this posting has a second “layered” point: to present the nature of “combativeness” in many adversaries (the part about actually “fighting back” rather than just protesting hit a nerve ending).  This person was as aggressive and intolerant as anyone in radical Islam, but came from a different source of antagonism.

All of this goes to the subject of “implicit content”, which came up in the COPA trial (2006).  It also came up when I was substitute teaching in 2005, with respect to the context of an online “screenplay” I had authored (details).  The basic point is that I did not have an obvious “purpose” for what looked like self-defamation, so others could presume that it had been intended to incite others.  There was an unbelievable set of coincidences that had set up this incident, however.  The whole concept of “implicit content” could mean that, if as an amateur “citizen” I’m not entitled to be viewed as a “true” journalist (or author), then I should be held accountable for what any unstable person does if he just “looks at the picture” and (as my mother would have said) is “given an idea” by taking a portion of a posting out of its larger “layered” context, as is common in real journalism.  Does the validity of speech depend on the identity of the speaker?  Maybe sometimes.


One other note: I get a little irritated by bombastic pleas from progressive news sites about their fund-raising campaigns, as if I needed them to speak for me.  I don’t need them now, but maybe some day I will.  What if Donald Trump actually wins?

(Published: Monday, July 18, 2016 at 9 PM EDT)