MEMEWARS: DOJ Proposal to Reform Section 230 Protects POTUS on Twitter But No Redress for Patriots

The Department of Justice on Wednesday sent draft legislation to Congress intended to reform Section 230 of the Communications Decency Act.

The move is the culmination of a year of preparation to remedy the ills of the social media giants, ills enabled by original internet laws that erred on the side of least interference at the outset; the proposal would now force Twitter and Facebook to play by the rules.

The biggest changes will be a specific definition of what cannot be censored and a defensive and offensive plan to root out criminals who run their businesses over the platforms.

The First Amendment question could be partially resolved if the reforms work, with the end result being that Patriot voices online will cease to be censored or removed for contradicting the official narrative promoted by the monolithic mainstream media.

For that part of the new rules, what’s proposed is to treat a simple Retweet as just that: information provided by someone else. In the case of POTUS, who is often censored by Twitter for Retweeting “the wrong” thing, this would be a fix.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” according to the document.

[Embedded document: Section 230 draft legislation cover letter]

Blocking that post would then be prohibited unless the platform acts “to restrict access to or availability of specific material that the provider or user has an objective reasonable belief violates its terms of service or use.”

Except in the broadest sense, the proposal doesn’t address specific tactics being used to silence voices, such as shadowbanning, throttling readership, anonymous calls for suspension, or other tricks, like incessant demands to verify a user’s account or other ways of making the whole experience so upsetting that you quit the platform.

On the issue of censorship, the proposal provides a set of criteria the internet providers should apply in order to practice equality of speech across their platforms. At the forefront is a clearly stated, publicly available, specific set of criteria that the provider will use to moderate any speech deemed unacceptable.

Hate speech is grounds for censoring, and a platform can be held civilly liable by its users, meaning that internet providers will be responsible for knowingly permitting these types of posts.

[Embedded document: Section 230 draft legislation redline]

If material is to be censored, then the provider must send a timely notice to the provider of that material stating the platform’s reasonable “factual” basis for the restriction of access and provide “a meaningful opportunity to respond.” This requirement appears under the “Exclusion from Good Samaritan Immunity” section, in the “Bad Samaritan Carve-Out.”

If the internet provider knowingly “disseminated or engaged in” material or activity that “violated Federal criminal law”; “had actual notice of that material’s or activity’s presence on the service and its illegality”; and “failed to remove, restrict or prevent access to the material, report it to the authorities and retain the material for evidence,” then the provider acted in bad faith, as a “Bad Samaritan,” and will be held accountable by law.

The draft legislative text implements reforms that the Department of Justice deemed necessary in its June recommendations and is the result of a year-long review of the outdated statute behind which internet giants have often hidden. The legislation also executes President Trump’s directive from the Executive Order on Preventing Online Censorship.

“For too long Section 230 has provided a shield for online platforms to operate with impunity,” U.S. Attorney General William Barr said. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”

In a cover letter addressed to Vice President Mike Pence, AG Barr writes, “The internet has drastically changed since 1996. Many of today’s online platforms are no longer nascent companies but have become titans of industry. Platforms have also changed how they operate. They no longer function as simple forums for posting third-party content, but use sophisticated algorithms to suggest and promote content and connect users. Platforms can use this power for good to promote free speech and the exchange of ideas, or platforms can abuse this power by censoring lawful speech and promoting certain ideas over others.”

Current interpretations of Section 230 have enabled online platforms to hide behind that immunity and censor lawful speech in bad faith, in ways inconsistent with their own terms of service.

Meanwhile, in those terms of service, which virtually nobody reads and which are updated frequently, the companies spell out all the technical grounds at their disposal, grounds that can be selectively applied only to the accounts they don’t like in order to effectively censor large groups of customers.

[Embedded document: Section 230 draft legislation, section-by-section analysis]

This legislative proposal aims to correct that by revising and clarifying the existing language of Section 230, replacing vague terms that may be used to shield arbitrary content moderation decisions with more concrete language that gives greater guidance to platforms, users, and courts.

This proposal also adds language to the definition of “information content provider” to clarify when platforms should be responsible for speech that they affirmatively and substantially contribute to or modify.

That part will end the debate as to whether the companies are responsible for moderating content — like a publisher — or providing free and fair access to all — as a disinterested platform.

There is currently a level of complicity among the tech giants when it comes to online criminal activity: in the same way the platforms can bring together many strangers for friendly purposes, they can bring together criminals from around the world for nefarious aims, particularly global trafficking. These activities are promoted online without censorship; law enforcement is hindered by the tech companies when it attempts to investigate these crimes; and the platforms glorify much of the activity by allowing it to be displayed openly.

Part of the proposed amendment to Section 230 is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation claims.

“Online platforms play a vital role…But these platforms can abuse those positions of trust, whether by deciding which voices they’re going to amplify & which they’re going to throttle & by improperly tracking & collecting user data & even facilitating criminal activity,” AG Barr said while presenting the proposal.

Section 230 immunity is meant to incentivize and protect Good Samaritans on the internet. Platforms that purposely solicit and facilitate harmful criminal activity — in effect, online Bad Samaritans — do not receive the benefit of this immunity.

A platform should not receive blanket immunity for continuing to host known criminal content on its services despite repeated pleas from victims to take action. The tech companies will therefore have an incentive to act in good faith and stop promoting or allowing illicit, dangerous, or illegal speech online; otherwise they will be held legally accountable both in federal criminal court and in civil court through victims’ compensation claims.

The DoJ also proposes to more clearly carve out federal civil enforcement actions from Section 230. Although federal criminal prosecutions have always been outside the scope of Section 230 immunity, online crime is a serious and growing problem, and there is no justification for blocking the federal government from civil enforcement on behalf of American citizens.

As part of this last endeavor the DoJ is proposing to carve out certain categories of civil claims that are far outside Section 230’s core objective, including offenses involving child sexual abuse, terrorism, and cyberstalking.

It’s important to note that the proposals are carefully tailored so as not to sweep in innocent or accidental acts or omissions by online platforms. Also of note, other than the court-order provision, the exclusions are limited to distribution or facilitation of third-party content that would violate federal criminal law.

Providing platforms with civil immunity for facilitating such egregious illicit content is inconsistent with the purpose of Section 230 of the Communications Decency Act to encourage platforms to make the internet a safer place for children.

The proposal also provides an explicit carve-out for claims brought under the federal antitrust laws, which exist to promote competition, and clarifies that Section 230 immunity is unavailable to internet companies when they assert it against claims of anticompetitive conduct.

“The proposed legislation would also amend the current 230 section (e) to expressly confirm that the immunity provided by this statute does not apply to civil enforcement actions brought by the federal government,” AG Barr wrote. “The proposed provision is narrowly tailored to clarify that the government’s civil enforcement capabilities are uninfringed by Section 230 without opening the floodgates to private damages lawsuits.”

This means that even though law enforcement has always had the ability and jurisdiction to monitor and take action against criminals using electronic communications such as the internet, cell phones, and so on, many tech companies have tried to hide behind the right to privacy and Section 230 immunity. This part of the proposal would eliminate that shield.

These amendments, working together, will be critical first steps in enabling victims to seek redress for the most serious of online crimes. Four new subsections spell out the platforms’ responsibility to take down illegal posts from criminal operators, to respond to users’ requests that they do so, and to take responsibility for knowingly allowing bad operators on their platforms.

It also specifies the extreme cases in which posts and users should be removed, but beyond making the case that a Retweeted post was wrongly censored, there does not seem to be any avenue for suing the platforms for suspending your account!

RELATED ARTICLES:

MEMEWARS: POTUS Trump Smacks Twitter Over 1A with Executive Order

Facebook’s Silencer: 1st Amendment Rights Shot Full of Holes

MEMEWARS: About That Blue Bird Director — READ THIS!

MEMEWARS: POTUS Trump Expects Twitter Ban Before Elections

Fuel for the Fight — @QMN1775USA — You can’t suspend our spirit, you bastids!
