Plans to steal the 2020 election are being rolled out through the use of artificial intelligence (AI). A new political action committee, Defeat Disinfo, is advised by retired Army General Stanley McChrystal of The McChrystal Group and uses technology originally developed to combat ISIS.
Proponents of Defeat Disinfo seek to serve a domestic political goal: affecting the outcome of the 2020 election. Under the guise of combating disinformation about President Trump’s response to COVID-19, many fear the technology is being weaponized to manipulate voters in November.
Gen. McChrystal, best known for leading the Joint Special Operations Command, has criticized U.S. President Donald J. Trump over the departure of Secretary of Defense James Mattis.
The program finds its roots in the Defense Advanced Research Projects Agency (DARPA), a taxpayer-funded government research agency. The brainchild of President Dwight D. Eisenhower, DARPA was the president’s response to the Soviet Union’s 1957 launch of Sputnik.
The agency has since morphed into a myriad of government projects with titles so long that they have been truncated into an alphabet soup of acronyms that people recognize but cannot decode. This is the same agency that developed Facebook/LifeLog, a program that collects data by persuading users to give up their personal information willingly while presenting itself as a social media platform for connecting people. The populace has responded as intended.
Defeat Disinfo’s primary objective is to intervene by identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country – in some cases paying users with large followings to take sides against the president.
Conservative users of social media platforms have noticed that their accounts are being censored, shadowbanned, or otherwise manipulated. Many have been sanctioned, their accounts locked or deleted outright.
The Semantic Forensics (SemaFor) program seeks to develop technologies that make the automatic detection, attribution, and characterization of falsified media assets a reality. The goal of SemaFor is to develop a suite of semantic analysis algorithms that dramatically increase the burden on the creators of falsified media, making it exceedingly difficult for them to create compelling manipulated content that goes undetected.
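DARPA has not published SemaFor’s algorithms, but one classic forensic cue this family of techniques relies on is statistical inconsistency: a region spliced into an image often carries different noise characteristics than its surroundings. The sketch below is purely illustrative, with synthetic data and invented function names (it is not SemaFor code): it flags image blocks whose noise variance is an outlier relative to the rest of the image.

```python
import random
import statistics

random.seed(0)

def make_image(h=32, w=32, splice=(8, 8, 16, 16)):
    """Synthetic grayscale image: a uniform scene with one 'spliced' patch
    whose noise level differs from the rest (a common forensic cue)."""
    y0, x0, y1, x1 = splice
    img = [[128 + random.gauss(0, 2) for _ in range(w)] for _ in range(h)]
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = 128 + random.gauss(0, 10)  # noisier inserted patch
    return img

def block_variances(img, b=8):
    """Noise variance of each non-overlapping b x b block."""
    h, w = len(img), len(img[0])
    out = {}
    for by in range(0, h, b):
        for bx in range(0, w, b):
            vals = [img[y][x] for y in range(by, by + b)
                              for x in range(bx, bx + b)]
            out[(by, bx)] = statistics.pvariance(vals)
    return out

def flag_splices(img, b=8, k=3.0):
    """Flag blocks whose variance far exceeds the image-wide median."""
    v = block_variances(img, b)
    med = statistics.median(v.values())
    return [pos for pos, var in v.items() if var > k * med]

img = make_image()
print(flag_splices(img))  # only the block covering the spliced patch is flagged
```

Real forensic systems use far richer cues (compression artifacts, lighting, semantic contradictions), but the shape of the problem is the same: model what is statistically normal, then flag what deviates.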
“At the intersection of media manipulation and social media lies the threat of disinformation designed to negatively influence viewers and stir unrest,” said Dr. Matt Turek, a program manager in DARPA’s Information Innovation Office (I2O).
Altered audio, images, video, and text continue to permeate social media platforms, manipulating the public into believing false narratives. Media-manipulation capabilities are no longer limited to high-dollar enterprises; individual content creators can access the same technology to craft false media narratives.
DARPA is issuing a new Artificial Intelligence Exploration (AIE) Opportunity entitled Techniques for Machine Vision Disruption (TMVD), which invites submissions of innovative basic or applied research concepts in the technical domain of disrupting machine vision systems without detailed knowledge of their internal architecture or how they were trained. According to a recent DARPA announcement, a selected proposal will result in an award of an Other Transaction (OT) for Prototype Projects not to exceed $1,000,000.
The goal of the TMVD effort is to develop specific techniques to disrupt neural network-based computer vision technology, a capability some see as a way to counteract recent user manipulation and shadowbanning on platforms such as Facebook and Twitter.
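To make the announcement concrete: "disrupting machine vision without detailed knowledge of internal architecture" usually means a decision-based (query-only) attack, in the spirit of the published Boundary Attack. The sketch below is a toy under stated assumptions: the "model" is an invented stand-in (a brightness threshold), not any real platform's system, and the walk toward the original is a simplified deterministic version. What it demonstrates is the essential trick: using nothing but label queries, shrink a large perturbation until it is small while keeping the model fooled.

```python
def black_box_classify(img):
    """Stand-in for a vision model we can query but not inspect:
    calls an image 'bright' if its mean pixel value exceeds 100."""
    flat = [p for row in img for p in row]
    return "bright" if sum(flat) / len(flat) > 100 else "dark"

def blend(a, b, alpha):
    """Move image a a fraction alpha of the way toward image b."""
    h, w = len(a), len(a[0])
    return [[(1 - alpha) * a[y][x] + alpha * b[y][x] for x in range(w)]
            for y in range(h)]

def decision_based_attack(orig, start, classify, steps=100, alpha=0.05):
    """Query-only attack: 'start' is already misclassified; walk it toward
    the original image, keeping each step only if the wrong label survives.
    The result is a much smaller perturbation of 'orig' that still fools
    the model -- no gradients or internals are ever consulted."""
    target = classify(start)
    assert classify(orig) != target, "start must carry the flipped label"
    adv = start
    for _ in range(steps):
        cand = blend(adv, orig, alpha)
        if classify(cand) == target:
            adv = cand
    return adv

dark = [[50.0] * 8 for _ in range(8)]    # the model labels this "dark"
white = [[255.0] * 8 for _ in range(8)]  # the model labels this "bright"
adv = decision_based_attack(dark, white, black_box_classify)
print(black_box_classify(adv))  # still "bright", yet far closer to `dark`
```

Attacks on real convolutional networks add random exploration and thousands of queries, but the black-box premise, label access only, is exactly what the TMVD call describes.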
AI computer vision has improved to the point of superhuman performance, with real-time systems that can detect, classify, and segment objects within a complicated image, much like HAL 9000 in 2001: A Space Odyssey.
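As a concrete, if drastically simplified, illustration of the "segment" step: the toy below labels the connected objects in a binary grid using a flood fill. Real segmentation networks operate on natural images rather than 0/1 grids, but their output has the same shape, a per-pixel object label.

```python
def segment(grid):
    """Label 4-connected regions of 1s in a binary grid (toy segmentation)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1 and labels[y][x] == 0:
                n += 1                       # new object found
                stack = [(y, x)]
                labels[y][x] = n
                while stack:                 # flood-fill the whole region
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = n
                            stack.append((ny, nx))
    return labels, n

scene = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
]
labels, n = segment(scene)
print(n)  # 2 distinct objects found
```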
Sometimes, reality is stranger than fiction!