The use of artificial intelligence (AI) on social media has been flagged as a potential threat to sway voter sentiment in the upcoming 2024 presidential election in the United States.
Major tech companies and U.S. government entities have been actively monitoring the disinformation landscape. On Sept. 7, Microsoft's research unit, the Microsoft Threat Analysis Center (MTAC), published research observing "China-affiliated actors" leveraging the technology.
The report said these actors utilized AI-generated visual media in what it called a “broad campaign” that had a heavy emphasis on “politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols.”
It said it anticipates that China "will continue to hone this technology over time," though it remains to be seen how the technology will be deployed at scale for such purposes.
On the other hand, AI is also being employed to detect such disinformation. On Aug. 29, Accrete AI deployed software for real-time disinformation threat prediction from social media under a contract with the U.S. Special Operations Command (USSOCOM).
Prashant Bhuyan, founder and CEO of Accrete, said that deepfakes and other "social media-based applications of AI" pose a serious threat.
During the previous U.S. election in 2020, troll farms reached 140 million Americans each month, according to an MIT report. Troll farms are "institutionalized groups" of internet trolls that aim to interfere with political opinions and decision-making.
Already, regulators in the U.S. have been looking at ways to regulate deepfakes ahead of the election.