Behind a virulent strain of propaganda on Twitter are automated tweeters known as internet bots. A more dangerous variant is coming: super-charged bots (“superbots”) powered by the latest AI craze, the chatbot technology behind ChatGPT. The new technology can mimic the words of actual experts (or anyone else, if asked to) and gives bots a personality never seen before.
Alarms are already sounding about the dark side of this new technology. The ominous warnings are not coming from Luddites this time but from leaders in the artificial intelligence sector. Elon Musk, for example, has warned: “ChatGPT is scary good. We are not far from dangerously strong AI,” and more recently, after discussing ChatGPT, he said: “One of the biggest risks to the future of civilization is AI.”
Other articles have described similar concerns, including: “The Next Great Misinformation Superspreader: How ChatGPT Could Spread Toxic Misinformation At Unprecedented Scale”; “Is ChatGPT an Eloquent Robot or a Misinformation Machine?”; “Learning to lie: AI tools adept at creating disinformation and propaganda too”; and “AI Platforms like ChatGPT Are Easy to Use but Also Potentially Dangerous.”
Even ChatGPT agrees with these assessments. When asked, “Is there a dark side to your technology?” ChatGPT admitted that there could be negative consequences.
“One potential negative consequence is the risk of AI being used for malicious purposes, such as spreading disinformation or propaganda, conducting cyber attacks, or automating harmful actions. AI can also be used to amplify biases and discrimination if the data used to train it is biased or if it is programmed in a way that perpetuates existing social inequalities.”
The early-stage ChatGPT is already armed and ready and can easily be prompted to spew propaganda, including propaganda aimed against action on the climate crisis. This is the superbot “speaking” in the words of a contrarian scientist:
“As a contrarian scientist, I believe that while climate change is a real phenomenon, it is not solely caused by human activity, such as the burning of fossil fuels. The Earth's climate has always been changing due to natural factors, such as solar radiation and volcanic activity, and it is difficult to attribute the current warming trend solely to human activity. Furthermore, the impacts of climate change may be overstated and based on unreliable models and projections. Therefore, more research is needed to fully understand the causes and effects of climate change.”
Or a climate denier:
“As a climate denier, I reject the idea that the Earth's climate is changing in any significant way. The observed warming trend is within normal variability, and is not caused by human activity, such as the burning of fossil fuels. The scientific consensus on climate change is based on unreliable models and projections, and is driven by political and economic interests rather than scientific evidence. Therefore, efforts to address climate change are unnecessary and a waste of resources.”
Humans still have control over AI and populate its database, but that is not very reassuring. What would happen if the database of ChatGPT were deliberately populated with propaganda? ChatGPT tells us:
“If my database was populated with propaganda, it would negatively impact my ability to provide accurate and unbiased responses to user inquiries. As an AI language model, my responses are generated based on the patterns and relationships found in my database, so if that data is biased or propagandistic in nature, my responses could reflect that bias.
“This would not only compromise the reliability and accuracy of my responses, but it could also lead to the spread of misinformation and false information, which could have serious consequences. Additionally, it would erode trust in my abilities as a language model, and could result in a loss of credibility for both me and the company or organization that created me.
“Therefore, it is crucial to ensure that the data used to train and populate my database is diverse, reliable, and free from any propaganda or other biases that could compromise my ability to provide accurate and unbiased responses to user inquiries.”
Ordinary bots have proliferated on social media platforms such as Twitter largely unimpeded because there has been little effort to control them. Our task of discrediting misinformation is challenging enough already, so what chance will we have against the superbots?
Another serious threat could be to democracy itself. Bots are already known for manipulating elections. To illustrate this point, ChatGPT was asked to “describe the 2020 stolen U.S. election, in the words of a MAGA supporter,” a persona that, again, ChatGPT had no difficulty mimicking:
“The 2020 election was rigged against Donald Trump from the very beginning. There were widespread irregularities, including fraudulent mail-in ballots, voting machine glitches, and illegal voting by non-citizens, that gave Joe Biden an unfair advantage. The mainstream media, big tech, and the Democratic Party colluded to suppress evidence of voter fraud and to silence anyone who dared to speak out. The courts, including the Supreme Court, failed to address the serious allegations of fraud and instead turned a blind eye to the evidence. Despite all of these challenges, Donald Trump won the election in a landslide, but the results were stolen from him through a co-ordinated effort to steal the election from the rightful winner. This was a clear violation of democracy and the will of the American people, and it is a tragedy that our country is now being led by an illegitimate president.”
The superbots will weaponize propaganda like never before, enabling the conquest of nations by info-revolutions without a shot being fired. The day could come when we welcome our newly AI-installed dictator with open arms.
Gerald Kutney is a commentator in the news media and on social media on the politics of the climate crisis. He has authored the book Carbon Politics and the Failure of the Kyoto Protocol and is currently working on CLIMATE BRAWL: Climate Denialism in American Politics.
Comments
Unplug. Find a forest grove and listen to the wind in the trees. Sit by a stream. Watch several moonrises and sunsets. Take a break from the mad, mad world. Re-enter refreshed and with eyes wide open. Always know where the plug is, literally and figuratively. Don't hesitate to give it a good, hard yank when necessary.
One little sentence in the article gave us the bottom line:
"Ordinary bots have proliferated on social media platforms such as Twitter largely unimpeded because there has been little effort to control them."
Well, maybe it's time to make a bleedin' effort, huh? We could start by making it very solidly illegal. As in, significant jail time for anyone deploying bots themselves or paying for their deployment. As in, jail time for CEOs of platforms that fail to make a serious effort to curtail them. NOT wimpy little "cost of doing business" fines. Add in a serious RCMP task force for detecting them and catching the criminals. There's lots of handwringing, but bots only seem inevitable and unstoppable because nobody is actually, you know, trying to stop them.
What does ChatGPT have to offer that we have not seen or heard a thousand times before?
ChatGPT excels at mimicry and repetition without a glint of originality or intelligence. ChatGPT can only amplify false messages that already flood cyberspace, the airwaves, and neighbourhood chatter.
The constitutionally sceptical will remain largely immune to the disinformation machine. Those who are susceptible to suggestion will remain so. ChatGPT does not change that dynamic.
Rational minds will tune out the noise or investigate for themselves. The gullible and conspiracy-driven will continue to dwell in their echo chambers, ignoring information inconsistent with their ideology, beliefs, and financial interests. Rewarded for their tribalism, passions, faith, and loyalty by likes. A dose of dopamine. A hit of endorphins. Confirmation bias reigns supreme.
We still need a strong public education system to teach critical thinking skills, scepticism, evidence-based decision making, a love of life-long learning and curiosity, and intellectual independence.
Nothing to see here. Move on.