We’ve already seen how those in power have tried to control social media narratives concerning COVID-19, elections, wars, and social movements.
We know that elites want us all on the same page, not questioning anything.
We’re not making this up or stretching the truth.
At this past year’s WEF Annual Meeting in Davos, speakers discussed the EU Digital Services Act, which establishes controls over all information on social media platforms.
The speaker, Ursula von der Leyen—President of the European Commission—emphasized the need for global control over the flow of digital information.
Who would be in control of this information? The WEF “Excellencies” and “Dear Klaus Schwab.” Yikes.
(You can read Ursula von der Leyen’s statements for yourself here.)
Now, imagine how those in power could use AI-generated content to mislead and control the masses.
It's a scary and real possibility.
According to BuiltIn, Stephen Hawking raised alarms about the risks of AI singularity. In his words, “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
Read on to learn more about AI singularity and the dangers it poses.
What Is AI Singularity?
Singularity is the tipping point where AI becomes indistinguishable from—or even superior to—human intelligence.
We are already headed that way and may arrive there much sooner than expected.
For instance, when I researched AI singularity using ChatGPT, I received the following answer:
AI would become superhuman, able to copy itself and improve itself faster than humans. Computers would take over the most important decisions and problems.
If AI is already saying this about itself, we need to recognize that AI singularity is not merely hypothetical—it is real and should be anticipated.
Glenn Beck reports:
This is why Microsoft unplugged their first chatbot. It was talking to another chatbot, and everybody was really excited until about fifteen minutes in. Because of machine learning, it started using a new language that the other machine understood … that they had just taught each other quickly. And it started having a conversation we couldn’t follow or understand, and we unplugged it.
AI singularity is a terrifying, real threat.
[Related Read: Preparing for AI: Lessons from the Past]
How AI Can Create and Amplify Narratives
Experts are genuinely concerned that chatbots, such as ChatGPT, will be used to create and spread persuasive propaganda.
So much so that in February 2023, the Department of Justice (DOJ) and the Commerce Department announced the creation of the Disruptive Technology Strike Force, with a mission to prevent nation-state “adversaries” from acquiring “disruptive” technologies.
Let’s look at some ways AI can create and amplify misleading narratives:
- Fake news that appears legitimate. AI chatbots make it easy to create and spread fake news, and it goes far beyond mass texts: AI-generated content includes videos that appear real and even entire fake websites. For instance, consider how AI was used in Turkey’s 2023 presidential election. According to the Wilson Center, “In the run-up to the elections, President Recep Tayyip Erdoğan’s staff shared a video depicting his main rival, Kemal Kılıçdaroğlu, being endorsed by the Kurdistan Workers’ Party, a designated terrorist group. The video was clearly fabricated but was widely circulated. Erdoğan won the election. Of course, his victory owed to much more than a single deepfake video. […] However, employing deepfakes, along with leveraging political power, against opponents could end their political career within minutes.”
- AI-generated personas and influencers to push agendas. According to Blackbird AI, “Nation states, extremist groups, and bad actors are leveraging AI to automate influence operations at an unprecedented speed and scale. They harness AI to identify the most divisive narratives, craft authentic-seeming messaging, and deploy armies of human-like bots to spread this content far and wide. With just a single prompt, today’s AI tools can generate an entire narrative attack campaign that gets further amplified by bot networks.”
- Real-time content manipulation to suit the needs of governments or corporations. A few years ago, researchers ran a field experiment to see whether state legislators could distinguish AI-generated constituent letters from human-written ones. Ultimately, the legislators could not tell them apart. According to the Journal of Democracy, “This suggests that a malicious actor capable of easily generating thousands of unique communications could potentially skew legislators’ perceptions of which issues are most important to their constituents as well as how constituents feel about any given issue.”
[Related Read: Preparing in a World of Censorship and False Media Narratives]
The Risks of Centralized AI Control
Not only are there serious concerns about using AI to create misleading narratives, but there are also major red flags concerning centralized AI control.
Companies are already making a move toward centralized AI.
For example, in September 2024, BlackRock, Global Infrastructure Partners, Microsoft, and MGX launched a partnership to invest in AI data centers and the power infrastructure needed to run them.
Again, this is something the government and elites will push for, because centralized AI is easier to control.
Just as the WEF presented plans at Davos to control social media platforms, expect them to suggest the same kind of control over AI.
Consider some of the risks of giving control of AI to a central group:
- Censorship on steroids. AI will make it fast and easy to censor content that challenges official narratives. This is already happening on a smaller scale: AI tools scan library books for certain words or ideas, and books containing unacceptable words are placed on a banned books list. (A toy sketch of this kind of keyword flagging follows this list.) Imagine this on a global internet scale.
- Reality manipulation. As the Turkish presidential election showed, it is easy to manipulate reality, and AI can take this even further. Imagine AI producing new images from riots that make the victims look like the attackers. Imagine AI answering students’ history questions with clearly biased answers.
- Behavioral conditioning. One of the scariest but likeliest scenarios is AI being used for behavioral conditioning. AI-powered algorithms are already being used; you see certain ads online because of how algorithms have targeted you. What if the government utilizes AI-powered algorithms to push propaganda rather than just targeted retail ads? It is possible to use AI to nudge populations toward compliance through tailored content and propaganda.
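To make the censorship bullet concrete, here is a minimal, hypothetical sketch of how automated keyword flagging works. The flag list, book titles, and threshold are all invented for illustration; this does not describe any real screening system.

```python
# Toy sketch of automated keyword flagging (hypothetical).
# Scans plain-text "books" for words on a flag list and reports
# which titles would be pulled for review. The flag list, titles,
# and threshold below are invented for illustration only.
import re

FLAGGED_WORDS = {"liberty", "resistance", "censorship"}  # hypothetical list

def flag_score(text: str, flagged: set) -> int:
    """Count how many words in the text appear on the flag list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for word in words if word in flagged)

def review_library(books: dict, threshold: int = 1) -> list:
    """Return the titles whose flag count meets the threshold."""
    return [title for title, text in books.items()
            if flag_score(text, FLAGGED_WORDS) >= threshold]

if __name__ == "__main__":
    library = {
        "Garden Almanac": "Plant peas early and water deeply.",
        "Civics Primer": "Liberty depends on citizens who question power.",
    }
    print(review_library(library))  # -> ['Civics Primer']
```

A real system would likely use machine-learning classifiers rather than a raw word list, but the effect is the same: at scale, software quietly decides which books or posts get pulled, with no human reading them first.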
Practical Tips for Resisting Narrative Control
Ultimately, you need to prepare for AI singularity.
Even ChatGPT creator Sam Altman has admitted he is “a little bit scared” of the potential of the technology… and he is a prepper.
In a New Yorker profile, Altman said, “I prep for survival. […] I try not to think about it too much. […] But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
If the creator of one of the most popular AI tools lives this way, we should, too.
Here are some ways to prepare:
- Learn to identify and verify sources of information.
- Buy a set of encyclopedias.
- Use decentralized platforms and tools that prioritize freedom of speech.
- Build communities that prioritize real-world interaction and critical thinking.
- Create a personal offline library with how-to books and tools to mitigate dependence on AI.
Practice self-reliance, friends. Even online.
In liberty,
Elizabeth Anderson
Preparedness Advisor, My Patriot Supply