The End of History

Noel Zamot
5 min read · Sep 25, 2023

The greatest threat to humanity is not climate change, or nuclear war, or a global pandemic. It is something far more sinister, something that can unleash all of those apocalyptic threats — and worse.

The greatest threat to humanity is the weaponization of truth by AI.

The Promise

Artificial Intelligence (AI) has evolved exponentially, promising a future where “AI for all” is more than just a marketing slogan. Tech moguls, sheltered from the masses, envision a world where everyone has an “artificial assistant” at their beck and call, “augmenting their thinking” and making lives more convenient. These prophets advocate prosaic use cases: book me a haircut appointment, manage my schedule, organize my shopping list. This utopian vision of ever-present convenience, an effective vehicle for creating massive wealth, has a darker side.

Significant advancements have marked the evolution of AI, from rudimentary algorithms that target ads by monetizing your attention, to sophisticated machine learning models capable of discovering new technological, pharmacological, and genetic miracles. These advancements have opened up unexpected possibilities in various fields, from science to medicine to astronomy, allowing us to make sense of chaos, solve complex problems, and gain deeper insights into the universe.

The Peril

They also allow us to create manufactured lies with superhuman ease. The United States lived through such an experience, facilitated by algorithms that pale in comparison to those in development.

The aftermath of the 2020 US Presidential election highlighted this problematic aspect of AI. Millions were swayed, by social media implementing comparatively puny algorithms, into believing manufactured falsehoods about the election. To this day, otherwise intelligent individuals insist, absent a shred of evidence, that the election was “stolen.” Lest I be accused of bias: zealots on the other side of the political spectrum gleefully defend their own illusions, manufactured by the same optimized social media. This manipulation across the spectrum has a cost, whether incitement to violence, social cancellation, or worse. Individuals manipulated into these extremes exhibit a common trait: when confronted with facts, they are incapable of accepting the truth. Their responses eschew any examination of the critical facts and fall into tit-for-tat rebuttals. Not accepting the bespoke reality their favorite platform has created is not a matter of opinion: it is an affront, a threat, and a justification to destroy.

This insanity — comical, were the stakes not so high — is driven by highly effective social media targeting. In little over a decade, algorithms with one aim — controlling attention spans to drive advertising dollars — have created a million realities to keep us bickering, and the money flowing. We live in a world where the ability to manufacture, deliver, and optimize facts is not an alarming development: it is simply a marketing strategy to drive sales. The Balkanization of world opinion was not driven by sophisticated discussions on intellectually complex concepts. It was driven by clickbait.

A Million Minds at War

Now imagine those algorithms on steroids, driven by an AI capable of spectacularly more accurate and subtle targeting. The outcome of such broad AI manipulation of social media consumption would be unlike anything humans have experienced. Our puny primate brains are no match for powerful algorithms designed to distill all of the world’s information into a personalized lie optimized to deliver dopamine hits. Whether everyone lives their own version of reality is an age-old philosophical question (“I can prove I’m conscious, but cannot prove you are”). The danger now lies in the irreconcilability of the manufactured realities these machines may create. When walking through a forest, we can all agree that trees surround us. Walking through an AI-manipulated metaverse, it becomes impossible to agree on anything.

That’s the ultimate peril: the possibility of “manufactured facts” driving an internecine civil war fueled by fable and myth. Humans have shown a disturbing willingness to kill and die for manufactured beliefs. We may never live to see the amazing impacts of AI on science, medicine, and astronomy, because we may go extinct first, consumed by conflicts driven by misinformation and myth.

The Blind Spots of the Prophets

I can imagine the expected complaints from social media seers, tech “visionaries,” and others who see AI-augmented lives as the natural extension of human life and progress. “We can ensure fairness! We’ll program these AIs properly! It is augmentation, not manipulation!” I will respond with a simple question: would they allow their child to be the first test subject? To tether them to a constantly-present avatar designed to sell them everything, from cradle to early grave? To subject their offspring to a scenario where, at an early age, growing minds cannot distinguish between reality and fabrication? To instill in a growing mind a willingness to die and destroy based on manufactured truths?

No? The rest of the world’s minds are no different.

Luddites and the Last of Our Kind

I am not an AI Luddite. I use ML tools daily (yes, even to craft this post), and find that LLMs enable me to think at a higher level, streamlining mundane tasks. AI can revolutionize our future, enabling discoveries across the sciences and the arts. But it can also create manufactured dystopian realities (“Pedophile lizard people control the world,” “I have a right to be insulated from anyone with whom I disagree”) where misinformation reigns supreme.

How, as a species, do we fix this? I believe the same genie that stands to destroy us might help find a cure. Using AI to monitor AI, a throwback to one of the most fundamental questions in philosophy, seems like a reasonable course of action. Treating AI in social media as a controlled substance is another. Just as powerful narcotics are indispensable in medicine but can also kill, AI-targeted social media can transform into a weapon, causing mass harm if not regulated. How do we implement either of those options? Should we? Ultimately, education may prove to be the answer. Instilling a healthy habit of critical thinking in future generations, placing value on sound judgment instead of consumption, may be the one act that prevents a self-inflicted apocalypse. That solution requires a long timeline, and our recent track record as a species suggests we are not up to the task.

Let’s ask ourselves a question we’ve asked too many times in the past hundred years, one far more pressing now that machines can control thought: How do we keep from destroying ourselves?

I’m not exactly sure.
