The Challenges and Issues of AI: Navigating a Double-Edged Sword
Some ideas on why AI will cause more problems than it solves in the short and medium term
Artificial Intelligence (AI) is no longer the future; it’s here, reshaping industries, communication, and even how we think about ourselves. But this transformation comes with significant challenges and risks.
Because I work in tech, friends sometimes ask me in conversation what the AI-driven future holds.
My stock answer is: ‘It’s going to be a shitshow over the next 50 years. Then it’ll be OK.’
There are many reasons for this sense of foreboding, and I know the feeling is shared by many people around me.
I love to discuss this! A bit of futurology. Recently I had a great chat about it with Edwin Groenendaal, who has been at the forefront of AI for several years and always has an interesting take.
Here are our thoughts on the key factors behind the impending AI shitshow. (So useful to clarify my thinking on this! Maybe I can give my friends a better answer next time.)
1. Misinformation, Weaponization, and the Crossbow Problem
AI’s power to generate and spread information at scale is both its strength and its greatest risk. Whether it’s chatbots confidently presenting falsehoods, social media algorithms amplifying divisive content, or targeted propaganda campaigns, AI makes misinformation and manipulation faster, cheaper, and more effective than ever before.
This dynamic mirrors the "crossbow problem." Historically, weapons like swords and armor evolved together in a balanced cycle of offense and defense. The crossbow disrupted this balance, enabling anyone to kill a heavily armored knight with minimal skill. It was so disruptive that it was banned by the Pope in the 12th century. Similarly, AI has created an imbalance in the information landscape: offensive capabilities (spreading falsehoods, manipulating public opinion) far outpace defensive tools (fact-checking, debunking misinformation).
For example, a chatbot might confidently generate a detailed explanation of why Beyoncé resembles a horse (an example that came up in our chat), even though the premise is entirely false. This highlights how easily AI can produce believable but inaccurate information. Meanwhile, countering these falsehoods demands significant time and resources: debunking one false claim can take hours or days, while AI can create thousands more in seconds.
Social media platforms exacerbate this problem. Algorithms designed to maximize engagement amplify polarizing and sensational content, making divisive narratives more pervasive. AI-powered bots can spread misinformation and propaganda tailored to exploit individual biases, creating a feedback loop of increasing polarization. As Edwin and I discussed, this effect is particularly evident in politics, where historical divisions, such as those in Poland and the U.S., are amplified by AI-driven categorization and targeting.
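To make that feedback loop concrete, here’s a minimal sketch in Python. It is not any real platform’s algorithm; the posts, scores, and weights are all made up. It just shows how a feed ranked purely on predicted engagement promotes divisive content by construction, because provocation correlates with engagement while accuracy contributes nothing.

```
# Toy illustration (not any platform's real algorithm): a feed ranker
# that scores posts purely on predicted engagement. Because outrage
# reliably drives clicks, divisive content floats to the top.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float   # 0..1, how provocative the content is (made-up signal)
    accuracy: float  # 0..1, how factually grounded it is

def engagement_score(post: Post) -> float:
    # Engagement tracks provocation, not accuracy, so accuracy
    # plays no part in the ranking at all.
    return 0.9 * post.outrage + 0.1

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", outrage=0.1, accuracy=0.9),
    Post("THEY are destroying everything!", outrage=0.9, accuracy=0.2),
])
print([p.text for p in feed])  # the outrage post ranks first
```

Nothing in the loop rewards being right; the only lever is attention, which is the whole problem in two dozen lines.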
2. The Illusion of Intelligence and Erosion of Confidence
AI’s ability to mimic human behavior creates the illusion of intelligence, and that illusion can undermine our confidence in our own decision-making. AI systems like chatbots are essentially advanced string-matching tools: they don’t "understand" what they’re saying, but their output often appears coherent and insightful. This can lead people to overestimate AI’s capabilities and undervalue their own.
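A crude way to see the "string matching" point is the toy bigram model below: a deliberately simplified sketch, nowhere near a modern LLM in scale, trained on a tiny hypothetical corpus. It continues text purely from co-occurrence statistics, yet the output can sound plausibly fluent.

```
# A deliberately crude sketch of the "string matching" point: a bigram
# model that continues text purely from word co-occurrence statistics.

import random
from collections import defaultdict

corpus = ("the market will recover because the market always recovers "
          "and investors believe the market rewards patience").split()

# Map each word to the words observed to follow it.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(seed: str, length: int = 8) -> str:
    out = [seed]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# Fluent-sounding output, zero understanding of markets.
print(generate("the"))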
For instance, Edwin recounted how even experts are occasionally awed by AI-generated content, mistaking its fluency for intelligence. This misjudgment can erode trust in one’s ability to reason and make decisions. When AI provides incorrect or misleading answers—such as confidently explaining implausible scenarios—users may accept the information as accurate simply because the delivery is polished.
Furthermore, AI’s mimicry of human interaction compounds the issue. Chatbots and virtual assistants use casual language, intonations, and even colloquialisms to appear relatable. This can create a false sense of connection, making users more likely to trust and rely on the AI. Imagine a customer trusting a chatbot’s financial advice over their instincts because it "sounds" confident. This misplaced trust can have far-reaching consequences, from poor decision-making to financial losses.
The danger lies not in AI being "smarter" than us but in making us doubt our own intelligence. The line between human understanding and AI’s capabilities is blurred further by AI’s ability to mimic human reasoning convincingly. The more we lean on AI for decisions, the more we risk becoming passive participants in our own lives.
3. Ethical Dilemmas and the Lessons of the 2007-2008 Financial Crisis
The rapid development of AI has outpaced ethical considerations and regulatory frameworks, much like the financial algorithms that contributed to the 2007-2008 global financial crisis. These algorithms were designed to analyze complex financial instruments, such as mortgage-backed securities, and predict market behavior. While they were technically innovative, they were poorly understood, even by the experts using them. This lack of understanding and oversight led to catastrophic consequences when the algorithms failed to account for systemic risks, such as widespread default on subprime mortgages.
The algorithmic underpinnings of the financial crisis clearly parallel AI and highlight how unchecked technological innovation can lead to systemic failures; the scope of AI is just so much larger. Just as financial algorithms created a false sense of confidence in the stability of the housing market, AI systems can give an illusion of control and reliability while masking inherent risks, and over the coming years this will play out across most industries. For example, AI’s reliance on biased training data can perpetuate systemic inequalities, much like the financial models that overlooked the vulnerability of subprime borrowers.
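Here’s a deliberately minimal sketch of that bias mechanism, using entirely hypothetical data: a naive approval "model" that learns nothing but historical approval rates per group, and so reproduces the past disparity for new applicants with identical finances.

```
# Minimal sketch (hypothetical data, toy model): a model trained on
# biased past decisions simply learns and reproduces the disparity.

from collections import defaultdict

# Historical loan decisions: identical incomes, unequal outcomes by group.
history = [
    ("group_a", 50_000, True), ("group_a", 50_000, True),
    ("group_b", 50_000, False), ("group_b", 50_000, False),
]

# "Training": record the approval outcomes per group, ignoring all else.
outcomes = defaultdict(list)
for group, _income, approved in history:
    outcomes[group].append(approved)

def predict(group: str) -> bool:
    # Approve if the historical approval rate for this group is >= 50%.
    past = outcomes[group]
    return sum(past) / len(past) >= 0.5

# Two applicants with identical finances get different answers,
# purely because the past was unequal.
print(predict("group_a"))  # True
print(predict("group_b"))  # False
```

Real credit models are far more sophisticated, but the failure mode is the same: if the training data encodes an unequal past, the model projects that past forward.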
The financial crisis also revealed a troubling reliance on tools people didn’t fully understand. Similarly, today’s AI tools are often used by individuals and organizations with limited comprehension of their underlying mechanisms, increasing the risk of unintended consequences. The financial crash was a stark reminder that technological innovation without adequate regulation and ethical oversight can lead to widespread harm, a lesson that isn’t being applied to our approach to AI development and deployment.
4. Pulling Back the Curtain: Data Analysis and the Pandora’s Box of Truth
Analyzing data can often reveal uncomfortable truths that undermine the very systems the analysis is intended to support.
For instance, consider voting. Elections are predicated on the belief in free will and informed choice, but data analysis has shown that geographical, economic, and historical factors heavily influence voting behavior. Analyses of voting in Poland and the U.S. show how modern voting patterns align eerily with historical borders and economic divides. Such revelations suggest that many outcomes may be more deterministic than democratic, eroding confidence in the idea of free and fair elections.
Social media algorithms, by analyzing user behavior, have exposed how easily individuals can be influenced, challenging our assumptions about autonomy. When AI systems show that human actions are predictable patterns rather than independent choices, it raises philosophical questions about free will and undermines the perceived agency of individuals.
This Pandora’s box of truth demands careful consideration. While transparency and understanding are valuable, there are instances where too much information—especially when misinterpreted—can undermine the systems we rely on. Pulling back the curtain can leave us grappling with a destabilized world.
Where Do We Go From Here?
The challenges AI presents aren’t insurmountable, but they require a realistic understanding of the landscape. There are several barriers to addressing these challenges:
The Role of Capitalism: Profit-driven motives dominate the development of AI technologies. Companies are incentivized to push forward without pausing to consider ethical or societal implications. Capitalism’s need for growth often trumps caution, making meaningful regulation difficult to enforce. This creates a system where companies prioritize innovation over responsibility, with little incentive to change course.
The Geopolitical Arms Race: Superpowers view AI as a tool for economic dominance and national security, fueling a race to outpace rivals. This competition discourages restraint, as no superpower can afford to fall behind. The U.S. has embraced AI as a strategic advantage, pushing forward to prevent adversaries from weaponizing it against them. However, this race exacerbates risks, as rapid development often leaves ethical considerations in the dust.
The Lack of Levers to Pull: The grim reality is that there are no obvious mechanisms to slow down or redirect the trajectory of AI development. The momentum is too great, the rewards are too high, and the incentives to continue on this path are deeply embedded in global systems.
It Will Get Worse Before It Gets Better: Like the climate crisis, the negative impacts of AI will intensify before we develop the tools and frameworks to address them. History suggests that humanity often reacts rather than prepares, and AI may follow this pattern. The lessons of the 2007-2008 financial crisis and the crossbow problem remind us that disruption often precedes meaningful reform.
Ultimately, regulation, ethics and public understanding of AI will catch up, and this new technology will transform our world beyond all recognition for the better.
Sadly, the rate and scale of change will lead to many adverse outcomes in the shorter term. My guess is 50 years from now it’ll all be great.
This is why I refer to the impacts of AI as a shitshow.