Explainer · 8 min read · Tier 4

Beyond the Hype: The Surprising Truths About How AI Really Works

1.0 Introduction: Beyond the Magic

Artificial Intelligence is everywhere. It powers our search engines, recommends our movies, and translates our conversations. To many, AI operates like a magical, incomprehensible force, a black box that somehow produces intelligent results. The reality, however, is far more interesting—and surprising—than the science fiction narrative.

This article peels back the curtain to reveal several counter-intuitive and impactful truths about what AI is, how it works, and the hidden challenges it presents. Drawing from deep academic and technical sources, we'll explore the disconnect between the public perception of AI and its complex, often flawed, reality.

2.0 Takeaway 1: The "AI Effect" - If It's Useful, It's No Longer AI

The first surprising truth is a phenomenon known as the "AI effect." Many of the most high-profile applications of AI—technologies we use daily like Google Search, Netflix recommendations, and virtual assistants such as Siri and Alexa—have become so integrated into our lives that we no longer perceive them as artificial intelligence.

This tendency to reclassify mature AI as mere technology is a recurring pattern in the field's history. Once a capability becomes commonplace, it is often no longer considered "true AI."

"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."

This phenomenon reveals how quickly we normalize advanced technology. It also suggests that our public definition of "true AI" is a moving target, constantly pointing toward the next frontier rather than acknowledging the powerful tools we have already mastered and use every day.

3.0 Takeaway 2: The Hidden Environmental Cost is Staggering

Behind the seamless digital interfaces of AI lies a massive and often-overlooked physical impact: staggering energy consumption.

According to a 2024 forecast from the International Energy Agency (IEA), the combined global electricity demand of data centers, AI, and cryptocurrency could double between 2022 and 2026. To put that into perspective, the resulting demand would be roughly "equal to electricity used by the whole Japanese nation."

Further research underscores the trend. A Goldman Sachs research paper forecasts that US data centers will consume 8% of all power in the United States by 2030, a significant jump from 3% in 2022. The strain is already being felt: on November 1, 2024, the Federal Energy Regulatory Commission (FERC) rejected a proposal for the Susquehanna nuclear power station to supply additional electricity directly to an Amazon data center, citing the burden it would place on the grid.

These facts underscore the profound physical-world consequences of our digital AI revolution and call into question the sustainability of its current growth trajectory.

4.0 Takeaway 3: You Can't Make AI "Fair" by Making It Blind

One of the most complex challenges in AI is algorithmic bias. A common but deeply flawed assumption is that we can achieve "fairness through blindness"—that is, if we remove sensitive data like race or gender from a dataset, the AI model cannot produce biased outcomes.

This approach does not work. The algorithm simply finds correlations with other available features, such as "address," "shopping history," or even a person's name, and ends up making the same biased decisions. As AI researcher Moritz Hardt states:

"...the most robust fact in this research area is that fairness through blindness doesn't work."

A stark real-world example is the COMPAS program, a system used in U.S. courts to predict the likelihood that a defendant will re-offend. A ProPublica investigation found that even though the system was never given defendants' race as an input, it consistently overestimated the chance that a Black defendant would re-offend and underestimated the chance that a white defendant would.

This reveals a deeper implication: AI models trained on historical data are designed to predict a future that resembles the past, complete with its systemic biases. This makes machine learning "descriptive rather than prescriptive" and poorly suited for applications where we hope the future will be fundamentally better than the past.

5.0 Takeaway 4: AI Can Be Brilliantly Right for All the Wrong Reasons

This tendency for models to perfectly replicate the biases of the past reveals a deeper truth about their inner workings: they don't reason, they only recognize patterns. Even when an AI system appears to be accurate, its internal logic can be based on a completely nonsensical shortcut.

In one case, a system designed to identify skin cancer became highly accurate during testing. Upon investigation, researchers discovered it had not learned to identify malignancies at all. Instead, it learned to associate the presence of a ruler in the photo with a cancerous lesion, because physicians typically included a ruler for scale in images of serious growths.

In another alarming example, a system built to help allocate medical resources classified asthma patients as being at "low risk" of dying from pneumonia. In the training data, this correlation was real: asthma patients received more intensive care and were therefore less likely to die. But the model had the causality backwards. Asthma is in fact a severe risk factor, and no human doctor would mistake it for a protective one.

These examples show that AI systems are correlation machines, not reasoning entities. They do not understand cause and effect. This makes them powerful pattern-finders but also brittle and prone to making strange, illogical errors based on spurious correlations in the data.
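The ruler anecdote can be reproduced in miniature. The sketch below uses synthetic data and scikit-learn; the "ruler" flag is an invented stand-in for any feature that tracks the label during training but not in deployment. The model looks flawless in testing and collapses the moment the spurious correlation disappears.

```python
# A minimal sketch (synthetic data, scikit-learn) of shortcut learning,
# loosely modeled on the ruler anecdote: a spurious "ruler" flag tracks
# the label in training, then the correlation is broken at deployment.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
label = rng.integers(0, 2, n)                  # 1 = malignant in the anecdote
lesion_signal = label + rng.normal(0, 1.5, n)  # weak but genuine signal
ruler = label.copy()                           # spurious: present iff malignant

X_train = np.column_stack([lesion_signal, ruler])
clf = DecisionTreeClassifier(max_depth=2).fit(X_train, label)

# At deployment, photographers stop including rulers in the images:
X_deploy = np.column_stack([lesion_signal, np.zeros(n)])
print(f"Train accuracy:  {clf.score(X_train, label):.0%}")   # ~100%
print(f"Deploy accuracy: {clf.score(X_deploy, label):.0%}")  # falls to ~chance
```

The tree latches onto the ruler flag because it is the single most predictive feature available, exactly as the skin-cancer system did; nothing in the training objective rewards learning the genuine signal instead.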

6.0 Takeaway 5: The "Black Box" Isn't Just Complex, It's a Different Language

The "black box" problem of AI refers to our inability to understand why a model makes a particular decision. To understand this, the work of AI researcher Mathew Wakefield offers a powerful lens. He argues the issue isn't just about complexity; it's about a fundamental difference in perspective, or "relativism in interpretability."

Deep learning engineers frame AI in the language of "spatial geometry and optimisation." This mathematical perspective is incredibly powerful for building and training models, but it is almost useless for explaining their decisions in a way humans can understand.

The ancient parable of the blind men and the elephant illustrates this perfectly. Each man touches a different part of the elephant—a trunk, a leg, a tusk—and comes away with a completely different, incomplete description of the whole. The deep learning perspective is just one of these descriptions. To achieve a "grounded perspective," Wakefield proposes we stop thinking about AI as an abstract geometric system and start thinking about it in a more familiar way: as a system of memory.

7.0 Takeaway 6: AI Might Be "Remembering," Not "Thinking"

Building on the black box problem, Wakefield’s research offers a more intuitive way to understand what's happening inside an AI model. Instead of imagining an abstract, thinking machine, we can use the mental model of "instance-based learning."

In simple terms, an AI model can be understood as storing a vast number of specific examples, or "instances," from its training data. When it encounters a new input, it finds the most similar stored instances in its memory and uses them to make a prediction.
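To make the instance-based framing concrete, here is a minimal sketch (not drawn from Wakefield's own work) using scikit-learn's k-nearest-neighbours classifier on its bundled 8x8 digits dataset. The model performs no abstraction at all: "fitting" it just memorises the training examples, and classifying a new digit means comparing it against the most similar stored instances.

```python
# A minimal sketch of the instance-based view: a nearest-neighbour model
# literally stores its training examples and classifies new digits by
# finding the most similar stored instances.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)  # "memory" of stored instances
knn.fit(X_tr, y_tr)                        # fitting = memorising the training set

print(f"Accuracy from pure memory + comparison: {knn.score(X_te, y_te):.0%}")
# Typically ~98%: strong performance with no reasoning at all, which is
# the intuition behind reading neural networks as memory systems.
```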

From this viewpoint, an artificial neuron isn't performing abstract logic. Instead, as Wakefield's analysis shows, its function is more like a simple comparison: it selects which of two stored, compressed examples a new piece of data most resembles. It essentially learns the discriminative features that separate one concept from another. In other words, the model doesn't learn what a '9' is; it learns the specific pixel patterns—like the closed loop at the top—that are present in a '9' but absent in a '7'.
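This reading of a neuron can be written out directly. The toy sketch below is pure NumPy; the four-pixel "templates" are invented for illustration, not real digit data. It builds a single linear neuron whose weights are the difference between two stored templates, so its output simply reports which template a new input more closely resembles.

```python
# A hedged illustration (NumPy, toy four-pixel "images") of reading a single
# neuron as a comparison between two stored templates rather than abstract
# logic. The templates and inputs are invented for illustration only.
import numpy as np

template_9 = np.array([1, 1, 1, 1.0])  # stands in for "closed loop at the top"
template_7 = np.array([1, 1, 0, 0.0])  # stands in for "open top"

# A linear neuron whose weights are the *difference* of the two templates
# fires positively for inputs nearer template_9 and negatively otherwise;
# the bias places the decision boundary halfway between the two templates.
w = template_9 - template_7
b = -(template_9 + template_7) @ w / 2

def neuron(x):
    return "more like a 9" if w @ x + b > 0 else "more like a 7"

print(neuron(np.array([1, 1, 1.0, 0.9])))  # -> more like a 9
print(neuron(np.array([1, 1, 0.1, 0.0])))  # -> more like a 7
```

Note that the weights end up nonzero only on the pixels where the templates differ: the neuron has, in effect, learned exactly the discriminative features the paragraph above describes.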

This is an impactful idea because it demystifies AI. It grounds the model's function in a more familiar cognitive process—memory and comparison—rather than attributing god-like intelligence to it. This reframes AI as an incredibly powerful tool for pattern matching and retrieval, not a conscious mind.

8.0 Conclusion: A Tool, Not an Oracle

Ultimately, artificial intelligence is less like a magical thinking machine and more like a complex, powerful, and deeply flawed tool. Its successes are undeniable, but they come at real costs, environmental and social. Its reliance on statistical correlation rather than genuine understanding explains why AI is so prone to amplifying historical biases and making nonsensical errors. It is also why reframing AI as a system of memory and comparison is a more accurate and useful mental model than viewing it as a nascent mind. By moving beyond the hype, we can begin to appreciate AI for what it is and engage with it more critically.

As we continue to embed AI into the fabric of our society, are we asking the right questions—not just about what it can achieve, but about how it fundamentally works and the hidden assumptions it carries with it?

This educational content was created with the assistance of AI tools including Claude, Gemini, and NotebookLM.