Why AI Needs Formula One Power — and When It Doesn’t

What toddlers, museums and modern computing reveal about the real purpose of compute power
A toddler sees a giraffe once. The next day, walking through a museum, the same child looks at a skeleton and immediately recognizes the animal again. No manual. No training cycle. No second explanation required.
Artificial intelligence approaches the same task very differently. It may need millions — sometimes billions — of examples to achieve a comparable result. Entire data centers operate day and night to detect patterns that children seem to grasp intuitively.
This contrast, recently highlighted in a column by the physicist Robbert Dijkgraaf, captures a growing paradox in modern technology. At the very moment computing power is expanding at unprecedented speed, understanding itself remains elusive.
If toddlers can learn with so little information, why do machines require so much?
The answer lies not in intelligence levels, but in the nature of learning itself.
Learning Without Datasets
Human learning is not based on isolated data points. Children learn through interaction — through tone of voice, facial expressions, repetition, correction and encouragement. Meaning is shaped socially, not statistically.
“Children don’t learn from isolated data points; they learn through emotional resonance and social feedback. A toddler doesn’t need a billion parameters because they have a social compass — a parent or a peer — who provides instant context,”
— Ruben Fukkink, Professor of Pedagogy and Child Development, University of Amsterdam
Artificial intelligence lacks such a compass. It does not receive meaning through interaction but infers patterns through correlation. Where a child builds an internal model of the world, AI optimizes probabilities.
In that sense, modern AI systems are extraordinarily capable calculators — yet socially and contextually deprived. To compensate, they rely on scale.
Power as a Substitute for Intuition
The consequence of this design choice is visible in the infrastructure behind AI. Models grow larger, data centers expand and energy demand increases sharply.
“We are currently building Formula One cars just to do the groceries. If we want AI to reason more efficiently, we shouldn’t only build bigger data centers — we need to rethink how information moves at the physical level,”
— Martijn Heck, Professor of Electrical Engineering, Eindhoven University of Technology
The contrast with human cognition is striking. The human brain operates on roughly twenty watts — less energy than a household light bulb. Large AI models, by comparison, may require megawatts of continuous power to function at scale.
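The gap is easy to quantify with back-of-the-envelope arithmetic. The 20-watt brain figure comes from the text above; the one-megawatt draw for a large model serving at scale is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope energy comparison between a human brain and a
# large AI deployment. The 20 W brain figure is cited in the text; the
# 1 MW data-center draw is an illustrative assumption.
BRAIN_WATTS = 20
DATACENTER_WATTS = 1_000_000  # 1 megawatt, assumed for illustration

ratio = DATACENTER_WATTS / BRAIN_WATTS
print(f"A 1 MW deployment draws {ratio:,.0f}x the power of a human brain.")
# Under these assumptions, the ratio is 50,000x.
```

Even if the real deployment figure is off by an order of magnitude in either direction, the architectural point stands: the machine substitutes watts for intuition.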
This does not indicate failure. It reveals a difference in architecture. Machines compensate for missing intuition with computation.
Where Compute Truly Matters
In many domains, this brute-force approach is not excessive — it is essential.
Climate modeling depends on high-performance computing to simulate planetary systems that no human mind could fully grasp. Drug discovery, protein folding and materials science rely on vast computational exploration to uncover patterns invisible to traditional research.
In such contexts, compute power functions as a scientific accelerator. It compresses decades of trial and error into months or even weeks.
The same applies to real-time systems such as autonomous vehicles, robotics, aviation and industrial automation. Here, milliseconds matter. Failure is not theoretical but physical. Unlike toddlers, machines operating in the real world cannot afford playful experimentation.
In these environments, scale is not indulgence. It is safety.
When More Power Adds Little Understanding
Yet many everyday AI applications do not operate under such constraints. Tasks such as summarization, assistance or recommendation often rely on cloud-scale processing even when the cognitive challenge itself is limited.
As energy costs rise and sustainability concerns deepen, this imbalance has become harder to ignore.
“A toddler sees a giraffe once and understands the essence of ‘giraffeness’ — even in a sketch or a skeleton. AI sees millions of pixels and calculates a probability. The paradox of modern computing is that we are scaling power when we should be scaling insight,”
— Inspired by Robbert Dijkgraaf, Physicist and President-Elect, International Science Council
The question is no longer whether AI can become faster. It already is. The more pressing issue is whether speed alone leads to better intelligence.
The Shift Toward Intelligent Placement
This realization is driving a structural change in computing.
Rather than sending every task to centralized cloud infrastructure, engineers are increasingly focused on running AI closer to where data originates. Specialized chips now allow inference to happen directly on devices, reducing latency, energy consumption and dependence on continuous connectivity.
This approach — often referred to as edge intelligence — represents a move from unlimited scale toward proportional power.
Instead of asking how much compute is possible, the industry is beginning to ask where compute is truly necessary.
Rethinking Intelligence Itself
The toddler in the museum is more than a charming anecdote; the scene exposes a deeper truth about intelligence.
Children do not learn faster because they process more information. They learn because their brains organize experience into meaning. Curiosity guides attention. Feedback shapes interpretation. Context provides coherence.
Artificial intelligence excels at calculation. Humans excel at sense-making.
The future of AI will not emerge from choosing one over the other, but from understanding the difference.
Compute power remains one of humanity’s most powerful technological tools. Used wisely, it accelerates science, medicine and global coordination. Used indiscriminately, it becomes expensive noise.
The lesson may be simpler than expected: intelligence is not about having the biggest engine — but about knowing when to press the accelerator, and when to slow down.
Altair Media US — exploring technology, strategy and society at the frontier of innovation.
