AI & Ethics
How artificial intelligence is designed, deployed and governed — and what responsibility means in practice.

In recent years, artificial intelligence has increasingly captured the attention of both the media and the scientific community. Yet experts like Chiara Gallese warn that using AI does not automatically lead to understanding. Her critique of applying ChatGPT to the Riemann Hypothesis is striking: AI can sound fluent, but fluency is no guarantee of deep insight. The illusion of knowledge, she argues, may be the greatest risk of generative AI.
AI is everywhere. It writes, it predicts, it decides. But as the machines get smarter, one question keeps rising to the top: who is AI really for? The answer many in Silicon Valley now give is Human-Centered AI — AI that serves people, not the other way around. We analysed the people shaping this future: Sam Altman of OpenAI, Satya Nadella of Microsoft and Elon Musk of xAI/Tesla. They don’t always agree, but their messages overlap: AI should augment people, not replace them.
In 2025, social media algorithms are no longer just tools for sorting posts—they are the gatekeepers of public discourse, deciding what billions see every day. These systems, powered by machine learning, prioritize content based on engagement, relevance and sometimes owner preferences. But transparency varies wildly, sparking debates over bias, influence and regulation.
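To make that mechanism concrete, here is a minimal sketch of engagement-weighted ranking. Everything in it is illustrative: the `Post` fields, the three weights and the `score` function are invented stand-ins for the thousands of learned signals real platforms combine, but the basic principle of ordering a feed by a weighted score is the same.

```python
from dataclasses import dataclass

# Hypothetical weights: real platforms tune thousands of signals.
# These three stand in for the "engagement", "relevance" and
# "owner preferences" mentioned above.
W_ENGAGEMENT = 0.6
W_RELEVANCE = 0.3
W_OWNER_BOOST = 0.1

@dataclass
class Post:
    id: str
    likes: int
    shares: int
    relevance: float      # assumed output of a personalization model, 0..1
    owner_boosted: bool   # assumed editorial / owner flag

def score(post: Post) -> float:
    """Toy ranking score: a weighted sum of normalized signals."""
    # Crude engagement proxy, capped at 1.0 so one viral post
    # cannot dominate the other signals entirely.
    engagement = min((post.likes + 2 * post.shares) / 1000, 1.0)
    boost = 1.0 if post.owner_boosted else 0.0
    return (W_ENGAGEMENT * engagement
            + W_RELEVANCE * post.relevance
            + W_OWNER_BOOST * boost)

feed = [
    Post("a", likes=900, shares=120, relevance=0.4, owner_boosted=False),
    Post("b", likes=150, shares=10,  relevance=0.9, owner_boosted=False),
    Post("c", likes=50,  shares=5,   relevance=0.2, owner_boosted=True),
]

# Rank descending by score: the first post is what the user sees first.
for post in sorted(feed, key=score, reverse=True):
    print(post.id, round(score(post), 3))
```

Even in this toy version the governance questions are visible: nudging `W_OWNER_BOOST` upward quietly reorders what everyone sees, and nothing in the output reveals that it happened. That opacity, scaled to billions of users, is what the transparency debate is about.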
AI is not a technological upgrade but a structural rupture. For the first time in two centuries, a technology wave is not merely reorganizing labor but actively absorbing cognitive work at scale. Tasks that once required teams of analysts, developers, legal staff or financial specialists can now be executed in minutes. This is not automation as we knew it; it is capability displacement in its purest form.
Europe is home to some of the world’s most prestigious AI faculties. Institutions like ETH Zürich, the Technical University of Munich, EPFL, Oxford and Cambridge consistently produce research that ranks among the very best globally. Their professors are leaders in fields such as robotics, neuro-symbolic AI and trustworthy AI, attracting top PhD students and forming vibrant hubs of expertise. With such talent and intellectual firepower, Europe should, on paper, be a major force in the AI landscape.
Europe wants to protect its citizens and lead the world in responsible innovation. Yet the digital economy increasingly demands something regulators never anticipated: algorithms that grow stronger by consuming vast volumes of data. This tension has created what many now call the data trap — a space where innovators hesitate, policymakers tighten their grip and both sides wonder whether the rules that once defined Europe’s digital identity can still carry its ambitions forward.