
Nvidia is preparing a $100 billion investment in OpenAI. Historic in scale, it would mark the largest investment of its kind ever made in a private company. The agreement reaches far beyond finance. OpenAI will deploy millions of Nvidia processors to build data centers with a capacity of up to 10 gigawatts—comparable to ten nuclear reactors. The first stage alone involves $10 billion to bring the initial gigawatt online, granting the semiconductor giant roughly a 2% stake in OpenAI. With this, Nvidia would secure immense long-term revenues while tightening its hold on the core infrastructure of artificial intelligence, elevating its role from supplier to structural partner.
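The reported terms allow a quick sanity check of the deal's scale. The sketch below, in Python, uses only the figures cited above; the implied valuation is an inference from those numbers, not a disclosed figure.

```python
# Back-of-the-envelope check using only the reported deal terms.
first_tranche = 10e9        # $10B committed to the first gigawatt
stake = 0.02                # ~2% of OpenAI granted for that tranche
total_commitment = 100e9    # full planned investment
capacity_gw = 10            # planned data-center capacity in gigawatts

# $10B for ~2% implies OpenAI is being valued at roughly $500B.
implied_valuation = first_tranche / stake
print(f"Implied OpenAI valuation: ${implied_valuation / 1e9:,.0f}B")

# The headline numbers are self-consistent: $100B over 10 GW is $10B per GW.
print(f"Investment per gigawatt: ${total_commitment / capacity_gw / 1e9:,.0f}B")
```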
For OpenAI, the deal ensures privileged access to the computing power required to train its models, while deepening its reliance on a single supplier. The move underscores that the race in AI no longer rests solely on research and scientific ingenuity, but on access to colossal computational capacity—an enterprise akin to modern energy megaprojects and as capital-intensive as heavy industry itself. The outlook suggests that only a handful of corporations with extraordinary resources will be able to compete in this new phase, further cementing the concentration of power around Nvidia and its strategic partners, while raising inevitable questions about technological dependency and the global balance of power.

The arrival of artificial intelligence in our homes is transforming everyday appliances into somewhat hyperactive interlocutors. An air purifier can now detect pollutants and automatically adjust its performance. Our television recognizes patterns and tailors its recommendations. They are no longer simple, inert devices. They learn, adapt, and give users back the ability to choose experiences more finely tuned to their expectations.
What is striking is AI’s capacity to turn each household object into an extension of the individual. In a single technological gesture, it integrates security, energy efficiency, and personalization. A smart refrigerator suggests recipes based on its contents, a washing machine adjusts cycles according to the fabric, and an oven anticipates culinary preferences. This revolution redefines, in cultural terms, the relationship between people and their domestic tools, blurring the boundary between the human and the automated. Its convenience is beyond question, yet one must ask: are we surrendering too many of our daily decisions to machines? Are we prepared to live alongside these intelligent interlocutors, which not only anticipate our needs but also interpret and reshape our everyday lives? Are we ready to rethink the place we occupy within our own homes?

The rise of generative AI has brought an old question back to the forefront. What role does intuition retain when algorithms can deliver answers with greater speed and precision? I would argue that in contexts of uncertainty and the absence of precedent, human intuition remains an irreplaceable asset, even when machines can detect patterns with extraordinary efficiency. It is not a matter of confrontation. Our inner voice provides direction, meaning, and the ability to challenge what does not align with lived experience—something no calculation can replicate. Intuition is a critical compass in an ocean of data. It does not disappear; it redefines its operational space and reminds us that knowledge is not only accuracy but also risk, imagination, and the courage to decide in the absence of certainty.
The future will be hybrid. Intuition, analysis, and algorithms will coexist in constant interaction. The challenge will lie in discerning when to trust the machine and when to heed that instinct born of experience. Calibrating their use will be an art in itself. If data and its analysis are validated in practice, one may proceed; yet it is unwise to delegate doubt to the machine, particularly when it is the machine that generates it. Wisdom will consist in interrogating AI with the same lucidity with which we interrogate our own intuition, and in accepting that true learning rests in knowing when to listen and, with equal firmness, when to refuse to do so.

The intensive use of ChatGPT among university students—turned into a daily companion for solving doubts, drafting texts, or even offering advice in moments of crisis—has led to a dependence that is difficult to ignore. Over eighteen months, three students accumulated nearly twelve thousand exchanges with the AI, displacing the trial and error of learning in favor of immediate, frictionless answers. The tool appears as a brilliant ally, yet its glow conceals a side effect. The more it is consulted, the more creativity weakens and the capacity to articulate original thought diminishes. This is not merely an academic concern. The mind that grows accustomed to avoiding risk reduces its analytical power and impoverishes personal experience, replacing knowledge as an adventure with a passive acceptance of what the machine delivers.
Experts warn that this massive delegation to the machine impoverishes cognitive development and feeds a bleak perception of the future. Young people who once saw the university as a space of personal affirmation are abandoning humanistic and creative fields, convinced that the industrial output of AI has displaced human talent. Education and culture, rather than nurturing the singular voice of each individual, risk being reduced to a monotone stream of prefabricated formulas. It is not the speed of the answer that matters, but the erosion of the question as a vital act and of the intellectual identity that is forged in the very act of posing it.

ChatGPT's arrival in the classroom was initially denounced as a threat when students used it for summaries and assignments; now the tool has become a common resource for teachers, generating an unsettling sense of ethical incoherence. The dilemma is not merely economic—why pay substantial tuition fees for what the algorithm offers freely?—but existential. Does this not erode trust in the educational relationship and the promise of transmitting human experience? The tension between tradition and technology calls the very authenticity of pedagogy into question.
Teachers, for their part, regard AI as a tool that amplifies their capacity and allows them to focus on what matters most. Yet their true challenge does not lie in producing materials more quickly. It lies in preserving what no technology can replicate: the dimension of care, attentive listening, and empathy. Teaching involves accompanying vital processes as well as transmitting knowledge. When AI is integrated without this awareness, education risks becoming a technical procedure emptied of presence, stripped of the very life that gives it meaning.

Cyberattacks are accelerating at an unprecedented rate. Increasingly, they rely on generative artificial intelligence, the same technology behind chatbots. In 2025, in Asia, eight out of ten security breaches result from direct system intrusions. Malware—harmful software installed on computers or phones—accounts for 83% of these incidents. Ransomware, in which data is held hostage until payment is made, represents half of the cases. On top of this, AI is now commonly used to create convincing deceptions. Scammers generate manipulated voices, videos, or mass emails that appear legitimate. What once seemed crude is today nearly undetectable.
Defense strategies can no longer function in isolation. Technologies that used to operate independently—cloud computing, artificial intelligence, and information security—must now work together. The key is to reduce irrelevant alerts and surface only what truly matters, using AI as a supportive filter while ensuring that final decisions remain with human operators. Approaches such as Zero Trust (constant verification of identities and access), collaboration between developers and security teams (DevSecOps), and continuous user training are currently the most effective. Success relies not on magic but on the right combination of technology, clear policies, and ongoing education. Focusing on these principles is essential to maintaining trust in an increasingly hostile digital environment.
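As a concrete illustration of that human-in-the-loop filtering, here is a minimal Python sketch. Everything in it is hypothetical: the Alert fields, the model_score (assumed to come from some upstream AI classifier), and the threshold are illustrative choices, not any vendor's API. The point is the shape of the workflow: the model suppresses noise, and a person makes the final call.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str         # e.g. an endpoint agent or mail gateway (illustrative)
    description: str
    model_score: float  # hypothetical relevance score from an upstream AI classifier, 0.0-1.0

def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Drop low-relevance noise; everything above the threshold still
    goes to a human analyst, who makes the final decision."""
    return [a for a in alerts if a.model_score >= threshold]

# Illustrative run: only the high-scoring alert reaches the analyst queue.
incoming = [
    Alert("endpoint-agent", "known-benign scheduled task", 0.12),
    Alert("mail-gateway", "possible AI-generated phishing lure", 0.93),
]
for alert in triage(incoming):
    print(f"[HUMAN REVIEW] {alert.source}: {alert.description}")
```

The threshold is where the trade-off in "reducing irrelevant alerts" lives: set it too high and real incidents are silenced, too low and analysts drown in noise again.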

Geoffrey Hinton, 77, Nobel laureate in Physics and known as the “Godfather of AI,” has spent years warning about the risks of artificial general intelligence that might one day surpass humans. He has also just learned, firsthand, that chatbots don’t need to be all-powerful to ruin your day. In an interview with the Financial Times, he recounted how his ex-girlfriend turned to a digital assistant to explain why he had behaved like a “rat.” The algorithm, dutiful as ever, drafted the indictment. She sent it to him. And so, the scientist who fears for humanity’s future ended up receiving sentimental reproaches written by the very creature he helped bring into being.
The story may read like a joke, but it carries serious weight. In 2012 Hinton and two of his students built the image-recognition network (AlexNet) that ignited the deep-learning boom behind today’s chatbots, work that helped earn him the 2018 Turing Award. A decade later he came to regret it. He left Google in 2023 and has since kept warning about the existential risks of AI. His only hope, he says, is to build systems that treat humans as a mother treats her child. Reality is more mundane. Algorithms already act as third parties in domestic quarrels. Whether AI is truly an “existential threat” can be debated. That it can wreck a relationship is beyond dispute.

In Athens, cradle of Western philosophy, Demis Hassabis, head of DeepMind and recent Nobel laureate in Chemistry for his protein-structure prediction systems, reminded his audience that the future belongs to those who know how to “learn to learn.” At the foot of the Acropolis he warned that artificial intelligence is advancing so fast that traditional educational frameworks are no longer sufficient. What matters will not be the accumulation of data but the cultivation of meta-skills: the ability to understand learning processes, adjust strategies, and keep curiosity alive throughout working life. What AI has achieved in just a decade makes it possible to consider the prospect of an abundance directed toward collective well-being, though also of social transformations that are difficult to foresee.
Hassabis’s message goes beyond technological fascination. It is also a cultural and political warning. Greek Prime Minister Kyriakos Mitsotakis joined him to stress that this revolution will only be ethically acceptable if it distributes tangible benefits and prevents wealth from concentrating in a few hands. Otherwise, it will be seen as a threat and generate a diffuse unease across many sectors of society. Ordinary citizens will need to stay in motion, alert, always learning, because changes are unfolding faster than our ability to absorb them. It is essential not to pour effort into what machines will do better, faster, and for longer.

The debate over ChatGPT in education used to stay inside the classroom. The teacher who suspects, the student who copies, the statistic that confirms it. The snake bites its tail. Estonia breaks the circle. A small, digital country has decided to integrate artificial intelligence into its school system as state policy. This is not an experiment; it is a change of scale watched by other nations with caution and a touch of envy. AI stops being an individual resource and becomes structural to teaching, like textbooks and curricula.
The issue is the nature of effort. If a student delegates the collection and sorting of data to the machine, they must still supervise how it summarizes and writes in order to produce valid work. Perhaps the task is no longer to fill notebooks but to verify, cross-check, and transform what AI proposes. France and Germany ban phones and AI in the classroom. Estonia and Finland show that reflective integration can improve results and foster creativity. Each country shapes in its schools the digital citizen it wants for its future. What should we do—train students to master the tool, or raise addicts unable to tell their reflection from the real world?