
Meta has unveiled Vibes, a new section within its Meta AI assistant where users can view and generate short videos created automatically by artificial intelligence, in a format reminiscent of TikTok or Reels. To showcase its capabilities, the company displayed clips of a cat kneading dough and of someone taking a selfie in ancient Egypt. The public’s response was overwhelmingly negative, accusing Meta of flooding its own platform with “AI slop,” a term that has come to describe synthetic content perceived as cheap, hollow, and devoid of authenticity. The irony was not lost on anyone: only months earlier, the company had urged its creators to focus on original storytelling. The impression left is that, rather than aligning with its users, Meta bows to the weight of competition.
The launch coincides with a major reorganization of Meta’s AI division, backed by more than $100 million in investments directed toward startup partnerships and the new Meta Superintelligence Labs. The ambition is to build an integrated ecosystem linking Meta AI seamlessly with devices such as the Ray-Ban smart glasses. Yet the backlash speaks to a broader climate of skepticism toward artificial intelligence, which many still see as failing to add genuine value to everyday digital life. For these users, the idea of generating a video from a text prompt may seem novel, but it falls short against the enduring desire for authenticity—whether in family photographs, spontaneous exchanges among friends, or the expectation that what circulates on social platforms maintains some connection to reality.

On September 9, Apple introduced the Apple Watch Series 11 in Cupertino. Among its new features, the most notable is blood pressure monitoring. What makes this launch significant is that the feature has been cleared by the U.S. Food and Drug Administration. Its AI algorithm analyzes data collected by an optical heart sensor over a period of one month, allowing the watch to identify patterns consistent with hypertension—without the need for traditional pressure cuffs. The function has been validated in studies involving more than 100,000 volunteers and a clinical trial with an additional 2,000 participants. Results showed the system’s ability to detect even advanced stages of the condition with a minimal rate of false positives. Apple expects the feature to help identify more than one million previously undiagnosed cases of hypertension in its first year.
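Apple has not disclosed the algorithm’s internals, but the description above implies a persistence test over roughly a month of sensor data. The sketch below is purely illustrative: the `stiffness_index` feature, the thresholds, and the names are assumptions, not Apple’s implementation; only the shape of the logic (a full 30-day window, a requirement that elevation be persistent, silence otherwise) follows the public description.

```python
from dataclasses import dataclass

@dataclass
class DailySample:
    """One day's summary derived from the optical (PPG) heart sensor."""
    day: int
    stiffness_index: float  # hypothetical proxy correlated with blood pressure

def flag_possible_hypertension(samples: list[DailySample],
                               window_days: int = 30,
                               threshold: float = 0.72,
                               min_elevated_fraction: float = 0.8) -> bool:
    """Return True if the last `window_days` show a persistently elevated signal.

    Requiring persistence (most days elevated, not a one-off spike) is one
    plausible way to keep the false-positive rate minimal, at the cost of
    staying silent on borderline or intermittent cases.
    """
    if len(samples) < window_days:
        return False  # not enough data yet: say nothing rather than guess
    recent = sorted(samples, key=lambda s: s.day)[-window_days:]
    elevated_days = sum(1 for s in recent if s.stiffness_index >= threshold)
    return elevated_days / window_days >= min_elevated_fraction
```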
The potential impact is considerable, given that hypertension affects over one billion adults worldwide and often goes unnoticed due to its asymptomatic nature. Apple emphasizes, however, that any alert should be confirmed using conventional medical equipment under professional supervision. Experts welcome the possibility of broadening access to early diagnostics, while warning that the feature should not create false reassurance among those who receive no alerts. Ultimately, this new function underscores the evolution of smartwatches from mere fitness accessories into devices with clinical relevance. It also adds a fresh dimension to the ongoing debate: is mass-market technology an ally of traditional medicine, or a potential source of interference?

In the highlands of Chiapas, a young teacher undertook what neither governments nor academies had attempted: to teach Tzotzil to an artificial intelligence. Andrés ta Chikinib, a communicator and poet from Zinacantán, transformed ChatGPT into a diligent pupil—one that, beyond repeating phrases, asks questions and seeks coherence in a language long relegated from the public sphere. The initial impulse was practical—to create teaching materials without spending nights copying manuals—yet the outcome proved to be an uncommon experiment: a machine that “sat down to listen” to an ancestral tongue with the intent of mastering it. The uncanny question is how much knowledge can actually be made available for such an apprenticeship.
Here the polemic begins. Some linguists caution against feeding Indigenous languages into the machinery of global technology: what may appear as an act of preservation can also take the shape of domestication. Who decides which words are granted entry into the algorithm, and which vanish in the attempt? The reality is less philosophical and more pressing. According to UNESCO, sixty percent of Mexico’s Indigenous languages are at risk of extinction. If speakers do not transmit their language or use it in public spaces, how can they expect institutions to recognize or respect it? Its survival will depend less on congresses than on a grandmother, or, in this case, on the algorithm itself.

A recent study by Stanford University and BetterUp revealed that, despite the enthusiasm for incorporating AI into office work, its impact on productivity remains far from positive. The survey shows that four out of ten employees use the technology on a regular basis, yet many generate what researchers call “workslop”: output that appears polished but lacks depth, forcing others to redo the work. The cost is considerable: correcting each case takes nearly two hours on average, and across an organization those hours add up to millions of dollars in lost time. Beyond the financial toll, workslop also erodes trust within teams: more than half of respondents report feeling uneasy about, or losing confidence in, the colleagues who produce such content.
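The “millions” figure follows from simple arithmetic. In the back-of-envelope sketch below, only the roughly two hours of cleanup per incident comes from the study; the headcount, incident rate, and hourly cost are illustrative assumptions.

```python
# Back-of-envelope cost of "workslop" cleanup. Only the ~2 hours of rework
# per incident comes from the study cited above; headcount, incident rate,
# and wage are illustrative assumptions.
employees = 10_000          # assumed organization size
incidents_per_month = 0.5   # assumed workslop incidents received per employee per month
hours_per_incident = 2      # roughly the study's reported cleanup time
hourly_cost = 60.0          # assumed fully loaded hourly cost in USD

monthly_cost = employees * incidents_per_month * hours_per_incident * hourly_cost
print(f"Monthly cleanup cost: ${monthly_cost:,.0f}")      # $600,000
print(f"Annual cleanup cost:  ${monthly_cost * 12:,.0f}") # $7,200,000
```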
The friction between those who use AI with professional rigor and those who apply it in a casual, amateur manner only amplifies the discomfort. The absence of clear directives from leadership further fuels frustration: employees are left uncertain whether to correct, return, or simply reject subpar work. The outcome is unnecessary meetings, endless rounds of review, and time diverted from what truly matters. Researchers stress that the solution is not to dismiss AI, but to integrate it within rigorous standards. Only then can it become a strategic ally, applied with intention and discernment. For now, the promise of superefficiency remains on hold, and the question lingers: is AI truly easing burdens, or merely shifting them from one desk to another?

At its annual conference in Hangzhou, Alibaba announced a strategic alliance with Nvidia, aimed at integrating the full suite of Physical AI software into Alibaba Cloud. The collaboration will grant developers access to tools spanning data generation, model training, and validation, with a focus on applications that require direct interaction with the physical world—such as robotics, autonomous vehicles, and industrial automation systems. The announcement comes against the backdrop of U.S. trade restrictions on China, though Nvidia maintains a special agreement that allows the sale of certain chips in the country. Following the news, Alibaba’s shares rose more than 9% on the Hong Kong Stock Exchange, reflecting strong market expectations.
The agreement underscores Alibaba’s ambition to position itself at the forefront of the so-called “era of superintelligence,” backed by multibillion-dollar infrastructure investments. For Nvidia, it marks an expansion of its reach in a market eager to reduce reliance on Western suppliers, yet still dependent on its technology. The context is particularly relevant as the Physical AI sector continues to grow, with its value expected to multiply over the next decade. Demand in manufacturing, logistics, and healthcare shows no signs of slowing. This alliance consolidates the presence of both companies in an expanding field and, above all, demonstrates that—despite geopolitical tension—technological cooperation finds a way.

Experience shows that the adoption of artificial intelligence does not depend on sudden enthusiasm, though many companies have tried to launch projects in that spirit. A more solid approach is to assess organizations along two dimensions: the speed with which leadership drives change and the level of preparation in infrastructure, data, and capabilities. From this combination, four distinct zones emerge. Stagnation occurs when there is neither urgency nor preparation, and the company adopts a merely contemplative stance. Complacency appears in organizations that have resources but lack the drive to use them. Frustration prevails when speed is demanded without solid foundations. The most fertile scenario is innovation, which arises when urgency is matched with preparation and every part of the organization understands its role.
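The framework is simple enough to state as code. A minimal sketch follows: the two dimensions and the four zone names come from the framework above, while the boolean simplification and the function itself are ours, for illustration only.

```python
from enum import Enum

class Zone(Enum):
    STAGNATION = "stagnation"    # no urgency, no preparation
    COMPLACENCY = "complacency"  # prepared, but no drive to act
    FRUSTRATION = "frustration"  # urgency without foundations
    INNOVATION = "innovation"    # urgency matched with preparation

def classify(leadership_urgency: bool, readiness: bool) -> Zone:
    """Place an organization in one of the four adoption zones."""
    if leadership_urgency and readiness:
        return Zone.INNOVATION
    if leadership_urgency:
        return Zone.FRUSTRATION
    if readiness:
        return Zone.COMPLACENCY
    return Zone.STAGNATION

# Example: pressure to "do something with AI" but no data foundations.
print(classify(leadership_urgency=True, readiness=False))  # Zone.FRUSTRATION
```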
Today, many companies find themselves in the zone of frustration, caught between the pressure to “do something with AI” and the absence of the foundations needed to sustain it. Escaping this dead end requires ensuring that the data used to train and apply AI is accurate, complete, and accessible, while also creating testing environments and accepting that governance does not slow progress but brings order to it. The transition to innovation does not come from having the newest tool, but from scaling the conditions that allow experiments to become sustainable practices. Only then does artificial intelligence stop being a slogan and become part of everyday work, with effects that multiply across the entire organization.

When DHL decided to incorporate artificial intelligence into its voice assistant, it did so with a very different approach from many other companies. The project began with a simple problem: the system was unable to reliably recognize the German “ja,” the equivalent of “yes.” That misstep revealed that, before scaling, it was essential to fix the basics. The company treated AI as part of its critical infrastructure rather than rushing into eye-catching innovation projects.
DHL understood that speed without solid foundations leads to failure, and that governance is part of progress rather than an obstacle to it. For six months, teams from different departments worked in an isolated lab. They agreed on clear rules and cleaned up stored data without interfering with live operations. The result was a voice assistant that now handles around one million calls each month and resolves half of them without human intervention. The case shows that it is often more effective to create suitable environments for research and testing, and only then scale up sustainably. Truly novel tools do not always come from hurried, large-scale investments. Real progress lies in integrating these technologies into processes validated by practice, supported by reliable data and clear budgets.
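DHL has not published the assistant’s internals, but the “ja” failure points to a classic pattern in voice interfaces: accept a recognized confirmation only above a confidence threshold, and route everything else to a re-prompt or a human agent. A purely illustrative sketch, with assumed variant lists and threshold:

```python
# Illustrative confirmation handling for a voice assistant; the variant
# lists and the threshold are assumptions, not DHL's actual configuration.
YES_VARIANTS = {"ja", "jawohl", "ja bitte", "genau", "richtig"}
NO_VARIANTS = {"nein", "nee", "nein danke"}

def interpret_confirmation(transcript: str, confidence: float,
                           min_confidence: float = 0.85) -> str:
    """Map a speech-to-text result to 'yes', 'no', or 'escalate'."""
    if confidence < min_confidence:
        return "escalate"  # low-confidence audio goes to a re-prompt or a human
    text = transcript.strip().lower()
    if text in YES_VARIANTS:
        return "yes"
    if text in NO_VARIANTS:
        return "no"
    return "escalate"  # unrecognized phrase: never guess on a critical call

print(interpret_confirmation("Ja", 0.93))  # yes
print(interpret_confirmation("ja", 0.60))  # escalate: below threshold
```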

Coca-Cola chose to step into artificial intelligence in a way that was unconventional for a multinational of its size. Instead of rolling out complex strategic plans or sending thousands of employees to training programs, it created a “sandbox,” a controlled environment where a small team could test tools like DALL-E and GPT outside the scope of its core operations. Only six people from key areas—legal, communications, and technology—were involved in the initiative. The company gave them space to experiment and, based on the results, launched Create Real Magic, a creative experience that allowed consumers to access the brand’s iconic elements and design their own versions of Coca-Cola products using popular image generators. With the typography, logo, or bottle as a base, users could combine them with futuristic landscapes, Asian dragons, or styles from different periods of art history.
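Create Real Magic was built on OpenAI’s models, though the integration details are not public. A minimal sketch of the underlying prompt-to-image call, using OpenAI’s current Python SDK; the prompt, and the idea of combining a brand element with a user’s theme, are illustrative assumptions rather than Coca-Cola’s actual pipeline.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt pairing a brand element with a user's chosen theme;
# the real experience constrained prompts to approved brand assets.
prompt = ("A classic contoured glass soda bottle rendered as a glowing "
          "hologram above a futuristic neon cityscape, digital art")

result = client.images.generate(model="dall-e-3", prompt=prompt,
                                size="1024x1024", n=1)
print(result.data[0].url)  # link to the generated image
```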
Beyond a campaign that practically ran itself, the company gained firsthand insight into what the technology could deliver before committing significant resources or embarking on broader transformations with uncertain outcomes. A well-designed sandbox gave teams the freedom to make mistakes, learn quickly, and understand the real potential of AI without the costs of rushed improvisation. With this strategy, Coca-Cola reduced corporate anxiety and internal pressure, turning experimentation into a safe practice that did not put the brand’s reputation at risk.

Matt McLeod’s surgery was a success. An augmented reality visor gave surgeons a three-dimensional, precise view of his spine, making the procedure far easier. The use of AI in the operating room marks a paradigm shift in medical practice. The Xvision Spine system, approved after positive clinical trials, allows more exact, less invasive interventions: guiding screw placement, monitoring oxygen levels, and locating instruments in real time. Soon we may see robots suturing intestines with a steadiness of hand that is literally made of steel, or detecting residual cancer in the brain. Such breakthroughs blur the line between assisting physicians and replacing them; in practice, though, they stand to strengthen the physician’s role and help in making better decisions under pressure.
Although the results inspire enthusiasm, they also provoke unease among skeptics who question the degree of autonomy given to machines. We are approaching an inflection point that compels us to reconsider whether medical knowledge will remain the exclusive domain of humans or shift toward a hybrid entanglement with technology. Should that occur, it will bring ethical, epistemological, and practical consequences. A stethoscope extends the senses and allows the ear to hear what it otherwise could not. AI, by contrast, analyzes thousands of medical records and suggests a diagnosis. It ceases to be a mere tool and instead participates in the genesis of knowledge, redefining the role of human judgment in clinical practice.