Umberto Eco was born on January 5th. So was I, though a few decades later. I first encountered his work in university, in a communications theory course (shout out to Andréa, my theory teacher) that ended up shaping how I think about media, culture, and meaning more than anything else I studied. Apocalittici e integrati was not just a text on a reading list. It was the first time I saw someone apply rigorous analytical tools to things the academy considered beneath serious attention: comic strips, pop songs, television. That permission to take popular culture seriously as an object of study never left me.
Sixty years after Eco wrote it, the framework he built to analyse mass media has become the most precise instrument available for describing what is happening with artificial intelligence. That is not a coincidence. It is what good theory does: it outlasts its original context.
Apocalittici e integrati was, at its origin, an analysis of the two dominant positions toward new media: those who saw in television and popular culture the destruction of civilisation, and those who celebrated it as the democratisation of knowledge. Eco chose neither side. He proposed that both were asking the wrong question, and that the real work was to understand what was actually happening, not to take sides in the spectacle of division.
In 2026, the debate about AI reproduces that structure with a fidelity that would be comic if the consequences were not so real.
The AI apocalyptics
The contemporary apocalyptic is easy to recognise. They see in every generative advance an existential threat to human creativity, employment, and the authenticity of experience. For this group, AI is a soulless simulacrum devouring what is genuinely human, and any use of it is a form of capitulation.
The critique has substance. AI was directly responsible for approximately 55,000 layoffs in the United States in 2025, in a year when total job cuts reached 1.17 million, the highest level since the pandemic. Concerns about the concentration of technological power are legitimate and documented. The problem with the apocalyptic is not being wrong about the risks. It is letting the magnitude of the risks foreclose anything beyond naming them. The critical position, when it becomes an identity, stops being analysis and becomes performance.
The integrators and the CEO as inverted oracle
The contemporary integrator is more dangerous, not because they are malevolent, but because they are institutionally naive. Three-quarters of CEOs say they are their organisation's main decision maker on AI, twice the share from the previous year. At the same time, more than 60% of CEOs admit feeling pressure to act on AI while conceding they lack a clear execution path. (PwC, 2026)
This is the integrator in its most consequential form: someone with decision-making authority, urgency to act, and no critical framework. AI becomes an oracle to which strategic decisions are delegated, not because the executive understands what it can do, but because competitive pressure demands that something be done with it quickly.
AI deployment grew 400% across enterprises between 2024 and 2025, but only 12 to 18% of companies captured meaningful ROI. Failures were consistently traced back to organisational dysfunction: unclear ownership, misaligned incentives, and leadership teams unwilling to make explicit decisions about how work should change. (Wharton / WNDYR, 2026)
The integrator treats AI as a strategic shortcut. What they get is a strategic dependency with no map for what happens when the shortcut leads somewhere the organisation did not intend to go.
The third position Eco did not name
Here is where the framework needs expanding. Eco's dichotomy was precise for 1964. In 2026, there is a third position that is not a synthesis of the other two and not a moderate midpoint between them. It is a categorically different relationship with the technology.
I call them adapters.
The adapter is not defined by enthusiasm or rejection. They are defined by the quality of their attention. They use AI with an awareness of what it can and cannot do: as a research accelerator, a brainstorming partner, a tool for processing qualitative data at scale, an automation layer for repetitive tasks. What they do not do is delegate to it the decisions that require something the machine structurally cannot provide: lived experience, cultural context, emotional memory, and the accumulated judgment of someone who has been wrong before and learned something from it.
The adapter understands the one thing that both apocalyptics and integrators miss: AI does not think. It optimises. It produces the statistically probable output from the patterns in its training data. That output can be extraordinarily useful. It can also be extraordinarily generic, because the statistically probable is, by definition, the average of what came before. The adapter is the person who knows the difference, and who brings enough context and judgment to the process that the output becomes something more than a well-formatted mean.
The paradox at the centre
Here is what makes the adapter philosophically interesting and practically indispensable. They are simultaneously the most critical of the system and the most useful to it.
Generative AI learns from human output. The richer, more specific, more contextually grounded the input, the better the output. The adapters, precisely because they engage with AI critically and intentionally, provide the kind of input that produces genuinely useful results. They are, in a very concrete sense, the source that trains and improves the systems they use with caution.
The apocalyptic refuses engagement and thereby contributes nothing to the system's evolution. The integrator engages without discrimination and thereby contributes noise. The adapter engages with judgment and thereby contributes signal. Eco himself wrote in 1964 that man should no longer be seen as a syllogising animal, but as an animal capable of building syllogising machines and posing new questions about their use. The adapter is that animal. The one asking the new questions.
What this means for organisations
The institutional error that most organisations are making right now is not adopting AI too quickly or too slowly. It is misidentifying who in the organisation should be driving that adoption.
Adapters are frequently labelled as resistant, slow, or overly cautious by the integrators above them in the hierarchy. They ask questions before implementing. They flag limitations. They push back on proposals that treat AI output as strategic truth. In organisations where speed is confused with intelligence, these are career-limiting behaviours.
The irony is structural. As Barry O'Reilly observed: "Most companies aren't failing at AI. They're failing at the conditions required for AI to succeed." Those conditions include, above all, people with enough critical intelligence to know when the tool is working and when it is producing confident-sounding nonsense. Removing the adapters from the process in the name of efficiency is not a technology decision. It is an epistemological one. And organisations that make it are not moving faster. They are moving blindly.
The permanent condition
The human-machine link is not a transitional phase on the way to full automation. It is the permanent condition of any intelligent use of technology. Machines need humans to evolve: to provide the context, judgment, and cultural specificity that training data alone cannot contain. Humans need machines to scale: to process at speeds and volumes that individual cognition cannot reach.
What Eco understood about television in 1964, and what applies with equal precision to AI in 2026, is that the interesting question is never whether the technology is good or bad. It is what human beings are becoming in a world where machines do things that yesterday only humans could do. The apocalyptic answers that question with fear. The integrator answers it with enthusiasm. The adapter answers it with the only instrument adequate to the complexity of the question: careful, critical, ongoing attention.
That is not a moderate position. It is a demanding one.
Photo by cottonbro studio: https://www.pexels.com/photo/bionic-hand-and-human-hand-finger-pointing-6153354/
References
- Eco, U. (1964). Apocalittici e integrati. Bompiani.
- Ligas, C., Crepaldi, F., & Alfieri, F. (2026). Né Apocalittici né Integrati: AI Semiology. Fondamenti di semiotica dell'Intelligenza Artificiale. Ars Europa.
- Fernández-Galiano, L. (2026). Apocalyptic and Integrated. Arquitectura Viva.
- BCG. (2026). As AI Investments Surge, CEOs Take the Lead. Boston Consulting Group.
- PwC. (2026). CEO Survey 2026. PricewaterhouseCoopers.
- IBM Institute for Business Value. (2025). CEOs Double Down on AI While Navigating Enterprise Hurdles. IBM.
- WNDYR / Wharton. (2026). 2026: The Year AI ROI Gets Real.
- Conference Board. (2026). AI and the C-Suite: Implications for CEO Strategy in 2026.