In my piece for RIOT 2025 (June 2025), I argued that generative and agentic AI push us into something qualitatively different: intelligence no longer as an app, but as an atmosphere—infused into objects, services, and environments until the fabric of everyday life becomes conversational. The shift is not only from responsive tools to initiating systems, but from discrete interactions to a ubiquitous field of relations.
2025 was the pivotal year for agentic AI. Everyone declared it the year of agents—though the examples were interesting rather than transformative. Physical AI grabbed headlines at CES: humanoids and helper robots demonstrating cleaning tasks and household routines. Straightforward. Even dull. But this is the opening act.
The real shift happens when AI becomes embedded in everything we use. Every device with computing power, every service with a potential connection to an intelligence layer running beneath the surface. Call it the Upside Down, if you like—a realm that’s always there, occasionally bleeding through.
Three ways the atmosphere manifests
How does this intelligence layer relate to our lived world? I see three models emerging:
The Cloud. Everything intelligent draws from shared data and knowledge; intelligence operates at the system level. Individual actions connect to an ecosystem that almost lives on its own. Activities in the real world and in the cloud proceed as parallel processes—related but distinct.
The Digital Twin. A representation of reality built for understanding. Intelligence is tied to the individual actor’s profile in the real world. More sophisticated versions enable simulations and predictions based on aggregated profiles. The focus remains individual: what can this model tell me about my situation?
The Mirror World. Combining these two into something new. Not only does the individual actor apply profile information and predictive knowledge, but the social fabric of the lifeworld itself is mirrored. Relations, communities, collective patterns—all represented and capable of influencing action in return.
We’re early. It’s hard to predict how the mirror world will impact the lifeworld. But 2026 will give us the first real indications.
The individual trap
Notice that current AI development remains stuck in individual framing. Chatbots became primary access points for questions in 2025—Google pushing Gemini, OpenAI as poster child, Claude rising as the choice for users wanting something more personally attuned. Research shows people increasingly use these bots for personal questions, emotional feedback, and companionship.
But it’s always one person, one tool. Even in social contexts, the relationship is fundamentally individual.
The companion device market tried to extend this: Rabbit, Humane. Both failed—costs too high, friction too great. But intelligence is moving to the edge. Apple’s next-generation AirPods will matter here. Translation already works. Visual intelligence currently requires activation—take a picture, wait. That friction will disappear. The always-present companion is coming.
Still individual. Still personal optimization.
A collective turn
Here’s what shifts in 2026: we begin asking questions about collectivity, governance, and who sets the goals of the ecosystem.
In times of austerity—less money, fewer institutional supports—people turn to each other. Communities gain importance: local, ad hoc, interest-based, place-based. This isn’t speculation; it’s already happening.
Now add intelligent things to these communities. The sensors. The solar panels in your energy collective. The batteries. The shared vehicles. What happens when these become fellow community members?
Will we govern them collectively? Can we? How? What are the new forms of autonomy for actors?
In 2025, the introduction of agents and agentic operating services laid the foundation for collective working. This is now extending into physical space. The interactions between individual operating agents will become the defining factor in our day-to-day lives, and new negotiating agents will become the crucial nodes.
This is where social intelligence enters. Not intelligence based on individual knowledge—that’s the IQ model we’ve been building. Social intelligence is the EQ of the ecosystem: intelligence that emerges from and operates on social relations. The mirror world doesn’t just reflect individuals; it reflects how we’re connected. And if we can see those connections, we can begin to govern them.
Respecting externalities
A community optimizing for itself creates externalities—impacts outside its boundaries, costs borne by others. When your energy collective’s AI negotiates with the grid, whose interests does it serve? When shared vehicles optimize routes for members, what happens to non-members, to public space, to the commons? Social intelligence without social accountability is just collective selfishness with better tooling.
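A toy sketch can make this tension concrete (all names and numbers here are hypothetical, not a real system): a community agent schedules the collective's energy demand. With zero weight on external costs it behaves exactly as "collective selfishness with better tooling", picking the cheapest slot for members regardless of grid congestion; adding a weight on the externality term is a minimal form of social accountability.

```python
from dataclasses import dataclass

# Hypothetical toy model: a community agent negotiates grid draw
# for its members. Optimizing purely for member cost pushes load
# into peak hours, imposing a congestion cost on non-members.

@dataclass
class Slot:
    hour: int
    price: float            # what members pay per kWh
    congestion_cost: float  # cost borne by the wider grid per kWh

def schedule_demand(slots, demand_kwh, weight_externality=0.0):
    """Pick the slot with the lowest weighted cost.
    weight_externality=0 -> pure collective self-interest;
    raising it makes the agent account for costs borne by others."""
    def cost(s):
        return s.price + weight_externality * s.congestion_cost
    best = min(slots, key=cost)
    return best, demand_kwh * best.congestion_cost  # chosen slot, external cost

slots = [
    Slot(hour=18, price=0.10, congestion_cost=0.30),  # cheap for members, congested peak
    Slot(hour=2,  price=0.15, congestion_cost=0.02),  # pricier for members, off-peak
]

selfish, ext_selfish = schedule_demand(slots, 100, weight_externality=0.0)
social,  ext_social  = schedule_demand(slots, 100, weight_externality=1.0)
```

The selfish agent picks the congested 18:00 slot and externalizes most of the cost; weighting the externality flips the choice to 02:00. Who sets that weight, and for whose benefit, is precisely the governance question.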
This is the tension that 2026 will surface but not resolve. We’ll see the first cases of communities incorporating intelligent things as participants. We’ll see experiments in collective governance of AI systems. And we’ll see the early conflicts over who’s inside, who’s outside, and who bears the costs.
Agentic social intelligence
We will start living alongside intelligent things as part of our social fabric, not as tools but as participants in collectives of humans and things. The atmosphere is forming. The mirror world is taking shape.
2026 is the year social intelligence surfaces, and new relations with our fellow agentic actors will start to take shape. It will be a year of learning about new social structures and of building an understanding of how to grow into a new living world.
I am looking forward to research through (speculative) design with Cities of Things and partners!
Let me know if you’d like to dive deeper: an inspiring presentation with extended examples and projections, or a speculative design workshop with your team or students.
