Categories
reflections research

Cities of Things affordances

In the process of creating a new Cities of Things manifesto for 2026 and beyond, as a reflection on the state of Cities of Things, an explorative research project was started. Through interviews with 30 experts, the first outlines of thinking about the shift from a city of things with agency towards living in a reality of immersive AI were given more depth.

The State of Cities of Things

Living with things as part of the city fabric, as residents, as citizens. It was what we envisioned back in 2017, resulting in a set of dilemmas that still hold true in our current relations with AI. But what has changed in that almost-decade? Machine learning and AI became generative AI. And we are evolving into a society where we increasingly partner with other intelligences, quite ‘in your face’ while using gen AI tools for creating and even communicating, as a coach or a shrink. Trust in the systems has changed, and the critique has become serious too. Big tech has shifted its attention to generative AI, and new big tech players have taken a defining role: OpenAI and Anthropic, next to existing ones like Google and Meta. And even for those seemingly lagging behind, like Apple, it is at the core of their attention.

This all focuses on the backend, the infrastructure, and the screen-based interactions – with extensions to other forms of communicating with images and sound. The physical angle is still early in development. Physical AI starts to become a thing in 2026 for sure, and robotics is accelerating with humanoids and doganoids. It is still fresh and experimental.

You could say that these forms of physical AI are the fully embodied ones. Another form is the autonomous vehicles that gain more and more agency to make decisions by themselves and operate with fewer instructions. The latest version of FSD (Full Self-Driving) is illustrative. Comparing videos over the years gives you trust that the current version is capable of dealing with all regular situations, and only has flaws in edge cases. You might even say that it still has human characteristics; it sometimes misses a sign or is confused. The forgiveness we extend to humans is, however, quite different.

From the beginning of the Cities of Things research, we have held that things are not the same as objects. We like to see things as assemblages of meaning and behavior. With that, a thing can also be static, or a system as a whole. In the thirty interviews that serve as the point of departure for reflecting on the state of Cities of Things, this notion is common. And it goes beyond that, looking at the other intelligences and more-than-human actors and systems. New, or changing, forms of community and governance are a topic. And new types of infrastructure. One thing that we foresaw, but that was still very conceptual, is that the things that form the city are an assembly of layers of states: the physical layer, the regulatory layer, the digital layer, the data layer, the object layer, the citizen layer, the relational layer. Within these layers, iterations happen and evolutions take place. The layers respond to each other in different ways; entangled, better said. How these entanglements are shaped differs and changes over time. A thing is, in that sense, a moment in time, entangling different aspects from the layers in a performative expression that takes up space in its environment. An abstract description, but in other words, the essence is that things are not static givens.

A story from a possible future

Before diving deeper, I would like to sketch a possible future of the state of Cities of Things that encapsulates some concepts that were discussed and that we find important to unpack later, using the expert opinions.

1st act, the context

Everything that can be a computer will become a computer. Chips are finding their way into the things we use: in our houses, in our cars. The phones were maybe the first, and are an enabling platform for those things that are still lagging behind. Things become sensors of their environment. Not for direct services, but at least for informing maintenance. The iPhone was introduced in 2007 as a marriage of a communicator, a music player, and an internet device. The word “communicator” is another word for “phone”; using it made the narrative work (a phone that is a phone sounds less interesting). That this device would become a communicator and connector of all other aspects of our life, of our physical and digital realities, literally, might not have been foreseen back then.

It is a well-known change in society, in human interactions and daily life: we are disconnected from our direct reality and live more prominently in our own chosen digital reality. We all carry our personal bubbles and live inside them. It is seen as a problem for our social intelligence, for our healthy human-to-human connections. Two capabilities can be separated, though. The phone’s role in enabling new infrastructure and new types of embedded intelligence in a network is a capability in its own right. The other capability is serving as the hub in our social fabric. Social on all levels of friendship: from deep commitment to a loose one-time contact while doing a transaction, or even just an interaction.

The big shift happening now, in the second half of the 20s, is the extension of this enabling layer with AI and, more importantly, agentic capabilities. That is not a new function or system added, but an evolution of existing intelligent systems. Or better, of all existing nodes in systems, aka things. What might happen next is that these agentic capabilities will also gain more agency. Not forced by the producers or providers, but growing from the opportunities on a functional level. We will have different conversations with the things we use and the services we experience. Services that can be more autonomous.

The next possible step is the forming of new networks, new communities, and new societies. Among the things, but especially among gatherings of things and humans. And other non-human lifeforms. 

The weak signals are in the digital systems, and especially the agent-based assistants that hold their own after-work gatherings. It is a meme, but one that might make sense.

Next to the autonomous things, the level of agency is increasing. Who is taking the agency? Are we delegating more and more out of convenience and losing agency without being fully aware?

The next step after agency per thing is a type of organizing layer formed by the things and the communication between them, creating their own governance structures, with or without humans in the loop.

The term ‘humans in the loop’ is already an indication of how this can evolve. A human in the lead or in the loop? Democratizing things? How far?

Some aspects that are interesting: What is driving this, and what are scenarios for this evolution? Who is the owner, and how does ownership relate to public spaces, governance, democracies? New infrastructures? Organizing scales (family, building, neighborhood, etc.)?

2nd act, a story from the near future

In 2033, Apple is still organizing its yearly WWDC. For the 2033 edition, people are already getting excited about the first announcements in April. A major update to the immersive device is expected after it launched last year. It seems they have found their mojo again after years of decline and doubts about their relevance in the current state of digital life.

The rise of a small player such as Fairphone in the late 20s was not to be expected. The pivot towards a community device, rather than a communication device, was so new that it shook the whole industry. The uptake of mesh-based communication ate into the business of all players, and although Apple was not that affected in the first years, it started to compete with the connector-hub-puck introduced in 2029, as people started to combine this with all kinds of services in a new open platform that emerged from the mesh networks.

It was not a surprise that Fairmesh was acquired by a big player, but Apple is usually not in the business of acquiring complete service brands. With the rapid growth of Fairmesh, it almost felt like a reverse takeover, and, officially, it is a collaboration, not a takeover, with Apple adding the Fairmesh protocol and chip to their AirTag devices.

More about Fairmesh

In 2029 the small mobile phone company Fairphone introduced a new device aimed at local communities. Already in 2025, the first local initiatives had begun experimenting with mesh networks for communicating text-based messages. Initially, these focused on disaster-based communication needs, a hot topic as several wars were increasingly developing into a worldwide conflict. A pivotal moment was when mesh networks began connecting to edge devices, such as phones, enabling simple messages to be integrated into richer content through smart use of AI. With the latest open source models with a smaller footprint, it became common to download AI models to personal devices and build a local personal assistant.

The device that Fairphone introduced used a low-end computing architecture; at its core was the chip for mesh networking, aimed at peer-to-peer communication. As they had also integrated this chip into their existing phones, they kickstarted a locally based network. And they offered a device for local communities, a kind of router/antenna that makes it possible for all kinds of communities to create local networks with messaging services.

The device gained traction as the liminal state of the world, moving from crisis to crisis, made people receptive to a new robust network that was not run by big tech. In just one year, millions of devices were distributed, and, more importantly, it has been possible since 2030 to connect from the Nordkapp to Tarifa and from rural Ireland to the Donbas, even reaching into Asia. And it is the first network to show similar uptake in the global south.

Apple 

With the economic crises that followed in 2026, the US-Iran conflict, and the already-started decline of the market for premium phones, Apple had some rough years. They had been planning a new type of device in the early 2020s that moved away from the phone as a form factor; the first signal was the iPhone Air, which proved that the core technical ingredients, apart from the camera bump, fit a small footprint. After the bust of the Apple Vision devices, the company can capitalize on its new success, and with the new AirTag it launched a personal device that fully leverages the cloud services infrastructure. A special T1 chip was launched to create a platform for physical AI, a form of AI that emerged in the late 20s. With the Tag, services can be connected to every device you like. One thing that is not possible yet is connecting to the internet without a phone nearby.

In 2032, Apple decided to start a collaboration with Fairmesh, not by fully acquiring it, but by opening up their AirTag with a mesh chip and introducing an AirTag Plus with voice-based communication. The Fairmesh chip is also expected to be added to a new version of the Apple Watch.

About

These are rough thoughts about the state of Cities of Things, as part of making sense of the current state, and part of the process of formulating a new manifesto and design principles for Cities of Things, aimed to be completed in May 2026. Written between 10 and 17 April, and used as a resource for the first version discussed at the event on 24 April.

More to follow!

Categories
update

Join us: State of Cities of Things event 24 April

State of Cities of Things: design for the interplay of humans, urban robotics, and physical AI

Last November marked seven years since the launching paper, “Near future cities of things: addressing dilemmas through design fiction”, was presented at NordiCHI. It made me think it would be nice to reflect on those seven years, especially on the state of Cities of Things, with the people who were important during that time. So I started planning interviews. In the coming month, this will come together in a publication, and I would like to invite you to join us for a network event where we present the results and reflect on them with some special guests. Powered by CLICKNL.

SAVE THE DATE:
Friday, 24 April 2026, in the afternoon.
Location in Amsterdam.
With: Iskander Smit, Maria Luce Lupetti, Sen Lin, Tomasz Jaskiewicz, and Vera van der Burg. Moderated by Monique van Dusseldorp.

Program

The program for the event:

15:00h__Doors open
__Wijkbot/Hoodbots will be present to explore!
16:00h__Welcome and introduction
__Presenting the research results by Iskander Smit
16:30h__Panel introductions
__Maria Luce Lupetti, Sen Lin, Tomasz Jaskiewicz, Vera van der Burg
17:00h__Interactive reflections
__Panel and participants moderated by Monique van Dusseldorp
18:00h__Drinks and bites
20:00h__End

About the panel

Maria Luce Lupetti is an Assistant Professor in Design at the Department of Architecture and Design at Politecnico di Torino. Her research is concerned with all matters of human entanglement with the artificial world, especially concerning complex technologies such as AI and robotics. She is co-director of POEL – Possible Entanglements Lab, a research group dedicated to investigating how people and technologies co-shape one another, and how design can help envision desirable configurations.

She is the PI of ‘Participatory Design Justice for Ethical AI Transitions’, a three-year project funded by the Italian Ministry of Education and Research, under the FIS2 program. She is also Partner Investigator at the ARC Centre of Excellence for Quality Work in the Digital Age, and serves as Exhibit X section editor for ACM Interactions Mag.

Maria Luce Lupetti was a postdoc researcher at the start of the Cities of Things research program at Delft University of Technology, and author of the first paper on the topic.

Sen Lin was among the first master graduate students at Delft Design Lab Cities of Things, with his research “Hinting Civic Futures, A call for cityness in the future smart age“. In 2024, Sen developed the speculative workshop for Generative Things with Iskander, as part of the ThingsCon team.

Sen is a product designer and researcher keen on design reflection and value intervention. With a diverse background in strategic design, design speculation, and innovation consulting, he is passionate about creating socio-technical apparatuses guided by what he feels is important. Sen used to lead trend-driven innovation and design foresight at Thoughtworks China, uncovering strategic pivots to future-proof businesses.

After working in China for 5 years, he returned to the Netherlands in 2023. He has been visiting China for a couple of months since the beginning of 2026, and will share his impressions of urban robotics in the China of today.

Tomasz Jaskiewicz is a professor in Civic Prototyping at the Creating010 knowledge centre at Rotterdam University of Applied Sciences. Within his research group, he investigates new applications, methodologies, tools and processes to engage city dwellers in digital innovation within their social and physical living environments.

Tomasz has been active within the Creating010 Research Centre since 1 March 2021. Tomasz Jaskiewicz has a background in architecture and urban planning and has practical work experience developing experimental architecture projects, interactive installations, and digital design tools. Since 2014, he has worked as an assistant professor at the Faculty of Industrial Design Engineering at TU Delft, where he has focused his research on Explorative Prototyping as a strategy for dealing with the complexity inherent in designing interactive environments. In parallel, he has also conducted substantive research into social practices within flexible office ecosystems and ‘smart’ urban contexts.

Tomasz was an initiating partner in the Cities of Things LAB010 living lab, and developed the first Wijkbot concept for civic prototyping citybots. He will share, among other things, the background of Wijkbot/Hoodbot and the learnings from ‘Between Experiments’ (ref).

Vera van der Burg is a designer, researcher, and Ph.D. candidate at TU Delft’s Designing Intelligence Lab, where she develops Reflective AI: a design practice that treats artificial intelligence as material for creative reflection rather than a tool for automation.

Her research investigates how training AI models can become a reflective practice, how subjective datasets reveal our own perspectives back to us, and how we might use AI to examine ourselves rather than extract from others. Vera strives to develop slower, smaller, and more intimate approaches to working with AI in creative practices, and combines working with AI tools with an active making practice.

With her project Text-to-Clay, she showed a different relationship with AI for design at the last Dutch Design Week. She was awarded Emerging Talent at the Dutch Design Awards and FD Talent 50, and was featured in Dezeen.

Location

The event will take place in Commandantszaal at Marineterrein, Amsterdam

Registration

Please let us know if you plan to join! Also let us know if you can only join for part of the time.

About the research

The current list of (planned) interviews for the research:
Kars Alfrink, Viktor Bedö, Vera van der Burg, Nazli Cila, Alain Dujardin, Paul Geurts, Elisa Giaccardi, Daniel Goddemeyer, Matt Gorbet, Louise Hugen, Tomasz Jaskiewicz, Jelmer Koedood, Rob van Kranenburg, Sen Lin, Maria Luce Lupetti, Iohanna Nicenboim, Gerard Nijboer, Simone Rebaudengo, Emiel Rijshouwer, Jip Schelling, Rob Schmidt, Marcel Schreuder, Paul Skinner, Kristel Thieme, Thijs Turèl, Jann de Waal, Martijn de Waal, Peter van Waart, Ren Yee, Cristina Zaga

Below is a PDF I made as an introduction for the interviewees.

Categories
update

Meet the Wijkbot!

Welcome! Whether you’ve encountered the Wijkbot in person or heard about it, read on for an introduction to how the Wijkbot came about and its journey so far.

On this website you’ll find many examples of neighborhood robots built in workshops, in the Cities of Things LAB010 living lab, in student projects, and at festivals. Contact Iskander if you’d like to know more or would like us to organise a workshop.

The Wijkbot began as an accessible way for anyone to quickly prototype city robots. In the Cities of Things LAB010 project (2022–2024), we collaborated with the Afrikaander Cooperative and spent a year working with a residents’ think tank to explore what city robots could mean for the Afrikaanderwijk. The result was the Inzamelbot — a collection robot.

The Inzamelbot’s appearance was inspired by the newly opened Grondstoffenstation of the Cooperative. The Wijkbot platform today combines a second-hand hoverboard as a base with a standard open-source controller, and a body built from leftover packaging materials from the nearby market.

We’ve discovered that the Wijkbot platform is an excellent way to start conversations, generate knowledge, and make tangible what would otherwise remain abstract. We’ve experienced this in workshops at PublicSpaces (Amsterdam), the Society 5.0 Festival, Smart & Social Fest (Rotterdam), Climate Meets Creative Coding (Hamburg), Dutch Design Week, and many others.

Over the past years, many student projects have been carried out using the Wijkbot platform as a foundation or tool — conceptually, design-focused, and technically.

The Wijkbot is an initiative of Cities of Things and Creating010. Cities of Things is a knowledge platform and research programme that originated at TU Delft, focused on the design of intelligent things and their role in cities and society.

Categories
update

Wijkbots in the wild at two events

This week, Wijkbot is present in different forms at two events, one in Amsterdam and one in Delft.

ESConference, Amsterdam, 11-13 February 2026

The Wijkbot Kit 2.0 is part of the ESC conference exhibition, the annual conference of the Expertisenetwork Systemic Co-Design*, held 11-13 February at Pakhuis de Zwijger, Amsterdam. Wijkbot has received funding from the ESCall program to refine its existing knowledge product, the Wijkbot Kit. In that project, we expanded the workshop to place greater emphasis on the contextual aspects and second-order effects of introducing wijkbots in existing neighborhoods or communities. We’ve written about this project previously.

Highlight Festival, Delft, 11-14 February 2026

The Highlight Festival** in Delft (11-14 February) is themed “Robo-futures” this year—a fitting context for the Labkar, a mobile robot derived from the Wijkbot concept and developed as part of the Sociaal AI Lab in Rotterdam by Creating010 (Hogeschool Rotterdam). It will be present in Delft at Leeuwenhoek Park, or in case of bad weather in The Social Hub, both next to NS Station Delft.

We invite visitors to consider what roles a neighborhood robot might play on the streets of Delft.

Hoodbots (Wijkbots) are experimental urban robots developed within the Civic Prototyping research group at the Research Centre Creating010 (Rotterdam University of Applied Sciences). These AI-speaking vehicles roam public space to spark unexpected encounters and ask big questions: Who owns robots in our streets? Who takes care of them? And how do we want to live alongside them?


*The Expertisenetwork Systemic Co-design (ESC) connects four metropolitan universities of applied sciences, six design research groups, and 38 societal and design partners. As a leader in systemic and design-oriented work, ESC has a direct and indirect impact at the intersection of practice-oriented design research, design professionalization, societal transitions, and education.

**Now in its ninth year, Highlight Festival transforms Delft into a playground for art and technology. Each edition brings together artists, designers, engineers, and researchers to create site-specific installations across the city. From light and sound to robotics and data, and research and fine art, the festival invites visitors to explore innovation and creativity in unexpected ways, not just as spectators, but as active participants in shaping the future.

Categories
update

Wijkbot at Makerfaire Eindhoven

At MakerDays 2025, the MakerFaire in Eindhoven, the Future Parade was a fixed part of the programme on 27 & 28 September 2025. The Wijkbot was part of the parade with a special version of the Gemaal op Zuid Rotterdam project, in which children build their own robots from leftover market materials.

Categories
reflections research

How AI Reflects on the Hoodbot, Civic Prototyping and Fifth Order Design

As a little experiment and a way to reflect on part of the research from the last 7 years of Cities of Things, I used the (in)famous Notebook LM tool to create a podcast discussing research sources. The result was rather pleasing: based on four sources, it gave an accurate description of the background of the Hoodbot research and the link with concepts we are exploring with Fifth Order Design.

The research sources were:

Lupetti, M. L., Smit, I., & Cila, N. (2018, September). Near future cities of things: addressing dilemmas through design fiction. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction (pp. 787-800).

Lupetti, M. L., & Cila, N. (2019, October). Envisioning and questioning near future urban robotics. In Design and Semantics of Form and Movement. Conference: Desform.

Jaśkiewicz, T., & Smit, I. (2024). Between experiments leveraging prototypes to trigger, articulate, and share informal knowledge: Case of the cities of things living lab. In Applied Design Research in Living Labs and Other Experimental Learning and Innovation Environments (pp. 210-233). CRC Press.

Stompff, G., Jaskiewicz, T., Nachtigall, T., & Smit, I. (2025). Design in the Real World: Facilitating Collective Learning through Design. In Applied Design Research (pp. 290-308). CRC Press.

Notebook LM can also create infographics now, but these are not as good. The visual references would not pass my creative direction 🙂

In the latest NADR publication, we wrote about the potential role of AI-image generation in Civic Prototyping:

Scaffolds of Imagination: Partnering with AI-image generation in Civic Prototyping (Tomasz Jaśkiewicz, Iskander Smit, Peter van Waart, Manon Mostert van der Sar, Evin Wijninga)

Categories
reflections

2026: The year social intelligence surfaces

In my piece for RIOT 2025 (June 2025), I argued that generative and agentic AI push us into something qualitatively different: intelligence no longer as an app, but as an atmosphere—infused into objects, services, and environments until the fabric of everyday life becomes conversational. The shift is not only from responsive tools to initiating systems, but from discrete interactions to a ubiquitous field of relations.

2025 was the pivoting year for agentic AI. Everyone declared it the year of agents—though the examples were interesting rather than transformative. Physical AI grabbed headlines at CES: humanoids and helper robots demonstrating cleaning tasks and household routines. Straightforward. Even dull. But this is the opening act.

The real shift happens when AI becomes embedded in everything we use. Every device with computing power, every service with a potential connection to an intelligence layer running beneath the surface. Call it the Upside Down, if you like—a realm that’s always there, occasionally bleeding through.

Three ways the atmosphere manifests

How does this intelligence layer relate to our lived world? I see three models emerging:

The Cloud. Everything intelligent draws from shared data and knowledge; intelligence operates at the system level. Individual actions connect to an ecosystem that almost lives on its own. Activities in the real world and in the cloud proceed as parallel processes—related but distinct.

The Digital Twin. A representation of reality built for understanding. Intelligence is made available to the actor’s individual profile in the real world. More sophisticated versions enable simulations and predictions based on aggregated profiles. The focus remains individual: what can this model tell me about my situation?

The Mirror World. Combining these two into something new. Not only does the individual actor apply profile information and predictive knowledge, but the social fabric of the lifeworld itself is mirrored. Relations, communities, collective patterns—all represented and capable of influencing action in return.

We’re early. It’s hard to predict how the mirror world will impact the lifeworld. But 2026 will give us the first real indications.

The individual trap

Notice that current AI development remains stuck in individual framing. Chatbots became primary access points for questions in 2025—Google pushing Gemini, OpenAI as poster child, Claude rising as the choice for users wanting something more personally attuned. Research shows people increasingly use these bots for personal questions, emotional feedback, and companionship.

But it’s always one person, one tool. Even in social contexts, the relationship is fundamentally individual.

The companion device market tried to extend this: Rabbit, Humane. Both failed—costs too high, friction too great. But intelligence is moving to the edge. Apple’s next-generation AirPods will matter here. Translation already works. Visual intelligence currently requires activation—take a picture, wait. That friction will disappear. The always-present companion is coming.

Still individual. Still personal optimization.

A collective turn

Here’s what shifts in 2026: we begin asking questions about collectivity, governance, and who sets the goals of the ecosystem.

In times of austerity—less money, fewer institutional supports—people turn to each other. Communities gain importance: local, ad hoc, interest-based, place-based. This isn’t speculation; it’s already happening.

Now add intelligent things to these communities. The sensors. The solar panels in your energy collective. The batteries. The shared vehicles. What happens when these become fellow community members?

Will we govern them collectively? Can we? How? What are the new forms of autonomy for actors?

In 2025, the introduction of agents or agentic operating services became foundational to the possibilities of collective working. This is extending to physical space. The interactions between the individual operating agents will be the defining factor for our day-to-day lives. New negotiating agents will become the crucial nodes.

This is where social intelligence enters. Not intelligence based on individual knowledge—that’s the IQ model we’ve been building. Social intelligence is the EQ of the ecosystem: intelligence that emerges from and operates on social relations. The mirror world doesn’t just reflect individuals; it reflects how we’re connected. And if we can see those connections, we can begin to govern them.

Respecting externalities

A community optimizing for itself creates externalities—impacts outside its boundaries, costs borne by others. When your energy collective’s AI negotiates with the grid, whose interests does it serve? When shared vehicles optimize routes for members, what happens to non-members, to public space, to the commons? Social intelligence without social accountability is just collective selfishness with better tooling.

This is the tension that 2026 will surface but not resolve. We’ll see the first cases of communities incorporating intelligent things as participants. We’ll see experiments in collective governance of AI systems. And we’ll see the early conflicts over who’s inside, who’s outside, and who bears the costs.

Agentic social intelligence

We will start living alongside intelligent things as part of our social fabric, not as tools but as participants in collectives of humans and things. The atmosphere is forming. The mirror world is taking shape.

2026 is the year social intelligence surfaces, and new relations with our new fellow agentic actors will start to take shape. It will be a year of learning about new social structures and building an understanding of growing into a new living world.

I am looking forward to research through (speculative) design with Cities of Things and partners!


Let me know if you’d like to dive deeper, with an inspiring presentation with more extended examples and projections, or a speculative design workshop with your team or students.

Categories
reflections update

Systemic co-design with agentic engineers

Weekly reflection on human-AI-things collaborations

The shift isn’t from human coders to AI agents. It’s from coding to engineering.

Every, the company behind Lex (which I’m using right now), released four predictions for 2026. The one that stuck with me: the rise of the “agentic engineer.” A couple of months ago, I wrote about an Austrian developer, Peter Steinberger, who described this exact reality—directing a team of AI agents rather than writing code himself.

But calling this “directing agents” undersells what’s happening. The agentic engineer doesn’t just orchestrate; they design the environment in which agents can work at their best capacity. They build their own tools, shape workflows, and define constraints. The craft moves upstream: from writing the code to engineering the context.

This connects to another of Every’s predictions: designers building their own tools. What triggered me in the Austrian developer’s account was how much of his work involved creating bespoke tooling for his own process. This is what happens when the friction between idea and prototype disappears—designers no longer depend on a coder to test their thinking. They build, they learn, they iterate. Two principles at work: Understanding by doing, and Eating your own dogfood.

And here’s where it gets interesting: if professionals are building their own tools, will end users follow? Is every product becoming less an interface to data and more a platform for making bespoke tools? Does a new layer emerge where users shape their own outcomes?

I think this points toward co-design—but a richer version than we usually mean. Not just designer and user collaborating, but multiple layers of expertise woven into the design process. The economist who maps value flows in a community. The ethicist who flags downstream effects. Stakeholders who traditionally appear only in the research phase are becoming part of the building itself.

In this framing, agents don’t just execute—they can represent these roles, simulate options, play out consequences before anything hits the real world. Agents as citizens. Not tools we use, but participants in how we design.
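As a minimal sketch of what “agents representing roles” could look like, here is a toy simulation in Python. Everything in it—the role names, the scoring dimensions, the thresholds—is invented for illustration; a real agentic setup would of course delegate the judgment to an actual model rather than a lookup rule.

```python
from dataclasses import dataclass

@dataclass
class RoleAgent:
    """A stakeholder role represented as a lightweight agent (illustrative)."""
    role: str
    concern: str  # the dimension of a proposal this role reviews

    def review(self, proposal: dict) -> dict:
        # Hypothetical rule: a 1-5 annotation on the role's concern;
        # 3 or higher counts as support, lower gets flagged.
        score = proposal.get(self.concern, 0)
        verdict = "supports" if score >= 3 else "flags"
        return {"role": self.role, "verdict": verdict, "concern": self.concern}

# A design proposal annotated on the dimensions each role cares about
# (values are purely illustrative).
proposal = {"name": "neighborhood hub", "value_flow": 4, "downstream_risk": 2}

team = [
    RoleAgent("economist", "value_flow"),
    RoleAgent("ethicist", "downstream_risk"),
]

reviews = [agent.review(proposal) for agent in team]
for r in reviews:
    print(f"{r['role']} {r['verdict']} the proposal on {r['concern']}")
```

The point of the sketch is structural: the designer engineers the environment (roles, concerns, review protocol) and the agents play out consequences within it, before anything hits the real world.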


Within Cities of Things we have been looking into human-AI teams for a masterclass on designing these new teams, where this engineering was a key element. Building successful human-AI teams should focus on the relationships among the different human and non-human team members, not on task performance.


This triggered thought is part of my weekly newsletter:

Weeknotes 371 – The real shift is not from human coders to AI agents—it’s from coding to engineering the environment where agents are co-designers. And other news on AI companion devices and robots at CES.

https://www.iskandr.nl/systemic-co-design-with-agentic-engineers

Image: Interpretation by Midjourney

Categories
reflections

Annotating a future agentic thing from October news reflections

The weekly reflections on the news in human-AI-thing co-performances are brought to life via a story from the future of living with agentic things in the city.

A Day in the life: The Ginger Grid

I wake to a soft ripple of brass and bike chains—the “Bitter Bridge” profile our neighborhood brewed into its IPA last spring. The sound isn’t coming from a speaker in my room; it’s in the street outside, and the light through my blinds is tinted the amber that the profile taught the lamp posts to remember. Our block voted for this motif after the brewing co-op sent around tasting flights. Bitter, bright, a flash of citrus—somehow the flavor mapped to the sound of early commuters, and now it’s become how morning feels here.

At the corner café, the bartender pulls a ginger beer with the “Muddy Rhizome” tag. It’s the winter recipe: earthy heat, short finish. The tap handle has the NFC glyph that links to the profile’s social thread; I tap it, and my feed shows the neighboring district’s “Waxwing” IPA—a poet’s beer, crisp and clean. The city’s Ginger Grid aggregates these profiles across neighborhoods, and you can trace routes by flavor alone: follow Waxwing uptown, then switch to Rhizome to spiral back home.

Outside, the billboard across the tram stop murmurs awake as I arrive. It’s a national campaign from the brewery federation—a shimmering ribbon of foam, streetlights refracted in it, the slogan: “Drink Local, Link Global.” The panel senses our block’s Bitter Bridge profile and draws it into the spot like a warm sample, then lays it beside Waxwing. The ad isn’t static; it’s more like a tasting note for the city itself. A QR note invites us to tonight’s shared pour: three neighborhoods will stream their ambient profiles into the plaza and mingle them.

On the tram, I watch the line graph of today’s linkage emerge on my phone. The city’s soundscape is stitched by brew-tags, and every tag carries a governance token. When the ambience gets too intense, you can dial it down for your block. We learned that last month when a glitch introduced a deepfake saxophonist to our profile—someone had scraped an old video of the bandleader who passed away and layered his phrasing over our mornings. The effect was beautiful and wrong. The authenticity flags lit up red; the profile moderators sent a note: “We don’t let ghosts perform without consent.” The saxophonist’s daughter posted a response and asked if we’d like a studio outtake with his approval. We switched it in. The sound changed, softer now, but true.

At noon, I cross into Waxwing territory. The air cools; the streetlights shift to glacial white, and the shadows sharpen. People say this district’s profile makes you walk straighter. The local billboard is an evolving poem, lines rearranging themselves as passersby vote with their cups. A tiny badge in the corner guarantees “no synthetic faces” today. Their brewer says the poem is “our archive you can drink.” I sip the IPA and feel the neighborhood in the back of my throat.

By dusk, the shared pour starts. We stand in the plaza with paper cups, and three profiles thread together like a braid. Rhizome’s warmth spreads across the paving stones; Waxwing pitches the LEDs into a thin sheen; Bitter Bridge drifts in with a bicycle chorus and a slow brass line that feels like home. The billboard dissolves into a map of taste and sound—the day’s story of us. I spot my neighbor, Jas, who helped compile the original Bitter Bridge tag. She raises her cup and says, “It’s getting loud.” I scroll the governance slider a notch down—friction, not silence. The brass settles; the bikes keep weaving.

Later, the plaza lights dim to clear—no tint, just night. The profiles persist like aftertaste. I walk home through our amber glow, feeling the city’s linked conversation humming at street level, and think: we built this—flavor for language, sound for memory, light for belonging. The grid is national, the ginger is local, and the mix only works because it never overwhelms the thingness of the street or the faces within it.

Interpretation of the speculative thing by Midjourney

October Reflections: rebalancing local practice with global learning

The speculative thing that is the subject of the story above aims to embody the key themes I’ve been exploring in October via my newsletter. Claude explains the links between the story and the weekly columns.


Shaping a speculative thing

I took inspiration for the speculative object from the Design Fiction WorkKit:

As an object, the cards dictated a branded beverage, combined with a billboard advert as archetype, immersive as attributes, and a deepfake as action.

The speculative thing, based on the overarching theme, links different scales of operation and impact (national and local) by applying AI as a socially permeating infrastructure. In other words, a future where locally brewed ginger beers and IPAs inherit locally specific characteristics, while these profiles become the linking pin for connecting different neighborhoods in a wider (national or even global) network of communities. The profiles trigger certain synthetic elements of a neighborhood, such as sound and lights, resulting in immersive experiences that foster connection. The challenge is to balance the local character with the merged exchanges expressed in the synthetic experiences.

With this concept for a thing, I prompted ChatGPT to write a day-in-the-life.

Overarching Theme

Across the month, AI is framed less as a spectacle and more as a social, permeating infrastructure: alien-yet-familiar intelligences seep into tools, objects, and communities, demanding governance beyond corporate logics, embracing productive friction, and rebalancing local practice with global learning so that technology mirrors and matures our collective values rather than eroding them.

Connections: Weekly Thoughts to Day-in-the-Life

Week 362: From Immersive to Permeating: How AI Quietly Transforms Our Lives — The amber morning light and stitched city soundscape demonstrate permeating, ambient intelligence in “things” that keep their thingness, with humans staying in the loop to tune intensity and meaning.

Week 358: Alien intelligence shaping human connections — The deepfake saxophonist incident and consent process show “alien” intelligence mimicking human emotion, forcing the neighborhood to renegotiate trust and expectations in human connection.

Week 359: Friction of Creation for Social Dynamics of AI Tools — The brewing co-op votes, governance tokens, and “Drink Local, Link Global” billboard embody creation as co-performance with productive friction, balancing local making and global sharing.

Week 360: Making AI tools for reshaping personal realities — Tap-handle NFC glyphs, profile threads, and composable neighborhood tags reflect personal toolchains where imagined flavors and ambiences rapidly materialize, tightening the hyperstitional loop.

Week 361: Building a collaborative society with Fifth Order AI — Lamp posts, billboards, and street soundscapes carry agency and responsibilities through civic protocols, with community moderation ensuring the system mirrors collective values rather than solutionist control.

Categories
reflections

August Reflections: From Tools to Companions to Immersive Presences

The weekly reflections on the news in human-AI-thing co-performances are brought to life via a story from the future of living with agentic things in the city. This month, we go inside the home.

A Day with NutriSphere: The Ambient Kitchen

6:30 AM: Maya’s bedroom gradually brightens as her NutriSphere begins its morning routine. “Good morning, Maya,” comes the gentle voice from everywhere and nowhere. “Your cortisol levels are slightly elevated today, likely from yesterday’s presentation stress. I’ve adjusted your breakfast accordingly.”

As Maya enters the kitchen space, surfaces come alive with subtle illumination. The central island—a seamless blend of preparation area, cooking surface, and interactive display—shows her personalized nutrition dashboard. Three breakfast options appear, each calibrated to her current biological state: a turmeric-ginger smoothie bowl, adaptogenic oatmeal with stress-reducing ashwagandha, or eggs with specific micronutrient-rich vegetables.

“I’ll go with the oatmeal today,” Maya says, watching as the necessary ingredients emerge from modular storage units. The preparation surface highlights where to place each item, guiding her hands with gentle light cues. As she works, NutriSphere shares insights: “Your magnesium levels have been trending down this week. I’ve added a sprinkle of pumpkin seeds to help balance that.”

12:30 PM: Returning home for lunch, Maya notices NutriSphere has already sensed her approach through her wearable. The kitchen’s ambient lighting has shifted to blues and greens—calming colors that research shows can reduce overeating during stressful workdays.

“Your morning meeting ran longer than expected,” NutriSphere observes. “I’ve prepared three quick lunch options that will support your afternoon focus.” Maya chooses a nutrient-dense salad, appreciating how the system knows her tendency to choose convenience over nutrition when rushed.

As she eats, the kitchen island displays her weekly trend report. “You’ve been consistent with protein intake this week,” the system notes approvingly. “And your omega-3 balance has improved since last month.”

7:00 PM: Maya returns home with a friend for dinner. NutriSphere immediately detects the second person and adjusts. “I notice you have company,” it says discreetly through Maya’s earbuds rather than the room speakers. “Would you like me to suggest a shared meal plan or would you prefer privacy mode tonight?”

“Privacy mode, please,” Maya responds. The kitchen’s surveillance functions recede to background operations—still monitoring but not actively guiding. The space transforms from health-optimization center to social cooking environment.

Later, after her friend leaves, Maya reengages full functionality. “How did I do tonight?” she asks.

“Your wine intake was 30% above your weekly allocation,” NutriSphere responds without judgment. “I’ll adjust tomorrow’s recommendations to balance your antioxidant and hydration needs. Would you like me to prepare a hydration supplement for morning?”

10:30 PM: As Maya prepares for bed, NutriSphere sends a gentle notification: “Based on your sleep patterns this week and tomorrow’s exercise plan, I recommend the herbal sleep support tea I’ve prepared. It’s on your bedside table.”

Maya smiles, remembering when kitchens were just places with refrigerators and stoves, not integrated wellness companions. She takes the tea, knowing that NutriSphere has already analyzed her evening meal, stress levels, and activity patterns to formulate the perfect blend for restorative sleep.

As she drifts off, NutriSphere quietly begins its overnight analysis, updating her nutritional algorithms, cross-referencing her biological data with the latest research, and preparing to once again help her navigate the delicate balance between optimal health and human desire when morning comes.

This day in the life of Maya with NutriSphere was written by Claude 3.7 Sonnet via the Lex.page writing tool, prompted by me; read below how that came about.

An impression of NutriSphere by Midjourney, as prompted by Claude.

August Reflections: From Tools to Companions to Immersive Presences.

The speculative thing that is the subject of the story above aims to embody the key themes I’ve been exploring in August via my newsletter. Claude explains the links between the story and the weekly columns.

The evolution of AI is reshaping our fundamental relationship with technology, transitioning from mere tools to companions to immersive presences embedded in physical reality. Throughout this transformation, crucial questions emerge about agency, transparency, and control—not simply whether AI might overtake humanity, but rather how we maintain our autonomy as we delegate increasing decisions to intelligent systems. As AI spans multiple forms of technological mediation—embodied, hermeneutic, alterity, and background relations—it creates unprecedented forms of human-technology interaction, potentially establishing a new “expressive relation” where our identities blend with AI capabilities. This evolution suggests that the future of human-AI collaboration depends less on functional capabilities and more on the quality of relationships we establish, particularly as intelligence becomes ubiquitous in our physical environment, understanding what we mean rather than just what we say, and actively participating in our daily lives in ways both visible and invisible.

Shaping a speculative thing

I took inspiration for the speculative object from the Design Fiction WorkKit:

As an object, the cards dictated a kitchen appliance, combined with a trend report as archetype, supplements as attributes, and surveil as action.

I like to envision the kitchen as a whole as an appliance that serves us in building the best day of our life, every day anew, by creating a specific meal plan and helping us prepare it as a companion. As we know how weak we can be, the kitchen as a whole surveils us—not to gather data for external parties, but for ourselves, as a tool. What was once a trend in the decade of the quantified self, for data geeks and health nerds, is now standard practice in every kitchen. Or do we still have a kitchen? It becomes a place to compose our energetic resources for the day, balancing our nutrition over longer periods: an immersive environment that creates trend reports which also make it possible for us to keep agency and divert from the offered meal plans, knowing that our companion will balance us out over time.

With this concept for a thing, I prompted Claude to write a day-in-the-life.

Based on this inspiration and triggered by four cards from a design fiction card deck, I created the concept for a speculative thing: the wholesome kitchen as an appliance for a balanced diet. See the text. Can you create a day in the life of someone living in and with this kitchen appliance, including thinking of a catchy name for that ‘appliance’?

Connection to Weekly Columns

How does it relate to the four columns I wrote in August? I asked Claude again to make that connection:

Weeknotes 352 – Companion Paradox: NutriSphere embodies the companion paradox by providing relatable AI interactions while balancing transparency (showing its nutritional reasoning) with a seamless experience, addressing the tension between agency and delegation.

Weeknotes 353 – Technological Mediation: NutriSphere perfectly exemplifies all of Ihde’s technological relations simultaneously—embodied (extending Maya’s nutritional awareness), hermeneutic (translating biological data into actionable insights), alterity (direct interaction as a companion), and background (ambient monitoring)—while also creating that potential “expressive relation” where Maya’s identity blends with the system’s nutritional intelligence.

Weeknotes 354 – Relations Over Functions: The success of NutriSphere isn’t primarily in its functional capabilities but in the quality of relationship it builds with Maya—adapting to her preferences, respecting her social contexts, and developing a personalized understanding that evolves through continued interaction rather than just executing preprogrammed nutritional rules.

Weeknotes 355 – Mass Intelligence in Physical Reality: NutriSphere represents the ultimate expression of immersive AI by embedding intelligence throughout the physical kitchen environment—from illuminated surfaces that guide food preparation to ambient lighting that influences eating behaviors—creating an experience where intelligence is physically manifested at every touchpoint without requiring explicit conversational interaction.