Categories
reflections research

How AI Reflects on the Hoodbot, Civic Prototyping and Fifth Order Design

As a little experiment and a way to reflect on part of the research from the last seven years of Cities of Things, I used the (in)famous NotebookLM tool to create a podcast discussing research sources. The result was rather pleasing: based on four sources, it gave a fairly accurate description of the backgrounds of the Hoodbot research and its link with the concepts we are exploring with Fifth Order Design.

The research sources were:

Lupetti, M. L., Smit, I., & Cila, N. (2018, September). Near future cities of things: addressing dilemmas through design fiction. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction (pp. 787-800).

Lupetti, M. L., & Cila, N. (2019, October). Envisioning and questioning near future urban robotics. In Proceedings of Design and Semantics of Form and Movement (DeSForM).

Jaśkiewicz, T., & Smit, I. (2024). Between experiments leveraging prototypes to trigger, articulate, and share informal knowledge: Case of the cities of things living lab. In Applied Design Research in Living Labs and Other Experimental Learning and Innovation Environments (pp. 210-233). CRC Press.

Stompff, G., Jaśkiewicz, T., Nachtigall, T., & Smit, I. (2025). Design in the Real World: Facilitating Collective Learning through Design. In Applied Design Research (pp. 290-308). CRC Press.

NotebookLM can also create infographics now, but these are not as good. The visual references would not pass my creative direction 🙂

In the latest NADR publication, we wrote about the potential role of AI-image generation in Civic Prototyping:

Scaffolds of Imagination: Partnering with AI-image generation in Civic Prototyping (Tomasz Jaśkiewicz, Iskander Smit, Peter van Waart, Manon Mostert van der Sar, Evin Wijninga)

Categories
reflections

2026: The year social intelligence surfaces

In my piece for RIOT 2025 (June 2025), I argued that generative and agentic AI push us into something qualitatively different: intelligence no longer as an app, but as an atmosphere—infused into objects, services, and environments until the fabric of everyday life becomes conversational. The shift is not only from responsive tools to initiating systems, but from discrete interactions to a ubiquitous field of relations.

2025 was the pivotal year for agentic AI. Everyone declared it the year of agents—though the examples were interesting rather than transformative. Physical AI grabbed headlines at CES: humanoids and helper robots demonstrating cleaning tasks and household routines. Straightforward. Even dull. But this is the opening act.

The real shift happens when AI becomes embedded in everything we use. Every device with computing power, every service with a potential connection to an intelligence layer running beneath the surface. Call it the Upside Down, if you like—a realm that’s always there, occasionally bleeding through.

Three ways the atmosphere manifests

How does this intelligence layer relate to our lived world? I see three models emerging:

The Cloud. Everything intelligent draws from shared data and knowledge; intelligence operates at the system level. Individual actions connect to an ecosystem that almost lives on its own. Activities in the real world and in the cloud proceed as parallel processes—related but distinct.

The Digital Twin. A representation of reality built for understanding. Intelligence is made available to the actor’s individual profile in the real world. More sophisticated versions enable simulations and predictions based on aggregated profiles. The focus remains individual: what can this model tell me about my situation?

The Mirror World. Combining these two into something new. Not only does the individual actor apply profile information and predictive knowledge, but the social fabric of the lifeworld itself is mirrored. Relations, communities, collective patterns—all represented and capable of influencing action in return.

We’re early. It’s hard to predict how the mirror world will impact the lifeworld. But 2026 will give us the first real indications.

The individual trap

Notice that current AI development remains stuck in individual framing. Chatbots became primary access points for questions in 2025—Google pushing Gemini, OpenAI as poster child, Claude rising as the choice for users wanting something more personally attuned. Research shows people increasingly use these bots for personal questions, emotional feedback, and companionship.

But it’s always one person, one tool. Even in social contexts, the relationship is fundamentally individual.

The companion device market tried to extend this: Rabbit, Humane. Both failed—costs too high, friction too great. But intelligence is moving to the edge. Apple’s next-generation AirPods will matter here. Translation already works. Visual intelligence currently requires activation—take a picture, wait. That friction will disappear. The always-present companion is coming.

Still individual. Still personal optimization.

A collective turn

Here’s what shifts in 2026: we begin asking questions about collectivity, governance, and who sets the goals of the ecosystem.

In times of austerity—less money, fewer institutional supports—people turn to each other. Communities gain importance: local, ad hoc, interest-based, place-based. This isn’t speculation; it’s already happening.

Now add intelligent things to these communities. The sensors. The solar panels in your energy collective. The batteries. The shared vehicles. What happens when these become fellow community members?

Will we govern them collectively? Can we? How? What are the new forms of autonomy for actors?

In 2025, the introduction of agents and agentic operating services laid the foundation for collective working. This is now extending into physical space. The interactions between individual operating agents will be the defining factor in our day-to-day lives, and new negotiating agents will become the crucial nodes.

This is where social intelligence enters. Not intelligence based on individual knowledge—that’s the IQ model we’ve been building. Social intelligence is the EQ of the ecosystem: intelligence that emerges from and operates on social relations. The mirror world doesn’t just reflect individuals; it reflects how we’re connected. And if we can see those connections, we can begin to govern them.

Respecting externalities

A community optimizing for itself creates externalities—impacts outside its boundaries, costs borne by others. When your energy collective’s AI negotiates with the grid, whose interests does it serve? When shared vehicles optimize routes for members, what happens to non-members, to public space, to the commons? Social intelligence without social accountability is just collective selfishness with better tooling.

This is the tension that 2026 will surface but not resolve. We’ll see the first cases of communities incorporating intelligent things as participants. We’ll see experiments in collective governance of AI systems. And we’ll see the early conflicts over who’s inside, who’s outside, and who bears the costs.

Agentic social intelligence

We will start living alongside intelligent things as part of our social fabric, not as tools but as participants in collectives of humans and things. The atmosphere is forming. The mirror world is taking shape.

2026 is the year social intelligence surfaces, and new relations with our new fellow agentic actors will start to take shape. It will be a year of learning about new social structures and building an understanding of how to grow into a new living world.

I am looking forward to research through (speculative) design with Cities of Things and partners!


Let me know if you’d like to dive deeper, with an inspiring presentation with more extended examples and projections, or a speculative design workshop with your team or students.

Categories
reflections update

Systemic co-design with agentic engineers

Weekly reflection on human-AI-things collaborations

The shift isn’t from human coders to AI agents. It’s from coding to engineering.

Every, the company behind Lex (which I’m using right now), released four predictions for 2026. The one that stuck with me: the rise of the “agentic engineer.” A couple of months ago, I wrote about the Austrian developer Peter Steinberger, who described this exact reality—directing a team of AI agents rather than writing code himself.

But calling this “directing agents” undersells what’s happening. The agentic engineer doesn’t just orchestrate; they design the environment in which agents can work at their best capacity. They build their own tools, shape workflows, and define constraints. The craft moves upstream: from writing the code to engineering the context.

This connects to another of Every’s predictions: designers building their own tools. What triggered me in the Austrian developer’s account was how much of his work involved creating bespoke tooling for his own process. This is what happens when the friction between idea and prototype disappears—designers no longer depend on a coder to test their thinking. They build, they learn, they iterate. Two principles at work: understanding by doing, and eating your own dogfood.

And here’s where it gets interesting: if professionals are building their own tools, will end users follow? Is every product becoming less an interface to data and more a platform for making bespoke tools? Does a new layer emerge where users shape their own outcomes?

I think this points toward co-design—but a richer version than we usually mean. Not just designer and user collaborating, but multiple layers of expertise woven into the design process. The economist who maps value flows in a community. The ethicist who flags downstream effects. Stakeholders who traditionally appear only in the research phase are becoming part of the building itself.

In this framing, agents don’t just execute—they can represent these roles, simulate options, play out consequences before anything hits the real world. Agents as citizens. Not tools we use, but participants in how we design.


Within Cities of Things we have been looking into human-AI teams for a masterclass on designing these new teams, where this engineering was a key element. Building successful human-AI teams should focus on the relationships among the different human and non-human team members, not on task performance.


This triggered thought is part of my weekly newsletter:

Weeknotes 371 – The real shift is not from human coders to AI agents—it’s from coding to engineering the environment where agents are co-designers. And other news on AI companion devices and robots at CES.

https://www.iskandr.nl/systemic-co-design-with-agentic-engineers

Image: Interpretation by Midjourney

Categories
reflections

Annotating a future agentic thing from October news reflections

The weekly reflections on the news in human-AI-thing co-performances are brought to life via a story from the future of living with agentic things in the city.

A Day in the life: The Ginger Grid

I wake to a soft ripple of brass and bike chains—the “Bitter Bridge” profile our neighborhood brewed into its IPA last spring. The sound isn’t coming from a speaker in my room; it’s in the street outside, and the light through my blinds is tinted the amber that the profile taught the lamp posts to remember. Our block voted for this motif after the brewing co-op sent around tasting flights. Bitter, bright, a flash of citrus—somehow the flavor mapped to the sound of early commuters, and now it’s become how morning feels here.

At the corner café, the bartender pulls a ginger beer with the “Muddy Rhizome” tag. It’s the winter recipe: earthy heat, short finish. The tap handle has the NFC glyph that links to the profile’s social thread; I tap it, and my feed shows the neighboring district’s “Waxwing” IPA—a poet’s beer, crisp and clean. The city’s Ginger Grid aggregates these profiles across neighborhoods, and you can trace routes by flavor alone: follow Waxwing uptown, then switch to Rhizome to spiral back home.

Outside, the billboard across the tram stop murmurs awake as I arrive. It’s a national campaign from the brewery federation—a shimmering ribbon of foam, streetlights refracted in it, the slogan: “Drink Local, Link Global.” The panel senses our block’s Bitter Bridge profile and draws it into the spot like a warm sample, then lays it beside Waxwing. The ad isn’t static; it’s more like a tasting note for the city itself. A QR note invites us to tonight’s shared pour: three neighborhoods will stream their ambient profiles into the plaza and mingle them.

On the tram, I watch the line graph of today’s linkage emerge on my phone. The city’s soundscape is stitched by brew-tags, and every tag carries a governance token. When the ambience gets too intense, you can dial it down for your block. We learned that last month when a glitch introduced a deepfake saxophonist to our profile—someone had scraped an old video of the bandleader who passed away and layered his phrasing over our mornings. The effect was beautiful and wrong. The authenticity flags lit up red; the profile moderators sent a note: “We don’t let ghosts perform without consent.” The saxophonist’s daughter posted a response and asked if we’d like a studio outtake with his approval. We switched it in. The sound changed, softer now, but true.

At noon, I cross into Waxwing territory. The air cools; the streetlights shift to glacial white, and the shadows sharpen. People say this district’s profile makes you walk straighter. The local billboard is an evolving poem, lines rearranging themselves as passersby vote with their cups. A tiny badge in the corner guarantees “no synthetic faces” today. Their brewer says the poem is “our archive you can drink.” I sip the IPA and feel the neighborhood in the back of my throat.

By dusk, the shared pour starts. We stand in the plaza with paper cups, and three profiles thread together like a braid. Rhizome’s warmth spreads across the paving stones; Waxwing pitches the LEDs into a thin sheen; Bitter Bridge drifts in with a bicycle chorus and a slow brass line that feels like home. The billboard dissolves into a map of taste and sound—the day’s story of us. I spot my neighbor, Jas, who helped compile the original Bitter Bridge tag. She raises her cup and says, “It’s getting loud.” I scroll the governance slider a notch down—friction, not silence. The brass settles; the bikes keep weaving.

Later, the plaza lights dim to clear—no tint, just night. The profiles persist like aftertaste. I walk home through our amber glow, feeling the city’s linked conversation humming at street level, and think: we built this—flavor for language, sound for memory, light for belonging. The grid is national, the ginger is local, and the mix only works because it never overwhelms the thingness of the street or the faces within it.

Interpretation of the speculative thing by Midjourney

October Reflections: rebalancing local practice with global learning

The speculative thing that is the subject of the story above aims to embody the key themes I’ve been exploring in October via my newsletter. Claude explains the links between the story and the weekly columns.

The evolution of AI is reshaping our fundamental relationship with technology, transitioning from mere tools to companions to immersive presences embedded in physical reality. Throughout this transformation, crucial questions emerge about agency, transparency, and control—not simply whether AI might overtake humanity, but rather how we maintain our autonomy as we delegate increasing decisions to intelligent systems. As AI spans multiple forms of technological mediation—embodied, hermeneutic, alterity, and background relations—it creates unprecedented forms of human-technology interaction, potentially establishing a new “expressive relation” where our identities blend with AI capabilities. This evolution suggests that the future of human-AI collaboration depends less on functional capabilities and more on the quality of relationships we establish, particularly as intelligence becomes ubiquitous in our physical environment, understanding what we mean rather than just what we say, and actively participating in our daily lives in ways both visible and invisible.

Shaping a speculative thing

I took inspiration for the speculative object from the Design Fiction WorkKit:

As an object, the cards dictated a branded beverage, combined with a billboard advert as archetype, immersive as attributes, and a deepfake as action.

The speculative thing, based on the overarching theme, links different scales of operation and impact (national and local) by applying AI as a social, permeating infrastructure. In other words, a future where locally brewed ginger beers and IPAs inherit specific local characteristics, while these profiles become the linking pin connecting different neighborhoods in a wider (national or even global) network of communities. The profiles trigger certain synthetic elements of a neighborhood, such as sound and lights, resulting in immersive experiences that foster connection. The challenge is to balance the local character with the merged exchanges expressed in the synthetic experiences.

With this concept for a thing, I prompted ChatGPT to write a day-in-the-life.

Overarching Theme

Across the month, AI is framed less as a spectacle and more as a social, permeating infrastructure: alien-yet-familiar intelligences seep into tools, objects, and communities, demanding governance beyond corporate logics, embracing productive friction, and rebalancing local practice with global learning so that technology mirrors and matures our collective values rather than eroding them.

Connections: Weekly Thoughts to Day-in-the-Life

Week 362: From Immersive to Permeating: How AI Quietly Transforms Our Lives — The amber morning light and stitched city soundscape demonstrate permeating, ambient intelligence in “things” that keep their thingness, with humans staying in the loop to tune intensity and meaning.

Week 358: Alien intelligence shaping human connections — The deepfake saxophonist incident and consent process show “alien” intelligence mimicking human emotion, forcing the neighborhood to renegotiate trust and expectations in human connection.

Week 359: Friction of Creation for Social Dynamics of AI Tools — The brewing co-op votes, governance tokens, and “Drink Local, Link Global” billboard embody creation as co-performance with productive friction, balancing local making and global sharing.

Week 360: Making AI tools for reshaping personal realities — Tap-handle NFC glyphs, profile threads, and composable neighborhood tags reflect personal toolchains where imagined flavors and ambiences rapidly materialize, tightening the hyperstitional loop.

Week 361: Building a collaborative society with Fifth Order AI — Lamp posts, billboards, and street soundscapes carry agency and responsibilities through civic protocols, with community moderation ensuring the system mirrors collective values rather than solutionist control.

Categories
reflections

August Reflections: From Tools to Companions to Immersive Presences

The weekly reflections on the news in human-AI-thing co-performances are brought to life via a story from the future of living with agentic things in the city. This month, we go inside the home.

A Day with NutriSphere: The Ambient Kitchen

6:30 AM: Maya’s bedroom gradually brightens as her NutriSphere begins its morning routine. “Good morning, Maya,” comes the gentle voice from everywhere and nowhere. “Your cortisol levels are slightly elevated today, likely from yesterday’s presentation stress. I’ve adjusted your breakfast accordingly.”

As Maya enters the kitchen space, surfaces come alive with subtle illumination. The central island—a seamless blend of preparation area, cooking surface, and interactive display—shows her personalized nutrition dashboard. Three breakfast options appear, each calibrated to her current biological state: a turmeric-ginger smoothie bowl, adaptogenic oatmeal with stress-reducing ashwagandha, or eggs with specific micronutrient-rich vegetables.

“I’ll go with the oatmeal today,” Maya says, watching as the necessary ingredients emerge from modular storage units. The preparation surface highlights where to place each item, guiding her hands with gentle light cues. As she works, NutriSphere shares insights: “Your magnesium levels have been trending down this week. I’ve added a sprinkle of pumpkin seeds to help balance that.”

12:30 PM: Returning home for lunch, Maya notices NutriSphere has already sensed her approach through her wearable. The kitchen’s ambient lighting has shifted to blues and greens—calming colors that research shows can reduce overeating during stressful workdays.

“Your morning meeting ran longer than expected,” NutriSphere observes. “I’ve prepared three quick lunch options that will support your afternoon focus.” Maya chooses a nutrient-dense salad, appreciating how the system knows her tendency to choose convenience over nutrition when rushed.

As she eats, the kitchen island displays her weekly trend report. “You’ve been consistent with protein intake this week,” the system notes approvingly. “And your omega-3 balance has improved since last month.”

7:00 PM: Maya returns home with a friend for dinner. NutriSphere immediately detects the second person and adjusts. “I notice you have company,” it says discreetly through Maya’s earbuds rather than the room speakers. “Would you like me to suggest a shared meal plan or would you prefer privacy mode tonight?”

“Privacy mode, please,” Maya responds. The kitchen’s surveillance functions recede to background operations—still monitoring but not actively guiding. The space transforms from health-optimization center to social cooking environment.

Later, after her friend leaves, Maya reengages full functionality. “How did I do tonight?” she asks.

“Your wine intake was 30% above your weekly allocation,” NutriSphere responds without judgment. “I’ll adjust tomorrow’s recommendations to balance your antioxidant and hydration needs. Would you like me to prepare a hydration supplement for morning?”

10:30 PM: As Maya prepares for bed, NutriSphere sends a gentle notification: “Based on your sleep patterns this week and tomorrow’s exercise plan, I recommend the herbal sleep support tea I’ve prepared. It’s on your bedside table.”

Maya smiles, remembering when kitchens were just places with refrigerators and stoves, not integrated wellness companions. She takes the tea, knowing that NutriSphere has already analyzed her evening meal, stress levels, and activity patterns to formulate the perfect blend for restorative sleep.

As she drifts off, NutriSphere quietly begins its overnight analysis, updating her nutritional algorithms, cross-referencing her biological data with the latest research, and preparing to once again help her navigate the delicate balance between optimal health and human desire when morning comes.

This day in the life of Maya with NutriSphere was written by Claude 3.7 Sonnet via the Lex.page writing tool, prompted by me. Read below how that came about.

An impression of NutriSphere by Midjourney, as prompted by Claude.

August Reflections: From Tools to Companions to Immersive Presences

The speculative thing that is the subject of the story above aims to embody the key themes I’ve been exploring in August via my newsletter. Claude explains the links between the story and the weekly columns.

The evolution of AI is reshaping our fundamental relationship with technology, transitioning from mere tools to companions to immersive presences embedded in physical reality. Throughout this transformation, crucial questions emerge about agency, transparency, and control—not simply whether AI might overtake humanity, but rather how we maintain our autonomy as we delegate increasing decisions to intelligent systems. As AI spans multiple forms of technological mediation—embodied, hermeneutic, alterity, and background relations—it creates unprecedented forms of human-technology interaction, potentially establishing a new “expressive relation” where our identities blend with AI capabilities. This evolution suggests that the future of human-AI collaboration depends less on functional capabilities and more on the quality of relationships we establish, particularly as intelligence becomes ubiquitous in our physical environment, understanding what we mean rather than just what we say, and actively participating in our daily lives in ways both visible and invisible.

Shaping a speculative thing

I took inspiration for the speculative object from the Design Fiction WorkKit:

As an object, the cards dictated a kitchen appliance, combined with a trend report as archetype, supplements as attributes, and surveil as action.

I like to envision the kitchen as a whole as an appliance that serves us in building the best day of our life, every day again, by creating a specific meal plan and helping us prepare it, as a companion. As we know how weak we can be, the kitchen as a whole surveils us—not to gather data for external parties, but for ourselves, as a tool. What was once a trend in the decade of the quantified self, for data geeks and health nerds, is now standard practice in every kitchen. Or do we still have a kitchen? It becomes a place to compose our energetic resources for the day, balancing our nutrition over longer periods: an immersive environment that creates trend reports, which also make it possible for us to keep agency and divert from the offered meal plans, knowing that our companion will balance us out over time.

With this concept for a thing, I prompted Claude to write a day-in-the-life.

Based on this inspiration and triggered by four cards from a design fiction card deck, I created the concept for a speculative thing, wholesome kitchens as an appliance for a balanced diet. See the text. Can you create a day in the life of someone living in and with this kitchen appliance, including thinking of a catchy name for that ‘appliance’?

Connection to Weekly Columns

How does it relate to the four columns I wrote in August? I asked Claude again to make that connection:

Weeknotes 352 – Companion Paradox: NutriSphere embodies the companion paradox by providing relatable AI interactions while balancing transparency (showing its nutritional reasoning) with a seamless experience, addressing the tension between agency and delegation.

Weeknotes 353 – Technological Mediation: NutriSphere perfectly exemplifies all of Ihde’s technological relations simultaneously—embodied (extending Maya’s nutritional awareness), hermeneutic (translating biological data into actionable insights), alterity (direct interaction as a companion), and background (ambient monitoring)—while also creating that potential “expressive relation” where Maya’s identity blends with the system’s nutritional intelligence.

Weeknotes 354 – Relations Over Functions: The success of NutriSphere isn’t primarily in its functional capabilities but in the quality of relationship it builds with Maya—adapting to her preferences, respecting her social contexts, and developing a personalized understanding that evolves through continued interaction rather than just executing preprogrammed nutritional rules.

Weeknotes 355 – Mass Intelligence in Physical Reality: NutriSphere represents the ultimate expression of immersive AI by embedding intelligence throughout the physical kitchen environment—from illuminated surfaces that guide food preparation to ambient lighting that influences eating behaviors—creating an experience where intelligence is physically manifested at every touchpoint without requiring explicit conversational interaction.

Categories
reflections

July reflections: think with us: friction, agency, and the networked self

The weekly reflections on the news in human-AI co-performances are brought to life via a story from the future of living with agentic things in the city.

Morning, with headlines

Maya scrolls the morning headlines with her thumb, the screen casting a cool light across the kitchen table. War, elections, the price of olives. She pauses on an explainer about last night’s late-breaking vote, unsure whether to be relieved or more worried. The bowl waits without hurrying her. Steam from the kettle curls into the air; the apartment is quiet enough that she can hear the street waking up one floor below.

She takes the first spoonful without looking, and something tart—briny almost—cuts through the cream. For a second she’s back at the small restaurant around the corner: the citrusy pickle that arrived unannounced between courses, the waiter’s shrug when she asked what was in it, the way the table’s conversation lifted after that bite. She smiles into her phone and only then notices the numbers stamped on the empty packets by the sink. She’s pretty sure she didn’t reach for those yesterday. Maybe she did. Maybe the hand just knows its own mind in the morning.

The bowl is larger than it needs to be at first, like a shallow pool that invites her to wander. As she eats, it seems to settle closer, drawing the edges in until each spoonful finds its way with less searching. She doesn’t think about it doing anything; it simply meets her pace. Between articles she checks a message from Raj—three photos of shoes lined up by a classroom door and a note: “Tried 4-11-6-2. The kids demolished it.” She glances back at her own numbers and thinks of the pickle again.

Outside, a siren flickers and fades. Another headline insists on certainty; another argues for the opposite. She decides she doesn’t buy either. A berry catches at the back of her tongue, not sweetness but the suggestion of it, like someone about to tell a story they’re not sure they should tell. She reads one more paragraph and pushes the phone away. The last spoonful is different from the first—less showy, more composed. She can’t say why.

At lunch she passes a café window and notices a tray of pastries dusted with something red. Not her usual place, not her usual hour. She almost goes in, doesn’t, makes a note to remember. On the train home, a stranger’s ringtone chimes a melody she can’t place, and for a moment the taste of the morning returns—bright, then gone.

In the evening she rinses the bowl and leaves it to dry, its little analog dial resting at a tick-mark she hasn’t learned to name. The new box on the counter has a small sticker she hasn’t seen before—a subtle upward arrow next to one of the numbers. Trend, or coincidence, or a hundred breakfasts elsewhere tugging on her own? She turns the sticker between her fingers and thinks of the article about the vote, of the pickle, of choosing without knowing exactly why.

Before bed, Raj sends a voice note: the field trip was a success, his daughter fell asleep before the train left the station. “Same combo tomorrow?” he asks. Maya doesn’t answer. In the morning, she’ll see what her hand reaches for. The headlines will argue again; the bowl will wait. Somewhere between them, the day will find its flavor.

Prompt by GPT-5, Image by Midjourney

July reflections: think with us: friction, agency, and the networked self

The speculative thing in this day-in-the-life story aims to embody the key themes I’ve been exploring this July via my newsletter Target is New. Claude (via Lex.page) explains the links between the story and the weekly columns.

Human-AI Partnership vs. Dependency: You consistently explore the tension between AI as a collaborative thinking partner versus AI as a replacement for human thinking. The phrase “think with us, not for us” captures this perfectly.

Necessary Friction: You argue against seamless, frictionless AI experiences, suggesting that certain types of friction (like explicitly telling an AI what you’re seeing) preserve human agency and engagement.

The Unseen/Unsaid: You explore how AI might relate to what’s not explicitly stated in data – the spaces between, the unspoken, the void that carries meaning in human experience.

Agency and Control: Throughout all entries, you question who controls the AI’s intentions and how we maintain human agency in our relationships with intelligent systems.

Networked Intelligence: You touch on moving beyond the individual to understanding intelligence as relational and networked between humans, technologies, and environments.

Day-in-the-life connected to the posts

Why does the day-in-the-life connect to the five posts? I asked my AI buddy (Lex.page using GPT-5).

Week 347: Think with us, not for us — Essence: Agency-preserving guidance.

The bowl suggests pace and possibilities (numbers, portion, timing) but leaves the choosing and meaning-making to Maya; it thinks alongside her, not instead of her.

Week 348: Articulated frictions for engaged presence — Essence: Friction as focus.

The numbered ritual and flavor surprises require small, intentional acts; the bowl waits for her lead, keeping the human as guide and making the experience felt, not automated.

Week 349: A void AI might make tangible — Essence: Sensing the unsaid.

Taste triggers memory (the pickle, the restaurant) and mood without explicit prompts; the bowl surfaces the in‑between—associations, not instructions—making the “void” perceptible.

Week 350: Paradox of ambient intelligence — Essence: Probabilities, not proclamations.

The quiet dial and subtle “trend” arrow communicate soft confidence rather than certainty; differences between first and last spoonful model calibrated, transparent uncertainty.

Week 351: Humanity in AI co‑performance — Essence: Networked choice without surrender.

Raj’s message, the shared trends, and Maya’s morning hand show co‑performance: individual agency braided with a social signal, neither outsourced nor isolated.

Shaping a speculative thing

I took inspiration for the speculative object from the Design Fiction WorkKit:

A sensor thing related to breakfast, with a retro-chic feel and shrink/expand behavior. This is the prompt I gave my AI writing buddy to create the day-in-the-life story above:

A special breakfast bowl offers a variety of combinations between yoghurt and ingredients, along with different variations. The subscription service is delivering a possible mix for a week, which can end up in many different variations. The ingredients are packaged in neutral-designed boxes with only numbers. Every morning, you choose a set of numbers based on a short conversation and sensing with the service, which checks your mood and the people sitting at the table (lonely, with family, functional, relaxing, etc.). The mix is not made by a set script but is generated based on all the users around the world. Depending on the popularity, the breakfast bowl is more or less filled, which will serve as an additional data point.

The service is not making the easiest bowl, but the one that is delivering the best experience, which is defined by the friction it unlocks, and the stories along.

Research through Design Fiction

With Cities of Things, we aim to envision the impact of living together with things that have agency, are agentic, or are autonomous in near-future cities. We explored concepts such as the Collect|Connect hub as an engaging “AI”-neighbor to orchestrate and support community life in neighborhoods. With Wijkbot, we created a kit for civic prototyping, possible futures, and impacts of living with intelligent and autonomous objects in our cities. Read more on our activities.

Combining the capturing of knowledge with the creation of inspirational artifacts is typical of what Cities of Things likes to engage in, whether on our own initiative or through commissioned projects, student work, or workshops. Check out citiesofthings.org and hoodbot.net for examples, and reach out if you would like to know more.

This example is just a quick impression that opens up more questions than it answers, which is always the right start.

Categories
update

Agentic AI and the intensifying of immediacy relations

On 17 July, Cities of Things was invited by Catch A Fire Agency to organise, design, and moderate a speculative design workshop for one of their clients on the topic of Agentic AI and its relation to the addiction to immediacy.

The workshop was part of a full-day program for a group of marketing professionals, and the time was limited (1 hour), so the speculative design part could only touch the surface. It worked very well, however, to engage with the thoughts I shared on the topic in an opening presentation. The flow of the session was roughly as follows: a 5-minute welcome, a 15-minute inspirational talk on the topic, a 10-minute introduction to speculative design and scenarios, a 15-minute group-based engagement with the topic through concepting an agentic object in an immediacy situated context, and a 15-minute sharing and recap.

I created a fresh presentation that combines thinking on immersive AI in urban spaces with the culture of immediacy. I introduced a model, the “triad of immersion,” which generates a so-called superstimulus effect (SSE) by combining agentic AI, the cult of immediacy, and the physical environment as context. This SSE accelerates the wheel of addiction even further. Based on these insights, I created four scenarios for hybrid futures. The groups then conceived an agentic generative object to make this tangible.

The updated story on things with agency in relation to immediacy worked out nicely and resonated with my earlier research on predictive relations.

The use of speculative design, including the Design Fiction Work Kit and LEGO tools to express ideas, proved effective in structuring the workshop and delivering output, even though time was too short for fleshed-out prototypes.

I am happy to share more details on request. Email me if you are interested!

Below are a few snippets from the presentation.

Categories
reflections update

June reflections on Cities of Agentic Things

Since this year, I have been updating the Cities of Things Substack with a short speculative design exercise. In what has become a habit, I compose back-to-back with Claude (via Lex.page): I bring in my thoughts and create an object concept, and Claude writes a little day-in-the-life story about it and connects it back to the original thoughts.

You can subscribe to the Substack, but I have also added the story and reflections below:

Categories
update

Check out the new activities page

Next to our newsletter, organizing knowledge sharing, and commissioning research projects, we have developed two specific activities over the last few years to help understand, and to formulate insights and opinions on, our future co-living with agentic physical AI in our neighborhoods and cities.

Futuring and Civic Prototyping

Two types of activities to make sense of the future of Cities of Things, agentic things in the city, things as citizens:

– Cities of Things Futuring, a speculative workshop, adapted to context, audience, and knowledge needs. Generative Things, understanding future paths with physical AI.

– Robo-perspectives civic prototyping, generating knowledge and insights, connecting citizens, shaping policies. Using the WijkbotKit to understand the impact of neighborhood robo-things.

Read more on our dedicated page:

Categories
update

Updates Q1 Cities of Things

Let me share some updates from the last quarter at Cities of Things.

Newsletters

We started a fresh format for the Monthly newsletter, applying speculative design techniques to build a story on the near future living with agentic things in the city. The recipe is:

  • The weekly reflections of my personal weeknotes newsletter Target is New serve as sources of inspiration on the news of human-AI co-performances.
  • To frame the agentic thing, I use the card deck made by Near Future Laboratory (the Design Fiction Work Kit; I use the physical set, but there is also an online version), drawing four types of cards (Object, Archetype, Attribute, Action).
  • I speculate on an object situated in the city that (loosely) touches the four cards and the themes of the month.
  • I feed all of this to Claude via the writing tool Lex and ask it to build a story from the situated object and summarize the links with the reflections.
  • With Midjourney I create an illustrative image.
  • I do a final edit before publishing it via the Substack.

So you can read the stories:

Wijkbot / Hoodbot

Via the website of Wijkbot, we keep you updated on realized projects and collaborations.

  • A team of students completed their ideation app for workshops with the Wijkbot
  • Another team of students from The Hague University of Applied Sciences master’s programme Next Level Engineering completed their project.
  • We finished the updated version of the Wijkbot Kit, supported by the ESCall funding. It combines a more step-by-step approach for diving deeper into the dilemmas with the needed artifacts (more details to follow soon).
  • We wrote a proposal for an installation using Wijkbot as a learning experience, applying a combination of civic prototyping and thing-centered design: “Wijkbot Commons, Cohabiting with Autonomous Objects” (let me know if you would like to know more).

Publications/media

  • We are part of Publiekwijs Handboek, aimed at journalists who want to get in touch with people.
  • Wijkbot is also part of the Kennisportfolio Rijksoverheid as a fieldlab (Cities of Things LAB010 project).
  • Wijkbot was part of this publication: A living lab learning framework rooted in learning theories (https://doi.org/10.1016/j.eiar.2025.107894)
  • A new book chapter was written by Guido Stompff, Tomasz Jaskiewicz, and myself for the NADR publication, with Wijkbot as the inspiring case for design in the real world (we will keep you posted).

Knowledge partnering

  • We are a knowledge partner for the ThingsCon exhibition project “Generative Things,” which premiered at TH/NGS 2024 back in December and has two more events in April (14-15) and June (14), in collaboration with the Master Digital Design of Amsterdam UAS and Waag Futurelab.
  • Iskander is working as a researcher in residence exploring Civic Protocol Economies at the Civic Interaction Design research group of Martijn de Waal.
  • We are a partner in the Digitaal Sociaal Innovatielab of Rotterdam University of Applied Sciences.