
Blog – communities of collaborative AI

This post is a monthly update that I have sent to all subscribers of the Cities of Things newsletter.

Looking back at last month, two interesting articles deserve a closer look in relation to Cities of Things. They connect to some of its basic concepts and trigger deeper exploration that I will probably do later.

One of the core concepts that inspires Cities of Things is the relationship we as humans have with technology that gains more agency, makes its own decisions, and takes on more responsibility for those decisions: things that are becoming citizens. Things still represent systems of designed agency, often designed by organizations, but they might become learning creatures in their own right. We strive for a harmonious society in which we live together with these things, performing in co-performance with them towards a shared goal, drawing on each other's best characteristics.

Two articles last month introduced interesting notions on our future relations with technology, or AI as it is framed. The first article introduces cooperative AI: “Machines must learn to find common ground”. It looks at the perceived impact of AI aiming to take over certain tasks humans do now, and pleads to also look at it through the co-performance lens: “Instead, we could develop AI assistants that complement human intelligence and depend on us for tasks in which humans have a comparative advantage. As Stanford University radiologist Curtis Langlotz put it: ‘AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t.’”

The best way to leverage AI is not to see it as a faster data-synthesizing machine, but to approach it as a social system. The belief is that AI-AI cooperation will happen, and that it will work not only better but also more responsibly if the drivers are social and societal.

The power of these systems of systems is explored in another article that dives into the relation between artificial intelligence and time: artificial intelligence is really artificial time; superintelligence is superhistory. It is an interesting read that builds upon some of the same concepts as cooperative AI, such as the centaur, the human+AI marriage best presented in the AlphaGo design. The centaur works both ways, augmenting humans too. “You might not even need a personal computer at all. It might be sufficient to have been taught by computers. Your own brain might be sufficient to hold digested models suitable for inference.”

That we are entering a Centaur Age is taken as a given. The notion that there is a superhistory in these systems derives from the reasoning that AI accumulates many more experiences than humans do in a similar time frame. This not only delivers a different type of knowledge, it also influences the actions built on this knowledge, resulting in new forms for the things we know. “Autonomous driving algorithms already drive differently from humans because they are learning from dozens of cameras instead of just two forward-facing cameras. (…) It’s not one car that is learning to self-drive. It’s a hive mind of thousands of car-brains pooling raw data. A car Borg.”

“You’re not living as a witness to the rise of superintelligence. You’re living as an agent being augmented by supertime.” I find this especially interesting as it links to the thinking about predictive relations: knowledge derived from comparable events that occur in different places at different times, connected to the event happening now, the interaction of a human and a (connected) thing, influencing the characteristics and behavior of that thing. It is a centaur, but with extra knowledge.

Making sense of the interactions and building relations with things that predict introduces a new context of living, with things taking a different role. When we say that things will be citizens in our future cities, we mean that things are still different from human citizens, almost a different species. What if things become part of humans as centaurs, and vice versa? This is something to explore much more. What if AI-AI collaborations become part of community life?

I don’t have the answer here. These are concepts, however, that inspire thinking on Cities of Things, and especially on what the relations between humans and things as citizens will be. In that sense the difference between the lenses of networks and communities is relevant. It seems a popular frame; not a new insight really, but one that is gaining traction. In cities this is even more the case, and it fits the lens on cities as places built on social contracts. The importance of the community as a structural element is noted. It is a counter-reaction to the more technical social networks that promised to just connect you with everyone, but in the end built a system of dark connections and actions. Shaping social contracts goes beyond the humans and things forming city life in a state of co-performance.

DAOs, Decentralised Autonomous Organisations, are one of the possible concepts to connect to, to start from, and to find out how they might play a role. And do they need a community-based structure?

“DAOs can become a meta-layer on top of the idea exchanges of the world — a second home for those eager to make strides towards their goals, and know that the way to get there is not by oneself, but in a collective.” as noted by Julian Bleecker.

These, too, are based on social contracts. Cooperative AI that is based on common ground is an inspiring concept. AI-AI cooperation goes beyond human-AI relations, beyond co-performance. Especially in autonomous organizations, the right contracts are needed. It seems no coincidence that the article was published in Nature. Is it indeed part of our new nature? Or of the new city? And what is the impact on citizenship?