(2025-10-13) Webb Cyborgs Vs Rooms Two Visions For The Future Of Computing
Matt Webb: Cyborgs vs rooms, two visions for the future of computing. Loosely I can see two visions for the future of how we interact with computers: cyborgs and rooms. The first is where the industry is going today; I’m more interested in the latter.
Cyborgs
Near-term, the cyborg vision means wearables.
Apple AirPods are cyborg enhancements: transparency mode helps you hear better.
Meta AI glasses augment you with better memory and the knowledge of the internet – you mutter your questions and the answer is returned in audio, side-loaded into your working memory. Cognitively this feels just like thinking hard to remember something.
Andy Clark’s Natural Born Cyborgs (2003) lays out why this is perfectly impedance-matched to how our brains work already.
Long term? I’ve joked before about a transcranial magnetic stimulation helmet that would walk my legs to work and this is the cyborg direction of travel: nootropics, CRISPR gene therapy, body modification and slicing open your fingertips to insert magnets for an electric field sixth sense.
When tech companies think about the Third Device – the mythical device that comes after the PC and the smartphone – this is what they reach for: the future of the personal computer is to turn the person into the computer.
Rooms
Contrast augmented users with augmented environments. Notably:
- Dynamicland (2018)
- Put-that-there (1980)
- Project Cybersyn (1971)
- SAGE (as previously discussed) (1958–) – the pinnacle of computing before the PC, group computing out of the Cold War.
Is it ubiquitous computing (ubicomp), in which computing power is embedded in everything around us, culminating in smart dust? Is it ambient computing, which also supposes that computing will be invisible? Or calm technology, which is more of a design stance: that computing must mesh appropriately with our cognitive systems instead of chasing attention?
So there’s no good word for this paradigm, which is why I call it simply room-scale: the scale at which I can act as a user.
I would put smart speakers in the room-scale/augmented environments bucket: Amazon Alexa, Google Home.
And robotics too. Roomba.
Rather than “cyborg”, I like sci-fi author Becky Chambers’ concept of somaforming (as previously discussed), the same concept but gentler.
Both cyborgs and rooms are decent North Stars for our collective computing futures, you know?
Both can be done in good ways and ugly ways. Both can make equal use of AI.
Personally I’m more interested in room-scale computing and where that goes. Multi-actor and multi-modal. We live in the real world and together with other people, that’s where computing should be too. Computers you can walk into… and walk away from.
It has been overlooked, I think.