(2026-02-24) Kriss Childs Play
Sam Kriss: Child’s Play. The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all... Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.
there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes:
hi my name is roy
i got kicked out of school for cheating.
buy my cheating tool
cluely.com
Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial.
The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about.
What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. (k-shaped)
Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless.
The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things.
Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.
Roy Lee’s personal mythology is now firmly established. At the beginning of 2025, he was an undergraduate at Columbia, where he, like most of his fellow students, was using AI to do essentially all his work for him. He wasn’t there to learn; he was there to find someone to co-found a startup with. That person ended up being an engineering student named Neel Shanmugam, who tends to hover in the background of every article about Cluely. The startup they founded was called Interview Coder, and it was a tool for cheating on LeetCode.
The university suspended him for a year. He dropped out, started an upgraded version of Interview Coder dubbed Cluely, and moved to San Francisco to begin raking in tens of millions of dollars in venture-capital funding.
The startup’s mainstream breakthrough was a viral ad that showed Roy using a pair of speculative Cluely-enabled glasses (HUD) on a blind date.
The future they seem to envisage is one in which people don’t really do anything at all, except follow the instructions given to them by machines.
A significant part of working at Cluely seemed to involve dressing up as cartoon characters for viral videos.
Roy has a kind of idol status within the company, but he’s aware that a lot of people instinctively take against him: “I’d say about eighty percent of the time, people do not like me.”
Roy does talk a lot, but there’s also something mildly unnerving about the way he talks. Everything he says is very precise and direct. He doesn’t um or ah. He doesn’t take time to think things over. Zero latency... I asked him whether he’d ever tried modifying the way he interacts with people to see whether they would dislike him less. “Very unnatural to me,” he said. “I just say it’s not worth it.”
He was just glad to be among people. Roy had initially been offered a place at Harvard University, but the offer was rescinded. He hadn’t told them about a suspension in high school. This presented Roy’s family with a problem: His parents ran a college-prep agency that promised to help children get into elite schools like Harvard. It would not look good if their own son was conspicuously not at Harvard. So Roy spent the entirety of the next year at home.
Starting a company had been Roy’s sole ambition in life from early childhood. “I knew since the moment I gained consciousness that I would go start a company one day,” he told me. In elementary school in Georgia, he made money reselling Pokémon cards.
The dream of starting his own company was the dream of total control.
Roy has little patience for any kind of difficulty. He wants to be able to do anything, and to do it easily
As a child, he loved reading—Harry Potter, Percy Jackson—until he turned eight. “My mom tried to put me on classical books and I couldn’t understand, like, the bullshit Huckleberry, whatever fuck bullshit, and it made me bored.”
He didn’t see anything valuable in overcoming adversity. Would he, for instance, take a pill that meant he would be in perfect shape forever without having to set foot in the gym? “Yes, of course.”
I found it strange that Roy couldn’t see the glaring contradiction in his own project. Here was someone who reacted very violently to anyone who tried to tell him what to do. At the same time, his grand contribution to the world was a piece of software that told people what to do.
There’s a short story by Scott Alexander called “The Whispering Earring,” in which he describes a mystical piece of jewelry buried deep in “the treasure-vaults of Til Iosophrang.” The whispering earring is a little topaz gem that speaks to you. Its advice always begins with the words “Better for you if you . . . ,” and its advice is never wrong.
“The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family,” writes Alexander. After you die, the priests preparing your body for burial usually find that your brain has almost entirely rotted away, except for the parts associated with reflexive action. The first time you dangle the earring near your ear, it whispers: “Better for you if you take me off.”
Alexander is one of the leading proponents of rationalism.
Rationalists believe that the way most people understand the world is hopelessly muddled, and that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch. The method they landed on for rebuilding all of human knowledge is Bayes’ theorem.
In the mid-Aughts, armed with the theorem, the rationalists discovered that humanity is in jeopardy of a rogue superintelligent AI (ASI) wiping out all life on the planet (p-doom). This has been their overriding concern ever since.
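For readers who haven't seen it, the Bayes' theorem the rationalists build on is just a rule for revising a probability in light of evidence. A minimal sketch (the numbers below are an invented toy example, not anything from the article):

```python
def bayes_update(prior: float, likelihood: float, evidence_rate: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_rate

# Toy example: hypothesis held with prior 1%, evidence that shows up
# 90% of the time if the hypothesis is true and 5% of the time if not.
prior = 0.01
likelihood = 0.9                            # P(E | H)
evidence_rate = 0.9 * 0.01 + 0.05 * 0.99    # P(E) by total probability
posterior = bayes_update(prior, likelihood, evidence_rate)
print(round(posterior, 3))  # the 1% prior rises to roughly 15%
```

The point of the doctrine is that every belief is supposed to be held and revised this way, as an explicit probability.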
The most comprehensive outline of this scenario is “AI 2027,” a report authored by Alexander and four others.
Not long before I arrived in the Bay Area, I’d been involved in a minor but intense dispute with the rationalist community over a piece of fiction I’d written that I’d failed to properly label as fiction.... ended up turning into an invitation for Friday night dinner at Valinor, Alexander’s former group home in Oakland, named for a realm in the Lord of the Rings books. (Rationalists, like termites, live in eusocial mounds.)
Alexander is a titanic figure in this scene... He would probably have a very easy time starting a suicide cult. In person, though, he’s almost comically gentle.
Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section. “Everybody who started AI companies between, like, 2009 and 2019 was basically thinking, I want to do this superintelligence thing, and coming out of our milieu. Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.”
But that race seems to have stalled, at least for the moment. As Alexander predicted in “AI 2027,” OpenAI did release a major new model in 2025; unlike in his forecast, it’s been a damp squib. (that seems like a mid-2025 statement)
“Humans are great at agency and terrible at book learning,” Alexander told me. “Lizards have agency. We got the agency with the lizard brain. We only got book learning recently. The AIs are the opposite.” He still thinks it’s only a matter of time before they catch up. “If you were to ask an AI how should the world’s savviest businessman respond to this circumstance, they could create a good guess. Yet somehow they can’t even run a vending machine. They have the hard part. They just need the easy part that lizards can do. Surely somebody can figure out how to do this lizard thing and then everything else will fall very quickly.”
But are humans really so great at exhibiting agency? After all, Cluely managed to raise tens of millions of dollars with a product that promises to take decision-making out of our hands (stupid frame)
AI can’t function without instructions from humans, but an increasing number of humans seem incapable of functioning without AI
For Alexander, this is a kind of Sartrean mauvaise foi. “It’s terrifying to ask someone out,” he said. “What you want is to have the dating market site that tells you that algorithmically you’ve been matched with this person, and then magically you have permission to talk to them. I think there’s something similar going on here with AI. Many of these people are smart enough that they could answer their own questions, but they want someone else to do it, because then they don’t have to have this terrifying encounter with their own humanity.”
His best-case scenario for AI is essentially the antithesis of Roy’s: superintelligence that will actively refuse to give us everything we want, for the sake of preserving our humanity. “If we ever get AI that is strong enough to basically be God and solve all of our problems, it will need to use the same techniques that the actual God uses in terms of maintaining some distance.”
But until we build an all-powerful but distant God, the agency problem remains. AIs are not capable of directing themselves; most people aren’t either. According to Alexander, Silicon Valley venture capitalists are now in a furious search for the few people who are. “VCs will throw money at a startup that looks like it can corner the market, even if they can’t code. Once they have money, they can hire competent engineers; it’s trivially easy for anything that’s not frontier tech. They’re willing to stake a lot of money on the one in a hundred people who are high-agency and economically viable.”
This shift has had a distorting effect on his own social milieu: “There’s an intense pressure to be an unusual person who will be unique and get the funding.” Since rationalists are already fairly unusual, it’s hard to imagine what that would look like. People will endure a lot of indignity to avoid being left behind without VC money when the great bifurcation takes place. Nobody wants to be part of the permanent underclass. I asked Alexander whether he thought of himself as highly agentic. “No, I don’t,” he said instantly. He told me that in his personal life, he felt as though he’d never once actually made a decision. But, he said, “It seems to be going well.”
Eric Zhu might be the most highly agentic person I’ve ever met.
When I dropped in on his office, which also serves as a biomedical lab and film studio, he had just turned eighteen.
“I saw this Wall Street Journal article where a lot of PE firms were buying up a lot of small businesses and roll-ups. I was like, What if I figure out a way to underwrite these small businesses?” Eric built an AI-powered tool to assign value to local companies on the basis of publicly available demographic data. Clients wanted to take calls during work hours, so he would speak to them from his school bathroom.
Next, he built his own venture-capital fund, managing $20 million.
Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire. And in a sense, it is easy. Absolutely anyone could have done the same things he did.
In a way, Eric reminded me of some of the great scammers of the 2010s.
Most people are condemned to trudge along in the furrow that the world has dug for them, but a few deranged dreamers really can wish themselves into whatever life they want.
Unlike Roy, Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.” So how come most people don’t? “I got really lucky. I met the right people at the right time.” Anyway, Eric isn’t involved with the underwriting firm or the venture-capital fund anymore. His new company is called Sperm Racing.
His venture seemed to be of a piece with a general trend toward obsessive masculine self-optimization à la RFK Jr. and Andrew Huberman. Still, to me it seemed obvious that Eric was doing it simply because he was amazed that he could.
Like Cluely, Sperm Racing seemed to be first and foremost a social-media hype machine. As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.
On August 5, 2025, OpenAI’s CEO, Sam Altman, posted on X, “we have a lot of new stuff for you over the next few days! something big-but-small today. and then a big upgrade later this week.” An X user calling himself Donald Boat replied, “Can you send me $1500 so I can buy a gaming computer.”
That last one did the trick. “ok this was funny,” Altman replied. “send me your address and ill send you a 5090.”
This was the beginning of Donald Boat’s reign of terror. He began publicly demanding things from every major figure in the tech industry... He was becoming a kind of online folk hero, expropriating the expropriators, conjuring trivial things from tech barons in the way they seemed to have conjured enormous piles of money out of thin air. He started posting strange, gnomic messages. Things like “I am building a mechanical monstrosity that will bring about the end of history.” Images of the fasting, emaciated Buddha. A prominent crypto influencer who goes by the alias Ansem received an image of the dharmachakra. “Turn the wheel,” read Donald Boat’s message.
Somehow he’d managed to do it without ever once having to create a B2B app. He was a kind of pure viral phenomenon.
He wanted to meet at a Cheesecake Factory. This was part of his new project, which was to review absolutely everything that exists in the universe. He was starting with chain restaurants. He’d already done Olive Garden.
Donald was twenty-one, terrifyingly tall, and intense. His head lolled from side to side as he chattered away, jumping from one thought to the next according to a pattern known only to himself. At one point he suddenly decided to draw a portrait of me, which he later scanned and turned into a bespoke business card.
He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers.
“I wasn’t thinking too hard about it. I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on. They have no swag, no smoke, no motion, no hoes. That’s all you need to know.” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.
I told Donald the theory I’d been nursing—that he and Roy Lee were, in some sense, secret twins, viral phenomena gobbling up money and attention. (uh, like a classic celebrity?) I wasn’t sure if he’d like this. But to my surprise, he agreed. “I’m like Roy. I’m like Donald Trump. We have the same swaggering energy. There is a kind of source code underlying reality, and this is what we understand. Your words have to have wings. Roy and I both know that social media is the last remaining outlet for self-creation and artistry. That’s what you have to understand about zoomers: we’re agents of chaos. We want to destroy the whole world.” Did Donald consider himself to be highly agentic? “We need to ban the word ‘agency.’ I’m a dog.”
there were still some things I didn’t understand about Roy. He was clearly a highly agentic person, but what was all this agency being used for? What did he actually want?... According to Roy, he has three great aims in life: “To hang out with friends, to do something meaningful, and to go on lots of dates.” He said he went on a date every two weeks, which was clearly meant to be an impressive figure.
For Roy, meanwhile, dating actually seemed to be a means to an end. “All the culture here is downstream of my belief that human beings are driven by biological desires. We have a pull-up bar and we go to the gym and we talk about dating, because nothing motivates people more than getting laid.” He was interested in physical beauty too, but only because “the better you look, the better you are as an entrepreneur. It’s all connected and beauty is everything. A lot of ugly men are just losers. The point of looking good is that society will reward you for that.”
The two possible functions of music were, apparently, focus and hype. Everything for the higher goal of building a successful startup.
The last time Donald had dropped in on his slaves at Cluely, he’d gifted them two Penguin Classics... He suggested that Roy might find something more valuable than dying for Cluely if he actually tried to read them. Roy disagreed: “I do not obtain value from reading books.” And anyway, he didn’t have the time. He was too busy keeping up with viral trends on TikTok.
Donald was practically vibrating when we left Cluely. “Dude, he’s just a scared little boy,” he said. “He’s scared he’s not doing the right thing, and because of the fucked-up world we live in, people who should be in The Hague are giving him twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen.” He sighed. “I just want Zohran’s nonbinary praetorians to march across the country and put all these guys in cuffs.”
Unlike Eric Zhu or Donald Boat, Roy didn’t really seem to have anything in his life except his own sense of agency. Everything was a means to an end, a way of fortifying his ability to do whatever he wanted in the world. But there was a great sucking void where the end ought to be. All he wanted, he’d said, was to hang out with his friends. I believed him. He wanted not to be alone, the way he’d been alone for a year after having his offer of admission rescinded by Harvard. For people to pay attention to him. To exist for other people.