By Liu Mo | Edited by Jin Zha | Originally published in Chinese by Renwu (人物) on March 31, 2026

Spring 2026 has been unusually kind to Kimi.

In just a few months, the company behind Kimi seemed to hit one milestone after another. Its revenue, fundraising, and valuation all kept breaking records. A research paper co-authored by a 17-year-old high school intern received praise from Silicon Valley figures including Elon Musk. And Cursor, the U.S. coding startup valued at around $50 billion, was accused by Chinese observers of essentially “wrapping” or heavily relying on Kimi’s model as part of its own product experience. In other words, Kimi suddenly seemed to be winning on all three fronts at once: capital, technology, and commercial traction.

This startup is only three years old. Its valuation has already surpassed RMB 120 billion, or roughly $16 billion. It is becoming impossible to ignore in the global AI story.

And yet Moonshot AI, the company behind Kimi, remains deeply mysterious.

I was given permission to spend 100 hours observing the company from the inside. As an independent writer, I was allowed to interview any employee willing to talk, sit in on any meeting that did not involve trade secrets, and write freely afterward. No one would edit my work. I would not be paid. That, it turns out, is very much in character for this company.

Inside the office, it feels like standing in the eye of a storm.

At the center, everything is strangely still. The desks are quiet. Only scattered keyboard sounds break the silence. Occasionally you hear someone laugh. But the noise outside, the rumors, arguments, hype, imitation, and endless commentary, seems to leave no trace here.

There are just over 300 employees. Their average age is under 30. Each person, if you divide the company valuation by headcount, is effectively carrying close to RMB 400 million in enterprise value on their shoulders.

About 80% of the staff are what Chinese internet slang calls “I people,” meaning introverts, borrowing from MBTI language. People sit side by side, but they are more comfortable typing than talking. Here, introversion is not treated as a flaw. It is almost an operating protocol.

I thought back to my first visit in 2024, on a night when the storm was only beginning to gather. At the time, I did not come away with a particularly positive first impression.

“DeepSeek saved us”


The night of December 24, 2024, was Christmas Eve, though for most people in China it was not a holiday that mattered much. For Julian, it became one of the darkest nights of her life.

She was 26, had graduated from Peking University only two years earlier, and had no prior industry experience. Yet she was already one of the earliest employees at Kimi. That night, this very young yet already “senior” employee sat at the long table in a conference room called Radiohead, crying in front of more than 30 colleagues.

She still had not delivered a holiday marketing plan that met the standards of the co-founders.

Chinese New Year was only a month away. The latest plan had already been revised six times, and now it needed to be upgraded again, perhaps even scrapped entirely. The odds of rebuilding it from scratch and then coordinating product and engineering to execute it in time were slim. But the company had high hopes for growth during the 2025 Lunar New Year period.

That mattered because the previous Lunar New Year had been a breakthrough moment for Kimi. It had gone viral in China thanks to its branding around handling “2 million Chinese characters of long-context input,” which was unusually advanced at the time. Consumer users surged, and in the Chinese stock market people even started talking about “Kimi concept stocks,” meaning public companies loosely associated with the trend.

That weekly meeting was long and brutal.

Around 20 young employees, most as inexperienced as Julian, took turns reporting on everything: social media ads, user operations, PR in China, overseas marketing, all the details. The group discussed everything collectively, and the co-founders made the final calls.

Kimi at that point felt like an adolescent: talented, full of potential, but not yet fully in control of itself. Even with a monthly advertising budget of tens of millions of RMB, it still looked clumsy in the face of fast-rising competitors.

The meeting ended around 4 a.m.

No one knows whether Julian’s final plan would have succeeded. A month later, it no longer mattered.

That was when the world first heard the name DeepSeek.

Hayley, who worked on growth, went home to Wenzhou for the holiday and found that relatives and friends all asked the same question: “Have you heard of DeepSeek?” It was as if Kimi had suddenly become yesterday’s news.

She says that was the hardest Lunar New Year of her life. The silence inside the company was deafening.

The annual company meeting is usually held in March, after the holiday. Employees are allowed to challenge management directly. That year, almost every question revolved around DeepSeek.

The sharpest question came from the HR team. With complete sincerity, they said the uncomfortable thing out loud:

“How are we supposed to answer candidates when they ask: DeepSeek also gave me an offer. Why should I join Kimi instead?”

But not everyone reacted the same way.

Alex from the algorithm team says that if he felt any strong emotion during the “DeepSeek moment,” it was not fear. It was excitement.

That feeling was not just personal. It reflected the mood of much of the algorithm team. DeepSeek had shown that there might be another way: lower-cost strategies and open-source approaches. It had also proved something many people had doubted before: a little-known Chinese startup, if its technology was strong enough and its model was good enough, could still earn global respect.

The product team was not especially anxious either. Kevin, one of the earliest product employees, believed that DeepSeek had broken out because of its model. Once Kimi’s own model capabilities caught up, he believed the product team would have even more room to build useful features on top.

No outsider knows exactly what discussions the co-founders had. But the company moved quickly. It adjusted strategy, narrowed focus, and reached something close to full internal alignment.

Ask almost anyone inside the company what matters most now, and they will answer without hesitation: the model.

From then on, you could feel a growing respect for DeepSeek inside Kimi. Part of it was professional admiration. Part of it was something else.

As Alex put it:

“In a way, DeepSeek saved us.”

Taste is all you need

“Why are you wearing shoes like that?”

After Ezra asked me that, I was more surprised than she was. On her floor of the office, almost everyone keeps a pair of slippers under the desk. Comfortable clothes and shoes, people believe, make you more relaxed, more focused, and more creative.

This is the dress code of smart people.

I have met many high-achieving students in my life. But the “good students” here are a very different species.

When Ezra was in elementary school, she tried to hack the family computer because her parents would not tell her the password. In middle school she became interested in Bitcoin, when one coin cost only a few hundred RMB. She asked her mother for spending money to invest; her mother told her it was a scam. In high school, the first time she ever took a taxi, she sketched out a ride-hailing product concept. Had today’s AI tools existed back then, she says, maybe she could have launched it. Once she finally had some money of her own in college, she put it into the Chinese stock market and lost 90%.

That disaster taught her something about the limits of human judgment, and pushed her toward AI.

Her view of AGI, or artificial general intelligence, is simple: create “N Einsteins” and use them to solve humanity’s hardest problems. From that point on, she became determined to find a company that would truly push the limits of AGI, even though by then she had already made back her investment losses in the stock market.

Because of her strong academic background, she received offers from many companies. She chose Kimi for one reason: during the interview, she was deeply impressed by founder Yang Zhilin’s understanding of technology and his seriousness about details. She felt he genuinely cared about models. He did not have the restlessness often seen in smart people, nor the utilitarian instinct common in businesspeople. In fact, by the end of the interview, she still did not know he was the founder.

Karen’s personality is different but leads to a similar place.

He was rebellious from childhood. He argued with teachers. He never listened to his parents. As a student, he insisted on going abroad. After graduating, he insisted on starting a business. The comfortable and stable life offered by a big Chinese tech company made him despair. He did not want a life whose ending was visible from the beginning.

I asked him: if given the choice between a guaranteed 60 out of 100, and a 1% chance at 100 out of 100, which would you choose?

He chose the latter without hesitation.

It was not that he could not tolerate a score of 60. He just hated the certainty of that 100% path.

That founder-like DNA forms part of the company’s underlying texture. By rough internal count, at least 50 people at Moonshot AI have founded or joined startups before.

Kimi, apparently, likes hiring CEOs.

A more accurate way to put it is this: the company shelters a rotating population of gifted drifters. A genius is not necessarily a top student or model employee. What matters is that in some dimension, they can see through time.

At a company where around 80% of employees come from China’s elite “985” and “211” universities, Yannis’s résumé does not look especially impressive. Yet as early as 2023, he had already predicted in engineering communities that both DeepSeek and Kimi would rise, at a time when model companies barely had products at all. Another employee, himself born after 2000, noticed Yannis’s insight and recommended him internally.

Karen says too many smart people get trapped by systems. First the family, then school, then the workplace. They obey group expectations without realizing it and lose sight of what they actually want. Only a small number try to escape, and even they often go unseen.

One of Kimi’s missions, he says, is to see them.

Without that instinct, a 17-year-old high school student would never have been brought in as a Kimi intern, collaborated with the team, and published a paper that later drew praise from Elon Musk. The person who put that student’s name first on the paper was Bob, the mentor who first spotted him.

There is only a thin line between genius and madness. When an “ununderstood madman” arrives at Moonshot AI, he may suddenly become a world-changing genius. Or perhaps some still-hidden genius can only truly bloom in a place like this.

Bob told me that, to some extent, having a big ego is not a problem. It may even be a good sign. If that ego functions as inner drive, if someone believes they must be part of a great mission, that may be exactly the sort of person the company cannot afford to miss.

Geniuses are obsessive.

Inside this team, training a top AI model is jokingly called “alchemy,” a common Chinese tech term for the mysterious, half-scientific, half-artistic process of model training. But in practice, alchemy means constantly fixing bugs.

Once a flagship training run begins, Bob and his teammates fall into the same ritual. The first thing they do every morning is refresh the company’s massive set of internal monitoring dashboards. Hundreds of thousands of metrics. If even one curve spikes abnormally, alarms go off in their heads. Was there a problem in optimization? A flaw in the architecture? A mismatch in numerical precision?

They react with almost animal sensitivity.

Some people even inspect training data token by token, printing out those that produced extreme gradients and interrogating them like suspects: why did you jump so violently?

Everyone who has ever truly participated in “delivering” one of these models has lived through this kind of sleepless tension. It is not really anxiety. It is curiosity driving obsession. That obsessive vigilance is part of what pushed the model toward top-tier performance.

Geniuses cluster.

Over the past year, more than 100 of Kimi’s hires came through referrals: friends, or friends of friends. Inside the company, this is jokingly called “human-to-human transmission.”

Trust, because of these dense networks, becomes a natural organizational asset.

In essence, Kimi shifts the hardest part of management onto recruiting. If people are brought in by trusted peers, they are more likely to share the same instincts. This is why one word comes up over and over inside the company:

Taste.

One night in September 2025, several engineers casually launched a small internal project and named it Ensoul. They wanted code sleeping inside files to “come alive” and become a conversational assistant inside the command line.

This sensitivity to naming is not accidental.

They once had a framework called YAMAHA, short for “Yet Another Moonshot Agent.” Their deepest infrastructure layer was called Kosong, which means “empty” in Malay, inspired by the Buddhist phrase “emptiness is form.” It was meant to suggest a blank sheet of paper with no pre-assigned function, but infinite potential.

Taste, in other words, shapes the product itself.

While many other companies were shoving chat windows into the command line, Kimi’s engineers thought that was ugly. Real programmers open a terminal to issue commands, not to chat. So Kimi CLI was designed to feel more like a smart shell than a chat interface. It understands commands, but does not force itself into the shape of a conversation box.

This minimalism is visible in the code too. The core logic is only about 400 lines of Python, stripped of all unnecessary ornament. The modules are cleanly decoupled. Users can customize functions themselves, or take Kimi apart and reassemble it into their own applications.
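As an illustration only, none of this is Kimi CLI’s actual code and every name here is invented, a “smart shell” of the kind described, one that runs real commands as commands and routes everything else to a model, can be sketched in a few lines of Python:

```python
import shlex
import shutil
import subprocess

def looks_like_command(line: str) -> bool:
    """Heuristic: treat the line as a shell command if its first word
    is an executable on PATH; otherwise assume natural language."""
    try:
        head = shlex.split(line)[0]
    except (ValueError, IndexError):
        return False
    return shutil.which(head) is not None

def dispatch(line: str, ask_model) -> str:
    """Run commands directly; hand everything else to a model callable."""
    if looks_like_command(line):
        result = subprocess.run(line, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr
    return ask_model(line)
```

A real implementation would stream output and keep conversational state, but the design point survives even at this scale: the terminal stays a terminal, and the chat box never appears.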

Even Kimi Agent was once internally associated with the phrase OK Computer, a Radiohead reference, though that name was later changed because it was too obscure for wider adoption. The people who chose names like that did not seem especially interested in maximizing internet traffic. They obeyed their own musical taste and linguistic standards instead.

Someone joked that if you measured AI companies by the share of employees who play musical instruments, Kimi might rank first.

Taste has become the highest hiring standard, and also the hardest to define.

It cannot be quantified, but it is everywhere.

Generalize, then evolve

You may never fully understand what each person at Kimi actually does.

The company likes using the word “team” instead of department. At a high level, the main areas are clear enough: algorithms, product and engineering, growth, strategy, operations. But once you try to zoom in and map actual departments or fixed responsibilities, things start to blur.

That is because this is an organization with no formal departments, no hierarchy, no titles, no OKRs, and no KPIs. Reporting lines are so simple that they feel almost unreal.

For Brandon, this made no sense at all.

He had studied at Tsinghua, held management roles at Silicon Valley giants and major Chinese tech firms, and helped build a startup worth around $1 billion. He had spent years in the industry and excelled at technical management. He had led teams of nearly 1,000 people. He hoped to enter AI and apply that experience at scale.

Instead, co-founder Zhang Yutong told him that the company did not work that way. The number of people he would likely manage, if he joined, was about two.

Still, something about the future pulled him in, and he wanted one more conversation.

So in January 2025, during a period of internal doubt and unrest, Brandon met Yang Zhilin, his younger schoolmate from Tsinghua.

At the time, Brandon had no idea that Yang’s name would eventually be mentioned in media stories alongside Elon Musk and Jensen Huang. What he remembers most is the very first sentence Yang said after basic greetings:

“Reinforcement learning is the future.”

The rest of the conversation felt almost like Yang thinking out loud. He was so immersed in his own line of thought that Brandon could not understand much of what he was saying, even though it was all in Chinese.

But one thing was unmistakable: for the first time, Brandon felt the knowledge structure and mental models he had built over the past 20 years starting to collapse. Along with them went his ego.

When I asked why he eventually joined, he replied in a slightly mysterious tone: Yang Zhilin might become a great prophet, because he is both far-sighted and pure.

Later, when the company hesitated because it did not really know how to define his role in such a title-light system, Brandon replied firmly:

“Even if you make me clean toilets, I’ll come. And I’ll clean them better than anyone.”

Not every former big-tech manager or expert thrives in this environment.

Phoebe, born after 2000, moved from the growth team into product and engineering. She describes herself jokingly as “a clueless little girl,” but says something important: in this company, deep experience and strong credentials can actually become a burden.

AI is too new. The field is changing too fast. A highly experienced expert may not learn and adapt as fast as a younger person with fewer assumptions.

She has seen at least three mid-level or senior big-tech hires fail to “land” after joining. One eventually chose to leave the industry altogether, saying the people around him were just too young and too smart. After being repeatedly outperformed, he gave up. This, he decided, was no longer his era or his industry.

After the DeepSeek shock, Phoebe also felt a deep sense of crisis. She decided to abandon ad-buying work and instead try to help the company through product and engineering. She began an intense period of self-study, even streaming herself learning on Bilibili for hundreds of hours.

What surprised her most was that the company, from the start, gave her the chance to switch roles without much hesitation.

In fact, among the thirty employees I interviewed, more than half had changed responsibilities multiple times. Compared with their previous jobs, perhaps 80% were now doing something completely different.

Kimi likes people with generalization ability.

In AI, generalization means a model can perform well in new scenarios beyond its training data. It has not merely memorized answers; it has learned underlying structures.
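In a deliberately tiny toy example, my own illustration and unrelated to any real Kimi system, the difference between memorizing and generalizing looks like this:

```python
# Two toy "models" for the mapping y = 2x: one memorizes, one learns the rule.
train = {1: 2, 2: 4, 3: 6}

def memorizer(x):
    # Perfect on the training data, useless beyond it.
    return train.get(x)

def rule_learner(x):
    # Has captured the underlying structure (y = 2x), so it transfers.
    return 2 * x

assert memorizer(2) == rule_learner(2) == 4  # both fit the training data
assert memorizer(10) is None                 # the memorizer fails on new input
assert rule_learner(10) == 20                # generalization
```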

The company applies this idea to people too.

Mid-level and senior employees from giant firms may have spent too long optimizing for a particular KPI system, a particular reporting language, a particular internal political game. Their “algorithm” becomes overfit to one local optimum. When the environment changes completely, they may fail to adapt.

If traditional big-tech workers are like specialized models, then the people Moonshot AI wants are more like base models. First they learn basic rules through supervised fine-tuning. Then, through reinforcement learning and repeated self-play across many tasks, they acquire the ability to transfer across domains.

James, a returnee from Silicon Valley, is 26 and says his dream is “to give money to young people.”

As a devout believer in AI, he sees his own body as little more than a sensor for an agent to collect information. When playing League of Legends with friends, he records voice and collects physiological data like heart rate and pulse, then analyzes which teammate’s comments affected his emotional state and game performance.

His views are so sharp they verge on extreme. He says: if a person starts learning a truly new language after age 14, they will never master it at a native level. AI, he argues, works similarly.

Dan, who joined the company right after graduation, says that for the first time in his life he felt true knowledge anxiety.

At school, he had only ever worked on “toy models,” around 7 billion parameters, which could be trained in a few days on 32 GPUs. Now he was handling enormous Mixture-of-Experts models with tens of billions of parameters and training datasets measured in trillions of tokens. It felt like jumping straight from a small pond into the Pacific Ocean.

To keep up, he threw himself into near self-abusive study. His schedule collapsed. Beijing daytime became Silicon Valley nighttime, then reversed. He stared at training dashboards for hundreds of hours, like a stock trader watching markets with no room to blink.

The real challenge was not just workload. He had to do three jobs at once.

He had to be an algorithm architect, designing the best plan through a maze of model choices. He had to be a systems engineer, debugging distributed computing problems like a mechanic repairing a pipeline stretched across the globe. He had to be a data curator, performing “alchemy” on giant datasets so the model would score well on benchmarks while also feeling natural and soft in actual conversation.

Sometimes that meant emergency surgery mid-training. At one point, key parameters stored in bf16 precision started behaving dangerously. The team made a snap decision to switch to fp32 precision halfway through training, just to stabilize the run. Dan says that if all you can do is write algorithms, or build systems, or clean data, you will never produce a top model. There is no excuse here of “I only handle this part.”
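The bf16 hazard is easy to reproduce in miniature. The sketch below is my own illustration, not the team’s code: it simulates bfloat16 by truncating a float’s 32-bit encoding to its top 16 bits, and uses Python’s 64-bit float as a stand-in for higher precision, showing how tiny updates can vanish entirely at low precision:

```python
import struct

def to_bf16(x: float) -> float:
    # Crude bfloat16 simulation: keep the top 16 bits of the float32
    # encoding (sign, 8-bit exponent, 7-bit mantissa), truncating the rest.
    packed = struct.pack('>f', x)
    return struct.unpack('>f', packed[:2] + b'\x00\x00')[0]

def accumulate(steps: int, update: float, cast) -> float:
    # A single weight receiving many tiny updates, as in a long training run.
    w = cast(1.0)
    for _ in range(steps):
        w = cast(w + update)
    return w

low = accumulate(10_000, 1e-4, to_bf16)  # updates vanish below bf16 resolution
high = accumulate(10_000, 1e-4, float)   # 64-bit float standing in for fp32
```

Near 1.0, the bf16 grid spacing is 2^-7 ≈ 0.008, so each 1e-4 update is rounded away and `low` never moves, while `high` ends near 2.0. This is why master weights and sensitive accumulations are often kept in fp32.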

The company expects you to integrate algorithm, engineering, and data work across multiple worlds. It is like doing several jobs at once. But that kind of intense cross-training can give you years’ worth of growth in a very short time.

So anyone trying to join Kimi faces a brutal test.

There are no OKRs, no KPIs, no office politics, no manipulative managers, not even clock-in attendance. But if you are not AI-native, if you cannot generalize, if you cannot continuously reinforce and adapt, you may struggle to find your place here.

“There’s no bureaucrat smell here”

Most brands want a story.

But nearly every Kimi employee gently warned me: don’t write about Pink Floyd or the piano near the office entrance.

Their view is that people who get it, get it. People who don’t, don’t need to. The names Moonshot and Kimi have nothing directly to do with AI or technology. But if the company talked too much about its connection to rock music or art, it would start to feel self-conscious and pretentious. Better, they seem to think, to be beautiful without trying to explain the beauty.

Win, another post-2000s employee who had escaped from a giant tech company, told me this place is bizarre because people can actually get work done without endless meetings.

At his former employer, daytime was for meetings and nighttime was for work. He learned a simple lesson: if your energy goes mainly into coordinating relationships around production, there is very little room left to improve actual productivity.

This is part of what an AI-native organization looks like.

More than ten employees told me explicitly that they increasingly prefer dealing with AI over dealing with humans. AI feels more reliable and simpler. That tendency also fits the company’s broader introverted character. One person used a gentler word: shy.

In group chats, everyone can be lively and expressive. In person, many are quiet. Kimi does not organize many cultural activities. Aside from the annual meeting, the most recent group event had simply been massages in the office.

Introversion does not mean a lack of communication or energy.

Even though no one was required to talk to me, not a single person said no. In group chats, information flies constantly, along with all kinds of abstract emoji. No one’s messages are left hanging in silence.

And if you need help from someone else to get work done, the process is simple: ask them directly.

No need to go through a manager. No need for approval. No need for a coordination meeting. No need to break through departmental walls.

Kimi has no departmental walls. In some sense, it does not even have departments.

Yang Zhilin’s status message is just four words:

Communicate directly.

Still, everyone acknowledges that the company has changed continuously since its founding.

Some changes were proactive, some reactive, and some even seemed like reversals. The company moved from heavy ad spending to model focus, from insisting on closed source to embracing open source, from chatbot products to Kimi Agent, Kimi Code, and Kimi Claw, from consumer to enterprise and back again. Not every shift stands up perfectly to scrutiny.

Yet in Ezra’s mind, one thing has remained constant: respect for facts.

All those changes, she believes, had only one cause and one purpose: to make the company align better with objective reality.

The company tolerates ego, but it does not like hiring people who place themselves above facts.

From the co-founders down, people are relatively easy to persuade, as long as the facts are clear enough. That willingness, employees say, comes from an intense commitment to truth, reality, and what is real. Truly smart people are not wounded by honest feedback.

Another condition for this level of honesty is that the company has no horse-race system, no zero-sum competition, and no major internal conflicts of interest. People willingly share research findings and technical details without expecting payment or credit. Early on, the company ran its own internal community, and it still promotes a community culture today. Shared information and shared knowledge speed up everyone’s learning, which in the end benefits everyone.

Win says toxic culture is contagious. Good culture is contagious too.

Someone used the word “solidarity” to describe the atmosphere, a word that sounds almost old-fashioned when applied to a startup. But the company operates in a harsh environment: giant competitors outside, the constant pressure of being squeezed by established tech firms, and limited compute. Those constraints, if anything, seem to increase cohesion.

At the root of it all, people are the only truly important asset in an organization.

Recently, Florence was approached by a competing company offering double her salary. She rejected it immediately. Her reason was simple:

“There’s no ‘bureaucrat smell’ here.”

That phrase is hard to translate directly. In Chinese internet slang, it refers to the stale, hierarchical, self-important atmosphere of bureaucracy, performative authority, and status games.


The company’s new office.

“I don’t know how she endured it”

At the beginning of this reporting process, I was extremely nervous. I was about to interview some of the smartest AI people in the world. I am a humanities person. I have never worked in tech. My knowledge of AI is limited.

But when I actually started talking with young experts from the algorithm and product-engineering teams, I realized they were the ones who seemed nervous. They were afraid I would feel awkward if I did not understand their terminology.

So first they would translate English into Chinese, and then translate that Chinese into a second, even simpler Chinese I could understand.

That instinct to protect was moving.

Before I started the interviews, the company gave me only one instruction: protect everyone.

So I tried to avoid questions that were too sensitive or likely to hurt people.

Even so, Ty, during a phone interview, could not fully hide a small emotional tremor. When he first joined the company and was going through the difficult onboarding process, he struggled badly. At one point he felt he could not continue and even thought about resigning.

Then one week, at the company meeting, he watched Annie, a woman who had graduated only two years earlier, finally push a difficult project forward after countless setbacks and internal doubts. Seeing that, he felt he could not give up either. He was older than she was, had more life experience, yet in terms of sheer stamina and willpower, he felt weaker.

He said:

“I don’t know how she endured it.”

In fact, Ty was not the only one who had thought about leaving.

Annie had too.

For a long time, she had been trying to build an overseas business line from zero to one, with no real breakthrough. To make things worse, well-meaning colleagues from other teams told her directly to abandon what they saw as a meaningless effort.

She says she cried more at Kimi than for any other company, or for any ex-boyfriend she had ever had.

It was not as though she lacked alternatives. She already had a better-paying offer elsewhere. But she says she simply could not persuade herself to go work for someone else. She wanted one more conversation with Zhang Yutong.

Afterward, she decided to stay.

She did not tell me what was said in that conversation. She only said: Yutong is the strongest boss I have ever seen, the fastest at iterating, with the highest ceiling. Following her is how I can raise my own ceiling.

Then Annie repeated the same line:

“I don’t know how she endured it.”

Once you gather enough material, you notice certain sentences recurring. And the most repeated phrases often reveal the deepest common qualities of a team.

Bob, who had been pulled back to China by Yang Zhilin and gave up the chance to pursue a PhD in the United States, joined the company on day one. If anyone understands the company deeply, he does.

When I asked him the same question I asked everyone else, what is the team’s most important quality, he thought for about two minutes and answered with one word:

Resilience.

For a company only three years old, talking about resilience may sound like a luxury. But he means it sincerely. Smart and brave, he says, are sometimes opposites. The smarter you are, the more clearly you see the risks, and the easier it becomes to walk away. Foolish persistence will not succeed either. So only those who see the truth, calculate the odds of failure, and still continue deserve to be called resilient.

Inside the company, there is a story known as “three trips to the cliff of reflection.”

In May 2023, Freddie and his colleagues were given a task that seemed impossible: make AI read and understand 128K context in a single pass, meaning hundreds of book pages, at a time when the industry standard was closer to 4K.

He quickly designed a solution called MoBA v0.5, but it required rewriting the underlying training framework while the main model was already halfway through training. The cost was too high, so the idea was shelved. That was the first trip to the “cliff of reflection.”

Half a year later he returned with version 1, now designed to continue training from the existing model. It worked on small models, but when tested on the large one it hit a loss spike and kept failing. The project was forced back to the cliff a second time, for another six months. It even missed the company’s 200,000-character product milestone. But the team was not disbanded. Instead, the company launched what it called a “saturation rescue,” gathering technical experts from everywhere to attack the problem together. They rewrote core logic and finally got version 2 to pass the classic long-context “needle in a haystack” test.
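For readers unfamiliar with it, the “needle in a haystack” evaluation buries one unique fact in a long distractor document and checks whether the model can retrieve it at varying depths. A minimal harness, entirely my own sketch with an invented needle, looks like this:

```python
def make_haystack(n_sentences: int, needle: str, depth: float) -> str:
    """Bury one 'needle' sentence at a relative depth
    (0.0 = start, 1.0 = end) inside a long run of filler sentences."""
    filler = ["The sky over the harbor was grey that morning."] * n_sentences
    filler.insert(int(depth * n_sentences), needle)
    return " ".join(filler)

def needle_score(ask, depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> float:
    """Fraction of depths at which the model callable
    `ask(context, question)` retrieves the buried fact."""
    needle = "The secret launch code is 7481."
    hits = 0
    for d in depths:
        context = make_haystack(5_000, needle, d)
        answer = ask(context, "What is the secret launch code?")
        hits += "7481" in answer
    return hits / len(depths)
```

Real evaluations vary the needle, the filler, and the context length up to the model’s full window; a model “passes” when retrieval stays near 100% across the whole grid.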

Just when launch seemed close, a third blow arrived. During supervised fine-tuning, the model performed poorly on long-summary tasks because the training signals were too sparse. By then huge resources had already been invested. Still, the engineers went back to the cliff again, searched for a solution, and eventually fixed the issue by changing the attention mechanism in the final layers.

Three retreats. Three returns.

At the end of the interview, I asked Freddie one last question: how would you describe this company?

He answered in two words:

Moon landing.

Why moon landing?

He quoted the famous line from John F. Kennedy:

We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.


All the company meeting rooms are named after musical acts.

Genius Swarm

In the end, I did not approach or attempt to interview the co-founders themselves.

Externally, they remain almost invisible. They dislike interviews and have no interest in personal fame. Internally, though, they are everywhere.

In an extremely flat organization, you need superbrains at the center. Otherwise vitality turns into chaos. Because there is little middle management, each co-founder interfaces directly with around 40 to 50 employees and stays close to both the technical and business front lines. That is how the company keeps decision-making and execution aligned.

All five co-founders came from Tsinghua University. But biological limits still exist. Human attention spans are finite. Management range is finite. Once the company reached a RMB 120 billion valuation and grew past 300 people, even these superbrains began to strain under the load.

And it is not just the founders.

This is an infinite game driven by self-motivation. If every member is effectively carrying RMB 400 million of valuation, then each person is expected to create an extraordinary amount of value.

The revolutionary variable is the toolset.

Kimi does not actually run on extreme working hours. Employees are allowed to wake naturally. They are not required to stay in the office until dawn every night. Leo from the product team says he commands “an army” now, meaning AI agents.

Imagine this scenario:

Leo wakes up at 10 a.m. and walks into the office. His task is to analyze user feedback from five global markets over the past 24 hours and decide this week’s product priorities. In the past, that would have taken three people two days.

Now he launches three agents.

A strategy agent scans 3,000 feedback items and filters for high-priority requests related to long-context interruption. A translation agent interprets Japanese dialects and Korean honorifics in real time and marks true emotional intensity. A competitor agent monitors updates from Cursor and ChatGPT and produces a technical comparison.

Leo does only three things himself. He rejects one sarcastic comment that the system had misread as sincere. He flags a screenshot containing an unreleased UI. He confirms the top three needs recommended by the agents.

By 11:30 a.m., the product requirements document is already finished. Meanwhile, a coding agent has generated about 70% of the base implementation, leaving only the more creative design work for afternoon discussion with human engineers.

Humans set the rules. Silicon-based systems execute them. The organization becomes a container for algorithms.

In an AI-native company, using agents skillfully and embedding them deeply into workflows is not optional. It is part of the job.

The model is not only the goal. It is also the tool.

Whether by directly improving productivity or by fundamentally changing management structure, AI’s logic has already entered the bones of this company. Just as the company builds an Agent Swarm, the team itself begins to resemble a Genius Swarm: many independent geniuses working in parallel, coordinating seamlessly.

Still, such a flat structure has built-in fragility.

When I asked whether this model would remain sustainable if the company grew from 300 people to 3,000, most people answered cautiously. History is not encouraging. Similar experiments in extreme flatness, like holacracy or Haier’s internal contract-cell structures, often hit decision bottlenecks once they pass around 500 people. When there are too many information nodes, “direct communication” starts turning into information overload.

A more immediate pain point is the personal experience of weightlessness.

Without hierarchy to buffer uncertainty, confusion about direction is felt directly by each individual. One former employee who eventually returned to big tech put it bluntly: without top-down OKRs and KPIs, some mornings you walk into the office not knowing what you should do. No one necessarily tells you whether you are doing well. That lack of feedback creates insecurity. It can make people nostalgic for the clear reporting lines, review points, and measurable outputs of giant tech companies.

Those cumbersome structures, after all, do provide one essential thing: a baseline of certainty.

Where is the goal? What counts as completion? How will performance be judged? In a large firm, all that is visible.

That is not Stockholm syndrome, the person said. It is basic organizational physics.

If Alibaba is like a finely calibrated promotion conveyor belt, ByteDance like a ruthless battle corps with strong objectives, and Tencent like a more forgiving professional academy, then Moonshot AI is like a primeval forest.

Geniuses may find a hunting path. Ordinary people may just wander in the fog.

The necessary “two-dimensional foil”

No departments. No titles. No evaluations.

The AI-native organizational model is anti-bureaucratic and intentionally unstructured. Large companies can no longer pivot toward it easily. Small companies often miss the window because they expand into traditional structures too quickly. This is an asymmetric war.

Here it helps to borrow a famous image from the science-fiction novel The Three-Body Problem. In that story, an advanced civilization casually deploys a weapon called a two-dimensional foil, which collapses the solar system from three dimensions into two. Planets, stars, and humans all become a flat image without thickness.

Moonshot AI, I would argue, is deliberately throwing such a “two-dimensional foil” at itself.

Not to destroy an opponent, but to flatten the organization in pursuit of maximum efficiency.

No vertical depth of hierarchy. No horizontal walls of departments. No three-dimensional tangles of office politics. Only “model” and “intelligence” facing each other directly in the simplest possible form.

In the age of AI, every startup is being forced to throw such a foil at itself. The rise of one-person companies reflects the same generational explosion of AI-native talent. If technology can compress organizational capability into the individual, then many of the middle layers of management simply evaporate. The organization gets flattened. There is no depth left for detours. Everyone is forced to face the problem itself.

That may be the hard rule governing the evolution of organizations in the business world.

Everyone, eventually, will be folded.

Once people are exposed on the same plane, one person radiating influence over fifty others no longer looks like a managerial miracle. It becomes normal. The distance from center to edge is redefined. People who depend on titles and OKRs as coordinates may suffocate instantly. But geniuses, on this exposed flat surface, can attack the problem of intelligence head-on, while the “guardians” clear away noise and entropy, seeing themselves, not without a certain grandiosity, as pioneers widening the boundary of human civilization.

And yet the transition from three dimensions to two cannot be reversed.

That means Kimi cannot go backward.

Every strategic adjustment becomes a chaotic iteration with high stakes. Competitors can still turn slowly inside a maze. But if Moonshot AI tries to expand recklessly in size, it may tear itself apart structurally. This act of self-flattening is only acceptable because it is in service of something more radical.

The endpoint of lowering the organization’s dimension is raising the dimension of intelligence.

Only if model intelligence crosses the critical threshold, rising high enough to escape the gravity well of all carbon-based organizations, can Moonshot AI truly crush the organizational advantages of its competitors and justify this irreversible gamble.

At that point, debates over management span or org charts no longer matter. It would be like asking which dimension the attacking civilization in The Three-Body Problem inhabits, when the real point is that its dimensional weapon has already rewritten the rules of war.

Then “Moonshot AI” would stop being a metaphor.

It would become a higher-dimensional light source, illuminating the dark side of the intelligence universe. All the organizational pain that came before would be no more than the heat shield burning off as the lunar module passed through the atmosphere.

Either they become godlike through ascent.

Or they are sealed away in collapse.

There is no third path.