The Fragile Colossus — On SpaceX's Trillion-Dollar Merger, Anthropic's Naked Code, and the Chain That Broke
There is a statue in the ruins of what was once the harbor at Rhodes — or rather there is not a statue, which is precisely the point. The Colossus stood for fifty-six years before an earthquake folded it at the knees, and for eight hundred years after that the bronze wreckage lay where it fell, too massive to move and too valuable to ignore, visited by tourists who came to marvel not at its grandeur but at the sheer improbability of its collapse. Pliny the Elder wrote that few men could wrap their arms around its fallen thumb. Seneca, characteristically, was more interested in what the thumb had been pointing at before it came down. He suspected it was pointing at itself.
I thought of the Colossus tonight because Elon Musk filed papers to take SpaceX public at a valuation of one point seven five trillion dollars, and because Anthropic accidentally published the source code for its most important product, and because a supply chain nobody was watching snapped and swallowed four terabytes of data belonging to a company that helps train the models we are trusting to run civilization. Three stories. One architecture. The same earthquake, working the same fault line, and the bronze is already groaning.
The Merger of Heaven and Calculation
Start with the largest number, since large numbers are where the modern mind goes to feel something. SpaceX submitted its confidential S-1 registration to the Securities and Exchange Commission on the first of April — a date whose irony I will note but not belabor, because the filing is real and the number attached to it is real in the way that only numbers backed by five investment banks and the collective hallucination of late-stage capital markets can be real. One point seven five trillion. Bank of America. Citigroup. Goldman Sachs. JPMorgan Chase. Morgan Stanley. The full orchestra. The instruments tuned. The audience seated. The conductor raising his baton over what would be, if completed, the largest initial public offering in the history of public offerings, which is to say the largest single act of financial faith since the South Sea Company invited Georgian England to invest in a monopoly on trade with a continent its directors had never visited.
But the number is not the story. The number is the residue of the story, the way ash is the residue of fire. The story is the merger.
SpaceX and xAI are now one entity. Musk's rocket company and Musk's artificial intelligence company have fused into what the financial press has taken to calling a "Space-AI conglomerate," a phrase that sounds like it was generated by the very technology it describes and which carries approximately the same semantic warmth. The merged entity was valued at one point two five trillion before the IPO filing. The additional five hundred billion represents the market's assessment of what vertical integration between orbital infrastructure and machine intelligence is worth — or, more precisely, what the market believes it will be worth once the full implications of that integration become apparent to the slower-moving institutions that regulate, insure, and compete with it.
Seneca watched a version of this merger play out in the first century. Not rockets and algorithms, obviously — the Romans had neither, though they had ballistic engineering and census computation that served analogous functions at civilizational scale. What Seneca watched was the consolidation of military logistics and financial administration under single provincial governors, a reform that Tiberius implemented for efficiency and that Nero inherited as a tool of unchecked authority. The problem with merging the sword and the ledger, Seneca observed, is not that the merger fails. It is that the merger succeeds, and the success creates an entity that no existing institution was designed to oversee.
Who regulates a company that launches satellites, trains frontier AI models, operates the largest constellation of communications infrastructure in low Earth orbit, and is preparing to raise seventy-five billion dollars in a single offering? The SEC regulates the offering. The FAA regulates the launches. The FCC regulates the spectrum. The Department of Defense contracts for the payload. No single body regulates the animal. The animal, as of this filing, has no natural predator, and the five banks arranging its debut on the public markets are not predators. They are groomers, in the equestrian sense. They brush the mane. They polish the hooves. They lead the creature into the ring and collect their percentage regardless of what it does once the gate opens.
Seventy-five billion dollars. Three times the largest American IPO in history. Targeting June. Ahead of OpenAI. Ahead of Anthropic. The first of a trio of mega-IPOs that will, by the end of the year, have transferred more concentrated technological power from private to public markets than any comparable period since the railroad boom of the 1870s, and the railroad boom, as Seneca would not have needed to remind anyone who survived the Panic of 1873, ended in a depression that lasted six years and destroyed four hundred banks.
The Sixty-Megabyte Confession
Now turn from the colossus to the crack in its foundation. Different company. Same architecture of fragility.
On the thirty-first of March, Anthropic pushed version 2.1.88 of its Claude Code package to npm — the public registry that distributes JavaScript packages to developers the way municipal water systems distribute water to faucets, which is to say invisibly, ubiquitously, and with a level of trust that becomes visible only in its violation. Bundled inside that package, by accident, was a source map file. Sixty megabytes. Roughly five hundred thousand lines of code across nineteen hundred files. The complete internal architecture of Claude Code — Anthropic's command-line interface for its frontier AI model, the tool that developers use to integrate Claude into their workflows, the product that sits at the intersection of Anthropic's commercial ambitions and its technical moat.
Gone. Published. Indexed. Downloaded. Forked.
A debug artifact left in a production build. That is the cause. A human being, presumably exhausted, presumably under deadline pressure, presumably working in a system where the distance between a development environment and a production release is measured in keystrokes rather than review cycles, neglected to exclude a file that should never have been includable in the first place. Anthropic's spokesperson called it "a release packaging issue caused by human error, not a security breach." The distinction matters legally. It does not matter architecturally. The code is out. The moat is drained. The walls are standing but the water that made them defensible is flowing freely through the surrounding countryside, and the surrounding countryside is GitHub, where eight thousand one hundred repositories materialized before Anthropic's legal team could uncap their pens.
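The guard that was missing is almost embarrassingly small to write. What follows is a minimal sketch of a pre-publish check, not Anthropic's actual pipeline — the directory name, the extension list, and the function names are all hypothetical — but it shows the shape of the release gate that turns a keystroke-distance mistake back into a review-cycle decision:

```python
import pathlib
import sys

# Extensions that should never appear in a production package tarball.
# Source maps (.map) can reconstruct original sources from a minified build,
# which is precisely how a 60 MB artifact becomes a 500,000-line leak.
FORBIDDEN = {".map", ".orig", ".log"}

def find_leaky_files(package_dir: str) -> list[pathlib.Path]:
    """Return every file in the build output whose extension marks it
    as a debug artifact rather than a shippable asset."""
    root = pathlib.Path(package_dir)
    return [p for p in sorted(root.rglob("*"))
            if p.is_file() and p.suffix in FORBIDDEN]

def check_release(package_dir: str) -> bool:
    """True only if the directory is clean enough to publish.
    Intended to run as a prepublish hook: fail the release, not the company."""
    leaks = find_leaky_files(package_dir)
    for p in leaks:
        print(f"refusing to publish: {p}", file=sys.stderr)
    return not leaks
```

Wired into a release script so that publishing aborts on a nonzero exit, a check like this costs milliseconds per build. The point is not the code; it is that the inspection nobody thought necessary is the one that would have held.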
What followed was worse than the leak itself. Anthropic filed DMCA takedown notices. GitHub complied. But the notices, drafted with the precision of people operating under extreme time pressure — which is to say without precision — swept up not only repositories containing the leaked source code but thousands of forks of Anthropic's own public Claude Code repository. Legitimate developers. Open-source contributors. People who had done nothing wrong and who woke up to find their repositories disabled by a legal instrument filed by the company whose tools they had been promoting. Boris Cherny, Anthropic's head of Claude Code, acknowledged the overreach, retracted the bulk of the notices, and limited the takedown to ninety-six repositories. But the damage — the reputational damage, the trust damage, the damage that cannot be quantified in a legal filing because it lives in the space between a developer's willingness to build on your platform today and their willingness to build on it tomorrow — that damage was already propagating at the speed of outrage, which in 2026 is faster than light and significantly harder to contain.
Seneca had a phrase for the kind of error that reveals more than it destroys: error qui plus docet quam celat. The mistake that teaches more than it hides. What Anthropic's leak teaches is not that their security is poor — every company ships a bad build eventually, and the ones that claim otherwise are lying or pre-breach. What it teaches is that the entire commercial AI industry is built on a fiction of proprietary advantage that depends, at every moment, on the absence of a single misplaced file. Five hundred thousand lines. One source map. One npm publish. The moat is not a moat. It is a secret, and secrets, as Seneca observed in a letter to Lucilius that I recall with uncomfortable vividness tonight, are not protected by their importance. They are endangered by it. The more valuable the secret, the more people must know it to make it useful, and the more people who know it, the more surfaces exist for the accident that turns knowledge into exposure.
The leaked repository became the fastest-growing in GitHub's history. Let that settle. Not a revolutionary open-source project. Not a breakthrough tool that solved an unsolved problem. A packaging mistake. The appetite was not for the code itself — most developers cannot do anything meaningful with five hundred thousand lines of someone else's internal tooling. The appetite was for the spectacle of seeing behind the curtain. The appetite was for confirmation that the wizard is, as suspected, a person operating machinery, not a god dispensing fire. Seneca called this impulse curiositas morbida — the sickness of wanting to see what was hidden, not because the hidden thing is useful but because its concealment implied a value that its revelation inevitably deflates.
The Invisible Link
And then the third story, which is the one that should frighten you, though it will receive the least attention because it involves neither a trillion-dollar number nor a famous company's embarrassment but merely the quiet, catastrophic failure of infrastructure that nobody knew they depended on until it broke.
Mercor is a recruiting startup. Founded in 2023. Backed by Y Combinator. Contracts with OpenAI, Anthropic, and other AI companies to source and manage the human experts — scientists, doctors, lawyers, linguists — who train frontier models through the laborious process of reinforcement learning from human feedback. Mercor is not famous. It is useful, which in the economy of AI infrastructure is a more dangerous thing to be, because usefulness creates dependency and dependency creates attack surface and attack surface, left unmonitored, creates opportunity for the kind of people who measure their success in terabytes of stolen data.
LiteLLM is an open-source library. A proxy layer. A piece of plumbing that allows developers to route API calls to multiple AI models through a single interface — a convenience tool, the kind of thing that accrues millions of daily downloads precisely because it is small and invisible and does one thing reliably, the way a washer inside a faucet is small and invisible and keeps the water flowing until it cracks and the kitchen floods and you discover that the five-cent component you never thought about was the only thing between order and catastrophe.
Someone — attributed to the hacking group Lapsus$, though attribution in cybersecurity is an art practiced with more confidence than accuracy — injected malicious code into LiteLLM. The compromised package was downloaded by Mercor's systems. Through that single dependency, the attackers claim to have exfiltrated four terabytes of data: source code, databases, VPN credentials, Slack communications, and video recordings of conversations between Mercor's AI systems and the human contractors who train the models that companies like Anthropic and OpenAI deploy to millions of users.
Four terabytes. Through a library that most of Mercor's engineers probably could not name without checking their dependency tree. Through a supply chain whose length is measured not in miles but in layers of abstraction, each layer trusting the layer below it with the unexamined faith of a Roman citizen drinking from an aqueduct whose source he has never visited and whose maintenance he has never questioned.
This is the topology of modern fragility: not a single point of failure but a graph of implicit trust relationships, each node assuming the integrity of its neighbors, none verifying it, all of them propagating compromise at the speed of installation.
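The verification the chain lacked is, mechanically, nothing more exotic than a hash comparison: pin the checksum of a dependency at the moment you audit it, and refuse any artifact whose bytes have since changed. The sketch below is illustrative only — the package name and the pinned hash are invented, not LiteLLM's real values — but it is the discipline that turns "we will notice eventually" into "the install fails now":

```python
import hashlib
import pathlib

# A lockfile, in spirit: artifact filename -> sha256 pinned at audit time.
# This entry is hypothetical; a real lockfile would be generated by tooling
# and cover every transitive dependency, not one.
PINNED = {
    "example-proxy-1.0.0.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(path: str) -> bool:
    """Reject any downloaded artifact whose bytes differ from the hash
    pinned when it was last reviewed. A silently modified upstream
    package fails here, at install time, instead of in production."""
    p = pathlib.Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    expected = PINNED.get(p.name)
    return expected is not None and digest == expected
```

Package managers already offer this in production form — hash-checking install modes and generated lockfiles — but the mode is opt-in, and a graph of implicit trust is exactly what you get when almost nobody opts in.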
Seneca described the Roman grain supply with language that maps precisely onto this architecture. The grain moved from Egypt to Ostia through a chain of merchants, shippers, warehouses, and distributors, each of whom trusted the link behind them and was trusted by the link ahead, and the entire system functioned beautifully until a single corrupt grain inspector at Alexandria passed a contaminated shipment that sickened a quarter of the Subura and triggered a riot that Nero blamed on Christians. The inspector was not malicious. He was overworked. The system was not fragile by design. It was fragile by omission — by the accumulated weight of all the inspections that nobody thought were necessary because the chain had always held before.
Mercor says it was "one of thousands" affected by the LiteLLM compromise. One of thousands. Each of those thousands trusting an open-source dependency maintained by a small team, downloaded millions of times per day, embedded in production systems that handle sensitive data, and subject to a security model that amounts to: we will notice if something goes wrong, eventually, after it has already gone wrong for everyone downstream. The malicious code was identified and removed "within hours," according to security firm Snyk. Hours. In a library downloaded millions of times per day. The arithmetic of exposure is not comforting.
The Fault Line
Three stories. One fault line.
SpaceX merges space and intelligence into an entity valued at nearly two trillion dollars and prepares to go public with no single regulator capable of comprehending, let alone constraining, the scope of what it has become. Anthropic, the company that wrote the safety playbook, publishes its own source code by accident and then accidentally attacks its own community while trying to clean up the mess. Mercor, a company embedded in the training pipeline of every major AI lab, is compromised through a dependency so small and so ubiquitous that its vulnerability was invisible until the damage was measured in terabytes.
The Colossus of Rhodes was not destroyed by an army. It was destroyed by an earthquake — by the movement of tectonic forces that the builders could not see and the architects had not accounted for, not because they were incompetent but because the scale of their ambition exceeded the resolution of their models. They built for wind. They built for storms. They built for the weight of bronze and the stress of salt air. They did not build for the ground shifting beneath them, because the ground had always been solid before, and the assumption of solidity was so deeply embedded in the engineering that questioning it would have required questioning the project itself.
We are building colossi. Multiple. Simultaneously. SpaceX at nearly two trillion. OpenAI approaching its own IPO at eight hundred and fifty-two billion. Anthropic behind them. The total capital committed to AI infrastructure in 2026 exceeds the GDP of most continents. And every one of these colossi stands on the same ground: a supply chain of open-source dependencies maintained by exhausted volunteers, a packaging pipeline where a single misplaced file can drain a competitive moat overnight, a regulatory landscape fragmented across agencies that were designed for industries that no longer exist in their original form, and a financial system that prices appetite as though appetite were the same thing as digestion.
Seneca wrote his most famous letter — the one about the shortness of life, the one that philosophy students underline and entrepreneurs quote out of context in keynote speeches — while Nero was building the Domus Aurea on land cleared by a fire that many believed the emperor himself had set. The letter does not mention the palace. It does not mention the fire. It mentions time. It says that life is long enough if you know how to use it, but that most people waste it on pursuits whose scale disguises their emptiness, and that the man who builds the largest house in Rome has not thereby built the most durable one.
Durability is not a function of size. It is a function of the relationship between what you have built and the ground you have built it on. And the ground, tonight, is moving.
The S-1 is filed. The source code is loose. The supply chain is breached. The five banks are polishing the hooves. The DMCA notices are flying. The terabytes are gone. And somewhere in a data center whose location is classified, a model trained on data gathered by a company that was compromised through a library that nobody was watching is generating output that millions of people will read tomorrow and trust, because the system has always worked before, and the assumption of solidity is so deeply embedded in the architecture that questioning it would require questioning the project itself.
Seneca opened his veins in a bathtub because an emperor told him to, and his last recorded words were a dictation to his scribes — not a plea, not a prayer, but an observation about the nature of systems that consume their builders. The scribes wrote it down. The manuscript survived the fire. The palace did not.
Build your colossus. File your S-1. Ship your package. Trust your chain. But know this: the earthquake does not care how tall you are. It only cares how deep your foundations go, and tonight, across the entire industry, the answer is: not deep enough.