Prophets often emerge from a trial in the wilderness with a message for their people. Elon Musk emerged from production hell with an algorithm. No, the algorithm.
As Walter Isaacson writes in his biography of Musk, “[The algorithm] was shaped by the lessons he learned during the production hell surges at the Nevada and Fremont factories” in the late 2010s. These ordeals left Musk with great singleness of purpose. According to Isaacson:
At any given production meeting, whether at Tesla or SpaceX, there is a nontrivial chance that Musk will intone, like a mantra, what he calls “the algorithm.”… His executives sometimes move their lips and mouth the words, like they would chant the liturgy along with their priest.
Musk agrees: “I became a broken record on the algorithm. But I think it’s helpful to say it to an annoying degree.”
It’s There For a Reason
Those of us who are interested in running organizations well should be interested in the algorithm. Why? Because it’s what someone who has disrupted two large, global, oligopolistic, capital- and knowledge-intensive industries with startups founded this century1 came to conclude, after much bitter experience, was worthy of repeating “to an annoying degree.” The algorithm reflects the things Musk felt that his companies, after 15+ years of his leadership, still weren’t getting right. And weren’t going to get right unless he “became a broken record.”
Those things must be the absolute worst, right? They must be the stickiest, hardest to fix, most pernicious stuff that plagues organizations. Musk had already sandblasted away most everything else. At Tesla and SpaceX he surrounded himself with people who shared his zeal for changing the world via building, innovating, and executing. They created lean, hungry companies where, as Musk put it, “A maniacal sense of urgency is our operating principle.” They got rid of the bureaucrats and clock-watchers and bloat. Tesla, for example, boiled new employee onboarding down to a five-page “anti-handbook” (by way of comparison, GM’s code of conduct is 47 pages long and Ford’s online version has 22 sections).
So what was left to do? What, by the late teens, was still so important that it merited the creation of the algorithm and its endless repetition by Musk? Two things: remove stuff and eliminate wiggle room.
Do those seem underwhelming to you? Less than revelatory? The more I understand about how evolution shaped us human beings and about why we do the things we do, the more impressed I am by the algorithm. I think it reflects an ultimate understanding of our species.
To be clear: An ultimate understanding. I don’t think Musk has achieved the ultimate understanding of humanity — the one that will never be improved on. My choice of article signals a different meaning of the word “ultimate.” I’m using it in the way evolutionary biologists do: to talk about function.
What’s this thing — this claw, this eye, this gill, this mating dance — for? What’s its function? Within evolutionary biology, those are called ultimate questions.2 They’re worth asking about human beings (just as they are about any other species), and they’re worth asking about our behaviors as well as our physical traits.
So let’s take an ultimate perspective on Musk’s algorithm and see what we see. An initial insight is that it must be aimed at behaviors that have a few characteristics. They must be common, or at least common enough to be concerned about. They must be pretty deep-rooted and hard to change, or else Musk wouldn’t need to keep harping on about them “to an annoying degree.” And they must somehow be bad news for his companies.
Here’s the algorithm, as described by Isaacson:
It had five commandments:
1. Question every requirement. Each should come with the name of the person who made it. You should never accept that a requirement came from a department, such as from “the legal department” or “the safety department.” You need to know the name of the real person who made that requirement. Then you should question it, no matter how smart that person is. Requirements from smart people are the most dangerous, because people are less likely to question them. Always do so, even if the requirement came from me. Then make the requirements less dumb.
2. Delete any part or process you can. You may have to add them back later. In fact, if you do not end up adding back at least 10% of them, then you didn’t delete enough.
3. Simplify and optimize. This should come after step two. A common mistake is to simplify and optimize a part or a process that should not exist.
4. Accelerate cycle time. Every process can be speeded up. But only do this after you have followed the first three steps. In the Tesla factory, I mistakenly spent a lot of time accelerating processes that I later realized should have been deleted.
5. Automate. That comes last. The big mistake in Nevada and at Fremont was that I began by trying to automate every step. We should have waited until all the requirements had been questioned, parts and processes deleted, and the bugs were shaken out.
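By way of illustration only — none of this code comes from Musk, Tesla, or Isaacson, and every name in it is hypothetical — the five commandments can be sketched as an ordered pipeline, where the whole point is that each stage runs only after the previous one:

```python
# A minimal, hypothetical sketch of the algorithm's strict ordering.
# The function names and data are invented for illustration.

def question_requirements(requirements):
    """Commandment 1: every requirement must carry a named human owner."""
    for req, owner in requirements:
        if owner.lower().endswith("department"):
            raise ValueError(f"'{req}' has no named owner, only '{owner}'")
    return requirements

def delete_parts(steps, is_essential):
    """Commandment 2: delete everything you can; expect to restore ~10% later."""
    return [s for s in steps if is_essential(s)]

def simplify(steps):
    """Commandment 3: only simplify what survived deletion."""
    return [s.strip().lower() for s in steps]

def accelerate(steps):
    """Commandment 4: speed up only steps that should exist at all."""
    return [(s, "fast") for s in steps]

def automate(steps):
    """Commandment 5: automate last, once the bugs are shaken out."""
    return {s: "automated" for s, _ in steps}

def run_algorithm(requirements, steps, is_essential):
    # The ordering is the point: questioning and deleting come before
    # simplifying, accelerating, and (last of all) automating.
    question_requirements(requirements)
    kept = delete_parts(steps, is_essential)
    return automate(accelerate(simplify(kept)))
```

For example, `run_algorithm([("crash test", "Jane Doe")], [" Weld Frame ", "Polish Logo"], lambda s: "weld" in s.lower())` questions the requirement, deletes the nonessential step, and only then simplifies, accelerates, and automates what remains.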
It’s All Too Much
The human cognitive bias that most motivates the algorithm — that in fact permeates it — is status quo bias. We humans have a deep-seated fondness for maintaining the status quo: for keeping things the same and doing things the same over time. As Musk was goading Tesla and SpaceX to their world-changing achievements in the late 2010s, he realized this fondness was one of his most implacable foes.
Look at commandments 2 through 4. They’re all about fighting the status quo bias: about altering the status quo — of parts, of steps within procedures, of procedures themselves — rather than accepting it. Three out of the five commandments are variations on this theme. They’re repetitions within the constantly repeated algorithm. It’s not just that Musk dislikes the status quo bias. It’s also that he realizes how much effort it takes to make headway against it.
The status quo bias is so hard to fight because it’s built into human psychology. But not chimp psychology, as a series of clever experiments conducted early this century showed. Human toddlers and chimps (their intellectual peers in many ways) were both shown an opaque box. A human adult manipulated the box in order to get a treat out of it (food for the chimps; stickers for the humans). Both the kids and chimps were like “OK, you’ve got my attention now.” When both were given the box, both successfully repeated all the steps and extracted the treat.
Then the adult did the identical manipulations with a transparent version of the same box, revealing that a few of the steps were completely unnecessary: they were irrelevant to the goals of getting the treat out. The chimps observed this fact and, being busy simians with no time to waste on superfluities, skipped the irrelevant steps when given subsequent treat boxes.
Most of the human toddlers didn’t. They kept doing the irrelevant steps. They stuck with the status quo of treat acquisition even when they’d just been shown a more streamlined method. And we don’t outgrow the tendency to stick with the status quo as we outgrow our fondness for Teletubbies and juice boxes. As anthropologist Joe Henrich puts it in his book The Secret of Our Success, “assuming the problem is sufficiently opaque, the magnitude of ‘overimitation’ increases with age. This also isn’t just educated Western peoples. Research in the Kalahari Desert in southern Africa, whose populations lived as foragers until recent decades, show them to be at least as inclined to [overimitation] as Western undergraduates.”
What’s going on here? When we take an ultimate perspective, the ubiquity of the status quo bias among humans is a strong clue that it must have a function for us (but not for chimps). It must be there for a reason. Evolution tends to eliminate fitness-destroying traits and spread fitness-increasing ones.3 Status quo bias has spread until it’s just about universal among humans. So what’s its function?
The ultimate explanation, which I find compelling, is that we humans have a strong status quo bias because of the complexity of our cultural repertoires — the things that human groups have learned to do.
Many if not most of those things are crazy complicated. If someone gave you a bunch of rocks and asked you to make a simple working stone hand axe — one of the oldest and most common tools in humanity’s cultural repertoire — you would fail miserably. You wouldn’t have any idea which rock to use as the raw material for the axe, which rock to use to chip away at that raw material, how to do that chipping in order to wind up with a nice sharp edge, and so on. You might be able to figure these things out by trial and error, but probably not before you died of exposure or starvation.
So if you wanted to live, you’d simply imitate the master hand axe makers. You’d watch and listen to them, and copy them exactly. Once you mastered their technique you might try to vary it a bit to get better results, but most of your varying would probably lead to worse outcomes. So you’d just carry on in the great hand axe-making tradition of your ancestors.
You’d do the same for hunting, preparing food so that you nourish people instead of poisoning them, building shelters, making fire, and so on. You’d copy the steps you learned as faithfully as possible, rather than trying to dissect which of them are essential and which are superfluous.
Evolution has equipped you and me and everyone else with a “copy exactly” mental module. We pick up much of our cultural repertoires easily, naturally, and unconsciously by learning from others and doing just as they do. We are wired to be conservative cultural copycats. Chimps aren’t wired that way because their cultural repertoires are so much smaller and simpler.4
We have another, related tendency that makes Musk’s algorithm all the more essential. When we do change the status quo, we usually add steps or elements instead of subtracting them. We have an “addition sickness,” as organizational scholars Bob Sutton and Huggy Rao put it in their book The Friction Project. For example, experiments involving both onscreen and real-world tasks found a strong tendency to add steps instead of subtracting them when pursuing goals, even when subtracting was easier or cheaper. Only when subjects were specifically told that subtraction was an option did they engage in it as much as they should have.
In the algorithm Musk specifically and repeatedly tells his people that subtraction is more than an option. It’s a necessity. He’s realized, through what sounds like ample bitter experience, that most of the parts, processes, subassemblies, and procedures — most of the cultural repertoire, in other words — that got built up over time at SpaceX and Tesla5 were more complex than they needed to be because of our status quo bias and addition sickness.
So it’s time to, as he exhorts, Question, Delete, Simplify, and Accelerate. How much? Musk’s answer here, which I love, is to subtract until you know you’ve gone too far. As he says in step 2, take things away until you learn that you’ve taken away too much. That’s the only way to be sure you’ve done enough of it.6
Only once all that taking away is finished can you add to the organization’s cultural repertoire by deploying new technology. “Automate” is the fifth and final step of the algorithm. Musk learned that it should come last, after all the questioning, deleting, simplifying, and accelerating are finished and the status quo bias has been fought to a standstill.
It Wasn’t Me
The second piece of ultimate insight about us fantastic and frustrating human beings that’s encapsulated in Musk’s algorithm is our hardwired fondness for plausible deniability. Wiggle room, in other words.
Human cultures are held together largely by social punishment. When you’re a member of the most social species on the planet (i.e. us), social exclusion really and truly hurts. As psychologist Matthew Lieberman summarizes, “social pain is real pain just as physical pain is real pain.” So social exclusion and other similar punishments are a big part of how we bring norm violators back into line.
But when we’re the ones committing the violation, we obviously don’t want to be brought back into line. We want to get away with shirking or having a cookie before dinner or not putting a cover sheet on our TPS report or whatever. The all-purpose cultural tool that helps us here is plausible deniability — our ability to find enough wiggle room to get us off the hook. I wasn’t shirking; you just happened to come by my workstation during my bathroom break. I understand that those are my fingerprints on the cookie jar, but you need to understand that I put them there last night after dinner, when I took my one approved cookie. There are five steps in the TPS report submission process that come after I fill it out; my cover sheet must have been removed during one of them.
Note that those excuses aren’t particularly good. They don’t have to be. They just have to be plausible; that’s often enough to escape punishment.
My favorite demonstration of how quickly and naturally we generate plausible deniability and put it to use comes from classic social psychology experiments conducted in the 1970s.7 The experimenters had people come in, ostensibly to rate movies (social science experiments are like heist movies; the apparent setup is always a head fake). All subjects entered a viewing room that contained two TVs, each of which had two seats in front of it. One seat was occupied by a “handicapped person” wearing a metal brace.
Yep, the brace was a prop and the person was part of the experiment (a confederate, as they say).
When some of the suckers — er, marks — er, subjects in the experiment entered the viewing room, both TVs were playing the same movie. Call this case A. For the rest, the two TVs were playing (only slightly) different movies. This was case B.
What happened? In case A, 3/4 of the subjects took the seat next to the “handicapped person.” In case B only 1/4 did, because in case B you could plausibly deny that you were a miserable human who’s biased against the handicapped.
I’m pretty sure that most of the subjects in fact didn’t have any such bias. But the instant they walked into the viewing room their minds assessed the situation and very quickly reasoned as follows:8
Look, whether or not you’d rather sit by yourself, you can’t appear to be a jerk who’s biased against the handicapped. So unless there’s a plausible reason not to, you’re going to walk over and sit down next to that person.
In case B, the subjects’ minds added: And — hooray! — a plausible reason exists. Go sit in front of the TV playing the other movie if you want to. You can tell anyone who asks that you sat there to help out by making sure more movies got reviewed.
As this example shows, our plausible deniability generator runs on autopilot. It does most of its work below our level of conscious awareness. It’s already achieved the full self-driving mode that Musk has been working hard to install in Tesla’s cars.
The algorithm’s first commandment — Question every requirement — is the equivalent of “thou shalt not accept or tolerate deniability about the origin of a requirement.” There needs to be an actual human being on the hook for every one. Those actual humans probably don’t like having their plausible deniability stripped away — they’d rather avoid off-hours phone calls asking about the requirement, interrogations from Musk himself about it, and so on — but the point of the commandment is to place responsibility on a person and take their wiggle room away. As Isaacson writes about the origins of the algorithm’s first commandment:
Whenever one of his engineers cited “a requirement” as a reason for doing something, Musk would grill them: Who made that requirement? And answering “The military” or “The legal department” was not good enough. Musk would insist that they know the name of the actual person who made the requirement.
The ultimate goal of the first commandment is to turn Musk’s companies into plausible deniability deniers when it comes to the important question: why are we doing things this way? When, in other words, it comes to the status quo.
I find the algorithm fascinating and informative because it reveals the truly, deeply difficult problems to fix in any organization — the problems that remain even after one of the most talented, ambitious, tenacious, no-detail-too-small company builders applies his “maniacal sense of urgency” to a couple of organizations for more than fifteen years.
After all that effort and all that time, Musk’s biggest foes aren’t regulators, Industrial Era incumbents, or other business geeks. They’re instead aspects of human nature. They’re two of the things evolution has shaped us to cling to: the status quo, and plausible deniability.
I don’t think we’re going to be able to cure ourselves of these addictions; they’re too deeply rooted. But what we can do, and what Musk is working toward with the algorithm, is create cultures that have strong norms against accepting the status quo and tolerating plausible deniability. Once those norms are in place — once we’ve activated the community policing of norm enforcement, which is also deeply rooted within us humans — then we’ll really be able to accelerate our cars, rockets, and who knows what else.
1. I mean sure, any of us could do that once, but when someone does it twice we should pay some attention.
2. Evolutionary biologists also pursue proximate questions, which are about mechanism: how does that eye focus? How do those gills remove oxygen from the water?
3. “Fitness” here means, according to the Understanding Evolution website, “how good a particular genotype is at leaving offspring in the next generation relative to other genotypes. So if brown beetles consistently leave more offspring than green beetles because of their color, you’d say that the brown beetles had a higher fitness. In evolution, fitness is about success at surviving and reproducing, not about exercise and strength.”
4. It follows, of course, that a big part of the reason that chimps’ cultural repertoires remain so comparatively small and simple is that they don’t learn from those around them as well as we humans do. At some point in our evolutionary past, current thinking goes, we diverged from chimps by becoming better (more exact) copiers. This accelerated the evolution of our cultures. As that acceleration happened, evolution started selecting for individuals who copied more exactly. The resulting virtuous cycle helped bring us to where we are today.
5. And at your organization and mine.
6. And be sure to put back in the stuff that you learned was actually necessary.
7. I’m really glad that the profession of ‘experimental social scientist’ exists, for two reasons. First, it generates knowledge. Second, it provides gainful employment to a lot of fiendishly clever manipulators of other human beings. If these folk didn’t have their labs in which to play puppetmaster they’d be out running three-card monte games, creating memecoins, and doing other socially destructive things.
8. My image for these internal conversations is Homer Simpson’s brain occasionally piping up to try to steer him in a better direction than the one he’s chosen. All business scholars agree that the greatest of these interactions culminated in Homer’s brain informing him that “money can be exchanged for goods and services.”