It was cargo cult science, a term of mockery coined much later by the physicist Richard Feynman to describe what happened after American airbases from World War II were removed from remote South Pacific islands, ending the islanders’ only contact with the outside world. The planes had brought wondrous goods. The islanders wanted more. So they “arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas–he’s the controller–and they wait for the planes to land.” But the planes never returned. Cargo cult science has the outward form of science but lacks what makes it truly scientific.

  • Superforecasting, Tetlock and Gardner.

This post serves as a braindump of a situation that consistently rears its head in my career working with startups: a lack of understanding of the levels of stupidity. One would expect that humans, after many years of existing with brains, would be able to grasp their own stupidity, and would be willing to measure and quantify it. Unfortunately, everyone can attest that stupidity is strangely one of the most difficult things to come to terms with. What worker wants to tell their manager about uncertainty? What manager wants to explain to the stakeholders why we should delay a project merely for more information? How can business exist when CEOs send out press releases about what they don’t know? Stupidity is arguably a dirty word. It shouldn’t be. Stupidity is actually an opportunity. This post will explain the levels of stupidity, and how at the highest level one can model their stupidity by thinking about the effects of what they don’t know. By doing this, areas of uncertainty will stand out, and one can reduce their stupidity over time by addressing those uncertainties up front.

Level 0 Stupidity: Effects Don’t Exist

A common form of stupidity, which I will spend the fewest words on because it is so prevalent among humans who may be reading this, is Level 0. Level 0 stupidity is the belief that effects don’t exist at all, and that everything that happens, happens because it was meant to be. This level of stupidity is quite dangerous, since it amounts to lazy ignorance: humans not only accept their fate as stupid, but do nothing to improve their situation.

Avoid this level. Humans have a strange knack for accepting this stupidity, and have even built entire systems that sell you on the idea that this is the happiest and best form of life, where one “lets go” of all levels of thinking. People who wish to sell you this idea likely don’t want you thinking about parting ways with your money, and many of them make a fortune doing so.

Level 1 Stupidity: Effects Exist, but One-Way

Let’s revisit the cargo cult. The islanders noticed that men on planes brought them good things. When the planes left, they sought to rebuild the conditions they associated with planes, building runways to bring the planes back. We can clearly see the problem. Thinking about it logically, they observed over the course of many years that “if planes come to our island, then there is a runway.” When the planes left, they committed the fallacy of affirming the consequent by asserting “if there is a runway, then planes will come to our island.” They had conditioned on one effect, and assumed that “flipping” the conditional and recreating the outcome would bring about the original cause.
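
This is where a small amount of probability helps. The sketch below uses entirely made-up numbers, but it shows in plain Python why flipping a conditional fails: P(runway | planes) can be close to 1 while P(planes | runway) stays tiny, because the flipped direction also depends on how rare planes are in the first place.

    # A minimal sketch with made-up numbers showing why "flipping" a
    # conditional fails. P(runway | planes) is near 1, yet
    # P(planes | runway) comes out tiny once the base rates are included.
    p_planes = 0.01                   # prior: planes visit very few islands
    p_runway_given_planes = 0.99      # almost every visited island has a runway
    p_runway_given_no_planes = 0.20   # runways (or imitations) exist anyway

    # total probability of seeing a runway
    p_runway = (p_runway_given_planes * p_planes
                + p_runway_given_no_planes * (1 - p_planes))

    # Bayes' rule: flip the conditional properly instead of asserting it
    p_planes_given_runway = p_runway_given_planes * p_planes / p_runway

    print(f"P(runway | planes) = {p_runway_given_planes:.2f}")   # 0.99
    print(f"P(planes | runway) = {p_planes_given_runway:.2f}")   # ~0.05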

Why do smart people only achieve this level of thinking?

This is by far the number one error I see in startups. For example, why does your company run “technical assessments” that have candidates prove basic algorithms under strangely arbitrary time constraints for software engineering positions? If you work your way up this folklore chain in your own company today, you’ll likely find an early engineer who was taught that this was the best way to hire at their previous employer, who in turn was taught it by their previous employer. Recurse until the base case, and you’ll find the only real answer is “because Google does it.” Startups do many things because they see a large, successful company doing them and wish to replicate that success. Why wouldn’t you take an existing case study and try to replicate it? Of course, “if the company is currently successful, then they screened technical talent with time-limited tests.” Naturally, “if we screen technical talent with time-limited tests, the company will be successful.” Yet there’s no solid evidence that this actually makes for a successful company. Purely from anecdotal evidence, I can say that a company’s failure is rarely attributable to making bad technical hires. If your managerial staff can’t identify strong technical candidates, you likely have bigger issues.

There is a wide range of practices I have come to disagree with after being in the industry and seeing that they have no discernible effect on success. There is a ton of folklore around them, mostly predicated on the reasoning “a successful company exists, and they did it.” The claim is falsifiable, with plenty of counter-examples, and I have yet to see any data proving or disproving it:

  1. 100% code coverage.
  2. Candy in the office.
  3. Anything Machine Learning, Blockchain, or any other buzzword.
  4. “Encouraging” 10-16 hour workdays and rewarding it.
  5. 1+ hour technical screens.
  6. Stock options, venture capitalism, IPOs.

Level 2 Stupidity: Complex Effects

IPOs are an interesting mention above, because we are currently in the midst of Uber, Tesla, and a few other companies trying to hit profitability after their IPOs, while WeWork is somehow trying to continue the trend of IPO before profitability. If you bought into Uber’s IPO this year with Level 1 thinking, I believe you are better than average, but still missing the mark. Complex effects take into account real interactions between variables: the combination is the effect, not the individual variables on their own.
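
To make “the combination is the effect” concrete, here is a toy Python sketch with entirely made-up numbers (not anyone’s actual financials): the same price produces wildly different outcomes depending on how it interacts with ridership and subsidies, so studying any single variable in isolation tells you almost nothing.

    # A toy model where the outcome depends on an interaction between
    # variables, not on any single variable. All numbers are made up.
    def monthly_revenue(price, riders, subsidy_per_ride):
        # each ride earns the price, minus the subsidy spent to win the rider
        return riders * (price - subsidy_per_ride)

    # identical price, very different outcomes
    print(monthly_revenue(price=10.0, riders=1_000_000, subsidy_per_ride=2.0))   # 8000000.0
    print(monthly_revenue(price=10.0, riders=5_000_000, subsidy_per_ride=12.0))  # -10000000.0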

I don’t have much to say about this level, because I have yet to work on a team that breaks problems down, measures its certainty of completion, and accounts for the fact that projects may not work at all. I believe this is a social problem rather than a technical one, mostly because the science is already there. Unfortunately, shareholders have bought into complex systems as a way to make money, by validating software companies as real businesses. The trick is that genuinely complex projects are more than corn-shucking businesses with “units-in, units-out” revenue models. On top of this, the scale of these companies introduces a host of regulatory, geopolitical, and social issues that may be far too complex to really measure. I’m not sure real stakeholders, let alone individuals, are ready to accept complex effects as reality. Ignorance is bliss.

Level 3 Stupidity: Uncertainty Estimation

Known unknowns. Let’s say you do understand complex effects. Can you measure your uncertainty in particular variables? A fun part about Boston is that the weather is only predictable about two days out. Imagine being an event coordinator at a golf course in the area, knowing you have absolutely no control over the weather six months before an event you’re planning. I’m not sure I would want that job. The thing is, there are entire frameworks focused on uncertainty estimation. It’s a foundational aspect of statistics. No one uses it. I’m also not a believer that you need an amazingly strong background in statistics to reach Level 3 stupidity.

For example, let’s say you were interested in financing a coffee shop project in your local neighborhood. You know ahead of time that the components affecting a successful launch are financing, licensing, and a solid marketing campaign. You can go further and ask: what inputs are required for a good marketing campaign? Is it enough up-front capital? If so, financing is even more important to the launch of the project. Now one can ask, how likely is each of these inputs? One could say financing and licensing are easy, but your marketing skills need work, so there’s some uncertainty in pulling off a marketing campaign that brings in new customers. In that regard, maybe you’ll pull in some extra help. By thinking about the inputs to a project and identifying their uncertainty, you can start to build a framework that not only identifies components at risk, but drives removing those uncertainties, as sketched below.
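
Here is a minimal Python sketch of that coffee shop reasoning. The probabilities are made up, and it assumes the components are independent and all required, which is a simplification; the point is that writing the estimates down makes the riskiest input obvious.

    # Subjective estimates for each component of the launch (made-up numbers).
    components = {
        "financing": 0.95,   # easy: capital already lined up
        "licensing": 0.90,   # easy: straightforward local permits
        "marketing": 0.50,   # uncertain: marketing skills need work
    }

    # Assuming independent, all-required components, the product is a
    # rough estimate of the chance of a successful launch.
    p_launch = 1.0
    for name, p in components.items():
        p_launch *= p

    print(f"Estimated chance of a successful launch: {p_launch:.0%}")  # ~43%

    # The weakest component dominates, which is exactly where to spend
    # effort up front (e.g., pull in outside marketing help).
    weakest = min(components, key=components.get)
    print(f"Biggest risk: {weakest}")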

Build a Graph

Graphs are amazing. They’re visual aids to understanding structure. I believe people who achieve Level 3 stupidity do the following (a small sketch follows the list):

  1. Build a graph of the effects on a target output they’re trying to realize.
  2. Determine which effects are unlikely, or are at risk of threatening the project.
  3. Ask for outside opinions on this graph, and update its structure accordingly.
  4. Seek to remove uncertainty in the effects, either by improving their chances of success or by realizing the effects outright.
  5. Return to the graph. Was the graph right? Was the estimation of uncertainty correct?
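
As a sketch of what steps 1 and 2 can look like in practice (the node names and probabilities are hypothetical, and the independence assumption is a simplification), a graph of effects can start as a few lines of plain Python long before it becomes a fancy tool:

    # Step 1: a graph of effects. Each node lists the inputs it depends on.
    depends_on = {
        "successful_launch": ["financing", "licensing", "marketing_campaign"],
        "marketing_campaign": ["upfront_capital", "marketing_skills"],
    }

    # Step 2: subjective probability estimates for the leaf inputs.
    estimates = {
        "financing": 0.95,
        "licensing": 0.90,
        "upfront_capital": 0.80,
        "marketing_skills": 0.50,
    }

    def probability(node):
        # leaves use their estimate; internal nodes combine their inputs
        # (assuming independent, all-required inputs)
        if node in estimates:
            return estimates[node]
        p = 1.0
        for dep in depends_on[node]:
            p *= probability(dep)
        return p

    print(f"P(successful_launch) ~ {probability('successful_launch'):.0%}")  # ~34%

    # Flag the inputs most worth de-risking first; steps 4 and 5 revisit these.
    for name, p in sorted(estimates.items(), key=lambda kv: kv[1]):
        if p < 0.75:
            print(f"at risk: {name} ({p:.0%})")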

Be stupid, but structure your stupidity.