@vmbrasseur
Created November 28, 2016 21:06
Notes from the review of my notes from my failure research

Reviewing Fail Notes

Adapt by Harford

  • Problems in seemingly disparate areas have more in common than we realize
  • We have an inflated sense of what leadership can achieve in the modern world
  • Evolution works not because it rewards the fittest overall, but because it rewards what is the fittest at that moment, the works-for-now-ness
  • We dislike variation. Standards, applied to everything, seem neater & fairer
  • But standards vary by location & situation
  • Problems are constantly changing, so solutions & approaches should as well
  • Accepting variation & experimentation means accepting not all of the experiments will work
  • Most larger experiments require a fair amount of time to run their course. Most leaders either don't have the time to see the experiment through (term ends) or are impatient to show results. Therefore the experiments either are not attempted or are cut short.
  • Successfully adapting requires:
    • Try new things, expecting some will fail
    • Make failure survivable
    • Make sure you know when you've failed
  • ROI is not useful for new ideas. It's impossible to calculate, either in failure or in success
  • Evidence shows we can't predict success, so experimentation is required
  • New ideas should probably be developed in parallel but we prefer to focus efforts on the "best" option
    • Plan A, B, C, D, not just Plan A

A More Rational Approach to New Product Development by Bonabeau, Bodick, Armstrong

  • R&D costs are high, failures common, so seek truth first & success second
  • Highly-focused small-scale experiments
  • Hire truth-seeking personalities
  • Failure comes from not experimenting

Brilliant Mistakes by Schoemaker

  • Can't learn from mistakes w/o overcoming the shame & fear of failure
  • In mistakes the decision process != the outcomes. You control the former but not usually the latter
  • Learning from a mistake can convert it from silly to brilliant
  • Author emphasizes the input to a mistake rather than the outcome
  • Therefore a good decision process is very important
  • Unfortunately companies reward results, not processes
  • We like to think we have control over our own fate -> Illusion of Control (cf Ellen Langer)
  • We overestimate how much impact our actions have
  • Treatment effects -> actions taken after a decision which may affect the outcome of the decision
    • Placebo effect
    • Self-fulfilling prophecies
  • A mistake isn't a mistake until someone names it so
  • "A mistake is in the eye of the beholder"
  • Challenge implicit assumptions
  • Estimates will be wrong and/or based on readily available (but incorrect/invalid) information
  • Confirm and/or disprove evidence
  • Balance "move fast & break things" with "analysis paralysis"
  • Intuition is a valid input but still requires confirmation like all other inputs
  • Groups affected by group think may make worse decisions
  • "Experience is automatic; learning is not"
  • The world is complex and ever-changing. Don't challenge your assumptions only once. Revisit them
  • "Challenging an organization's dominant logic is nearly impossible unless such inquiries have the support of senior leadership."
  • Don't be trapped by "status quo bias"
  • Risk is more affordable & survivable when you take the long view; if you have a diversified portfolio of risks & safe bets

Entrepreneurs and the Cult of Failure by Isenberg

  • Beware increasing the costs of failure by instituting penalties or punishments for it. These things can prevent people from even trying, therefore shutting down innovation & evolution
  • If you're not seeing the failures or mistakes, it should be a warning sign.

Failing by Design by Gunther McGrath

  • Failure teaches you what doesn't work & develops intuition & skill
  • Intelligent failure (cf Sim Sitkin @ Duke University)
  • Your task is uncertain, which means your assumptions about it are wrong. Question everything and make all assumptions explicit. Revise them as the task becomes more certain
  • Confirmation bias -> gravitating toward information or assumptions which confirm what we already believe
  • Question assumptions & either disprove them or convert them into knowledge
  • Organize work so failure doesn't mean waiting for the next project to come along. Have plenty of backlog
  • Fail fast leads to actions close to outcomes so it's easier to determine cause & effect
  • To fail fast you must first define success & then test often
  • Assumptions get treated as facts if they remain unchallenged or aren't recorded as assumptions
  • Experiment in areas where you already have some (even small) familiarity. This will help minimize unfamiliar variables
  • Experimentation requires having an exit strategy. Don't start something new w/o knowing how (and when) to get out of it if necessary. Make exit graceful & accepted
  • Leaders must talk about their own failures & mistakes and what they learned from them
  • A failure which isn't shared is merely experience, not group learning, and is much less valuable

How to Avoid Catastrophe by Tinsley, Dillon, Madsen

  • Most near misses are ignored but important: "unremarked small failures"
  • Latent errors & enabling condition == crisis
  • Can't control enabling conditions. Can control latent errors
  • Normalization of deviance blinds people to latent errors (cf Diane Vaughan)
  • Outcome bias -> focus on the successful outcome rather than the complex process which led to it
  • i.e., "the outcome was successful, so the whole process must've been successful"
  • "Multiple near misses usually precede & foreshadow every disaster & business crisis."
  • Life is complex -> failure is complex. Almost never one big cause. Usually a lot of smaller causes (latent errors)
  • We tend to want to fix the symptom of a deviation rather than the cause
  • Higher the pressure to perform, higher the chance of errors & ignored latent errors
  • High pressure -> rely more on heuristics & rules of thumb, which are more easily influenced by biases, rather than confirmed information
  • "If I had more time and resources, would I make the same decision?"
  • Latent errors are "cheap" data. Orgs should not fail to use them to learn & improve. Very good investment
  • It's typical to postmortem only "failures." This reinforces the blindness to latent errors
  • Align compensation & motivations to exposing deviance & latent errors

Marketing Myopia by Levitt

  • What business are you in? Are you in the railroad business or the transportation business?
  • Do you make a product or do you serve customers?
  • Research what the consumer actually wants/needs, not what they prefer between what you already have to offer
  • For years tech was in a situation where it didn't have to do marketing to discover customer needs: people just came forward w/the needs which needed filling. This led to tech thinking it doesn't need marketing
  • "The company is a customer-creating and -satisfying organism."

Predictable Surprises: The Disaster You Should Have Seen Coming by Watkins & Bazerman

  • 3 barriers to seeing & preparing for predictable surprises: psychological, organizational, political
  • Failures leading to predictable surprises: failure to recognize, failure to prioritize, failure to mobilize
  • We have a tendency to focus on the now & downplay the future. "We'd rather avoid a little pain today than a lot of pain tomorrow."
  • We are self-serving by default
  • Barriers w/in organizations prevent communication, "dilute responsibility"
  • Organizational silos segregate information, block communication, disperse responsibility

Selection Bias and the Perils of Benchmarking by Denrell

  • Selection bias: looking only at those cases which are successful or otherwise support your thinking. Does not provide a representative data set
  • 50% of all new businesses fail w/in their first 3-5 years
  • Get all info possible on failure to help protect against selection bias

Six Myths of Product Development by Thomke & Reinertsen

  • "It is very difficult to fight a problem that you can't see or measure…"
  • Plan, but plan to change the plan
  • "Less is more" is hard because doing it properly requires both defining the problem & deciding what to leave out. Rather than do that hard work, most orgs would rather just do the hard work of developing all possible options ("more is more")
  • "Treat each alleged requirement as a hypothesis."

Strategies for Learning from Failure by Edmondson

  • Intelligent failures "at the frontier" are good
  • The "error" in "trial & error" implies there was a "right" answer. That's a mistake
  • Intelligent failures are smaller, survivable, & teach you something which can help prevent unintelligent failures
  • Shift to a culture of psychological safety which rewards failure rather than placing blame
  • CEO blameworthy stats
  • Tolerance is essential for learning from failures
  • Reduce the stigma of failure & projects get killed faster, rather than lingering due to the sunk cost fallacy
  • Change the incentives to support the reporting & analysis of failures
  • If you do a pilot program, make sure you run it in representative conditions rather than ideal ones
  • Pilots & experiments don't have to be anything dramatic

The E-Myth revisited: Why most small businesses don't work & what to do about it by Gerber

  • 40% of new businesses fail in the first year
  • 80% of new businesses fail w/in first 5 years
  • 80% of the new businesses which survive the first 5 years fail in the second 5 years
  • Most businesses are operated according to what the owner wants rather than what the business needs

The Innovator's dilemma: the revolutionary book that will change the way we do business by Christensen

  • The dilemma: Competent decisions for running a company successfully can also cause it to lose its lead in the market by stifling innovation
  • Therefore the principles of good management are only situationally "good"
  • Highest performing companies have systems for killing ideas their customers don't want -> Can't disrupt this way
  • Inflexible architecture & organization can cripple innovation
  • Processes are very hard to change. The company may be organized around the processes, requiring big re-orgs. Also, many will dig in their heels against any change

The value captor's process: Getting the most out of your new business ventures by Gunther McGrath & Keil

  • Rather than kill a project altogether, consider repurposing it or spinning it off
  • There may be useful elements worth salvaging from failed projects
  • There's often a false presumption that one can know all of the critical elements in advance
  • Don't judge a project by possibly incorrect assumptions

Why Bad Projects Are So Hard to Kill by Royer

  • "Zombie Projects"
  • People who are willing to question the prevailing belief can be very valuable
  • Projects often aren't killed because of a deeply held belief that they will succeed. Group think forms around the project champion, with no one there to give an opposing viewpoint
  • "Enthusiasm caused by blind faith in a project" is a recipe for problems. Rules get bent. Issues get overlooked. Assumptions become truths
  • Include skeptics in the team and/or skepticism in the process
  • Set up viability checkpoints

Why Most Product Launches Fail by Schneider & Hall

  • Have a plan to ramp up quickly if needed
  • Biggest problem leading to product failure is lack of preparation. Teams are so focused on building the product that they don't take the time to ask whether they should, or for whom