A summary of “Evidence Guided” by Itamar Gilad — Part 2 (Ideas and Steps)

Thomas Ziegelbecker
6 min read · Apr 18, 2024

Part 2 of the summary focuses on Ideas and Steps. Itamar advocates for grounded idea generation and a systematic evaluation, validation, and delivery process. Ideas are scored using ICE (Impact, Confidence, Ease), and selection is guided by confidence levels. The Steps approach emphasizes validation and testing to mitigate risks and quickly validate assumptions. Itamar provides a framework for categorizing validation methods into Assessment, Fact-Finding, Tests, Experiments, and Release Results.

Ideas

Drawing on examples from Slack, Netflix, and Booking.com, Itamar points out that most ideas are not worth pursuing because they have no measurable positive impact or, even worse, have a negative one.

This is because humans are bad at predicting the future due to our cognitive biases. For instance, we often rely too much on our intuition and experience when picking and prioritizing ideas, or we easily fall prey to anchoring or consensus effects, focusing on only one piece of evidence or starting too narrow. All of this makes idea selection an ineffective and often political process, where whoever sells an idea best wins.

As a solution, Itamar suggests two things:

  1. Generate ideas based on scientific research methods such as interviews, surveys, etc., so that our ideas are grounded in what users need.
  2. Follow a process that guarantees only ideas backed by evidence and validation are built. Itamar proposes that every idea goes through three stages: evaluation, validation, and delivery. In evaluation, you score each idea in an idea bank based on initial evidence and judgment, for example using Impact, Confidence, and Ease (ICE) scores. Next, you validate by picking candidates from the idea bank and testing their underlying assumptions. Validation has one central purpose: to make sure weak ideas are parked and strong ideas are developed further. Finally, once you’ve gained enough evidence and confidence, you move from product discovery to product delivery.

Idea bank/repositories

Managing an often overwhelming number of ideas requires organizing them so they can be compared and developed. Itamar emphasizes the necessity of a gatekeeper for the idea bank and warns against treating it merely as an external suggestion box. The focus remains on tracking many ideas, especially those aligned with the team’s metrics, while sidelining technical proposals that lack direct impact.

Key suggestions include:

  • Try to reject as many ideas as possible upon entry.
  • Create a list of up to 40 high-value ideas as candidates.
  • Incorporate 3–5 ideas per key result in quarterly planning for further validation and potential implementation.
  • Allow ideas to be demoted based on evolving evidence or validation during experimentation or implementation phases.
  • Conduct initial brief evaluations (minutes, not hours), involving only as many people as needed. Others can review and challenge later.

Ideas scoring using ICE

ICE = Impact * Confidence * Ease.

  • Impact: Quantifies the idea’s influence on target metrics such as company Key Results or team quarterly Key Results. It integrates goals into discussions. Determining impact includes guesstimating from impact tables, referencing past ideas, and analyzing data and feedback. In the book, Itamar demonstrates how an initial impact score of 8 decreases to 4 through a back-of-the-envelope calculation due to limited customer reach.
  • Ease: Measures how easy the idea is to implement; it is the inverse of effort. Breaking the idea down into components and conducting a proof of concept (POC) or research helps in defining ease. A mapping table can facilitate this process.
  • Confidence: Complements the impact and ease scores by addressing biases and uncertainties. It gauges how certain we are about the projected impact and ease. A confidence meter considers the evidence gathered and its strength to calculate a confidence score.

Overall, ICE scoring facilitates communication, streamlines discussions, and elucidates reasoning for engineering teams. However, its true value lies in ongoing research and evaluation of ideas, ensuring development is evidence-based.
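As an illustration, the ICE formula can be sketched in a few lines of Python. The 0–10 scales and the evidence-to-confidence mapping below are assumptions for demonstration, not the exact values from the book:

```python
# Illustrative ICE scoring sketch. The scales and the
# evidence-to-confidence mapping are assumptions, not
# the book's exact confidence-meter values.

# A simple confidence meter: stronger evidence earns a higher score.
CONFIDENCE_METER = {
    "self_conviction": 0.1,
    "thematic_support": 1.0,
    "other_data": 3.0,
    "test_results": 7.0,
    "launch_data": 10.0,
}

def ice_score(impact: float, ease: float, evidence: str) -> float:
    """Impact and Ease on a 0-10 scale; evidence picks the confidence score."""
    confidence = CONFIDENCE_METER[evidence]
    return impact * confidence * ease

# Example: impact 4 (after the back-of-the-envelope correction from
# the book), ease 6, confidence backed only by supporting data.
print(ice_score(4, 6, "other_data"))  # 72.0
```

The point of the mapping is that the same idea scores higher as evidence accumulates, which is exactly what makes the score worth recalculating after every validation step.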

Choosing ideas

Just ranking ideas in isolation isn’t effective, as valuable ideas often begin with low confidence, Itamar emphasizes. Instead, he advocates for a continuous approach that categorizes ideas into three buckets:

  • Low-confidence ideas are slated for validation.
  • Medium-confidence ideas are tested with team members.
  • High-confidence ideas undergo further development and delivery.

Selection could then be based on the highest ICE score in each bucket, working on those items. In essence, Itamar advocates for ongoing research and for establishing an evaluation (ICE) and validation (tests/experiments) funnel to thoroughly assess ideas before committing resources to implementation.
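The bucketing and per-bucket selection described above can be sketched as follows; the confidence thresholds and the `Idea` fields are illustrative assumptions, not prescriptions from the book:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # 0-10
    confidence: float  # 0-10, e.g. from a confidence meter
    ease: float        # 0-10

    @property
    def ice(self) -> float:
        return self.impact * self.confidence * self.ease

def bucket(idea: Idea) -> str:
    # Illustrative thresholds; the book does not prescribe exact cutoffs.
    if idea.confidence < 3:
        return "validate"   # low confidence: validate assumptions
    if idea.confidence < 7:
        return "test"       # medium confidence: test with users
    return "deliver"        # high confidence: develop and deliver

def pick_per_bucket(ideas: list[Idea]) -> dict[str, Idea]:
    """Pick the highest-ICE idea in each bucket to work on next."""
    best: dict[str, Idea] = {}
    for idea in ideas:
        b = bucket(idea)
        if b not in best or idea.ice > best[b].ice:
            best[b] = idea
    return best

# Hypothetical idea bank entries for demonstration.
ideas = [
    Idea("dark mode", impact=5, confidence=2, ease=8),
    Idea("onboarding revamp", impact=8, confidence=5, ease=4),
    Idea("search fix", impact=6, confidence=8, ease=7),
]
for b, idea in pick_per_bucket(ideas).items():
    print(b, idea.name, idea.ice)
```

As evidence changes an idea’s confidence score, it migrates between buckets, which is what makes the funnel continuous rather than a one-off ranking.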

Steps

Investing solely based on opinions can lead to false positives (implementing bad ideas) and false negatives (rejecting good ideas).
To mitigate this, Itamar advocates for taking “steps”, a learning-based approach, echoing principles from major methodologies like Design Thinking, Lean Startup, and Product discovery.

Itamar demonstrates that taking steps or “testing” doesn’t require complex infrastructure; it can range from simple data analysis to full beta implementation. Each step in the process advances the idea and validates assumptions, yielding valuable insights and increasing confidence.

While each step should minimize investment, they vary in complexity and duration. Early steps, such as data gathering, tend to be brief, while later steps, like beta implementation, are more time-intensive. Itamar stresses the importance of capturing and sharing insights using tools like idea banks.

Illustrating with a case study of competing features, Itamar shows how iterative development increases fidelity and confidence in ideas over time. The example highlights the importance of keeping options open and avoiding premature commitments based on low confidence. Although the total time spent in the example might have been higher than simply starting with one idea, the result was likely far better: the low-fidelity versions of the solution produced learnings and the opportunity to pivot fast, avoiding shipping less impactful increments.

Itamar also underscores that the goal isn’t blind trust in ICE scores but a rigorous comparison of alternatives until confidence levels justify full implementation and answer the question, “Do we know enough to build this fully?” The number of steps depends on the idea’s size, but every idea warrants careful validation to manage costs effectively. The simple rule of thumb: the bigger the idea, the more steps you should plan to take.

Itamar references Marty Cagan’s risk types and David J. Bland’s assumption-mapping technique, offering a range of validation methods categorized into Assessment, Fact-Finding, Tests, Experiments, and Release results, or AFTER for short.

Source: https://itamargilad.com/idea-validation-much-more-than-just-a-b-experiments/
  • Assessment: Discuss ideas, gather data, and conduct simple pre-calculations, as seen in Amazon’s PR/FAQ process.
  • Fact-Finding: Dive deeper into the data through surveys, user research, or competitive research; recommended for ongoing use.
  • Tests: Put prototypes or features before users, employing methods like fake door tests, usability tests, or beta releases, emphasizing methodical planning and analysis.
  • Experiments: Conduct scientific tests, typically requiring a fully built solution and a control group, such as A/B or multivariate tests.
  • Release results: Gradually introduce new solutions or compare results by selectively releasing or withholding features to monitor their impact over time.

Acknowledging objections, such as concerns about slow progress, Itamar emphasizes the importance of investing as little as possible and only when assumptions are validated, leading to sufficient evidence and confidence.
After each step, reevaluation, including recalculating ICE scores, is needed to ensure consistent and methodical decision-making.

Conclusions

My main takeaways from Ideas and Steps are:

  1. Grounded Idea Generation and Systematic Process: Itamar advocates for generating ideas grounded in user needs and following a systematic evaluation, validation, and delivery process.
  2. ICE Scoring for Idea Selection: Ideas are scored using ICE scores, guiding the selection process based on confidence levels.
  3. Steps Approach for Validation and Testing: Itamar emphasizes a Steps approach, focusing on validation and testing to mitigate risks and quickly validate assumptions. This involves categorizing validation methods into Assessment, Fact-Finding, Tests, Experiments, and Release Results.
  4. Continuous Research and Evaluation: Itamar underscores the importance of ongoing research and evaluation of ideas to ensure evidence-based development. Itamar highlights the need to invest as little as possible and only when assumptions are validated, leading to sufficient evidence and confidence. Regular reevaluation, including recalculating ICE scores, ensures consistent and methodical decision-making.

Read on in Part 3, which answers:

  • Why did agile and the proliferation of roles lead to a widening gap between engineers and their end-users?
  • What are tasks?
  • How do you become such a company/Product Manager?

--

Thomas Ziegelbecker

Hi, I’m a Product Management enthusiast at Dynatrace, a dad, a husband, and an idealist who believes that we can make the world a better place.