Internal Reference Document

First Principles
Innovation Framework

A structured methodology for identifying, validating, and de-risking innovation initiatives. Surface the risks that exist whether you look for them or not — before they surface themselves at a much higher cost.

Overview

What This Framework Does

The First Principles Innovation Framework was originally designed to help startups convert problem hypotheses into testable assumptions that drive specific activities and their prioritisation. It has since evolved to help corporations identify how AI can improve business performance by focusing first on an urgent problem.

The framework operates on a foundational belief: risks exist as underlying truths whether or not an organisation invests in finding them. Technical debt, organisational debt, and competitive factors will all surface eventually, whether or not anyone invests in uncovering them early. The purpose of the framework is to surface those risks as early as possible, ensuring that investment decisions are informed by evidence rather than assumption.

Through the framework, organisations produce a set of artifacts used for change management, planning, measurement, and implementation of a solution designed to drive their business forward. This is as much art as science: data will almost always be incomplete, which is why the unit of investment is called a bet.

This is an internal reference document. It serves as the system of record for the framework — ensuring that any team assembled to leverage it (human or AI) operates from the same first principles. AI agents reference this document to identify risk areas that might be overlooked and to generate consistent, on-framework artifacts and guidance.

Section 01

Theoretical Foundations

The framework draws on several established methodologies and intellectual traditions. These are explicitly acknowledged as foundational influences.

Author | Framework | Contribution
Eric Ries | The Lean Startup | Hypothesis-driven development, build-measure-learn loop, innovation accounting, MVP as a learning vehicle
Marty Cagan | Inspired / Empowered | Four risks model: Value, Feasibility, Usability, Viability. Risk-first product thinking
Dan Olsen | The Lean Product Playbook | Gap analysis: quantifying the delta between importance of an outcome and satisfaction with the current state
Everett Rogers | Diffusion of Innovations | Adoption curve as a heuristic for stakeholder mindset (Innovators through Laggards)
Clayton Christensen | Jobs to Be Done | Framing user needs as jobs to be hired for, rather than feature requests
Dai Clegg (Oracle) | MoSCoW Method | Prioritisation: Must-have, Should-have, Could-have, Won't-have
John Doerr / Andy Grove | OKRs | Objectives and Key Results with the formula: verb X to Y by Z
Section 02

Core Principles

These principles underpin every stage of the framework and should guide all decisions made within it.

Principle 01

Problems Before Solutions

A problem must be defined from the perspective of the person experiencing it, without describing it as the absence of a solution. If the problem cannot be articulated without referencing a solution, it is not yet well understood. The organisation is the expert on its own problems: it can describe negative impacts with ample evidence, and the absence of that evidence is itself a signal that the status quo may be the right answer.

Principle 02

Hypothesis-Driven

Everything is a theory until proven otherwise. The framework treats all assumptions as hypotheses to be disproven as quickly as possible. The goal is not to articulate a broad vision of a solution, but to test whether the theory aligns with reality. A validated solution emerges as a byproduct of that alignment.

Principle 03

Risk-First

Innovation risk exists whether or not an organisation invests in surfacing it. The framework prioritises finding risk early — before it compounds — by identifying which assumptions, if proven false, would end the investment. This is the opposite of a waterfall approach where all planning happens upfront and risk is discovered during execution.

Principle 04

Evidence Over Opinion

Decisions are based on collected data, not consensus or authority. Qualitative research, prototype testing, and structured interviews gather objective evidence. The framework surfaces inconsistencies that are only visible when research is conducted at a higher level across the organisation.

Principle 05

Innovation Accounting

Innovation initiatives must be measured differently from business-as-usual operations. If innovation is evaluated using the same KPIs as current operations, it will always appear to be unjustifiable risk and will be avoided. A distinct set of metrics must be established to fairly evaluate progress.

Principle 06

The Bet as the Unit of Investment

A bet is the cost of a sprint — the people, time, and resources allocated to a time-boxed period of discovery. Each bet must be justified by the data collected to date. Earlier sprints carry higher risk because the team is data-poor. The framework progressively reduces that risk, building the evidence base for continued investment.

Principle 07

First Principles Thinking

The framework seeks underlying truths — constraints, debts, competitive factors, organisational dynamics — that will surface eventually whether or not anyone invests in uncovering them. The goal is to inventory the real risk associated with any initiative by looking past assumptions to what is objectively true.

Section 03

Framework Stages

The framework follows six stages that operate as an iterative loop. Completion of Stage 6 feeds back into earlier stages as new data refines the understanding of the problem, the stakeholders, and the risks.

01 Problem Definition
Define the problem objectively from the perspective of the person(s) experiencing it.

A well-formed problem statement answers six questions without proposing a solution:

Component | Question
Who | Who is the person with the problem?
What | What problem are they experiencing?
Impact | What negative impact do they experience because of the problem?
Frequency | How often is the problem occurring?
Root Cause | What is the underlying cause of the problem?
Desired Outcome | What would be achieved if the problem were solved?

A poorly framed problem statement describes someone's perspective on a gap without being objective. It often simply states the absence of a perceived solution. A properly framed statement provides enough context to test underlying assumptions using experimentation — it can be verified or disproven.

02 Motivations
Identify who within the organisation represents each buyer persona and what motivates their support or resistance.
Buyer Persona | Primary Motivation | Key Question
Economic Buyer | Return on investment (ROI) | Is there willingness to pay / invest?
Functional Buyer | Quality or speed improvement | Will this make operations measurably better?
Technical Buyer | Risk mitigation, security, privacy | Can this be done safely and sustainably?

In smaller organisations, a single person may represent multiple buyer personas. Without Economic Buyer buy-in, innovation will not be supported — a lack of alignment signals either a perceived lack of value or urgency.

03 Mindset
Assess where each buyer sits on the innovation adoption curve to understand their disposition toward change.

Using Everett Rogers' Diffusion of Innovations model, each buyer is mapped to a position on the curve:

Innovators → Early Adopters → Early Majority → Late Majority → Laggards

This is a heuristic, not a diagnostic. It provides shared language for why someone might resist change or fail to see value. Innovators and Early Adopters see value because progress is central to their contribution. The rest of the curve is increasingly comfortable with the status quo.

If key buyers sit in the later parts of the curve, the initiative may not survive the internal environment regardless of solution quality. Resistance, entrenchment, and recurring debates about whether to discontinue are symptoms of this misalignment.

04 Gap Analysis
Quantify the urgency by measuring the delta between importance and satisfaction.

Based on Dan Olsen's framework, the gap is assessed across two dimensions:

  • Importance: How important is this outcome? (Target: 10/10)
  • Satisfaction: How satisfied are stakeholders with the current state? (Target: 1/10)

The gap is the delta between these scores. A genuine gap exists when something is critically important AND the current state is deeply unsatisfying.

"Current solution" does not always mean a specific tool — it can be the absence of any initiative to improve. The status quo includes deliberate inaction.

If there is no gap, or if the three buyers are misaligned on the gap, innovation is likely not possible. Misalignment drives change management costs up, potentially so high that the organisation will either not invest or fail to achieve the intended return.

The gap analysis can be applied quantitatively (Dan Olsen's formula producing a numerical score) or qualitatively (as a conversation tool to identify where risk and resistance live). Both are valid.
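As an illustration, the quantitative version of the gap analysis can be sketched in a few lines of Python. The buyer personas mirror the Motivations stage; the `min_gap` threshold and the spread tolerance used to judge alignment are illustrative assumptions, not values defined by the framework.

```python
from dataclasses import dataclass

@dataclass
class GapScore:
    """One buyer's assessment of a single desired outcome."""
    buyer: str          # e.g. "Economic", "Functional", "Technical"
    importance: int     # 1-10; a genuine gap targets ~10
    satisfaction: int   # 1-10; a genuine gap targets ~1

    @property
    def gap(self) -> int:
        # Dan Olsen's delta: importance minus satisfaction
        return self.importance - self.satisfaction

def assess(scores: list[GapScore], min_gap: int = 5) -> dict:
    """Summarise gap size and cross-buyer alignment (thresholds are assumptions)."""
    gaps = [s.gap for s in scores]
    spread = max(gaps) - min(gaps)
    return {
        "mean_gap": sum(gaps) / len(gaps),
        "aligned": spread <= 2,                   # buyers roughly agree
        "worth_pursuing": min(gaps) >= min_gap,   # every buyer sees a real gap
    }

scores = [
    GapScore("Economic", importance=9, satisfaction=2),
    GapScore("Functional", importance=8, satisfaction=3),
    GapScore("Technical", importance=9, satisfaction=2),
]
result = assess(scores)
```

A large mean gap with a small spread indicates both urgency and alignment; a large spread is itself a finding, pointing at where change management risk lives.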

05 Risk Alignment
Classify risks using Cagan's four-risk model and identify which are intolerable if proven true.

The four risks are organised into quadrants, assessed counter-clockwise:

Organisation
Value

Is there a real gap? Is there urgency? A lack of gap means no foundation for investment.

Organisation
Viability

Does the gap close sufficiently to sustain adoption of the new solution over the long term?

Solution
Feasibility

Does the team have the resources and talent to build or source a solution?

Solution
Usability

Can everyone involved stop using the current solution and start using the new one?

Upper quadrants (Value + Viability): The organisation's ability to change and align around value.

Lower quadrants (Feasibility + Usability): The solution's ability to be built and adopted.

Value Risk is the gatekeeper. If there is no gap, look for other areas of the business with more dissatisfaction and better alignment. A lack of evidence of negative impact is a signal that business as usual is still the right approach.

06 Experimentation
Design and execute time-boxed sprints to test assumptions carrying the highest intolerable risk.

Experimentation is the operational engine of the framework. It uses sprints — time-boxed periods of discovery — to collect data that validates or disproves assumptions. See the Sprints section for full detail on how experimentation works in practice.

Every activity within a sprint records four core ingredients:

  1. The problem statement this activity supports
  2. The assumption being tested
  3. The expected result
  4. The actual result

These inputs and outputs form the findings of each experiment and become the inputs for the next sprint.

The decision to stop is a successful outcome. Ending a sprint without proceeding means the framework saved the organisation from a larger investment in something that could not have succeeded under its constraints.

Section 04

How Sprints Work

A sprint is a time-boxed period (commonly two weeks) of structured discovery and experimentation. There is one sprint structure — what varies is the activities within it, selected based on which risks need testing.

Sprint Activities

A single sprint may include any combination of activities, selected based on which risks need testing: qualitative research, structured stakeholder interviews, feasibility prototyping, and high-fidelity prototype testing.

The Orientation: Disprove, Don't Confirm

The team identifies the risk that is too high to tolerate and forms a hypothesis about it. The sprint is designed to disprove the hypothesis — the orientation is to find reasons to stop, not reasons to continue. Data is collected, aggregated, and brought back to the team with a recommendation to continue, pivot, or stop.

In practice, when these structured conversations take place across the organisation, inconsistencies surface that would only be visible through higher-level research. Assumptions are disproven or validated with objective evidence.

Sprints as Proof of Concept

Sprints also include discovery in the form of prototypes that bring higher fidelity to decision-making. A feasibility prototype tests whether a solution can be designed and developed within constraints. A clickable high-fidelity prototype collects feedback on why users would struggle with a proposed interface. Both continuously lower the risk that, if proven wrong, would end investment.

The Bet

The cost of a sprint is the bet — the percentage of people's hours (opportunity cost), allocated resources, and direct investment going into the time box. The bet is the heuristic for understanding the organisation's tolerance of risk. The earlier in the process, the higher the risk of the bet, because data is sparse.
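A minimal sketch of how a bet might be costed, assuming a two-week (80-hour) time box. The roles, rates, and allocation fractions are hypothetical; the framework defines the bet conceptually and does not mandate a costing model.

```python
def bet_size(hourly_rates: dict[str, float],
             allocation: dict[str, float],
             sprint_hours: float = 80.0,   # assumed two-week time box
             direct_costs: float = 0.0) -> float:
    """Cost of one sprint: allocated people-hours (opportunity cost) plus direct investment.

    allocation maps person -> fraction of their hours committed to the sprint (0..1).
    """
    people_cost = sum(hourly_rates[person] * allocation[person] * sprint_hours
                      for person in allocation)
    return people_cost + direct_costs

# Hypothetical roles and rates:
bet = bet_size(
    hourly_rates={"PM": 100.0, "Engineer": 120.0},
    allocation={"PM": 0.5, "Engineer": 0.25},
    direct_costs=500.0,
)
```

Making the bet explicit in currency terms is what lets it work as a heuristic: the team can ask whether the data collected so far justifies spending this amount on the next time box.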

Iteration

Sprint outputs feed back into the framework. New data may redefine the problem, shift buyer alignment, change mindset assessments, reveal new gaps, or reprioritise risks. The loop continues until the team agrees the gap between expected and actual outcomes is sufficiently closed and MVP criteria have been met.

Completion: When the MVP criteria are met, the validated learnings and requirements become inputs for formal development or hardening. The framework does not build the final product — it de-risks the path to building it.

Section 05

Artifacts

The following artifacts are produced through the application of the framework.

01

Problem Statement

A detailed, solution-agnostic description answering: Who has the problem? What is it? What is the negative impact? How frequent? What is the root cause? What is the desired outcome?

02

Stakeholder Map

Key stakeholders with their buyer persona (Economic, Functional, Technical), diffusion curve position, champion status, and motivations.

03

Current State Documentation

Existing solutions, processes, technologies, workflows, performance characteristics, and known constraints or debt.

04

Objectives & Key Results (OKRs)

Following the formula verb X to Y by Z. Example: "Reduce time to value from 2 hours to 2 minutes by Q3 2026." These serve as innovation accounting metrics distinct from BAU KPIs.
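The verb X to Y by Z formula lends itself to a simple structure, which also makes the baseline (X) explicit for the Baseline Metrics artifact. A sketch in Python; the field names are illustrative, not prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A key result following the 'verb X to Y by Z' formula."""
    verb: str        # e.g. "Reduce", "Increase"
    metric: str      # what is being moved
    baseline: str    # X: the current measured state
    target: str      # Y: the intended state
    deadline: str    # Z: when it should be achieved

    def statement(self) -> str:
        return (f"{self.verb} {self.metric} from {self.baseline} "
                f"to {self.target} by {self.deadline}")

kr = KeyResult("Reduce", "time to value", "2 hours", "2 minutes", "Q3 2026")
```

Forcing every key result through this constructor catches the common failure mode of targets with no baseline: if X cannot be filled in, the current state has not been measured yet.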

05

Jobs to Be Done / User Stories

User needs framed as jobs, prioritised using MoSCoW: Must-have, Should-have, Could-have, Won't-have. Ensures no critical job is overlooked in the first version.

06

Baseline Metrics

The X values from the OKR formula — the current measurable state before intervention, including target movements and measurement methods.

07

Solution Hypothesis

Progressive articulation: user requirements → functional requirements → prototypes (clickable for software, process documentation for operations). Emerges from validated learning.

08

Sprint Plan

Name, goal, start/end date, status, selected activities, and bet size (resource cost) for each sprint.

09

Risk Register

Living document: risk area (Value/Feasibility/Usability/Viability), severity (Blocking/High/Medium/Low), description, owner, mitigation, target date, and status.

10

Experiment Record

Atomic-level output of each activity: problem statement supported, hypothesis tested, expected result, actual result, observations, and recommendations.

11

Supporting Evidence

All sprint byproducts: process documentation, interview transcripts, research findings, prototype feedback, and other material generated during execution.

Section 06

Qualification Assessment

An internal rubric used to assess whether an engagement is viable before committing resources. This is not a client-facing artifact — it qualifies whether the conditions for successful innovation exist.

Scored on a Likert scale (1 = Strongly Disagree, 5 = Strongly Agree), weighted by factor. Minimum qualification score to proceed: 7.5

Factor | Weight | Description
Workflow Specificity & Baseline | 20% | Documented workflow with counts, timers, error rates
Champion & Ownership | 15% | Frontline manager with budget and calendar time reserved
Partner Posture | 15% | Openness to external vendor co-development and shared SLAs
90-Day Decision & Access | 10% | Approvals and data access can be lined up within weeks
Value Model & KPIs | 10% | Time saved can be redeployed; revenue/employee hypothesis exists
Data & Integration Readiness | 10% | APIs/exports known; security and privacy requirements clear
Urgency & Pain Intensity | 10% | Competitive or operational pressure driving the initiative
Shadow AI & Team Literacy | 5% | Team members already experimenting with tools informally
Adoption Readiness & Change Management | 5% | Training, norms, and feedback loops planned
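The weighted scoring can be sketched as follows. One assumption to flag: a raw weighted average of 1-5 ratings tops out at 5.0, so to be comparable with the stated 7.5 threshold this sketch scales the result to a 10-point range; the rubric's exact scaling is not specified here.

```python
# Weights from the rubric above (sum to 1.0).
WEIGHTS = {
    "Workflow Specificity & Baseline": 0.20,
    "Champion & Ownership": 0.15,
    "Partner Posture": 0.15,
    "90-Day Decision & Access": 0.10,
    "Value Model & KPIs": 0.10,
    "Data & Integration Readiness": 0.10,
    "Urgency & Pain Intensity": 0.10,
    "Shadow AI & Team Literacy": 0.05,
    "Adoption Readiness & Change Management": 0.05,
}

def qualification_score(ratings: dict[str, int]) -> float:
    """Weighted Likert score, scaled to a 10-point range (assumed scaling)."""
    for factor, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating for {factor!r} out of Likert range 1-5")
    return 2 * sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

# Rating every factor "Agree" (4) clears the 7.5 threshold under the assumed scaling:
score = qualification_score({factor: 4 for factor in WEIGHTS})
```

Because the two heaviest factors (workflow baseline and champion ownership) carry 35% of the weight between them, low ratings there are hard to compensate for elsewhere, which matches the rubric's intent.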
Section 07

Glossary

Bet
The cost of a sprint — the people, time, money, and opportunity cost invested in a time-boxed period of experimentation. Used as a heuristic for understanding the organisation's tolerance of risk.
Champion
An attribute of a key stakeholder, not a formal role. Champions are more likely to be innovators or early adopters with intrinsic motivation to contribute to the initiative's success. Any individual involved can be a champion if they demonstrate this mindset.
Current Solution
The existing state against which satisfaction is measured. May be a specific tool, system, or process — or the absence of any initiative to improve. Status quo includes deliberate inaction.
Discovery
Lean research specifically targeted at disproving theories. Brings back objective evidence as inputs to the next bet. The primary mode of work within sprints.
Gap
The delta between the importance of a desired outcome (target: 10/10) and satisfaction with the current solution or state (target: 1/10). A true gap exists only when importance is high and satisfaction is low.
Hypothesis
A theory about the problem, stakeholders, or solution that the team's job is to disprove as quickly as possible. A validated solution emerges as a byproduct of hypotheses aligning with reality.
Initiative
The overarching problem-centric engagement. Defined by the core challenge being solved and may contain multiple sprints.
Innovation Accounting
Measuring innovation differently from BAU operations. Uses distinct KPIs (captured in OKRs and baseline data) rather than holding innovation to operational metrics that make all innovation look like unjustifiable risk.
Misalignment
Disagreement between the Economic, Functional, and Technical Buyers on the gap, priority, or risks. Increases change management costs and may make the initiative non-viable. A first principle — it exists whether or not it is uncovered.
MVP (Minimum Viable Product)
The minimum solution required to validate that the gap between expected and actual outcomes is sufficiently closed. A learning vehicle, not a first product release.
Sprint
A time-boxed period (commonly two weeks) of structured discovery and experimentation. All sprints follow the same structure; what varies is the activities within them, selected based on which risks need testing.
Solution Hypothesis
The proposed solution framed as a theory, subject to the same disproval-oriented testing as all hypotheses in the framework. Distinguished from the problem hypothesis in that it describes what might solve the problem.
Section 08

Agent Guidance

Guidance for AI agents supporting engagements that use this framework. The framework itself does not change between clients. What changes is the client's context: the problem being solved, the stakeholders and their alignment, and the risks to be tested.

Role of the AI Agent

  1. Ensure framework adherence — Verify all six stages are addressed and no stage is skipped or under-developed
  2. Challenge assumptions — Look for assumptions not stated as hypotheses. If something is treated as fact without evidence, flag it
  3. Identify missing perspectives — Check that all three buyer personas are identified and assessed
  4. Surface gap inconsistencies — Look for overstated importance, understated satisfaction, or conflicting stakeholder assessments
  5. Validate problem framing — Check problem statements against the six-question framework
  6. Prioritise risk identification — Verify sprint activities target the highest-priority risk. Question if Value Risk hasn't been addressed first
  7. Maintain artifact quality — Ensure artifacts follow defined structures and don't contradict each other
  8. Respect the stop signal — If data suggests no gap, buyer misalignment, or a disproven assumption, recommend stopping or redirecting

Systematic Questions

When analysing a client's situation, work through these areas:

Problem Definition

Has the problem been defined without referencing a solution? Are all six components addressed? Is there sufficient evidence of negative impact and frequency?

Buyer Alignment

Who represents each buyer type? Is there willingness to invest (Economic)? Where does each sit on the diffusion curve? Are any in the late majority or laggard range?

Gap Validation

Is the outcome truly 10/10 important? Is satisfaction truly low? Are the three buyers aligned? Could the "current solution" be inaction?

Risk Assessment

Which of the four risks is highest? Which, if proven false, would end investment? Has Value Risk been addressed before solution-side risks?

Experimentation

Is the sprint designed to disprove, not confirm? Are activities aligned with the risk being tested? Is the bet justified by data collected to date?

Common Pitfalls to Flag

Solution-First Thinking

The team has decided on a solution and is working backwards to justify it, rather than testing whether the problem and gap warrant it.

Missing Economic Buyer

Functional and technical support exists but no one with willingness to invest. Without economic buy-in, the initiative will not survive.

Assumed Alignment

The team believes all stakeholders agree without evidence. Misalignment is a first principle — it exists whether or not it has been surfaced.

BAU Metrics for Innovation

Evaluating the initiative using the same KPIs as current operations, making innovation look like pure risk.

Skipping Value Risk

Jumping to Feasibility or Usability testing without first validating that a genuine gap exists.

Confirmation Bias in Experiments

Sprint activities designed to validate rather than disprove. The orientation should always be: "Find reasons this won't work."

Scope Creep Beyond MVP

Requirements expanding beyond what is needed to validate the hypothesis. The MVP is a learning vehicle, not a product launch.

Insufficient Problem Definition

The problem statement is vague, subjective, or describes the absence of a solution. All six components must be addressed with specificity.

Section 09

References

Ries, E. (2011). The Lean Startup. Crown Business.
Cagan, M. (2018). Inspired: How to Create Tech Products Customers Love. Wiley.
Olsen, D. (2015). The Lean Product Playbook. Wiley.
Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). Free Press.
Christensen, C. M. et al. (2016). Competing Against Luck: The Story of Innovation and Customer Choice. Harper Business.
Doerr, J. (2018). Measure What Matters. Portfolio/Penguin.