Playbook

Principles and frameworks for AI-first leadership

A collection of principles, frameworks, and practices that guide how I build teams and ship products in the AI era.

Core Principles

1. Human Accountability for AI

A 1979 IBM internal training presentation put it plainly: "A computer can never be held accountable, therefore a computer must never make a management decision." The principle is even more relevant in the age of AI. A human must always be responsible for any decision an AI helps make. AI can inform, accelerate, and augment—but accountability stays with people.

2. Leverage Over Headcount

Traditional software scales linearly—more features need more people. AI changes that equation. A small team with the right AI tooling can ship what previously required 10x the headcount.

Implication: Hire for judgment and taste, not raw coding throughput. AI handles the mechanical work; humans handle the "should we" questions.

3. Context is the Constraint

AI systems are only as good as the context they receive. The bottleneck isn't AI capability—it's our ability to provide the right context at the right time.

Implication: Invest in documentation, clear abstractions, and context management systems. Make it easy for both humans and AI to understand the codebase.

4. Ship to Learn, Not to Perfect

In fast-moving technology, the cost of being wrong is lower than the cost of being slow. Ship small, learn fast, course-correct.

Implication: Optimize for iteration speed. Smaller PRs, faster deploys, tighter feedback loops.

5. Written Communication Scales

Meetings don't scale. Real-time chat creates chaos. Written docs, RFCs, and decision logs create shared context that compounds over time.

Implication: Default to writing. If it's important, write it down. If it's urgent, it's probably an incident—not a Slack message.

6. Distributed Ownership Beats Centralized Control

The people closest to the problem make the best decisions—if they have the right context and authority.

Implication: Push decisions down. Give teams ownership of their domain. Your job as a leader is to provide context, not make every call.

Where AI Creates Leverage

AI impact falls into three distinct categories. Understanding which lever you're pulling shapes strategy, investment, and team structure.

1. Product Engineering

How we build software is fundamentally changing. Agentic coding, AI-assisted review, automated testing, and knowledge-capturing workflows like TkDD (Ticket-Driven Development) transform the economics of shipping. The same team ships more, faster, with fewer defects. This is the "how we make the sausage" lever.

2. Workflow Automation

Nearly any knowledge-work activity can now be automated more easily and more deeply than before. This is especially powerful with a trained team, shared tooling and infrastructure, and a strong ontology that helps AI understand your domain. The opportunities compound as your AI fluency increases.

3. Product Capabilities

AI enables entirely new human-AI interaction patterns in the products we build. Knowledge retrieval across vast datasets. Reasoning that feels intelligent (even if non-deterministic and imperfect). Personalization at scale. These aren't just features—they're new categories of user value that weren't possible before.

Strategic note: Most organizations under-invest in #1 and #2 while chasing #3. The teams that build AI leverage into how they work will outpace those who only add AI to what they sell.

Frameworks I Use

Dual Track Agile Meets AI-First Product Development

Traditional Dual Track Agile separates discovery (figuring out what to build) from delivery (building it). Product teams run continuous discovery—prototyping, testing, validating—while engineering delivers validated solutions in parallel tracks.

AI amplifies both tracks:

Discovery Track (AI-Enhanced):

  • AI assists in user research synthesis and pattern recognition
  • Rapid prototyping with AI-generated mockups and functional prototypes
  • Faster hypothesis generation and validation cycles
  • AI agents can simulate user interactions and edge cases

The Bridge: TkDD (Ticket-Driven Development):

TkDD is the connective tissue between discovery and execution. Tickets become knowledge containers that capture not just what to build, but why, what alternatives were considered, and how thinking evolved. This context travels with the work—from discovery conversations through agent execution and back.

Execution Track (AI-Native):

  • Match the tool to the work: Use autonomous agents like Claude Code for well-defined, larger-scoped work where the ticket provides full context. Switch to collaborative tools like Windsurf, Cursor, or Augment for smaller tasks or exploratory work where human-in-the-loop iteration is valuable.
  • Context is everything: Pulling in the right context—and excluding irrelevant noise—is the difference between quality code and hallucinated garbage. This is the hard problem.
  • Enablers for context quality: Sub-agents that fetch specific context on demand. Graph RAG for traversing relationships in your codebase. Ontologies and structured context systems that know what's relevant to the task at hand.
  • Human engineers shift from writing code to validating intent, reviewing AI output, and curating context.
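
The Graph RAG enabler above can be sketched as a plain traversal over a codebase dependency graph: start at the file being edited and collect neighbors nearest-first, so whatever context budget exists is spent on the most related code before anything peripheral. Everything here—the graph, the file paths—is an illustrative assumption, not a real retrieval system.

```python
from collections import deque

# Hypothetical dependency graph of a codebase: file -> files it references.
CODE_GRAPH = {
    "billing/invoice.py": ["billing/tax.py", "core/models.py"],
    "billing/tax.py": ["core/models.py"],
    "core/models.py": [],
    "api/routes.py": ["billing/invoice.py"],
}

def related_context(entry_file, max_hops=2):
    """Breadth-first traversal: collect files within max_hops of the file
    the agent is editing, nearest first, so the most relevant context is
    included before any budget cut-off."""
    seen = {entry_file}
    ordered = []
    queue = deque([(entry_file, 0)])
    while queue:
        node, depth = queue.popleft()
        ordered.append(node)
        if depth == max_hops:
            continue
        for neighbor in CODE_GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return ordered

print(related_context("api/routes.py"))
# nearest-first: the file itself, then direct imports, then their imports
```

The same nearest-first ordering is what makes a sub-agent useful: it can stop traversing the moment the context window is full, having already grabbed the most relevant files.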

Key insight: AI doesn't replace dual track—it accelerates both tracks while demanding better knowledge capture. The teams that win will be those who treat discovery artifacts as first-class inputs to agentic delivery systems, with TkDD as the bridge.

The AI-First Development Framework

Before AI:

  1. Write spec
  2. Write code
  3. Write tests
  4. Code review
  5. Deploy

With AI + TkDD:

  1. Discovery in Claude/AI chat — research, discuss tradeoffs, build specifications
  2. Capture to ticket — spec, rationale, alternatives considered, constraints identified
  3. Agent pulls ticket and implements — code + tests generated with full context
  4. Agent writes findings back to ticket — edge cases discovered, decisions made, thinking evolution
  5. AI reviews code for correctness, style, and test coverage
  6. Human reviews for intent: does this solve the right problem the right way?
  7. Ship with confidence — knowledge persists for future work

Key insight: The bottleneck moves from "writing code" to "validating intent." TkDD ensures the research, reasoning, and dead ends explored don't evaporate at the end of each session—they compound into a searchable knowledge base.
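
The "ticket as knowledge container" idea can be sketched as a small data structure: the spec, rationale, and alternatives go in at capture time (step 2), and the agent writes findings back after implementation (step 4). The field names and example content are illustrative assumptions, not an actual TkDD schema.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Illustrative sketch of a TkDD-style knowledge container."""
    title: str
    spec: str                        # what to build
    rationale: str                   # why this approach
    alternatives: list[str] = field(default_factory=list)  # roads not taken
    constraints: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)      # written back by the agent

    def record_finding(self, note: str) -> None:
        """Step 4 of the loop: the agent appends what it learned,
        so the knowledge persists beyond the session."""
        self.findings.append(note)

# Hypothetical example of a ticket flowing through the loop:
ticket = Ticket(
    title="Rate-limit the export endpoint",
    spec="Add a per-user sliding-window limit of 10 exports/hour.",
    rationale="Exports are the costliest query path.",
    alternatives=["Global limit (rejected: punishes light users)"],
)
ticket.record_finding("Sliding window needs a shared store; in-process counters reset on deploy.")
```

Because the ticket accumulates findings rather than discarding them, the next agent (or human) picking up related work starts from the evolved thinking, not from zero.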

The Context Pyramid

AI systems need context at multiple levels:

  1. Code-level: Function signatures, type definitions, inline comments
  2. Module-level: README files, architecture docs, API contracts
  3. System-level: System design docs, data flow diagrams, decision logs
  4. Domain-level: Business context, user needs, strategic direction

Application: Structure your codebase and docs to make each level easily accessible to both humans and AI.
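
As a sketch, the pyramid can drive a simple context assembler: walk the levels in order, label each chunk with its level, and stop when a budget is spent. The paths, level contents, and budget here are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical pyramid: each level maps to the docs that live at that level.
PYRAMID = [
    ("code",   ["billing/invoice.py"]),
    ("module", ["billing/README.md"]),
    ("system", ["docs/architecture.md"]),
    ("domain", ["docs/strategy.md"]),
]

def build_context(read, budget=8000):
    """Assemble labeled context chunks level by level, stopping once the
    character budget is exhausted. `read` maps a path to its text."""
    parts, used = [], 0
    for level, paths in PYRAMID:
        for path in paths:
            chunk = f"[{level}] {path}\n{read(path)}"
            if used + len(chunk) > budget:
                return "\n\n".join(parts)
            parts.append(chunk)
            used += len(chunk)
    return "\n\n".join(parts)

# Stand-in for a real filesystem, just for the sketch:
fake_repo = {p: f"contents of {p}" for _, paths in PYRAMID for p in paths}
context = build_context(fake_repo.get)
```

The labeling matters as much as the budget: tagging each chunk with its pyramid level lets both the model and a human reviewer see which altitude a given piece of context came from.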

The Decision Doc Template

For important technical decisions:

# [Decision Title]

## Context
What's the situation? What led to this decision point?

## Options
What are the alternatives? (Include "do nothing")

## Recommendation
What should we do?

## Reasoning
Why this option over the others?

## Risks & Mitigation
What could go wrong? How do we handle it?

## Open Questions
What don't we know yet?

Why it works: Forces clarity, creates shared understanding, becomes artifact for future reference.

The Team Scaling Framework

Teams scale in phases, each requiring different leadership:

Phase 1 (1-5 people): Everyone does everything. Leader is hands-on IC.

Phase 2 (5-15 people): Specialization emerges. Leader focuses on coordination.

Phase 3 (15-30 people): Sub-teams form. Leader builds leaders.

Phase 4 (30+ people): Organization design matters. Leader sets vision and culture.

Key: Don't import Phase 4 patterns into Phase 1 teams. Match your structure to your size.

Practices & Habits

Daily Practices

  • Morning focus block: 2-3 hours of deep work before meetings
  • AI pairing session: Use Claude for thinking through hard problems
  • Public by default: Share learnings, decisions, and updates in team channels
  • Read code daily: Stay connected to the actual work, even as a leader

Weekly Practices

  • 1:1s with all directs: Their time, their agenda, my full attention
  • Friday write-up: What shipped this week? What did we learn?
  • Code review rotation: Review at least 5 PRs from across the org
  • Learning hour: Dedicate time to exploring new tools, techniques, patterns

Monthly Practices

  • Team retrospective: What's working? What's not? What should we try?
  • Metrics review: Are we trending in the right direction?
  • Skip-level conversations: Talk to everyone, not just directs
  • Personal reflection: Journal on leadership challenges and growth areas

Mental Models

Bezos's Two-Way Doors

Decisions come in two types:

  • One-way doors: Hard to reverse (e.g., choosing a database)
  • Two-way doors: Easy to reverse (e.g., changing a UI layout)

Application: Move fast on two-way doors. Be thoughtful on one-way doors.

"What Gets Measured Gets Managed" (often attributed to Drucker)

If you want behavior change, change what you measure.

Application: Measure outcomes, not activity. Deploy frequency > lines of code. Customer value > feature count.

Conway's Law

"Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure." — Melvin Conway

Application: Want better architecture? Fix your team structure first.

Resources & Inspiration

Books That Shaped My Thinking

  • "The Effective Executive" by Peter Drucker: Timeless wisdom on leverage and priorities
  • "Inspired", "Empowered", and "Transformed" by Marty Cagan: Product operating model and empowered teams
  • "Scalability Rules: 50 Principles for Scaling Web Sites" by Martin Abbott and Michael Fisher: Practical patterns for building systems that scale
  • "The Almanack of Naval Ravikant": Leverage, judgment, and decision-making

People I Learn From

  • Anthropic team: Pushing the boundaries of what AI can do
  • Marty Cagan: Product management and empowered teams
  • Elon Musk: First principles thinking and bias for action
  • Paul Graham: Essays on startups, work, and taste

Ideas That Stick

  • AI as thought partner, not replacement: The best use of AI is to make humans more effective, not to eliminate them
  • Compounding knowledge: Written docs, decision logs, and public learning compound over time
  • Small teams, big leverage: The future favors small teams with AI assistance over large teams with manual processes