SECTION 01

Intro

Why this exists + the process at a glance

01 / 35

Tau

How We Deliver

The engagement model that takes an opportunity from first conversation to an ongoing managed service. Eight phases. Eight gates. One process.

April 2026

02 / 35
The shift

Where we're going

Now

Overpromise.
Underdeliver.

Future

Underpromise.
Overdeliver.

This process is how we get there.

03 / 35
The problem

Solution-focused → Problem-focused

Now

Rush to solutions

Client says "build me X." We start building X. Two months in, we discover the real problem was Y. Rework. Overruns. Eroded trust.

We jump to what before we've understood why.

Future

Laser-focused on problem discovery

Client says "build me X." We say: "Let's understand the problem first." Discovery defines the question. Scoping defines the answer. The solution comes from understanding, not assumption.

We invest in why so the what is right the first time.

04 / 35 The process

The full pipeline

Pre-sale

Qualification

Commercial checklist

Opportunity

SE feasibility + T-shirt size

Letter of Engagement

Process + D&S pricing + terms

Discovery

Reverse Brief

Scoping

Proposal Package — SALE

Post-sale

Onboarding

Runbook signed

Build

Engineering delivers

Review

Tech lead × 3 rounds

QA

Delivery lead × 3 rounds

UAT

Acceptance Pack signed

Retro

3 retros → MS

Managed Service

Ongoing support

Dashed = gate (must pass to proceed)

05 / 35 Ownership

Who owns each phase

Legend: Commercial · Delivery lead · Tech lead · Client

Pre-sale

Qualification

Commercial

Opportunity

Commercial + SE

Discovery

Delivery lead

Scoping

Delivery lead

Onboarding

Delivery lead

Post-sale

Build

Tech lead

Review

Tech lead

QA

Delivery lead

UAT

Delivery + Client

Retro

Delivery lead

Managed Service

Tech lead

Commercial during Discovery + Scoping: delivery leads, but commercial is in the room.

06 / 35 Pre-sale

Qualification → commercial checklist

Commercial owns this entirely. Before delivery or engineering spend a single hour, commercial confirms the opportunity is real. Items that used to sit in Discovery now live here — earlier qualification = less wasted delivery time.

Qualification checklist

  • Budget confirmed (range, not exact)
  • Decision maker identified
  • Sign-off process fully understood + captured
  • Timeline appetite confirmed
  • Problem statement captured (high-level, 1 paragraph)
  • Sponsor identified + bought in
  • Value indication (rough cost-to-client of the problem)

Moved from Discovery. Decision maker, budget, timeline, sponsor — these used to be Discovery outputs. But commercial already captures them during sales. Moving them here means Discovery starts with context, not cold.

Then: Opportunity (SE conversation)

Once qualified, a solutions engineer joins a 30-min call with the client to assess initial technical feasibility:

  • Initial systems + integration feasibility
  • tau-core fit assessment (Tier 1–3 or bespoke Tier 4?)
  • Data availability check (does the data exist? who owns it?)
  • Rough effort signal (T-shirt size: S / M / L / XL)

Soft gate: Letter of Engagement (LOE). Before Discovery starts, client signs the LOE. It outlines the engagement process, covers Discovery + Scoping pricing, and sets payment terms. If they proceed to Build, the D+S fee is absorbed into the Build price. If they walk away, they keep the Reverse Brief + Proposal as deliverables.

07 / 35 Why gates matter

What happens when you skip steps

No delivery in Discovery → Scope priced without feasibility → Blockers found mid-build → Workarounds + overruns → Project stalls

This has happened. The process exists to prevent it.

  • Discovery gate prevents pricing without feasibility
  • Scoping gate prevents building without signed scope
  • Onboarding gate prevents build dates without client access
  • Build gate prevents QA starting without engineering finishing
  • Review + QA gates prevent UAT without happy path passing twice (tech + delivery)
  • UAT gate prevents go-live without signed Acceptance Pack

Gates are enforced in Linear via required-fields.com. They are not suggestions — the system blocks you.

SECTION 02
SECTION 02

Delivery

Every step from opportunity to managed service

08 / 35 Stage 1

Opportunity → Brief

Commercial finds the opportunity and produces a brief. Not a scope, not a plan — just enough for delivery to decide whether to investigate.

The brief contains:

  • Client name + who commercial spoke to
  • The problem as commercial understands it
  • Budget signals (appetite, not a price)
  • Decision maker named (or "unclear" flagged)
  • Why commercial thinks it's worth pursuing
  • Red flags observed at qualification

This is the handover.

Commercial's job is to find the opportunity. Delivery's job is to validate and deliver it. The brief is the boundary — everything before it is commercial, everything after it is delivery.

Flag red flags early. Tire-kicker signals ("budget is flexible", "just send a rough estimate", no named decision maker, unclear urgency) are cheapest to catch at Opportunity. If commercial doesn't flag them, delivery burns Discovery time finding out.

GATE: Opportunity
Brief delivered with red flags noted. Delivery lead decides whether to accept for investigation.

09 / 35 Tooling

The AI Brief

Commercial uses a Claude skill in Cowork to generate structured briefs. Claude asks the right questions — including red-flag probes to surface tire-kicker signals at the earliest possible point.

Open Claude → Run Brief skill → Answer questions → Brief generated → Delivery reviews

Consistent structure

Every brief follows the same format. No "I forgot to mention the budget."

Low friction

Conversational questions, not a form. Claude extracts and structures.

The bridge

Commercial doesn't need Linear. Brief in Claude, delivery takes it into tracking.

claude.ai → Projects → Opportunity Brief
Available to everyone in the Tau org.

10 / 35 Stage 2

Discovery → Reverse Brief

Discovery is defining the question. We investigate the problem, the systems, the stakeholders. The output is a Reverse Brief — our structured response back to the client.

GATE: Discovery
Can't pass without: feasibility verdict, DPA status, delivery lead sign-off, red flag register, operational buy-in.

# Reverse Brief

## Problem Statement
Problem + success metrics + client's definition of done

## Stakeholders + User Personas
Sponsor, Client Owner, actual end users

## Current State (As-Is)
Process, user stories, pain points today

## Systems, Data & Scale
Inventory + volume + throughput

## Security & Compliance
DPA, SSO/RBAC, audit, GDPR, ISO

## Constraints & Dependencies
What we can't do, what we're assuming, client-side blockers

## Red Flag Register
Risks identified, mitigations

## Feasibility Verdict
Feasible / Conditional / Not Feasible

## Sign-offs
Tech / Commercial / Delivery / Sponsor / Client Owner
11 / 35 Stage 2 — in detail

Discovery — what actually happens

Five structured sessions over 1-2 weeks. Delivery leads, commercial in the room.

1. Kickoff workshop

Introduce the team, explain the delivery model, set expectations about time commitment and stakeholder involvement.

2. Problem framing

Deep-dive on the problem. Success metrics, value of solving, client's definition of done. Sponsor + owner present.

3. As-is walkthrough

With operational stakeholders. Current process, current user stories, actual day-to-day pain.

4. Systems + data session

Technical deep-dive with client's tech/data lead. Systems inventory, data access, scale, security.

5. Discovery sign-off

Walk through the Reverse Brief. Verdict captured. DPA status confirmed. Sign-offs collected. Gate closed.

Output: Reverse Brief

The structured document capturing everything. Client signs off. If feasible: Scoping begins. If not: we stop or reframe.

12 / 35 Stage 3

Scoping → Proposal Package

Scoping is defining the answer. What we'll build, in what order, for how much. Pricing happens here — not before. The output is a package of documents the client signs.

GATE: Scoping
Can't pass without: MSA signed, Proposal (SOW) signed, DPA signed (if needed), managed service terms agreed.

The Proposal Package

MSA (Master Service Agreement)

Legal framework: IP, liability, confidentiality, disputes. Signed once per client — reused for all future engagements.

Proposal (SOW)

Delivery plan + phasing strategy, to-be process + user stories, low-fi wireframes, effort estimates, acceptance criteria, happy path, risks, training + launch plans, pricing + managed service terms.

DPA (Data Processing Agreement)

GDPR data processing terms. Required if personal data is involved. Signed once per client.

13 / 35 Stage 3 — in detail

Scoping — what actually happens

Solution design → three parallel workstreams → Proposal Package reviewed + signed.

1. Solution design workshop

Co-design the to-be process. Walk through architecture. Align on wireframes and phasing.

2. Proposal review

Present full package: PRD + Design Doc + Proposal. Client feedback captured.

3. Scoping sign-off

Contracts signed: MSA + Proposal/SOW + DPA. Engagement moves to Onboarding.

After solution design, three workstreams in parallel:

Delivery lead → PRD

Product requirements: user stories, acceptance criteria, happy path, phasing, success metrics, training + launch plan.

Tech lead → Design Doc

Architecture overview, data model + schema, integration plan (APIs, auth, ingestion), deployment approach, testing strategy, security, performance, monitoring, risk register + trade-offs.

Commercial → Proposal

Pricing, payment schedule, managed service terms, commercial narrative. Effort estimates from delivery + tech feed pricing.

All three form the Proposal Package. PRD + Design Doc + Proposal (SOW) + MSA + DPA. Reviewed together at Proposal review meeting. Tech lead writes design doc, delivery lead reviews for scope alignment. Internal doc — client sees the Proposal, not the design doc.

14 / 35 Stage 4

Onboarding → Dependencies resolved

Scope is signed but the build doesn't start yet. Don't commit build dates until the client has delivered everything we need.

API credentials received and tested

Working credentials, not "they said they'd send them"

Platform access confirmed

Can actually log in and see data

Data dumps / sample data received

Historical data for building + validation

DPA fully executed

Not "in progress" — actually signed

Staging environment access

Somewhere to develop against

Technical contact named

One person for technical questions during build

Output: Onboarding Runbook. All of the above captured in a doc that goes through the same readback cycle as Reverse Brief + Proposal — pre-read → presentation → feedback → incorporation → readback + sign-off. Becomes the tech lead's reference during Build.

15 / 35 Stage 5

Build → Engineering delivers

Build is where the Engineering team owns the work. Delivery runs cadence + keeps the client informed. QA + UAT are their own phases after Build, not inside it.

Engineering project (concurrent)

A separate Linear project in the Engineering team, named <Client> - <Engagement> [Build]. Holds the actual implementation work — tickets, PRs, deploys. Tech lead owns it. The Delivery project and Engineering project run in parallel, both at status Build.

Delivery project (thin)

Weekly cadence meeting + weekly status emails to the client (4+ weeks). No engineering detail in Delivery — that's Engineering's job.

GATE: Build closes when Engineering declares its project done. Delivery then moves to Review.

What you won't find here:
No QA, no UAT, no tech-lead sign-off, no retro. Those are their own phases now (next four slides).

Further work for the same client?
New engagement project. One project = one scope = one price.

16 / 35 Stage 6

Review → Tech lead sign-off

First internal quality gate. Tech lead walks the happy path end-to-end three times. Three successful rounds = ready for Delivery Lead QA.

Owner: Tech lead

Duration: ~1 week

Happy path defined in: the Scoping Proposal (signed by the client)

Three rounds, all pass = sign-off. Failing a round = log issues, fix, rerun. The counter resets on failure, so sign-off always follows three consecutive clean passes.
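The reset-on-failure rule can be sketched as a small helper (a sketch only — the `results` log of pass/fail rounds is a hypothetical input, not part of the Linear setup):

```python
def rounds_until_signoff(results, required=3):
    """Return how many walkthrough rounds were run before sign-off.

    results: iterable of booleans, True = clean happy-path pass.
    The consecutive-pass counter resets to zero on any failure,
    so sign-off always follows `required` clean passes in a row.
    """
    streak = 0
    for i, passed in enumerate(results, start=1):
        streak = streak + 1 if passed else 0  # failure resets the counter
        if streak == required:
            return i
    return None  # sign-off not yet reached

# Pass, fail (issues logged + fixed), then three clean passes = 5 rounds total.
assert rounds_until_signoff([True, False, True, True, True]) == 5
```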

Sub-issues on GATE: Review

  • 01 Round 1 tech lead happy path walkthrough + issues logged
  • 02 Round 2 tech lead happy path walkthrough + issues logged
  • 03 Round 3 tech lead happy path walkthrough + issues logged
  • 04 Tech lead sign-off captured — build ready for Delivery Lead QA

Why tech lead first: catches engineering defects (architecture drift, missing error handling, regressions). Delivery lead shouldn't be hunting these.

17 / 35 Stage 7

QA → Delivery lead sign-off

Second internal quality gate. Delivery lead walks the same happy path three more times — from the user's perspective this time. Catches scope drift + usability before UAT.

Owner: Delivery lead

Duration: ~1 week

What changes vs Review: same happy path, different lens — does this match the Proposal? Does the flow make sense to a non-technical user? Edge cases + empty states?

Push the deadline, not the quality. If either internal QA phase surfaces structural issues, UAT gets delayed. A delayed UAT is cheaper than a bad one.

Sub-issues on GATE: QA

  • 01 Round 1 delivery lead happy path walkthrough + issues logged
  • 02 Round 2 delivery lead happy path walkthrough + issues logged
  • 03 Round 3 delivery lead happy path walkthrough + issues logged
  • 04 Delivery lead sign-off captured — ready for UAT

Why two phases not one: different lenses catch different defects. Tech lead sees code; delivery lead sees outcome. The client should only ever see the third pass.

18 / 35 Stage 8

UAT → Acceptance Pack signed

Client gets a fixed window to test against the happy path they already signed in the Proposal. Output: signed Acceptance Pack.

UAT kickoff · 3-comms pattern

Pre-comm (UAT pack sent 48h before) → kickoff meeting → same-day window confirmation. Then: test accounts provisioned, feedback template distributed, deadline in writing.

Acceptance Pack doc cycle

Happy path walk-through + UAT results + sign-off. Standard doc pattern: v1 pre-read → presentation meeting → feedback gathered + incorporated → readback meeting with slides + signed docs → v2 signed + distributed.

Out-of-scope = CR, not blocker. UAT is where "I also want..." appears. Strictly align to scope: out-of-scope items get logged as change requests and don't block sign-off.

Structured + positive feedback.

Template nudges balanced feedback: "what works well" → "issues" → "out of scope". Not just a bug list. The happy path is already defined in the Proposal, signed by the client — UAT just verifies.

Why this is a full phase now.

Used to be rolled into Build. Separating makes the client-facing UAT flow explicit in the pipeline + allows Review + QA to run silently inside the delivery team first (client only sees polished work).

19 / 35 Scope changes

Change requests

When something is identified as out of scope — during UAT, build, or managed service — it becomes a change request. Every CR is a new project.

Out-of-scope identified → CR created in Linear → New project under same Initiative → Discovery (lightweight) → Scoping + pricing → Build

Full process, separate commercial entity.
Each CR goes through Discovery → Scoping → Build with its own Proposal (SOW) and price. Discovery is lighter (context already exists), but the gates still apply.

No silent scope absorption.
If it's not in the original Proposal, it's a change request. No exceptions, no verbal agreements. This protects both Tau's margin and the client's expectations.

20 / 35 Post-engagement

Managed Service → Ongoing support

Mandatory for every first engagement. The client gets ongoing access to our core technology, monitoring, and support. Every build enters managed service.

What Tau provides

Monitoring & observability

Grafana Cloud — dashboards, alerting, uptime tracking

Pipeline orchestration

Prefect Cloud — scheduling, retries, failure alerts

Bug tracking

Sentry — error detection, alerting, triage

SLA, security & patches

Uptime guarantees, security updates, ongoing maintenance

What the client gets

Flexible hosting

Tau hosts the GitHub repo by default (preferred). Client can host on their own repo for an additional fee — full flexibility to modify application code either way.

Access to Tau's core technology

Licensed access to the proprietary platform layer — not ownership. The foundation every build runs on. Access is maintained through the managed service agreement.

Multi-project absorption
Subsequent builds for the same client are absorbed into the existing MS agreement. Fee re-evaluated at defined intervals.

SECTION 03

Operations

How we run the work: meetings, Linear, comms, ownership

21 / 35 Structure

Meetings & workshops

Ad hoc calls without agendas kill engagements. Every meeting has a purpose, an agenda, and the right people.

Stage Meeting Time Leader Who (client) Output Comms after
Discovery Kickoff workshop 1h Delivery lead Sponsor + owner Process + expectations set Thanks + next meeting + pre-work
Discovery Problem framing workshop 2h Delivery lead Sponsor + owner Problem statement, success metrics Problem + metrics summary → refined version
Discovery As-is process walkthrough 1.5h Delivery lead Operational stakeholders Current state mapped, user stories Current process summary → formal map
Discovery Systems + data session 1.5h Tech lead Tech/data lead Systems inventory, integration feasibility Action items → feasibility summary
Discovery Discovery sign-off 30m Delivery lead Sponsor + owner Reverse Brief signed Verdict + next steps → signed Reverse Brief
Scoping Solution design workshop 2h Delivery lead Sponsor + owner + operational To-be process, target user stories Solution + wireframe direction → draft Proposal
Scoping Proposal review 1h Commercial lead Sponsor + owner Feedback on plan + price Feedback + commitments → revised Proposal + contracts
Scoping Scoping sign-off 30m Commercial lead Sponsor (+ legal) MSA + Proposal + DPA signed Signed contracts → welcome-to-build
Onboarding Access setup session 1h Tech lead Tech lead + owner All credentials tested Outstanding-items checklist → weekly chasers
Build Weekly update cadence 30m Delivery lead Owner Progress, blockers, client actions Written Friday status (× 4+ weeks)
Review Tech lead walkthrough × 3 rounds internal Tech lead Happy path passes 3 rounds + sign-off None (internal phase)
QA Delivery lead walkthrough × 3 rounds internal Delivery lead Happy path passes 3 rounds + sign-off None (internal phase)
UAT UAT kickoff 1h Delivery lead Owner + operational UAT pack distributed, window confirmed Window confirmation → mid-week reminder
UAT Acceptance Pack presentation 1h Delivery lead Sponsor + owner Feedback gathered on Pack v1 Per-item response (in scope / out / CR / deferred)
UAT Acceptance Pack readback + sign-off 30m Delivery lead Sponsor + owner Pack v2 signed (slides + signed docs) Signed Pack + go-live plan + MS kickoff
Retro Internal / External / Final internal 2h + 1h + 1h Delivery lead Client on external only Feedback-action email + sheet updates “What we’re changing” to client
Managed Service Monthly health check 30m Delivery lead Owner Monitoring, small enhancements Monitoring report + action items
Managed Service Quarterly review 2h Delivery lead Sponsor + owner Performance, roadmap, MS fee Review summary + forward plan

Rules: No meeting without an agenda sent 24 hours in advance. Decisions captured in writing in Linear within 24 hours. Invite list is non-negotiable. Ad hoc calls only for urgent blockers — never for exploration.

22 / 35 System

How it runs in Linear

Two teams

Delivery — Discovery, Scoping, Onboarding, Build, Review, QA, UAT, Retro
Engineering — Build (concurrent with Delivery Build phase)

Project statuses = phase

Custom project statuses for each phase: Discovery → Scoping → Onboarding → Build → Review → QA → UAT → Retro → Completed. Canceled is the off-ramp.

One project per engagement

Milestones per phase. Gate issues with sub-issues (checklist, meeting, comms, output, doc, qa, retro — each with a type label). Documents attached to the project.

Initiative per client

Every client gets a workspace-level initiative. All their engagement projects (past + current) roll up into it. Matching icon + colour across the initiative and its projects.

Config in Google Sheets

Statuses, labels, all 8 phases' sub-issues (meetings, comms, outputs, docs, QA, retros) and gate descriptions defined in a sheet. Scripts read the sheet and create Linear content. Change the sheet, change the process.

Due dates on milestones + gates

Every milestone and gate issue gets a target date. Overdue items surface in views and trigger alerts.

Labels drive custom views

stage:*, type:*, tier:*, gate:*, verdict:*, dpa:* — every sub-issue carries a stage marker + a type marker. Custom views filter by label combinations: "All open Review walkthroughs", "All pending comms this week", etc.

Hard gates via required-fields.com

Gate issues can't close without required labels + sign-offs. The system blocks you — not a reminder, a hard stop.

23 / 35 Communication

Communication rhythm

Every client meeting has a communication loop. Pre-meeting prep, same-day summary, and sometimes a follow-up after internal work. The client is never left wondering what happens next.

Pre-meeting agenda + prep (24–72h) → Meeting → Same-day summary + action items → Internal work (if needed) → Follow-up comms (refined output)

Example: Problem Framing workshop

Pre: Prep questions + agenda 24h before
Same-day: "Here's the problem statement + success metrics as we heard them today"
Follow-up: "Refined version after our internal review — please confirm"

Example: Proposal Review

Pre: Full Proposal sent 72h before
Same-day: Summary of client feedback + next steps
Follow-up: Revised Proposal + contract package for signing

Email templates (TBD). Each standard communication needs a template — kickoff welcome, meeting summaries, feedback requests, chasing emails. Standardises tone and makes sure nothing is forgotten. Drafting these is the next piece.

24 / 35 Communication

Shared IM channel with the client

Email is formal. Meetings are scheduled. Real-time questions need a shared IM channel — without it, clients go silent for days and delivery lead becomes a routing hub. Set this up in Onboarding. Non-negotiable.

Platform: client chooses

We're flexible — they decide

Microsoft Teams (guest access), Slack Connect, Google Chat — whatever's already in their workflow. We adapt; they get zero switching cost. Whichever platform they choose is what we use.

One channel per engagement, not per client

Each engagement gets its own channel so context stays focused. Multi-engagement clients have multiple channels (or threaded sections in a parent channel).

Who's on it

Working channel (primary)

Tau: Delivery lead + Tech lead
Client: Client owner + their tech contact
DL owns the relationship; having the tech lead present lets clients get unblocked without the DL becoming a bottleneck.

Commercial — separate channel

Tau: Commercial lead
Client: Sponsor
Pricing, invoices, fee reviews, escalations. Don't mix with the working channel — different tone, different cadence.

Rules of engagement. IM is for quick questions + async updates, not for decisions or formal comms. Decisions still go in meetings + email. Response time expectation: within a working day for non-urgent, within an hour during agreed working hours for urgent. Set these expectations at Onboarding kickoff.

25 / 35 Automation

Automated client comms

Email automations triggered by Linear status changes. The client always knows when their window of action is.

Onboarding starts

"Please provide API access, data dumps, and staging credentials by [date]. Here's exactly what we need."

UAT window opens

"Your 1-week UAT window starts now. Here's the feedback template. Here's what's in scope. Deadline: [date]."

Chasing / reminders

"Reminder: UAT feedback due in 3 days." / "API credentials still outstanding — this blocks build start."

Build complete

"Your platform is live. Here's your runbook. Managed service starts now. Your SLA: [terms]."

Status updates

Weekly automated summary: what was done, what's next, any blockers, any actions needed from client.

MS check-ins

Automated monthly health report. Uptime, incidents, pipeline status. Quarterly review scheduling.

Controls the process. The client doesn't wonder what's happening. They know their windows, their deadlines, and their actions. We're not chasing — the system is.
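The trigger logic amounts to a status-to-template lookup. A sketch, assuming a webhook-style notification of the project's status change (the template names and payload shape here are hypothetical, not the real automation):

```python
# Hypothetical mapping from Linear project status to client email template.
TEMPLATES = {
    "Onboarding": "onboarding_access_request",  # "Please provide API access..."
    "UAT": "uat_window_open",                   # "Your 1-week UAT window starts now..."
    "Completed": "build_complete",              # "Your platform is live..."
}

def template_for(old_status, new_status):
    """Pick the client email template for a status transition, if any."""
    if new_status == old_status:
        return None  # no transition, no email
    return TEMPLATES.get(new_status)  # unmapped statuses send nothing

assert template_for("Build", "UAT") == "uat_window_open"
assert template_for("UAT", "UAT") is None
```

Chasers and weekly summaries would hang off timers rather than transitions, but the principle is the same: the system sends, not the delivery lead.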

26 / 35 Ownership

Who owns what

Role Opportunity Discovery Scoping Onboarding Build Review QA UAT Retro Managed Svc
Commercial lead Owns stage Attends standup Pricing + terms Client relationship Attends external Account reviews
Delivery lead Reviews brief Owns stage Owns stage Owns stage Cadence + comms Oversight Owns stage Owns stage Owns stage Owns stage
Tech lead Feasibility Effort estimates Verifies access Owns stage Owns stage Fix support Fix support Attends internal Monitoring + patches
Client sponsor Initial contact Sign-off Sign-off + budget Acceptance sign-off External retro Commercial review
Client owner Sign-off + context Sign-off Provides access + data Tests + signs External retro Day-to-day user

Delivery lead owns every gate. They confirm the engagement is ready to move forward. If they're not confident, it doesn't proceed — regardless of commercial pressure.

SECTION 04

Commercial

Pricing, filtering, profitability

27 / 35 Commercial

Pricing Discovery + Scoping

Discovery and Scoping are real work — we charge for them, and we credit the fee back to the build if the client proceeds. Same price to the client either way. Protection for Tau if they walk.

Pricing model

Fixed fee

~10-15% of expected total engagement value. Same model for Discovery and Scoping — typically bundled.

Full credit if they proceed

100% of the fee credits toward the build price once a Proposal is signed. Client pays no premium for our upfront work.

Retained if they don't

If the client walks away after Discovery/Scoping, Tau keeps the fee. Real work was done — feasibility assessment, stakeholder alignment, recommendations.

Sizing

Engagement size Discovery + Scoping fee
Quick-win / advisory (< £15k) £750–£1,500
Moderate build (£25–60k) £3,000–£6,000
Large build (£60–150k) £6,000–£15,000
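The fee-and-credit arithmetic in one place (a sketch; the 12.5% rate is an illustrative midpoint of the stated 10–15% band, not a fixed policy):

```python
def discovery_scoping_fee(engagement_value, rate=0.125):
    """Fixed D+S fee as a share of expected engagement value (10-15% band)."""
    return engagement_value * rate

def build_invoice(build_price, ds_fee, proceeded):
    """Apply the already-paid D+S fee.

    If the client proceeds, 100% of the fee credits toward the build
    price. If they walk, Tau retains the fee for the work done.
    Returns (amount still due for the build, fee retained on walk-away).
    """
    if proceeded:
        return build_price - ds_fee, 0.0  # no premium for upfront work
    return 0.0, ds_fee                    # client walks; Tau keeps the fee

fee = discovery_scoping_fee(40_000)  # £5,000 on a £40k moderate build
assert build_invoice(40_000, fee, proceeded=True) == (35_000, 0.0)
assert build_invoice(40_000, fee, proceeded=False) == (0.0, 5_000)
```

Either way the client's total for the engagement is the same; the fee only changes who carries the risk of a walk-away.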

Why charge?
Industry data: clients who pay for Discovery convert to full engagements at 60-75%. Free Discovery attracts tire-kickers. Charging signals the work is real — and protects our time if they walk.

When NOT to charge

Qualification call

30-60 min initial conversation is always free — no Discovery work is done.

Simple, fully-specified briefs

If Discovery is < half a day, absorb it. Not worth the invoicing overhead.

Existing client, small incremental work

Context is already established. Absorb into retainer. But charge for anything substantial.

28 / 35 The filter

How Discovery + Scoping filter tire-kickers

What's a tire-kicker engagement?

A prospect who extracts value from Tau — feasibility opinions, architectural advice, effort estimates, even wireframes — without real intent or authority to buy.

Common patterns

  • Price shoppers — using Tau to validate a number they got elsewhere
  • Internal justification — already decided to build in-house, want a "second opinion"
  • No real budget — exploring without authority to spend
  • Nice-to-have problem — no real pain, easily deprioritised
  • Competitor intelligence — extracting our approach

The cost. Free Discovery for 5 tire-kickers = 2 weeks of delivery lead time with zero revenue. At typical rates, that's £5–10k of work given away.

How the process filters them out

1. The Discovery fee

Real buyers pay. Tire-kickers push back immediately — and that's the signal.

2. Stakeholder requirement

Sponsor + Client Owner + operational stakeholders must be in the room. Tire-kickers can't produce all three.

3. Decision-maker confirmation

Discovery checklist requires naming the decision maker + sign-off process. Fuzzy budget authority gets flushed out.

4. The feasibility gate

Gives them a dignified stop. If verdict is "not feasible" — or "conditional" with hurdles they won't meet — they can walk without losing face. We keep the fee.

5. Scoping commits to a number

Tire-kickers rarely commit to Scoping. If they do, and then walk at Proposal — we still have a signed MSA, two paid fees, and real feasibility data.

Red flags at Qualification. "Can you just send us a rough estimate first?" — "Budget is flexible" (but never quoted) — "We're talking to a few vendors" (fine, but they should pay for Discovery regardless) — "We just need to validate feasibility" (that's exactly what Discovery is, and it's not free). Commercial should flag these in the brief.

29 / 35 Profit

Engagement profitability

The process tells us how to deliver. Profitability tracking tells us whether it was worth it. Per-engagement P&L, powered by data already flowing through Linear.

The formula

Revenue = signed Proposal/SOW value

Contracted, recognised on signing. Simple and predictable. Managed Service fees accrue separately, per month.

Cost = story-point-weighted effort

For each person each cycle: role_rate × days × (points_on_project ÷ total_points).
Sum across all contributors = engagement cost for that period.

Margin = Revenue − cumulative Cost

Tracked per engagement over time. Shows profitability at any point in the build, not just at the end.

Worked example: Midnite CRM, month 1

Proposal value: £50k over 3 months
Revenue recognised month 1: £16,667 (⅓)

Contributors this cycle
Jack (delivery lead, £800/day, 20 days): cost = £16,000
200pts total — 50pts on Midnite = 25%
Midnite share: £4,000

Mark (dev, £500/day, 20 days): cost = £10,000
100pts total — 100pts on Midnite = 100%
Midnite share: £10,000

Midnite cost month 1: £14,000
Midnite margin month 1: £16,667 − £14,000 = £2,667 (16%)
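The worked example above, as the formula would run in code (a sketch of the calculation only, not the tau-pulse implementation; contributor dicts stand in for Linear cycle data):

```python
def engagement_cost(contributors, project):
    """Cost = sum of role_rate x days x (points_on_project / total_points)."""
    return sum(
        c["rate"] * c["days"] * (c["points"].get(project, 0) / c["total_points"])
        for c in contributors
    )

contributors = [
    {"name": "Jack", "rate": 800, "days": 20,   # delivery lead, £800/day
     "total_points": 200, "points": {"midnite": 50}},   # 50/200 = 25%
    {"name": "Mark", "rate": 500, "days": 20,   # dev, £500/day
     "total_points": 100, "points": {"midnite": 100}},  # 100/100 = 100%
]

revenue_month_1 = 50_000 / 3                     # one third of the £50k proposal
cost = engagement_cost(contributors, "midnite")  # £4,000 + £10,000 = £14,000
margin = revenue_month_1 - cost                  # ~£2,667, about 16% of revenue
assert cost == 14_000
```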

What we need

  • Role-based rates (one config file, ~5 roles)
  • Proposal values on Linear projects (custom field)
  • Cycle + point data (Linear already has)

Built on tau-pulse

  • Tau-pulse already pulls cycles + issues + points
  • Extend with rates + revenue layer
  • New dashboard tab: "P&L per engagement"

Review cadence

  • Weekly: delivery lead spots overruns
  • Monthly: engagement-level review
  • Quarterly: client + company roll-up
SECTION 05

Technology

The platform and how we deploy it

30 / 35 Our platform

tau-core — the foundation every build runs on

tau-core is our proprietary platform. Client applications import it as library packages and customise on top. Every engagement should use tau-core — it's our scalability, our margin, and our product.

What tau-core ships

Data plumbing

Ingestion, Prefect pipelines, storage, transformations.

Platform plumbing

Auth, RBAC, Grafana, Sentry, deploy automation.

Reporting + dashboards

Framework, visualisations, report templates.

API + integrations

Expose data outward to other systems.

Common tooling

Media plans, optimisation, trafficking, write-back. Shared utilities.

Engineering standards

1000+ linting rules, architecture guardrails, AI coding guidance, testing framework, CI/CD patterns.

The model

Client app (per engagement) = tau-core packages (licensed via Managed Service) + bespoke code (rules, UX, data)

Commercial benefit: lets us deliver a mature platform faster than anyone building from scratch — how we compete on time and price while making margin.

Licensed, not sold. Client access to tau-core is via Managed Service. Bespoke code they build on top is theirs.

31 / 35 Deployment

Tier 1 / 2 / 3 / 4 — how we deploy

Four ways we deploy. The first three use tau-core. Default to Tier 1, 2, or 3. Avoid Tier 4.

What the client owns vs licenses. In Tier 1–3, the client owns their custom application code — the business logic, data model, UI customisation built specifically for them. They license the tau-core library that application depends on. Their app is theirs; the foundation it runs on is ours, accessed under Managed Service. In Tier 4 there is no licensed code — everything is bespoke and owned by the client, but they lose the benefits of our platform entirely.

Tier | Hosting | Domain | tau-core? | Tenancy | When it fits
Tier 1 (Multi-tenant SaaS) | Tau cloud | app.taums.ai | ✓ Yes | Multi-tenant (segregated data) | Fastest + cheapest. Good for POV, smaller MVPs, clients without strong infra preferences.
Tier 2 (Tau-hosted, single-tenant) | Tau cloud | client.taums.ai | ✓ Yes | Single-tenant | Clients who want their own space but no infra responsibility. Branded URL.
Tier 3 (Client-hosted, single-tenant) | Client cloud (AWS / GCP) | Custom client domain | ✓ Yes (as dependency) | Single-tenant (client infra) | Enterprise clients with strict infra/compliance. Their cloud, our platform.
Tier 4 (Fully bespoke) | Client cloud | Custom | ❌ No | Fully bespoke (client GitHub) | Avoid. Every build from scratch on client infra. Doesn't scale for us.

Why avoid Tier 4. Every Tier 4 engagement rebuilds plumbing we already have in tau-core. It doesn't scale: we can't spread fixed cost across clients, can't reuse improvements, can't maintain it without dedicated capacity. Should be a rare exception, not a default.

How to steer clients toward Tier 1–3. Lead with the client benefit: they get access to our collective assets and hard-won experience baked into tau-core. Years of pipeline patterns, reporting frameworks, optimisation tools, write-back functionality — things they'd otherwise pay to build from scratch. By licensing tau-core, they inherit our scale advantage. Tier 4 strips that away — same price, less platform.

32 / 35 Engagement sizing

POV, MVP, Enterprise — right-sizing the commitment

Commercial wants clients to sign quickly. Delivery wants builds to succeed. Engagement sizing is where those two intersect. Each size is self-contained — not a downpayment on the next.

Size | Fee | Duration | What it IS | What it ISN'T | Ends with
POV (Proof of Value) | £10–25k | 4–6 wks | Narrow, time-boxed demo with real data. Numbers they can act on. | Not production-ready, not iterable, not supported. | Go/no-go: proceed, or stop.
MVP (Min Viable Product) | £25–75k | 2–4 mo | Production-ready core flows. Live, supported, real users. | Not the full platform; doesn't cover everything they want. | Live platform + MS. More = CR or new engagement.
Enterprise (Full Platform) | £75k+ | 4–6 mo+ | Full bespoke build. Hardened, scaled, multiple phases. | Not infinite scope; acceptance criteria still bound it. | Full platform + MS at scale.

Three patterns to avoid.

  • "POV is good enough, let's run it" — client treats POV as production. Fails.
  • Under-scoped MVP — client signs MVP, really needs Enterprise. Delivery overruns.
  • Overservicing small engagements — Enterprise-level work on POV fee.

How we protect against all three.

  • Explicit "what it isn't" in every Proposal — client signs the ceiling.
  • Full Scoping rigour regardless of engagement size.
  • Delivery lead recommends the size in Discovery — not commercial.
  • CRs even on small engagements — no favours.

Upgrade path: POV → MVP → Enterprise is a valid journey — each step is a new engagement with its own Discovery, Scoping, and price. Not an extension.

SECTION 06

Learning

Retros, process evolution, client health

33 / 35 Stage 9 + Learning

Retro → process evolves, client stays

Retro is a full project phase — the project stays in status Retro for the 4–6 weeks between UAT sign-off and the final internal retro. Three retros: internal — external — final internal. Internal first so the team can be honest; external after to hear the client; final to integrate feedback into the process.

Internal retro
(Tau only)
External retro
(Tau + client)
Final internal
(integrate feedback)
Process improvements
applied to next engagement

1. Internal retro

When: 2 weeks after UAT sign-off
Who: Delivery, tech, commercial
Duration: 2h workshop

Covers: what worked, what didn't, process gaps, technical learnings, commercial friction, team tensions. Safe space — honest over diplomatic.

2. External retro

When: 3–4 weeks after UAT sign-off
Who: Tau + Sponsor + Client Owner
Duration: 1h meeting

Covers: what the client valued, what they'd want done differently, relationship health, whether the Managed Service is working well, future needs. NPS-style question at the end.

3. Final internal

When: 1 week after external
Who: Delivery + process owner
Duration: 1h

Covers: reconcile internal vs external view, agree action items, decide what changes to checklists/templates/process. Outputs logged + applied.

Why internal first. The team can't be fully honest if the client is in the room. Issues like "commercial over-promised X" or "the scope was wrong" need a safe conversation before they become diplomatic.

Where feedback lives. Action items go into Linear as sub-issues on a "Process Improvements" project. Checklist/template changes get pushed to the Google Sheet so future engagements pick them up automatically.

34 / 35 Learning

How the process evolves

Retros produce action items. Items that generalise get pushed into the system itself — next engagement inherits the learning automatically.

Retro Actions categorised Pushed to the system Next engagement inherits

Sheet changes

New checklist items, gates, labels in the Google Sheet. Next engagement auto-includes.

Template changes

Agendas, emails, output docs updated in the repo. Drive sync picks them up.

Process review

Quarterly review of the process itself. Are gates still right? Discovery right length?

What we don't change: a one-off issue from a single engagement doesn't earn a process change. Patterns across 2–3 engagements do. Avoids pendulum swings.

35 / 35 Learning

Client health & NPS

Retros happen once per engagement. Between retros, we track client health continuously so problems surface before they become renewal risks.

What we track

NPS at key moments

Quarterly review question: "How likely are you to recommend Tau to someone in your network? (0–10)". Tracked per client over time.

Platform health indicators

Uptime vs SLA, incident count, time-to-resolve, support ticket volume. Monthly report to client, internal alerting on degradation.

Engagement signals

Are they still using the platform? Active users, feature adoption, CRs requested. Silence is a warning sign.

Commercial signals

MS fee renewed? Invoice payment cadence? Any friction on commercial items?

What triggers action

NPS 9–10, metrics healthy, engaged:
Happy client. Opportunity for case study, referral ask, further engagement discussion.

NPS 7–8, or declining engagement:
Yellow flag. Delivery lead has a directed conversation at the next monthly. Identify the friction.

NPS <7, or two consecutive red indicators:
Escalate. Unscheduled client health meeting. Root cause analysis. Corrective action plan. Commercial lead involved.
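The thresholds above can be sketched as a simple triage function (field names and the exact boundary handling are our assumptions):

```python
# Sketch of the client-health triage tiers from this slide.
def client_health_action(nps, red_indicators=0, engagement_declining=False):
    """Map NPS + signals to one of the three action tiers."""
    if nps < 7 or red_indicators >= 2:
        return "escalate"       # unscheduled health meeting, RCA, commercial lead
    if nps <= 8 or engagement_declining:
        return "yellow-flag"    # directed conversation at the next monthly
    return "healthy"            # case study / referral / expansion conversation

print(client_health_action(9))                              # healthy
print(client_health_action(8))                              # yellow-flag
print(client_health_action(9, engagement_declining=True))   # yellow-flag
print(client_health_action(6))                              # escalate
```

Note the ordering: escalation conditions are checked first, so a declining but detractor-level client always escalates rather than sitting at yellow.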

Where it lives. Client health dashboard in tau-pulse, updated continuously. Flagged clients surfaced in weekly delivery standup. Quarterly trend review at company level.

SECTION 07

Appendix

Full checklists for reference

APPX / 1 Appendix

Discovery checklist

Problem & stakeholders

  • Problem statement agreed with client sponsor
  • Success metrics defined
  • Value of solving understood
  • Budget and timeline appetite confirmed
  • Decision maker + sign-off process identified
  • Sponsor + operational stakeholders identified and bought in
  • User personas identified (actual end users)
  • Client's definition of done captured

Current state

  • Current state (as-is) process mapped
  • Current user stories captured

Systems & feasibility

  • Systems and data sources inventoried
  • Data access confirmed (owner, method, format)
  • Scale requirements captured (users, volume, throughput)
  • Security + auth requirements confirmed (SSO, RBAC, audit)
  • DPA / data processing agreement status checked
  • Compliance requirements checked (ISO, SOC2, GDPR)
  • Integration requirements identified + feasibility assessed
  • Constraints + assumptions documented explicitly
  • Dependencies on other client teams identified

Meetings

  • Kickoff workshop (1h)
  • Problem framing workshop (2h)
  • As-is process walkthrough (1.5h, operational)
  • Systems + data technical session (1.5h)
  • Reverse Brief presentation (30m)
  • Reverse Brief readback + sign-off (30m)

Each output-producing meeting has 3 comms around it: pre-meeting prep (24–48h before), same-day post-meeting summary, and a follow-up comm when the output is packaged for client confirmation. The Reverse Brief also goes through a two-meeting doc cycle (presentation → feedback → readback + sign-off).

APPX / 2 Appendix

Scoping checklist

Defining the answer

  • Future state (to-be) process mapped
  • Target user stories defined
  • Low-fi wireframes produced + client walkthrough complete
  • Phased delivery plan drafted
  • Phasing strategy + rationale documented
  • Effort estimates produced (delivery lead, not commercial)
  • Acceptance criteria defined for first deliverable

Delivery planning

  • Risks + mitigations register
  • Training / enablement plan drafted
  • Launch plan defined (rollout strategy)
  • Success measurement plan (post-launch metrics)
  • Communication plan agreed (client meeting cadence)

Commercial + sign-offs

  • Commercial proposal drafted (pricing + MS terms)
  • Client review + feedback incorporated
  • Delivery lead sign-off captured
  • Commercial sign-off captured

Meetings

  • Solution design workshop (2h)
  • Proposal presentation (1h)
  • Proposal readback + sign-off (30m)

New in Scoping

  • Happy path defined in the Proposal (numbered user flow) — feeds Review + QA rounds

Documents signed

  • MSA (Master Service Agreement) — new client only
  • Proposal (SOW)
  • DPA (if personal data)
APPX / 3 Appendix

Onboarding checklist

All items are client-side dependencies. Build dates aren't committed until every item is closed.

Access & credentials

  • API credentials / access tokens received and tested
  • Platform access confirmed (ad accounts, CRM, analytics)
  • Test accounts / sandbox credentials received
  • Staging / sandbox environment access confirmed

Data, people & legal

  • Data dumps / sample data received
  • Historical data for validation received (if needed)
  • DPA fully executed (not just 'in progress')
  • Technical contact named and available
  • Client owner available and confirmed for UAT

Meetings

  • Access setup session (1h, tech lead + owner)
  • Onboarding Runbook presentation (30m)
  • Onboarding Runbook readback + sign-off (30m)

Doc for Onboarding

  • Onboarding Runbook — access matrix, contacts, rules of engagement; signed + filed
APPX / 4 Appendix

Build checklist

Build on the Delivery side is thin. The core engineering work lives in a concurrent project in the Engineering team.

Delivery (this phase)

  • Weekly update cadence agreed + scheduled (30m, owner)
  • Week 1 status email sent
  • Week 2 status email sent
  • Week 3 status email sent
  • Week 4 status email sent

Longer builds add further weekly status rows to the stages sheet — the template ships with 4 to set the minimum cadence.

Engineering (concurrent project)

Engineering's own project in the Engineering team, named <Client> - <Engagement> [Build]. Holds the actual implementation work — tickets, PRs, deploys. Tech lead owns.

Both Delivery and Engineering projects carry Build as their project status while this phase is active. Engineering closes to Completed when eng work is done; Delivery moves to Review.

GATE: Build closes when Engineering declares its project done. That's the only condition.

APPX / 5 Appendix

Review checklist — Tech Lead QA

First internal quality gate. Tech lead walks the happy path defined in the Proposal end-to-end three times. Three clean rounds = sign-off.

Sub-issues on GATE: Review

  • Round 1 tech lead happy path walkthrough + issues logged
  • Round 2 tech lead happy path walkthrough + issues logged
  • Round 3 tech lead happy path walkthrough + issues logged
  • Tech lead sign-off captured — build ready for Delivery Lead QA

Owner + duration

Tech lead owns. ~1 week end-to-end.

Failing a round: log issues, fix, rerun. Counter resets so the final pass is always clean. Push the deadline, not the quality.

Why tech lead first: catches engineering defects (architecture drift, missing error handling, regressions). Delivery lead shouldn't be hunting these.
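A minimal sketch of the gate logic, assuming "counter resets" means sign-off requires three consecutive clean passes:

```python
# Sketch of the "three clean rounds" gate: any failed round resets the
# counter, so sign-off always follows three consecutive clean passes.
def review_gate(round_results):
    """round_results: iterable of booleans, True = clean happy-path walkthrough."""
    clean_streak = 0
    for passed in round_results:
        clean_streak = clean_streak + 1 if passed else 0   # failure resets
        if clean_streak == 3:
            return True      # sign-off captured
    return False             # keep running rounds; push the deadline, not the quality

print(review_gate([True, True, False, True, True, True]))  # True
print(review_gate([True, False, True, True]))              # False
```

The same gate applies unchanged to Delivery Lead QA: only the walker and the lens differ.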

APPX / 6 Appendix

QA checklist — Delivery Lead QA

Second internal quality gate. Delivery lead walks the same happy path three more times — from the user's perspective. Catches scope drift + usability before UAT.

Sub-issues on GATE: QA

  • Round 1 delivery lead happy path walkthrough + issues logged
  • Round 2 delivery lead happy path walkthrough + issues logged
  • Round 3 delivery lead happy path walkthrough + issues logged
  • Delivery lead sign-off captured — ready for UAT

Owner + duration

Delivery lead owns. ~1 week end-to-end.

What changes vs Review: same happy path, different lens — does this match the Proposal? Does the flow make sense to a non-technical user? Are edge cases + empty states handled?

The client should only ever see the third pass. Tech lead sees code; delivery lead sees outcome; UAT sees a polished product.

APPX / 7 Appendix

UAT checklist — client sign-off

Client tests against the happy path they already signed in the Proposal. Output: signed Acceptance Pack.

Sub-issues on GATE: UAT

  • UAT kickoff (1h) + feedback template distributed
  • UAT window + deadline confirmed in writing
  • Mid-week UAT reminder sent
  • Acceptance Pack v1 sent for pre-read (48h pre readback)
  • Acceptance Pack presentation (1h, sponsor + owner)
  • Feedback gathered + incorporated → v2
  • Acceptance Pack readback + sign-off (30m, slides + signed docs)
  • Signed Pack + go-live plan + MS kickoff email distributed

Doc for UAT

  • Acceptance Pack — happy path walk-through + UAT results + sign-off

Out-of-scope = CR, not blocker. Out-of-scope items raised during UAT get logged as change requests and don't block sign-off.

Happy path defined in the Scoping Proposal — signed by the client — is the target for UAT. No surprises.

APPX / 8 Appendix

Retro checklist — process evolves, client stays

Project stays in status Retro for 4–6 weeks after UAT sign-off. Three retros, feedback-action email, template changes pushed back into the sheet.

Sub-issues on GATE: Retro

  • Internal retro run + notes captured (2h, 2 weeks after UAT sign-off)
  • External retro run + notes captured (1h, 3–4 weeks after UAT sign-off)
  • Final internal retro + action items committed (1h, 1 week after external)
  • Feedback-action email sent to client (what we're changing as a result)
  • Checklist / template changes pushed back to the Google Sheet
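The timing offsets in this checklist can be sketched as a schedule helper (choosing week 3 for the external retro within its 3–4 week window is an assumption):

```python
# Sketch of the retro timeline relative to UAT sign-off, using the
# offsets from this checklist.
from datetime import date, timedelta

def retro_schedule(uat_signoff):
    internal = uat_signoff + timedelta(weeks=2)   # internal retro
    external = internal + timedelta(weeks=1)      # 3 weeks after UAT (window is 3-4)
    final = external + timedelta(weeks=1)         # final internal, 1 week after external
    return internal, external, final

i, e, f = retro_schedule(date(2026, 4, 6))
print(i, e, f)   # 2026-04-20 2026-04-27 2026-05-04
```

Booking all three at UAT sign-off keeps the project inside the 4–6 week Retro status window.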

Three retros, three purposes

Internal first — team can be honest. External next — hear the client. Final internal — integrate feedback into the process.

Gate closes → Completed. Once all 5 sub-issues close, the Retro gate closes and the project moves to Linear's built-in Completed status. Managed Service continues under its own project.

APPX / 9 Appendix

Communication pattern per meeting

Meeting | Pre (24–72h before) | Same-day summary | Follow-up (after internal work)
Kickoff workshop | Welcome + agenda + delivery model overview (24h) | Thanks + next meeting confirmation + pre-work for Problem Framing | -
Problem framing | Agenda + prep questions (24h) | "Problem + success metrics as we heard them" | Refined problem statement for confirmation
As-is walkthrough | Agenda + attendee request (48h) | Current process summary + user stories captured | Formal as-is process map for operational review
Systems + data session | Topics + credentials prep (48h) | Action items list (what Tau needs, by when) | Feasibility summary + red flags
Discovery sign-off | Full Reverse Brief (48h) | Verdict + decisions + Scoping kickoff date | Signed Reverse Brief distributed
Solution design | Reverse Brief + agenda (24h) | Solution + to-be + wireframe direction | Full wireframes + draft Proposal
Proposal review | Full Proposal (72h) | Feedback summary + commitments to change | Revised Proposal + contract package
Scoping sign-off | Final Proposal + MSA + DPA (48h) | Signed contracts + Onboarding kickoff scheduled | Welcome-to-build + Onboarding expectations
Access setup | Required-access list + prep | Outstanding-items checklist | Weekly chasers until all access confirmed
Weekly update | - | Written summary (progress, next week, blockers, client actions) | -
UAT kickoff | UAT pack (48h) — feedback template + scope alignment | Window confirmation + deadline reminder | Mid-week reminder during UAT
UAT review + sign-off | Feedback reviewed internally | Per-item response (in scope / out of scope / CR / deferred) | Go-live plan + Managed Service kickoff email

Tracked in Linear. Every meeting has a corresponding "[Comms]" sub-issue in Linear covering the full loop. Delivery lead ticks off as each communication goes out.