Why this exists + the process at a glance
The engagement model that takes an opportunity from first conversation to an ongoing managed service. Eight phases. Eight gates. One process.
April 2026
Now
Future
This process is how we get there.
Now
Client says "build me X." We start building X. Two months in, we discover the real problem was Y. Rework. Overruns. Eroded trust.
We jump to what before we've understood why.
Future
Client says "build me X." We say: "Let's understand the problem first." Discovery defines the question. Scoping defines the answer. The solution comes from understanding, not assumption.
We invest in why so the what is right first time.
Pre-sale
Commercial checklist
SE feasibility + T-shirt size
Process + D&S pricing + terms
Reverse Brief
Proposal Package — SALE
Post-sale
Runbook signed
Engineering delivers
Tech lead × 3 rounds
Delivery lead × 3 rounds
Acceptance Pack signed
3 retros → MS
Ongoing support
Dashed = gate (must pass to proceed)
Pre-sale
Commercial
Commercial + SE
Delivery lead
Delivery lead
Delivery lead
Post-sale
Tech lead
Tech lead
Delivery lead
Delivery + Client
Delivery lead
Tech lead
Commercial during Discovery + Scoping — delivery leads, but commercial is in the room:
Commercial owns this entirely. Before delivery or engineering spend a single hour, commercial confirms the opportunity is real. Items that used to sit in Discovery now live here — earlier qualification = less wasted delivery time.
Moved from Discovery. Decision maker, budget, timeline, sponsor — these used to be Discovery outputs. But commercial already captures them during sales. Moving them here means Discovery starts with context, not cold.
Once qualified, a solutions engineer joins a 30-min call with the client to assess initial technical feasibility:
Soft gate: Letter of Engagement (LOE). Before Discovery starts, client signs the LOE. It outlines the engagement process, covers Discovery + Scoping pricing, and sets payment terms. If they proceed to Build, the D+S fee is absorbed into the Build price. If they walk away, they keep the Reverse Brief + Proposal as deliverables.
This has happened. The process exists to prevent it.
Gates are enforced in Linear via required-fields.com. They are not suggestions — the system blocks you.
Every step from opportunity to managed service
Commercial finds the opportunity and produces a brief. Not a scope, not a plan — just enough for delivery to decide whether to investigate.
This is the handover.
Commercial's job is to find the opportunity. Delivery's job is to validate and deliver it. The brief is the boundary — everything before it is commercial, everything after it is delivery.
Flag red flags early. Tire-kicker signals ("budget is flexible", "just send a rough estimate", no named decision maker, unclear urgency) are cheapest to catch at Opportunity. If commercial doesn't flag them, delivery burns Discovery time finding out.
GATE: Opportunity
Brief delivered with red flags noted. Delivery lead decides whether to accept for investigation.
Commercial uses a Claude skill in Cowork to generate structured briefs. Claude asks the right questions — including red-flag probes to surface tire-kicker signals at the earliest possible point.
Every brief follows the same format. No "I forgot to mention the budget."
Conversational questions, not a form. Claude extracts and structures.
Commercial doesn't need Linear. Brief in Claude, delivery takes it into tracking.
claude.ai → Projects → Opportunity Brief
Available to everyone in the Tau org.
Discovery is defining the question. We investigate the problem, the systems, the stakeholders. The output is a Reverse Brief — our structured response back to the client.
GATE: Discovery
Can't pass without: feasibility verdict, DPA status, delivery lead sign-off, red flag register, operational buy-in.
Five structured sessions over 1-2 weeks. Delivery leads, commercial in the room.
Introduce the team, explain the delivery model, set expectations about time commitment and stakeholder involvement.
Deep-dive on the problem. Success metrics, value of solving, client's definition of done. Sponsor + owner present.
With operational stakeholders. Current process, current user stories, actual day-to-day pain.
Technical deep-dive with client's tech/data lead. Systems inventory, data access, scale, security.
Walk through the Reverse Brief. Verdict captured. DPA status confirmed. Sign-offs collected. Gate closed.
The structured document capturing everything. Client signs off. If feasible: Scoping begins. If not: we stop or reframe.
Scoping is defining the answer. What we'll build, in what order, for how much. Pricing happens here — not before. The output is a package of documents the client signs.
GATE: Scoping
Can't pass without: MSA signed, Proposal (SOW) signed, DPA signed (if needed), managed service terms agreed.
Legal framework: IP, liability, confidentiality, disputes. Signed once per client — reused for all future engagements.
Delivery plan + phasing strategy, to-be process + user stories, low-fi wireframes, effort estimates, acceptance criteria, happy path, risks, training + launch plans, pricing + managed service terms.
GDPR data processing terms. Required if personal data is involved. Signed once per client.
Solution design → three parallel workstreams → Proposal Package reviewed + signed.
Co-design the to-be process. Walk through architecture. Align on wireframes and phasing.
Present full package: PRD + Design Doc + Proposal. Client feedback captured.
Contracts signed: MSA + Proposal/SOW + DPA. Engagement moves to Onboarding.
After solution design, three workstreams in parallel:
Product requirements: user stories, acceptance criteria, happy path, phasing, success metrics, training + launch plan.
Architecture overview, data model + schema, integration plan (APIs, auth, ingestion), deployment approach, testing strategy, security, performance, monitoring, risk register + trade-offs.
Pricing, payment schedule, managed service terms, commercial narrative. Effort estimates from delivery + tech feed pricing.
All three form the Proposal Package. PRD + Design Doc + Proposal (SOW) + MSA + DPA. Reviewed together at Proposal review meeting. Tech lead writes design doc, delivery lead reviews for scope alignment. Internal doc — client sees the Proposal, not the design doc.
Scope is signed but the build doesn't start yet. Don't commit build dates until the client has delivered everything we need.
Working credentials, not "they said they'd send them"
Can actually log in and see data
Historical data for building + validation
Not "in progress" — actually signed
Somewhere to develop against
One person for technical questions during build
Output: Onboarding Runbook. All of the above captured in a doc that goes through the same readback cycle as Reverse Brief + Proposal — pre-read → presentation → feedback → incorporation → readback + sign-off. Becomes the tech lead's reference during Build.
Build is where the Engineering team owns the work. Delivery runs cadence + keeps the client informed. QA + UAT are their own phases after Build, not inside it.
A separate Linear project in the Engineering team, named <Client> - <Engagement> [Build]. Holds the actual implementation work — tickets, PRs, deploys. Tech lead owns. The Delivery project and Engineering project run in parallel, both at status Build.
Weekly cadence meeting + weekly status emails to the client (4+ over the build). No engineering detail in Delivery — that's Engineering's job.
GATE: Build closes when Engineering declares its project done. Delivery then moves to Review.
What you won't find here:
No QA, no UAT, no tech-lead sign-off, no retro. Those are their own phases now (next four slides).
Further work for the same client?
New engagement project. One project = one scope = one price.
First internal quality gate. Tech lead walks the happy path end-to-end three times. Three successful rounds = ready for Delivery Lead QA.
Duration: ~1 week
Happy path defined in: the Scoping Proposal (signed by the client)
Three rounds, all pass = sign-off. Failing a round = log issues, fix, rerun. Counter resets on failure so the final round is always a clean pass.
Why tech lead first: catches engineering defects (architecture drift, missing error handling, regressions). Delivery lead shouldn't be hunting these.
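The reset-on-failure rule is effectively a small state machine: sign-off requires three consecutive clean walkthroughs. A minimal Python sketch, where `walk_happy_path` is a hypothetical stand-in for the manual walkthrough (it returns a list of issues; empty means a clean round):

```python
def run_qa_rounds(walk_happy_path, required_clean=3, max_attempts=10):
    """Reset-on-failure QA loop: sign-off requires `required_clean`
    consecutive clean rounds of the happy path."""
    consecutive = 0
    attempts = 0
    while consecutive < required_clean:
        attempts += 1
        if attempts > max_attempts:
            raise RuntimeError("QA not converging: push the deadline, not the quality")
        issues = walk_happy_path()
        if issues:
            consecutive = 0  # counter resets on failure, so the final rounds are always clean
            # log issues, fix, rerun (the fixing happens outside this loop)
        else:
            consecutive += 1
    return attempts

# e.g. pass, fail, then three clean rounds: five attempts in total
results = iter([[], ["broken export"], [], [], []])
assert run_qa_rounds(lambda: next(results)) == 5
```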
Second internal quality gate. Delivery lead walks the same happy path three more times — from the user's perspective this time. Catches scope drift + usability before UAT.
Duration: ~1 week
What changes vs Review: same happy path, different lens — does this match the Proposal? Does the flow make sense to a non-technical user? Edge cases + empty states?
Push the deadline, not the quality. If either internal QA phase surfaces structural issues, UAT gets delayed. A delayed UAT is cheaper than a bad one.
Why two phases not one: different lenses catch different defects. Tech lead sees code; delivery lead sees outcome. The client should only ever see the third pass.
Client gets a fixed window to test against the happy path they already signed in the Proposal. Output: signed Acceptance Pack.
Pre-comm (UAT pack sent 48h pre) → kickoff meeting → same-day window-confirmation. Then: test accounts provisioned, feedback template distributed, deadline in writing.
Happy path walk-through + UAT results + sign-off. Standard doc pattern: v1 pre-read → presentation meeting → feedback gathered + incorporated → readback meeting with slides + signed docs → v2 signed + distributed.
Out-of-scope = CR, not blocker. UAT is where "I also want..." appears. Strictly align to scope: out-of-scope items get logged as change requests and don't block sign-off.
Structured + positive feedback.
Template nudges balanced feedback: "what works well" → "issues" → "out of scope". Not just a bug list. The happy path is already defined in the Proposal, signed by the client — UAT just verifies.
Why this is a full phase now.
Used to be rolled into Build. Separating makes the client-facing UAT flow explicit in the pipeline + allows Review + QA to run silently inside the delivery team first (client only sees polished work).
When something is identified as out of scope — during UAT, build, or managed service — it becomes a change request. Every CR is a new project.
Full process, separate commercial entity.
Each CR goes through Discovery → Scoping → Build with its own Proposal (SOW) and price. Discovery is lighter (context already exists), but the gates still apply.
No silent scope absorption.
If it's not in the original Proposal, it's a change request. No exceptions, no verbal agreements. This protects both Tau's margin and the client's expectations.
Mandatory for every first engagement. The client gets ongoing access to our core technology, monitoring, and support. Every build enters managed service.
Grafana Cloud — dashboards, alerting, uptime tracking
Prefect Cloud — scheduling, retries, failure alerts
Sentry — error detection, alerting, triage
Uptime guarantees, security updates, ongoing maintenance
Tau hosts the GitHub repo by default (preferred). Client can host on their own repo for an additional fee — full flexibility to modify application code either way.
Licensed access to the proprietary platform layer — not ownership. The foundation every build runs on. Access is maintained through the managed service agreement.
Multi-project absorption
Subsequent builds for the same client are absorbed into the existing MS agreement. Fee re-evaluated at defined intervals.
How we run the work: meetings, Linear, comms, ownership
Ad hoc calls without agendas kill engagements. Every meeting has a purpose, an agenda, and the right people.
| Stage | Meeting | Time | Leader | Who (client) | Output | Comms after |
|---|---|---|---|---|---|---|
| Discovery | Kickoff workshop | 1h | Delivery lead | Sponsor + owner | Process + expectations set | Thanks + next meeting + pre-work |
| Discovery | Problem framing workshop | 2h | Delivery lead | Sponsor + owner | Problem statement, success metrics | Problem + metrics summary → refined version |
| Discovery | As-is process walkthrough | 1.5h | Delivery lead | Operational stakeholders | Current state mapped, user stories | Current process summary → formal map |
| Discovery | Systems + data session | 1.5h | Tech lead | Tech/data lead | Systems inventory, integration feasibility | Action items → feasibility summary |
| Discovery | Discovery sign-off | 30m | Delivery lead | Sponsor + owner | Reverse Brief signed | Verdict + next steps → signed Reverse Brief |
| Scoping | Solution design workshop | 2h | Delivery lead | Sponsor + owner + operational | To-be process, target user stories | Solution + wireframe direction → draft Proposal |
| Scoping | Proposal review | 1h | Commercial lead | Sponsor + owner | Feedback on plan + price | Feedback + commitments → revised Proposal + contracts |
| Scoping | Scoping sign-off | 30m | Commercial lead | Sponsor (+ legal) | MSA + Proposal + DPA signed | Signed contracts → welcome-to-build |
| Onboarding | Access setup session | 1h | Tech lead | Tech lead + owner | All credentials tested | Outstanding-items checklist → weekly chasers |
| Build | Weekly update cadence | 30m | Delivery lead | Owner | Progress, blockers, client actions | Written Friday status (× 4+ weeks) |
| Review | Tech lead walkthrough × 3 rounds | internal | Tech lead | — | Happy path passes 3 rounds + sign-off | None (internal phase) |
| QA | Delivery lead walkthrough × 3 rounds | internal | Delivery lead | — | Happy path passes 3 rounds + sign-off | None (internal phase) |
| UAT | UAT kickoff | 1h | Delivery lead | Owner + operational | UAT pack distributed, window confirmed | Window confirmation → mid-week reminder |
| UAT | Acceptance Pack presentation | 1h | Delivery lead | Sponsor + owner | Feedback gathered on Pack v1 | Per-item response (in scope / out / CR / deferred) |
| UAT | Acceptance Pack readback + sign-off | 30m | Delivery lead | Sponsor + owner | Pack v2 signed (slides + signed docs) | Signed Pack + go-live plan + MS kickoff |
| Retro | Internal / External / Final internal | 2h + 1h + 1h | Delivery lead | Client on external only | Feedback-action email + sheet updates | “What we’re changing” to client |
| Managed Service | Monthly health check | 30m | Delivery lead | Owner | Monitoring, small enhancements | Monitoring report + action items |
| Managed Service | Quarterly review | 2h | Delivery lead | Sponsor + owner | Performance, roadmap, MS fee | Review summary + forward plan |
Rules: No meeting without an agenda sent 24 hours in advance. Decisions captured in writing in Linear within 24 hours. Invite list is non-negotiable. Ad hoc calls only for urgent blockers — never for exploration.
Delivery — Discovery, Scoping, Onboarding, Build, Review, QA, UAT, Retro
Engineering — Build (concurrent with Delivery Build phase)
Custom project statuses for each phase: Discovery → Scoping → Onboarding → Build → Review → QA → UAT → Retro → Completed. Canceled is the off-ramp.
Milestones per phase. Gate issues with sub-issues (checklist, meeting, comms, output, doc, qa, retro — each with a type label). Documents attached to the project.
Every client gets a workspace-level initiative. All their engagement projects (past + current) roll up into it. Matching icon + colour across the initiative and its projects.
Statuses, labels, all 8 phases' sub-issues (meetings, comms, outputs, docs, QA, retros) and gate descriptions defined in a sheet. Scripts read the sheet and create Linear content. Change the sheet, change the process.
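Under the hood this can be a short script: read the exported sheet, translate each row into a Linear `issueCreate` input, and post each input to Linear's GraphQL endpoint. A minimal sketch of the translation step; the column names (`phase`, `type`, `title`, `description`) are assumptions about the sheet schema, and label-to-ID resolution is elided:

```python
import csv
import io

def rows_to_issue_inputs(csv_text: str, team_id: str) -> list[dict]:
    """Translate exported sheet rows (CSV) into Linear issueCreate inputs.
    Assumed columns: phase, type, title, description."""
    inputs = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        inputs.append({
            "teamId": team_id,
            "title": f"[{row['phase']}] {row['title']}",
            "description": row["description"],
            # stage:* / type:* labels would be resolved to label IDs here
        })
    return inputs

# Each input is then posted as one GraphQL mutation against
# https://api.linear.app/graphql (requires an API key header):
ISSUE_CREATE = """
mutation($input: IssueCreateInput!) {
  issueCreate(input: $input) { issue { id identifier } }
}"""
```

Changing a row in the sheet and rerunning the script is what "change the sheet, change the process" means in practice.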
Every milestone and gate issue gets a target date. Overdue items surface in views and trigger alerts.
stage:*, type:*, tier:*, gate:*, verdict:*, dpa:* — every sub-issue carries a stage marker + a type marker. Custom views filter by label combinations: "All open Review walkthroughs", "All pending comms this week", etc.
Gate issues can't close without required labels + sign-offs. The system blocks you — not a reminder, a hard stop.
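What the hard stop amounts to, sketched: a gate issue may only close when every required label group is present. The group names here are illustrative, not the real configuration:

```python
# Required label prefixes per gate (illustrative, not the live config).
REQUIRED_GROUPS = {
    "gate:discovery": ["verdict", "dpa", "stage"],
}

def can_close(gate: str, labels: set[str]) -> bool:
    """True only if every required label group (prefix before ':')
    appears among the issue's labels."""
    prefixes = {label.split(":", 1)[0] for label in labels}
    return all(group in prefixes for group in REQUIRED_GROUPS.get(gate, []))

assert can_close("gate:discovery", {"verdict:feasible", "dpa:signed", "stage:discovery"})
assert not can_close("gate:discovery", {"verdict:feasible"})  # blocked, not reminded
```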
Every client meeting has a communication loop. Pre-meeting prep, same-day summary, and sometimes a follow-up after internal work. The client is never left wondering what happens next.
Pre: Prep questions + agenda 24h before
Same-day: "Here's the problem statement + success metrics as we heard them today"
Follow-up: "Refined version after our internal review — please confirm"
Pre: Full Proposal sent 72h before
Same-day: Summary of client feedback + next steps
Follow-up: Revised Proposal + contract package for signing
Email templates (TBD). Each standard communication needs a template — kickoff welcome, meeting summaries, feedback requests, chasing emails. Standardises tone and makes sure nothing is forgotten. Drafting these is the next piece.
Email is formal. Meetings are scheduled. Real-time questions need a shared IM channel — without it, clients go silent for days and the delivery lead becomes a routing hub. Set this up in Onboarding. Non-negotiable.
Microsoft Teams (guest access), Slack Connect, Google Chat — whatever's already in their workflow. We adapt; they get zero switching cost. Whichever platform they choose is what we use.
Each engagement gets its own channel so context stays focused. Multi-engagement clients have multiple channels (or threaded sections in a parent channel).
Tau: Delivery lead + Tech lead
Client: Client owner + their tech contact
The DL owns the relationship; having the tech lead in the channel lets clients get unblocked without the DL becoming a bottleneck.
Tau: Commercial lead
Client: Sponsor
Pricing, invoices, fee reviews, escalations. Don't mix with the working channel — different tone, different cadence.
Rules of engagement. IM is for quick questions + async updates, not for decisions or formal comms. Decisions still go in meetings + email. Response time expectation: within a working day for non-urgent, within an hour during agreed working hours for urgent. Set these expectations at Onboarding kickoff.
Email automations triggered by Linear status changes. The client always knows when their window of action is.
"Please provide API access, data dumps, and staging credentials by [date]. Here's exactly what we need."
"Your 1-week UAT window starts now. Here's the feedback template. Here's what's in scope. Deadline: [date]."
"Reminder: UAT feedback due in 3 days." / "API credentials still outstanding — this blocks build start."
"Your platform is live. Here's your runbook. Managed service starts now. Your SLA: [terms]."
Weekly automated summary: what was done, what's next, any blockers, any actions needed from client.
Automated monthly health report. Uptime, incidents, pipeline status. Quarterly review scheduling.
Controls the process. The client doesn't wonder what's happening. They know their windows, their deadlines, and their actions. We're not chasing — the system is.
| Role | Opportunity | Discovery | Scoping | Onboarding | Build | Review | QA | UAT | Retro | Managed Svc |
|---|---|---|---|---|---|---|---|---|---|---|
| Commercial lead | Owns stage | Attends standup | Pricing + terms | Client relationship | — | — | — | — | Attends external | Account reviews |
| Delivery lead | Reviews brief | Owns stage | Owns stage | Owns stage | Cadence + comms | Oversight | Owns stage | Owns stage | Owns stage | Owns stage |
| Tech lead | — | Feasibility | Effort estimates | Verifies access | Owns stage | Owns stage | Fix support | Fix support | Attends internal | Monitoring + patches |
| Client sponsor | Initial contact | Sign-off | Sign-off + budget | — | — | — | — | Acceptance sign-off | External retro | Commercial review |
| Client owner | — | Sign-off + context | Sign-off | Provides access + data | — | — | — | Tests + signs | External retro | Day-to-day user |
Delivery lead owns every gate. They confirm the engagement is ready to move forward. If they're not confident, it doesn't proceed — regardless of commercial pressure.
Pricing, filtering, profitability
Discovery and Scoping are real work — we charge for them, and we credit the fee back to the build if the client proceeds. Same price to the client either way. Protection for Tau if they walk.
~10-15% of expected total engagement value. Same model for Discovery and Scoping — typically bundled.
100% of the fee credits toward the build price once a Proposal is signed. Client pays no premium for our upfront work.
If the client walks away after Discovery/Scoping, Tau keeps the fee. Real work was done — feasibility assessment, stakeholder alignment, recommendations.
| Engagement size | Discovery + Scoping fee |
|---|---|
| Quick-win / advisory (< £15k) | £750–£1,500 |
| Moderate build (£25–60k) | £3,000–£6,000 |
| Large build (£60–150k) | £6,000–£15,000 |
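The credit mechanics in a few lines. The £40k engagement and the 10% rate are illustrative numbers, not a quote:

```python
def ds_fee(engagement_value: float, rate: float = 0.10) -> float:
    """Discovery + Scoping fee as a share of expected engagement value
    (the guide above is ~10-15%)."""
    return engagement_value * rate

def build_invoice(engagement_value: float, fee_paid: float, proceeds: bool) -> float:
    """If the client proceeds, the full fee credits toward the build price;
    if they walk, Tau keeps the fee and there is no build invoice."""
    return engagement_value - fee_paid if proceeds else 0.0

# £40k moderate build at 10%: £4k D+S fee up front.
fee = ds_fee(40_000)
# Proceeds: £36k build invoice, so total client spend is still £40k.
assert fee + build_invoice(40_000, fee, proceeds=True) == 40_000
# Walks: no build invoice; Tau keeps the £4k for work already done.
assert build_invoice(40_000, fee, proceeds=False) == 0.0
```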
Why charge?
Industry data: clients who pay for Discovery convert to full engagements at 60-75%. Free Discovery attracts tire-kickers. Charging signals the work is real — and protects our time if they walk.
30-60 min initial conversation is always free — no Discovery work is done.
If Discovery is < half a day, absorb it. Not worth the invoicing overhead.
Context is already established. Absorb into retainer. But charge for anything substantial.
A prospect who extracts value from Tau — feasibility opinions, architectural advice, effort estimates, even wireframes — without real intent or authority to buy.
The cost. Free Discovery for 5 tire-kickers = 2 weeks of delivery lead time with zero revenue. At typical rates, that's £5–10k of work given away.
Real buyers pay. Tire-kickers push back immediately — and that's the signal.
Sponsor + Client Owner + operational stakeholders must be in the room. Tire-kickers can't produce all three.
Discovery checklist requires naming the decision maker + sign-off process. Fuzzy budget authority gets flushed out.
Gives them a dignified stop. If verdict is "not feasible" — or "conditional" with hurdles they won't meet — they can walk without losing face. We keep the fee.
Tire-kickers rarely commit to Scoping. If they do, and then walk at Proposal — we still have a signed MSA, two paid fees, and real feasibility data.
Red flags at Qualification. "Can you just send us a rough estimate first?" — "Budget is flexible" (but never quoted) — "We're talking to a few vendors" (fine, but they should pay for Discovery regardless) — "We just need to validate feasibility" (that's exactly what Discovery is, and it's not free). Commercial should flag these in the brief.
The process tells us how to deliver. Profitability tracking tells us whether it was worth it. Per-engagement P&L, powered by data already flowing through Linear.
Contracted, recognised on signing. Simple and predictable. Managed Service fees accrue separately, per month.
For each person each cycle: role_rate × days × (points_on_project ÷ total_points).
Sum across all contributors = engagement cost for that period.
Tracked per engagement over time. Shows profitability at any point in the build, not just at the end.
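The allocation formula as a runnable sketch; the rates, days, and point counts are illustrative:

```python
def person_cost(role_rate: float, days: float,
                points_on_project: int, total_points: int) -> float:
    """role_rate x days x (points_on_project / total_points):
    a person's cycle cost, apportioned by the share of their Linear
    points that landed on this engagement."""
    if total_points == 0:
        return 0.0
    return role_rate * days * (points_on_project / total_points)

def engagement_cost(contributors: list[dict]) -> float:
    """Sum the apportioned cost across everyone who touched the
    engagement this cycle. Field names are illustrative."""
    return sum(
        person_cost(c["role_rate"], c["days"],
                    c["points_on_project"], c["total_points"])
        for c in contributors
    )

# e.g. a tech lead at £600/day, 10 working days, 30 of their 40 points here:
assert person_cost(600, 10, 30, 40) == 4500.0
```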
The platform and how we deploy it
tau-core is our proprietary platform. Client applications import it as library packages and customise on top. Every engagement should use tau-core — it's our scalability, our margin, and our product.
Ingestion, Prefect pipelines, storage, transformations.
Auth, RBAC, Grafana, Sentry, deploy automation.
Framework, visualisations, report templates.
Expose data outward to other systems.
Media plans, optimisation, trafficking, write-back. Shared utilities.
1000+ linting rules, architecture guardrails, AI coding guidance, testing framework, CI/CD patterns.
Commercial benefit: lets us deliver a mature platform faster than anyone building from scratch — how we compete on time and price while making margin.
Licensed, not sold. Client access to tau-core is via Managed Service. Bespoke code they build on top is theirs.
Four ways we deploy. The first three use tau-core. Default to Tier 1, 2, or 3. Avoid Tier 4.
What the client owns vs licenses. In Tier 1–3, the client owns their custom application code — the business logic, data model, UI customisation built specifically for them. They license the tau-core library that application depends on. Their app is theirs; the foundation it runs on is ours, accessed under Managed Service. In Tier 4 there is no licensed code — everything is bespoke and owned by the client, but they lose the benefits of our platform entirely.
| Tier | Hosting | Domain | tau-core? | Tenancy | When it fits |
|---|---|---|---|---|---|
| Tier 1: Multi-tenant SaaS | Tau cloud | app.taums.ai | ✓ Yes | Multi-tenant (segregated data) | Fastest + cheapest. Good for POV, smaller MVPs, clients without strong infra preferences. |
| Tier 2: Tau-hosted, single-tenant | Tau cloud | client.taums.ai | ✓ Yes | Single-tenant | Clients who want their own space but no infra responsibility. Branded URL. |
| Tier 3: Client-hosted, single-tenant | Client cloud (AWS / GCP) | Custom client domain | ✓ Yes (as dependency) | Single-tenant (client infra) | Enterprise clients with strict infra/compliance. Their cloud, our platform. |
| Tier 4: Fully bespoke | Client cloud | Custom | ❌ No | Fully bespoke (client GitHub) | Avoid. Every build from scratch on client infra. Doesn't scale for us. |
Why avoid Tier 4. Every Tier 4 engagement rebuilds plumbing we already have in tau-core. It doesn't scale: we can't spread fixed cost across clients, can't reuse improvements, can't maintain it without dedicated capacity. Should be a rare exception, not a default.
How to steer clients toward Tier 1–3. Lead with the client benefit: they get access to our collective assets and hard-won experience baked into tau-core. Years of pipeline patterns, reporting frameworks, optimisation tools, write-back functionality — things they'd otherwise pay to build from scratch. By licensing tau-core, they inherit our scale advantage. Tier 4 strips that away — same price, less platform.
Commercial wants clients to sign quickly. Delivery wants builds to succeed. Tier choice is where those two intersect. Each tier is self-contained — not a downpayment on the next.
| Tier | Fee | Duration | What it IS | What it ISN'T | Ends with |
|---|---|---|---|---|---|
| POV (Proof of Value) | £10–25k | 4–6 wks | Narrow, time-boxed demo with real data. Numbers they can act on. | Production-ready. Not iterable. Not supported. | Go/no-go: proceed, or stop. |
| MVP (Min Viable Product) | £25–75k | 2–4 mo | Production-ready core flows. Live, supported, real users. | Full platform. Doesn't cover everything they want. | Live platform + MS. More = CR or new engagement. |
| Enterprise (Full Platform) | £75k+ | 4–6 mo+ | Full bespoke build. Hardened, scaled, multiple phases. | Infinite scope. Acceptance criteria still bound it. | Full platform + MS at scale. |
Three patterns to avoid.
How we protect against all three.
Upgrade path: POV → MVP → Enterprise is a valid journey — each step is a new engagement with its own Discovery, Scoping, and price. Not an extension.
Retros, process evolution, client health
Retro is a full project phase — the project stays in status Retro for the 4–6 weeks between UAT sign-off and the final internal retro. Three retros: internal — external — final internal. Internal first so the team can be honest; external after to hear the client; final to integrate feedback into the process.
When: 2 weeks after UAT sign-off
Who: Delivery, tech, commercial
Duration: 2h workshop
Covers: what worked, what didn't, process gaps, technical learnings, commercial friction, team tensions. Safe space — honest over diplomatic.
When: 3–4 weeks after UAT sign-off
Who: Tau + Sponsor + Client Owner
Duration: 1h meeting
Covers: what the client valued, what they'd want done differently, relationship health, whether the Managed Service is working well, future needs. NPS-style question at the end.
When: 1 week after external
Who: Delivery + process owner
Duration: 1h
Covers: reconcile internal vs external view, agree action items, decide what changes to checklists/templates/process. Outputs logged + applied.
Why internal first. The team can't be fully honest if the client is in the room. Issues like "commercial over-promised X" or "the scope was wrong" need a safe conversation before they become diplomatic.
Where feedback lives. Action items go into Linear as sub-issues on a "Process Improvements" project. Checklist/template changes get pushed to the Google Sheet so future engagements pick them up automatically.
Retros produce action items. Items that generalise get pushed into the system itself — next engagement inherits the learning automatically.
New checklist items, gates, labels in the Google Sheet. Next engagement auto-includes.
Agendas, emails, output docs updated in the repo. Drive sync picks them up.
Quarterly review of the process itself. Are gates still right? Discovery right length?
What we don't change: a one-off issue from a single engagement doesn't earn a process change. Patterns across 2–3 engagements do. Avoids pendulum swings.
Retros happen once per engagement. Between retros, we track client health continuously so problems surface before they become renewal risks.
Quarterly review question: "How likely are you to recommend Tau to someone in your network? (0–10)". Tracked per client over time.
Uptime vs SLA, incident count, time-to-resolve, support ticket volume. Monthly report to client, internal alerting on degradation.
Are they still using the platform? Active users, feature adoption, CRs requested. Silence is a warning sign.
MS fee renewed? Invoice payment cadence? Any friction on commercial items?
NPS 9-10, metrics healthy, engaged:
Happy client. Opportunity for case study, referral ask, further engagement discussion.
NPS 7-8, or declining engagement:
Yellow flag. Delivery lead has a directed conversation at the next monthly. Identify the friction.
NPS <7, or two consecutive red indicators:
Escalate. Unscheduled client health meeting. Root cause analysis. Corrective action plan. Commercial lead involved.
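The three bands reduce to a simple triage rule. A hypothetical sketch, with the "declining engagement" and "two consecutive" qualifiers simplified to a count of current red indicators:

```python
def health_flag(nps: int, red_indicators: int) -> str:
    """Map the latest NPS score and count of red health indicators
    to the three response bands above (simplified rule, illustrative)."""
    if nps < 7 or red_indicators >= 2:
        return "escalate"   # unscheduled health meeting, root cause, commercial lead in
    if nps <= 8:
        return "yellow"     # directed conversation at the next monthly
    return "green"          # case study / referral / further-engagement territory
```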
Where it lives. Client health dashboard in tau-pulse, updated continuously. Flagged clients surfaced in weekly delivery standup. Quarterly trend review at company level.
Full checklists for reference
Each output-producing meeting has 3 comms around it: pre-meeting prep (24-48h before), same-day post-meeting summary, and a follow-up comm when the output is packaged for client confirmation. The Reverse Brief also goes through a two-meeting doc cycle (presentation → feedback → readback + sign-off).
All items are client-side dependencies. Build dates aren't committed until every item is closed.
Build on the Delivery side is thin. The core engineering work lives in a concurrent project in the Engineering team.
Longer builds add further weekly status rows to the stages sheet — the template ships with 4 to set the minimum cadence.
Engineering's own project in the Engineering team, named <Client> - <Engagement> [Build]. Holds the actual implementation work — tickets, PRs, deploys. Tech lead owns.
Both Delivery and Engineering projects carry Build as their project status while this phase is active. Engineering closes to Completed when eng work is done; Delivery moves to Review.
GATE: Build closes when Engineering declares its project done. That's the only condition.
First internal quality gate. Tech lead walks the happy path defined in the Proposal end-to-end three times. Three clean rounds = sign-off.
Tech lead owns. ~1 week end-to-end.
Failing a round: log issues, fix, rerun. Counter resets so the final pass is always clean. Push the deadline, not the quality.
Why tech lead first: catches engineering defects (architecture drift, missing error handling, regressions). Delivery lead shouldn't be hunting these.
Second internal quality gate. Delivery lead walks the same happy path three more times — from the user's perspective. Catches scope drift + usability before UAT.
Delivery lead owns. ~1 week end-to-end.
What changes vs Review: same happy path, different lens — does this match the Proposal? Does the flow make sense to a non-technical user? Are edge cases + empty states handled?
The client should only ever see the third pass. Tech lead sees code; delivery lead sees outcome; UAT sees a polished product.
Client tests against the happy path they already signed in the Proposal. Output: signed Acceptance Pack.
Out-of-scope = CR, not blocker. Out-of-scope items raised during UAT get logged as change requests and don't block sign-off.
Happy path defined in the Scoping Proposal — signed by the client — is the target for UAT. No surprises.
Project stays in status Retro for 4–6 weeks after UAT sign-off. Three retros, feedback-action email, template changes pushed back into the sheet.
Internal first — team can be honest. External next — hear the client. Final internal — integrate feedback into the process.
Gate closes → Completed. Once all 5 sub-issues close, the Retro gate closes and the project moves to Linear's built-in Completed status. Managed Service continues under its own project.
| Meeting | Pre (24-72h before) | Same-day summary | Follow-up (after internal work) |
|---|---|---|---|
| Kickoff workshop | Welcome + agenda + delivery model overview (24h) | Thanks + next meeting confirmation + pre-work for Problem Framing | — |
| Problem framing | Agenda + prep questions (24h) | "Problem + success metrics as we heard them" | Refined problem statement for confirmation |
| As-is walkthrough | Agenda + attendee request (48h) | Current process summary + user stories captured | Formal as-is process map for operational review |
| Systems + data session | Topics + credentials prep (48h) | Action items list (what Tau needs, by when) | Feasibility summary + red flags |
| Discovery sign-off | Full Reverse Brief (48h) | Verdict + decisions + Scoping kickoff date | Signed Reverse Brief distributed |
| Solution design | Reverse Brief + agenda (24h) | Solution + to-be + wireframe direction | Full wireframes + draft Proposal |
| Proposal review | Full Proposal (72h) | Feedback summary + commitments to change | Revised Proposal + contract package |
| Scoping sign-off | Final Proposal + MSA + DPA (48h) | Signed contracts + Onboarding kickoff scheduled | Welcome-to-build + Onboarding expectations |
| Access setup | Required-access list + prep | Outstanding-items checklist | Weekly chasers until all access confirmed |
| Weekly update | — | Written summary (progress, next week, blockers, client actions) | — |
| UAT kickoff | UAT pack (48h) — feedback template + scope alignment | Window confirmation + deadline reminder | Mid-week reminder during UAT |
| UAT review + sign-off | Feedback reviewed internally | Per-item response (in scope / out of scope / CR / deferred) | Go-live plan + Managed Service kickoff email |
Tracked in Linear. Every meeting has a corresponding "[Comms]" sub-issue in Linear covering the full loop. Delivery lead ticks off as each communication goes out.