In short: AI agents make ticket creation fast and structured, but structure without strategy just produces Zombie Barbies: tickets that look polished and go nowhere. This post covers five ticket templates, when to use each, and an LLM prompt that generates them from conversation context via MCP.

Some teams treat user stories as conversation starters. That can work when a product manager is present and owns the follow-through. In practice, I more often see the other extreme: tickets so vague that no work can start without chasing someone down for context.

AI agents address the vagueness problem by making ticket creation fast and structured. But they introduce a new one: the Zombie Barbie ticket.

A Zombie Barbie is a ticket that looks flawless on the outside—it has proper Markdown, a nicely formatted user story, and clear acceptance criteria—but it lacks a brain. It doesn’t trace back to a larger goal, it doesn’t move a metric, and it exists simply because an AI made it cheap to generate. It is volume without direction.

The problem isn’t AI. Teams have been generating unjustified work for decades. AI just makes it cheaper and faster to do so. It amplifies whatever process (or lack of process) already exists.

To stop generating Zombie Barbies, you need strategic alignment: every ticket should trace back to a goal, whether directly or through an epic. To stop generating vague tickets that stall on arrival, you need structural rigor.

Not everything is a user story

Teams default to “feature” for everything. But a dependency upgrade isn’t a feature. A research question isn’t a feature. An API integration that no user will ever see isn’t a feature (more on this opinionated view later). When you force everything into one format, you lose the signal that tells you what kind of work this is — and that matters for estimation, prioritization, and review.

The five types I use:

  1. Feature: user-facing functionality with acceptance criteria a user could verify.
  2. Non-user feature: system integrations, API endpoints, background jobs. The user never sees it directly.
  3. Bug: something is broken. Reproduction steps, impact, current vs. expected behavior.
  4. Spike: a question that needs answering before real work can start. Timeboxed.
  5. Chore: necessary maintenance. No direct user value. Dependency updates, refactors, CI/CD changes.

This list isn’t bureaucracy for its own sake: the type determines what “done” looks like.

A spike’s deliverable is a decision. A chore’s deliverable is a checklist. A feature’s deliverable is acceptance criteria verified by a user. A non-user feature is an API or DB change that will probably be validated by a technical person with API or DB access. A bug’s deliverable is (hopefully) a final fix.

Where Zombie Barbies come from

If stories aren’t connected to a larger goal, you end up with a backlog of Zombie Barbies.

Each story should trace back to a feature set, epic, or project — and that project should connect to a roadmap and a north star metric. The chain:

  • company mission
  • north star
  • leading indicators
  • feature hypotheses
  • roadmap
  • individual stories

If you can’t draw that line from a story back to a metric, the story might be well-written but it’s not justified.

I wrote about building that chain (from mission to measurable leading indicators) in Shopping list to North Star.

The problem with AI-accelerated delivery is not speed. It is volume without direction.

A team can now generate 50 polished stories in a sprint instead of 5, but if those stories do not move a metric, solve a customer problem, or advance the same outcome, the team just creates more backlog, more handoffs, and more rework.

The templates in this post help define the work and the acceptance criteria. But you need strategic framing to answer the harder question: why this, instead of something else?

A story’s user value statement:

User Value Statement

As a user
I want feature
So that outcome.

should map to a hypothesis:

Hypothesis

We believe building feature
For user addresses their need/painpoint
And will achieve outcome, measured by leading indicator.

If you can’t make that mapping, the feature needs a spike first, or it needs to be deprioritized. Or acknowledge it as a bet, which is fine, as long as you keep track of all the other bets you’re making and know when to walk away from the ones that aren’t paying off.
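To make the mapping concrete, here is a hypothetical example (the persona, feature, and metric are invented for illustration) of a user value statement paired with the hypothesis it should trace back to:

```text
User Value Statement

As a recruiter
I want to save my search filters
So that I can rerun frequent searches in one click.

Hypothesis

We believe building saved searches
For recruiters addresses the pain of rebuilding the same filters daily
And will achieve higher search engagement, measured by weekly repeat searches per user.
```

If you can fill in every slot of the hypothesis without inventing a metric on the spot, the story is justified; if not, that gap is exactly what a spike should resolve.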


The templates

These templates work for me, but I always tailor them to the client and context I’m working in. I expect readers to do the same.

Feature (user-facing)

A feature ticket should answer:

  1. who benefits from this
  2. what need or pain-point are we addressing
  3. what this body of work will deliver
  4. how you’ll verify it works

Often 1 and 2 are shared across multiple tickets and live in an epic or feature set. Don’t force them down to story level when they make more sense at the epic level; shoehorning them into smaller features can produce an unnatural benefit or an artificial pain point.

Feature

Story title here

[Clear concise high level description of functionality]

User Value Statement

As a [user role or persona]
I want to [action or outcome]
So that [value realized]

Acceptance Criteria

Scenario: [description of this particular test case]
Given [existing state or prerequisite]
When [user action]
Then [outcome]

Resources

  • Design specs / Screenshots / Wireframes (must have at least one)
  • Example data (optional but SO helpful)

Requiring a design spec forces the person writing the ticket to think through the use cases in the UI — and that’s where blind spots surface.

I was recently helping a client with their software development lifecycle, and one analyst had a story to make an AI-extracted field editable in the UI. He left the UI to the developers, assuming they’d follow existing patterns for editing fields. Sometimes that’s fine. In this case, when I reviewed the requirement and walked through how it would flow in the UI, it was clear we needed to hash out the acceptance criteria with UX: Should the operator leave any context when providing those human-in-the-loop corrections? Do we know of a list of causes for overriding AI decisions? Do we present them in a modal? Is the override reason optional?
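Assuming the team decided the override reason is required and collected in a modal (one possible answer to those questions, invented here for illustration), the acceptance criteria might come out as:

```gherkin
Scenario: Operator corrects an AI-extracted field
  Given a document with an AI-extracted field value
  When the operator edits the field and saves
  Then a modal prompts for an override reason
  And the correction is stored together with the reason and the operator
```

Each of the open UX questions becomes either a Given/When/Then line or an explicit non-requirement, which is exactly the conversation the design spec requirement forces.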

Non-user-facing feature

This is an opinionated view, but after 25 years in the industry it’s hard not to have one. When a feature is not user-facing (API endpoints, system integrations, background jobs), we need to think about different constraints. You need a template that forces you to think about inputs, outputs, failure modes, and security context from an engineering-centered (rather than user-centered) point of view.

Non-user-facing Feature

Title for non-user facing feature

As a [machine/system/component]
I want to [perform a specific action]
So that I can [achieve a specific technical outcome]

Structured Details

  • Actor (System/Service/Machine): What automated process or system is this story focused on?
  • Goal (Action): What operation is being performed (GET, POST, PUT, DELETE, etc.)?
  • Endpoint(s): Which endpoint(s) are involved? API? DB?
  • Input Requirements: Required fields, headers, formats, etc.
  • Expected Output: Expected response schema and HTTP status codes
  • Failure Scenarios: Common errors and how they should be handled
  • Security Context (if applicable): Authentication/authorization mechanism
  • Frequency/Trigger: How and when this interaction is initiated (e.g., cron job, event-driven)

Acceptance Criteria (Gherkin Format)

Scenario: [description]
Given [precondition]
When [action]
Then [outcome]

When someone writes a non-user feature as a regular story like “Build the webhook endpoint,” with no mention of auth, failure handling, or expected payload, the developer fills in the gaps with guesswork.
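A hypothetical filled-in fragment for that webhook story (every specific here is invented for illustration) shows what the template would have forced into the open:

```text
Actor: Payments provider (external system)
Goal: POST payment status updates
Endpoint(s): POST /webhooks/payments
Input Requirements: JSON body with event_id and status; HMAC signature header
Expected Output: 200 with empty body; 401 on invalid signature
Failure Scenarios: replayed event_id is an idempotent no-op; malformed body returns 400
Security Context: HMAC-SHA256 signature verification against a shared secret
Frequency/Trigger: event-driven, fired on payment state changes
```

None of this is guesswork the developer should be making at implementation time.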

Bug

Not all bugs need to be fixed. It depends on how many users a bug affects, whether it causes data corruption, and its priority.

Sometimes people call any undesired behaviour a bug. But often that is really a missing feature: the behaviour was never defined in the first place, and these “bugs” turn out to be scope creep.

My guideline: it’s 100% a bug when:

  1. the “undesired behaviour” spawned from missed acceptance criteria (i.e. the story should have been rejected)
  2. the “undesired behaviour” severely impacts end users or causes data corruption. When the bug is high priority, where it came from can be tackled in a retrospective.

Bug

[Component/Area]: Brief description of the issue in present tense

Summary

One-paragraph overview of the issue

Environment details (browser, OS, app version, etc.)

Steps to Reproduce

  1. [Step 1]
  2. [Step 2]
  3. [Step 3]
  • Include specific test data used
  • Note any required preconditions
  • Flag if intermittent/not always reproducible

Current Behavior

What actually happens. Include relevant error messages, screenshots/videos if visual, logs if available.

Expected Behavior

What should happen. Be specific about the expected outcome.

Impact Assessment

  • Who is affected (paid/free users, % of users)
  • Frequency of occurrence
  • Business impact
  • Workarounds if any exist

Additional Context

  • Related bugs/tickets
  • When the issue started (if known)
  • Relevant customer feedback
  • Stack traces or console errors

I wrote a full post on how to write bug reports engineers can work on, and on how MCPs and AI agents help the debugging process.

Spike

Spikes exist to answer a question. You don’t ship your spike to production. The deliverable is a decision or documented clarity.

Spikes are exceptionally useful during planning sessions to de-risk areas the team doesn’t know well yet. Rather than estimating a feature you know nothing about, you first run a spike.

Without a timebox, spikes become open-ended research projects. The timebox forces a deliverable even if the answer is “we don’t know yet; here’s what we tried”.

Spike

Spike title here

Description

We should [investigate/research/prototype what]
Before [what depends on this answer]
So that [what decision or clarity this enables]

Acceptance Criteria

  • [Question answered / Decision made / Outcome documented]

Time Box

[How long to spend on this investigation, e.g., 2 hours, 1 day]

Resources

  • Instructions
  • Additional context
  • Other assets to help execute the spike

Chore

No direct user value. Dependency updates, refactors, CI/CD pipeline changes, config migrations. The “we should / before / so that” framing forces you to articulate why this maintenance matters now.

Chore

Name the chore here

Description

We should [what needs to be done]
Before [deadline or dependency]
So that [why it matters technically]

Acceptance Criteria

  • [Task/Outcome 1]
  • [Task/Outcome 2]

Resources

  • Instructions
  • Additional context
  • Other assets to help execute the chore

Generating tickets with an LLM and MCP

The templates above work well when a human writes them. But when you’re debugging with an AI agent and find the root cause, the context is already in the conversation. Why re-type it into a ticket?

An LLM agent can use a prompt that analyzes the current conversation (and optionally code context: staged changes, recent commits, open files, errors) and generates a ticket in the correct template format. The prompt routes to the right template based on what it finds in the conversation.

The prompt

The prompt I use (full version in the appendix) reads the conversation context, determines the ticket type, and fills in the template with concrete details from the session.

At a high level, the agent figures out the ticket type based on:

  • Error, failing test, unexpected behavior → Bug
  • New user-facing functionality → Feature
  • API, background jobs, system integration → Non-user feature
  • Refactoring, deps, config → Chore
  • Research, investigation, open question → Spike
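As an illustrative reduction (not the actual prompt: the real classification happens inside the LLM, weighing the whole conversation), the routing above can be sketched as a keyword heuristic:

```typescript
// Sketch of the ticket-type routing as an ordered keyword heuristic.
// Rules are checked in priority order; "feature" is the fallback.
type TicketType = 'bug' | 'feature' | 'non-user-feature' | 'chore' | 'spike'

const rules: Array<[RegExp, TicketType]> = [
  [/error|failing test|unexpected|stack trace/i, 'bug'],
  [/endpoint|webhook|background job|integration/i, 'non-user-feature'],
  [/refactor|dependency|upgrade|config|ci\/?cd/i, 'chore'],
  [/investigate|research|spike|open question/i, 'spike'],
]

function routeTicket(context: string): TicketType {
  for (const [pattern, type] of rules) {
    if (pattern.test(context)) return type
  }
  // Nothing matched: treat it as new user-facing functionality.
  return 'feature'
}
```

The useful part of the sketch is the ordering: evidence of breakage wins over everything else, and “feature” is a default rather than a detected category, which mirrors how the prompt instructs the agent.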

You will see a few reminders to ask questions or leave TODOs when context is missing:

If the ticket type is ambiguous or key details are missing, ask the user before generating.

or:

If you don’t have enough context for a required field, insert a [TODO: ...] placeholder and note what’s missing.

These are good practices but ultimately a human should check the content of those stories.

The MCP connection

Once the ticket is generated, it can be created directly in your backlog management software. I used Linear via MCP, with Claude Code and Codex as my daily drivers.
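For reference, wiring a Linear MCP server into a coding agent typically amounts to a config entry along these lines (a sketch: the exact URL, transport, and config shape depend on your client and on Linear’s current MCP offering, so check their docs rather than copying this verbatim):

```json
{
  "mcpServers": {
    "linear": {
      "type": "sse",
      "url": "https://mcp.linear.app/sse"
    }
  }
}
```

Once registered, the agent sees the server’s ticket-creation tools alongside its built-in ones and can push the generated ticket without you leaving the session.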

What follows is an example from a session where I was helping a client course-correct a production LLM pipeline and assess the removal of some unused functionality.

Note: I’ve kept the high-level logic intact, but I’ve obfuscated specific internal details to respect project confidentiality.

After some agentic exploration and planning, instead of context-switching to Linear, I asked the agent to generate the ticket. You will see the agent asking me interactive questions (via askuserquestion) that it then uses to generate the ticket.

0:00

Assess how the document intake Audit step is used and prepare a plan for its complete removal from stipend, events, and partner.

0:06

I'll start by reading the opened plan file for context, then launch parallel exploration agents to understand the document intake Audit step across stipend, events, and partner domains in this Node/Express + React codebase.

0:20

The plan file doesn't exist yet. Let me launch the exploration agents.

0:23 task
{"task":"Explore document intake Audit step in Node/Express backend and React admin UI"}
0:26 task
{"task":"Explore stipend events partner ingestion flows across Express services, queue workers, and React admin screens"}
2:43
Perfect! Now I have a complete picture. Let me create a comprehensive final report.

COMPREHENSIVE EXPLORATION: Document Intake Audit Step

I've completed a thorough exploration of the document intake Audit workflow in this Node/Express + React codebase. Here's what I found:

1. WHAT IS THE AUDIT STEP IN RECORD INTAKE?

The Audit (Quality Assurance) step is a post-processing phase that runs after documents are extracted and cited during document intake. It generates an AI-powered verification report comparing extracted document data against what was actually imported into the system. Key Components:
  • DocumentAuditRecord persistence layer / ORM model
  • documentAuditRecord job/worker (background queue processor that triggers Audit)
  • documentAudit/ services (AI-powered verification logic)
  • Document Audit Results admin UI (React pages/components for reviewing and verifying results)

... omitted 389 lines ... BridgeAI AI Service:
  • Used for all AI-powered analysis steps
  • Multiple prompts identified in config:
    • Advisor stipend: advisor_salary_audit, advisor_document_audit
    • Event documents: document_audit.event_document_identifier
    • Document type: file_type_classifier
    • Many others for partner arrangements, stipend attributes, etc.
Scanbridge Service:
  • PDF text extraction
  • Returns callbacks with action type
  • action: 'for_document_audit_review' triggers Audit

This is a sophisticated multi-stage document intake system where the Audit step is a critical validation layer that uses AI to verify extracted data quality before final import and user verification.
2:49
Perfect! Now I have gathered comprehensive information. Let me provide a thorough summary of document intake for the three domains.

Document Intake Exploration Report

I've conducted a very thorough exploration of the document intake system for stipend, events, and partner domains. Here's the complete architecture:

1. BASE INGESTION FRAMEWORK

Core Data Models / Tables:
  • /src/db/models/documentIntake.ts — Master ingestion record supporting three document types:
    • advisor_stipend
    • event_document
    • partner_arrangement
  • /src/db/models/documentIntakeItem.ts — Individual PDF/document being processed with state machine:
    • States: pending, importing, import_failed, imported
    • Status workflow: pending → extracting_text → text_extracted → citation_started → citation_completed → ready for import
    • Has hierarchical parent-child relationships (parent PDFs can have child items)
Backend Services:
  • /src/services/processDocuments/importers/
  • /src/services/documentAudit/
  • /src/controllers/webhooks/parseWebhookHandler.ts
Admin UI (React):
  • /src/client/admin/document-intake/
  • /src/client/admin/document-audit/
Routes:
  • /src/routes/admin.ts
  • /src/routes/webhooks.ts
This is a unified platform where all three domains (stipend, events, partner) share the same underlying ingestion framework but have domain-specific processors, importers, API handlers, and React admin screens.
2:56

Good, I have a thorough understanding of the Audit system. Now let me read the critical files where Audit is triggered and integrated to understand exact removal points.

2:56 read
{"path":"/src/queues/jobs/documentAuditRecordJob.ts"}
2:56
1→ export async function documentAuditRecordJob({
2→ documentId,
3→ documentType,
4→ campusName = '',
5→ documentIntakeItemId = null
6→ }: {
7→ documentId: string
8→ documentType: string
9→ campusName?: string
10→ documentIntakeItemId?: string | null
11→ }) {
12→ await documentProcessor({
13→ documentId,
14→ documentType,
15→ campusName,
16→ documentIntakeItemId
17→ })
18→ }
2:57 read
{"path":"/src/db/models/documentAuditRecord.ts"}
2:57
1→ export interface DocumentAuditRecord {
2→ id: string
3→ documentableType: 'EventDocument' | 'Document' | 'Arrangement'
4→ documentableId: string
5→ verifiedById?: string | null
6→ verifiedAt?: Date | null
7→ documentIntakeItemId?: string | null
8→ userNotes?: string | null
9→ notesAddedBy?: string | null
10→ notesAddedAt?: Date | null
11→ discrepancyPresent: boolean
12→ issues: unknown
13→ createdAt: Date
14→ updatedAt: Date
15→ }
2:57 read
{"path":"/src/services/documentAudit/documentProcessor.ts"}
2:57
1→ export async function documentProcessor({
2→ documentId,
3→ documentType,
4→ campusName,
5→ documentIntakeItemId
6→ }: {
7→ documentId: string
8→ documentType: string
9→ campusName?: string
10→ documentIntakeItemId?: string | null
11→ }) {
12→ const rawDocument = await fetchRawDocument(documentId)
13→ if (!rawDocument) return
14→
15→ if (!rawDocument.scanbridgePdfText) {
16→ await extractTextFromPdf(rawDocument)
17→ return
18→ }
19→
20→ switch (documentType) {
21→ case 'event_document':
22→ await processEventDocument(rawDocument, campusName, documentIntakeItemId)
23→ break
24→ case 'advisor_stipend':
25→ await processAdvisorStipendDocument(rawDocument, documentIntakeItemId)
26→ break
27→ }
28→ }
2:58 read
{"path":"/src/services/processDocuments/importers/documentEventsImporter.ts"}
2:58
1→ export class DocumentEventsImporter {
2→ async call(documentIntakeItemId: string, importedById?: string) {
3→ ...
4→ await enqueueDocumentAuditRecordJob({
5→ documentId: eventDocumentId,
6→ documentType: 'event_document',
7→ campusName,
8→ documentIntakeItemId
9→ })
10→ ...
11→ }
12→ }
2:58 read
{"path":"/src/services/processDocuments/importers/advisorStipendDocumentImporter.ts"}
2:58
1→ export class AdvisorStipendDocumentImporter {
2→ async call(documentIntakeItemId: string, importedById?: string) {
3→ ...
4→ await enqueueDocumentAuditRecordJob({
5→ documentId: documentId,
6→ documentType: 'advisor_stipend',
7→ documentIntakeItemId
8→ })
9→ ...
10→ }
11→ }
2:59 read
{"path":"/src/services/processDocuments/importers/partnerArrangementsImporter.ts"}
2:59
1→ export class PartnerArrangementsImporter {
2→ async call(documentIntakeItemId: string, importedById?: string) {
3→ ...
4→ // No direct audit enqueue here, but partner data is still represented in the audit dashboard / record model
5→ ...
6→ }
7→ }
3:00 read
{"path":"/src/controllers/admin/documentAuditRecords.controller.ts"}
3:00
1→ export async function indexDocumentAuditRecords(req, res) {
2→ const filters = buildFilters(req.query)
3→ const result = await documentAuditRecordRepository.search(filters)
4→ return res.json(result)
5→ }
6→
7→ export async function updateDocumentAuditRecord(req, res) {
8→ ...
9→ }
10→
11→ export async function bulkVerifyDocumentAuditRecords(req, res) {
12→ ...
13→ }
14→
15→ export async function bulkUnverifyDocumentAuditRecords(req, res) {
16→ ...
17→ }
3:05

Now let me check remaining Audit services, the callback trigger point, and find all references to Audit across the codebase.

3:06 read
{"path":"/src/services/documentAudit/eventDocument.ts"}
3:06
1→ export async function processEventDocument(rawDocument, campusName, documentIntakeItemId) {
2→ const result = await generateAuditResult(...)
3→ await documentAuditRecordRepository.create({
4→ documentableType: 'EventDocument',
5→ documentableId: eventDocumentId,
6→ discrepancyPresent: result.disdocuments,
7→ issues: result.issues,
8→ documentIntakeItemId
9→ })
10→ }
3:07 read
{"path":"/src/services/documentAudit/advisorStipendDocument.ts"}
3:07
1→ export async function processAdvisorStipendDocument(document, documentIntakeItemId) {
2→ const scanbridgePdfText = document.rawDocument.scanbridgePdfText
3→ const documentType = await classifyFileType(documentIntakeItemId, scanbridgePdfText)
4→ await verifyAdvisorStipend(document, documentType, documentIntakeItemId)
5→ }
3:08 read
{"path":"/src/services/documentAudit/advisorStipendVerification.ts"}
3:08
1→ export async function verifyAdvisorStipend(document, documentType, documentIntakeItemId) {
2→ const auditResult = await generateAuditResult(...)
3→ await documentAuditRecordRepository.create({
4→ documentableType: 'Document',
5→ documentableId: document.id,
6→ discrepancyPresent: auditResult.disdocuments,
7→ issues: auditResult.issues,
8→ documentIntakeItemId
9→ })
10→ }
3:08 read
{"path":"/src/services/documentAudit/fileTypeClassifier.ts"}
3:08
1→ export async function classifyFileType(documentIntakeItem, scanbridgePdfText) {
2→ return bridgeAiClient.post({
3→ promptId: aiTemplates.advisor_document.file_type_classifier,
4→ body: {
5→ variables: {
6→ document_text: scanbridgePdfText
7→ }
8→ }
9→ })
10→ }
3:09 grep
{"pattern":"DocumentAuditRecord","output_mode":"files_with_matches"}
3:09
Found 27 files
docs/document_intake_flow.mmd
src/controllers/admin/documentAuditRecords.controller.ts
src/routes/admin/documentAuditRecords.routes.ts
src/services/processDocuments/importers/advisorStipendDocumentImporter.ts
src/controllers/webhooks/parseWebhookHandler.ts
src/db/models/documentAuditRecord.ts
src/queues/jobs/documentAuditRecordJob.ts
src/services/documentAudit/eventDocument.ts
src/services/documentAudit/advisorStipendVerification.ts
src/services/documentAudit/documentProcessor.ts
tests/services/processDocuments/importers/advisorStipendDocumentImporter.test.ts
tests/services/processDocuments/importers/documentEventsImporter.test.ts
tests/services/documentAudit/eventDocument.test.ts
tests/services/documentAudit/advisorStipendVerification.test.ts
tests/models/documentAuditRecord.test.ts
scripts/document-audit-record.ts
scripts/llm-audit-workflow.ts
src/client/admin/document-audit/DocumentAuditRecordsPage.tsx
src/client/admin/document-audit/components/DocumentAuditTable.tsx
src/client/admin/document-audit/components/BulkVerificationActions.tsx
...
3:10 grep
{"pattern":"document_audit","output_mode":"files_with_matches"}
3:10
Found 49 files
src/routes/admin.ts
src/controllers/webhooks/parseWebhookHandler.ts
src/config/aiTemplates.ts
src/db/models/documentIntakeItem.ts
src/db/models/eventDocument.ts
src/db/models/document.ts
src/db/models/arrangement.ts
src/client/admin/document-audit/DocumentAuditRecordsPage.tsx
src/client/admin/document-audit/components/DocumentAuditSearchForm.tsx
src/client/admin/document-audit/components/DocumentAuditTable.tsx
src/client/admin/document-audit/hooks/useDocumentAuditFilters.ts
src/client/admin/document-audit/components/BulkVerificationActions.tsx
src/client/admin/dashboard/AdminIndexPage.tsx
tests/services/documentAudit/documentProcessor.test.ts
tests/controllers/webhooks/parseWebhookHandler.test.ts
scripts/document-audit-record.ts
scripts/llm-audit-workflow.ts
...
3:15

Let me check the remaining integration points - the webhook processor, model relationships, scripts, admin nav, and tests.

3:16 grep
{"pattern":"documentAuditRecord","path":"/src/controllers/webhooks/parseWebhookHandler.ts","output_mode":"content","-C":5}
3:16
No matches found
3:17 grep
{"pattern":"documentAuditRecord","path":"/src/db/models/documentIntakeItem.ts","output_mode":"content","-C":3}
3:17
47-
48- export const documentIntakeItemRelations = {
49: documentAuditRecord: one(documentAuditRecords, {
50- fields: [documentIntakeItems.id],
51- references: [documentAuditRecords.documentIntakeItemId]
52- }),
53- citations: many(documentIntakeItemCitations),
3:18 grep
{"pattern":"documentAuditRecord","path":"/src/db/models/eventDocument.ts","output_mode":"content","-C":3}
3:18
10- export const eventDocumentRelations = {
11- rawDocument: one(rawDocuments, ...),
12- events: many(events),
13: documentAuditRecord: one(documentAuditRecords, {
14- fields: [eventDocuments.id],
15- references: [documentAuditRecords.documentableId]
16- }),
3:19 grep
{"pattern":"documentAuditRecord","path":"/src/db/models/document.ts","output_mode":"content","-C":3}
3:19
6- export const documentRelations = {
7- rawDocument: one(rawDocuments, ...),
8- stipends: many(stipends),
9: documentAuditRecord: one(documentAuditRecords, {
10- fields: [documents.id],
11- references: [documentAuditRecords.documentableId]
12- }),
3:20 grep
{"pattern":"documentAuditRecord","path":"/src/db/models/arrangement.ts","output_mode":"content","-C":3}
3:20
15- export const arrangementRelations = {
16- sports: many(arrangementSports),
17: documentAuditRecord: one(documentAuditRecords, {
18- fields: [arrangements.id],
19- references: [documentAuditRecords.documentableId]
20- }),
3:20 read
{"path":"/scripts/document-audit-record.ts"}
3:20
1→ // Backfill documentIntakeItemId for existing DocumentAuditRecord rows
2→ ...
3→ await documentAuditRecordRepository.findWithoutDocumentIntakeItem()
4→ ...
3:21 read
{"path":"/scripts/llm-audit-workflow.ts"}
3:21
1→ // Trigger document audit jobs for advisor stipend documents
2→ ...
3→ await enqueueDocumentAuditRecordJob({
4→ documentId: document.id,
5→ documentType: 'advisor_stipend'
6→ })
3:22 grep
{"pattern":"document-audit","path":"/src/client/admin/dashboard/AdminIndexPage.tsx","output_mode":"content","-C":3}
3:22
55- <h2>Document Intakes</h2>
56- <ul>
57- <li><Link to="/admin/document-intakes">View Document Intakes</Link></li>
58: <li><Link to="/admin/document-audit">View Document Audit Results</Link></li>
59- </ul>
3:27

Let me also check the Scanbridge callback for Audit triggers and the remaining React files.

3:28 grep
{"pattern":"document_audit|for_document_audit","path":"/src/controllers/webhooks/parseWebhookHandler.ts","output_mode":"content","-C":10}
3:28
18-
19- function processCallbackAction(metadata) {
20- const action = metadata.action
21- const extractNormalText = metadata.extract_normal_text
22-
23- switch (action) {
24- case 'for_document_text_extraction':
25- return handleTextExtraction(extractNormalText)
26: case 'for_document_audit_review':
27- return handleAuditAnalysis()
28- }
29- }
30-
31- async function handleAuditAnalysis() {
32- const normalizedText = normalizeString(resultText)
33- await rawDocumentRepository.update(rawDocument.id, { scanbridgePdfText: normalizedText })
34-
35- const documentType = determineDocumentType(rawDocument)
36- if (!documentType) return
37-
38- const document = rawDocument.document ?? rawDocument
39- await enqueueDocumentAuditRecordJob({ documentId: document.id, documentType })
40- }
3:28 read
{"path":"/src/client/admin/document-audit/hooks/useDocumentAuditFilters.ts"}
3:28
1→ export function useDocumentAuditFilters() {
2→ ...
3→ }
3:29 read
{"path":"/src/client/admin/document-audit/components/BulkVerificationActions.tsx"}
3:29
1→ export function BulkVerificationActions() {
2→ ...
3→ }
3:30 glob
{"pattern":"/src/client/admin/document-audit/**/*"}
3:30
/src/client/admin/document-audit/DocumentAuditRecordsPage.tsx
/src/client/admin/document-audit/components/DocumentAuditSearchForm.tsx
/src/client/admin/document-audit/components/DocumentAuditTable.tsx
/src/client/admin/document-audit/components/BulkVerificationActions.tsx
/src/client/admin/document-audit/hooks/useDocumentAuditFilters.ts
3:31 grep
{"pattern":"document-audit","path":"/src/routes/admin.ts","output_mode":"content","-C":5}
3:31
479- router.use('/document-intakes', documentIntakesRouter)
480-
481: router.use('/document-audit', documentAuditRecordsRouter)
3:36

Let me check the webhook audit handler and the AI template config.

3:38 grep
{"pattern":"handleAuditAnalysis","path":"/src/controllers/webhooks/parseWebhookHandler.ts","output_mode":"content","-C":15}
3:38
14- export async function parseWebhookHandler(req, res) {
15- const parsedMetadata = JSON.parse(callbackMetadata)
16- await processCallbackAction(parsedMetadata)
17- }
18-
19- function processCallbackAction(metadata) {
20- const action = metadata.action
21- const extractNormalText = metadata.extract_normal_text
22-
23- switch (action) {
24- case 'for_document_text_extraction':
25- return handleTextExtraction(extractNormalText)
26- case 'for_document_audit_review':
27: return handleAuditAnalysis()
28- }
29- }
30-
31: async function handleAuditAnalysis() {
32- const normalizedText = normalizeString(resultText)
33- await rawDocumentRepository.update(rawDocument.id, { scanbridgePdfText: normalizedText })
34-
35- const documentType = determineDocumentType(rawDocument)
36- if (!documentType) return
37-
38- const document = rawDocument.document ?? rawDocument
39- await enqueueDocumentAuditRecordJob({ documentId: document.id, documentType })
40- }
3:39 grep
{"pattern":"advisor_salary_audit|advisor_document_audit|file_type_classifier|event_document_identifier","path":"/src/config/aiTemplates.ts","output_mode":"content","-C":3}
3:39
6- export const aiTemplates = {
7- advisor_document: {
8: advisor_salary_audit: 'pp-advisor-stip-291864',
9: advisor_document_audit: 'pp-advisor-docs-cfd815',
10: file_type_classifier: 'pp-advisor-file-c148g1'
11- },
12- document_audit: {
13: event_document_identifier: 'pp-event-docs-2b27gg'
14- }
15- }
3:40 glob
{"pattern":"/tests/services/documentAudit/**/*"}
3:40
/tests/services/documentAudit/advisorStipendVerification.test.ts
/tests/services/documentAudit/fileTypeClassifier.test.ts
/tests/services/documentAudit/eventDocument.test.ts
/tests/services/documentAudit/documentProcessor.test.ts
3:41 grep
{"pattern":"document_audit","path":"/src/client/admin/shared/selectDivision.tsx","output_mode":"content","-C":5}
3:41
103- ...
108: if (prefix === 'document_audit_record') {
109- ...
110- }
3:42 grep
{"pattern":"document_audit","path":"/src/client/admin/advisor-documents/AdvisorDocumentsTopFiltersBar.tsx","output_mode":"content","-C":3}
3:43
2- <Row data-controller="shared--filter-replace">
3- <Col md={3}>
4: <Select name="document_by_division_id_eq" ... prefix="document_audit_record" />
3:47

Let me check a few more integration points — the test factory/fixtures, the filters hook, and any remaining references.

3:48 read
{"path":"/tests/factories/documentAuditRecordFactory.ts"}
3:48
1→ export const documentAuditRecordFactory = ...
3:49 grep
{"pattern":"documentAuditRecordJob","output_mode":"files_with_matches"}
3:49
Found 13 files
docs/document_intake_flow.mmd
src/controllers/webhooks/parseWebhookHandler.ts
src/services/processDocuments/importers/advisorStipendDocumentImporter.ts
src/services/processDocuments/importers/documentEventsImporter.ts
src/queues/jobs/documentAuditRecordJob.ts
tests/controllers/webhooks/parseWebhookHandler.test.ts
tests/services/processDocuments/importers/advisorStipendDocumentImporter.test.ts
tests/services/processDocuments/importers/documentEventsImporter.test.ts
scripts/llm-audit-workflow.ts
...
3:50 grep
{"pattern":"determineDocumentType","path":"/src/controllers/webhooks/parseWebhookHandler.ts","output_mode":"content","-C":10}
3:50
39- function determineDocumentType(rawDocument) {
40- const documentType = rawDocument.documentType.name
41- const documentTypeMapping = {
42- GAME: 'event_document',
43- COACH: 'advisor_stipend'
44- }
45-
46- return documentTypeMapping[documentType]
47- }
3:55

Let me check the remaining React hook and any additional references.

3:56 glob
{"pattern":"/src/client/admin/document-audit/hooks/*"}
3:57
/src/client/admin/document-audit/hooks/useDocumentAuditFilters.ts
4:00 read
{"path":"/src/client/admin/document-audit/hooks/useDocumentAuditFilters.ts"}
4:00
1→ export function useDocumentAuditFilters() {
2→ const [documentableType, setDocumentableType] = useState('EventDocument')
3→ ...
4→ }
4:16

I now have a comprehensive understanding of the Audit system. Before writing the final plan, I have a couple of clarifying questions.

4:22 askuserquestion
DB Migration

Should the removal include a database migration to drop the document_audit_records table, or just remove the application code?

Webhook

The Scanbridge webhook processor handles a for_document_audit_review action that extracts PDF text and then triggers the Audit job. Should this entire webhook action path be removed, or should only the Audit job call be removed (keeping the PDF text extraction part)?

16:07
User has answered your questions: "Should the removal include a database migration to drop the document_audit_records table, or just remove the application code?"="Drop the table (Recommended)", "The Scanbridge webhook processor handles a for_document_audit_review action that extracts PDF text and then triggers the Audit job. Should this entire webhook action path be removed, or should only the Audit job call be removed (keeping the PDF text extraction part)?"="Remove entire action path (Recommended)". You can now continue with the user's answers in mind.
16:50

Let me read a few more files to confirm the exact scope of changes needed.

16:51 read
{"path":"/src/client/admin/document-audit/components/DocumentAuditSearchForm.tsx"}
16:51
1→ export function DocumentAuditSearchForm() {
2→ ...
3→ }
19:27 askuserquestion
Prefix rename

Would you like me to also rename the document_audit_record prefix in the Advisor Documents QC filter bar to something less confusing (e.g., advisor_documents_qc) as part of this cleanup, or leave it as-is?

20:27
User has answered your questions: "Would you like me to also rename the document_audit_record prefix in the Advisor Documents QC filter bar to something less confusing (e.g., advisor_documents_qc) as part of this cleanup, or leave it as-is?"="Leave as-is". You can now continue with the user's answers in mind.
20:30

Got it — leaving the Advisor Documents QC page untouched. The plan already reflects this. Ready to proceed when you are.

Enrico

Here I call a skill I defined (you'll find the full prompt in the appendix) to create stories for me.

20:37

/create-linear-ticket

20:53

Based on the plan we've been working on, this is a Chore — removing the Audit step is a cleanup/refactoring task with no direct user-facing value.

Remove document intake Audit step from stipend, events, and partner

## Description

We should remove the entire Document Audit Results feature from the document intake pipeline
Before further ingestion work builds on top of it
So that we eliminate unused AI-powered post-import verification code, reduce BridgeAI API costs, and simplify the ingestion workflow

## Acceptance Criteria

- [ ] Drop the `document_audit_records` database table via migration
- [ ] Delete the `DocumentAuditRecord` ORM model / repository, the `documentAuditRecord` queue job, and all `documentAudit/*` services (`documentProcessor`, `eventDocument`, `advisorStipendDocument`, `advisorStipendVerification`, `fileTypeClassifier`)
- [ ] Remove `enqueueDocumentAuditRecordJob(...)` calls from `documentEventsImporter` and `advisorStipendDocumentImporter`
- [ ] Remove the `for_document_audit_review` webhook action path from `parseWebhookHandler`
- [ ] Remove `documentAuditRecord` relations from `documentIntakeItem`, `eventDocument`, `document`, and `arrangement` models
- [ ] Delete admin API/controller (`documentAuditRecords.controller.ts`), React pages/components/hooks (`DocumentAuditRecordsPage.tsx`, `DocumentAuditSearchForm.tsx`, `DocumentAuditTable.tsx`, `BulkVerificationActions.tsx`, `useDocumentAuditFilters.ts`), and routes
- [ ] Remove Audit-related prompt identifiers from `src/config/aiTemplates.ts` (`advisor_salary_audit`, `advisor_document_audit`, `file_type_classifier`, `document_audit.event_document_identifier`)
- [ ] Delete maintenance scripts (`scripts/document-audit-record.ts`, `scripts/llm-audit-workflow.ts`)
- [ ] Remove admin nav link from `AdminIndexPage.tsx`
- [ ] Delete all related tests and factories
- [ ] Advisor Documents QC page remains unaffected (shares `document_audit_record` prefix name but is independent)
- [ ] All existing importer and webhook tests pass after changes

## Resources

- ~20 files to delete, ~10 files to modify, 1 migration to create
- Key trigger points: `documentEventsImporter`, `advisorStipendDocumentImporter`, `parseWebhookHandler.handleAuditAnalysis`
- Document Audit Results Dashboard (Metabase) — shows low usage of this feature
- Operator feedback Slack thread — context on why this is being removed
23:45 create_issue [linear-server]
{"team":"Northstar","title":"Remove document intake Audit step from stipend, events, and partner","description":"## Description\n\nWe should remove the entire Document Audit Results feature from the document intake pipeline\nBefore further ingestion work builds on top of it\nSo that we eliminate unused AI-powered post-import verification code, reduce BridgeAI API costs, and simplify the ingestion workflow\n\n## Acceptance Criteria\n\n- [ ] Drop the `document_audit_records` database table via migration\n- [ ] Delete the `DocumentAuditRecord` ORM model / repository, the `documentAuditRecord` queue job, and all `documentAudit/*` services (`documentProcessor`, `eventDocument`, `advisorStipendDocument`, `advisorStipendVerification`, `fileTypeClassifier`)\n- [ ] Remove `enqueueDocumentAuditRecordJob(...)` calls from `documentEventsImporter` and `advisorStipendDocumentImporter`\n- [ ] Remove the `for_document_audit_review` webhook action path from `parseWebhookHandler`\n- [ ] Remove `documentAuditRecord` relations from `documentIntakeItem`, `eventDocument`, `document`, and `arrangement` models\n- [ ] Delete admin API/controller (`documentAuditRecords.controller.ts`), React pages/components/hooks (`DocumentAuditRecordsPage.tsx`, `DocumentAuditSearchForm.tsx`, `DocumentAuditTable.tsx`, `BulkVerificationActions.tsx`, `useDocumentAuditFilters.ts`), and routes\n- [ ] Remove Audit-related prompt identifiers from `src/config/aiTemplates.ts` (`advisor_salary_audit`, `advisor_document_audit`, `file_type_classifier`, `document_audit.event_document_identifier`)\n- [ ] Delete maintenance scripts (`scripts/document-audit-record.ts`, `scripts/llm-audit-workflow.ts`)\n- [ ] Remove admin nav link from `AdminIndexPage.tsx`\n- [ ] Delete all related tests and factories\n- [ ] Advisor Documents QC page remains unaffected (shares `document_audit_record` prefix name but is independent)\n- [ ] All existing importer and webhook tests pass after changes\n\n## Resources\n\n- ~20 files to delete, 
~10 files to modify, 1 migration to create\n- Key trigger points: `documentEventsImporter`, `advisorStipendDocumentImporter`, `parseWebhookHandler.handleAuditAnalysis`\n- [Document Audit Results Dashboard](https://plfxsduqkzlm-analytics.northad-hq.com/dashboard/64-document-audit-results-aka-step-3?archived=&entity_type=&date=thisday&id=&status_name=&text=&alert_severity=)\n- [Operator feedback Slack thread](https://northad.slack.com/archives/C07KQMD4B19/p1865182049637112)\n","labels":["Chore"]}
23:59
Created NORTH-7812 with the dashboard and Slack thread links included. Branch name: north-7812-remove-document-intake-audit-step-from-stipend-events-and. Ready to proceed with the implementation plan whenever you approve it.
Enrico

As I was working through this removal, I realized the ticket should also cover removing the prompts from our AI Gateway.

24:45

also add to the ticket a task to remove Portkey prompts from BridgeAI

24:56 save_issue [linear-server]
{"id":"5f47ff1c-39ae-5d1b-a767-e5fcc8cf36fd","description":"## Description\n\nWe should remove the entire Document Audit Results feature from the document intake pipeline\nBefore further ingestion work builds on top of it\nSo that we eliminate unused AI-powered post-import verification code, reduce BridgeAI API costs, and simplify the ingestion workflow\n\n## Acceptance Criteria\n\n- [ ] Drop the `document_audit_records` database table via migration\n- [ ] Delete the `DocumentAuditRecord` ORM model / repository, the `documentAuditRecord` queue job, and all `documentAudit/*` services (`documentProcessor`, `eventDocument`, `advisorStipendDocument`, `advisorStipendVerification`, `fileTypeClassifier`)\n- [ ] Remove `enqueueDocumentAuditRecordJob(...)` calls from `documentEventsImporter` and `advisorStipendDocumentImporter`\n- [ ] Remove the `for_document_audit_review` webhook action path from `parseWebhookHandler`\n- [ ] Remove `documentAuditRecord` relations from `documentIntakeItem`, `eventDocument`, `document`, and `arrangement` models\n- [ ] Delete admin API/controller (`documentAuditRecords.controller.ts`), React pages/components/hooks (`DocumentAuditRecordsPage.tsx`, `DocumentAuditSearchForm.tsx`, `DocumentAuditTable.tsx`, `BulkVerificationActions.tsx`, `useDocumentAuditFilters.ts`), and routes\n- [ ] Remove Audit-related prompt identifiers from `src/config/aiTemplates.ts` (`advisor_salary_audit`, `advisor_document_audit`, `file_type_classifier`, `document_audit.event_document_identifier`)\n- [ ] Remove Portkey prompts from BridgeAI / AI Gateway configuration\n- [ ] Delete maintenance scripts (`scripts/document-audit-record.ts`, `scripts/llm-audit-workflow.ts`)\n- [ ] Remove admin nav link from `AdminIndexPage.tsx`\n- [ ] Delete all related tests and factories\n- [ ] Advisor Documents QC page remains unaffected (shares `document_audit_record` prefix name but is independent)\n- [ ] All existing importer and webhook tests pass after changes"}
25:04
Done — I added the AI Gateway / BridgeAI prompt cleanup task to the ticket.

And here’s the final story:

Chore

Remove document intake Audit step from stipend, events, and partner

Description

We should remove the entire Document Audit Results feature from the document intake pipeline
Before further ingestion work builds on top of it
So that we eliminate unused AI-powered post-import verification code, reduce BridgeAI API costs, and simplify the ingestion workflow

Acceptance Criteria

  • Drop the document_audit_records database table via migration
  • Delete DocumentAuditRecord model/repository, the documentAuditRecord queue job/worker, and all documentAudit/* services (documentProcessor, eventDocument, advisorStipendDocument, advisorStipendVerification, fileTypeClassifier)
  • Remove enqueueDocumentAuditRecordJob(...) calls from documentEventsImporter and advisorStipendDocumentImporter
  • Remove the for_document_audit_review webhook action path from parseWebhookHandler
  • Remove documentAuditRecord relations from documentIntakeItem, eventDocument, document, and arrangement models
  • Delete admin API/controller, React pages/components/hooks, and route registrations for Document Audit Results
  • Remove Audit-related prompt identifiers from the AI template config (advisor_salary_audit, advisor_document_audit, file_type_classifier, document_audit.event_document_identifier)
  • Delete maintenance scripts for audit backfill and audit workflow triggering
  • Remove the admin nav link from the React admin index/dashboard
  • Delete all related tests and factory/fixture data
  • Remove Audit prompts from BridgeAI:
    - pp-advisor-stip-291864 (advisor salary Audit)
    - pp-advisor-docs-cfd815 (advisor document audit)
    - pp-advisor-file-c148g1 (document type identifier)
    - pp-event-docs-2b27gg (event document audit)
  • Advisor Documents QC page remains unaffected (shares document_audit_record prefix name but is independent)
  • All existing importer and webhook tests pass after changes

Resources

  • ~20 files to delete, ~10 files to modify, 1 migration to create
  • Key trigger points:
    • documentEventsImporter
    • advisorStipendDocumentImporter
    • parseWebhookHandler.handleAuditAnalysis
  • Document Audit Results Dashboard (Metabase) — shows low usage of this feature
  • Operator feedback Slack thread — context on why this is being removed

I have another example walking through triaging and creating a bug with MCP.
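The type-selection rules in the appendix prompt are mechanical enough to sketch as a plain function, which can be handy as a non-LLM sanity check on incoming tickets. This is a toy illustration only; the keyword lists are my own, not taken from the prompt verbatim:

```typescript
type TicketType = 'Bug' | 'Feature' | 'Non-User Feature' | 'Chore' | 'Spike';

// Illustrative keyword heuristic mirroring the five selection rules.
// A real implementation would let the LLM classify and only use this
// as a cross-check.
const rules: Array<[TicketType, RegExp]> = [
  ['Bug', /error|failing test|regression|unexpected behavior/i],
  ['Spike', /research|investigat|prototype|open question/i],
  ['Chore', /refactor|dependency|cleanup|ci\/cd|config/i],
  ['Non-User Feature', /api endpoint|background job|integration|infrastructure/i],
  ['Feature', /user-facing|user story|as a user/i],
];

export function classifyTicket(context: string): TicketType | 'Ambiguous' {
  const matched = rules.filter(([, re]) => re.test(context));
  // Mirrors the prompt's instruction: if the type is ambiguous,
  // don't guess; ask the user.
  return matched.length === 1 ? matched[0][0] : 'Ambiguous';
}
```

Matching more than one rule (or none) returns 'Ambiguous', which is exactly when the prompt tells the agent to stop and ask a clarifying question instead of generating a ticket.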


Wrapping up

The templates aren’t about process for its own sake; they’re about making the work legible to the person (or agent) picking it up next.

A ticket that answers “what, why, and how will we know it’s done” is one that gets work done. That’s table stakes. You can follow and customize the templates above, but on their own they’re just forms to fill in. You need a strategic framework to justify and prioritize the work. Without both, you’re just generating more Zombie Barbies.
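As a toy illustration of that bar, here’s a minimal sketch of a completeness check you could run before a draft ticket is created via MCP. The field names are invented for this example; they are not Linear’s schema:

```typescript
// Hypothetical draft-ticket shape; not Linear's API.
interface DraftTicket {
  title: string;                // the "what"
  description: string;          // the "why": should trace to a goal or epic
  acceptanceCriteria: string[]; // the "how will we know it's done"
}

// Reject Zombie Barbies: tickets that look polished but carry no
// value statement or verifiable outcome.
export function isActionable(t: DraftTicket): boolean {
  return (
    t.title.trim().length > 0 &&
    /so that/i.test(t.description) && // value statement must be present
    t.acceptanceCriteria.length > 0
  );
}
```

A check like this catches only the structural half of the problem; whether the goal the ticket traces to is worth pursuing is still a human (or strategy-framework) question.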

Appendix

Here’s the full prompt I use:

PROMPT NAME: Create Linear Ticket
# Create Linear Ticket

Analyze the current conversation context and, if available, code context (staged changes, recent commits, open files, errors) to generate a well-structured Linear ticket. The trigger may be a bug discovered during discussion, a feature idea explored in conversation, a technical investigation, or code-level changes. Determine the appropriate ticket type based on context, then output the ticket in the correct template format.

## Ticket Type Selection

Determine the type based on these rules:

1. **Bug** — if context involves an error, failing test, unexpected behavior, regression, or a problem discovered/discussed in the current session
2. **Feature** (user-facing) — if context involves new user-facing functionality or a user story, whether discovered through discussion or code
3. **Non-User Feature** — if context involves API endpoints, system integrations, background jobs, or infrastructure that doesn't directly face end-users
4. **Chore** — if context involves refactoring, dependency updates, config changes, CI/CD, or cleanup with no direct user value
5. **Spike** — if context involves research, investigation, prototyping, or answering an open technical question

## Context Gathering

Before generating the ticket, gather context from **all available sources**, prioritized:

1. **Conversation context** — What has been discussed, discovered, debugged, or decided in this session? What problem was identified? What solution was proposed? Pull specific details: error messages, root cause analysis, affected components, user impact discussed.
2. **Code context** (if applicable) — git diff (staged and unstaged), recent commits, any visible errors or test failures, and the files currently being worked on.
3. **User clarification** — If the ticket type is ambiguous or key details are missing, ask the user before generating.

## Templates

### Bug
'''
[Component/Area]: Brief description of the issue in present tense

## Summary

One-paragraph overview of the issue

Environment details (browser, OS, app version, etc.)

## Steps to Reproduce

1. [Step 1]
2. [Step 2]
3. [Step 3]

- Include specific test data used
- Note any required preconditions
- Flag if intermittent/not always reproducible

## Current Behavior

What actually happens. Include relevant error messages, screenshots/videos if visual, logs if available.

## Expected Behavior

What should happen. Be specific about the expected outcome.

## Impact Assessment

- Who is affected (paid/free users, % of users)
- Frequency of occurrence
- Business impact
- Workarounds if any exist

## Additional Context

- Related bugs/tickets
- When the issue started (if known)
- Relevant customer feedback
- Stack traces or console errors
- How the issue was discovered (e.g., "found during session debugging X")
'''

### Feature (User-Facing)
'''
Story title here

[Clear, concise, high-level description of the functionality]

## User Value Statement

As a [user role or persona]
I want to [action or outcome]
So that [value realized]

## Acceptance Criteria

Scenario: [description of this particular test case]
Given [existing state or prerequisite]
When [user action]
Then [outcome]

## Resources

- Design specs / Screenshots / Wireframes (must have at least one)
- Example data (optional but SO helpful)
'''

### Non-User-Facing Feature
'''
Title for non-user facing feature

As a [machine/system/component]
I want to [perform a specific action]
So that I can [achieve a specific technical outcome]

## Structured Details

- **Actor (System/Service/Machine):** What automated process or system is this story focused on?
- **Goal (Action):** What operation is being performed (GET, POST, PUT, DELETE, etc.)?
- **Endpoint(s):** Which endpoint(s) are involved? API? DB?
- **Input Requirements:** Required fields, headers, formats, etc.
- **Expected Output:** Expected response schema and HTTP status codes
- **Failure Scenarios:** Common errors and how they should be handled
- **Security Context (if applicable):** Authentication/authorization mechanism
- **Frequency/Trigger:** How and when this interaction is initiated (e.g., cron job, event-driven)

## Acceptance Criteria (Gherkin Format)

Scenario: [description]
Given [precondition]
When [action]
Then [outcome]
'''

### Chore
'''
Name the chore here

## Description

We should [what needs to be done]
Before [deadline or dependency]
So that [why it matters technically]

## Acceptance Criteria

- [ ] [Task/Outcome 1]
- [ ] [Task/Outcome 2]

## Resources

- Instructions
- Additional context
- Other assets to help execute the chore
'''

### Spike
'''
Spike title here

## Description

We should [investigate/research/prototype what]
Before [what depends on this answer]
So that [what decision or clarity this enables]

## Acceptance Criteria

- [ ] [Question answered / Decision made / Outcome documented]

## Time Box

[How long to spend on this investigation, e.g., 2 hours, 1 day]

## Resources

- Instructions
- Additional context
- Other assets to help execute the spike
'''

## Instructions

1. **Review the full conversation history first.** Identify what was discussed, what problem was found, what conclusions were reached, and any specific technical details (error messages, file names, function names, root causes, affected users).
2. **Then examine code context if available:** git diff (staged and unstaged), recent commits, any visible errors or test failures, and the files currently being worked on.
3. Determine the ticket type using the rules above. If ambiguous, ask the user.
4. Fill in the appropriate template with as much concrete detail as possible derived from **both conversation and code context**. Use actual file names, function names, error messages, and component names — not generic placeholders. Reference specific findings from the session.
5. If you don't have enough context for a required field, insert a `[TODO: ...]` placeholder and note what's missing.
6. Present the ticket as a markdown code block so it can be copied directly into Linear.
7. After presenting the ticket, ask if the user wants to adjust the type, add details, or create it via the Linear MCP tool (if available).