From Unclear Ticket to Aligned Spec: A Before/After Walkthrough

Instead of jumping into implementation, start complex features with a clarification prompt—when it's worth the time.

Most vague tickets aren't the PM's fault. They're just under-scoped.

But here's the thing: not every vague ticket needs perfect clarity before you start. Sometimes building with smart assumptions and iterating is faster than waiting for complete requirements. The skill is knowing when to clarify versus when to start coding.

The Real Problem We're Solving

You get a ticket: "Add 2FA to login." You have 30 minutes before standup. The PM is in back-to-back meetings. You could:

  1. Wait for perfect requirements (2-3 days)

  2. Guess and build (2 hours, might need rework)

  3. Use AI to identify critical questions (5 minutes, then build smart)

Option 3 isn't about generating every possible question—it's about quickly spotting the decisions that would fundamentally change your implementation.

The Actual Before/After

What Developers Really Do

Typical prompt:

Create a React login component with email/password fields, 
2FA support using TOTP, form validation, and error handling.

What you get: A working component that makes a dozen hidden assumptions about error messages, retry logic, session handling, and recovery flows.

The problem: These assumptions might be fine, or they might cause rework. You don't know which until later.
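
For illustration, here is the shape of what that prompt tends to produce, with the hidden assumptions spelled out as comments. The function names (loginWithPassword, verifyTotp) are stand-ins, not a real API:

declare function loginWithPassword(email: string, password: string): Promise<{ userId: string }>;
declare function verifyTotp(userId: string, code: string): Promise<boolean>;

async function handleLogin(email: string, password: string, totpCode: string) {
  const session = await loginWithPassword(email, password); // assumes password auth always happens before 2FA
  const ok = await verifyTotp(session.userId, totpCode);     // assumes TOTP only, no SMS fallback
  if (!ok) {
    throw new Error("Invalid code"); // assumes a generic error message is acceptable
  }
  // Also assumed: no remembered devices, no lockout after repeated failures, no recovery flow.
  return session;
}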

The 5-Minute Clarification

Better prompt (when you need it):

I'm implementing 2FA for our login. 
What are the TOP 5 questions that would most affect my implementation?
Focus on decisions that would require rework if I guess wrong.

What you get:

1. Is 2FA mandatory or optional for users?
2. How do users recover if they lose their 2FA device?
3. Should we remember devices to skip 2FA?
4. What happens after 3 failed attempts?
5. Are we using SMS, TOTP, or both?

The key: These are implementation-changing questions. The answers affect your component structure, not just details.
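
One way to see why: sketch the answers as a config type. Every field below (names invented for illustration) changes the structure of what you build, not just the copy:

type TwoFactorPolicy = {
  enrollment: "mandatory" | "optional";        // Q1: forced enrollment flow vs. an opt-in settings page
  recovery: "backup-codes" | "support-reset";  // Q2: extra code-generation UI, or a manual process
  rememberDevice: boolean;                     // Q3: trusted-device storage plus a "skip 2FA" branch
  maxFailedAttempts: number;                   // Q4: attempt tracking and a lockout/cooldown state
  methods: Array<"totp" | "sms">;              // Q5: one verification component vs. a method picker
};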

When to Clarify vs. When to Build

Generate Questions When:

  • High-risk features: Auth, payments, data deletion

  • Cross-team dependencies: "Integrate with team X's service"

  • Vague requirements: "Make it work like Slack"

  • First implementation: No existing patterns to follow

Skip Questions When:

  • Clear tickets: Has mockups, acceptance criteria, API docs

  • Established patterns: "Another CRUD form like the others"

  • Time-sensitive: Hotfix, demo tomorrow, PM available now

  • Prototypes: Requirements will change anyway

The 80/20 Rule

80% of tickets need zero AI clarification. For the 20% that do, focus on questions that would cause architectural rework, not UI polish.

Real Example: The Messy Middle

Ticket: "Add user notifications"

Without AI clarification: You build email notifications. Later you discover they wanted in-app. Major rework.

With over-clarification: You generate 25 questions about delivery guarantees, retry strategies, template systems, localization, rate limiting, batching strategies... The PM gets overwhelmed and answers none of them.

With focused clarification (realistic):

Quick question before I start: 
1. Email or in-app notifications? 
2. Real-time or okay with 5-min delay?
3. Users control preferences or system decides?

PM responds in Slack: "In-app, real-time not required, users control."

You build the right thing the first time.
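
A hypothetical sketch of the data model those three answers pin down (field names are made up):

interface AppNotification {
  id: string;
  userId: string;
  message: string;
  createdAt: Date;
  readAt?: Date; // unread until the user opens it
}

interface NotificationPreferences {
  userId: string;
  enabled: boolean;          // the user-facing on/off switch
  mutedCategories: string[]; // notification types the user has opted out of
}

// "Real-time not required" means periodic polling is enough; no WebSocket infrastructure yet.
const POLL_INTERVAL_MS = 5 * 60 * 1000;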

The Time/Value Reality

Let's be honest about ROI:

Approach                    | Time Investment     | Risk of Rework   | Best For
No clarification            | 0 minutes           | High (30-40%)    | Low-risk features
AI clarification            | 5 minutes           | Medium (10-15%)  | Complex features
Full requirements gathering | 2-3 days            | Low (5%)         | Critical systems
Build and iterate           | 2 hours + refactor  | Guaranteed some  | Prototypes

The uncomfortable truth: For many features, building with assumptions and refactoring is faster than perfect upfront clarity.

Practical Patterns That Work

Pattern 1: The Quick Gut Check

This ticket seems vague. 
What's the ONE thing I should clarify before coding?

Use when: You have 2 minutes

Pattern 2: The Assumption List

I'm building [feature] with these assumptions:
- [assumption 1]
- [assumption 2]
Which of these would cause problems if wrong?

Use when: You're already building

Pattern 3: The Rework Predictor

If I build this wrong, what would be expensive to change later?

Use when: Deciding whether to start or clarify

What AI Gets Wrong (And Right)

AI Often Generates Irrelevant Questions:

  • "What about GDPR compliance?" (for internal tools)

  • "How should this scale to 1M users?" (you have 100)

  • "What's the internationalization strategy?" (US-only product)

Filter aggressively. Most AI questions are noise.

AI Is Good At Spotting:

  • State management complexity you missed

  • Error cases you forgot

  • Integration points you overlooked

  • Security considerations

Use AI as a checklist, not an oracle.

The Team Context Problem

The examples you see in articles like this one always assume perfect context:

  • "Our stack: React, TypeScript, PostgreSQL..."

  • "Design system: Mobile-first, accessible..."

  • "Future scope: SSO integration planned..."

Reality: You might know the stack. Everything else is scattered across Slack, old docs, and the PM's head.

Practical approach:

Building login with 2FA for our React app.
What should I check with the PM?
(I don't know the business context yet)

This acknowledges what you don't know instead of pretending you have perfect context.

Building Better Team Habits

Instead of using AI to compensate for unclear tickets:

  1. Create a ticket template

    • Required: What success looks like

    • Required: What shouldn't change

    • Optional: Implementation suggestions

  2. Do 5-minute kickoffs

    • Complex features only

    • Developer asks their top 3 questions

    • PM clarifies or says "use your judgment"

  3. Document assumptions in code

    // Assuming: 2FA is optional for all users
    // If this changes: need to add forced enrollment flow
    const is2FARequired = false;
    
  4. Make reversal cheap (see the flag sketch after this list)

    • Use feature flags

    • Keep PRs small

    • Deploy frequently
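
A minimal sketch of the feature-flag idea, assuming a hand-rolled flag map rather than any particular flag service:

const flags = {
  require2FAEnrollment: false, // the documented assumption above; flip it when the PM decides otherwise
} as const;

type FlagName = keyof typeof flags;

function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

// Branch on the flag at the decision point so reversing the call is a one-line change.
function nextStepAfterPasswordLogin(userHasEnrolled2FA: boolean): "verify-2fa" | "enroll-2fa" | "home" {
  if (userHasEnrolled2FA) return "verify-2fa";
  return isEnabled("require2FAEnrollment") ? "enroll-2fa" : "home";
}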

The Bottom Line

Using AI for requirements clarification is one tool in your toolkit. It's useful maybe 20% of the time, crucial maybe 5% of the time, and unnecessary the rest.

The skill isn't writing perfect clarification prompts—it's quickly identifying:

  1. What could cause expensive rework

  2. Whether clarification is worth the time

  3. Which assumptions are safe to make

Most tickets don't need 20 questions. They need 0-3 good questions, smart defaults for everything else, and the ability to iterate quickly when you're wrong.

Remember: Perfect requirements are expensive. Good enough requirements plus fast iteration usually wins.

Note: This is one perspective on using AI for requirements. Your mileage will vary based on team size, domain complexity, and organizational culture. The principle—think before you build—is universal. The practice—using AI for clarification—is situational.

This upgrade comes from Stage 1: Frame the Feature of Prompt-to-Production. Read the full 6-stage system.
