Level 2: AI-Curious – What to Do When Nothing Sticks
Prompting once in a while doesn't make AI part of your system.
Level 2 developers are open to AI — they've seen Copilot autocomplete something useful, or asked ChatGPT to help with a regex. But nothing sticks.
There's no workflow, just occasional pokes at the tool. And when the tool breaks or gives confusing output, the habit never forms.
What this usually looks like:
You've used AI for a few isolated tasks — but still code the hard stuff manually.
You prompt reactively, without saving or reusing what worked.
You assume inconsistency is just "how these tools are."
Why it matters
If every prompt starts from scratch, you never build intuition or improve. Useful patterns get lost in browser history, and you have no way to manage the model's variance. Reusing a prompt controls a key variable: it becomes much easier to tell what's broken, whether it's the model, the prompt, or your specific inputs.
The missed pattern
Prompt reuse is the turning point. When one prompt works across multiple tickets, that's when devs start to systematize.
The difference isn't the AI tool—it's the feedback loop. When you save what worked, note what didn't, and iterate on the same prompt across similar problems, you start building prompt intuition. That's when AI shifts from 'random helper' to 'predictable tool.'
Try this today
The first step to building a system is to treat your prompts like engineering assets. We'll create a reusable, professional-grade prompt for generating unit tests.
1. Create Your Prompt Log
Start a simple `prompts.md` file in a personal project, or create a private GitHub Gist. This is your lab notebook.
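There's no single right format; the point is that every entry is findable and comparable later. A minimal skeleton might look like this (the headings and fields are just one suggested convention):

```markdown
# Prompt Log

## Unit Test Generation (TypeScript/Jest)
- **Date:** YYYY-MM-DD
- **Model:** GPT-4o
- **Prompt:** <paste the full prompt>
- **Output:** <paste the AI's full response>
- **Notes:** <what worked, what needs refinement>
```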
2. Craft a Professional-Grade Prompt
Instead of a simple command, use a structured template. This gives the AI clear guardrails and dramatically improves the consistency and quality of the output. Pick a helper function you've written and use this template with a model like Claude 3.5 Sonnet or GPT-4o:
# SYSTEM PROMPT
You are a Senior QA Engineer specializing in writing robust, easy-to-read unit tests for TypeScript applications. You follow the Arrange-Act-Assert (AAA) pattern strictly.
# USER PROMPT
Write a comprehensive Jest test suite for the following TypeScript function.
## CONSTRAINTS
1. **Framework:** Use Jest with `describe` and `it` blocks.
2. **Pattern:** Adhere strictly to the Arrange-Act-Assert (AAA) pattern for test structure.
3. **Naming:** Test descriptions (`it` blocks) must be clear and descriptive of the specific case being tested.
4. **Coverage:** Ensure you cover the following categories:
- Happy path (e.g., a standard, valid discount)
- Edge cases (e.g., discount greater than base price, zero or negative values, null/undefined inputs)
- Type safety (e.g., what happens if inputs are not numbers)
## FEW-SHOT EXAMPLE (Example of a good test)
Here is an example of the style I want:
```typescript
it('should return the base price when the coupon amount is zero', () => {
  // Arrange
  const basePrice = 100;
  const coupon = { amountOff: 0 };

  // Act
  const finalPrice = calculateDiscountedPrice(basePrice, coupon);

  // Assert
  expect(finalPrice).toBe(100);
});
```
## FUNCTION TO TEST
```typescript
export function calculateDiscountedPrice(basePrice: number, coupon?: { amountOff?: number }): number {
  if (!coupon?.amountOff || coupon.amountOff <= 0) return basePrice;
  return Math.max(0, basePrice - coupon.amountOff);
}
```
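Output will vary between models and runs, but a good response should look roughly like the sketch below. The import path is illustrative; only the function itself comes from the prompt above.

```typescript
import { calculateDiscountedPrice } from './pricing'; // illustrative path

describe('calculateDiscountedPrice', () => {
  it('should subtract the coupon amount from the base price', () => {
    // Arrange
    const basePrice = 100;
    const coupon = { amountOff: 25 };

    // Act
    const finalPrice = calculateDiscountedPrice(basePrice, coupon);

    // Assert
    expect(finalPrice).toBe(75);
  });

  it('should return 0 when the discount exceeds the base price', () => {
    // Arrange
    const basePrice = 50;
    const coupon = { amountOff: 80 };

    // Act
    const finalPrice = calculateDiscountedPrice(basePrice, coupon);

    // Assert
    expect(finalPrice).toBe(0);
  });

  it('should return the base price when no coupon is provided', () => {
    // Arrange
    const basePrice = 100;

    // Act
    const finalPrice = calculateDiscountedPrice(basePrice);

    // Assert
    expect(finalPrice).toBe(100);
  });
});
```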
3. Log the Results
Paste the prompt and the AI's full output into your log. Then, add structured notes. This feedback loop is the most critical part of the process:
- **Goal:** Speed up test coverage for core business logic.
- **Model Used:** GPT-4o
- **Worked Well:** The AAA pattern was followed perfectly. The edge case where the discount exceeds the base price was correctly tested to return 0.
- **Needs Refinement:** The model didn't generate a test for a `null` coupon object, only `undefined`. I'll add `null` explicitly to the Coverage requirements next time.
4. Reuse the Prompt
On your next ticket with similar logic, don't start from scratch. Copy this entire prompt, swap out the function, and see if your refinements improve the output. That reuse cycle is the shift from "trying AI" to "engineering with AI."
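For example, acting on the "Needs Refinement" note above, your next iteration might tighten the Coverage constraint so the same gap can't recur (this wording is just one option):

```markdown
4. **Coverage:** Ensure you cover the following categories:
   - Happy path (e.g., a standard, valid discount)
   - Edge cases (e.g., discount greater than base price, zero or negative values)
   - Null handling: test both `null` and `undefined` coupon objects explicitly
   - Type safety (e.g., what happens if inputs are not numbers)
```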
What's next
Once you have 3-4 logged prompts that work reliably, you're ready for Level 3.
Graduating from Gists: While a markdown file is a great start, as you build a library of trusted prompts you'll want to move them into a version-controlled directory within your team's repo (e.g., `/prompts`). This makes them shareable, reviewable, and a true team asset.
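As an illustration of where this leads, a template in the repo can be assembled programmatically. The sketch below assumes a hypothetical `prompts/unit-tests.md` file containing a `{{FUNCTION_SOURCE}}` placeholder; both names are conventions you'd define yourself:

```typescript
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Build the unit-test prompt from the team's version-controlled template.
// Assumes prompts/unit-tests.md exists and contains a {{FUNCTION_SOURCE}} marker.
export function buildTestPrompt(functionSource: string): string {
  const templatePath = join(process.cwd(), 'prompts', 'unit-tests.md');
  const template = readFileSync(templatePath, 'utf-8');

  // Drop the function under test into the shared template.
  return template.replace('{{FUNCTION_SOURCE}}', functionSource);
}
```

Because the template lives in the repo, a change to the prompt goes through the same review as a change to the code it tests.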
You're now building the foundation for turning these ad-hoc patterns into systematic, automatable workflows.