Level 2: AI-Curious – What to Do When Nothing Sticks

Experimenting with AI occasionally? Here's how to find your first reliable use case.

You're AI-curious. You've seen Copilot autocomplete something useful, or asked ChatGPT to help with a regex. But nothing sticks.

There's no workflow, just occasional pokes at the tool. And when it doesn't work perfectly, the habit doesn't form.

This is completely normal. You're figuring out where AI fits in your work, and that takes experimentation.

What AI-Curious Actually Looks Like

  • You've used AI for a few isolated tasks — regex, error messages, maybe some boilerplate

  • You prompt reactively when stuck, not proactively as part of your flow

  • Sometimes it helps brilliantly, sometimes it wastes your time

  • You're not sure which tasks are worth the effort

Why Nothing Sticks (The Real Reasons)

It's not you — it's the approach.

When you use AI randomly for different tasks, you can't build trust or intuition. Every interaction feels like starting over. You don't know whether poor results are because:

  • The task isn't suited for AI

  • Your prompt needs adjustment

  • You're using the wrong model

  • Your expectations are off

The solution isn't complex prompt engineering. It's finding one simple, repeatable use case where AI consistently saves time.

The Simplest System That Works

Forget elaborate frameworks. Here's what actually helps AI-curious developers build their first reliable pattern:

Step 1: Pick ONE Boring Task

Choose something you do weekly that's tedious but straightforward:

  • Writing test cases for simple functions

  • Generating sample JSON data

  • Writing commit messages

  • Creating basic documentation

  • Formatting SQL queries

Why boring? Because boring tasks have clear success criteria. You'll know immediately if AI helped.
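
To make this concrete, suppose you pick "writing test cases for simple functions." A good first target is a small, self-contained utility. Something like this (a hypothetical example, not code from any real project):

// Hypothetical utility: small, dependency-free, obvious correct behavior
function truncate(text, maxLength) {
  // Treat null/undefined as empty rather than throwing
  if (text == null) return '';
  if (text.length <= maxLength) return text;
  // Reserve three characters for the '...' suffix
  return text.slice(0, maxLength - 3) + '...';
}

No framework, no setup, and you can verify the output by eye. That's what clear success criteria look like.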

Step 2: Use the Same Simple Prompt

Don't overthink it. Start with something like:

Write Jest tests for this function:
[paste function]

Include: happy path, edge cases, null checks

That's it. No system prompts, no few-shot examples, no complex instructions.
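
For a function like the hypothetical truncate above, that prompt tends to produce something along these lines. This is a sketch of typical output, not a guarantee, and it assumes truncate is exported from its module and imported into the test file:

// Roughly the shape of output the Step 2 prompt tends to return
describe('truncate', () => {
  // Happy path
  it('returns short strings unchanged', () => {
    expect(truncate('hi', 10)).toBe('hi');
  });

  // Edge case
  it('shortens long strings and appends an ellipsis', () => {
    expect(truncate('hello world', 8)).toBe('hello...');
  });

  // Null check
  it('returns an empty string for null input', () => {
    expect(truncate(null, 5)).toBe('');
  });
});

If you get roughly this, the prompt is working. If you don't, that's useful data for Step 4.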

Step 3: Use It Five Times

Use this exact prompt for similar tasks over the next week. Don't optimize yet — just observe:

  • Does it save time?

  • Is the output good enough?

  • What would make it better?

Step 4: Make One Improvement

After five uses, make ONE adjustment based on what you noticed:

  • Add "use describe/it blocks" if structure was inconsistent

  • Add "keep it concise" if output was too verbose

  • Add "include error cases" if coverage was lacking

One change. Test again.
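
If structure was the problem, for instance, the refined prompt might read:

Write Jest tests for this function:
[paste function]

Include: happy path, edge cases, null checks
Use describe/it blocks

The last line is the only change. Everything else stays identical, so you can tell exactly what the refinement did.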

A Real Example (Without the Fluff)

Task: Writing tests for utility functions

Week 1 Prompt:

Write tests for this function: [code]

Result: Works 60% of the time, inconsistent structure

Week 2 Prompt:

Write Jest tests for this function: [code]
Include edge cases

Result: Better structure, works 80% of the time

Week 3 Prompt:

Write Jest tests for this function: [code]
Include: edge cases, null inputs, empty arrays
One test per 'it' block

Result: Consistent, useful 90% of the time

Time to reach "good enough": 3 weeks, 15 total uses, 2 refinements

Common Traps to Avoid

Trap 1: Starting Too Complex

❌ 300-word prompts with role-playing and complex instructions
✅ Start with one sentence and build from there

Trap 2: Switching Tasks Too Often

❌ Using AI for different things each time
✅ Same task, same prompt, building consistency

Trap 3: Optimizing Too Early

❌ Perfecting your prompt before you know if the task is worth automating
✅ Get it working, then improve

Trap 4: Expecting Perfection

❌ Abandoning AI after one bad output
✅ Looking for "good enough to save time"

What Success Actually Looks Like

You know you've found your first reliable pattern when:

  • You reach for AI automatically for that specific task

  • You trust the output enough to use it with minor edits

  • It genuinely saves time (even accounting for review)

  • You could teach someone else your approach in 30 seconds

This might be your only AI use case for months — and that's perfectly fine.

Moving Beyond AI-Curious (When You're Ready)

Once you have ONE reliable use case:

  1. Use it consistently for a month

  2. Find a SECOND task (don't abandon the first)

  3. Apply the same simple process

  4. Build your repertoire slowly

You don't need to:

  • Use AI for everything

  • Build complex systems

  • Save every prompt

  • Become an AI evangelist

You just need: one or two reliable patterns that save you time.

The Reality Check

Many developers stay AI-curious indefinitely, using AI occasionally when it makes sense. That's not failure — it's selective adoption based on actual value.

Signs you're exactly where you should be:

  • AI helps with specific tasks but isn't essential

  • You can work with or without it

  • You're not stressed about "falling behind"

  • Your code quality hasn't suffered

Try This This Week

  1. Monday: Pick one boring, repetitive task

  2. Tuesday-Friday: Use the same simple prompt each time you do that task

  3. Weekend: Note if it saved time overall

If yes, you've found your first pattern. If no, try a different task next week.
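
And if commit messages end up being that different task, the same one-sentence principle applies. A hypothetical starter prompt:

Write a one-line commit message for this diff:
[paste git diff output]

Use the imperative mood ("Add", "Fix", "Update")

Same shape as before: one task, one instruction, refine after five uses.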

That's it. No complex systems, no prompt libraries, no pressure.

What's Next?

Once you have 2-3 tasks where AI reliably helps, you might naturally evolve toward more systematic use (Level 3). Or you might not — and that's equally valid.

The goal isn't to reach a higher level. It's to find what genuinely helps your work.

Some developers will build elaborate AI workflows. Others will use it for commit messages and nothing else. Both are successful patterns if they serve your needs.

Remember: AI adoption isn't a race. Finding one small use case that saves you 10 minutes a week is better than forcing complex workflows that add cognitive overhead.
