15 Years of Finding Bugs Taught Me How to Build Software

2026-03-09 · 8 min · Oleg Neskoromnyi

It's March 2026, and I'm sitting here looking back at over three years of experiments, side projects, failed demos, and tools I never thought I'd be capable of building. Somewhere between late 2022 and now, I went from being the person who finds bugs in other people's code to the person who builds software — and then finds bugs in his own.

I never wrote any of it down properly. The lessons, the failures, the moments where everything clicked — they're scattered across project repos and half-finished notes. This series is me going back and documenting it while it's still fresh enough to be useful. Not just for me, but for any QA professional wondering if they could build something too.

Let me start from the beginning.

15 Years on the Other Side of the Wall

I've spent 15 years breaking other people's software. Finding the edge case nobody thought of. Writing the test plan that catches the bug before it ships. Sitting in postmortem meetings explaining how a single unchecked input brought down a payment flow on a Friday afternoon.

After 15 years, you develop a sense for it. You look at a feature spec and you already know where it's going to fail. You know that the developer will handle the happy path and forget what happens when the user double-clicks, submits an empty form, or loses connection halfway through a transaction. You've seen it hundreds of times.
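Those failure modes are concrete enough to sketch in code. This is a hypothetical form handler, not anything from a real project; the validation rules and the `already_submitted` flag are assumptions made for illustration, but they show the checks a happy-path implementation typically skips.

```python
def submit_form(payload, already_submitted=False):
    """Validate a form submission; reject empty input and double-submits."""
    if already_submitted:
        # Guards against the double-click the happy path forgets.
        return {"ok": False, "error": "duplicate submission"}
    if not payload.get("email", "").strip():
        # Guards against the empty (or whitespace-only) form.
        return {"ok": False, "error": "email is required"}
    return {"ok": True}

# Happy path
assert submit_form({"email": "user@example.com"})["ok"]
# The edge cases: empty form, double submission
assert not submit_form({"email": "   "})["ok"]
assert not submit_form({"email": "user@example.com"}, already_submitted=True)["ok"]
```

The lost-connection case is the same idea one layer down: the server has to treat a retried request as a possible duplicate, not a fresh one.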

But knowing where software breaks and being able to build something better — those were always two different worlds. I had ideas. Lots of them. Better workflows, smarter tools, solutions to problems I saw every single day. But I was the QA guy. I filed the bugs. I didn't write the code.

Then, on November 30, 2022, OpenAI released ChatGPT. About a week later, my manager at General Motors asked me: "Hey, have you heard about ChatGPT?" I hadn't. But the moment he described it, something clicked. The early adopter in me woke up. I had to try it that same day.

The First Experiment

I'm an early adopter by nature. When a new tool shows up, I don't read five articles about it and wait for consensus. I open it, I try it, I push it until it breaks. That's how I've always operated — in QA and in life.

So while most people were still figuring out what ChatGPT even was, I wasn't thinking about content generation or chatbots. I was thinking about testing. I sat down and asked it to write test cases for an API endpoint. The first result was generic — textbook stuff. But when I fed it the actual requirements, the validation rules, the response codes — the output got useful. Fast.

Within a few weeks, I had built a simple agent — a structured prompt that could take API documentation and produce something I could actually use at work. It wasn't sophisticated. It was a text file with a prompt template and a ChatGPT window. But it was the first time I had built something that solved a real problem.
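To make "a text file with a prompt template" concrete, here is a minimal sketch of what such an agent can look like. The template wording, function name, and example endpoint are my illustrative assumptions, not the original tool; the point is that the real spec details (rules, response codes) are what turn generic output into something usable.

```python
# Illustrative sketch: a prompt template filled with real API spec details.
TEMPLATE = """You are a senior QA engineer.
Given this API endpoint specification, write test cases covering
the happy path, validation failures, and boundary values.

Endpoint: {method} {path}
Validation rules: {rules}
Expected response codes: {codes}

Output each test case as: name, preconditions, steps, expected result."""

def build_prompt(method, path, rules, codes):
    """Fill the template with the specifics that make the output useful."""
    return TEMPLATE.format(
        method=method,
        path=path,
        rules="; ".join(rules),
        codes=", ".join(str(c) for c in codes),
    )

prompt = build_prompt(
    "POST", "/payments",
    rules=["amount > 0", "currency is a valid ISO 4217 code"],
    codes=[201, 400, 422],
)
```

Paste the result into a ChatGPT window and you have the whole "agent": no framework, no API calls, just a repeatable way to feed the model the context it needs.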

That feeling — I made this, and it works — was new. After 15 years of evaluating what other people built, I was building.

From Prompt to Product

That simple agent became Sarah — a Custom GPT for test management that went through 13 versions. Each version fixed what the previous one got wrong. It was the same process I'd used my entire career in QA: define expected behavior, test against reality, find the gaps, fix them, repeat.

I realized something during those iterations. The skills I'd built over 15 years weren't just useful for testing. They were useful for building.

When you've spent years writing test plans, you think in terms of requirements and acceptance criteria. When you've spent years finding edge cases, you anticipate failure modes before they happen. When you've spent years in postmortems, you understand what goes wrong when systems aren't designed carefully.

I wasn't becoming a developer. I was a QA professional who could now build things — and I was bringing 15 years of understanding why software fails into the process of creating it.

What AI Actually Unlocked

I want to be specific about what changed, because it wasn't magic.

Before AI tools, building software required years of learning a programming language, a framework, a build system, deployment pipelines. The barrier wasn't ideas or understanding — it was implementation. I knew what I wanted to build. I couldn't translate that into code.

AI tools like Cursor and Claude Code removed the implementation barrier. Not completely — I still need to understand what the code does, review it, test it, and fix what the AI gets wrong. But the gap between "I know what this should do" and "I have working code that does it" went from months to hours.

That gap is where QA professionals have an unfair advantage. Many of the people learning to build with AI come from a design or business background. They know what they want but don't know how to verify it works correctly. I come from the opposite direction. I know exactly how to verify software. I know what to test, what to question, what to distrust. I just couldn't write the code before.

Now I can.

The barrier to building software was never understanding — it was implementation. AI removed the implementation barrier. What's left is exactly what QA professionals are trained for: knowing what "correct" looks like and verifying that you actually got there.

Three Years of Experiments — And Counting

Since that first ChatGPT session in late 2022, I haven't stopped experimenting. Some projects worked. Some failed spectacularly. Some taught me lessons about scope and discipline that I'm still applying.

I built tools for my own work. I built tools for other QA professionals. I built this blog — the site you're reading right now — from scratch using Next.js, a framework I'd never touched before AI made it accessible.

Every project follows the same pattern: I have an idea, I use AI to build it, I apply my QA instincts to test and improve it, and I learn something new about both building and testing in the process. Sometimes the QA mindset saves me — I catch bugs that would have shipped if I didn't think in edge cases. Sometimes it slows me down — I over-test a prototype that just needs to exist so I can evaluate the concept.

Finding that balance is its own skill. One I'm still developing.

Why I'm Writing This Now

It's been over three years. I've accumulated enough experiments, enough failures, enough lessons that they're worth documenting properly. Not as a highlight reel — as an honest account of what happens when a QA professional with 15 years of experience starts building with AI.

This post is the start of a series I'm calling QA Who Builds. Each post will cover a real project, a real experiment, or a real lesson from this journey. I'll share what worked, what broke, and what I learned — including the things that are uncomfortable to admit.

Some of what's coming:

  • How I learned to test AI-generated code after shipping a bug I should have caught
  • What happened when I did a live AI demo at a meetup and it failed in front of everyone
  • The projects that actually saved companies time and money — and how QA thinking made that possible
  • The moments where my testing instincts held me back instead of helping

I'm not writing this as someone who has it figured out. I'm writing it as someone who's in the middle of it. Learning, building, failing fast, and moving forward.

If you're a QA professional who's been watching AI tools evolve and wondering whether you could build something yourself — you can. The skills you've developed testing software are more valuable for building it than you might think.

The wall between testing and building is gone. What you do with that is up to you.


Are you a QA professional who's started building with AI? I'd love to hear what you're working on and what you've learned — reach out on the contact page.
