aiengineering

Vibe Coding Is Addictive — And It Will Wreck Your Product

📅 2026-02-09 · ⏱️ 7 min · ✍️ By Oleg Neskoromnyi

I didn't notice it happening at first. I had a clear idea — a simple companion tool for creating Xray test cases. My team was using the Xray Cloud interface directly, and it was slow. I wanted a faster workflow: a form, a local draft system, and a button to import to Xray. That's all RayDrop needed to be.

I opened Cursor, described what I wanted, and watched the code appear. Twenty minutes later I had the form working. An hour later I had drafts saving locally. By the end of the evening I was building an entire Xray entity browser — Test Plans, Test Sets, Test Executions, Preconditions — with nested folder navigation.

Nobody asked for that. The tool was supposed to create test cases.

That's vibe coding. And it's a trap.

What Vibe Coding Actually Feels Like

If you haven't tried it yet — vibe coding is building software by describing what you want to an AI, and letting it generate the code. You talk, it builds. You adjust, it rebuilds. The feedback loop is almost instant.

And that loop is the problem.

Traditional coding has natural friction. You hit a bug, you debug for an hour, you lose momentum, you step back and ask yourself: do I actually need this feature? That friction is annoying, but it serves a purpose. It forces you to evaluate whether what you're building is worth the effort.

Vibe coding removes that friction almost entirely. Want to add a multi-phase import progress tracker with step-by-step validation? Describe it, get it in a few minutes. Want dark mode with a full theme context? Done before your coffee gets cold. Want to build code detection that auto-identifies JSON, JavaScript, and TypeScript in test data fields and wraps them in syntax highlighting? Sure, why not — it only takes five minutes.
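That last feature is a good example of how cheap these additions feel. A naive version of content-type detection for a test data field might look something like the sketch below. This is purely illustrative — not RayDrop's actual code — and the heuristics are my own assumptions about how such a feature usually starts:

```typescript
type DetectedLang = "json" | "typescript" | "javascript" | "text";

// Illustrative sketch of auto-detecting the language of a test data field.
// Order matters: valid JSON is checked first, because the TypeScript
// heuristic (a colon followed by a type name) would also match JSON keys.
function detectLang(field: string): DetectedLang {
  const src = field.trim();
  try {
    JSON.parse(src);
    return "json";
  } catch {
    // Not JSON, fall through to the keyword heuristics.
  }
  if (/\binterface\s|\bas\s+const\b|:\s*(string|number|boolean)\b/.test(src)) {
    return "typescript";
  }
  if (/\bfunction\s|=>|\bconst\s|\blet\s/.test(src)) {
    return "javascript";
  }
  return "text";
}
```

Five minutes to generate — and then it's yours to maintain: every false positive in those regexes becomes a bug report.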

Each individual addition feels small. Each one feels free. But they compound. And by the time you look up from the screen, your simple tool has become a platform.

The dangerous moment in vibe coding isn't when the AI produces bad code. It's when it produces good code — fast. Because then there's nothing stopping you from adding the next thing, and the next thing, and the next.

The Dopamine Loop

Here's what I think is happening psychologically. Vibe coding gives you a hit of completion every few minutes. You describe a feature, you see it work, you feel the satisfaction of building something. That cycle — describe, generate, see result — is incredibly fast compared to traditional development.

In normal coding, you might get that feeling of completion a few times a day. A function works. A test passes. A feature ships. With vibe coding, you get it every two to three minutes. Your brain starts chasing that feeling. One more feature. One more improvement. One more thing that would be cool.

I built a Matrix rain easter egg animation in RayDrop. A full canvas animation with test-themed keywords raining down the screen. In a test case management tool. Because the AI made it easy, and the pull of "well, it's already done" was strong enough that I kept it.

That's not engineering. That's compulsion.

How Over-Engineering Sneaks In

Nobody sits down and decides to over-engineer their product. It happens feature by feature, each one feeling reasonable in the moment.

Here's how it went with RayDrop:

Stage one — I solved the original problem. A form for writing test cases, a local draft system, and bulk import to Xray. This is what my team needed. The core took maybe a day. Good.

Stage two — I added reasonable improvements. Input validation, loading states, a project selector so the tool could work across multiple Jira projects. The kind of thing that makes the difference between a prototype and something you'd actually give to your team. Still fine.

Stage three — I started imagining users I don't have. What if someone needs to browse Xray entities without leaving the tool? What if we need per-project customization with colors and functional areas? What if test data fields contain code and we should detect the language automatically? Each question was hypothetical. Each answer added code. This is where vibe coding gets dangerous, because the AI doesn't push back. It doesn't say "your team needs a test case form, not a platform." It just builds whatever you ask for.

Stage four — I'm now maintaining a system with 37 React components, 16 UI components, a 400-line import hook with multi-phase state tracking, a full Xray entity browser, a TC Review page, rate limiting on a locally-running app, and 30 test files. The original need was a form and a button.
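To make "multi-phase state tracking" concrete, here's a hypothetical sketch of the state machine such an import hook grows into — not RayDrop's actual 400-line hook, just the shape of it, with phase and event names I've invented for illustration:

```typescript
// Hypothetical import state machine: each phase carries its own data,
// which is exactly how a "simple import button" turns into 400 lines.
type ImportState =
  | { phase: "idle" }
  | { phase: "validating"; done: number; total: number }
  | { phase: "uploading"; done: number; total: number }
  | { phase: "complete"; importedKeys: string[] }
  | { phase: "error"; step: string; message: string };

type ImportEvent =
  | { type: "START"; total: number }
  | { type: "ITEM_VALIDATED" }
  | { type: "VALIDATION_DONE" }
  | { type: "ITEM_UPLOADED" }
  | { type: "UPLOAD_DONE"; importedKeys: string[] }
  | { type: "FAIL"; step: string; message: string };

function importReducer(state: ImportState, event: ImportEvent): ImportState {
  switch (event.type) {
    case "START":
      return { phase: "validating", done: 0, total: event.total };
    case "ITEM_VALIDATED":
      // Ignore stray events that arrive in the wrong phase.
      return state.phase === "validating"
        ? { ...state, done: state.done + 1 }
        : state;
    case "VALIDATION_DONE":
      return state.phase === "validating"
        ? { phase: "uploading", done: 0, total: state.total }
        : state;
    case "ITEM_UPLOADED":
      return state.phase === "uploading"
        ? { ...state, done: state.done + 1 }
        : state;
    case "UPLOAD_DONE":
      return { phase: "complete", importedKeys: event.importedKeys };
    case "FAIL":
      return { phase: "error", step: event.step, message: event.message };
  }
}
```

Every phase, every guard, every error branch was individually reasonable. Together they're a state machine nobody asked for.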

Before asking the AI to add something, write down who specifically asked for it. If the answer is "nobody, but it would be nice" — close the tab. Go outside. The feature can wait until someone actually needs it.

The AI Won't Tell You to Stop

This is the core issue. Human collaborators push back. A teammate will say "do we really need this?" A product manager will say "that's not in scope." A tech lead will say "let's ship what we have and see if anyone asks for more."

The AI says "sure, here's the implementation."

Every single time. It doesn't have opinions about scope. It doesn't know your roadmap. It doesn't care that you've been building for six hours straight and haven't eaten. It will generate a multi-phase import progress tracker with individual step validation and linked item tracking with error details, and it will do it enthusiastically.

That's not the AI's fault. It's doing what it's designed to do — generate code based on your description. The problem is that it removes the last natural checkpoint between "I had an idea" and "I built it." In traditional development, the effort of implementation was that checkpoint. Building something hard took long enough that you'd reconsider whether it was worth building at all.

When building is nearly free, you have to supply your own restraint. And that's harder than it sounds when you're in the flow.

What I Started Doing Differently

I don't have a perfect system for this, but I have a few rules I follow now.

Write the spec before you start. Not a detailed document — just a list of what the tool needs to do. Three to five bullet points. When I'm tempted to add something, I check the list. If it's not there, I don't build it. RayDrop's spec should have been: create test cases, save drafts locally, import to Xray. Three bullet points. Not an Xray entity browser with folder navigation.

Set a time limit. I give myself a fixed window — usually two hours for a small tool, half a day for something bigger. When the time is up, I stop adding features and start cleaning up what I have. The constraint forces prioritization.

Ship the ugly version. This is the hardest one. Vibe coding makes it so easy to polish that you end up polishing things nobody will notice. The slightly misaligned button. The loading animation that could be smoother. The error message that could be more helpful. Ship it rough. See if anyone even uses it before you make it beautiful.

Delete aggressively. If I built something during a vibe coding session and nobody's used it in two weeks, I remove it. Not comment it out. Delete it. The AI can regenerate it in minutes if it turns out someone actually needed it. Dead code is cognitive overhead, even when it was free to create.

The hardest skill in vibe coding isn't getting the AI to build things. It's deciding what not to build. That's a discipline, not a technical skill, and no amount of tooling will develop it for you.

When Vibe Coding Works

I don't want to make it sound like vibe coding is all downside. It's not. It's genuinely useful in the right context.

Prototyping is the obvious one. When you're exploring an idea and you need to see it working before you can evaluate it — vibe coding is perfect. Build the thing in an hour, show it to your team, get feedback, decide if it's worth building properly.

Throwaway tools are another. Scripts that process a one-time data migration. A quick form for collecting feedback at a workshop. A demo for a presentation. Things with a clear expiration date, where over-engineering doesn't matter because the whole thing gets deleted next week.

Learning is the third. When I'm exploring a new library or a pattern I haven't used before, vibe coding lets me see working examples immediately. I describe what I want, the AI shows me how the library handles it, and I learn the concepts faster than I would from documentation alone.

The pattern: vibe coding works best when the output is temporary, exploratory, or disposable. It gets dangerous when you're building something that's going to live in production and accumulate users.

The Real Cost of Free Features

Every feature you add — even if it took two minutes to generate — has an ongoing cost. Someone has to understand it when they read the code. Someone has to test it when the adjacent code changes. Someone has to maintain it when the dependency updates. Someone has to explain it to the new team member.

These costs don't show up on the day you build the feature. They show up three months later when you're trying to fix a bug and you can't understand why the codebase is so complex for what should be a simple tool.

RayDrop has 37 component files, 5 server route modules, a custom hook that's 400 lines long, and 30 test files. The original requirement was a form that creates test cases and imports them to Xray. That's what vibe coding does when you don't watch it. The AI doesn't know the difference between a weekend project and an enterprise platform. It builds whatever you ask for with the same level of structure and abstraction. And if you keep asking, it keeps building.

The Question to Keep Asking

Every time I'm about to describe a new feature to the AI, I try to pause and ask: if I had to build this by hand, would I still build it?

If the answer is no — if the only reason I'm building it is because the AI makes it easy — then I probably shouldn't build it. Easy to create is not the same as worth creating. The effort filter is gone, but the need filter still matters.

Vibe coding is a powerful tool. But powerful tools without discipline produce impressive messes. RayDrop taught me that.

The skill isn't in making the AI produce more. It's in knowing when to stop talking to it.


Have you caught yourself over-engineering something during a vibe coding session? I'd be curious to hear where you drew the line — or didn't. Reach out on the contact page to share your experience.
