The Prototype Paradox: Why First-Generation Vibe Coding Platforms Created an Industry of Fixers

By Ruban Phukan

The Promise vs. The Reality

In February 2025, Andrej Karpathy coined the term "vibe coding" to describe a new approach to software development: describing what you want to an AI and letting it generate the code. Within months, platforms like Lovable, Bolt, and Replit had collectively raised over a billion dollars in funding, with Lovable alone reaching a $6.6 billion valuation and $200 million in annual recurring revenue by the end of 2025.

The promise was compelling: build software 20x faster than traditional development, no coding experience required. Companies like Klarna, Uber, and Zendesk signed on as customers. Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated.

But alongside this explosive growth, a quieter industry emerged. Search Fiverr for "fix Lovable" or "Bolt bug fix" today, and you'll find dozens of freelancers advertising services specifically to repair code generated by these platforms. One listing's headline captures the phenomenon perfectly: "Is Your AI-Generated App Buggy, Slow, or Not Production Ready? You're not alone."

This isn't a failure of vibe coding as a concept. It's the predictable outcome of platforms optimized for one metric—speed to first demo—while their business models depend on something else entirely: the iterations required to bridge the gap between prototype and production.


The Credit Economy: Where Speed and Completion Diverge

Understanding why fixers have become necessary requires examining how these platforms make money.

Lovable's Credit Model

Lovable charges $25/month for 100 credits. Each AI interaction consumes credits based on complexity—creating an app structure costs 2 credits, changing a button's border radius costs 0.5 credits. According to a Superblocks analysis, "If your prompts are vague, you'll burn through credits with extra iterations."

This creates a predictable dynamic. Users arrive with an idea, generate an impressive first prototype in minutes, then encounter the long tail of implementation: authentication edge cases, error handling, database optimizations, and security configurations. Each fix attempt consumes more credits.
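
To make the dynamic concrete, here is a rough sketch of credit burn under this model. The scaffold and small-edit costs are Lovable's own published examples quoted above; the one-credit average per debug prompt is our assumption, purely for illustration.

```typescript
// Back-of-the-envelope credit burn on a metered plan.
// The 2-credit and 0.5-credit figures are Lovable's published examples;
// the per-debug-prompt cost is an assumed average for illustration.
const SCAFFOLD_COST = 2;        // "create an app structure"
const SMALL_EDIT_COST = 0.5;    // "change a button's border radius"
const DEBUG_PROMPT_COST = 1;    // assumed average per "please fix" prompt

function creditsUsed(smallEdits: number, debugPrompts: number): number {
  return SCAFFOLD_COST + smallEdits * SMALL_EDIT_COST + debugPrompts * DEBUG_PROMPT_COST;
}

// A clean build: scaffold, 20 tweaks, 10 fixes -> 22 credits.
console.log(creditsUsed(20, 10));

// A debugging spiral: the same app plus 120 "please fix" prompts
// -> 132 credits, past the $25/month plan's 100-credit allowance.
console.log(creditsUsed(20, 120));
```

The prototype is cheap; the long tail of fixes is where the credits go.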

One developer documented spending 25 hours on a vibe-coding project with "countless 'please debug and fix' messages." Another wrote on dev.to: "I love Lovable and have been using it for the past months. But especially regarding credit usage, I am blind. Whenever I perform an operation, I don't know how many credits will be used for it. Result? My credits die faster than I expect."

Bolt's Token Economy

Bolt uses token-based pricing: $20/month for 10 million tokens, scaling to $200/month for 120 million tokens. A Trickle blog analysis found that "token consumption has turned out to be much more aggressive than expected. A user's Pro plan lost 1.3 million tokens in a single day. The situation gets worse where developers have burned through 7-12 million tokens just trying to fix simple errors."

One Medium post from a Bolt user explained the escalation: "As your project grows, you'll find yourself forking out a LOT of credits per call. It becomes unsustainable... Bolt cannot fix certain things, and you'll find it overwriting good code with bad, and wasting credits at the same time."

Replit's Effort-Based Pricing

Replit introduced "effort-based pricing" in July 2025, charging based on compute resources consumed rather than flat checkpoint fees. The Register reported user complaints following the Agent 3 release: "I typically spent between $100-$250/mo. I blew through $70 in a night at Agent 3 launch... One prompt brute forced its way through authentication, redoing auth and hard resetting a user's password. Other prompt redesigned the complete app in a new UI that it made up. I stopped immediately after that because the one prompt was $20 that ruined my UI."


The Sunk Cost Spiral

These pricing models create what behavioral economists call a sunk cost trap. Users who have already invested $50-100 in credits to reach 80% completion face a choice: abandon their work or buy more credits to push through the final 20%. The platforms benefit either way: abandoned projects don't consume support resources, and continuing users generate more revenue.

The math reveals the tension. If platforms were optimized to minimize iterations, their per-user revenue would decline. A user who achieves production readiness in 20 interactions is worth less than one who requires 100. The business model isn't aligned with user success; it's aligned with iteration volume.
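
To put rough numbers on it, assume for illustration that every iteration averages one credit on the $25-per-100-credits plan:

```typescript
// Illustrative only: assumes an average of one credit per iteration
// on a $25-per-100-credits plan.
const DOLLARS_PER_CREDIT = 25 / 100;

const revenuePerUser = (iterations: number) => iterations * DOLLARS_PER_CREDIT;

console.log(revenuePerUser(20));  // efficient user: $5 of credits consumed
console.log(revenuePerUser(100)); // struggling user: $25, a full month's allowance
```

Under any assumptions like these, revenue grows linearly with iteration count, so the user who struggles is worth five times the user who ships.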

This isn't necessarily intentional exploitation. These platforms face real costs: every AI call to Claude, GPT, or Gemini costs them money. But the result is a system where the path from prototype to production becomes increasingly expensive for the user, even as the platforms' marketing continues to emphasize the speed of initial creation.


The Production Readiness Gap

The emergence of dedicated fixers reveals what these platforms struggle to deliver: code that works beyond the demo.

Security Vulnerabilities at Scale

In May 2025, security researcher Matt Palmer published CVE-2025-48757, documenting a critical vulnerability in Lovable-generated applications. His analysis of 1,645 Lovable-created apps found that 170 (10.3%) had security flaws exposing user data. The vulnerable endpoints leaked email addresses, phone numbers, API keys, payment details, and personal information.

The root cause was misconfigured Row Level Security (RLS) policies in Supabase databases, a backend integration Lovable prominently features. As Palmer's disclosure stated: "We believe that many developers using Lovable are unaware that they're exposing sensitive information through these database endpoints."

Lovable's response included a "security scanner" feature, but according to Palmer's follow-up analysis, "it merely checks for the existence of any RLS policy, not its correctness or alignment with application logic. This provides a false sense of security."
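
To see why existence isn't correctness, consider a minimal sketch using the supabase-js client. The project URL, anon key, and table schema below are hypothetical, but the query pattern is the kind of unauthenticated read the disclosure describes.

```typescript
import { createClient } from '@supabase/supabase-js';

// Every Supabase-backed app ships its project URL and anon key in the
// browser bundle, so anyone can construct this client.
// (URL, key, and table schema here are hypothetical.)
const supabase = createClient('https://example-project.supabase.co', 'public-anon-key');

// With RLS disabled -- or "enabled" via a permissive policy such as
// USING (true), which an existence-only check would wave through --
// this unauthenticated query returns every user's row.
const { data, error } = await supabase
  .from('profiles')
  .select('email, phone, payment_details');

// A correct policy scopes reads to the row's owner, e.g. in SQL:
//   CREATE POLICY "own rows only" ON profiles
//     FOR SELECT USING (auth.uid() = user_id);
```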

Former Facebook security chief Alex Stamos commented that the odds of a non-technical user configuring database permissions correctly are "extremely low."

Expert Assessments

ZDNET surveyed professional developers on vibe coding. One stated: "Vibe coding today excels at creating 'web toys', personal tools with a narrow focus and minimal security concerns, without the stakes and risks of deploying them in a production environment."

Another was more direct: "I think vibe coding is a phrase invented by people who think that AI-generated code is safe and secure... Every single vibe coding project I've seen has been insecure, not able to answer a use case, or just emulating better things that exist already."

The SaaStr founder documented building five production applications with vibe coding tools and concluded: "If you want to build a real B2B application that handles real users, collects real data, and charges real money, budget a month of work. Sixty percent of that time will be QA and testing."

Sixty percent. Not the "20x faster" marketing promise, but an acknowledgment that vibe-coded applications require extensive manual verification before they're safe to deploy.

The 80% Problem

Multiple independent sources converge on the same insight: vibe coding gets you roughly 80% of the way to a working application quickly. It's the remaining 20% (the edge cases, error handling, security hardening, and production optimization) that consumes disproportionate time and money.

A LogRocket analysis put it this way: "No doubt AI tools generate incredible things... but there is a touch to results like this; they don't come by just prompting most times."

Or as one developer wrote: "If you can't debug it, you don't really own it."


The Fixer Economy

The gap between prototype and production has created a market opportunity. Fiverr now has a dedicated "Vibe Coding" category specifically for fixing AI-generated applications. The category description: "Fiverr connects you with vetted freelancers who specialize in taking no-code prototypes and turning them into fully functional, polished products."

Examining specific listings reveals the scope of demand:

"Fix lovable ai bug lovable ai dev website saas mvp supabase replit debug web app" - $70/hr. The seller states: "If your Lovable AI app, SaaS MVP, or web app isn't performing as expected, I can debug, repair, and optimize it today."

"Fix vibe coded bug code website webapp lovable replit webflow bolt" - $20/hr. The listing opens: "Is Your AI-Generated App Buggy, Slow, or Not Production Ready? You're not alone—AI coding tools like Bolt, Lovable, Replit, Cursor AI, v0, and Base44 can generate fast MVPs, but they often leave behind unstable, bloated, or insecure code."

"Base44 app, base44 fix, base44 mvp, lovable dev" - $30/hr. A customer review captures the pattern: "I was in a jam with an app I tried to build on base44 and quickly realized I was in over my head. Ken saved the day."

On Upwork, job postings seek "lovable experts" to ensure applications are "functional on the lovable cloud base with end-to-end functionality." One $650 fixed-price listing detailed the work required: "Ensure robust stability for scalability... Security OTP / MFA / Email code... Payment gateway (Stripe integration)... UI / UX fixes."

The existence of this fixer economy isn't an indictment of vibe coding itself. It's evidence that the first generation of platforms created tools optimized for different outcomes than many users actually need.


The Structural Misalignment

Why did these platforms evolve this way? The answer lies in incentives.

Vibe coding platforms compete on time-to-first-impression. Marketing emphasizes minutes-to-prototype, AI-generated demos, and the magic moment when a user sees their idea rendered as working software. This is what drives sign-ups, and sign-ups drive valuations.

But credit-based and token-based pricing means revenue scales with iteration count. A platform that delivers production-ready code in fewer iterations earns less per user. The business model creates pressure to optimize the beginning of the journey (to attract users) while the ending (production readiness) remains expensive.

This isn't a critique of the people building these platforms; they're solving genuinely hard problems while managing real compute costs. But users deserve to understand the dynamic: platforms marketed around speed-to-prototype make money on the distance to production.


The Case for Aligning Incentives: Why We Built Avery

We created Avery.dev because we believe the fundamental premise of vibe coding was right: that AI should make building production software accessible to more people. But the business model of first-generation platforms was wrong.

Unlimited Iterations for a Flat Fee

Avery charges a flat monthly fee for unlimited AI iterations. This single structural change realigns our incentives with yours. We don't make more money when you get stuck in debugging loops. We don't benefit when the AI overwrites working code and you have to regenerate. We succeed when you reach production, because satisfied users who ship working software are users who stay.

When iterations are unlimited, we're economically motivated to minimize them, to get the AI right the first time, to prevent regressions, to build in the guardrails that stop you from needing a fixer.

Optimizing for Completion, Not Commencement

First-generation platforms optimized for the impressive first demo. We optimized for the finish line.

This means:

  • Built-in production patterns: Authentication, error handling, and security configurations that work correctly by default, rather than settings you discover are misconfigured after deployment

  • Regression prevention: Changes that don't break what's already working

  • Production readiness checks: Automated verification that code meets deployment standards before you ship (a sketch of one such check follows below)
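
As one illustration of that last point, a deploy gate in this spirit might refuse to ship while any public table has row level security switched off. This is a minimal sketch of the idea, not Avery's actual implementation:

```typescript
import { Client } from 'pg';

// Minimal sketch of a pre-deploy check (not Avery's actual implementation):
// fail the deploy if any ordinary table in the public schema has row level
// security disabled -- the class of misconfiguration behind CVE-2025-48757.
async function checkRls(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query(`
    SELECT c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'public' AND c.relkind = 'r' AND NOT c.relrowsecurity
  `);
  await client.end();
  if (rows.length > 0) {
    console.error('Tables without RLS:', rows.map((r) => r.relname).join(', '));
    process.exit(1); // block the deploy
  }
}

checkRls();
```

A real gate would also need to exercise policies against representative requests, since, as noted above, a policy can exist and still be wrong.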

We believe the next generation of AI coding platforms will be measured not by how quickly they generate a prototype, but by how reliably they deliver software that works in production. The fixer economy exists because that gap wasn't being closed. We're building to close it.


The Bottom Line

The first generation of vibe coding platforms proved something important: AI can meaningfully accelerate software creation. Lovable's $200 million ARR and 100,000+ daily new projects demonstrate real demand. The technology works.

But the business models created friction where the user needed acceleration. Credit-based pricing turned the journey from prototype to production into a progressively expensive endeavor. The result is a parallel industry of human fixers, charging anywhere from $20 an hour to $650 fixed-price contracts, to bridge a gap the platforms' own pricing models made expensive to cross.

The lesson isn't that vibe coding failed. It's that the alignment of incentives matters as much as the technology. Platforms that profit from iterations will naturally generate more of them. Platforms that profit from successful completions will optimize for fewer obstacles along the way.

We built Avery to be the latter. Unlimited iterations. Flat pricing. Because the goal was never just to generate impressive prototypes; it was to ship production software that works. Not merely vibe coding, but viable coding.
