Vibe Coding Isn't Dying—It's Fragmenting

An analysis of the current state of AI-powered development tools and the evolution of the vibe coding ecosystem.

I recently came across an article that started with “Is Lovable dying? Web traffic has declined almost 50% from 35.4M in June to 19.1M in September.” The author raised several valid points about the decline of vibe coding, and a few of them have probably contributed to the drop in traffic. I also appreciated the real data and how the author laid out his arguments. In my opinion, though, the decline in vibe coding traffic is not evidence of failure; it’s evidence of market segmentation and the natural maturation of an overhyped category. It also made me want to capture my take on the AI field more broadly.

What Does “Vibe Coding” Even Mean?

We must agree on what “vibe coding” means before we go any further. The term has become overloaded and overhyped, meaning very different things to different audiences. To an architect, vibe coding sets up entirely different expectations than it does to someone new to code, and the perceived limitations carry different implications for each.

When Andrej Karpathy coined the term, he had something specific in mind: a workflow where experienced practitioners let AI drive while they supervise loosely. “Code vibing” might have been more accurate—vibing while coding, not coding while vibing. The misnomer stuck, and with it came inflated expectations about who these tools serve and what they can realistically deliver.

I do think the article’s generalization is stretched, though. The author draws a connection between the drop in these apps’ web traffic and a drop in belief in vibe coding. In fact, many very large companies are doing quite the opposite: they are increasing their investment in it. We see signs of this especially in the enterprise sector, where companies are going all-in on trying to solve the “no-code” problem. No-code has long been attractive to SaaS companies looking to enable non-technical users to build websites and apps. However, this is a very complex space; the promise of truly universal no-code remains unfulfilled. Enterprises believe AI provides the perfect toolset to finally solve it. The jury is still out on the cost to the enterprise, however.

The Chasm AI Hasn’t Crossed

Reflecting on Geoffrey Moore’s “chasm” concept, let’s discuss market maturity. The message of vibe coding has been loud and clear: “anyone can create apps.” The industry has not yet delivered on that promise, and AI coding has not reached the masses.

The fragmentation of the tooling landscape is not theoretical—I’ve experienced it firsthand. I recently set out to build CueTime from scratch and get it published on the App Store, writing virtually no code by hand. I succeeded. But here’s what’s telling: it took three different tools to get there. I started with Replit, migrated to Cursor when I hit limitations, and eventually settled into Claude Code for the majority of the work, with occasional returns to Cursor for specific tasks.

This is the current state of AI-assisted development. The tools are powerful enough to produce real, shippable software—but no single product covers the full journey. Users are stitching together workflows across multiple platforms, each with different strengths and gaps. That’s not a mature ecosystem. It’s a fragmented one, which reinforces my point: AI tooling hasn’t crossed the chasm because there’s no unified product experience yet.

There is another angle that I haven’t seen explored. While the AI field shows a lot of promise and has attracted an enormous amount of capital (trillions of dollars, including infrastructure investments and market valuations), I would argue that we haven’t crossed the chasm yet. AI is still not mainstream. While leadership at large companies sees the opportunity and has been communicating the desire to integrate AI down the chain, there is still quite a bit of skepticism, deservedly, around AI-powered tooling among the people in the trenches.

There is something else, though, beyond the anecdotal comments about AI tooling not reaching people: societal rhythms, inertia if you will, that trickle down to companies and, more generally, to large groups. AI adoption, like the adoption of any tooling, depends heavily on these rhythms. For example, while companies try to use AI ad hoc, it has not really penetrated every layer of society outside of a few specific use cases. Organizations have not adjusted to being fully AI-enabled either. Team structures, and the way domain knowledge and information in general are exchanged, have not yet changed to take advantage of AI. The expectations placed on product teams haven’t truly changed.

Moreover, I assert that the industry is building AI-powered tooling for a customer who hasn’t been defined yet. The problem exists and the roles exist, but AI-powered tooling like Cursor can now perform tasks that once spanned product, UX, front-end, back-end, QA, and DevOps. AI tools already pose, and will continue to pose, a question for leadership at large companies: should some categories of employees be replaced? Which roles are displaced and who takes over is for every organization to figure out. One thing is for sure: product team structures will need to change if organizations want to capture the full advantages of AI tooling.

The Economics Don’t Support the Narrative

I don’t necessarily agree with the connection the author drew between power users eating the compute, companies needing to monetize, and the drop in usage. Sure, there is a drop, but casual users are not the core value to companies like Lovable; I would really hope that is not the case. The real money is in offering compute infrastructure and locking users into the ecosystem. Marking up token costs is also not a viable long-term monetization strategy; it’s a game inference providers can typically win, Anthropic’s Claude Code being one example. The drop is due to what I would call seasonal usage. To users who need something built, it is ultimately an investment-versus-return question. “Is it worth my money?” is the real question, and the current state of vibe coding answers “yes” for people with some technical background (engineers, architects, technical product managers). Tools like Lovable are invaluable to them.

Guardrails and the Cost of Mistakes

When it comes to the viral story of Replit’s misstep, let’s put aside the question of letting AI act directly in a production environment, which should never happen. Guardrails, meaning UX that goes far beyond the basics to ensure users understand the repercussions of the generated code, are a must in any development environment.

Another question I feel is often overlooked when non-deterministic AI systems are used outside system-critical environments is assessing the cost of a mistake. Comparing the expected reward against the economic (or even human-life) consequences of a catastrophic loss, weighted by its probability, is a well-established strategy.

I studied actuarial science, which is fundamentally about quantifying the cost of adverse events. The same principle applies here: we must match our tooling choices to our risk tolerance. In low-risk systems, we can experiment more freely with general tools like Lovable and Replit. In system-critical environments, AI tooling needs to provide code guarantees.
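The actuarial framing above can be sketched as a simple expected-loss comparison. This is a minimal illustration with made-up numbers and function names of my own, not a model from any real tool:

```python
# Illustrative sketch: match tooling choice to risk tolerance by
# comparing the expected loss of an AI-generated mistake to the reward.

def expected_loss(p_failure: float, cost_of_failure: float) -> float:
    """Expected cost of an adverse event: probability times impact."""
    return p_failure * cost_of_failure

def worth_automating(reward: float, p_failure: float, cost_of_failure: float) -> bool:
    """Use the AI tool only when the reward outweighs the expected loss."""
    return reward > expected_loss(p_failure, cost_of_failure)

# Low-risk prototype: small blast radius, so experimentation pays off.
print(worth_automating(reward=5_000, p_failure=0.05, cost_of_failure=2_000))      # True
# System-critical service: same failure rate, catastrophic cost.
print(worth_automating(reward=5_000, p_failure=0.05, cost_of_failure=1_000_000))  # False
```

The same 5% failure rate is acceptable in one context and disqualifying in the other; only the cost term changes.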

The Case for Specialization

There is something to be said about the adoption of AI tooling. The tools themselves are plentiful, popping up so fast that it is hard to keep track of them. Hundreds of tools now get attention from investors, and many more are less well known. So we can say that yes, AI coding is losing its novelty.

This is what we do at Code Metal. Our mission is to accelerate critical industries with provably correct, automated code translation and optimization. I foresee this trend of tooling-ecosystem specialization continuing.

The author mentions another tool, Emergent, and wonders whether it will take off. History suggests new entrants often follow a similar arc—initial spike driven by marketing spend, then correction as users encounter limitations. Whether Emergent breaks this pattern remains to be seen. The truth is that building generic end-to-end tools that work out of the box is extraordinarily difficult. The foundational innovation will likely come from LLM makers, while tooling companies compete on integration and workflow.

Problems Waiting to Be Solved

The space is still young and has many issues to overcome. In my interactions with AI-based tools, I’ve identified a set of notable problems waiting to be solved. Aside from model capabilities, they are the key to full AI adoption. Until then, AI will remain a set of disjointed, one-off tools.

Domain Knowledge. Domain knowledge is the lubricant that keeps teams efficient. Accumulating domain expertise and exchanging it across individuals, teams, and organizations is something AI needs to solve, and AI is already very good at synthesizing information. Yet the way knowledge is generated and passed down has not changed for as long as we can remember. The natural extension of this problem is protecting partial networks of domain knowledge from unauthorized access.

User Experience. User experience is, to me, the biggest pain point. Helping users understand changes is critical: being presented with a wall of generated code overwhelms a user. Understanding the relationship between the prompt and the generated code is essential for users to grasp what has been generated and why. Different tools are trying to solve this problem, but the industry hasn’t made much progress.

Provable Guarantees. Provably correct generated code is the answer to the gaps in AI-generated tests. Users need guarantees based not only on the amount of test coverage, although that is important too, but also on the generated code being demonstrably correct through formal verification methods.
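As a toy illustration of the difference between testing and proving, here is a minimal Lean sketch, entirely my own example and unrelated to any tool mentioned here. An `example` checks one input, the way a generated test would; a `theorem` guarantees the property for every input.

```lean
-- A function we want guarantees about.
def double (n : Nat) : Nat := n + n

-- Test-style evidence: checks exactly one input.
example : double 3 = 6 := rfl

-- Formal guarantee: the property holds for every natural number.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double; omega
```

No amount of `example`-style checks can cover all inputs; the theorem does, which is what “provably correct” means in practice.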

Thinking Models. Another issue, called out by Andrej Karpathy, is making models truly “think” rather than act as glorified autocomplete machines. Generally, bigger models are better models, but they come with “memory baggage” that influences generated results, and regardless, this approach will not work for edge devices. The industry needs to produce smaller models that are capable of deep reasoning.

Understanding True Cost. Measuring usage in tokens is an indirect metric that doesn’t really tell an organization how much it pays to build a product with these tools. There needs to be a well-understood relationship between the efficiency gained and the cost incurred.
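One way to start building that relationship is to translate token usage into dollars and set it against the value of the engineering time saved. The prices, volumes, and hours below are made-up assumptions for illustration, not real vendor rates:

```python
# Illustrative sketch: token usage -> dollar cost -> net value versus
# engineering hours saved. All numbers are hypothetical.

def token_cost(input_tokens: int, output_tokens: int,
               usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Dollar cost of a billing period at per-million-token prices."""
    return (input_tokens / 1e6) * usd_per_m_input \
         + (output_tokens / 1e6) * usd_per_m_output

def net_value(hours_saved: float, usd_per_hour: float, cost: float) -> float:
    """Efficiency gained (hours saved at a loaded rate) minus tooling cost."""
    return hours_saved * usd_per_hour - cost

# Hypothetical month: 50M input / 10M output tokens at $3 / $15 per million.
cost = token_cost(50_000_000, 10_000_000, usd_per_m_input=3.0, usd_per_m_output=15.0)
print(f"monthly token cost: ${cost:.2f}")               # $300.00
print(f"net value: ${net_value(40, 120.0, cost):.2f}")  # $4500.00
```

Even this crude model answers a question raw token counts cannot: whether the tooling paid for itself that month.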

The Road Ahead

Vibe coding, and AI more broadly, is not dying; it’s evolving, and fast. This dynamic is not specific to AI: it is characteristic of an overstimulated market, typical of free, unbounded capital flows. Despite all the problems and existing limitations, AI is here to stay; the genie is out of the bottle. I feel like I am sitting in the front row of an AI train going 200 miles an hour and still accelerating. Unlike many people I talk to, I am staying optimistic. We should not think about the AI field in binary categories. It’s an iterative process with quantum-leap opportunities, and with enough patience and a little bit of faith, we’ll be able to build amazing things with AI.