Tailwind CSS is more popular than it's ever been. Downloads are up. Adoption is up. The framework is embedded in millions of projects worldwide. And in January 2026, its creator Adam Wathan laid off 75% of his engineering team because revenue dropped 80%.
That's the number that should make every developer stop scrolling. Not a startup that failed to find product-market fit. A wildly successful open source project, used by more people than ever, financially collapsing because AI tools severed the connection between users and the project itself.
The Invisible Tax
The term "vibe coding" started as a joke. Andrej Karpathy coined it in February 2025 to describe the experience of building software by talking to an AI agent, barely reading the code it produces. Within months, it stopped being a punchline. GitHub Copilot crossed 20 million users by mid-2025. Cursor grabbed 18% of the paid AI coding market. Gartner now forecasts 90% of enterprise developers will use AI coding assistants by 2028.
Here's what none of those adoption numbers capture: every time a developer asks Claude or Copilot to generate Tailwind classes instead of visiting the docs, they skip the page where Tailwind sells its commercial products. Every time a developer asks ChatGPT how to configure a library instead of filing an issue, the maintainer loses a signal about what's broken. Every time an AI agent assembles five open source packages into a working app, zero of those packages get a star, a bug report, or a sponsorship click.
A January 2026 economics paper from CEU and Kiel put formal math behind what Wathan was living through. Researchers Miklos Koren, Gabor Bekes, Julian Hinz, and Aaron Lohmann built an equilibrium model showing that vibe coding creates a "demand diversion channel." In the short run, AI lowers development costs and spurs new project creation. That's the part everyone celebrates. But in the long run, when maintainers depend on direct user engagement to fund their work, widespread AI mediation erodes that revenue. In their model, the feedback loops that once amplified growth now accelerate contraction. The same network effects that made open source powerful make its decline self-reinforcing.
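To see why that channel bites, it helps to make the loop concrete. What follows is not the paper's equilibrium model, just a deliberately crude Python sketch with invented parameters: direct traffic is the share of users who still visit the project, revenue tracks that traffic, maintenance tracks revenue, and adoption grows or shrinks with maintenance quality.

```python
# Toy sketch of the demand-diversion feedback loop. Not the CEU/Kiel model;
# every number and relationship here is invented for illustration only.

def simulate(years=8, ai_mediation_growth=0.15):
    popularity = 1.0   # relative size of the user base
    ai_share = 0.1     # fraction of users whose usage is mediated by AI tools

    for year in range(years):
        direct_traffic = popularity * (1 - ai_share)  # users who still show up
        revenue = direct_traffic                       # revenue tracks engagement
        maintenance = revenue                          # maintenance is funded by revenue
        # Adoption keeps compounding while the project is well maintained,
        # and starts shrinking once maintenance falls below what growth needs.
        popularity *= 1 + 0.2 * (maintenance - 0.5)
        ai_share = min(0.95, ai_share + ai_mediation_growth)
        print(f"year {year}: popularity={popularity:.2f}, "
              f"traffic={direct_traffic:.2f}, revenue={revenue:.2f}")

simulate()
```

Run it and revenue collapses years before popularity does: the adoption curve still looks healthy while the income that pays for maintenance has already cratered, and only later does the contraction show up in usage itself.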
Wathan's experience is the proof of concept. Documentation traffic dropped 40% from its peak, even as Tailwind became three times more popular than when traffic was highest. When someone submitted a pull request proposing an /llms.txt endpoint to make Tailwind's docs more accessible to AI tools, Wathan closed it the day after the layoffs. "Making it easier for LLMs to read our docs just means less traffic to our docs," he wrote, "which means less people learning about our paid products and the business being even less sustainable."
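For context, llms.txt is a proposed convention (llmstxt.org) for publishing a plain-markdown index of a site's documentation at a well-known path so AI tools can ingest it cheaply. A Tailwind-flavored version might look roughly like this; the paths and descriptions are illustrative, not the contents of the actual pull request:

```markdown
# Tailwind CSS

> Utility-first CSS framework. Compose designs from small, single-purpose classes.

## Docs

- [Installation](https://tailwindcss.com/docs/installation): set up Tailwind with your build tool
- [Utility classes](https://tailwindcss.com/docs/styling-with-utility-classes): the core styling workflow
- [Theme configuration](https://tailwindcss.com/docs/theme): customize colors, spacing, and fonts
```

Wathan's objection is exactly what the format implies: an index like this lets a model answer questions without anyone ever loading the pages where the paid products are advertised.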
The Quality Myth
The standard defense of AI coding tools is that they make developers faster. The data says otherwise, at least for the people who matter most.
METR ran a randomized controlled trial in early 2025 with 16 experienced open source developers working on their own repositories, the kind of massive, mature codebases that form critical infrastructure. Each developer tackled issues randomly assigned as either AI-allowed or AI-prohibited. The result: developers using AI tools took 19% longer to complete their tasks. Not faster. Slower. The AI Coding Productivity Paradox extends this analysis to organizational-level productivity, revealing how individual speed gains can mask systemic costs across teams and codebases.
The kicker is the perception gap. Before starting, developers predicted AI would speed them up by 24%. After finishing (and measurably losing time), they still believed AI had helped by 20%. The tools feel fast while actually burning hours. Developers accepted fewer than 44% of AI-generated suggestions, spending significant time reviewing, testing, and ultimately rejecting code that didn't fit their codebase.
CodeRabbit's December 2025 analysis of 470 open source pull requests found AI-coauthored code introduced 1.7 times more issues than human-written code. Security vulnerabilities were up to 2.74 times more frequent. Performance regressions hit 8 times the rate. Readability problems tripled. The code compiles, passes a casual glance, and hides defects that surface weeks later.
Then there's Lovable, the "vibe coding" platform that hit unicorn status by letting anyone build full-stack apps through chat. Security researchers scanned 1,645 apps from Lovable's showcase and found that 170 of them (10.3%) had critical security flaws exposing user data through misconfigured database policies. Names, emails, API keys, payment details, personal debt amounts. The apps looked finished. They worked. They just leaked everything.
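Lovable apps typically sit on a Supabase-style Postgres backend, where the browser talks to the database directly with a public "anon" key and Row Level Security policies are the only thing standing between that key and the data. A minimal sketch of the failure mode, using the supabase-py client and a hypothetical profiles table (the URL, key, and table names below are invented):

```python
# Sketch of the misconfiguration behind the Lovable findings: a table readable
# with nothing but the public anon key because Row Level Security was never
# enabled. All names and keys are hypothetical.
from supabase import create_client

# These two values ship to every visitor's browser; they are not secrets.
supabase = create_client(
    "https://example-project.supabase.co",
    "public-anon-key",
)

# With RLS disabled (or no SELECT policy), this returns every row in the
# table: names, emails, API keys, payment details.
rows = supabase.table("profiles").select("*").execute()
print(len(rows.data), "profiles exposed")

# The fix lives in the database, not the client. In Postgres / Supabase:
#   ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
#   CREATE POLICY "read own profile" ON profiles
#     FOR SELECT USING (auth.uid() = user_id);
```

The generated app behaves identically either way, which is exactly why a casual glance doesn't catch it.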
This is the benchmark trap applied to an entire development methodology. The surface metrics look great. Completion rates are high. Time-to-first-commit drops. But the metrics that matter, the ones measuring security, maintainability, and long-term code health, are moving in the wrong direction.
Death by a Thousand Slop PRs
Open source maintainers aren't just losing revenue. They're drowning in garbage.
Daniel Stenberg, who maintains curl (installed on virtually every internet-connected device on Earth), shut down the project's bug bounty program in January 2026. The reason: 95% of HackerOne submissions in 2025 weren't valid. People were feeding AI tools the project's source code, collecting whatever the model flagged as a vulnerability, and submitting it for bounty money without verifying anything. Stenberg spent years building that program. He killed it to preserve his team's sanity.
He's not alone. Tldraw, the React canvas library, started automatically closing all pull requests from external contributors after a surge in AI-generated submissions that were "formally correct" but showed no understanding of the codebase. Ghostty, the terminal emulator, escalated from requiring AI disclosure to banning AI-assisted contributions entirely unless tied to an accepted issue. Drive-by AI PRs get closed without review.
These projects aren't anti-AI. They're drowning. Every junk submission takes time to evaluate, time that maintainers, 60% of whom work unpaid, don't have. The curl team was reviewing fake vulnerability reports instead of shipping patches. Tldraw's maintainers were explaining architectural decisions to people who'd never read the codebase and wouldn't stick around to learn.
Stack Overflow's collapse is the canary in a different mine. Question volume has dropped 76% since ChatGPT launched. Monthly submissions fell from peaks above 200,000 to under 50,000 by late 2025. The 2024 Stack Overflow Developer Survey showed 67.5% of developers now use AI for "searching for answers." The knowledge commons that trained the AI in the first place is withering because nobody's feeding it new questions.
That creates a problem the CEU/Kiel paper identifies clearly: AI models are trained on existing open source code and documentation. If the flow of new contributions slows, the models gradually train on stale knowledge. The training data problem becomes circular. AI consumes the commons. The commons shrinks. The AI gets worse. Nobody notices until it's too late because the AI still sounds confident.
The Counterargument, and Why It's Half Right
There's a version of this story with a happy ending. When professional developers use AI as a power tool rather than a replacement for understanding, the demand-diversion problem disappears. The CEU/Kiel paper itself notes this: if AI doesn't mediate final-user consumption and only reduces development costs, you get higher entry, better quality, and more OSS, not less.
Some experienced developers genuinely use AI for tedious, repetitive tasks while maintaining deep engagement with the projects they depend on. They still file issues. They still read docs. They still sponsor maintainers. For these developers, AI is like a better autocomplete, not a substitute for thinking.
But that's not what's scaling. What's scaling is 20 million Copilot users generating 46% of their code through AI suggestions. What's scaling is vibe coding platforms turning non-developers into app builders who've never heard of Row Level Security. What's scaling is the pattern where AI sits between the developer and the project, silently eating the engagement that keeps open source alive.
The prompt engineering ceiling turns out to apply to more than just chatbots. There's a ceiling on how much you can extract from AI coding tools before you hit the wall of context, judgment, and architectural understanding that the tools don't have. Experienced developers hit that ceiling and adjust. New developers don't know it exists.
What Breaks Next
Google disclosed in October 2024 that more than a quarter of all new code at the company was AI-generated. GitHub reports that, by the end of 2024, 29-30% of Python functions were coming through Copilot. These numbers are climbing fast. SWE-bench scores jumped from Claude 2 solving 1.96% of issues to Claude 4.5 solving 74.2% in barely two years.
The tools are getting better. The economic pressure on open source is getting worse. And the gap between "AI can write code" and "AI can sustain the infrastructure that code depends on" keeps widening.
If you're building on open source, which means if you're building software at all, the odds are you already use AI tools. The real question is whether the business models that keep your dependencies alive will survive the next two years. Tailwind's 80% revenue drop happened while the project was at peak popularity. Curl's maintainer is fielding fake vulnerability reports instead of fixing real ones. Stack Overflow's knowledge base is a fraction of what it was three years ago.
The fix isn't going to come from telling developers to stop using AI. That ship sailed. It's going to require what the CEU/Kiel researchers call "major changes in how maintainers are paid." Per-download fees. AI-company licensing deals. Foundation funding tied to dependency graphs rather than documentation traffic. Something structural, because the current model where open source survives on goodwill, sponsorship pages, and documentation-adjacent product sales is already failing.
The irony is almost perfect. The AI tools that promise to make every developer more productive are strip-mining the shared infrastructure that makes development possible. Vibe coding doesn't just consume open source. It consumes the conditions that produce it.
Sources
Research Papers:
- Economics of Vibe Coding: AI Tools and the Sustainability of Open Source Software -- Koren, Bekes, Hinz, Lohmann / CEU and Kiel (2026)
- Early 2025 AI-Experienced Open Source Developer Study -- METR (2025)
Industry / Case Studies:
- Tailwind Labs Lays Off 75% of Its Engineers Thanks to Brutal Impact of AI -- DevClass (2026)
- State of AI vs. Human Code Generation Report -- CodeRabbit (2025)
- Vibe Break Chapter IV: The Lovable Inadvertence -- Desplega AI
- Curl Ends Bug Bounty -- The Register (2026)
- Stay Away From My Trash -- tldraw
- AI Killed the Stack Overflow Star: The 76% Collapse in Developer Q&A -- Allstacks
Commentary:
- How Vibe Coding Is Killing Open Source -- Hackaday (2026)
- AI Slopageddon and the OSS Maintainers -- RedMonk (2026)