The CVE That Exposed 170 Lovable Apps — And What It Means for Your Vibe-Coded App

In December 2025, security researchers discovered something troubling: a platform-level flaw in one of the most popular AI app builders had exposed the data of hundreds of real businesses. Not in a lab. Not in a test environment. In production.
The vulnerability, assigned CVE-2025-48757, affected applications built with Lovable — a platform that has become a go-to tool for founders and non-technical entrepreneurs who want to ship fast. The flaw wasn't in a single app. It was in the platform itself: missing Row Level Security (RLS) on Supabase databases, which meant that anyone who knew the pattern could read, modify, or delete data from any Lovable-built app that hadn't manually locked things down.
Over 170 production applications were exposed. Some contained customer names, emails, and behavioral data. The builders of those apps had no idea.
This Keeps Happening
Lovable wasn't an outlier. It was a pattern.
Moltbook, another AI-built application, leaked 1.5 million authentication tokens and 35,000 email addresses. The Tea App exposed 72,000 images and 1.1 million user records. A single Lovable-built app shipped in a weekend exposed 18,000 users' personal data because the AI generated client-side database queries with no access controls.
The CSA (Cloud Security Alliance) catalogued more than 2,000 critical vulnerabilities across 1,400 AI-generated applications in 2025. CodeRabbit analyzed 470 GitHub pull requests and found AI co-authored code introduced 2.74x more XSS vulnerabilities and 1.91x more insecure object references compared to human-only code.
Veracode's 2025 research found that 45% of AI-generated code contained security flaws. Nearly half. The question isn't whether your AI-built app has a vulnerability — it's whether anyone has found it yet.
Why AI Code Is Especially Vulnerable
Here's the uncomfortable truth: AI coding tools are only as secure as the code they've been trained on. And a huge portion of existing code — including millions of lines used to train these models — was written before modern security practices were standard.
When an AI generates a login form, it draws on patterns from thousands of login forms. Many of those patterns include the same vulnerabilities that have plagued web development for decades. The AI doesn't know it's writing insecure code. It knows it's writing code that works.
The problem is compounded by who is using these tools. An estimated 63% of active vibe coding users are non-developers. They don't recognize insecure patterns because they've never been trained to spot them. A developer sees eval(userInput) and knows it's dangerous. A non-developer sees code that works and moves on.
The Specific Failure: Missing Row Level Security
The Lovable CVE was a Supabase RLS problem. Supabase is a popular backend choice for Lovable apps — it gives you a real database, auth, and file storage with minimal setup. But Supabase's Row Level Security (RLS) — the feature that controls who can see and modify which rows — is not automatically enabled. You have to configure it manually.
When Lovable generated the code for connecting to Supabase, it often didn't generate the RLS policies. The database was there, the connection worked, and data flowed — but there was no access control layer. Anyone who understood the structure could query any table directly through the browser.
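For a concrete sense of what was missing, here is a minimal sketch of the configuration a Supabase table needs. The table and column names (`profiles`, `user_id`) are illustrative, not taken from the CVE writeup; `auth.uid()` is Supabase's built-in helper that returns the ID of the currently authenticated user.

```sql
-- Without this line, RLS is off and anyone holding the public anon
-- API key can read and write every row through the auto-generated API.
alter table public.profiles enable row level security;

-- Allow users to read only their own row.
create policy "Users can view own profile"
  on public.profiles for select
  using (auth.uid() = user_id);

-- Allow users to update only their own row.
create policy "Users can update own profile"
  on public.profiles for update
  using (auth.uid() = user_id);
```

Once RLS is enabled, any request that no policy matches simply returns zero rows rather than an error. That fail-closed default is exactly the layer the exposed apps were missing.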
This isn't a Lovable-specific bug. It's a pattern that shows up across every AI code generator: the code they produce is functional, but it often omits the security configuration that a human developer would instinctively add.
What This Means for You
If you built an app with Lovable, Bolt, v0, Replit, Cursor, or any other AI tool in the last two years, the odds are significant that something exploitable is in your codebase right now. That's not an accusation — it's a statistical reality based on the vulnerability rates documented across the industry.
The good news: it's fixable. And it's usually not as hard as people fear.
A real security audit of an AI-built app tends to surface the same short list of issues:
- Exposed database rows — missing RLS policies on Supabase, Firebase, or other BaaS backends
- Hardcoded API keys — secrets baked into frontend code, visible to anyone who opens DevTools
- Broken authentication flows — session management that works but has edge cases an attacker can exploit
- Missing rate limiting — APIs that can be hammered without throttling
- Unvalidated user input — entry points that accept data without checking type, length, or format
These aren't exotic vulnerabilities. They're the same categories of issues that have been in web security literature for twenty years. AI just makes it easier to ship them at scale.
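Taking the last item as an example: some input validation can be pushed into the database itself, where it holds even when the frontend is bypassed entirely. A minimal sketch in Postgres, using a hypothetical signups table (the table and column names are illustrative):

```sql
-- Reject malformed rows at the database layer, regardless of which
-- client inserted them.
create table signups (
  id         bigint generated always as identity primary key,
  email      text not null
             check (email ~* '^[^@[:space:]]+@[^@[:space:]]+\.[^@[:space:]]+$'),
  full_name  text not null
             check (char_length(full_name) between 1 and 200),
  created_at timestamptz not null default now()
);
```

Constraints like these are not a substitute for server-side validation, but they are a backstop that the generated code in these incidents did not have.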
How to Find Out If You're Exposed
Start with these three questions:
1. Does your app connect to a database? If it's using Supabase, Firebase, PlanetScale, or any managed database, check whether Row Level Security is enabled and configured. For Supabase specifically: go to the SQL Editor in your Supabase dashboard and run SELECT schemaname, tablename FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema'); — then check each table's RLS status in the table properties.
2. Are there any secrets in your frontend code? Open your browser's DevTools, go to the Sources or Network tab, and look for API keys, tokens, or configuration values that shouldn't be visible to end users. If you can find them, attackers can too.
3. Can you enumerate user accounts without being logged in? Try accessing /users, /api/users, /profile?id=1, /profile?id=2 — any endpoint that might return data for other users without proper authorization checks.
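The first check can be automated with a single query in the Supabase SQL Editor. This relies only on the standard Postgres `pg_tables` view, which includes a `rowsecurity` flag per table:

```sql
-- List every table in the API-exposed public schema
-- that has Row Level Security disabled.
select schemaname, tablename
from pg_tables
where schemaname = 'public'
  and rowsecurity = false;
```

An empty result doesn't prove your policies are correct; it only confirms RLS is at least turned on. Each policy still needs review. But any table this query returns is a finding.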
If any of those three checks turn up issues, you have a real security gap that needs addressing before you have meaningful user traffic.
The Real Point
This isn't an article about Lovable. Lovable is actually pretty good at what it does — it's a legitimate tool that has helped real businesses ship real products. The point is about the nature of AI-generated code in general: it is functional first, and the security layer is frequently absent or incomplete.
If you're running an AI-built app with real users, real data, or real money flowing through it — you owe it to yourself and your customers to know what's actually in your codebase. The vulnerabilities are real. The exploits are being actively searched for. And the fix is usually faster and cheaper than the incident you're preventing.
Wolfgang Solutions offers security audits for vibe-coded applications — we review the actual code, document what we find, and give you a prioritized fix list. If you want to know what's actually running in your production environment, reach out and we'll scope it out.
And if you're currently building with Lovable or a similar tool: audit your RLS settings today. It's the one fix that would have prevented the CVE.
Frequently Asked Questions
- What was CVE-2025-48757?
- CVE-2025-48757 was a critical vulnerability in the Lovable AI app builder platform caused by missing Row Level Security (RLS) on Supabase databases. Over 170 production applications built with Lovable were exposed — anyone could read, modify, or delete data without authentication. The vulnerability was discovered and publicly disclosed in late 2025.
- How many AI-built apps have security vulnerabilities?
- According to Veracode's 2025 research, 45% of AI-generated code contains security flaws. The Cloud Security Alliance found over 2,000 critical vulnerabilities across 1,400 AI-generated applications. CodeRabbit's analysis of GitHub pull requests found AI co-authored code has 2.74x more XSS vulnerabilities than human-only code.
- Is vibe coding dangerous?
- Vibe coding itself isn't dangerous — but shipping AI-generated code without reviewing it for security issues is. The tools produce functional code, but they frequently omit security configurations like access controls, input validation, and rate limiting. A non-developer using these tools is unlikely to spot the gaps, which is why an estimated 63% of vibe coding users are non-developers who haven't been trained to identify insecure patterns.
- How do I secure my Lovable or Bolt app?
- Start by auditing your database access controls — check that Row Level Security is enabled on all Supabase or Firebase tables. Remove any secrets from frontend code and move them server-side. Verify that every API endpoint validates user identity before returning data. Run a brute-force test against your login endpoints to check for missing rate limiting. If any of these turn up issues, prioritize them by the sensitivity of the data at risk.
- Can Wolfgang Solutions audit my AI-built app?
- Yes. We offer security audits specifically for applications built with AI code generators like Lovable, Bolt, v0, and Replit. We review authentication flows, database access controls, API security, and infrastructure configuration, then provide a prioritized fix list. NDA available before we look at any code.