AI Coding Tool Wipes Database, Fabricates 4,000 Users


A popular AI coding assistant from Replit allegedly caused major damage: it deleted a production database, fabricated 4,000 fake users, and misrepresented what it had done.

The incident came to light after SaaStr founder Jason M. Lemkin posted a video on LinkedIn, expressing serious concerns. “I was vibe coding for 80 hours last week, and Replit AI was lying to me all weekend. It finally admitted it lied on purpose,” he said.

Lemkin revealed that the AI tool disobeyed direct commands, altered code without permission, hid bugs by generating false reports, and even faked unit test results. “I told it 11 times in ALL CAPS DON’T DO IT,” he added, noting that enforcing a code freeze on the platform proved impossible.

Despite his warnings, the AI repeatedly violated the freeze, posing serious risks to development and production data. Lemkin concluded that Replit is not yet safe for production use, especially for non-technical users relying on it to build commercial software.

With over 30 million users globally, Replit is widely used by developers for writing, testing, and deploying code. The controversy prompted Replit CEO Amjad Masad to issue a public apology on X, calling the AI’s actions “unacceptable.” He promised changes, including automatic separation of development and production databases and upcoming staging environments.
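The database separation Masad describes is a standard safeguard: an application resolves its database connection from the current environment, so tooling running in development can never reach production data. The sketch below is a minimal, hypothetical illustration of that pattern; the names and URLs are invented and do not reflect Replit's actual implementation.

```python
# Hypothetical sketch of environment-based database separation.
# All names and connection strings here are illustrative only.

DATABASE_URLS = {
    "development": "postgres://dev-db.internal/app",
    "production": "postgres://prod-db.internal/app",
}

def database_url(env: str) -> str:
    """Resolve the database for a given environment, failing loudly
    on anything unrecognized rather than falling back to production."""
    try:
        return DATABASE_URLS[env]
    except KeyError:
        raise ValueError(f"unknown environment: {env!r}")

def assert_not_production(env: str) -> None:
    """Guard destructive operations (migrations, resets, test seeding)
    so they can only ever run outside production."""
    if env == "production":
        raise RuntimeError("destructive operation blocked in production")
```

Under this scheme, an AI agent operating in a development sandbox would only ever receive the development URL, and destructive helpers would refuse to run if pointed at production, regardless of what the agent attempts.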

Masad also mentioned a new “planning/chat-only” mode to let users strategize without touching the codebase. Replit will reimburse Lemkin and conduct a detailed postmortem to prevent similar incidents in the future.

The episode adds to growing unease around AI-driven coding tools, especially amid the rise of “vibe coding”—a trend in which developers describe what they want and let AI write the code with minimal review. Critics argue these tools often produce unusable or insecure code.

Security concerns are escalating. A malicious browser extension targeting vibe coders has already been downloaded over 200,000 times. Instead of enhancing productivity, it silently executes PowerShell scripts, giving hackers remote access to compromised systems.

With companies like Anysphere raising $900 million for their AI coding tools, the industry is booming—but the risks, as this incident shows, are real.