
This matches what I've seen. The .env file is one vector, but the more common pattern with AI coding tools is secrets ending up directly in source code that never touch .env at all.

The ones that come up most often:

  - Hardcoded keys: const STRIPE_KEY = "sk_live_..."
  - Fallback patterns: process.env.SECRET || "sk_live_abc123" (the AI helpfully provides a default)
  - NEXT_PUBLIC_ prefix on server-only secrets, exposing them to the client bundle
  - Secrets inside console.log or error responses that end up in production logs

These pass type checks and look correct in review. I built a static analysis tool that catches them automatically: https://github.com/prodlint/prodlint

It checks for these patterns plus related issues like missing auth on API routes, unvalidated server actions, and hallucinated imports. No LLM, just AST parsing + pattern matching, runs in under 100ms.
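
As a rough sketch of how a rule like that can work (illustrative only, not prodlint's actual source, and `findFallbackSecrets` is a name I made up), the fallback-secret pattern reduces to a single regex pass per file:

```javascript
// Illustrative sketch, not prodlint's real rule: flag `process.env.X || "<literal>"`
// (or `??`) where the fallback literal looks like a live credential.
const FALLBACK_SECRET =
  /process\.env\.\w+\s*(?:\|\||\?\?)\s*["'`](?:sk_live_|pk_live_|AKIA|ghp_)[^"'`]*["'`]/;

function findFallbackSecrets(source) {
  return source
    .split("\n")
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => FALLBACK_SECRET.test(text));
}
```

A real rule needs more key prefixes and some entropy heuristics, but the shape is the same: deterministic, line-oriented, fast.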


Just use gitleaks or trufflehog?


gitleaks and trufflehog are great at scanning git history for leaked secrets, but that covers only one of prodlint's 52 rules. The rest target the structural patterns AI coding tools specifically create: hallucinated npm packages that don't exist, server actions with no auth or validation, NEXT_PUBLIC_ on server-only env vars, missing rate limiting, empty catch blocks, and more. It's closer to a vibe-coding-aware ESLint than a secrets scanner.


Haven't used it but just checked it out — interesting project. Different goals though.

Raptor configures Claude Code as a security agent for active pentesting and adversarial research. It's an LLM doing dynamic security analysis.

Prodlint is the opposite direction with deterministic static analysis, no LLM in the loop. 52 rules that check for the structural patterns AI coding tools consistently get wrong (leaked secrets, missing rate limiting, hallucinated imports, etc.). Same result every time, under 100ms, works offline.


I use Cursor and Claude Code daily. The code they write compiles, passes TypeScript, passes ESLint. Then I find a hardcoded Supabase key in a client component, or an import for a package that was never installed, or a server action that takes raw formData with zero validation.

These aren't edge cases. I kept hitting the same patterns across projects so I started cataloging them. That turned into prodlint -- 52 static analysis rules targeting the specific bugs AI coding tools consistently produce.

Some examples of what it catches: `hallucinated-imports` flags import statements for packages not in your package.json (the AI invented them). `supabase-missing-rls` catches `CREATE TABLE` in migrations without Row Level Security enabled. `env-fallback-secret` finds `process.env.SECRET || "sk_live_abc123"` patterns where the AI helpfully provides a fallback for your API key.
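
A minimal sketch of the hallucinated-imports idea (hypothetical code; `hallucinatedImports` is my invented name, not prodlint's API): collect bare import specifiers and diff them against package.json.

```javascript
// Illustrative sketch: flag bare imports not declared in package.json.
// A real check would also whitelist Node builtins ("fs", "node:path", ...).
const IMPORT_RE = /^\s*import\b[^'"]*['"]([^'".][^'"]*)['"]/gm;

function hallucinatedImports(source, packageJson) {
  const declared = new Set([
    ...Object.keys(packageJson.dependencies ?? {}),
    ...Object.keys(packageJson.devDependencies ?? {}),
  ]);
  const missing = [];
  for (const [, spec] of source.matchAll(IMPORT_RE)) {
    // Scoped packages keep two path segments ("@scope/pkg"), others keep one.
    const pkg = spec.startsWith("@")
      ? spec.split("/").slice(0, 2).join("/")
      : spec.split("/")[0];
    if (!declared.has(pkg)) missing.push(pkg);
  }
  return missing;
}
```

Relative imports are skipped by the leading-character class in the regex; the remaining specifiers either resolve to a declared package or get flagged.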

It's all local static analysis, no AI in the tool itself. Babel AST for 12 rules, regex for the rest. Scans ~150 files in under 100ms.

  npx prodlint

No config, no account, no install needed. MIT licensed. Interested in what patterns other people are seeing that I should add rules for.


I've been building with AI coding tools for the past year and kept noticing the same patterns in the code they generate: API routes with zero error handling, catch blocks that just do `catch (e) {}`, database queries inside loops, imports of packages that don't exist on npm, no rate limiting anywhere.
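
To make one of those concrete, here's roughly how a silent-catch check can look (a sketch under my own naming, not prodlint's code). A regex handles the common empty-body form; anything with nested braces needs an AST:

```javascript
// Illustrative sketch: count `catch` clauses whose body is empty or
// whitespace-only, including the parameter-less `catch {}` form.
const EMPTY_CATCH = /catch\s*(?:\(\s*\w*\s*\))?\s*\{\s*\}/g;

function countEmptyCatches(source) {
  return [...source.matchAll(EMPTY_CATCH)].length;
}
```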

None of it breaks locally. Tests pass, types check, everything looks fine. But these are the kinds of things that blow up in production, and AI assistants produce them consistently.

So I built prodlint. It's a linter tuned specifically for patterns AI gets wrong. `npx prodlint` in any JS/TS project gives you a 0-100 score across security, reliability, performance, and "AI quality" (stuff like TODO placeholders, hallucinated imports, inconsistent naming from copy-pasting different AI outputs).

27 rules right now. Some examples:

- SQL injection via string concatenation (AI loves template literals in queries)
- dangerouslySetInnerHTML without sanitization
- Packages that don't actually exist on npm (AI hallucinates names constantly)
- .findMany() with no LIMIT
- Missing auth on API routes
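
For the first one, the tell is a query call whose argument is a template literal with `${...}` interpolation. A simplified, hypothetical version of such a check (names are mine, not prodlint's):

```javascript
// Illustrative sketch: flag query/execute calls whose argument is a
// template literal containing ${...} interpolation (string-built SQL).
const SQL_TEMPLATE = /\.(?:query|execute)\s*\(\s*`[^`]*\$\{/;

function flagsSqlInterpolation(snippet) {
  return SQL_TEMPLATE.test(snippet);
}
```

The fix it points toward is parameterized queries, e.g. `db.query("SELECT * FROM users WHERE id = $1", [id])`, which the check deliberately leaves alone.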

Also runs as an MCP server (`npx prodlint-mcp`) so your AI editor can use it while writing code. And there's a GitHub Action that posts scores on PRs.

I ran it on my own production app and it flagged 8 critical issues across 175 files that I'd missed during review.

175 files in ~600ms, zero config, 22kb on npm. MIT licensed.

What production issues do you keep seeing in AI-generated code? Always looking for new rules to add.

https://prodlint.com

