Security Fundamentals · 10 min read · January 24, 2026
Tags: SQL Injection, ChatGPT, Copilot, Database

SQL Injection in AI Code: Why ChatGPT and Copilot Keep Making This Mistake

AI coding tools consistently generate SQL injection vulnerabilities. Learn why this happens and how to detect these critical security flaws.


The Most Dangerous Bug AI Keeps Writing

SQL injection has been the #1 web vulnerability for over two decades. It's well-documented, easily prevented, and catastrophically dangerous. Yet AI coding assistants generate SQL injection vulnerabilities constantly.

What Is SQL Injection?

SQL injection occurs when user input is inserted directly into a database query, allowing attackers to modify the query's logic.

javascript
// User enters: ' OR '1'='1
const query = `SELECT * FROM users WHERE email = '${email}'`

// Becomes:
// SELECT * FROM users WHERE email = '' OR '1'='1'
// Returns ALL users

Why AI Tools Keep Making This Mistake

Training Data Problem

Stack Overflow is full of examples like:

javascript
// Highly upvoted "solution"
const result = db.query(`SELECT * FROM products WHERE name LIKE '%${search}%'`)

These examples are upvoted for being concise and "working," not for being secure. AI learns these patterns.

Template Literal Bias

Modern JavaScript encourages template literals for string building. AI models over-apply this pattern:

javascript
// AI sees template literals as "modern" and uses them everywhere
const query = `INSERT INTO logs (message) VALUES ('${userMessage}')`

Context Blindness

AI doesn't distinguish between:

  • Trusted internal data
  • Untrusted user input
To AI, all variables are just variables.
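
A minimal sketch of the problem (the route and names are hypothetical): both values below look identical to the model, but only one is attacker-controlled.

javascript
const TABLE = 'users' // trusted: fixed by the developer at write time
app.get('/api/users', (req, res) => {
  const email = req.query.email // untrusted: arrives with the request
  // To the model these interpolations look the same; only ${email} is injectable
  const query = `SELECT * FROM ${TABLE} WHERE email = '${email}'`
  // ...
})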

Real Examples from AI Tools

ChatGPT (Prompted: "Write a function to find a user by email")

javascript
async function findUserByEmail(email) {
  const query = `SELECT * FROM users WHERE email = '${email}'`
  return await db.execute(query)
}

Vulnerable. Direct string interpolation.

GitHub Copilot (Autocompleted after typing "const query =")

javascript
const query = `SELECT * FROM orders WHERE user_id = ${userId} AND status = '${status}'`

Vulnerable. Both parameters are injectable.

Cursor (Prompted: "Add search to the products API")

javascript
app.get('/api/products', (req, res) => {
  const { search, category } = req.query
  const query = `SELECT * FROM products WHERE name LIKE '%${search}%' AND category = '${category}'`
  // ...
})

Vulnerable. Classic search injection.
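
For contrast, here is the same endpoint with placeholders (a sketch assuming node-postgres-style $1 binding). Note that the % wildcards move into the bound value, so the SQL string itself stays fixed.

javascript
app.get('/api/products', async (req, res) => {
  const { search, category } = req.query
  // Wildcards belong in the bound value, not in the SQL string
  const query = 'SELECT * FROM products WHERE name LIKE $1 AND category = $2'
  const { rows } = await db.query(query, [`%${search}%`, category])
  res.json(rows)
})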

The Correct Patterns

Parameterized Queries (PostgreSQL)

javascript
const query = 'SELECT * FROM users WHERE email = $1'
const result = await db.query(query, [email])

Parameterized Queries (MySQL)

javascript
const query = 'SELECT * FROM users WHERE email = ?'
const result = await db.execute(query, [email])

ORM Methods (Prisma)

javascript
const user = await prisma.user.findUnique({
  where: { email: email }
})

ORM Methods (Drizzle)

javascript
const user = await db.select().from(users).where(eq(users.email, email))
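
One caveat: ORMs only protect you while you stay on their query builders. Their raw-SQL escape hatches reintroduce the exact same bug. A quick illustration in Prisma (assuming email comes from user input):

javascript
// Unsafe: $queryRawUnsafe takes a plain string, so interpolation is injectable
const found = await prisma.$queryRawUnsafe(
  `SELECT * FROM users WHERE email = '${email}'`
)

// Safe: $queryRaw is a tagged template, so Prisma binds ${email} as a parameter
const safe = await prisma.$queryRaw`SELECT * FROM users WHERE email = ${email}`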

Detection Patterns

Look for these red flags in AI-generated code:

Pattern                                  Risk Level
${variable} inside SQL string            Critical
' + variable + ' in SQL                  Critical
.query(...) with template literal        Critical
Raw SQL with any string concatenation    High
LIKE '%${x}%' pattern                    Critical
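
These red flags are mechanical enough to scan for. Below is a deliberately naive sketch of a scanner for the first pattern (my own illustration, not how any particular tool works): it flags template literals that contain SQL keywords plus an interpolation, and it will miss escaped backticks and nested templates.

javascript
const fs = require('fs')

const SQL_KEYWORDS = /\b(SELECT|INSERT|UPDATE|DELETE)\b/i
const TEMPLATE_LITERAL = /`[^`]*`/g // naive: ignores escaped and nested backticks

// Report template literals that look like SQL and interpolate a variable
function findSuspectQueries(source) {
  const findings = []
  let match
  while ((match = TEMPLATE_LITERAL.exec(source)) !== null) {
    const literal = match[0]
    if (SQL_KEYWORDS.test(literal) && literal.includes('${')) {
      const line = source.slice(0, match.index).split('\n').length
      findings.push(`line ${line}: ${literal.slice(0, 60)}`)
    }
  }
  return findings
}

// Usage: node scan.js path/to/file.js
console.log(findSuspectQueries(fs.readFileSync(process.argv[2], 'utf8')))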

How to Fix AI-Generated SQL Injection

Step 1: Find All Database Queries

bash
grep -rn "query\
execute\
sql" --include="*.ts" --include="*.js" src/

Step 2: Check Each Query for User Input

Trace each variable in the query. If it originates from:

  • req.body
  • req.query
  • req.params
  • Any user-controlled source
It must be parameterized.
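
A hypothetical example of why tracing matters: the value can pass through intermediate assignments before it reaches the query, so follow it back to its origin.

javascript
app.post('/api/orders', async (req, res) => {
  const status = req.body.status          // origin: user-controlled
  const filter = status || 'pending'      // a default does not make it trusted
  const query = `SELECT * FROM orders WHERE status = '${filter}'` // must be parameterized
  // ...
})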

Step 3: Refactor to Parameterized Queries

Replace every instance of string interpolation with parameter binding for your database driver.
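
Continuing the hypothetical example from Step 2 (placeholder syntax varies by driver: $1 for node-postgres, ? for mysql2):

javascript
// Before: the SQL string is built from user input
// const query = `SELECT * FROM orders WHERE status = '${filter}'`

// After: the SQL is a fixed string and the driver binds the value separately
const query = 'SELECT * FROM orders WHERE status = $1'
const result = await db.query(query, [filter])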

Automated Detection

Manual review works, but automated scanning catches patterns consistently. ShipReady scans for:

  • Template literals in SQL contexts
  • String concatenation in queries
  • Missing parameterization
  • ORM misuse patterns

The Bottom Line

SQL injection is a solved problem—when you use parameterized queries. AI tools consistently fail to apply this solution because their training data is full of vulnerable examples.

Never trust AI-generated database code without verification.
