Security risks of AI Code Vibing

Why the speed and confidence of AI generated changes can outrun security habits, and what guardrails to add before a prototype becomes production.

Abstract placeholder graphic for AI code vibing security
Jelle De Laender
27 December 2025

Code vibing (often called vibe coding) is everywhere right now. Tools like OpenAI Codex, ChatGPT, Claude, GitHub Copilot, Cursor, Windsurf, and friends can turn a vague idea into working code at a surprising speed.

That speed is the point. It is also the risk.

In a Nutshell

  • Code vibing is when you let an AI read your codebase and generate changes fast, often with minimal review.
  • AI can absolutely write insecure code when the prompt misses edge cases or security requirements.
  • The deeper problem is PEBCAK: the gaps between the chair and keyboard. The AI will confidently fill in those gaps unless you spell them out and verify them.
  • The added risk is the pace and confidence: you can ship a real product before your security process even notices it exists.
  • If your prototype becomes your MVP (and your MVP becomes production), you need guardrails.
  • ISO 27001:2022 is a good lens here: treat AI code generation like a supplier, treat generated changes like software changes, and treat prompts like data transfers.

What is "Code Vibing" (or "Vibe Coding")?

In practice, code vibing means:

  • You describe what you want in natural language.
  • The AI reads relevant parts of your repo (or you paste snippets).
  • The AI generates code, configuration, tests, scripts, and sometimes even runs commands.
  • You iterate quickly, often by "accepting" changes until it works.

The internet definition is basically "forget that the code even exists". That is fun for experiments. It is also exactly the mindset that breaks security when an experiment quietly turns into a real product.

This website is code vibed

We use code vibing ourselves for this website. We use Codex via the CLI to maintain the site, tweak pages, and add new features and functionality.

The content itself is not AI-based. We use AI assistance for code changes, not for writing lazy all content.

This website is a static site: HTML, JS, images, built once and served by a plain web server. There is no custom server-side execution. That dramatically reduces the blast radius:

  • Fewer moving parts
  • Smaller attack surface
  • Easier hardening
  • Easier rollback

Static does not mean "invincible", but it is a very different risk profile than a web app with authentication, a database, file uploads, billing, and production data.

Why AI Code Vibing changes your security risk profile

Classic software risk is: "humans make mistakes".

AI-assisted software risk is: "humans make mistakes faster, with more confidence, and with less friction."

Here are the big shifts.

1) Speed outpaces security habits

When it takes minutes to generate a feature, it is easy to skip the slow parts:

  • Threat modeling
  • Peer review
  • Secure defaults
  • Test coverage
  • Logging and monitoring
  • Access control checks

You are not deliberately ignoring security. You are simply moving faster than your normal "this should get reviewed" instincts.

2) AI output is plausible, not proven

AI is excellent at producing code that looks right. It is not guaranteed to:

  • Handle edge cases correctly
  • Enforce authorization consistently
  • Apply secure defaults
  • Understand your business logic and abuse cases

If you do not explicitly ask for security properties (and verify them), the AI will fill in gaps with guesses.

3) Context becomes a data-leak problem

To be useful, many tools pull context:

  • Open files
  • Surrounding code
  • Configs
  • Logs
  • Sometimes even docs and tickets, if you connect them

That can accidentally include secrets, customer data, internal URLs, or anything else that happens to be nearby.

This is not only a "developer mistake" problem. It is also a governance problem: are you allowed to send that data to a third-party tool at all, and are we sure secrets are not part of the source code, never pushed, but always set as server variables?

4) Agentic tools add a new attack surface

Some tools do more than suggest code. They can:

  • Run shell commands
  • Change files automatically
  • Fetch data from the internet
  • Connect to external systems through plugins and integrations

Best practice is to use a sandbox system and ensure all files are in a repository setup like Git or SVN, although this will never be 100% waterproof.

That expands the threat model. Prompt injection becomes relevant not just for chatbots, but for developer tooling.

5) The prototype-to-production trap

The highest risk pattern is simple:

  • You vibe code a PoC.
  • It works.
  • You ship it.
  • Users arrive.
  • It becomes production before you added production-grade controls.

A PoC can be allowed to be messy. Production cannot.

The most common security risks in vibed code

This section is intentionally high level. The point is to spot the patterns early, not to drown you in technical details.

Risk 1: Broken access control (and accidental data exposure)

Access control failures are boring, common, and devastating.

In vibed code, they happen because:

  • "It worked for me" testing passes
  • The AI adds endpoints quickly
  • Authorization checks are inconsistent across paths
  • Developers confuse authentication ("who are you?") with authorization ("what are you allowed to do?")

A classic example is IDOR: changing an ID in a URL or request lets you access someone else's data.

Impact: data breach, customer trust damage, compliance exposure.

Risk 2: Secrets leakage (keys, tokens, credentials)

AI does not have the same instinctive fear of secrets that experienced engineers develop over years of pain.

Common failure modes:

  • Hardcoding API keys into config files
  • Copying secrets into prompts ("why is my auth failing?")
  • Committing .env files or debug logs
  • Shipping "temporary" test credentials in production

Impact: account takeover, cloud cost explosions, lateral movement.

Risk 3: Insecure defaults and misconfigurations

Modern stacks are powerful and complex.

When you vibe code, you often accept:

  • Default database permissions
  • Default storage permissions
  • Default auth settings
  • Default CORS rules
  • Default logging (too much or too little)

The risk is not only "bad code". It is often "bad configuration".

Impact: unauthorized access, tampering, invisible incidents.

Risk 4: Dependency and supply-chain shortcuts

AI will happily install dependencies to solve problems quickly.

If you do not control this:

  • You inherit vulnerabilities from packages
  • You add unnecessary libraries
  • You create long-term maintenance debt

There is a newer twist: some AI tools hallucinate package names. Attackers have noticed and published lookalike packages to catch those mistakes, sometimes with malware or trojan horses. This risk is now known as slopsquatting.

Impact: exploit via vulnerable dependency, surprise future incidents.

Risk 5: Missing security testing and review

The AI can generate tests, but it will not enforce that they exist.

If your process does not require:

  • Code review
  • Automated testing
  • Security testing
  • Secrets scanning
  • Dependency scanning

...then vibed code becomes "ship now, discover later".

Impact: production incidents that could have been caught in CI.

Risk 6: Prompt injection and tool misuse (the "developer tooling" version)

When tools can read files, run commands, or fetch content, attackers may try to feed them malicious instructions via:

  • Untrusted repositories
  • README files
  • Issues and pull requests
  • Pasted content
  • Plugins or integrations

This is still a fast-evolving area, but the principle is stable: if the tool can act, you must assume someone will try to trick it into acting badly.

Impact: data exfiltration, unintended changes, possible code execution in worst cases.

Real-world examples

These are public posts. They are useful because they show how normal mistakes happen fast when shipping is frictionless.

Mapping the risks to ISO 27001:2022

ISO 27001:2022 is risk-based. The goal is not to ban AI coding. The goal is to stay in control.

Below are common vibing risks, and where they typically map.

  • Prompts contain sensitive data (source, secrets, customer info)
    Good: data classification rules for what can be shared; redaction; approved tools.
    ISO 27001:2022: Annex A 5.12 Classification of information, 5.14 Information transfer, 5.34 Privacy and protection of PII, 8.12 Data leakage prevention.
  • Using AI tools as a third-party service
    Good: vendor review, contractual terms, data controls, clear allowed usage.
    ISO 27001:2022: Annex A 5.19 Information security in supplier relationships, 5.20 Addressing security in supplier agreements, 5.23 Information security for use of cloud services.
  • Generated code changes shipped without review
    Good: mandatory PRs, review standards, change control.
    ISO 27001:2022: Annex A 8.25 Secure development lifecycle, 8.28 Secure coding, 8.32 Change management.
  • Weak access control (IDOR, broken authorization)
    Good: centralized authorization patterns; security requirements; abuse-case testing.
    ISO 27001:2022: Annex A 8.26 Application security requirements, 8.27 Secure architecture and engineering principles, 8.28 Secure coding, 8.29 Security testing.
  • Secrets in code or prompts
    Good: use a secret manager; scanning in CI; rotation; least privilege.
    ISO 27001:2022: Annex A 5.17 Authentication information, 8.9 Configuration management, 8.12 Data leakage prevention.
  • Mixing prod and non-prod
    Good: separate environments; no production data in development; strong access boundaries.
    ISO 27001:2022: Annex A 8.31 Separation of development, test and production environments.
  • AI tool can run commands or access the internet
    Good: least privilege, approvals, sandboxing, logging of actions.
    ISO 27001:2022: Annex A 8.2 Privileged access rights, 8.15 Logging, 8.16 Monitoring activities, 5.15 Access control.
  • No evidence that controls are followed
    Good: documented process, training, auditing, continuous improvement.
    ISO 27001:2022: Clauses 6.1 (risk), 7.2 (competence), 7.3 (awareness), 8.1 (operational planning and control), 9.2 (internal audit).

A practical "minimum guardrails" checklist for founders and product teams

You do not need a perfect security program to start using code vibing safely. You need the basics, consistently.

  • 1. Decide what is allowed in prompts. Treat prompts like external sharing. If you would not paste it into an email to a supplier, do not paste it into a model.
  • 2. Use approved accounts and settings. Prefer business plans where you control data usage, retention, and access.
  • 3. Keep a human in the loop for production. No direct-to-main. Use PRs. Require review. This is the cheapest control you will ever implement.
  • 4. Automate the boring checks. At minimum: linting, tests, dependency scanning, and secrets scanning in CI.
  • 5. Separate environments. Do not vibe code directly against production. Do not use production data in dev unless you have a deliberate, controlled process.
  • 6. Threat model the first version. A 30-minute session is enough to catch the "obvious" risks: auth, data exposure, admin features, integrations.
  • 7. Log and monitor. If something goes wrong, you want evidence. Not vibes.

Closing: treat it like a junior intern

AI code vibing can be a fantastic force multiplier. It helps teams ship, learn, and iterate. A useful mental model is a junior intern: fast, sometimes brilliant, and sometimes missing context.

The intern can spot things you overlooked and move quickly, but without supervision it can get stuck or make confident early mistakes. That is not malice, it is context.

The fix is straightforward: supervise, review, and keep ownership. Do code reviews, write tests, and treat the output as a draft you are accountable for.

Use code vibing. Keep control.

Coding Mammoth helps product teams stay in control with ISO 27001-based controls, audits, and practical security programs. If you are scaling fast and want the guardrails without slowing down, we should talk.

Learn more at our ISO 27001 overview, or reach out via contact. You can also explore our Virtual CISO, Internal Auditor, and Implementation services.