Vibe-Coding Risk Radar: MECE Risk Framework for AI-Generated Code

"Just vibe-code it" is the new "just ship it."

But here's the uncomfortable truth: Not all code carries the same risk. CSS for a landing page? Go ahead, let the AI write it. Auth module for a fintech app? That's a completely different story.

The problem: Most teams treat AI-generated code as either "always review everything" (doesn't scale) or "never review anything" (negligent for critical code).

We built an open-source tool to solve this: The Vibe-Coding Risk Radar.

5 MECE dimensions (Code Type, Language Safety, Deployment Context, Data Sensitivity, Blast Radius) classify risk along orthogonal axes. The highest-rated dimension determines the overall tier -- inspired by IEC 61508's "highest applicable SIL" principle.
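The "highest applicable" rule can be sketched in a few lines. This is an illustrative sketch, not the tool's actual source: the enum and function names here are made up, and the example ratings are hypothetical.

```python
from enum import IntEnum

class Tier(IntEnum):
    TIER_1 = 1
    TIER_2 = 2
    TIER_3 = 3
    TIER_4 = 4

def overall_tier(ratings: dict[str, Tier]) -> Tier:
    """IEC 61508-style rule: the worst dimension dominates."""
    return max(ratings.values())

# Hypothetical assessment: mostly low-risk, but the code touches PII.
ratings = {
    "code_type": Tier.TIER_1,         # e.g. UI glue code
    "language_safety": Tier.TIER_2,
    "deployment_context": Tier.TIER_1,
    "data_sensitivity": Tier.TIER_3,  # handles personal data
    "blast_radius": Tier.TIER_2,
}
print(overall_tier(ratings).name)
```

Note that a single high-risk dimension (here, data sensitivity) pulls the whole change into Tier 3, no matter how harmless the other axes look.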

4 cumulative tiers map to concrete mitigations:

- Tier 1: Automated gates only (linters, SAST, dependency checks)
- Tier 2: Extended assurance (AI code review, integration tests)
- Tier 3: Mandatory human review, threat modeling, penetration testing
- Tier 4: Severely restricted AI use, independent verification
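Because the tiers are cumulative, a Tier N change inherits every mitigation from Tiers 1 through N. A minimal sketch of that lookup, assuming a plain dict mapping (names are illustrative, not the tool's API):

```python
# Cumulative tier -> mitigation mapping, paraphrased from the list above.
MITIGATIONS = {
    1: ["linters", "SAST", "dependency checks"],
    2: ["AI code review", "integration tests"],
    3: ["mandatory human review", "threat modeling", "penetration testing"],
    4: ["restricted AI use", "independent verification"],
}

def required_mitigations(tier: int) -> list[str]:
    """Tier N requires all measures of Tiers 1..N, not just Tier N's own."""
    return [m for t in range(1, tier + 1) for m in MITIGATIONS[t]]

print(required_mitigations(3))
```

So a Tier 3 auth module still runs through the Tier 1 linters and SAST gates before any human review is scheduled.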

The key insight: The answer to "Should we review AI code?" is neither yes nor no. It's "it depends" -- and now you have a framework to decide.

Built with Semantic Anchors -- well-defined terms that activate specific LLM knowledge domains. The MECE principle (Mutually Exclusive, Collectively Exhaustive) is one such anchor: it guided GPT and Claude to produce non-overlapping, complete risk categories backed by 30+ verified references.

Try it yourself (interactive, bilingual DE/EN): https://llm-coding.github.io/vibe-coding-risk-radar/

Source + full AsciiDoc documentation: https://github.com/LLM-Coding/vibe-coding-risk-radar

Semantic Anchors framework: https://github.com/LLM-Coding/Semantic-Anchors