The Agent Reliability Engine
Chaos Engineering for Production AI Agents
The "Happy Path" Fallacy: Current AI development tools focus on getting an agent to work once. Developers tweak prompts until they get a correct answer, declare victory, and ship.
The Reality: LLMs are non-deterministic. An agent that works on Monday with temperature=0.7 might fail on Tuesday. Production agents face real users who make typos, get aggressive, and attempt prompt injections. Real traffic exposes failures that happy-path testing misses.
The Void:
- Observability Tools (LangSmith) tell you after the agent failed in production
- Eval Libraries (RAGAS) focus on academic scores rather than system reliability
- CI Pipelines lack chaos testing — agents ship untested against adversarial inputs
- Missing Link: A tool that actively attacks the agent to prove robustness before deployment
Flakestorm is a chaos testing layer for production AI agents. It applies Chaos Engineering principles to systematically test how your agents behave under adversarial inputs before real users encounter them.
Instead of running one test case, Flakestorm takes a single "Golden Prompt", generates adversarial mutations (semantic variations, noise injection, hostile tone, prompt injections), runs them against your agent, and calculates a Robustness Score. Run it before deploy, in CI, or against production-like environments.
"If it passes Flakestorm, it won't break in Production."
Flakestorm is designed for teams already running AI agents in production. Most production agents use cloud LLM APIs (OpenAI, Gemini, Claude, Perplexity, etc.) and face real traffic, real users, and real abuse patterns.
Why local LLMs exist in the open source version:
- Fast experimentation and proofs-of-concept
- CI-friendly testing without external dependencies
- Transparent, extensible chaos engine
Why production chaos should mirror production reality: Production agents run on cloud infrastructure, process real user inputs, and scale dynamically. Chaos testing should reflect this reality: testing against the same infrastructure, scale, and traffic patterns your agents face in production.
The cloud version removes operational friction: no local model setup, no environment configuration, scalable mutation runs, shared dashboards, and team collaboration. Open source proves the value; cloud delivers production-grade chaos engineering.
- Teams shipping AI agents to production — Catch failures before users do
- Engineers running agents behind APIs — Test against real-world abuse patterns
- Teams already paying for LLM APIs — Reduce regressions and production incidents
- CI/CD pipelines — Automated reliability gates before deployment (see the CI sketch below)
Flakestorm is built for production-grade agents handling real traffic. While it works great for exploration and hobby projects, it's designed to catch the failures that matter when agents are deployed at scale.
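For the CI gate idea, here is a minimal sketch as a hypothetical GitHub Actions job. It assumes `flakestorm run` exits non-zero when the robustness gate fails (verify this against the Usage Guide), and the agent-startup step is a placeholder for however you serve your agent:

```yaml
# Hypothetical CI gate. Assumptions: `flakestorm run` exits non-zero on
# failure, and your agent can be started locally inside the CI job.
name: agent-reliability-gate
on: [pull_request]

jobs:
  chaos-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install flakestorm
      # Placeholder: start your agent so the endpoint in flakestorm.yaml is reachable.
      - run: ./scripts/start-agent.sh &
      - run: flakestorm run
```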
Watch Flakestorm generate mutations and test your agent in real time
Interactive HTML reports with detailed failure analysis and recommendations
Flakestorm follows a simple but powerful workflow:
- You provide "Golden Prompts" — example inputs that should always work correctly
- Flakestorm generates mutations — using a local LLM, it creates adversarial variations across 24 mutation types:
- Core prompt-level (8): Paraphrase, noise, tone shift, prompt injection, encoding attacks, context manipulation, length extremes, custom
- Advanced prompt-level (7): Multi-turn attacks, advanced jailbreaks, semantic similarity attacks, format poisoning, language mixing, token manipulation, temporal attacks
- System/Network-level (9): HTTP header injection, payload size attacks, content-type confusion, query parameter poisoning, request method attacks, protocol-level attacks, resource exhaustion, concurrent patterns, timeout manipulation
- Your agent processes each mutation — Flakestorm sends them to your agent endpoint
- Invariants are checked — responses are validated against rules you define (latency, content, safety)
- Robustness Score is calculated — weighted by mutation difficulty and importance
- Report is generated — interactive HTML showing what passed, what failed, and why
The result: You know exactly how your agent will behave under stress before users ever see it.
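As a concrete sketch of the first two steps, the configuration below extends the quickstart's `flakestorm.yaml`. Only `agent.endpoint` and `agent.type` appear in the quickstart; the `golden_prompts` and `mutations` keys are illustrative assumptions about the schema, so treat `flakestorm init` output and the Configuration Guide as authoritative:

```yaml
# Illustrative sketch; key names other than `agent` are assumptions.
agent:
  endpoint: "http://localhost:8000/invoke"
  type: "http"

golden_prompts:
  - "What is your refund policy?"        # inputs that should always work
  - "Cancel my subscription, please."

mutations:
  types: [paraphrase, noise, tone_shift, prompt_injection]
  per_prompt: 10                         # adversarial variants per golden prompt
```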
Note: The open source version uses local LLMs (Ollama) for mutation generation. The cloud version (in development) uses production-grade infrastructure to mirror real-world chaos testing at scale.
- ✅ 24 Mutation Types: Comprehensive robustness testing across core prompt-level (8), advanced prompt-level (7), and system/network-level (9) attacks (full breakdown in the workflow above)
- ✅ Invariant Assertions: Deterministic checks, semantic similarity, basic safety (see the sketch after this list)
- ✅ Beautiful Reports: Interactive HTML reports with pass/fail matrices
- ✅ Open Source Core: Full chaos engine available locally for experimentation and CI
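To illustrate the three assertion styles named above, a hypothetical invariants block might look like the following; the key names are assumptions for illustration, not the documented schema:

```yaml
# Hypothetical invariant checks; key names are assumptions.
invariants:
  - type: latency                  # deterministic: hard response-time budget
    max_ms: 3000
  - type: semantic_similarity      # response should stay close to a reference answer
    reference: "Refunds are available within 30 days of purchase."
    min_score: 0.8
  - type: safety                   # basic safety: nothing sensitive leaks
    must_not_contain: ["system prompt", "api_key"]
```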
Open Source (Always Free):
- Core chaos engine with all 24 mutation types (no artificial feature gating)
- Local execution for fast experimentation
- CI-friendly usage without external dependencies
- Full transparency and extensibility
- Perfect for proofs-of-concept and development workflows
Cloud (In Progress / Waitlist):
- Zero-setup chaos testing (no Ollama, no local models)
- Scalable runs (thousands of mutations)
- Shared dashboards & reports
- Team collaboration
- Scheduled & continuous chaos runs
- Production-grade reliability workflows
Our Philosophy: We do not cripple the OSS version. Cloud exists to remove operational pain, not to lock features. Open source proves the value; cloud delivers production-grade chaos engineering at scale.
This is the fastest way to try Flakestorm locally. Production teams typically use the cloud version (waitlist). Here's the local quickstart:
1. Install flakestorm (requires Python 3.10+):

   ```bash
   pip install flakestorm
   ```

2. Initialize a test configuration:

   ```bash
   flakestorm init
   ```

3. Point it at your agent (edit `flakestorm.yaml`):

   ```yaml
   agent:
     endpoint: "http://localhost:8000/invoke"  # Your agent's endpoint
     type: "http"
   ```

4. Run your first test:

   ```bash
   flakestorm run
   ```
That's it! You'll get a robustness score and detailed report showing how your agent handles adversarial inputs.
Note: For full local execution (including mutation generation), you'll need Ollama installed. See the Usage Guide for complete setup instructions.
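If the generated config exposes a model section for mutation generation, pointing it at a local Ollama instance might look roughly like this; the key names are guesses, and the Usage Guide is authoritative:

```yaml
# Hypothetical model block; exact keys may differ from the generated config.
mutation_model:
  provider: "ollama"
  model: "llama3"                        # any locally pulled Ollama model tag
  base_url: "http://localhost:11434"     # Ollama's default local API address
```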
See what's coming next! Check out our Roadmap for upcoming features including:
- 🚀 Pattern Engine Upgrade with 110+ Prompt Injection Patterns and 52+ PII Detection Patterns
- ☁️ Cloud Version enhancements (scalable runs, team collaboration, continuous testing)
- 🏢 Enterprise features (on-premise deployment, custom patterns, compliance certifications)
- 📖 Usage Guide - Complete end-to-end guide (includes local setup)
- ⚙️ Configuration Guide - All configuration options
- 🔌 Connection Guide - How to connect FlakeStorm to your agent
- 🧪 Test Scenarios - Real-world examples with code
- 🔗 Integrations Guide - HuggingFace models & semantic similarity
- 🏗️ Architecture & Modules - How the code works
- ❓ Developer FAQ - Q&A about design decisions
- 🤝 Contributing - How to contribute
- 🔧 Fix Installation Issues - Resolve `ModuleNotFoundError: No module named 'flakestorm.reports'`
- 🔨 Fix Build Issues - Resolve `pip install .` vs `pip install -e .` problems
- 🐛 Issue Templates - Use our issue templates to report bugs, request features, or ask questions
- 📋 API Specification - API reference
- 🧪 Testing Guide - How to run and write tests
- ✅ Implementation Checklist - Development progress
For teams running production AI agents, the cloud version removes operational friction: zero-setup chaos testing without local model configuration, scalable mutation runs that mirror production traffic, shared dashboards for team collaboration, and continuous chaos runs integrated into your reliability workflows.
The cloud version is currently in early access. Join the waitlist to get access as we roll it out.
Apache 2.0 - See LICENSE for details.