Writing
Blog
Analysis, techniques, and perspectives from the disreGUARD team on prompt injection and AI security.
February 7, 2026
The auditor in the airlock: a security pattern for AI agent decisions
When an agent needs to make a security-sensitive decision about tainted data, you need an information bottleneck between the taint and the judgment. Here's how to build one.
February 7, 2026Hardening OpenClaw: a practical prompt injection defense
This week researchers demonstrated persistent backdoors in OpenClaw via prompt injection. We're helping harden it with `sig` and patterns apply to any agent framework.
February 6, 2026sig: instruction signing for prompt injection defense
We can create a clear trust boundary by signing instructions and giving models a tool to participate in making secure choices
February 5, 2026Injection is inevitable. Disaster is optional.
Prompt injection is an infrastructure problem. We can't prevent it, but we can massively reduce the risk and impact.