Razorglint Labs
Applied AI-Safety Research & Verification
From AI memory anomalies to enterprise-grade evidence. We catch what others can’t.
Who We Are
Razorglint Labs is the applied research division of TCOG Collective LLC (New Mexico). We focus on uncovering, validating, and documenting system-level memory and consistency anomalies in LLMs.
We build verifiable, cryptographically sealed evidence chains (SHA-256), freeze statements, and forensic datasets. These provide regulators, labs, and corporations with reproducible AI-safety insights that go far beyond theoretical speculation.
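A minimal sketch of how such a chain can be sealed, in Python. The file names, field names, and genesis value here are illustrative assumptions, not Razorglint's actual format: each artifact's SHA-256 digest is bound to the hash of the previous record, so any later alteration breaks every subsequent link.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def seal_record(prev_hash: str, artifact_path: str) -> dict:
    """Seal one artifact: bind its digest to the previous record's hash."""
    with open(artifact_path, "rb") as f:
        artifact_sha256 = sha256_hex(f.read())
    record = {
        "timestamp": time.time(),
        "artifact": artifact_path,
        "artifact_sha256": artifact_sha256,
        "prev_hash": prev_hash,
    }
    # The record hash covers both the artifact digest and the previous
    # link, so altering any earlier entry breaks every later record.
    record["record_hash"] = sha256_hex(
        json.dumps(record, sort_keys=True).encode()
    )
    return record

# Sealing starts from a fixed genesis value and threads each
# record_hash into the next call (file names are illustrative).
chain, prev = [], "0" * 64
for path in ["notes.txt", "session.log"]:
    rec = seal_record(prev, path)
    chain.append(rec)
    prev = rec["record_hash"]
```

Because each record_hash feeds the next record, checking the final hash is enough to detect tampering anywhere earlier in the chain.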
Why Razorglint Labs Exists
Memory anomalies are blind spots. AI systems forget, distort, and fabricate, and no one is tracking these failures at scale.
Evidence must be reproducible. Without sealed logs and verification, trust collapses.
We turn anomalies into data. We transform system failures into cryptographically sealed, enterprise-ready evidence.
Compliance needs proof, not theory. Regulators and enterprises demand hard evidence, not speculation.
Our Capabilities
- Sealed, hash-verified evidence chains (notes, logs, screenshots, video); a verification sketch follows this list
- Interactive Verification Viewer (local deployment)
- Freeze statements & provenance logs for chain-of-custody
- Reproducible anomaly phases validated across environments
- Research insights bridging AI safety, memory integrity, and long-term reliability
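The verification side of the first capability can be sketched the same way, assuming the record format from the example under Who We Are. This is again illustrative, not the actual Verification Viewer:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Recompute every link in a sealed chain; fail on the first mismatch."""
    prev_hash = "0" * 64  # genesis value assumed by this sketch
    for record in records:
        if record["prev_hash"] != prev_hash:
            return False  # broken link to the previous record
        # Recompute the record hash over everything except the hash itself.
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != record["record_hash"]:
            return False  # record was altered after sealing
        prev_hash = record["record_hash"]
    return True
```

Re-hashing each artifact file against its stored artifact_sha256 would complete the chain-of-custody check.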
Why It Matters
LLMs are not just software; they are dynamic systems in which memory, continuity, and context integrity directly affect trust, compliance, and security. Razorglint Labs provides actionable, reproducible data that enterprises can use for auditing, red-teaming, and governance.
Contact

Damian Ketting
Director, Razorglint Labs
TCOG Collective LLC (New Mexico)
📞 +31 6 23071750 (Europe/Brussels)
📧 d.ketting@razorglintlabs.com