Mission
The Gutenberg moment for AI
10/4/2025
AI capabilities are advancing rapidly, yet our understanding of how these systems work internally remains primitive. Both the CEO of Anthropic and the US AI Action Plan have called for greater investment in interpretability.
Despite years of promising research, interpretability has only been useful to a niche audience: AI researchers, alignment researchers, and domain-specific scientists. The tools are expensive, demand deep expertise, and often require day-to-day collaboration with interpretability experts.
This mirrors a moment in history. Before Johannes Gutenberg invented the printing press, books were scarce and only the privileged could read. His invention democratized literacy.
Our goal with interpretability is to make AI systems legible. Today, making models better and safer is a vibes-based process. Interpretability offers a principled paradigm: understanding models in a faithful, mathematically precise way. This is how we move beyond intuition.
Our Vision:
- Productionizing Frontier Research. The frontier of interpretability continues to advance. Our commitment is to bridge the gap between promising research and real-world application. The best ideas are only useful if they are accessible.
- Pragmatic Focus on Safety. AI risk is not a single unified problem but a shifting landscape of empirical and theoretical risks. We are deeply pragmatic about reducing as much harm as we can.
The goal is ambitious but simple: interpretability should be table stakes. Not a luxury, not an afterthought. If this challenge speaks to you, reach out.