groundlens
Latest Posts
Transformers reject wrong answers by rotating them
3/5/2026
When a transformer processes a factually wrong completion, it doesn't just scale down the probability. It rotates the representation into a rejection subspace. This mechanism has a phase transition.
Not all hallucinations are created equal
2/10/2026
Three types of hallucination. Two are detectable by geometry. One is provably invisible to any embedding-based method. A taxonomy that tells you where detection works and where it doesn't.
Regulation is the scar tissue of broken trust
1/8/2026
Nobody writes compliance frameworks for fun. The EU AI Act exists because AI systems failed and people got hurt. Here's what it requires.
Understanding semantic laziness in LLM responses
1/5/2026
Why LLMs take shortcuts, and how geometric analysis reveals these patterns before they become production incidents.
The red bird that satisfies all constraints
12/15/2025
An LLM says a cardinal is red. It's correct. But it never checked — it predicted. That distinction is the entire problem.