groundlens
- Geometric LLM hallucination detection
- Deterministic. Auditable. No second LLM.
- Open-source Python library (MIT)
About me
groundlens is a research project by Javier Marin focused on geometric methods for LLM grounding verification.
The core question: can you determine whether an LLM response is grounded in evidence without using another LLM to judge it?
The answer is yes, within measurable boundaries. Embedding geometry encodes enough structure to detect two of three hallucination types. The third — factual errors within the correct conceptual frame — is provably undetectable by any embedding-based method, and we report this honestly.
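To make the geometric intuition concrete, here is a minimal sketch — not the groundlens API, with illustrative names and no claim about the actual scoring method — of measuring angular alignment between a response embedding and its evidence embeddings on the unit hypersphere:

```python
import numpy as np

def angular_grounding(response_vec, evidence_vecs):
    """Toy illustration (not the groundlens API): score how closely a
    response embedding aligns with its evidence on the unit hypersphere."""
    r = response_vec / np.linalg.norm(response_vec)
    E = evidence_vecs / np.linalg.norm(evidence_vecs, axis=1, keepdims=True)
    cosines = E @ r              # cosine similarity to each evidence vector
    return float(cosines.max())  # best angular alignment, in [-1, 1]

rng = np.random.default_rng(0)   # fixed seed: same input, same score, every run
evidence = rng.normal(size=(5, 8))
grounded = evidence[0] + 0.1 * rng.normal(size=8)  # small perturbation of evidence
ungrounded = rng.normal(size=8)                    # direction unrelated to evidence
print(angular_grounding(grounded, evidence), angular_grounding(ungrounded, evidence))
```

Note what this toy captures and what it cannot: a response pointing in a wrong direction is visible in the geometry, but a factual error that keeps the correct conceptual direction produces nearly the same angles — which is why that third type is out of reach for embedding-based methods.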
Research
Three papers form the foundation:
- Semantic Grounding Index (arXiv:2512.13771) — ratio-based grounding verification for RAG systems.
- A Geometric Taxonomy of Hallucinations (arXiv:2602.13224v3) — three-type hallucination classification. The confabulation benchmark.
- Rotational Dynamics of Factual Constraint Processing (arXiv:2603.13259) — transformers reject wrong answers via rotation, not rescaling. Phase transition at 1.6B parameters.
Philosophy
groundlens is verification triage, not truth detection. It tells you which responses earned the right to be trusted and which need human review.
The methods are deterministic. The same input produces the same score every time. Auditable by design.
Open source
The groundlens library is MIT licensed. The hallucination benchmark is CC BY 4.0.
Contact: javier@jmarin.info
Latest Posts
- Transformers reject wrong answers by rotating them (3/5/2026)
- Not all hallucinations are created equal (2/10/2026)
Latest Projects
Under the Hood
arXiv:2603.13259
Rotational Dynamics of Factual Constraint Processing
Mechanistic interpretability: transformers reject wrong answers via rotation, not rescaling. Phase transition at 1.6B parameters.
arXiv:2602.13224v3
A Geometric Taxonomy of Hallucinations
Three-type hallucination classification via directional grounding. Von Mises-Fisher distributions on displacement vectors. Domain-specific calibration achieves AUROC 0.76-0.99.
arXiv:2512.13771
Semantic Grounding Index (SGI)
Ratio-based grounding verification for RAG systems. Measures whether a response engages its source material via angular geometry on the unit hypersphere.
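The von Mises-Fisher idea can be illustrated with a toy sketch. This is not the paper's estimator — it is the standard approximation κ ≈ r(d − r²)/(1 − r²) for the vMF concentration parameter, applied to unit displacement vectors: tightly clustered directions yield high κ, scattered directions low κ.

```python
import numpy as np

def vmf_kappa(unit_vecs):
    """Approximate the von Mises-Fisher concentration parameter kappa
    from unit vectors via kappa ~ r(d - r^2) / (1 - r^2), where r is
    the mean resultant length. Illustrative only, not the paper's method."""
    d = unit_vecs.shape[1]
    r = np.linalg.norm(unit_vecs.mean(axis=0))
    return r * (d - r**2) / (1.0 - r**2)

rng = np.random.default_rng(1)
# Tight cluster of displacement directions around a shared axis
tight = np.array([1.0, 0.0, 0.0, 0.0]) + 0.05 * rng.normal(size=(50, 4))
tight /= np.linalg.norm(tight, axis=1, keepdims=True)
# Scattered directions with no shared axis
loose = rng.normal(size=(50, 4))
loose /= np.linalg.norm(loose, axis=1, keepdims=True)
print(vmf_kappa(tight), vmf_kappa(loose))
```

A high κ on response-to-evidence displacement vectors signals a consistent geometric relationship; the spread in that distribution is what makes a taxonomy (rather than a single threshold) possible.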