reasonAI - explainable reasoning in LLMs
Description
The project seeks to improve the explainability of large language models by applying systematic interventions, such as sparse autoencoders, to open-source reasoning models like Qwen Deepseek-R1.
Transparent AI-driven reasoning is especially important in the life sciences, where trustworthy decisions by foundation models are essential.
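To illustrate the sparse-autoencoder intervention mentioned above, here is a minimal NumPy sketch of the standard SAE architecture used in interpretability work: a ReLU encoder that maps model activations into an overcomplete set of sparse features, a linear decoder that reconstructs the activations, and a loss combining reconstruction error with an L1 sparsity penalty. All dimensions and weights here are illustrative stand-ins, not the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 16, 64  # feature layer is overcomplete vs. the activations

# Toy stand-in for a reasoning model's internal activations.
acts = rng.normal(size=(8, d_model))

# SAE parameters (randomly initialised here; in practice they are trained).
W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae(x):
    """Encode with ReLU into sparse features, then linearly reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # sparse feature activations
    x_hat = f @ W_dec + b_dec               # reconstruction of the input
    return f, x_hat

features, recon = sae(acts)

# Training would minimise reconstruction error plus an L1 sparsity penalty,
# which pushes most features to zero so each one is individually interpretable.
l1_coeff = 1e-3
loss = np.mean((acts - recon) ** 2) + l1_coeff * np.abs(features).mean()
print(f"loss={loss:.4f}, active feature fraction={(features > 0).mean():.2f}")
```

Once trained, the individual feature directions (rows of `W_dec`) can be inspected to ask which human-interpretable concepts the model's activations decompose into.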
Key data
Project lead
Project team
Eric Gericke, Dr. Reto Gubelmann (Universitätsspital Zürich)
Project partners
Universitätsspital Zürich / Digital Society Initiative
Project status
ongoing, started 11/2025
Institute/Centre
Institute of Computational Life Sciences (ICLS)
Funding partner
Digitalisierungsinitiative der Zürcher Hochschulen (DIZH)
Project budget
43'000 CHF