About me

Hey,

My name is Jonas, and I am currently working toward my PhD in Interpretable AI at Leibniz University in Hannover, Germany. My research focuses on interpretability in socio-technical systems. My main goal is to understand how LLMs process information and to make sure they do so reliably. My interests currently center on:

  • Mechanistic interpretability
  • Question answering (QA) & information retrieval (IR)
  • Robustness of deep learning models

If you are interested in discussing these topics, feel free to send me an email!