Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in BlackboxNLP 2020, 2020
In this paper, we probe BERT specifically to understand and measure the relational knowledge it captures.
Recommended citation: Wallat, Jonas, et al. "BERTnesia: Investigating the Capture and Forgetting of Knowledge in BERT." BlackboxNLP (2020). https://www.semanticscholar.org/paper/BERTnesia%3A-Investigating-the-capture-and-forgetting-Wallat-Singh/616610e0b0a31ab4bac1c64fd0b65c2572185522#paper-header
Published in arXiv, 2022
This survey fills a vital gap in the otherwise topically diverse literature of explainable information retrieval. It categorizes and discusses recent explainability methods developed for different application domains in information retrieval, providing a common framework and unifying perspectives.
Recommended citation: Anand, Avishek, et al. "Explainable Information Retrieval: A Survey." arXiv preprint arXiv:2211.02405 (2022). https://arxiv.org/abs/2211.02405
Published in arXiv, 2023
This review highlights the limitations of current AI models in understanding cause and effect, resulting in issues like poor generalization, unfair outcomes, and interpretability challenges.
Recommended citation: Ganguly, Niloy, et al. "A review of the role of causality in developing trustworthy ai systems." arXiv preprint arXiv:2302.06975 (2023). https://arxiv.org/abs/2302.06975
Published in ECIR 2023, 2023
In this paper, we probe BERT-rankers to understand which abilities are acquired by fine-tuning on a ranking task.
Recommended citation: Wallat, Jonas, et al. "Probing BERT for ranking abilities." Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Cham: Springer Nature Switzerland, 2023. https://link.springer.com/chapter/10.1007/978-3-031-28238-6_17
Published in arXiv, 2023
We investigate how different masking strategies during pre-training affect a language model's ability to retain factual information.
Recommended citation: Wallat, Jonas, et al. "The Effect of Masking Strategies on Knowledge Retention by Language Models." arXiv preprint arXiv:2306.07185 (2023). https://arxiv.org/abs/2306.07185
Published in ECAI 2023, 2023
We learn a new vocabulary of relevant DNA base-pair groups using pointwise mutual information and show that it enables significantly faster pretraining of DNA language models.
Recommended citation: Roy, Soumyadeep, et al. "GeneMask: Fast Pretraining of Gene Sequences to Enable Few-Shot Learning." European Conference on Artificial Intelligence, ECAI 2023. https://ebooks.iospress.nl/doi/10.3233/FAIA230492
Undergraduate course, Leibniz University Hannover, Software Engineering, 2017
Teaching Assistant in the bachelor's course Software Quality. Topics included code metrics, systematic testing, and code reviews.
Graduate course, Leibniz University Hannover, Software Engineering, 2021
Teaching Assistant in the master's course Deep Learning.
Graduate course, Leibniz University Hannover, Software Engineering, 2022
Teaching Assistant in the master's course Foundations of Information Ethics.
Summer School, Leibniz University Hannover, Leibniz AI Lab, 2022
Student Ambassador and participant at the AI for Biomedicine Summer School.
Graduate course, Leibniz University Hannover, Software Engineering, 2023
Teaching Assistant in the master's course Foundations of Information Ethics. I also gave a guest lecture on cybersecurity and its risks from an ethical perspective.