Ruta Binkyte

Postdoctoral Researcher


Curriculum vitae


AI Fairness, Causality, Privacy

CISPA Helmholtz Center for Information Security



New paper: Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models


March 12, 2025

I am particularly proud of and excited about this paper. I started thinking about how the different components of ethical AI could be brought together during my PhD thesis, where I worked on fairness and privacy in ML (My thesis). Back then I felt that all the things we want from trustworthy, ethical AI, such as fairness, privacy, robustness, and of course accuracy, cannot be treated in isolation: they are highly interconnected and subject to various trade-offs and tensions. I then started thinking of causality as a glue that could possibly hold everything together. However, it took another year and discussions with my amazing co-authors Ivaxi Sheth, Zhijing Jin, Mohammad Havaei, Bernhard Schölkopf and Mario Fritz to put it all together. The expertise of my co-authors allowed us to broaden the ideas on applications of causality by including LLMs and foundation models. I think our most important contribution is the invitation to think holistically about improving the trustworthiness of AI, together with a principled framework for doing so. Not all tensions in trustworthy ML can be resolved, but it is important to understand and explicitly state them.
Figure 1. Causal Trustworthy ML Cycle (animated diagram): Causal ML can leverage existing knowledge and causal auditing to enhance different components of trustworthiness, namely explainability, fairness, privacy, and accuracy, while simultaneously advancing understanding through causal discovery.
Despite significant advancements in research on the individual dimensions of trustworthy ML, such as fairness, privacy, and explainability, there is a notable lack of effort to integrate these dimensions into a cohesive and unified framework. In this paper, we argue that integrating causality into ML and foundation models offers a way to balance the multiple competing objectives of trustworthy AI.
In the paper we cover the trade-offs and intersections in trustworthy ML, such as privacy vs. accuracy, privacy vs. fairness, and explainability vs. accuracy, among others. We discuss how causality provides a principled approach to navigating these trade-offs by explicitly modeling the data-generating process and clarifying assumptions. In the second part of the paper we turn to the new challenges that have emerged with LLMs, such as hallucinations, and sketch out how causality could be integrated during pre-training or post-training to make them more trustworthy. The full paper can be accessed here: Binkyte, R., Sheth, I., Jin, Z., Havaei, M., Schölkopf, B. and Fritz, M., 2025. Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models. arXiv preprint arXiv:2502.21123.
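To make "explicitly modeling the data-generating process" a bit more concrete, here is a minimal toy sketch (mine, not taken from the paper) of a structural causal model in Python. The variables A (sensitive attribute), X (feature), Y (outcome) and all coefficients are purely illustrative assumptions; the point is only that, once the causal graph is written down, an intervention such as do(A := 0) can be simulated directly and used to audit the effect of A on the outcome.

```python
# Illustrative sketch only: a toy structural causal model with
# hypothetical variables A (sensitive attribute), X (feature), Y (outcome).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed graph: A -> X, A -> Y, X -> Y. The arrows are explicit modeling
# assumptions stated up front, not something learned from the data here.
A = rng.binomial(1, 0.5, size=n)                       # sensitive attribute
X = 1.5 * A + rng.normal(0.0, 1.0, size=n)             # feature influenced by A
Y = (0.8 * X + 0.5 * A + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)

# Simulate the intervention do(A := 0): regenerate X and Y with A fixed to 0,
# then compare outcome rates to audit the total effect of A on Y.
X_do0 = rng.normal(0.0, 1.0, size=n)
Y_do0 = (0.8 * X_do0 + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)

print("P(Y=1) observed:       ", Y.mean())
print("P(Y=1) under do(A=0):  ", Y_do0.mean())
```

The same explicit model can then be reused to reason about the other goals discussed in the paper, for example which mechanisms an explanation should refer to, or which dependencies a privacy mechanism would perturb.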
