General Medical AI Research

Introduction

Pacmed conducts applied research in areas relevant to the use of AI and ML in healthcare, paying special attention to the preconditions for careful and responsible use of state-of-the-art technology. We also strive for scientific validation of every step of development and implementation, as well as education on all aspects of data-driven care. To this end, Pacmed explicitly seeks collaboration with partners to ensure that the results of our research benefit everyone.

Publications about Pacmed Critical

Publications

Pre-prints and Poster Presentations

  • Zadorozhny K, Thoral P, Elbers P, Cinà G. Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation. arXiv preprint arXiv:2109.14885. 2021 Sep 30.
  • Izdebski A, Thoral PJ, Lalisang RC, McHugh DM, Entjes R, van der Meer NJ, Dongelmans DA, Boelens AD, Rigter S, Hendriks SH, de Jong R. A pragmatic approach to estimating average treatment effects from EHR data: the effect of prone positioning on mechanically ventilated COVID-19 patients. arXiv preprint arXiv:2109.06707. 2021 Sep 14.
  • Ruhe D, Cinà G, Tonutti M, de Bruin D, Elbers P. Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications. arXiv preprint arXiv:1906.08619. 2019 Jun 20. Presented at AISG @ ICML 2019.
  • Meijerink L, Cinà G, Tonutti M. Uncertainty estimation for classification and risk prediction in medical settings. arXiv preprint arXiv:2004.05824. 2020 Apr 13.
  • de Beer AA, Thoral PJ, Hovenkamp H, van den Wildenberg WJ, Platenkamp M, Girbes ARJ, Elbers PWG. Right Data, Right Now: Improving a machine learning based ICU readmission tool by targeting model explainability and software usability with end-user testing. Presented at ESICM LIVES 2019.

Explainable AI

Explaining the decisions of machine learning models to humans is a difficult problem. To achieve the highest accuracy, we often have to resort to complex models, but in general, the more complex a model is, the harder it is to decipher. A plethora of methods promise to explain the decisions of complex models in a post-hoc manner; since they can only provide simplified versions of the truth, comparing them and selecting the best one is challenging and often depends on the use case. Nonetheless, interpretability is crucial to making the most of this technology – to ensure transparency, identify problems with the algorithm, discover new knowledge, and build trust with users. At Pacmed we pay close attention to developments in Explainable AI, to ensure that the front end of our products relies on state-of-the-art techniques.
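
As a simplified, hypothetical illustration of what a post-hoc explanation method looks like in practice (not Pacmed's production approach), the sketch below applies permutation importance from scikit-learn to a gradient-boosting classifier trained on synthetic data; the dataset and model are purely illustrative.

```python
# Hypothetical sketch: post-hoc explanation of a black-box classifier via
# permutation importance, using scikit-learn on synthetic tabular data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for patient features (illustrative only).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                scoring="roc_auc", random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Shuffling one feature at a time and measuring the drop in performance gives a model-agnostic estimate of how much the model relies on that feature – one of the simplest post-hoc explanations against which more elaborate methods can be compared.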

Our work on the topic

Out-of-Distribution Detection

In a medical context, an algorithm might become unreliable when the data it receives differs from what it saw during training – there are several plausible scenarios in which this could happen, from changing hospital protocols and patient demographics to novel phenotypes. At Pacmed we research the issue of Out-of-Distribution detection, with the goal of detecting, in real time, samples that differ from the training data. This line of research aims to improve the reliability of Pacmed products and their robustness against data shifts.
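
As a minimal, hypothetical sketch of the idea (not Pacmed's actual detection method), the example below fits an Isolation Forest on data drawn from the training distribution and uses it to flag an incoming batch whose distribution has shifted; all data is synthetic.

```python
# Hypothetical sketch: flagging out-of-distribution samples with an
# Isolation Forest fitted on the training distribution (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Feature vectors resembling what the model saw during training.
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# A shifted batch, e.g. a new patient population or a changed protocol.
X_shifted = rng.normal(loc=3.0, scale=1.0, size=(100, 8))

detector = IsolationForest(random_state=0).fit(X_train)

# score_samples: higher means more "in-distribution"; predict returns -1 for outliers.
print("mean score, in-distribution:", detector.score_samples(X_train[:100]).mean())
print("mean score, shifted batch  :", detector.score_samples(X_shifted).mean())
print("fraction flagged as OOD    :", (detector.predict(X_shifted) == -1).mean())
```

In a deployed system, such a detector would run alongside the predictive model so that predictions on flagged samples can be suppressed or accompanied by a warning.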

Our work on the topic

Causal Inference

Every time a doctor administers a treatment, they rely on the assumption that the treatment will have a specific effect. The established way to measure treatment effects is to perform Randomized Controlled Trials (RCTs). However, we often don't have reliable RCT information. For example, at the beginning of the COVID pandemic there was no time to test treatments in RCTs. In other cases, RCTs might be very old or might have been performed on very homogeneous populations that do not reflect the patients we see in practice.

Our work on the topic

Kansen Voor West III project on Causal Inference

Pacmed and the Department of Medical Informatics of AmsterdamUMC and the University of Amsterdam have received a grant under the Kansen Voor West III program, funded by the European Regional Development Fund (ERDF) of the European Union, to conduct research on causal inference.

However, intensive care physicians could gain much more value from AI decision support when it not only predicts the risk of an undesirable outcome but also helps determine the best intervention trajectory (for example, by predicting the effect of letting a patient stay an extra day in the ICU). This requires treatment effect estimation. Techniques to estimate treatment effects come from the domain of causal inference, also referred to as causal AI.
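
As a purely illustrative sketch of treatment effect estimation (not the methodology of this project), the example below uses a simple T-learner on synthetic data: separate outcome models are fitted for treated and untreated patients, and the average treatment effect is estimated as the mean difference of their predicted potential outcomes.

```python
# Hypothetical sketch: average treatment effect estimation with a T-learner
# (two outcome models) on synthetic, confounded observational data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000

# Synthetic covariates, confounded treatment assignment, and an outcome
# with a true treatment effect of +2.0.
X = rng.normal(size=(n, 5))
propensity = 1 / (1 + np.exp(-X[:, 0]))  # treatment probability depends on X
t = rng.binomial(1, propensity)
y = X[:, 0] + 0.5 * X[:, 1] + 2.0 * t + rng.normal(scale=1.0, size=n)

# Fit one outcome model per treatment arm.
model_treated = GradientBoostingRegressor(random_state=0).fit(X[t == 1], y[t == 1])
model_control = GradientBoostingRegressor(random_state=0).fit(X[t == 0], y[t == 0])

# Average treatment effect: mean difference of predicted potential outcomes.
ate = (model_treated.predict(X) - model_control.predict(X)).mean()
print(f"estimated ATE: {ate:.2f} (true effect: 2.00)")
```

Because treatment assignment here depends on the covariates, a naive difference in observed outcomes between treated and untreated patients would be biased; conditioning both outcome models on the covariates corrects for this confounding, under the usual assumption that all relevant confounders are measured.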

Pacmed's collaboration with AmsterdamUMC, the Santeon hospitals, and healthcare insurers such as CZ and Zilveren Kruis will enable the validation and application of this new type of technology in practice. The project started in 2022 and will run for at least three years.