
General Medical AI Research

Pacmed Labs conducts applied research in areas relevant to the use of AI and ML in healthcare, paying special attention to the preconditions for careful and responsible use of state-of-the-art technology. In addition, we strive for scientific validation of all steps of development and implementation, as well as education on all aspects of data-driven care. To this end, Pacmed Labs explicitly seeks collaboration with partners to ensure that the results of our research benefit everyone.

Publications about Pacmed Critical

Publications

Pre-prints and Poster Presentations

  • Zadorozhny K, Thoral P, Elbers P, Cinà G. Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation. arXiv preprint arXiv:2109.14885. 2021 Sep 30.
  • Izdebski A, Thoral PJ, Lalisang RC, McHugh DM, Entjes R, van der Meer NJ, Dongelmans DA, Boelens AD, Rigter S, Hendriks SH, de Jong R. A pragmatic approach to estimating average treatment effects from EHR data: the effect of prone positioning on mechanically ventilated COVID-19 patients. arXiv preprint arXiv:2109.06707. 2021 Sep 14.
  • Ruhe D, Cinà G, Tonutti M, de Bruin D, Elbers P. Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications. arXiv preprint arXiv:1906.08619. 2019 Jun 20. Presented at AISG @ ICML 2019.
  • Meijerink L, Cinà G, Tonutti M. Uncertainty estimation for classification and risk prediction in medical settings. arXiv preprint arXiv:2004.05824. 2020 Apr 13.
  • de Beer AA, Thoral PJ, Hovenkamp H, van den Wildenberg WJ, Platenkamp M, Girbes ARJ, Elbers PWG. Right Data, Right Now: Improving a machine learning based ICU readmission tool by targeting model explainability and software usability with end-user testing. Presented at ESICM LIVES 2019.

Explainable AI

Explaining the decisions of machine learning models to humans is a difficult problem. To achieve the highest accuracy, we often have to resort to complex models, but in general, the more complex a model is, the harder it is to decipher. A plethora of methods promise to explain the decisions of complex models in a post-hoc manner; since they can only provide simplified versions of the truth, comparing them and selecting the best one is challenging and often depends on the use case. Nonetheless, interpretability is crucial to making the most of this technology – to ensure transparency, identify problems with the algorithm, discover new knowledge, and build trust with users. At Pacmed Labs we pay close attention to developments in Explainable AI, to ensure that the front end of our products relies on state-of-the-art techniques.
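As an illustration of what a post-hoc explanation can look like in practice, here is a minimal sketch of a global surrogate: a shallow decision tree is fitted to the predictions of a more complex model, so its decision logic can be inspected directly and its fidelity to the complex model can be measured. The dataset, model choices and tree depth below are illustrative assumptions, not the setup used in our products.

    # Sketch: explain a complex model with a shallow surrogate decision tree.
    # Dataset, models and tree depth are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Fit an interpretable surrogate on the complex model's own predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, complex_model.predict(X))

    # Fidelity: how faithfully the surrogate mimics the complex model.
    fidelity = surrogate.score(X, complex_model.predict(X))
    print(f"surrogate fidelity: {fidelity:.2f}")
    print(export_text(surrogate))

The fidelity score makes the trade-off explicit: a surrogate that mimics the complex model poorly is only a simplified version of the truth and should not be over-interpreted.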

Our work on the topic

Decomposition of interpretability in different dimensions (repository with associated paper)

Medium article on approximating Neural Networks with Decision Trees

Out-of-Distribution Detection

In a medical context, an algorithm might become unreliable when the data it receives differs from what it saw during training – there are several plausible scenarios where this could happen, from changing hospital protocols and patient demographics to novel phenotypes. At Pacmed Labs we research Out-of-Distribution (OOD) detection, with the goal of detecting samples that differ from the training data in real time. This line of research aims to improve the reliability of Pacmed products and their robustness against data shifts.
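To make the idea concrete, here is a minimal sketch of one simple detector: incoming samples are scored by their Mahalanobis distance to the training distribution, and samples above a threshold are flagged. The synthetic data, feature space and threshold are illustrative assumptions; the papers listed below evaluate a much wider range of detectors on real medical data.

    # Sketch: flag out-of-distribution samples via Mahalanobis distance.
    # Training data, feature space and threshold are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 8))   # in-distribution training features

    mean = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

    def ood_score(x):
        """Mahalanobis distance of one sample to the training data."""
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    # Threshold at, for example, the 99th percentile of training scores.
    train_scores = np.array([ood_score(x) for x in X_train])
    threshold = np.percentile(train_scores, 99)

    x_new = rng.normal(loc=4.0, size=8)    # a shifted, suspicious sample
    print(ood_score(x_new) > threshold)    # True -> treat the prediction with caution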

Our work on the topic

Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation

Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection

Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data

Uncertainty Estimation for Classification and Risk Prediction on Medical Tabular Data

Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications

Causal Inference 

Every time a doctor administers a treatment, they rely on the assumption that the treatment will have a specific effect. The established way to measure treatment effects is to perform Randomized Controlled Trials (RCTs). However, reliable RCT evidence is often unavailable. For example, at the beginning of the COVID pandemic there was no time to test treatments in RCTs. In other cases, RCTs might be very old or might have been performed on very homogeneous populations that do not reflect what we see in practice.

This is one of the reasons why inferring treatment effects from observational data – data collected during routine care – becomes such an important challenge. Causal inference, however, is complicated by at least two major obstacles. The first is the fundamental problem of causal inference: for every patient in the data we only observe one of the possible outcomes, so we never know exactly what would have happened if the patient had received a different treatment. The second is confounding, meaning that other variables can muddle the causal relationship between treatment and outcome, for instance when sicker patients are more likely to receive a given treatment. In recent years, scientists and practitioners from the field of AI have intensified efforts to bring the AI revolution to causal reasoning, and thus to healthcare. The promise is that applying concepts from machine learning will help overcome these difficulties and help doctors and medical experts make better decisions.
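As a concrete illustration of how measured confounding can be handled, here is a minimal sketch of one standard estimator of the average treatment effect from observational data: inverse probability weighting, where a propensity model adjusts for a measured confounder. The simulated data, the single confounder and the true effect of 1.0 are illustrative assumptions, not the specific methodology or results of the prone-positioning study listed below.

    # Sketch: average treatment effect via inverse probability weighting (IPW).
    # Simulated data; the confounder and true effect are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    confounder = rng.normal(size=n)                  # e.g. illness severity
    p_treat = 1 / (1 + np.exp(-confounder))          # sicker patients treated more often
    treatment = rng.binomial(1, p_treat)
    outcome = 1.0 * treatment - 2.0 * confounder + rng.normal(size=n)  # true effect = 1.0

    # The naive contrast is confounded: treated patients were sicker to begin with.
    naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

    # Propensity score: probability of treatment given the confounder.
    ps_model = LogisticRegression().fit(confounder.reshape(-1, 1), treatment)
    e = ps_model.predict_proba(confounder.reshape(-1, 1))[:, 1]

    # IPW estimate of the average treatment effect.
    ate = np.mean(treatment * outcome / e - (1 - treatment) * outcome / (1 - e))
    print(f"naive: {naive:.2f}   ipw: {ate:.2f}   (true effect: 1.0)")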

Our work on the topic

A pragmatic approach to estimating average treatment effects from EHR data: the effect of prone positioning on mechanically ventilated COVID-19 patients

Project on treatment effect estimation on COVID-19 funded by SIDN (in Dutch)

Kansen Voor West III project on Causal Inference

Pacmed and the Department of Medical Informatics of AmsterdamUMC and the University of Amsterdam have received a grant under the Kansen Voor West III program, funded by the European Regional Development Fund (ERDF) of the European Union, to conduct research on causal inference.

Pacmed has already developed risk prediction software: Pacmed Critical. It predicts the seven-day risk of mortality and readmission if a patient were to be discharged to the ward.

Intensive care physicians, however, could gain much more value from AI decision support if it not only predicted the risk of an undesirable outcome but also helped determine the best intervention trajectory, for example by predicting the effect of letting a patient stay an extra day in the ICU. This requires treatment effect estimation. Techniques to estimate treatment effects come from the domain of causal inference, also referred to as causal AI.
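As a rough illustration of what such decision support could look like, here is a minimal sketch of individualized treatment effect estimation with a so-called T-learner: one outcome model is fitted per treatment arm, and the per-patient effect is the difference of their predictions. The features, intervention and outcome below are simulated, hypothetical examples, not Pacmed Critical's actual models.

    # Sketch: individualized treatment effects with a T-learner.
    # Features, intervention and outcome are simulated, hypothetical examples.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 5_000
    X = rng.normal(size=(n, 5))                    # patient features at decision time
    stay_extra_day = rng.binomial(1, 0.5, size=n)  # observed intervention
    # Simulated outcome: the benefit of the extra day depends on the first feature.
    risk = 0.5 * X[:, 0] - 0.3 * stay_extra_day * X[:, 0] + rng.normal(scale=0.1, size=n)

    treated = stay_extra_day == 1
    m_treated = GradientBoostingRegressor(random_state=0).fit(X[treated], risk[treated])
    m_control = GradientBoostingRegressor(random_state=0).fit(X[~treated], risk[~treated])

    # Estimated effect of the extra ICU day for each individual patient.
    individual_effect = m_treated.predict(X) - m_control.predict(X)
    print(individual_effect[:5])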

In this project, the general applicability of causal AI will be validated on different use cases, such as discharge and extubation. Applying these models to medical data in intensive care has the potential to significantly reduce the pressure on the ICU by making healthcare delivery smarter, more efficient and, at the same time, more personalized.

Pacmed's collaboration with AmsterdamUMC, the Santeon hospitals, and healthcare insurers such as CZ and Zilveren Kruis will enable the validation and application of this new type of technology in practice. The project started in 2022 and will run for at least three years.