General Medical AI Research
Introduction
Pacmed is involved in applied research in areas relevant to the use of AI and ML in healthcare, paying special attention to the preconditions for careful and responsible use of state-of-the-art technology. We also strive for scientific validation of every step of development and implementation, as well as education on all aspects of data-driven care. To ensure that the results of our research benefit everyone, Pacmed explicitly seeks collaboration with partners.
Publications about Pacmed Critical
- de Vos J, Visser LA, de Beer AA, Fornasa M, Thoral PJ, Elbers PW, Cinà G. The Potential Cost-Effectiveness of a Machine Learning Tool That Can Prevent Untimely Intensive Care Unit Discharge. Value in Health. 2022 Oct 22.
- Thoral PJ, Fornasa M, de Bruin DP, Tonutti M, Hovenkamp H, Driessen RH, Girbes AR, Hoogendoorn M, Elbers PW. Explainable Machine Learning on AmsterdamUMCdb for ICU Discharge Decision Support: Uniting Intensivists and Data Scientists. Critical Care Explorations. 2021 Sep;3(9).
Publications
- Boerman AW, Schinkel M, Meijerink L, van den Ende ES, Pladet LC, Scholtemeijer MG, Zeeuw J, van der Zaag AY, Minderhoud TC, Elbers PW, Wiersinga WJ. Using machine learning to predict blood culture outcomes in the emergency department: a single-centre, retrospective, observational study. BMJ open. 2022 Jan 1;12(1):e053332.
- Fleuren LM, Dam TA, Tonutti M, de Bruin DP, Lalisang RC, Gommers D, Cremer OL, Bosman RJ, Rigter S, Wils EJ, Frenzel T. The Dutch Data Warehouse, a multicenter and full-admission electronic health records database for critically ill COVID-19 patients. Critical Care. 2021 Dec;25(1):1-2.
- Fleuren LM, Dam TA, Tonutti M, de Bruin DP, Lalisang RC, Gommers D, Cremer OL, Bosman RJ, Rigter S, Wils EJ, Frenzel T. Predictors for extubation failure in COVID-19 patients using a machine learning approach. Critical Care. 2021 Dec;25(1):1-0.
- Fleuren LM, Tonutti M, de Bruin DP, Lalisang RC, Dam TA, Gommers D, Cremer OL, Bosman RJ, Vonk SJ, Fornasa M, Machado T. Risk factors for adverse outcomes during mechanical ventilation of 1152 COVID-19 patients: a multicenter machine learning study with highly granular data from the Dutch Data Warehouse. Intensive care medicine experimental. 2021 Dec;9(1):1-5.
- Fleuren LM, de Bruin DP, Tonutti M, Lalisang RC, Elbers PW. Large-scale ICU data sharing for global collaboration: the first 1633 critically ill COVID-19 patients in the Dutch Data Warehouse. Intensive care medicine. 2021 Apr;47(4):478-81.
- Dam TA, de Grooth HJ, Klausch T, Fleuren LM, de Bruin DP, Entjes R, Rettig TC, Dongelmans DA, Boelens AD, Rigter S, Hendriks SH. Some Patients Are More Equal Than Others: Variation in Ventilator Settings for Coronavirus Disease 2019 Acute Respiratory Distress Syndrome. Critical care explorations. 2021 Oct;3(10).
- Ulmer D, Cinà G. Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection. In Uncertainty in Artificial Intelligence 2021 Dec 1 (pp. 1766-1776). PMLR.
- Ulmer D, Meijerink L, Cinà G. Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data. In Machine Learning for Health 2020 Nov 23 (pp. 341-354). PMLR.
- Curth A, Thoral P, van den Wildenberg W, Bijlstra P, de Bruin D, Elbers P, Fornasa M. Transferring clinical prediction models across hospitals and electronic health record systems. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases 2019 Sep 16 (pp. 605-621). Springer, Cham.
- Grivas N, de Bruin D, Barwari K, van Muilekom E, Tillier C, van Leeuwen PJ, Wit E, Kroese W, van der Poel H. Ultrasensitive prostate‐specific antigen level as a predictor of biochemical progression after robot‐assisted radical prostatectomy: Towards risk adapted follow‐up. Journal of clinical laboratory analysis. 2019 Feb; 33(2):e22693.
Pre-prints and Poster Presentations
- Zadorozhny K, Thoral P, Elbers P, Cinà G. Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation. arXiv preprint arXiv:2109.14885. 2021 Sep 30.
- Izdebski A, Thoral PJ, Lalisang RC, McHugh DM, Entjes R, van der Meer NJ, Dongelmans DA, Boelens AD, Rigter S, Hendriks SH, de Jong R. A pragmatic approach to estimating average treatment effects from EHR data: the effect of prone positioning on mechanically ventilated COVID-19 patients. arXiv preprint arXiv:2109.06707. 2021 Sep 14.
- Ruhe D, Cinà G, Tonutti M, de Bruin D, Elbers P. Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications. arXiv preprint arXiv:1906.08619. 2019 Jun 20. Presented at AISG @ ICML2019
- Meijerink L, Cinà G, Tonutti M. Uncertainty estimation for classification and risk prediction in medical settings. arXiv preprint arXiv:2004.05824. 2020 Apr 13.
- A.A. de Beer, P.J. Thoral, H. Hovenkamp, W.J. van den Wildenberg, M. Platenkamp, A.R.J. Girbes, P.W.G. Elbers. Right Data, Right Now: Improving a machine learning based ICU readmission tool by targeting model explainability and software usability with end-user testing. Presented at ESICM @ LIVES 2019
Explainable AI
Explaining the decisions of machine learning models to humans is a difficult problem. To achieve the highest accuracy, we often have to resort to complex models, but in general, the more complex a model is, the harder it is to decipher. A plethora of methods promise to explain the decisions of complex models post hoc; since they can only provide simplified versions of the truth, comparing them and selecting the best one is challenging and often depends on the use case. Nonetheless, interpretability is crucial to making the most of this technology: to ensure transparency, identify problems with the algorithm, discover new knowledge, and build trust with users. At Pacmed we pay close attention to developments in Explainable AI, to ensure that the front end of our products relies on state-of-the-art techniques.
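As a minimal illustration of the post-hoc idea, the sketch below (on synthetic data, not Pacmed code) approximates a complex black-box model with a shallow decision-tree surrogate, one common way to obtain a human-readable explanation of a model's behavior:

```python
# Hypothetical sketch: approximating a complex "black-box" model with a
# shallow decision-tree surrogate, a common post-hoc explanation approach.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for tabular clinical data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The complex model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow, readable tree on the black box's *predictions*, not the
# true labels: the surrogate mimics the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The key metric here is fidelity to the black box rather than accuracy on the labels; a surrogate with low fidelity explains a model that does not exist.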
Our work on the topic
- Decomposition of interpretability in different dimensions (repository with associated paper)
- Medium article on approximating Neural Networks with Decision Trees
Out-Of-Distribution detection
In a medical context, an algorithm can become unreliable when the data it receives differs from what it saw during training. There are several plausible scenarios where this could happen, from changing hospital protocols and patient demographics to novel phenotypes. At Pacmed we research Out-of-Distribution (OOD) detection, with the goal of identifying samples that differ from the training data in real time. This line of research aims to improve the reliability of Pacmed products and their robustness against data shifts.
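A simple baseline for this task (a hedged sketch on synthetic data, not Pacmed's production method) is to flag incoming samples whose Mahalanobis distance from the training distribution exceeds a threshold calibrated on the training data itself:

```python
# Hypothetical sketch: flagging out-of-distribution samples with a
# Mahalanobis-distance check against the training data.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 5))  # in-distribution training data

mean = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def mahalanobis(x):
    """Distance of a sample from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate a threshold on the training data, e.g. its 99th percentile.
train_dists = np.array([mahalanobis(x) for x in X_train])
threshold = np.percentile(train_dists, 99)

def is_ood(x):
    return mahalanobis(x) > threshold

shifted_sample = rng.normal(6.0, 1.0, size=5)  # simulated data shift
print(is_ood(shifted_sample))
```

In practice the check would run per incoming patient record before the model's prediction is shown, so that predictions on flagged samples can be suppressed or accompanied by a warning.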
Our work on the topic
- Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation
- Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection
- Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data
- Uncertainty Estimation for Classification and Risk Prediction on Medical Tabular Data
- Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications
Causal Inference
Every time a doctor administers a treatment, they rely on the assumption that the treatment will have a specific effect. The established way to measure treatment effects is through Randomized Controlled Trials (RCTs). However, reliable RCT evidence is often unavailable. For example, at the beginning of the COVID pandemic there was no time to test treatments in RCTs. In other cases, RCTs may be very old or may have been performed on very homogeneous populations that do not reflect what we see in practice.
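When RCT evidence is missing, causal-inference methods can estimate treatment effects from observational data. The sketch below (entirely synthetic data; not the method of the papers listed here) shows inverse propensity weighting, one standard technique: a naive difference in means is confounded because sicker patients are treated more often, while reweighting by the estimated propensity recovers the true effect:

```python
# Hypothetical sketch: estimating an average treatment effect (ATE) from
# synthetic observational data with inverse propensity weighting (IPW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
severity = rng.normal(size=n)                 # confounder
p_treat = 1 / (1 + np.exp(-severity))         # sicker patients treated more often
treated = rng.binomial(1, p_treat)
# Outcome worsens with severity, improves with treatment (true effect = +2.0).
outcome = 2.0 * treated - 1.5 * severity + rng.normal(size=n)

# Naive difference in means is biased by the confounder.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# IPW: model the treatment propensity, then reweight each group.
prop = LogisticRegression().fit(severity.reshape(-1, 1), treated)
e = prop.predict_proba(severity.reshape(-1, 1))[:, 1]
ate_ipw = (np.mean(treated * outcome / e)
           - np.mean((1 - treated) * outcome / (1 - e)))

print(f"naive: {naive:.2f}, IPW estimate: {ate_ipw:.2f} (true effect: 2.00)")
```

The validity of such estimates rests on untestable assumptions (no unmeasured confounding, correct propensity model), which is why pragmatic evaluation against clinical knowledge matters.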
Our work on the topic
- A pragmatic approach to estimating average treatment effects from EHR data: the effect of prone positioning on mechanically ventilated COVID-19 patients
- Project on treatment effect estimation on COVID-19 funded by SIDN (in Dutch)
Kansen Voor West III project on Causal Inference
Pacmed and the department of Medical Informatics of Amsterdam UMC and the University of Amsterdam have received a grant under the Kansen Voor West III program, funded by the European Fund for Regional Development (EFRD) of the European Union, to conduct research on causal inference.
However, intensive care physicians could gain much more value from AI decision support if it not only predicted the risk of an undesirable outcome but also helped determine the best intervention trajectory (for example, by predicting the effect of letting a patient stay an extra day in the ICU). This requires treatment effect estimation. Techniques for estimating treatment effects come from the domain of causal inference, also referred to as causal AI.
Pacmed's collaboration with Amsterdam UMC, the Santeon hospitals, and healthcare insurers such as CZ and Zilveren Kruis will enable the validation and application of this new type of technology in practice. The project started in 2022 and will run for at least three years.