AI Basics: Risks and Solutions

By
Olivier Thijssens - Senior Data Scientist
October 14, 2024

AI Risks and Solutions - Technical Challenges


Estimated reading time: 5 minutes

In the public debate, the benefits of AI in healthcare are often mentioned alongside potential risks. This is wise, as the quality of care is vital to a patient's quality of life. But what risks are there? And are there solutions to prevent or mitigate them? In this blog post, Pacmed's data scientists guide you through the main technical challenges of training and maintaining safe, optimally functioning models, and outline the most common risks associated with medical AI. You will learn about their causes, potential consequences, and corresponding solutions.

AI Risks Arise from Various Challenges

AI presents several risks, each stemming from distinct challenges. First, there are technical challenges related to the development of AI models. Ensuring that these models are rigorously trained and aligned with the tasks they’re deployed for is critical to their effectiveness. Then, there are application challenges, where it's vital that AI is used within the scope of its intended purpose. Misapplication can lead to unreliable or even harmful outcomes. Finally, management challenges arise, encompassing issues like data security, infrastructure integrity, and liability concerns.

However, all of these challenges are manageable. With the right expertise, oversight, and commitment to best practices, organizations can successfully navigate and mitigate the risks associated with AI, unlocking its full potential while safeguarding against its pitfalls.

Technical Risks: Overcoming Biases in Model Development and Monitoring

The greatest technical risks of AI stem from biases. If not properly addressed, the model may perform worse than expected. This could lead to a physician making a decision that is unsuitable or even harmful for a specific patient, for example, because the patient does not fit within the population on which the training data is based. Below is a list of biases and strategies to mitigate risks.

Bias 1: Selection Bias

Selection bias occurs when the dataset used for training the model is not representative of the entire population it will be applied to. This can happen through the way the study population is selected or through the sampling methods chosen. To prevent selection bias, it is important that the dataset is representative of all populations that the ICU must serve. This can be achieved by using a large and diverse dataset. Since the data within a single hospital is often limited to several tens of thousands of cases, multi-hospital models are an important means of expanding the pool.

FAQ: Which data does Pacmed use to train models? We use a hospital's historical dataset to calibrate our models. This way, we can guarantee the model's performance on that hospital's population.
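To make the idea of calibration concrete, here is a minimal sketch (not Pacmed's actual code) of how predicted risks can be compared with observed outcome frequencies on historical data, using scikit-learn's calibration_curve on made-up numbers:

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Hypothetical predictions on a hospital's historical dataset.
y_true = rng.integers(0, 2, size=2000)
y_prob = np.clip(y_true * 0.6 + rng.normal(scale=0.25, size=2000) + 0.2, 0, 1)

# Compare mean predicted probabilities with observed outcome frequencies per bin.
observed, predicted = calibration_curve(y_true, y_prob, n_bins=10)
for p, o in zip(predicted, observed):
    print(f"predicted risk {p:.2f} -> observed frequency {o:.2f}")
```

A well-calibrated model shows predicted risks that closely track the observed frequencies on the hospital's own historical population.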

Pacmed also conducts subgroup analyses to assess performance per group, such as groups based on age, gender, and type of ICU patient. Pacmed flags any groups where the model underperforms, preventing predictions for these groups. Clinicians are notified when patients from these flagged groups are encountered in real time.
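As an illustration only, here is a minimal sketch of such a subgroup check in Python, with made-up data and a hypothetical AUC threshold (the actual metrics and thresholds Pacmed uses are not shown here):

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation table: one row per patient, with the true outcome,
# the model's predicted probability, and the subgroup the patient belongs to.
results = pd.DataFrame({
    "age_group": ["<50", "<50", "50-70", "50-70", "70+", "70+"] * 50,
    "y_true":    [0, 1, 0, 1, 0, 1] * 50,
    "y_pred":    [0.2, 0.8, 0.3, 0.7, 0.6, 0.4] * 50,
})

MIN_AUC = 0.75  # illustrative threshold below which a subgroup would be flagged

flagged = []
for group, rows in results.groupby("age_group"):
    auc = roc_auc_score(rows["y_true"], rows["y_pred"])
    print(f"{group}: AUC = {auc:.2f} (n = {len(rows)})")
    if auc < MIN_AUC:
        flagged.append(group)

print("Flagged subgroups:", flagged)
```

In this toy example, the oldest subgroup scores below the threshold and would be flagged, so the model would not be used for those patients.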

Bias 2: Data Drift

Data drift refers to the phenomenon where the statistical properties of the input data change over time. This means that the data on which the model was initially trained no longer accurately corresponds to the new data. Pacmed keeps the machine learning model up to date through regular retraining with new historical data. During retraining, data on which a prediction has previously been made can be included. Here, a new risk comes into play: self-influence of the model. If a model retrains on data with outcomes influenced by the model itself, bias can arise from a self-fulfilling prophecy. Therefore, it is important to keep the dataset large with new data from outside the original setting. Pacmed is working on a multi-hospital model covering all participating hospitals, which increases the generalizability of the model. It is crucial that more data in the Netherlands and Europe is responsibly made available for secondary use.
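A simple way to watch for this kind of drift, sketched below with synthetic numbers rather than Pacmed's actual monitoring pipeline, is to compare the distribution of each input variable in recent data against the training data, for example with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: lactate measurements at training time vs. recently.
train_lactate = rng.normal(loc=1.8, scale=0.6, size=5000)
recent_lactate = rng.normal(loc=2.3, scale=0.7, size=500)   # shifted distribution

statistic, p_value = ks_2samp(train_lactate, recent_lactate)
if p_value < 0.01:
    print(f"Possible data drift detected (KS statistic = {statistic:.2f})")
```

A check like this, run per variable, signals when the live population has moved away from the population the model was trained on and retraining may be needed.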

Bias 3: Overfitting to Training Data

Overfitting occurs when a model fits too closely to the training dataset and cannot generalize well to new data. Pacmed addresses this through extensive validation and cross-validation with the most recent hospital data, which is most similar to the data that may be encountered in production. This way, performance is measured, and the generalizability of the model is ensured.
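To illustrate the principle, here is a hedged sketch of time-aware cross-validation on synthetic data: the model is always evaluated on folds that are chronologically later than the data it was trained on, mimicking the gap between historical training data and production data (the model class and metric are placeholders, not Pacmed's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical admission data, ordered chronologically (oldest first).
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Validate on chronologically later folds only, so the model is always tested
# on data that is "newer" than what it was trained on.
splitter = TimeSeriesSplit(n_splits=5)
model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=splitter, scoring="roc_auc")

print("AUC per fold (newest folds last):", np.round(scores, 3))
```

A large drop in performance on the most recent folds is a warning sign that the model has fit too closely to older training data.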

Bias 4: Input Data Bias

Inconsistencies in input data can arise from errors in data entry by staff or from measuring equipment. These can lead to incorrect training of the model. Pacmed therefore runs a medical validation process in which the distribution of the data is compared with clinical knowledge. Our in-house intensivists (medical specialists who work with Pacmed) and medical data experts verify the accuracy of the data, and everything is standardized to uniform units for easy comparison. After this initial validation, we sit down with the hospital's own intensivists to perform further checks and validation work.
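As a small illustration of what such checks can look like in code, the sketch below standardizes a hypothetical temperature column to one unit and flags clinically implausible values; the plausibility bounds are illustrative only, not Pacmed's validation rules:

```python
import pandas as pd

# Hypothetical raw export with mixed units and an obvious entry error.
raw = pd.DataFrame({
    "temperature": [37.1, 98.6, 36.8, 410.0],   # one value in °F, one impossible
    "temperature_unit": ["C", "F", "C", "C"],
})

# Standardize everything to one unit (°C) before validation.
is_f = raw["temperature_unit"] == "F"
raw.loc[is_f, "temperature"] = (raw.loc[is_f, "temperature"] - 32) / 1.8
raw["temperature_unit"] = "C"

# Flag values outside a clinically plausible range for manual review.
plausible = raw["temperature"].between(25, 45)
print(raw[~plausible])   # rows to be reviewed, not silently dropped
```

Flagged values are reviewed with clinicians rather than automatically corrected or discarded, so the dataset stays both clean and trustworthy.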

Bias 5: Non-representative Training Data

If the population to which a model is applied is not well represented in the training data, accurate predictions cannot be made. This can happen, for example, if a hospital primarily treats surgical patients while other hospitals have more trauma admissions. To avoid this problem, Pacmed has developed an "Out of Domain Detection Model" that flags patients for whom no prediction can be made. This works both during the selection of training data and live in production. Users of Pacmed's solution for the ICU, Pacmed Critical, receive a notification if a patient's outcome cannot be predicted based on the training data.
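The details of Pacmed's Out of Domain Detection Model are not covered here, but the underlying idea can be sketched with a generic outlier detector trained on the training population: any patient the detector marks as unlike the training data gets no prediction.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical training population, represented by 8 made-up patient features.
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# A new patient whose characteristics fall far outside the training distribution.
new_patient = np.array([[6.0, -5.5, 4.8, 0.1, -3.9, 5.2, -4.4, 6.1]])

detector = IsolationForest(random_state=0).fit(X_train)
if detector.predict(new_patient)[0] == -1:      # -1 means "outlier"
    print("Out of domain: no prediction shown, clinician is notified instead.")
```

The key design choice is to withhold a prediction and tell the clinician why, rather than silently returning a number the model cannot support.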

Bias 6: Omitted-Variable Bias

Bias due to the omission of predictors occurs when a model excludes essential factors that influence patient outcomes, leading to inaccurate predictions and explanations. In the ICU, this is particularly problematic due to the complexity of care. To mitigate this bias, it is crucial to include all relevant variables during model development and to collaborate with clinicians to ensure the model accurately reflects the intricate nature of ICU care. The algorithms within Pacmed Critical use over 100 carefully designed, medically grounded predictors, and we apply the insights gained from each hospital implementation to improve outcomes in other hospitals.
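A small synthetic example makes the mechanism visible: when a variable that drives both treatment and outcome (here a made-up "severity" score) is left out, the model attributes its effect to the remaining predictor and the estimated effect is distorted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic example: "severity" drives both "treatment intensity" and outcome.
severity = rng.normal(size=n)
treatment = 0.8 * severity + rng.normal(scale=0.5, size=n)
outcome = 2.0 * severity + 0.5 * treatment + rng.normal(scale=0.5, size=n)

full = LinearRegression().fit(np.column_stack([treatment, severity]), outcome)
reduced = LinearRegression().fit(treatment.reshape(-1, 1), outcome)

print("Effect of treatment with severity included:", round(full.coef_[0], 2))    # ~0.5
print("Effect of treatment with severity omitted: ", round(reduced.coef_[0], 2))  # strongly inflated
```

The same mechanism applies to more complex clinical models: leaving out a relevant variable does not just reduce accuracy, it distorts how the remaining predictors are interpreted.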

Bias 7: Interpretability

The issue of interpretability arises with "black box" models, where the user cannot see how a model reaches its conclusions. This can lead to biases going unnoticed for a long time. However, Pacmed's models are not black boxes; they offer transparency. By displaying the variables used to reach a conclusion (risk scores), a user can trace the reasoning behind it. Pacmed also examines the Shapley values of its models, a concept from game theory in which the characteristics of each patient are varied one by one to see what impact they have on the prediction. This helps identify and understand the key predictors.
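As an illustration of the concept (using the open-source shap library on a toy model, which is an assumption rather than a description of Pacmed's implementation), Shapley values break one patient's prediction down into per-variable contributions:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical training data with named predictors.
feature_names = ["age", "lactate", "creatinine", "heart_rate"]
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # explanation for one patient

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: contribution {value:+.2f} to this patient's risk score")
```

Each contribution shows how much a single variable pushed this patient's prediction up or down, which is what allows a clinician to check whether the reasoning is medically plausible.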

For the intensivist and end user, we've created a user interface (UI) that aligns with the way of thinking used by medical specialists by clustering predictions on organ systems: Airway, Breathing, Circulation, Disability, Exposure (ABCDE). The UI also offers a view of the individual variables that together form the prediction, so users can see how the model reached its conclusion.

Managing Risks through Expertise and Thorough Practices

The risks of AI in healthcare are manageable. Pacmed is committed to addressing these risks and developing safe, effective AI solutions that improve the quality of care. In the upcoming parts of this blog series, we will delve into other important topics, such as information protection and liability, to provide a complete picture of the challenges and opportunities that AI in healthcare presents.

Want to know more about the risks and solutions for AI? Or do you have a comment? Shoot us a message through our Contact Page and we will be in touch!