Avoiding Discrimination by Artificial Intelligence in Medicine
Artificial Intelligence (AI) is one of the key technologies of our time and is revolutionizing medicine as well. The technology is based on highly complex algorithms that achieve impressive prediction accuracy. It is therefore increasingly used in clinical practice in hospitals all over the world, including in Hamburg. However, a large number of recent studies have shown AI algorithms to be biased against protected minorities. In our project, we will describe a method to avoid discrimination by Artificial Intelligence in medicine and evaluate it ethically.
Algorithms with blind spots
Groups of people with protected characteristics (in particular age, sex, religion, ethnic origin, sexual orientation, and disability) are unjustifiably discriminated against by algorithms. One cause, among others, is the underrepresentation of people with protected characteristics in the data set. For example, dermatological data sets mostly consist of skin lesions of fair-skinned patients, so an algorithm does not learn to recognize lesions on darker skin tones. Furthermore, data sets used to train algorithms may themselves contain biases against people with protected characteristics. Algorithms adopt these biases and in part amplify them. This type of discrimination already occurs with traditional algorithms.
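One way to make such blind spots visible is to evaluate a model's accuracy separately per subgroup instead of only on the total population. The following is a minimal sketch in Python; the labels, predictions, and group names are made up for illustration and are not data from the project:

```python
# Minimal sketch: per-group accuracy to reveal a subgroup "blind spot".
# All data below is invented for illustration.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def group_accuracies(y_true, y_pred, groups):
    """Accuracy per protected group; a large gap hints at bias."""
    result = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        result[g] = accuracy([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    return result

# Hypothetical case: the model looks fine overall but fails on the
# underrepresented group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]

print(accuracy(y_true, y_pred))                  # 0.75 overall
print(group_accuracies(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.333...}
```

The overall accuracy of 0.75 hides the fact that the model is perfect on group "A" and mostly wrong on group "B"; this is exactly the effect of underrepresentation described above.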
Deep Learning and Black Boxes
These days, deep learning above all – AI based on large, highly complex neural networks – is increasingly entering statistical practice. However, these neural networks are predisposed to algorithmic discrimination. On the one hand, their size and complexity often lead to poorer generalization from a total population to subgroups. On the other hand, because of their multitude of parameters, the decisions of these algorithms are no longer transparent to humans; they are so-called black-box algorithms.
Training program for AI
In our research project, we will train large neural networks to predict critical events and survival in American intensive care units (ICUs) and evaluate them both ethically and in terms of their prediction accuracy and generalizability with regard to minorities with protected characteristics. As options, protected characteristics will be explicitly provided to the AI during training, the characteristics and their correlates will be removed, separate AI models will be created for individual subgroups, and additional objectives beyond optimal prediction accuracy for the total population will be integrated into the training process of the AI.
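The last option mentioned above – adding an objective beyond pure prediction accuracy – can be sketched as a combined loss: a standard prediction loss plus a penalty for unequal treatment of groups. The function names, the choice of penalty (squared gap in mean predicted risk between two groups), and all numbers below are illustrative assumptions, not the project's actual method:

```python
import math

# Sketch of a combined training objective: prediction loss plus a
# fairness penalty. All names, the penalty choice, and the data are
# illustrative assumptions.

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def bce(y_true, p_pred, eps=1e-7):
    """Binary cross-entropy, the usual prediction loss."""
    return mean(-(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
                for t, p in zip(y_true, p_pred))

def fairness_penalty(p_pred, groups):
    """Squared gap in mean predicted risk between groups 'A' and 'B'."""
    p_a = mean(p for p, g in zip(p_pred, groups) if g == "A")
    p_b = mean(p for p, g in zip(p_pred, groups) if g == "B")
    return (p_a - p_b) ** 2

def combined_loss(y_true, p_pred, groups, lam=1.0):
    """Objective trading prediction accuracy against group parity."""
    return bce(y_true, p_pred) + lam * fairness_penalty(p_pred, groups)

# Hypothetical predictions that assign systematically higher risk to
# group "B"; the penalty makes the combined loss larger than the
# prediction loss alone.
y_true = [1, 0, 1, 0]
p_pred = [0.9, 0.2, 0.8, 0.6]
groups = ["A", "A", "B", "B"]

print(fairness_penalty(p_pred, groups))       # 0.0225
print(combined_loss(y_true, p_pred, groups))  # larger than bce alone
```

During training, minimizing such a combined loss forces the model to balance accuracy on the total population against equal treatment of the subgroups; the weight `lam` controls that trade-off.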
Student research group
- Ansgar Beuse
- Jan Bremer
- Marius Knorr
Mentor
- Prof. Dr. Olaf von dem Knesebeck