AI is widely hoped to rescue health systems left overstretched by the COVID-19 pandemic, but AI-driven decision-making raises questions of legal accountability and moral responsibility, drawing attention to potential changes in the doctor-patient relationship. One study emphasized the need for deliberate steps in designing trustworthy AI processes and highlighted that relational and epistemic trust are crucial: clinicians' positive experiences with AI tools directly correlate with patients' future trust in them. Another study found that AI models built to identify people at high risk of liver disease from blood tests were twice as likely to miss the disease in women as in men. Experts attribute this to the way data-driven AI models make inferences: by finding patterns in their training data. Disparities, such as those along racial and ethnic lines, have long existed in healthcare; without mitigation approaches, models trained on such biased data channel those embedded inequities into the decisions they make.
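The kind of disparity described in the liver-disease study, a model missing positives in one group more often than another, is typically quantified by comparing false-negative rates per subgroup. The sketch below illustrates that calculation; all data and numbers in it are hypothetical, not from the studies cited above:

```python
# Hypothetical illustration of an audit for the disparity described above:
# compare how often a model misses true positives in each subgroup.
# All labels and predictions here are made up for demonstration.

def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives (label 1) that the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    misses = sum(1 for _, p in positives if p == 0)
    return misses / len(positives)

# Hypothetical true labels and model predictions, split by sex
y_true_women = [1, 1, 1, 1, 0, 0]
y_pred_women = [0, 0, 1, 1, 0, 0]   # model misses 2 of the 4 positives
y_true_men   = [1, 1, 1, 1, 0, 0]
y_pred_men   = [1, 0, 1, 1, 0, 0]   # model misses 1 of the 4 positives

fnr_women = false_negative_rate(y_true_women, y_pred_women)  # 0.5
fnr_men   = false_negative_rate(y_true_men, y_pred_men)      # 0.25

# A ratio of 2.0 would correspond to "twice as likely to miss the disease"
print(fnr_women, fnr_men, fnr_women / fnr_men)
```

In practice such per-group audits are run on held-out clinical data before deployment, and a large gap between groups signals that bias-mitigation steps are needed.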
AI in healthcare raises ethical issues, prompts search for mitigating tools
Note: This summary was produced from the news story under "Read the article" below. Please direct any questions to that source.
December 21, 2022