ICMR Releases Ethical Guidelines for AI in Healthcare
25 Mar 2023 • The Indian Council of Medical Research (ICMR) has released the country’s first Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, aimed at creating “an ethics framework which can assist in the development, deployment, and adoption of AI-based solutions” in those fields. The guidelines are intended for all stakeholders involved in AI for biomedical research and healthcare, including creators, developers, researchers, clinicians, ethics committees, institutions, sponsors, and funding organisations.
The 10 Ethical Principles address issues specific to AI for health. They are as follows:
- Autonomy: AI technology should not interfere with patient autonomy under any circumstances. The ‘Human in the Loop’ model of AI technologies allows humans to oversee the functioning and performance of the system.
- Safety and Risk Minimization: Protection of the dignity, rights, safety, and well-being of patients/participants must have the highest priority.
- Trustworthiness: To use AI effectively, clinicians and healthcare providers need a simple, systematic, and trustworthy way to test the validity and reliability of AI technologies.
- Data Privacy: AI-based technology should ensure privacy and personal data protection at all stages of development and deployment.
- Accountability and Liability: AI technologies intended for deployment in the health sector must be open to scrutiny by the concerned authorities at any time and must undergo regular internal and external audits to ensure optimal functioning.
- Optimization of Data Quality: A major concern is pre-existing prejudice in AI models that make decisions about a specific group of people, attributed primarily to the humans involved in preparing the training data, which can cloud the model’s judgement. This ‘data bias’ needs to be minimised.
- Accessibility, Equity and Inclusiveness: AI developers and the concerned authorities must ensure fairness in the distribution of AI technology. Special consideration must be given to groups that are underprivileged or lack the infrastructure to access such technology.
- Collaboration: Given the rapidly changing landscape of AI technology, it is imperative that AI experts collaborate during research and development so that the most appropriate techniques and algorithms are used to address each healthcare problem.
- Non-Discrimination and Fairness: The dataset used to train the algorithm must be accurate and representative of the population in which it is used.
- Validity: AI technology in healthcare must undergo rigorous clinical and field validation before application on patients/participants.
Source: ICMR