Context: The Indian Council of Medical Research (ICMR) has released the country’s first Ethical Guidelines for Application of Artificial Intelligence (AI) in Biomedical Research and Healthcare.
Artificial Intelligence (AI):
- Artificial intelligence (AI) is defined as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.
- AI uses complex computer algorithms to emulate human cognition, albeit with far-reaching capabilities for analysing large datasets.
Applications of AI in Healthcare:
- The induction of AI into healthcare has the potential to address significant challenges faced in the field, such as diagnosis and screening, therapeutics, preventive treatments, clinical decision making, public health surveillance, complex data analysis, and the prediction of disease outcomes.
- For example, AI can read Computed Tomography (CT) scans automatically, performing the task as well as radiologists.
- Tuberculosis screening can be done by AI using chest X-rays (a minimal sketch of such a screening step follows this list).
- As a result, AI for health has been recognised as a core area of focus by researchers and governments alike.
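To make the chest X-ray example above concrete, here is a minimal, hypothetical inference sketch using a generic ONNX image classifier. The model file `tb_screen.onnx`, its 224x224 grayscale input, and its single probability output are illustrative assumptions, not a reference to any specific deployed system or to the ICMR guidelines themselves.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Hypothetical sketch: score a chest X-ray with a pre-trained TB
# screening model. The model file, its 224x224 grayscale input, and
# its single sigmoid output are assumptions made for illustration.

def preprocess(path: str) -> np.ndarray:
    """Load a chest X-ray, resize it, and scale pixels to [0, 1]."""
    img = Image.open(path).convert("L").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    return x[None, None, :, :]  # shape (1, 1, 224, 224): batch, channel

session = ort.InferenceSession("tb_screen.onnx")   # assumed model file
input_name = session.get_inputs()[0].name
(prob,) = session.run(None, {input_name: preprocess("cxr_0001.png")})
print(f"TB probability: {float(prob.squeeze()):.2f}")
```

In keeping with the guidelines’ human-oversight (autonomy) principle, a tool like this would only triage scans; every flagged X-ray would still be reviewed by a human reader.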
Key features of the guidelines:
Effective and safe development, deployment, and adoption of AI-based technologies:
The guidelines provide an ethical framework that can assist in the development, deployment, and adoption of AI-based solutions in healthcare and biomedical research.
Accountability:
As AI technologies are further developed and applied in clinical decision making, the guidelines call for processes that fix accountability in case of errors, so that patients are safeguarded and protected.
Patient-centric ethical principles:
The guidelines outline 10 key patient-centric ethical principles for AI application in the health sector: accountability and liability, autonomy, data privacy, collaboration, safety and risk minimisation, accessibility, equity and inclusiveness, optimisation of data quality, non-discrimination and fairness, validity, and trustworthiness.
Human oversight:
The autonomy principle ensures human oversight of the functioning and performance of the AI system.
Consent and informed decision making:
The guidelines require that the patient’s consent be obtained, and that the patient be informed of the physical, psychological, and social risks involved, before any process is initiated.
Safety and risk minimisation:
The safety and risk minimisation principle aims at preventing “unintended or deliberate misuse”, keeping anonymised data delinked from global technology to avoid cyber-attacks, and ensuring a favourable benefit-risk assessment by an ethics committee, among a host of other safeguards.
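To illustrate one such safeguard, below is a minimal, hypothetical sketch of pseudonymising patient records before they leave the care setting. The field names, the salted one-way hash, and the age coarsening are illustrative assumptions, not prescriptions from the guidelines.

```python
import hashlib
import secrets

# One-time secret salt, held only by the data custodian, so the
# pseudonym cannot be reversed or re-linked by outside parties.
SALT = secrets.token_hex(16)

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with a salted one-way hash and
    coarsen quasi-identifiers before the record is shared."""
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:12]
    return {
        "pseudo_id": token,                    # stable but non-reversible
        "age_band": record["age"] // 10 * 10,  # e.g. 47 -> the 40s band
        "diagnosis": record["diagnosis"],
        # name, address, and phone number are deliberately dropped
    }

record = {"patient_id": "MRN-004521", "age": 47, "name": "…",
          "address": "…", "diagnosis": "pulmonary TB"}
print(pseudonymise(record))
```

Keeping such pseudonymised datasets delinked from externally connected systems is what limits the damage a successful cyber-attack can do.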
Accessibility, equity and inclusiveness:
The guidelines acknowledge that the deployment of AI technology assumes the widespread availability of appropriate infrastructure, and the principle therefore aims to bridge the digital divide.
Relevant stakeholder involvement:
The guidelines outline briefs for relevant stakeholders, including researchers, clinicians/hospitals/public health systems, patients, ethics committees, government regulators, and industry.
Standard practices:
The guidelines call for each step of the development process to follow standard practices to make the AI-based solutions technically sound, ethically justified, and applicable to a large number of individuals with equity and fairness.
Ethical review process:
The ethical review process for AI in health falls within the domain of the ethics committee, which assesses several factors including data source, quality, safety, anonymisation, data piracy, data-selection biases, participant protection, payment of compensation, and the possibility of stigmatisation, among others.
Concerns with Artificial Intelligence in healthcare:
- Cultural Acceptance: Patients often seek assurance from a doctor who is physically present, which creates an aversion to diagnosis by technology. The elderly are found to be more averse to adopting new technology.
- Data Safety/Privacy: AI systems can challenge privacy through the real-time collection and use of a multitude of data points that may or may not be disclosed to the individual through a notice or consent process. Hackers can also exploit AI solutions to harvest private and sensitive information such as Electronic Health Records.
- Liability: In case of an error in diagnosis, a malfunction of the technology, or the use of inaccurate or inappropriate data, the question arises of whether liability falls upon the doctor or the software developer.
- Malicious use of AI: While AI has the potential to be used for good, it could also be used for malicious purposes; for example, there are fears that AI could be used for covert surveillance or screening.
- Effects on patients: Concerns have been raised about a loss of human contact and increased social isolation if AI technologies are used to replace staff or family time with patients.
Way Forward:
India has a host of frameworks that combine technological advances with healthcare, such as the Digital Health Authority for leveraging digital health technologies under the National Health Policy (2017), the Digital Information Security in Healthcare Act (DISHA) 2018, and the Medical Device Rules, 2017.
Governance of AI tools is still at a preliminary stage, even in developed countries. Since AI cannot be held accountable for the decisions it makes, an ethically sound policy framework is essential to guide the development of AI technologies and their application in healthcare. Further, as AI technologies are further developed and applied in clinical decision making, it is important to have processes that fix accountability in case of errors, so that patients are safeguarded and protected.