Book Chapter: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices

Title: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices
Authors: Ho, WLC
Keywords: artificial intelligence (AI); governance; liminality; medical devices; risk
Issue Date: 2021
Publisher: Cambridge University Press
Citation: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices. In Laurie, G. ... et al. (Eds.), The Cambridge Handbook of Health Research Regulation, pp. 277-286. Cambridge, UK; New York, NY: Cambridge University Press, 2021.
Abstract: Artificial intelligence and machine learning (AI/ML) medical devices are able to optimise their performance by learning from past experience. In healthcare, such devices are already applied within controlled settings, for instance in image analysis systems that detect conditions like diabetic retinopathy. In examining the regulatory governance of AI/ML medical devices in the United States, it is argued that the development and application of these devices as a technical and social concern, whether in research or in clinical care, must proceed in tandem with their identities in regulation. In light of emerging regulatory principles and approaches put forward by the International Medical Device Regulators Forum, and endorsed by the US Food and Drug Administration, conventional thinking about clinical research and clinical practice as distinct and separate domains needs to be reconsidered. The high connectivity of AI/ML medical devices that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may currently be limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of software as medical devices in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not fewer.
Persistent Identifier: http://hdl.handle.net/10722/311717
ISBN: 9781108475976
Series/Report no.: Cambridge Law Handbooks

 

DC Field: Value
dc.contributor.author: Ho, WLC
dc.date.accessioned: 2022-04-01T09:12:17Z
dc.date.available: 2022-04-01T09:12:17Z
dc.date.issued: 2021
dc.identifier.citation: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices. In Laurie, G. ... et al. (Eds.), The Cambridge Handbook of Health Research Regulation, pp. 277-286. Cambridge, UK; New York, NY: Cambridge University Press, 2021
dc.identifier.isbn: 9781108475976
dc.identifier.uri: http://hdl.handle.net/10722/311717
dc.description.abstract: Artificial intelligence and machine learning (AI/ML) medical devices are able to optimise their performance by learning from past experience. In healthcare, such devices are already applied within controlled settings, for instance in image analysis systems that detect conditions like diabetic retinopathy. In examining the regulatory governance of AI/ML medical devices in the United States, it is argued that the development and application of these devices as a technical and social concern, whether in research or in clinical care, must proceed in tandem with their identities in regulation. In light of emerging regulatory principles and approaches put forward by the International Medical Device Regulators Forum, and endorsed by the US Food and Drug Administration, conventional thinking about clinical research and clinical practice as distinct and separate domains needs to be reconsidered. The high connectivity of AI/ML medical devices that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may currently be limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of software as medical devices in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not fewer.
dc.language: eng
dc.publisher: Cambridge University Press
dc.relation.ispartof: The Cambridge Handbook of Health Research Regulation
dc.relation.ispartofseries: Cambridge Law Handbooks
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: artificial intelligence (AI)
dc.subject: governance
dc.subject: liminality
dc.subject: medical devices
dc.subject: risk
dc.title: When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices
dc.type: Book_Chapter
dc.identifier.email: Ho, WLC: cwlho@hku.hk
dc.identifier.authority: Ho, WLC=rp02632
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1017/9781108620024.035
dc.identifier.hkuros: 332283
dc.identifier.spage: 277
dc.identifier.epage: 286
dc.publisher.place: Cambridge, UK; New York, NY
