Conference Paper: Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain

Title: Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain
Authors: Liu, Zhengliang; Zhong, Aoxiao; Li, Yiwei; Yang, Longtao; Ju, Chao; Wu, Zihao; Ma, Chong; Shu, Peng; Chen, Cheng; Kim, Sekeun; Dai, Haixing; Zhao, Lin; Zhu, Dajiang; Liu, Jun; Liu, Wei; Shen, Dinggang; Li, Quanzheng; Liu, Tianming; Li, Xiang
Keywords: Large Language Models; Natural Language Processing; Radiology
Issue Date: 2024
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, v. 14348 LNCS, p. 464-473
Abstract: In this preliminary work, we present an experimental large language model fine-tuned for the radiology domain. This model, created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, demonstrates promising performance when compared with broader language models such as StableLM, Dolly, and LLaMA. It exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The hypothesis of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and specific domain knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned radiology LLM can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/.
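The abstract mentions instruction tuning on radiological data. As a hedged illustration only (this is not the authors' actual pipeline, and all field names below are hypothetical), radiology reports are commonly converted into instruction/response pairs before fine-tuning, for example by pairing a report's findings with its impression:

```python
# Sketch: converting radiology reports into instruction-tuning pairs.
# The findings -> impression schema is a common radiology NLP setup,
# not taken from this paper; all field names are hypothetical.

def build_instruction_pairs(reports):
    """Turn raw report dicts into prompt/completion training examples."""
    pairs = []
    for report in reports:
        prompt = (
            "Write the impression section for the following radiology findings.\n"
            f"Findings: {report['findings']}"
        )
        pairs.append({"prompt": prompt, "completion": report["impression"]})
    return pairs

reports = [
    {
        "findings": "Heart size is normal. Lungs are clear.",
        "impression": "No acute cardiopulmonary abnormality.",
    }
]
pairs = build_instruction_pairs(reports)
print(pairs[0]["completion"])  # -> No acute cardiopulmonary abnormality.
```

Pairs in this shape can then be fed to a standard supervised fine-tuning loop; the exact prompt wording and dataset fields used by the paper are not specified in this record.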
Persistent Identifier: http://hdl.handle.net/10722/349987
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606

 

DC Field: Value
dc.contributor.author: Liu, Zhengliang
dc.contributor.author: Zhong, Aoxiao
dc.contributor.author: Li, Yiwei
dc.contributor.author: Yang, Longtao
dc.contributor.author: Ju, Chao
dc.contributor.author: Wu, Zihao
dc.contributor.author: Ma, Chong
dc.contributor.author: Shu, Peng
dc.contributor.author: Chen, Cheng
dc.contributor.author: Kim, Sekeun
dc.contributor.author: Dai, Haixing
dc.contributor.author: Zhao, Lin
dc.contributor.author: Zhu, Dajiang
dc.contributor.author: Liu, Jun
dc.contributor.author: Liu, Wei
dc.contributor.author: Shen, Dinggang
dc.contributor.author: Li, Quanzheng
dc.contributor.author: Liu, Tianming
dc.contributor.author: Li, Xiang
dc.date.accessioned: 2024-10-17T07:02:19Z
dc.date.available: 2024-10-17T07:02:19Z
dc.date.issued: 2024
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, v. 14348 LNCS, p. 464-473
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/349987
dc.description.abstract: In this preliminary work, we present an experimental large language model fine-tuned for the radiology domain. This model, created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, demonstrates promising performance when compared with broader language models such as StableLM, Dolly, and LLaMA. It exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The hypothesis of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and specific domain knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned radiology LLM can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Large Language Models
dc.subject: Natural Language Processing
dc.subject: Radiology
dc.title: Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-031-45673-2_46
dc.identifier.scopus: eid_2-s2.0-85176017806
dc.identifier.volume: 14348 LNCS
dc.identifier.spage: 464
dc.identifier.epage: 473
dc.identifier.eissn: 1611-3349
