Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1007/978-3-031-45673-2_46
- Scopus: eid_2-s2.0-85176017806
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain
Title | Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain |
---|---|
Authors | Liu, Zhengliang; Zhong, Aoxiao; Li, Yiwei; Yang, Longtao; Ju, Chao; Wu, Zihao; Ma, Chong; Shu, Peng; Chen, Cheng; Kim, Sekeun; Dai, Haixing; Zhao, Lin; Zhu, Dajiang; Liu, Jun; Liu, Wei; Shen, Dinggang; Li, Quanzheng; Liu, Tianming; Li, Xiang |
Keywords | Large Language Models; Natural Language Processing; Radiology |
Issue Date | 2024 |
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, v. 14348 LNCS, p. 464-473 |
Abstract | In this preliminary work, we present a domain fine-tuned large language model (LLM) for radiology. This model, created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, demonstrates promising performance when compared with broader language models such as StableLM, Dolly, and LLaMA. It exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The hypothesis of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and specific domain knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned LLM for radiology can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/. |
Persistent Identifier | http://hdl.handle.net/10722/349987 |
ISSN | 0302-9743 |
2023 SCImago Journal Rankings | 0.606 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Zhengliang | - |
dc.contributor.author | Zhong, Aoxiao | - |
dc.contributor.author | Li, Yiwei | - |
dc.contributor.author | Yang, Longtao | - |
dc.contributor.author | Ju, Chao | - |
dc.contributor.author | Wu, Zihao | - |
dc.contributor.author | Ma, Chong | - |
dc.contributor.author | Shu, Peng | - |
dc.contributor.author | Chen, Cheng | - |
dc.contributor.author | Kim, Sekeun | - |
dc.contributor.author | Dai, Haixing | - |
dc.contributor.author | Zhao, Lin | - |
dc.contributor.author | Zhu, Dajiang | - |
dc.contributor.author | Liu, Jun | - |
dc.contributor.author | Liu, Wei | - |
dc.contributor.author | Shen, Dinggang | - |
dc.contributor.author | Li, Quanzheng | - |
dc.contributor.author | Liu, Tianming | - |
dc.contributor.author | Li, Xiang | - |
dc.date.accessioned | 2024-10-17T07:02:19Z | - |
dc.date.available | 2024-10-17T07:02:19Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, v. 14348 LNCS, p. 464-473 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349987 | - |
dc.description.abstract | In this preliminary work, we present a domain fine-tuned large language model (LLM) for radiology. This model, created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, demonstrates promising performance when compared with broader language models such as StableLM, Dolly, and LLaMA. It exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step towards the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The hypothesis of creating customized, large-scale language models catering to the distinct requirements of various medical specialties presents a thought-provoking direction. The blending of conversational prowess and specific domain knowledge in these models kindles hope for future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned LLM for radiology can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/. | - |
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | Large Language Models | - |
dc.subject | Natural Language Processing | - |
dc.subject | Radiology | - |
dc.title | Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-031-45673-2_46 | - |
dc.identifier.scopus | eid_2-s2.0-85176017806 | - |
dc.identifier.volume | 14348 LNCS | - |
dc.identifier.spage | 464 | - |
dc.identifier.epage | 473 | - |
dc.identifier.eissn | 1611-3349 | - |