Article: LLM Fine-Tuning: Concepts, Opportunities, and Challenges

Title: LLM Fine-Tuning: Concepts, Opportunities, and Challenges
Authors: Wu, Xiao Kun; Chen, Min; Li, Wanyi; Wang, Rui; Lu, Limeng; Liu, Jia; Hwang, Kai; Hao, Yixue; Pan, Yanru; Meng, Qingguo; Huang, Kaibin; Hu, Long; Guizani, Mohsen; Chao, Naipeng; Fortino, Giancarlo; Lin, Fei; Tian, Yonglin; Niyato, Dusit; Wang, Fei Yue
Keywords: comprehension; fine-tuning; hermeneutics; human–AI co-evolution; large language models (LLM); tutorial fine-tuning
Issue Date: 2-Apr-2025
Citation: Big Data and Cognitive Computing, 2025, v. 9, n. 4
Abstract: As a foundation of large language models, fine-tuning drives rapid progress, broad applicability, and profound impacts on human–AI collaboration, surpassing earlier technological advancements. This paper provides a comprehensive overview of large language model (LLM) fine-tuning by integrating hermeneutic theories of human comprehension, with a focus on the essential cognitive conditions that underpin this process. Drawing on Gadamer’s concepts of Vorverständnis, Distanciation, and the Hermeneutic Circle, the paper explores how LLM fine-tuning evolves from initial learning to deeper comprehension, ultimately advancing toward self-awareness. It examines the core principles, development, and applications of fine-tuning techniques, emphasizing their growing significance across diverse fields and industries. The paper introduces a new term, “Tutorial Fine-Tuning (TFT)”, which denotes a process of intensive tuition given by a “tutor” to a small number of “students”, to define the latest round of LLM fine-tuning advancements. By addressing key challenges associated with fine-tuning, including ensuring adaptability, precision, credibility, and reliability, this paper explores potential future directions for the co-evolution of humans and AI. By bridging theoretical perspectives with practical implications, this work provides valuable insights into the ongoing development of LLMs, emphasizing their potential to achieve higher levels of cognitive and operational intelligence.
Persistent Identifier: http://hdl.handle.net/10722/366998
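
The abstract frames “Tutorial Fine-Tuning (TFT)” as intensive tuition given by a “tutor” to a small number of “students”. For readers unfamiliar with the mechanics, the sketch below shows a generic supervised fine-tuning loop over a small, tutor-curated instruction set. It is illustrative only: the base model (gpt2), the toy “lessons”, and all hyperparameters are assumptions for demonstration, not the TFT procedure described in the paper.

```python
# Minimal sketch of supervised fine-tuning on a tiny "tutor-curated" set.
# Assumptions only: base model, data, and hyperparameters are placeholders,
# not the paper's Tutorial Fine-Tuning (TFT) procedure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical small base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A handful of high-quality examples standing in for the tutor's "lessons".
lessons = [
    "Q: What is fine-tuning? A: Further training a pretrained model on task-specific data.",
    "Q: What does a tutor provide? A: Curated, targeted supervision for a small set of students.",
]
batch = tokenizer(lessons, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes over the small curated set
    outputs = model(**batch, labels=labels)  # causal LM loss from transformers
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss {outputs.loss.item():.3f}")
```

In practice, the same loop would be applied to a carefully curated, domain-specific dataset, typically with parameter-efficient methods (e.g., adapters or low-rank updates) when compute is limited.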

 

DC Field: Value
dc.contributor.author: Wu, Xiao Kun
dc.contributor.author: Chen, Min
dc.contributor.author: Li, Wanyi
dc.contributor.author: Wang, Rui
dc.contributor.author: Lu, Limeng
dc.contributor.author: Liu, Jia
dc.contributor.author: Hwang, Kai
dc.contributor.author: Hao, Yixue
dc.contributor.author: Pan, Yanru
dc.contributor.author: Meng, Qingguo
dc.contributor.author: Huang, Kaibin
dc.contributor.author: Hu, Long
dc.contributor.author: Guizani, Mohsen
dc.contributor.author: Chao, Naipeng
dc.contributor.author: Fortino, Giancarlo
dc.contributor.author: Lin, Fei
dc.contributor.author: Tian, Yonglin
dc.contributor.author: Niyato, Dusit
dc.contributor.author: Wang, Fei Yue
dc.date.accessioned: 2025-11-29T00:35:48Z
dc.date.available: 2025-11-29T00:35:48Z
dc.date.issued: 2025-04-02
dc.identifier.citation: Big Data and Cognitive Computing, 2025, v. 9, n. 4
dc.identifier.uri: http://hdl.handle.net/10722/366998
dc.description.abstract: As a foundation of large language models, fine-tuning drives rapid progress, broad applicability, and profound impacts on human–AI collaboration, surpassing earlier technological advancements. This paper provides a comprehensive overview of large language model (LLM) fine-tuning by integrating hermeneutic theories of human comprehension, with a focus on the essential cognitive conditions that underpin this process. Drawing on Gadamer’s concepts of Vorverständnis, Distanciation, and the Hermeneutic Circle, the paper explores how LLM fine-tuning evolves from initial learning to deeper comprehension, ultimately advancing toward self-awareness. It examines the core principles, development, and applications of fine-tuning techniques, emphasizing their growing significance across diverse fields and industries. The paper introduces a new term, “Tutorial Fine-Tuning (TFT)”, which denotes a process of intensive tuition given by a “tutor” to a small number of “students”, to define the latest round of LLM fine-tuning advancements. By addressing key challenges associated with fine-tuning, including ensuring adaptability, precision, credibility, and reliability, this paper explores potential future directions for the co-evolution of humans and AI. By bridging theoretical perspectives with practical implications, this work provides valuable insights into the ongoing development of LLMs, emphasizing their potential to achieve higher levels of cognitive and operational intelligence.
dc.language: eng
dc.relation.ispartof: Big Data and Cognitive Computing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: comprehension
dc.subject: fine-tuning
dc.subject: hermeneutics
dc.subject: human–AI co-evolution
dc.subject: large language models (LLM)
dc.subject: tutorial fine-tuning
dc.title: LLM Fine-Tuning: Concepts, Opportunities, and Challenges
dc.type: Article
dc.identifier.doi: 10.3390/bdcc9040087
dc.identifier.scopus: eid_2-s2.0-105003740525
dc.identifier.volume: 9
dc.identifier.issue: 4
dc.identifier.eissn: 2504-2289
dc.identifier.issnl: 2504-2289
