Article: A brief review of hypernetworks in deep learning

Title: A brief review of hypernetworks in deep learning
Authors: Chauhan, Vinod Kumar; Zhou, Jiandong; Lu, Ping; Molaei, Soheila; Clifton, David A
Keywords: Deep learning; Hypernetworks; Neural networks; Parameter generation; Weight generation
Issue Date: 13-Aug-2024
Publisher: Springer
Citation: Artificial Intelligence Review, 2024, v. 57, n. 9
Abstract: Hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. They have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. Hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. Despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. To fill this gap, we review the progress in hypernets. We present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs, variability of outputs, and the architecture of hypernets. We also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. Finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. We believe that hypernetworks have the potential to revolutionize deep learning: they offer a new way to design and train neural networks and can improve the performance of deep learning models on a variety of tasks. Through this review, we aim to inspire further advancements in deep learning through hypernetworks.
Persistent Identifier: http://hdl.handle.net/10722/362752
ISSN: 0269-2821
2023 Impact Factor: 10.7
2023 SCImago Journal Rankings: 3.260
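
To make the abstract's core idea concrete, here is a minimal, self-contained PyTorch sketch of a hypernetwork, assuming a learned embedding as the hypernet input and a single linear layer as the target network. This is an illustrative sketch, not code from the paper; all names (HyperNet, embed, the layer sizes) are assumptions chosen for the example.

    # Minimal hypernetwork sketch (illustrative, not the authors' implementation):
    # a small MLP ("hypernet") maps a learned embedding to the weight and bias
    # of a target linear layer; only the hypernet and the embedding are trained.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperNet(nn.Module):
        """Generates W and b for one target linear layer from an embedding."""
        def __init__(self, embed_dim, in_features, out_features, hidden=64):
            super().__init__()
            self.in_features = in_features
            self.out_features = out_features
            n_params = out_features * in_features + out_features  # W plus b
            self.net = nn.Sequential(
                nn.Linear(embed_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_params),
            )

        def forward(self, z):
            flat = self.net(z)
            n_w = self.out_features * self.in_features
            w = flat[:n_w].view(self.out_features, self.in_features)
            b = flat[n_w:]
            return w, b

    # Learned embedding that conditions the hypernet (e.g. a task or layer code).
    embed = nn.Parameter(torch.randn(8))
    hyper = HyperNet(embed_dim=8, in_features=4, out_features=3)
    opt = torch.optim.Adam(list(hyper.parameters()) + [embed], lr=1e-3)

    x = torch.randn(16, 4)            # toy inputs
    y = torch.randint(0, 3, (16,))    # toy labels

    for step in range(100):
        w, b = hyper(embed)           # hypernet emits the target layer's weights
        logits = F.linear(x, w, b)    # target network runs with generated weights
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()               # gradients flow through W into the hypernet
        opt.step()

Because the target layer's weights are outputs of the hypernet rather than free parameters, gradients flow back through the generated weights into the hypernet and the embedding; this indirection is what enables the weight sharing, conditioning, and model compression the abstract describes.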


DC Field | Value | Language
dc.contributor.author | Chauhan, Vinod Kumar | -
dc.contributor.author | Zhou, Jiandong | -
dc.contributor.author | Lu, Ping | -
dc.contributor.author | Molaei, Soheila | -
dc.contributor.author | Clifton, David A | -
dc.date.accessioned | 2025-09-30T00:35:21Z | -
dc.date.available | 2025-09-30T00:35:21Z | -
dc.date.issued | 2024-08-13 | -
dc.identifier.citation | Artificial Intelligence Review, 2024, v. 57, n. 9 | -
dc.identifier.issn | 0269-2821 | -
dc.identifier.uri | http://hdl.handle.net/10722/362752 | -
dc.description.abstract | Hypernetworks, or hypernets for short, are neural networks that generate weights for another neural network, known as the target network. They have emerged as a powerful deep learning technique that allows for greater flexibility, adaptability, dynamism, faster training, information sharing, and model compression. Hypernets have shown promising results in a variety of deep learning problems, including continual learning, causal inference, transfer learning, weight pruning, uncertainty quantification, zero-shot learning, natural language processing, and reinforcement learning. Despite their success across different problem settings, there is currently no comprehensive review available to inform researchers about the latest developments and to assist in utilizing hypernets. To fill this gap, we review the progress in hypernets. We present an illustrative example of training deep neural networks using hypernets and propose categorizing hypernets based on five design criteria: inputs, outputs, variability of inputs, variability of outputs, and the architecture of hypernets. We also review applications of hypernets across different deep learning problem settings, followed by a discussion of general scenarios where hypernets can be effectively employed. Finally, we discuss the challenges and future directions that remain underexplored in the field of hypernets. We believe that hypernetworks have the potential to revolutionize deep learning: they offer a new way to design and train neural networks and can improve the performance of deep learning models on a variety of tasks. Through this review, we aim to inspire further advancements in deep learning through hypernetworks. | -
dc.language | eng | -
dc.publisher | Springer | -
dc.relation.ispartof | Artificial Intelligence Review | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Deep learning | -
dc.subject | Hypernetworks | -
dc.subject | Neural networks | -
dc.subject | Parameter generation | -
dc.subject | Weight generation | -
dc.title | A brief review of hypernetworks in deep learning | -
dc.type | Article | -
dc.identifier.doi | 10.1007/s10462-024-10862-8 | -
dc.identifier.scopus | eid_2-s2.0-85201316234 | -
dc.identifier.volume | 57 | -
dc.identifier.issue | 9 | -
dc.identifier.eissn | 1573-7462 | -
dc.identifier.issnl | 0269-2821 | -
