Conference Paper: Learning versatile neural architectures by propagating network codes
Title | Learning versatile neural architectures by propagating network codes |
---|---|
Authors | Ding, M; Huo, Y; Lu, H; Yang, L; Wang, Z; Lu, Z; Wang, J; Luo, P |
Keywords | Multitask NAS; Task-Transferable Architecture; Neural Predictor; NAS Benchmark |
Issue Date | 2022 |
Publisher | ICLR. |
Citation | 10th International Conference on Learning Representations (ICLR) (Virtual), April 25-29, 2022 |
Abstract | This work explores how to design a single neural network capable of adapting to multiple heterogeneous vision tasks, such as image segmentation, 3D detection, and video recognition. This goal is challenging because both the neural architecture search (NAS) spaces and the search methods used in different tasks are inconsistent. We solve this challenge from both sides. We first introduce a unified design space for multiple tasks and build a multitask NAS benchmark (NAS-Bench-MR) on several widely used datasets, including ImageNet, Cityscapes, KITTI, and HMDB51. We further propose Network Coding Propagation (NCP), which back-propagates gradients of neural predictors to directly update architecture codes along the desired gradient directions for various tasks. In this way, NCP can find optimal architecture configurations in our large search space within seconds. Unlike prior NAS methods, which typically focus on a single task, NCP has several unique benefits. (1) NCP transforms architecture optimization from data-driven to architecture-driven, enabling the joint search of an architecture across multiple tasks with different data distributions. (2) NCP learns from network codes rather than from the original data, enabling it to update architectures efficiently across datasets. (3) In addition to our NAS-Bench-MR, NCP performs well on other NAS benchmarks, such as NAS-Bench-201. (4) Thorough studies of NCP on inter-, cross-, and intra-task settings highlight the importance of cross-task neural architecture design, i.e., multitask neural architectures and architecture transfer between different tasks. |
Description | Poster Session 9 |
Persistent Identifier | http://hdl.handle.net/10722/315800 |
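
The abstract above describes the core NCP mechanism: a trained neural predictor maps an architecture code to a predicted performance, its weights are frozen, and gradients of the prediction are back-propagated into the code itself. The sketch below illustrates that idea only; the predictor shape, code dimensionality, step size, and clamping range are hypothetical assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the gradient-based code-update idea behind NCP,
# under assumed shapes and hyperparameters (not the authors' code).
import torch
import torch.nn as nn

CODE_DIM = 16  # assumed length of an architecture code vector

# Stand-in predictor: maps an architecture code to a predicted accuracy.
# In the paper this would be trained on (code, accuracy) pairs from
# NAS-Bench-MR; random weights here serve only to show the mechanics.
predictor = nn.Sequential(
    nn.Linear(CODE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
for p in predictor.parameters():
    p.requires_grad_(False)  # predictor weights stay frozen

# Start from an initial architecture code and optimize the code directly.
code = torch.rand(1, CODE_DIM, requires_grad=True)
optimizer = torch.optim.SGD([code], lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    predicted_accuracy = predictor(code)
    # Gradient ascent on predicted accuracy = descent on its negation.
    loss = -predicted_accuracy.sum()
    loss.backward()
    optimizer.step()
    # Keep the code inside its (assumed normalized) valid range.
    with torch.no_grad():
        code.clamp_(0.0, 1.0)

print("refined architecture code:", code.detach())
```

For the multitask setting the abstract describes, one could presumably sum the negated outputs of several per-task predictors so the code receives their combined gradients, then round discrete code entries back to valid architecture choices.
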
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ding, M | - |
dc.contributor.author | Huo, Y | - |
dc.contributor.author | Lu, H | - |
dc.contributor.author | Yang, L | - |
dc.contributor.author | Wang, Z | - |
dc.contributor.author | Lu, Z | - |
dc.contributor.author | Wang, J | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2022-08-19T09:04:40Z | - |
dc.date.available | 2022-08-19T09:04:40Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | 10th International Conference on Learning Representations (ICLR) (Virtual), April 25-29, 2022 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315800 | - |
dc.description | Poster Session 9 | - |
dc.description.abstract | This work explores how to design a single neural network capable of adapting to multiple heterogeneous vision tasks, such as image segmentation, 3D detection, and video recognition. This goal is challenging because both the neural architecture search (NAS) spaces and the search methods used in different tasks are inconsistent. We solve this challenge from both sides. We first introduce a unified design space for multiple tasks and build a multitask NAS benchmark (NAS-Bench-MR) on several widely used datasets, including ImageNet, Cityscapes, KITTI, and HMDB51. We further propose Network Coding Propagation (NCP), which back-propagates gradients of neural predictors to directly update architecture codes along the desired gradient directions for various tasks. In this way, NCP can find optimal architecture configurations in our large search space within seconds. Unlike prior NAS methods, which typically focus on a single task, NCP has several unique benefits. (1) NCP transforms architecture optimization from data-driven to architecture-driven, enabling the joint search of an architecture across multiple tasks with different data distributions. (2) NCP learns from network codes rather than from the original data, enabling it to update architectures efficiently across datasets. (3) In addition to our NAS-Bench-MR, NCP performs well on other NAS benchmarks, such as NAS-Bench-201. (4) Thorough studies of NCP on inter-, cross-, and intra-task settings highlight the importance of cross-task neural architecture design, i.e., multitask neural architectures and architecture transfer between different tasks. | - |
dc.language | eng | - |
dc.publisher | ICLR. | - |
dc.subject | Multitask NAS | - |
dc.subject | Task-Transferable Architecture | - |
dc.subject | Neural Predictor | - |
dc.subject | NAS Benchmark | - |
dc.title | Learning versatile neural architectures by propagating network codes | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 335592 | - |
dc.publisher.place | United States | - |