File Download: There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/ICDM58522.2023.00074
- Scopus: eid_2-s2.0-85178403596
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness
| Field | Value |
|---|---|
| Title | Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness |
| Authors | Wu, Yanzhao; Chow, Ka Ho; Wei, Wenqi; Liu, Ling |
| Keywords | Adversarial Robustness; Deep Ensemble; Deep Learning; Ensemble Robustness; Heterogeneity |
| Issue Date | 2023 |
| Citation | Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 648-657 |
| Abstract | Deep neural network ensembles hold the potential of improving generalization performance for complex learning tasks. This paper presents formal analysis and empirical evaluation to show that heterogeneous deep ensembles with high ensemble diversity can effectively leverage model learning heterogeneity to boost ensemble robustness. We first show that heterogeneous DNN models trained for solving the same learning problem, e.g., object detection, can significantly strengthen the mean average precision (mAP) through our weighted bounding box ensemble consensus method. Second, we further compose ensembles of heterogeneous models for solving different learning problems, e.g., object detection and semantic segmentation, by introducing the connected component labeling (CCL) based alignment. We show that this two-tier heterogeneity driven ensemble construction method can compose an ensemble team that promotes high ensemble diversity and low negative correlation among member models of the ensemble, strengthening ensemble robustness against both negative examples and adversarial attacks. Third, we provide a formal analysis of the ensemble robustness in terms of negative correlation. Extensive experiments validate the enhanced robustness of heterogeneous ensembles in both benign and adversarial settings. The appendix and source codes are available on GitHub at https://github.com/git-disl/HeteRobust. |
| Persistent Identifier | http://hdl.handle.net/10722/343443 |
| ISSN | 1550-4786 |
| 2020 SCImago Journal Rankings | 0.545 |
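The abstract's first contribution, a confidence-weighted bounding box consensus across heterogeneous detectors, can be sketched roughly as follows. The greedy IoU-based clustering, the 0.5 threshold, and the score-weighted averaging here are illustrative assumptions for exposition, not the paper's exact algorithm:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def weighted_box_consensus(detections, iou_thresh=0.5):
    """Cluster overlapping boxes from several detectors and fuse each
    cluster into one box, averaging coordinates weighted by confidence.

    detections: list of (box, score) with box = (x1, y1, x2, y2).
    Returns a list of fused (box, score) pairs.
    """
    clusters = []  # each cluster: list of (box, score)
    for box, score in sorted(detections, key=lambda d: -d[1]):
        for cluster in clusters:
            # Compare against the cluster's highest-confidence box.
            if iou(cluster[0][0], box) >= iou_thresh:
                cluster.append((box, score))
                break
        else:
            clusters.append([(box, score)])
    fused = []
    for cluster in clusters:
        total = sum(s for _, s in cluster)
        coords = tuple(
            sum(b[i] * s for b, s in cluster) / total for i in range(4)
        )
        fused.append((coords, total / len(cluster)))
    return fused
```

For example, two detectors reporting heavily overlapping boxes for the same object yield a single fused box whose coordinates lean toward the more confident detector.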
| DC Field | Value | Language |
|---|---|---|
dc.contributor.author | Wu, Yanzhao | - |
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Wei, Wenqi | - |
dc.contributor.author | Liu, Ling | - |
dc.date.accessioned | 2024-05-10T09:08:10Z | - |
dc.date.available | 2024-05-10T09:08:10Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings - IEEE International Conference on Data Mining, ICDM, 2023, p. 648-657 | - |
dc.identifier.issn | 1550-4786 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343443 | - |
dc.description.abstract | Deep neural network ensembles hold the potential of improving generalization performance for complex learning tasks. This paper presents formal analysis and empirical evaluation to show that heterogeneous deep ensembles with high ensemble diversity can effectively leverage model learning heterogeneity to boost ensemble robustness. We first show that heterogeneous DNN models trained for solving the same learning problem, e.g., object detection, can significantly strengthen the mean average precision (mAP) through our weighted bounding box ensemble consensus method. Second, we further compose ensembles of heterogeneous models for solving different learning problems, e.g., object detection and semantic segmentation, by introducing the connected component labeling (CCL) based alignment. We show that this two-tier heterogeneity driven ensemble construction method can compose an ensemble team that promotes high ensemble diversity and low negative correlation among member models of the ensemble, strengthening ensemble robustness against both negative examples and adversarial attacks. Third, we provide a formal analysis of the ensemble robustness in terms of negative correlation. Extensive experiments validate the enhanced robustness of heterogeneous ensembles in both benign and adversarial settings. The appendix and source codes are available on GitHub at https://github.com/git-disl/HeteRobust. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - IEEE International Conference on Data Mining, ICDM | - |
dc.subject | Adversarial Robustness | - |
dc.subject | Deep Ensemble | - |
dc.subject | Deep Learning | - |
dc.subject | Ensemble Robustness | - |
dc.subject | Heterogeneity | - |
dc.title | Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICDM58522.2023.00074 | - |
dc.identifier.scopus | eid_2-s2.0-85178403596 | - |
dc.identifier.spage | 648 | - |
dc.identifier.epage | 657 | - |
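The abstract's second contribution uses connected component labeling (CCL) to align a segmentation model's output with detector boxes. A minimal sketch of the labeling step, assuming a binary mask and 4-connectivity (the paper's CCL-based alignment may differ in its connectivity rule and post-processing):

```python
from collections import deque

def ccl_boxes(mask):
    """Return one (x1, y1, x2, y2) box per 4-connected foreground blob,
    so segmentation output can be compared against detection boxes.

    mask: 2D list of 0/1 values, indexed mask[row][col].
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # BFS flood fill to find this component's extent.
                q = deque([(r, c)])
                seen[r][c] = True
                x1, y1, x2, y2 = c, r, c, r
                while q:
                    cr, cc = q.popleft()
                    x1, x2 = min(x1, cc), max(x2, cc)
                    y1, y2 = min(y1, cr), max(y2, cr)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                boxes.append((x1, y1, x2, y2))
    return boxes
```

Each resulting box can then be matched against detector outputs (e.g., with an IoU criterion), which is what makes a detection model and a segmentation model comparable as members of one heterogeneous ensemble.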