Conference Paper: Learning Interpretable Concept Groups in CNNs
Title | Learning Interpretable Concept Groups in CNNs |
---|---|
Authors | Varshneya, Saurabh; Ledent, Antoine; Vandermeulen, Robert A.; Lei, Yunwen; Enders, Matthias; Borth, Damian; Kloft, Marius |
Issue Date | 2021 |
Citation | IJCAI International Joint Conference on Artificial Intelligence, 2021, p. 1061-1067 |
Abstract | We propose a novel training methodology, Concept Group Learning (CGL), that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions for a given layer. We additionally use a regularizer to encourage a sparse weighting of the concept groups in each layer so that a few concept groups can have greater importance than others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned using CGL versus filters learned without CGL, and find that CGL activation regions concentrate more strongly around semantically relevant features. |
Persistent Identifier | http://hdl.handle.net/10722/329783 |
ISSN | 1045-0823 (2020 SCImago Journal Rankings: 0.649) |
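
The abstract describes two regularizers: one pushing filters within a concept group to be active in similar image regions, and a sparsity penalty on per-group importance weights. The following is a minimal, hypothetical PyTorch sketch of how such penalties could look; it illustrates the idea as stated in the abstract and is not the authors' implementation. All names (`cgl_regularizers`, `group_ids`, `group_weights`, `lam_sim`, `lam_sparse`) are assumptions for illustration.

```python
# Hypothetical sketch of the two regularizers described in the abstract
# (not the paper's released code). Assumes the filters of one conv layer
# have been pre-assigned to concept groups via `group_ids`.
import torch

def cgl_regularizers(activations, group_ids, group_weights,
                     lam_sim=1.0, lam_sparse=0.01):
    """activations: (B, C, H, W) feature maps of one layer.
    group_ids: (C,) long tensor mapping each filter to a concept group.
    group_weights: (G,) learnable per-group importance weights.
    """
    B, C, H, W = activations.shape
    # Normalize each filter's spatial activation map so the similarity
    # penalty compares *where* filters fire, not how strongly.
    maps = activations.flatten(2)                        # (B, C, H*W)
    maps = maps / (maps.norm(dim=2, keepdim=True) + 1e-8)

    sim_loss = activations.new_zeros(())
    for g in torch.unique(group_ids):
        idx = (group_ids == g).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue
        group_maps = maps[:, idx]                        # (B, n_g, H*W)
        mean_map = group_maps.mean(dim=1, keepdim=True)  # (B, 1, H*W)
        # Penalize deviation of each filter's map from the group mean,
        # pushing filters in a group to be active in similar regions.
        sim_loss = sim_loss + ((group_maps - mean_map) ** 2).mean()

    # L1 penalty on group weights so only a few groups carry importance.
    sparse_loss = group_weights.abs().sum()
    return lam_sim * sim_loss + lam_sparse * sparse_loss

# Example usage with a dummy layer of 8 filters in 2 concept groups:
acts = torch.randn(4, 8, 16, 16)                 # (batch, filters, H, W)
gids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])    # filter -> group
gw = torch.nn.Parameter(torch.ones(2))           # learnable group weights
loss = cgl_regularizers(acts, gids, gw)
```

In training, these terms would be added to the task loss; the paper's exact similarity measure and grouping scheme may differ from this sketch.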
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Varshneya, Saurabh | - |
dc.contributor.author | Ledent, Antoine | - |
dc.contributor.author | Vandermeulen, Robert A. | - |
dc.contributor.author | Lei, Yunwen | - |
dc.contributor.author | Enders, Matthias | - |
dc.contributor.author | Borth, Damian | - |
dc.contributor.author | Kloft, Marius | - |
dc.date.accessioned | 2023-08-09T03:35:18Z | - |
dc.date.available | 2023-08-09T03:35:18Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IJCAI International Joint Conference on Artificial Intelligence, 2021, p. 1061-1067 | - |
dc.identifier.issn | 1045-0823 | - |
dc.identifier.uri | http://hdl.handle.net/10722/329783 | - |
dc.description.abstract | We propose a novel training methodology, Concept Group Learning (CGL), that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions for a given layer. We additionally use a regularizer to encourage a sparse weighting of the concept groups in each layer so that a few concept groups can have greater importance than others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned using CGL versus filters learned without CGL, and find that CGL activation regions concentrate more strongly around semantically relevant features. | - |
dc.language | eng | - |
dc.relation.ispartof | IJCAI International Joint Conference on Artificial Intelligence | - |
dc.title | Learning Interpretable Concept Groups in CNNs | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85125434598 | - |
dc.identifier.spage | 1061 | - |
dc.identifier.epage | 1067 | - |