Links for fulltext (may require subscription):
- Publisher Website: 10.1109/TCYB.2024.3392474
- Scopus: eid_2-s2.0-85194037077
Article: Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images
Title | Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images |
---|---|
Authors | Yuan, Shuai; Zhang, Lixian; Dong, Runmin; Xiong, Jie; Zheng, Juepeng; Fu, Haohuan; Gong, Peng |
Keywords | Complex composite object detection; Correlation; Feature extraction; high-resolution remote sensing images (RSIs); inter-relationship; Object detection; Power generation; Remote sensing; Semantics; Transformer; Transformers |
Issue Date | 20-May-2024 |
Publisher | Institute of Electrical and Electronics Engineers |
Citation | IEEE Transactions on Cybernetics, 2024 |
Abstract | In high-resolution remote sensing images (RSIs), complex composite object detection (e.g., coal-fired power plant detection and harbor detection) is challenging due to multiple discrete parts with variable layouts leading to complex weak inter-relationship and blurred boundaries, instead of a clearly defined single object. To address this issue, this article proposes an end-to-end framework, i.e., relational part-aware network (REPAN), to explore the semantic correlation and extract discriminative features among multiple parts. Specifically, we first design a part region proposal network (P-RPN) to locate discriminative yet subtle regions. With butterfly units (BFUs) embedded, feature-scale confusion problems stemming from aliasing effects can be largely alleviated. Second, a feature relation Transformer (FRT) plumbs the depths of the spatial relationships by part-and-global joint learning, exploring correlations between various parts to enhance significant part representation. Finally, a contextual detector (CD) classifies and detects parts and the whole composite object through multirelation-aware features, where part information guides to locate the whole object. We collect three remote sensing object detection datasets with four categories to evaluate our method. Consistently surpassing the performance of state-of-the-art methods, the results of extensive experiments underscore the effectiveness and superiority of our proposed method. |
Persistent Identifier | http://hdl.handle.net/10722/348135 |
ISSN | 2168-2275 |
2023 Impact Factor | 9.4 |
2023 SCImago Journal Rankings | 5.641 |
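The abstract describes a feature relation Transformer (FRT) that performs part-and-global joint learning: each candidate part attends over all other parts plus a global feature to build a relation-aware representation. As a rough, dependency-free sketch of that kind of attention (the function name, vector shapes, and scaling are illustrative assumptions, not the authors' implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def relation_aware_features(part_feats, global_feat):
    """Toy part-and-global joint attention: each part feature attends
    over all part features plus the global feature, and its output is
    the attention-weighted mix of that context (a convex combination)."""
    context = part_feats + [global_feat]
    out = []
    for q in part_feats:
        # Scaled dot-product scores of this part against the full context.
        scores = [dot(q, k) / math.sqrt(len(q)) for k in context]
        w = softmax(scores)
        # Weighted mix of context vectors, dimension by dimension.
        mixed = [sum(wi * k[d] for wi, k in zip(w, context))
                 for d in range(len(q))]
        out.append(mixed)
    return out
```

In the paper's full pipeline this step would sit between the part region proposal network (P-RPN), which supplies the part features, and the contextual detector (CD), which uses the relation-aware features to localize both the parts and the whole composite object.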
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yuan, Shuai | - |
dc.contributor.author | Zhang, Lixian | - |
dc.contributor.author | Dong, Runmin | - |
dc.contributor.author | Xiong, Jie | - |
dc.contributor.author | Zheng, Juepeng | - |
dc.contributor.author | Fu, Haohuan | - |
dc.contributor.author | Gong, Peng | - |
dc.date.accessioned | 2024-10-05T00:30:45Z | - |
dc.date.available | 2024-10-05T00:30:45Z | - |
dc.date.issued | 2024-05-20 | - |
dc.identifier.citation | IEEE Transactions on Cybernetics, 2024 | - |
dc.identifier.issn | 2168-2275 | - |
dc.identifier.uri | http://hdl.handle.net/10722/348135 | - |
dc.description.abstract | In high-resolution remote sensing images (RSIs), complex composite object detection (e.g., coal-fired power plant detection and harbor detection) is challenging due to multiple discrete parts with variable layouts leading to complex weak inter-relationship and blurred boundaries, instead of a clearly defined single object. To address this issue, this article proposes an end-to-end framework, i.e., relational part-aware network (REPAN), to explore the semantic correlation and extract discriminative features among multiple parts. Specifically, we first design a part region proposal network (P-RPN) to locate discriminative yet subtle regions. With butterfly units (BFUs) embedded, feature-scale confusion problems stemming from aliasing effects can be largely alleviated. Second, a feature relation Transformer (FRT) plumbs the depths of the spatial relationships by part-and-global joint learning, exploring correlations between various parts to enhance significant part representation. Finally, a contextual detector (CD) classifies and detects parts and the whole composite object through multirelation-aware features, where part information guides to locate the whole object. We collect three remote sensing object detection datasets with four categories to evaluate our method. Consistently surpassing the performance of state-of-the-art methods, the results of extensive experiments underscore the effectiveness and superiority of our proposed method. | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Cybernetics | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Complex composite object detection | - |
dc.subject | Correlation | - |
dc.subject | Feature extraction | - |
dc.subject | high-resolution remote sensing images (RSIs) | - |
dc.subject | inter-relationship | - |
dc.subject | Object detection | - |
dc.subject | Power generation | - |
dc.subject | Remote sensing | - |
dc.subject | Semantics | - |
dc.subject | Transformer | - |
dc.subject | Transformers | - |
dc.title | Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TCYB.2024.3392474 | - |
dc.identifier.scopus | eid_2-s2.0-85194037077 | - |
dc.identifier.issnl | 2168-2267 | - |