File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TGRS.2022.3224733
- Scopus: eid_2-s2.0-85144057189
- WOS: WOS:000900003000004
Article: DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA
Title | DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA |
---|---|
Authors | Zhang, Hanchao; Zang, Ning; Cao, Yun; Wang, Yuebin; Zhang, Liqiang; Huang, Bo; Takis Mathiopoulos, P. |
Keywords | Graph convolutional network (GCN); high spatial resolution; land-use mapping (LUM); transfer learning; vision transformer (ViT) |
Issue Date | 2022 |
Citation | IEEE Transactions on Geoscience and Remote Sensing, 2022, v. 60, article no. 5634915 How to Cite? |
Abstract | Land use mapping (LUM) of a coal mining subsidence area (CMSA) is a significant task. The application of convolutional neural networks (CNNs) has become prevalent in LUM and can achieve promising performance. However, CNNs cannot process irregular data; as a result, boundary information is overlooked. The graph convolutional network (GCN) flexibly operates on irregular regions to capture the contextual relations among neighbors. However, the global context is not considered in the GCN. In this article, we develop deep global context construction with an enabled boundary (DGCC-EB) for the LUM of the CMSA. An original Google Earth image is partitioned into nonoverlapping processing units. The DGCC-EB extracts preliminary features from each processing unit, which is further divided into nonoverlapping superpixels with irregular edges. The superpixel features are generated and then embedded into the GCN and a vision transformer (ViT). In the GCN, graph convolution is applied to the superpixel features; therefore, the boundary information of objects is preserved. In the ViT, multihead attention blocks and positional encoding build the global context among the superpixel features. A feature constraint is calculated to fuse the advantages of the features extracted from the GCN and ViT. To improve the LUM accuracy, the cross-entropy (CE) loss is calculated. The DGCC-EB integrates all modules into a single end-to-end framework, which is optimized by a customized algorithm. The results of case studies show that the proposed DGCC-EB achieved acceptable overall accuracy (OA) (89.06%/88.68%) and Kappa (0.86/0.87) values for Shouzhou city and Zezhou city, respectively. |
Persistent Identifier | http://hdl.handle.net/10722/329903 |
ISSN | 0196-2892 (2023 Impact Factor: 7.5; 2023 SCImago Journal Rankings: 2.403) |
ISI Accession Number ID | WOS:000900003000004 |
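The pipeline summarized in the abstract (superpixel features passed through a boundary-preserving GCN branch and a global-context attention branch, then fused) can be sketched as follows. This is a minimal illustrative sketch only: the shapes, random weights, single attention head, and simple averaging fusion are assumptions standing in for the paper's actual DGCC-EB modules and feature constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(h, adj, w):
    """One graph-convolution step: symmetrically normalized adjacency,
    linear projection, ReLU. Preserves the superpixel neighborhood structure."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ h @ w, 0.0)

def self_attention(h, wq, wk, wv):
    """Single-head self-attention: every superpixel attends to all others,
    giving a global context (the paper uses multihead ViT blocks instead)."""
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ v

n, d = 6, 8                                             # 6 superpixels, 8-dim features (illustrative)
feats = rng.normal(size=(n, d))                         # stand-in superpixel features
adj = (rng.random((n, n)) > 0.5).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                       # symmetric adjacency, no self-loops

local = gcn_layer(feats, adj, rng.normal(size=(d, d)))                       # boundary-aware branch
glob = self_attention(feats, *(rng.normal(size=(d, d)) for _ in range(3)))   # global-context branch
fused = 0.5 * (local + glob)                            # averaging as a stand-in for the feature constraint
print(fused.shape)
```

In the actual framework, both branches and the fusion constraint are trained end to end under a cross-entropy loss; the averaging above merely shows that the two branches produce features of matching shape that can be combined per superpixel.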
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Hanchao | - |
dc.contributor.author | Zang, Ning | - |
dc.contributor.author | Cao, Yun | - |
dc.contributor.author | Wang, Yuebin | - |
dc.contributor.author | Zhang, Liqiang | - |
dc.contributor.author | Huang, Bo | - |
dc.contributor.author | Takis Mathiopoulos, P. | - |
dc.date.accessioned | 2023-08-09T03:36:19Z | - |
dc.date.available | 2023-08-09T03:36:19Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | IEEE Transactions on Geoscience and Remote Sensing, 2022, v. 60, article no. 5634915 | - |
dc.identifier.issn | 0196-2892 | - |
dc.identifier.uri | http://hdl.handle.net/10722/329903 | - |
dc.description.abstract | Land use mapping (LUM) of a coal mining subsidence area (CMSA) is a significant task. The application of convolutional neural networks (CNNs) has become prevalent in LUM and can achieve promising performance. However, CNNs cannot process irregular data; as a result, boundary information is overlooked. The graph convolutional network (GCN) flexibly operates on irregular regions to capture the contextual relations among neighbors. However, the global context is not considered in the GCN. In this article, we develop deep global context construction with an enabled boundary (DGCC-EB) for the LUM of the CMSA. An original Google Earth image is partitioned into nonoverlapping processing units. The DGCC-EB extracts preliminary features from each processing unit, which is further divided into nonoverlapping superpixels with irregular edges. The superpixel features are generated and then embedded into the GCN and a vision transformer (ViT). In the GCN, graph convolution is applied to the superpixel features; therefore, the boundary information of objects is preserved. In the ViT, multihead attention blocks and positional encoding build the global context among the superpixel features. A feature constraint is calculated to fuse the advantages of the features extracted from the GCN and ViT. To improve the LUM accuracy, the cross-entropy (CE) loss is calculated. The DGCC-EB integrates all modules into a single end-to-end framework, which is optimized by a customized algorithm. The results of case studies show that the proposed DGCC-EB achieved acceptable overall accuracy (OA) (89.06%/88.68%) and Kappa (0.86/0.87) values for Shouzhou city and Zezhou city, respectively. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Geoscience and Remote Sensing | - |
dc.subject | Graph convolutional network (GCN) | - |
dc.subject | high spatial resolution | - |
dc.subject | land-use mapping (LUM) | - |
dc.subject | transfer learning | - |
dc.subject | vision transformer (ViT) | - |
dc.title | DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TGRS.2022.3224733 | - |
dc.identifier.scopus | eid_2-s2.0-85144057189 | - |
dc.identifier.volume | 60 | - |
dc.identifier.spage | article no. 5634915 | - |
dc.identifier.epage | article no. 5634915 | - |
dc.identifier.eissn | 1558-0644 | - |
dc.identifier.isi | WOS:000900003000004 | - |