Conference Paper: A comparative study of impervious surface estimation from optical and SAR data using deep convolutional networks
Field | Value
---|---
Title | A comparative study of impervious surface estimation from optical and SAR data using deep convolutional networks
Authors | Zhang, Hongsheng; Wan, Luoma; Wang, Ting; Lin, Yinyi; Lin, Hui; Zheng, Zezhong
Keywords | ISA; AlexNet; Urban; SAR
Issue Date | 2018
Citation | International Geoscience and Remote Sensing Symposium (IGARSS), 2018, v. 2018-July, p. 1656-1659
Abstract | © 2018 IEEE. Incorporating optical and SAR data to estimate impervious surface is useful but challenging because of their different geometric imaging mechanisms. The recent development of deep convolutional networks (DCN) opens a promising opportunity. In this study, a typical DCN, AlexNet, was modified to estimate impervious surface from optical and SAR data, with GoogLeNet and the Support Vector Machine (SVM) employed for comparison. Experimental results indicated the effectiveness of AlexNet, which achieved an accuracy of over 99% and outperformed both GoogLeNet and SVM. Furthermore, training on 60-80% of the samples outperformed training on the whole set under a certain number of epochs, indicating that a larger number of training samples does not necessarily produce better results, depending on other factors (e.g. the number of epochs). Overall, AlexNet was able to fuse the optical and SAR data and improved the accuracy of impervious surface estimation by about 2% compared with using optical data alone.
Persistent Identifier | http://hdl.handle.net/10722/277704
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Hongsheng | - |
dc.contributor.author | Wan, Luoma | - |
dc.contributor.author | Wang, Ting | - |
dc.contributor.author | Lin, Yinyi | - |
dc.contributor.author | Lin, Hui | - |
dc.contributor.author | Zheng, Zezhong | - |
dc.date.accessioned | 2019-09-27T08:29:45Z | - |
dc.date.available | 2019-09-27T08:29:45Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | International Geoscience and Remote Sensing Symposium (IGARSS), 2018, v. 2018-July, p. 1656-1659 | - |
dc.identifier.uri | http://hdl.handle.net/10722/277704 | - |
dc.description.abstract | © 2018 IEEE. Incorporating optical and SAR data to estimate impervious surface is useful but challenging because of their different geometric imaging mechanisms. The recent development of deep convolutional networks (DCN) opens a promising opportunity. In this study, a typical DCN, AlexNet, was modified to estimate impervious surface from optical and SAR data, with GoogLeNet and the Support Vector Machine (SVM) employed for comparison. Experimental results indicated the effectiveness of AlexNet, which achieved an accuracy of over 99% and outperformed both GoogLeNet and SVM. Furthermore, training on 60-80% of the samples outperformed training on the whole set under a certain number of epochs, indicating that a larger number of training samples does not necessarily produce better results, depending on other factors (e.g. the number of epochs). Overall, AlexNet was able to fuse the optical and SAR data and improved the accuracy of impervious surface estimation by about 2% compared with using optical data alone. | -
dc.language | eng | - |
dc.relation.ispartof | International Geoscience and Remote Sensing Symposium (IGARSS) | - |
dc.subject | ISA | - |
dc.subject | AlexNet | - |
dc.subject | Urban | - |
dc.subject | SAR | - |
dc.title | A comparative study of impervious surface estimation from optical and SAR data using deep convolutional networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/IGARSS.2018.8517293 | - |
dc.identifier.scopus | eid_2-s2.0-85063131352 | - |
dc.identifier.volume | 2018-July | - |
dc.identifier.spage | 1656 | - |
dc.identifier.epage | 1659 | - |
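
To make the fusion setup in the abstract concrete, below is a minimal sketch (not the authors' released code) of how an AlexNet-style classifier can be adapted to take a stacked optical + SAR patch and output an impervious/pervious decision. It uses torchvision's stock AlexNet as a stand-in for the authors' modified network; the band counts, patch size, and two-class head are illustrative assumptions only.

```python
# Hypothetical sketch: AlexNet adapted for fused optical + SAR input.
# Band counts, patch size, and class count are assumptions, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

N_OPTICAL_BANDS = 4   # assumption: e.g. blue/green/red/NIR
N_SAR_BANDS = 2       # assumption: e.g. two SAR polarizations
N_CLASSES = 2         # impervious vs. pervious surface

# Stock AlexNet, randomly initialized (requires torchvision >= 0.13 for `weights=`).
model = models.alexnet(weights=None)

# Widen the first convolution so it ingests the stacked optical + SAR channels
# instead of 3-channel RGB.
model.features[0] = nn.Conv2d(
    N_OPTICAL_BANDS + N_SAR_BANDS, 64, kernel_size=11, stride=4, padding=2
)

# Replace the 1000-way ImageNet head with a binary impervious-surface head.
model.classifier[6] = nn.Linear(4096, N_CLASSES)

# Forward pass on one fused patch (batch of 1, 224x224 pixels).
patch = torch.randn(1, N_OPTICAL_BANDS + N_SAR_BANDS, 224, 224)
logits = model(patch)
print(logits.shape)  # torch.Size([1, 2])
```

In this arrangement, optical and SAR bands are fused at the input by channel stacking, so the first convolution learns joint filters over both modalities; other fusion strategies (e.g. separate branches merged later) are equally possible and the paper does not prescribe this one.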