
Conference Paper: Bringing events into video deblurring with non-consecutively blurry frames

Title: Bringing events into video deblurring with non-consecutively blurry frames
Authors: Shang, W; Ren, D; Zou, D; Ren, S; Luo, P; Zuo, W
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 4531-4540
Abstract: Recently, video deblurring has attracted considerable research attention, and several works suggest that events at a high temporal rate can benefit deblurring. Existing video deblurring methods assume consecutively blurry frames, neglecting the fact that sharp frames usually appear near blurry frames. In this paper, we develop a principled framework, D2Nets, for video deblurring that exploits non-consecutively blurry frames, and propose a flexible event fusion module (EFM) to bridge the gap between event-driven and video deblurring. In D2Nets, we propose to first detect the nearest sharp frames (NSFs) using a bidirectional LSTM detector, and then perform deblurring guided by the NSFs. Furthermore, the proposed EFM can be flexibly incorporated into D2Nets, where events can be leveraged to notably boost deblurring performance. EFM can also be easily incorporated into existing deblurring networks, allowing the event-driven deblurring task to benefit from state-of-the-art deblurring methods. On synthetic and real-world blurry datasets, our methods achieve better results than competing methods, and EFM not only benefits D2Nets but also significantly improves the competing deblurring networks.
Persistent Identifier: http://hdl.handle.net/10722/315682
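
The abstract describes a two-stage pipeline: a bidirectional LSTM detector first scores each frame in a clip as sharp or blurry to locate the nearest sharp frames (NSFs), and deblurring is then guided by those NSFs. The following is a minimal, hypothetical PyTorch sketch of that detection stage; the class name, feature dimensions, and the lightweight CNN encoder are illustrative assumptions and do not reproduce the paper's actual architecture.

```python
# Illustrative sketch only (not the authors' code): a bidirectional LSTM that
# scores each frame of a clip as sharp vs. blurry, as described in the abstract.
# Names (SharpFrameDetector, feat_dim, hidden, ...) are assumptions.
import torch
import torch.nn as nn

class SharpFrameDetector(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        # Lightweight per-frame encoder; the real model may use a deeper CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Bidirectional LSTM aggregates temporal context across the clip.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-frame sharpness logit

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        ctx, _ = self.lstm(feats)                   # (B, T, 2*hidden)
        return self.head(ctx).squeeze(-1)           # (B, T) sharpness logits

def nearest_sharp_frames(logits, idx, thresh=0.0):
    """Indices of the nearest sharp frames before and after frame `idx`
    in a single clip, given per-frame sharpness logits of shape (T,)."""
    sharp = (logits > thresh).nonzero(as_tuple=False).flatten().tolist()
    prev = max([i for i in sharp if i < idx], default=None)
    nxt = min([i for i in sharp if i > idx], default=None)
    return prev, nxt
```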

 

DC Field | Value | Language
dc.contributor.author | Shang, W | -
dc.contributor.author | Ren, D | -
dc.contributor.author | Zou, D | -
dc.contributor.author | Ren, S | -
dc.contributor.author | Luo, P | -
dc.contributor.author | Zuo, W | -
dc.date.accessioned | 2022-08-19T09:02:28Z | -
dc.date.available | 2022-08-19T09:02:28Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 4531-4540 | -
dc.identifier.uri | http://hdl.handle.net/10722/315682 | -
dc.description.abstract | Recently, video deblurring has attracted considerable research attention, and several works suggest that events at a high temporal rate can benefit deblurring. Existing video deblurring methods assume consecutively blurry frames, neglecting the fact that sharp frames usually appear near blurry frames. In this paper, we develop a principled framework, D2Nets, for video deblurring that exploits non-consecutively blurry frames, and propose a flexible event fusion module (EFM) to bridge the gap between event-driven and video deblurring. In D2Nets, we propose to first detect the nearest sharp frames (NSFs) using a bidirectional LSTM detector, and then perform deblurring guided by the NSFs. Furthermore, the proposed EFM can be flexibly incorporated into D2Nets, where events can be leveraged to notably boost deblurring performance. EFM can also be easily incorporated into existing deblurring networks, allowing the event-driven deblurring task to benefit from state-of-the-art deblurring methods. On synthetic and real-world blurry datasets, our methods achieve better results than competing methods, and EFM not only benefits D2Nets but also significantly improves the competing deblurring networks. | -
dc.language | eng | -
dc.publisher | Neural Information Processing Systems Foundation. | -
dc.relation.ispartof | Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) | -
dc.title | Bringing events into video deblurring with non-consecutively blurry frames | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335598 | -
dc.identifier.spage | 4531 | -
dc.identifier.epage | 4540 | -
dc.publisher.place | United States | -
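
The event fusion module (EFM) mentioned in the abstract is presented as a drop-in component that injects event information into D2Nets or other deblurring networks. The sketch below shows one plausible, hypothetical form of such a fusion block, assuming the events are binned into a voxel grid; the gating design, channel sizes, and names are illustrative assumptions, not the paper's actual module.

```python
# Illustrative sketch only: one plausible event fusion block that encodes an
# event voxel grid, concatenates it with frame features, and injects the
# result through a learned channel-wise gate. All names/shapes are assumptions.
import torch
import torch.nn as nn

class EventFusionModule(nn.Module):
    def __init__(self, frame_ch=64, event_bins=5):
        super().__init__()
        # Encode the event voxel grid to the same channel width as frame features.
        self.event_enc = nn.Sequential(
            nn.Conv2d(event_bins, frame_ch, 3, padding=1), nn.ReLU(),
        )
        # Soft gate decides, per channel and pixel, how much event evidence to inject.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * frame_ch, frame_ch, 1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * frame_ch, frame_ch, 3, padding=1)

    def forward(self, frame_feat, event_voxels):
        # frame_feat: (B, frame_ch, H, W); event_voxels: (B, event_bins, H, W)
        ev = self.event_enc(event_voxels)
        cat = torch.cat([frame_feat, ev], dim=1)
        g = self.gate(cat)                          # values in [0, 1]
        return frame_feat + g * self.fuse(cat)      # residual fusion
```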
