Conference Paper: Deep Online Fused Video Stabilization

Title: Deep Online Fused Video Stabilization
Authors: Shi, Zhenmei; Shi, Fuhao; Lai, Wei Sheng; Liang, Chia Kai; Liang, Yingyu
Keywords: Computational Photography; Image and Video Synthesis
Issue Date: 2022
Citation: Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 2022, p. 865-873
Abstract: We present a deep neural network (DNN) that uses both sensor data (gyroscope) and image content (optical flow) to stabilize videos through unsupervised learning. The network fuses optical flow with real/virtual camera pose histories into a joint motion representation. Next, the LSTM cell infers the new virtual camera pose, which is used to generate a warping grid that stabilizes the video frames. We adopt a relative motion representation as well as a multi-stage training strategy to optimize our model without any supervision. To the best of our knowledge, this is the first DNN solution that adopts both sensor data and image content for video stabilization. We validate the proposed framework through ablation studies and demonstrate that the proposed method outperforms the state-of-the-art alternative solutions via quantitative evaluations and a user study. Check out our video results, code and dataset at our website.
Persistent Identifier: http://hdl.handle.net/10722/341347
ISI Accession Number ID: WOS:000800471200087
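
The abstract outlines the data flow of the method: optical-flow features are fused with real and virtual camera-pose histories into a joint motion representation, an LSTM cell predicts the next virtual camera pose, and that pose drives a warping grid that stabilizes each frame. The PyTorch sketch below is a hypothetical rendering of that description only; all layer sizes, names, and the quaternion pose format are assumptions, not the authors' released implementation (their code is linked from the project website).

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedStabilizerSketch(nn.Module):
    """Hypothetical sketch of the fusion described in the abstract:
    optical flow + real/virtual pose histories -> joint motion
    representation -> LSTM cell -> new virtual camera pose."""

    def __init__(self, flow_dim=256, pose_dim=4, hist_len=10, hidden=128):
        super().__init__()
        # Encoder for a flattened optical-flow feature (dimensions assumed).
        self.flow_enc = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU())
        # Encoder for concatenated real + virtual pose histories
        # (quaternion poses assumed).
        self.pose_enc = nn.Sequential(
            nn.Linear(2 * hist_len * pose_dim, hidden), nn.ReLU()
        )
        # LSTM cell over the joint motion representation.
        self.lstm = nn.LSTMCell(2 * hidden, hidden)
        # Head regressing the next virtual camera pose.
        self.pose_head = nn.Linear(hidden, pose_dim)

    def forward(self, flow_feat, real_hist, virt_hist, state=None):
        b = flow_feat.size(0)
        # Fuse image motion and pose histories into one joint vector.
        joint = torch.cat(
            [
                self.flow_enc(flow_feat),
                self.pose_enc(
                    torch.cat([real_hist, virt_hist], dim=1).reshape(b, -1)
                ),
            ],
            dim=1,
        )
        h, c = self.lstm(joint, state)
        # Normalize to a unit quaternion for the new virtual pose; a warping
        # grid that stabilizes the frame would be built from this pose
        # downstream.
        return F.normalize(self.pose_head(h), dim=1), (h, c)

Downstream, the frame would be resampled against a grid derived from the real-to-virtual pose transform (e.g., via torch.nn.functional.grid_sample); the relative motion representation and multi-stage unsupervised training the abstract mentions are beyond this sketch.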

 

DC Field | Value | Language
dc.contributor.author | Shi, Zhenmei | -
dc.contributor.author | Shi, Fuhao | -
dc.contributor.author | Lai, Wei Sheng | -
dc.contributor.author | Liang, Chia Kai | -
dc.contributor.author | Liang, Yingyu | -
dc.date.accessioned | 2024-03-13T08:42:06Z | -
dc.date.available | 2024-03-13T08:42:06Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 2022, p. 865-873 | -
dc.identifier.uri | http://hdl.handle.net/10722/341347 | -
dc.description.abstract | We present a deep neural network (DNN) that uses both sensor data (gyroscope) and image content (optical flow) to stabilize videos through unsupervised learning. The network fuses optical flow with real/virtual camera pose histories into a joint motion representation. Next, the LSTM cell infers the new virtual camera pose, which is used to generate a warping grid that stabilizes the video frames. We adopt a relative motion representation as well as a multi-stage training strategy to optimize our model without any supervision. To the best of our knowledge, this is the first DNN solution that adopts both sensor data and image content for video stabilization. We validate the proposed framework through ablation studies and demonstrate that the proposed method outperforms the state-of-the-art alternative solutions via quantitative evaluations and a user study. Check out our video results, code and dataset at our website. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022 | -
dc.subject | Computational Photography | -
dc.subject | Image and Video Synthesis | -
dc.title | Deep Online Fused Video Stabilization | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/WACV51458.2022.00094 | -
dc.identifier.scopus | eid_2-s2.0-85126129304 | -
dc.identifier.spage | 865 | -
dc.identifier.epage | 873 | -
dc.identifier.isi | WOS:000800471200087 | -
