Postgraduate thesis: Deep learning based automatic colorization and color sketch generation

Title: Deep learning based automatic colorization and color sketch generation
Authors: Zhang, Wei (张伟)
Advisor(s): Yu, Y
Issue Date: 2017
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Zhang, W. [张伟]. (2017). Deep learning based automatic colorization and color sketch generation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: In recent years, owing to the booming prevalence of mobile devices and their rapid technological advances, tremendous numbers of photographs have been recorded, posted, and circulated over social networks. Sustained efforts from both research communities and industry have gone into smart image editing applications that help consumers beautify and stylize their photographs. Since human intelligence is capable of analyzing and understanding images, developing automatic image transform algorithms that satisfy human preferences for color and style is also an important problem in computer graphics and artificial intelligence research. This thesis focuses on two automatic image transform tasks tackled with deep learning based approaches: colorization, which converts grayscale images into colorful ones, and color sketch style transfer, which generates color sketches from natural photographs.

I formulate image colorization as a pixel-wise prediction problem using deep fully convolutional neural networks. To maintain color consistency in homogeneous regions while precisely distinguishing colors near region boundaries, I propose a fully automatic colorization pipeline that incorporates a boundary-guided CRF and a CNN-based color transform as post-processing steps. In addition, since multiple plausible colorizations usually exist for a single image, automatically evaluating different colorization methods remains challenging; I therefore introduce two novel automatic evaluation schemes that efficiently assess colorization quality in terms of spatial coherence and localization. Comprehensive experiments demonstrate substantial quality improvements for the proposed colorization method under multiple evaluation metrics.

For color sketch generation, I develop a fully automatic image transform system that extends the state-of-the-art real-time neural style transfer method. Owing to the unusual texture statistics of color sketch artworks, existing deep neural style transfer methods struggle to generate color sketches. I address this problem by choosing a suitable style target online from an example set of 10 color sketches selected by clustering. I also propose a novel image transform CNN architecture with spatial refinement that facilitates learning high-resolution style transfer, along with a boundary perceptual loss, complementary to the style and content losses, that preserves object boundaries in the generated sketches. The final step of the system enhances gouache colors via a linear color transform followed by a guided filtering operation. Experimental results demonstrate that the system greatly reduces artifacts compared to previous state-of-the-art methods, producing vivid color sketch results.
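The abstract frames colorization as pixel-wise prediction with a deep fully convolutional network. The thesis's actual architecture, color space, and training loss are not given on this page, so the following is only a minimal illustrative sketch in PyTorch, assuming the common CIE Lab formulation in which a small encoder-decoder FCN regresses the ab chrominance channels from the grayscale L channel.

```python
import torch
import torch.nn as nn

class ColorizationFCN(nn.Module):
    """Toy fully convolutional network: predicts the two chrominance (ab)
    channels of a CIE Lab image from its grayscale L channel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),    # ab scaled to [-1, 1]
        )

    def forward(self, luminance):                         # (N, 1, H, W)
        return self.decoder(self.encoder(luminance))      # (N, 2, H, W)

# One pixel-wise regression step against ground-truth chrominance.
model = ColorizationFCN()
L_channel = torch.rand(4, 1, 224, 224)                    # grayscale inputs
ab_target = torch.rand(4, 2, 224, 224) * 2 - 1            # ground-truth ab channels
loss = nn.functional.smooth_l1_loss(model(L_channel), ab_target)
loss.backward()
```

In a pipeline like the one described in the abstract, the predicted chrominance would then be refined by post-processing steps such as the boundary-guided CRF and CNN-based color transform before being recombined with the L channel.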
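The color sketch system is described as being trained with style and content losses plus a complementary boundary perceptual loss. The exact loss definitions are not reproduced on this page, so the sketch below only illustrates the general shape of such an objective: Gram-matrix style terms and a feature-reconstruction content term in the usual neural style transfer fashion, with a Sobel edge comparison standing in for the boundary term. The function names, the weights w_content, w_style, and w_boundary, and the edge-based boundary surrogate are all assumptions, not the thesis's formulation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):                       # feat: (N, C, H, W)
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # (N, C, C) style statistics

def sobel_edges(img):                        # img: (N, C, H, W) -> edge magnitude (N, 1, H, W)
    kx = torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]],
                      device=img.device)
    ky = kx.t()
    gray = img.mean(dim=1, keepdim=True)     # collapse to a single channel
    gx = F.conv2d(gray, kx.reshape(1, 1, 3, 3), padding=1)
    gy = F.conv2d(gray, ky.reshape(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def sketch_loss(out_feats, content_feats, style_feats, out_img, content_img,
                w_content=1.0, w_style=10.0, w_boundary=1.0):
    """Weighted sum of content, style, and boundary terms (weights are placeholders)."""
    content = F.mse_loss(out_feats[-1], content_feats[-1])
    style = sum(F.mse_loss(gram_matrix(o), gram_matrix(s))
                for o, s in zip(out_feats, style_feats))
    boundary = F.l1_loss(sobel_edges(out_img), sobel_edges(content_img))
    return w_content * content + w_style * style + w_boundary * boundary

# Example call with random stand-ins for VGG-style feature maps.
feats = [torch.rand(1, c, s, s) for c, s in [(64, 128), (128, 64), (256, 32)]]
img = torch.rand(1, 3, 256, 256)
print(sketch_loss(feats, feats, feats, img, img))   # zero when output equals content/style
```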
Degree: Doctor of Philosophy
Subject: Image processing - Digital techniques
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/250808

 

DC Field | Value | Language
dc.contributor.advisor | Yu, Y | -
dc.contributor.author | Zhang, Wei | -
dc.contributor.author | 张伟 | -
dc.date.accessioned | 2018-01-26T01:59:36Z | -
dc.date.available | 2018-01-26T01:59:36Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Zhang, W. [张伟]. (2017). Deep learning based automatic colorization and color sketch generation. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | -
dc.identifier.uri | http://hdl.handle.net/10722/250808 | -
dc.description.abstract | (same as the Abstract above) | -
dc.language | eng | -
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | -
dc.relation.ispartof | HKU Theses Online (HKUTO) | -
dc.rights | The author retains all proprietary rights, (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject.lcsh | Image processing - Digital techniques | -
dc.title | Deep learning based automatic colorization and color sketch generation | -
dc.type | PG_Thesis | -
dc.description.thesisname | Doctor of Philosophy | -
dc.description.thesislevel | Doctoral | -
dc.description.thesisdiscipline | Computer Science | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.5353/th_991043982880503414 | -
dc.date.hkucongregation | 2017 | -
dc.identifier.mmsid | 991043982880503414 | -
