
postgraduate thesis: Freeform surface modeling through 2D sketching

Title: Freeform surface modeling through 2D sketching
Authors: Li, Changjian (李昌健)
Advisor(s): Wang, WP
Issue Date: 2019
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Li, C. [李昌健]. (2019). Freeform surface modeling through 2D sketching. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Sketch-based modeling is an effective and intuitive approach to creating 3D shapes: the user draws strokes on a 2D plane, and an algorithm interprets the strokes and turns them into a 3D shape. New algorithms for this problem are constantly emerging, and it remains an active and challenging area of computer graphics. In this thesis we present two approaches that model high-quality 3D shapes from 2D user sketches faithfully and efficiently.

We first present a novel approach that takes user-annotated sketches as input and predicts a freeform surface via geometric optimization. To model a surface patch, the user sketches the patch boundary together with a small number of strokes representing the major bending directions of the shape. Our method uses this input to generate a curvature field that conforms to the user strokes, and then derives from this field a freeform surface with the desired curvature pattern. To infer the surface from the strokes, we first disambiguate convex versus concave bending directions and estimate the surface bending magnitude along the strokes. We then construct a curvature field from these estimates, using a non-orthogonal 4-direction field coupled with a scalar magnitude field, and finally construct a surface whose curvature pattern reflects this field through an iterative sequence of simple linear optimizations. We extend this single-view modeling framework to multi-view interaction, combining inputs from multiple views into a coherent 3D shape at interactive speed.

Second, we present a data-driven method for modeling generic freeform 3D surfaces from sparse, expressive 2D sketches by incorporating convolutional neural networks (CNNs) into the sketch-processing workflow. Given input 2D sketches, we use CNNs to infer the depth and normal maps representing the surface. To combat ambiguity, we introduce an intermediate CNN that models the dense curvature-direction (flow) field of the surface, and we produce a confidence map alongside the depth and normal maps. The flow field guides the subsequent surface reconstruction for improved regularity; the confidence map, trained without supervision, measures ambiguity and provides a robust estimator for data fitting. The CNNs are trained on a large dataset generated by rendering sketches of various 3D shapes with a non-photorealistic (NPR) line-rendering method that mimics human sketching of freeform shapes. The model processes both single- and multi-view sketches; in the multi-view framework, users progressively complete the shape by sketching from different views, producing complete, closed shapes.
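The "iterative sequence of simple linear optimizations" that turns a curvature field into a surface can be illustrated, in highly simplified form, with a height-field toy problem: prescribe a target Laplacian (a crude curvature proxy) over a grid and solve the resulting linear system by Jacobi iteration. This is only an illustrative sketch under strong assumptions, not the thesis's actual method, which operates on a non-orthogonal 4-direction curvature field over a general surface; the function name and parameters below are hypothetical.

```python
import numpy as np

def reconstruct_height_field(target_lap, boundary, n_iters=2000):
    """Solve the discrete Poisson equation lap(z) = target_lap by
    Jacobi iteration, with the border of `boundary` held fixed.

    target_lap : (H, W) desired Laplacian (toy stand-in for curvature)
    boundary   : (H, W) array; only its border pins the height field.
    """
    z = boundary.astype(float).copy()
    for _ in range(n_iters):
        z_new = z.copy()
        # Each interior height becomes the neighbour average minus a
        # quarter of the prescribed Laplacian (5-point stencil).
        z_new[1:-1, 1:-1] = 0.25 * (
            z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:]
            - target_lap[1:-1, 1:-1]
        )
        z = z_new
    return z

# A flat (zero) boundary with a constant negative Laplacian inflates
# the interior into a smooth dome-like bump.
H = W = 33
boundary = np.zeros((H, W))
target = np.full((H, W), -0.01)
z = reconstruct_height_field(target, boundary)
```

Each Jacobi sweep is a cheap linear update, and repeating it converges to the solution of one linear system; the thesis's pipeline alternates such linear solves while re-estimating the curvature field, which is what makes the overall optimization a sequence of simple linear steps rather than one nonlinear solve.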
Degree: Doctor of Philosophy
Subject: Three-dimensional modeling
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/279370

 

DC Field | Value | Language
dc.contributor.advisor | Wang, WP | -
dc.contributor.author | Li, Changjian | -
dc.contributor.author | 李昌健 | -
dc.date.accessioned | 2019-10-28T03:02:29Z | -
dc.date.available | 2019-10-28T03:02:29Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Li, C. [李昌健]. (2019). Freeform surface modeling through 2D sketching. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | -
dc.identifier.uri | http://hdl.handle.net/10722/279370 | -
dc.description.abstract | (abstract as above) | -
dc.language | eng | -
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | -
dc.relation.ispartof | HKU Theses Online (HKUTO) | -
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject.lcsh | Three-dimensional modeling | -
dc.title | Freeform surface modeling through 2D sketching | -
dc.type | PG_Thesis | -
dc.description.thesisname | Doctor of Philosophy | -
dc.description.thesislevel | Doctoral | -
dc.description.thesisdiscipline | Computer Science | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.5353/th_991044158789703414 | -
dc.date.hkucongregation | 2019 | -
dc.identifier.mmsid | 991044158789703414 | -
