
Enhancing the Creative Process in Digital Prototyping

Speaker: Zeyu Wang, Hong Kong University of Science and Technology
Time: 2023-04-13, 19:00-20:00
Venue: C19-2, or Tencent Meeting 398-077-2882 (password: 095015)

Abstract:

Despite advances in computer-aided design (CAD) systems and video editing software, digital content creation for design, storytelling, and interactive experiences remains a challenging problem. This talk introduces a series of studies, techniques, and systems along three thrusts that engage creators more directly and enhance the user experience in authoring digital content.

First, we present a drawing dataset and spatiotemporal analysis that provide insight into how people draw by comparing tracing, freehand drawing, and computer-generated approximations. We found a high degree of similarity in stroke placement and types of strokes used over time, which informs methods for customized stroke treatment and emulating drawing processes. We also propose a deep learning-based technique for line drawing synthesis from animated 3D models, where our learned style space and optimization-based embedding enable the generation of line drawing animations while allowing interactive user control across frames.

Second, we demonstrate the importance of utilizing spatial context in the creative process in augmented reality (AR) through two tablet-based interfaces. DistanciAR enables designers to create site-specific AR experiences for remote environments using LiDAR capture and new authoring modes, such as Dollhouse and Peek. PointShopAR integrates point cloud capture and editing in a single AR workflow to help users quickly prototype design ideas in their spatial context. Our user studies show that LiDAR capture and the point cloud representation in these systems can make rapid AR prototyping more accessible and versatile.

Last, we introduce two procedural methods to generate time-based media for visual communication and storytelling. AniCode supports authoring and on-the-fly consumption of personalized animations in a network-free environment via a printed code. CHER-Ob generates video flythroughs for storytelling from annotated heterogeneous 2D and 3D data for cultural heritage. Our user studies show that these methods can benefit the video-oriented digital prototyping experience and facilitate the dissemination of creative and cultural ideas.

Short Bio:

Zeyu Wang is an Assistant Professor of Computational Media and Arts (CMA) in the Information Hub at the Hong Kong University of Science and Technology (Guangzhou) and an Affiliate Assistant Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He received a PhD from the Department of Computer Science at Yale University, advised by Profs. Julie Dorsey and Holly Rushmeier, and a BS from the School of Artificial Intelligence at Peking University, advised by Profs. Hongbin Zha and Katsushi Ikeuchi. He has been a research intern at Adobe, Google, and Microsoft.

He leads the Creative Intelligence and Synergy (CIS) Lab at HKUST(GZ) to study the intersection of Computer Graphics, Human-Computer Interaction, and Artificial Intelligence, with a focus on algorithms and systems for digital content creation. His current research topics include sketching, VR/AR/XR, generative techniques, and multimodal creativity, with applications in art, design, perception, and cultural heritage. His work has been recognized by an Adobe Research Fellowship, a Franke Interdisciplinary Research Fellowship, a Best Paper Award, and a Best Demo Honorable Mention Award. Personal Website: https://zachzeyuwang.github.io/