Workshop
AI for Creative Visual Content Generation, Editing and Understanding
Ozgur Kara · Fabian Caba Heilbron · Anyi Rao · Victor Escorcia · Ruihan Zhang · Mia Tang · Dong Liu · Maneesh Agrawala · James Rehg
Thu 12 Jun 11 a.m. PDT — 4 p.m. PDT
The start and end times above are correct, but the detailed schedule may be incomplete or out of date. Please cross-reference the workshop's website, if available, to verify schedule details. (Added by CVPR.)
Visual content creation is booming, yet producing engaging visual content remains a challenging task. This workshop aims to highlight machine learning technologies that accelerate and enhance creative processes in visual content creation and editing, including image animation, text-to-visual content generation, and content translation. Moreover, we believe that advancing technology to better understand edited visual content can enable novel platforms for creating compelling media. We seek to bridge the gap between technical and creative communities by bringing together researchers from computer vision, graphics, and the arts, fostering interdisciplinary collaboration and exploring opportunities in this under-explored area.
Schedule
Geometry-Aware Texture Generation for 3D Head Modeling with Artist-driven Control (Paper)
Amin Fadaeinejad · Abdallah Dib · Luiz Gustavo Hafemann · Emeline Got · Trevor Anderson · Amaury Depierre · Nikolaus F. Troje · Marcus A. Brubaker · Marc-Andre Carbonneau

Temporal Consistent Semantic Video Color Transfer from Multiple References (Paper)
Aupendu Kar · Guan-Ming Su

Semantic-Aware Local Image Editing with a Single Mask Operation (Paper)
Dongchao Wen · Zijian Chen · Weihong Deng · Yujiang Tian · Hongzhi Shi · Yingjie Zhang · Xingchen Cui · Jian Zhao · Lingyan Liang · Mei Wang

OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion Models (Paper)
Mathis Koroglu · Hugo Caselles-Dupré · Guillaume Jeanneret · Matthieu Cord

Training-free Color-Style Disentanglement for Constrained Text-to-Image Synthesis (Paper)
Aishwarya Agarwal · Srikrishna Karanam · Balaji Vasan Srinivasan

CLIPDraw++: Text-to-Sketch Synthesis with Simple Primitives (Paper)
Nityanand Mathur · Shyam Marjit · Abhra Chaudhuri · Anjan Dutta

Z-SASLM: Zero-Shot Style-Aligned SLI Blending Latent Manipulation (Paper)
Alessio Borgi · Luca Maiano · Irene Amerini

Defurnishing with X-Ray Vision: Joint Removal of Furniture from Panoramas and Mesh (Paper)
Alan Dolhasz · Chen Ma · Dave Gausebeck · Kevin Chen · Gregor Miller · Lucas Hayne · Gunnar Hovden · Azwad Sabik · Olaf Brandt · Mira Slavcheva

LAPIS: A novel dataset for personalized image aesthetic assessment (Paper)
Anne-Sofie Maerten · Li-Wei Chen · Stefanie De Winter · Christophe Bossens · Johan Wagemans

WaveDIF: Wavelet sub-band based Deepfake Identification in Frequency Domain (Paper)
Anurag Dutta · Arnab Kumar Das · Ruchira Naskar · Rajat Subhra Chakraborty

Progressive Autoregressive Video Diffusion Models (Paper)
Desai Xie · Zhan Xu · Yicong Hong · Hao Tan · Difan Liu · Feng Liu · Arie E. Kaufman · Yang Zhou

STAM: Zero-Shot Style Transfer using Diffusion Model via Attention Modulation (Paper)
Masud Fahim · Nazmus Saqib · Jani Boutellier

PartStickers: Generating Parts of Objects for Rapid Prototyping (Paper)
Mo Zhou · Josh Myers-Dean · Danna Gurari

HopNet: Harmonizing Object Placement Network for Realistic Image Generation via Object Composition (Paper)
Matthew Poska · Sharon X. Huang · Bin Hwang

T3V2V: Test Time Training for Domain Adaptation in Video-to-Video Editing (Paper)
Zezhou Wang · Jing Tang