About
I’m a research scientist at NXN Labs, where I develop large-scale generative foundation models for fashion imagery. My research interests lie in image/video generation and 3D computer vision, with the broader goal of modeling and interacting with the visual world. Prior to NXN Labs, I was at Innerverz AI, focusing on video diffusion models. I hold a PhD (as well as a B.S. and M.S.) from Korea University, where I was advised by Prof. Hanseok Ko.


[Google Scholar] [Github] [CV]

News


[Jul. 2025] I will start a postdoctoral position at the University of British Columbia (UBC), working with Prof. Kwang Moo Yi.
[Jun. 2025] I’ve been recognized as an Outstanding Reviewer at CVPR 2025 (711 of 12,593 reviewers).
[May 2025] One paper has been accepted to CVPRW 2025.
[Dec. 2024] I’ve joined NXN Labs as an AI researcher, focusing on visual generative/editing models.
[Jul. 2024] I will be giving a talk at Twelve Labs.
[Apr. 2024] Our paper has been selected as one of Highlight Papers at CVPR 2024 (Top 10%).
[Feb. 2024] One paper has been accepted to CVPR 2024.
[Jan. 2024] One paper has been accepted to ICASSP 2024.
[Jan. 2024] I’ve joined Innerverz AI as an AI/ML researcher, focusing on video diffusion models.
[Dec. 2023] I’ve successfully defended my thesis, “Towards Controllable and Interpretable Generative Neural Rendering”.
[Dec. 2023] I completed a 6-month visit to the University of British Columbia (UBC) as a visiting PhD student in the Computer Vision Lab, supervised by Prof. Kwang Moo Yi.

Selected Publications

| ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models
Jeong-gi Kwak*, Erqun Dong*, Yuhe Jin, Hanseok Ko, Shweta Mahajan, Kwang Moo Yi (* equal contribution)
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Highlight (Top 10%)
[paper] [code] [project page]

| Towards Multi-domain Face Landmark Detection with Synthetic Data from Diffusion Model
Yuanming Li, Gwantae Kim, Jeong-gi Kwak, Bonhwa Ku, Hanseok Ko
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024
[paper]

| Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis
Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han, Hanseok Ko
European Conference on Computer Vision (ECCV), 2022
[paper] [code] [project page]
(Dec. 2022) 2022 ETNews ICT Paper Awards, sponsored by MSIT Korea

| DIFAI: Diverse Facial Inpainting using StyleGAN Inversion
Dongsik Yoon, Jeong-gi Kwak, Yuanming Li, David Han, Hanseok Ko
IEEE International Conference on Image Processing (ICIP), 2022
[paper]


| Generate and Edit Your Own Character in a Canonical View
Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, David Han, Hanseok Ko
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop (CVPRW), 2022
[paper] [poster]

| Adverse Weather Image Translation with Asymmetric and Uncertainty-aware GAN
Jeong-gi Kwak, Youngsaeng Jin, Yuanming Li, Dongsik Yoon, Donghyeon Kim, Hanseok Ko
British Machine Vision Conference (BMVC), 2021
[paper] [code]

| Reference Guided Image Inpainting using Facial Attributes
Dongsik Yoon, Jeong-gi Kwak, Yuanming Li, David Han, Youngsaeng Jin, Hanseok Ko
British Machine Vision Conference (BMVC), 2021
[paper] [code]

| CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature
Jeong-gi Kwak, David K. Han, Hanseok Ko
European Conference on Computer Vision (ECCV), 2020
[paper]