Reconstruction witH dIffereNtial geOmetry (RHINO)
ANR JCJC (2022-2026)
Goal
3D reconstruction from multiple images has long been a major research goal in computer vision, and it has gained tremendous attention from the research community in recent years. Given a camera attached to a stationary or hand-held device or to a robot, the ability to recover the 3D scene from its video stream, or alternatively from a 3D sensor such as a Kinect, a time-of-flight camera, a stereo rig or a LIDAR, opens the door to numerous applications. The reconstructed 3D scene can be used to build mixed reality applications, including remote surgery, virtual classrooms, enhanced shopping experiences, assistance in autonomous driving and immersive gaming. A robot capable of autonomously navigating and interacting within human spaces also needs to perceive the 3D world. Currently, the development of such applications is limited to rigid environments, as the state of the art in 3D reconstruction cannot reliably handle deformable objects. Our world, however, is dynamic: it consists of both rigid and deformable objects, whose reconstruction from cameras and 3D sensors must be made as accurate and stable as possible. The goal of RHINO is to bridge the performance gap between deformable and rigid 3D reconstruction from cameras and 3D sensors, so that the scientific community can move forward with developing mixed reality and robotics applications that work in real-life scenarios.
Our team
Thuy Tran. PhD student.
Ruochen Chen. PhD student.
Liming Chen. Collaborator.
Shaifali Parashar. Principal Investigator.
Publications
Patch-based Representation and Learning for Efficient Deformation Modeling
Ruochen Chen, Thuy Tran, Shaifali Parashar
3DV, 2026 (Oral, award candidate)
Code
FNOPT: Resolution-Agnostic, Self-Supervised Cloth Simulation using Meta-Optimization with Fourier Neural Operators
Ruochen Chen, Thuy Tran, Shaifali Parashar
WACV, 2026
Code
Image-Guided Shape-from-Template Using Mesh Inextensibility Constraints
Thuy Tran, Ruochen Chen, Shaifali Parashar
ICCV, 2025
Code
GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping
Ruochen Chen, Liming Chen, Shaifali Parashar
3DV, 2024 (Best student paper award)
Code / Video