Pose estimation and body tracking using an artificial neural network
C. Öztireli, P. Chandran, M. Gross
Patent Office: US, Patent Number: 10970849
P. Chandran, T. Beeler, D. Bradley,
Patent Office: US, Patent Number: 11257276
P. Chandran, T. Beeler, D. Bradley,
Patent Office: US, Patent Number: 11276231
P. Chandran, D. Bradley, P. Gotardo, G. Zoss
Patent Office: US, Application Number: 17223577
D. Bradley, P. Chandran, P. Gotardo, J Riviere, S. Winberg, G. Zoss
Patent Office: US, Application Number: 17536777
G Zoss, D E Bradley, P Chandran, P F Gotardo, E Sifakis
Patent Office: US, Application Number: 17536777
S Winberg, DE Bradley, P Chandran, PFU Gotardo, G Zoss
Patent Office: US, Application Number: 18102480
DE Bradley, P Chandran, PFU Gotardo, C Otto, A Serifi, G Zoss
Patent Office: US, Application Number: 17669053
PFU Gotardo, DE Bradley, G Zoss, J Riviere, P Chandran, Xu Yingyan
Patent Office: US, Application Number: 18081593
PFU Gotardo, DE Bradley, G Zoss, J Riviere, P Chandran, Xu Yingyan
Patent Office: US, Application Number: 18081597
DE Bradley, P Chandran, PFU Gotardo, G Zoss
Patent Office: US, Application Number: 17526608
P Chandran, LF Ciccone, DE Bradley
Patent Office: US, Application Number: 17586449
DE Bradley, P Chandran, PFU Gotardo, C Otto, G Zoss
Patent Office: US, Application Number: 18599021
DE Bradley, P Chandran, ED Sifakis, B Solenthaler, PFU Gotardo, L Yang, G Zoss
Patent Office: US, Application Number: 18421429
DE Bradley, P Chandran, PFU Gotardo, Xu Yingyan, G Zoss
Patent Office: US, Application Number: 18505009
G Zoss, P Chandran
Patent Office: US, Application Number: 18780259
G Zoss, P Chandran
Patent Office: US, Application Number: 18780264
DE Bradley, P Chandran, G Zoss
Patent Office: US, Application Number: 18906555
DE Bradley, SK Weiss, P Chandran, G Zoss, JR Stanhope
Patent Office: US, Application Number: 18906639
DE Bradley, P Chandran, PFU Gotardo, SK Weiss, G Zoss
Patent Office: US, Application Number: 18628602
DE Bradley, P Chandran, SK Weiss, G Zoss
Patent Office: US, Application Number: 18628611
DE Bradley, P Chandran, G Zoss
Patent Office: US, Application Number: 18906545
P Chandran, G Zoss, DE Bradley, JE Klintberg, PFU Gotardo
Patent Office: US, Application Number: 18786337
DE Bradley, P Chandran, S Foti, PFU Gotardo, G Zoss
Patent Office: US, Application Number: 17697774
DE Bradley, P Chandran, PFU Gotardo, G Zoss
Patent Office: US, Application Number: 17526647
DE Bradley, P Chandran, G Zoss
Patent Office: US, Application Number: 18906575
DE Bradley, P Chandran, PFU Gotardo, G Zoss
Patent Office: US, Application Number: 17675713
Published in 2011 World Congress on Information and Communication Technologies, 2011
This paper proposes a method to segment exudates and lesions in retinal fundus images and classify them using a selective brightness feature. [Paper]
Published in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014
Road segmentation and tracking is of prime importance in Advanced Driver Assistance Systems (ADAS) to either assist autonomous navigation or provide useful information to drivers operating semi-autonomous vehicles. The work reported herein describes a novel algorithm based on particle filters for segmenting and tracking the edges of roads in real-world scenarios. [Paper]
Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
Facial landmark detection is a fundamental task for many consumer and high-end applications and is almost entirely solved by machine learning methods today.
Published in International Conference on 3D Vision (3DV), 2020
We present a method for nonlinear 3D face modeling using neural architectures.
Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
We propose Adaptive Convolutions, a generic extension of AdaIN that allows for the simultaneous transfer of both statistical and structural styles in real time.
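For readers unfamiliar with the baseline being extended: AdaIN aligns the channel-wise mean and standard deviation of content features with those of a style feature map. The sketch below shows only this standard AdaIN operation in PyTorch, assuming NCHW feature tensors; it is not the proposed adaptive convolution operator.

```python
# Minimal sketch of standard AdaIN (the operation the paper extends), assuming
# NCHW feature tensors. This is NOT the proposed Adaptive Convolutions method.
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Match the channel-wise mean/std of `content` to those of `style`."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# Example: restyle content features with the statistics of a style feature map.
stylized = adain(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```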
Published in ACM SIGGRAPH Asia, 2021
We propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention.
Published in Eurographics, 2022
We present a new nonlinear parametric 3D shape model based on transformer architectures.
Published in Eurographics, 2022
We compare the results obtained with a state-of-the-art appearance capture method, with and without our proposed improvements to the lighting model.
Published in SIGGRAPH, 2022
We present a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone.
Published in SIGGRAPH, 2022
We demonstrate the proposed capture pipeline on a variety of different facial hair styles and lengths, ranging from sparse and short to dense full-beards.
Published in SIGGRAPH, 2022
We demonstrate how MoRF is a strong new step towards 3D morphable neural head modeling.
Published in ACM/Eurographics Symposium on Computer Animation, 2022
We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance.
Published in Pacific Graphics, 2022
We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders.
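The entry above summarizes the architecture in one sentence; the following is a minimal, hypothetical PyTorch sketch of that structure (one shared encoder, one decoder per identity). Layer sizes, class names, and the two-identity setup are illustrative assumptions, not the paper's actual network or training code.

```python
# Hypothetical sketch of a shared-encoder, per-identity-decoder autoencoder for
# face swapping. All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Sequential):
    def __init__(self, c_in, c_out, stride):
        super().__init__(nn.Conv2d(c_in, c_out, 3, stride, 1), nn.LeakyReLU(0.2))

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(ConvBlock(3, 32, 2), ConvBlock(32, 64, 2), ConvBlock(64, 128, 2))
    def forward(self, x):
        return self.net(x)

class IdentityDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), ConvBlock(128, 64, 1),
            nn.Upsample(scale_factor=2), ConvBlock(64, 32, 1),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class FaceSwapAutoencoder(nn.Module):
    """One shared encoder; one decoder per identity ('source', 'target')."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.decoders = nn.ModuleDict({"source": IdentityDecoder(), "target": IdentityDecoder()})
    def forward(self, image, identity):
        return self.decoders[identity](self.encoder(image))

# Training reconstructs each identity with its own decoder; swapping at test time
# simply routes a source image through the target decoder.
model = FaceSwapAutoencoder()
swapped = model(torch.rand(1, 3, 128, 128), identity="target")
```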
Published in SIGGRAPH Asia, 2022
We demonstrate how the simple U-Net, surprisingly, allows us to advance the state of the art for re-aging real faces on video, with unprecedented temporal stability and preservation of facial identity across variable expressions, viewpoints, and lighting conditions.
Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023
We propose the first facial landmark detection network that can predict continuous, unlimited landmarks, allowing the number and location of the desired landmarks to be specified at inference time.
Published in Eurographics Symposium on Geometry Processing, 2023
We present a novel graph-based simulation approach for generating micro-wrinkle geometry on human skin, which can easily scale up to the micrometer range and millions of wrinkles.
Published in International Conference on Computer Vision (ICCV), 2023
In this paper, we target the application scenario of capturing high-fidelity assets for neural relighting in controlled studio conditions, but without requiring a dense light stage. Instead, we leverage a small number of area lights commonly used in photogrammetry.
Published in Pacific Graphics, 2023
In this work, we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image. It is widely known that shading provides a strong indicator for 3D shape in the human visual system.
Published in SIGGRAPH Asia, 2023
We propose a new face model based on a data-driven implicit neural physics model that can be driven by both expression and style separately. At the core, we present a framework for learning implicit physics-based actuations for multiple subjects simultaneously, trained on a few arbitrary performance capture sequences from a small set of identities.
Published in Eurographics, 2024
In this work we aim to overcome the gap between synthetic simulation and real skin scanning, by proposing a method that can be applied to large skin regions (e.g. an entire face) with the controllability of simulation and the organic look of real micro details.
Published in Eurographics, 2024
We present a new method to animate the dynamic motion of skin micro wrinkles under facial expression deformation.
Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024
In this work, we present a novel use case for such implicit representations in the context of learning anatomically constrained face models.
Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024
In this work, we simultaneously tackle both the motion and illumination problems, proposing a new method for relightable and animatable neural heads.
Published in Computer Graphics Forum, 2024
In this work, we examine three important issues in the practical use of state-of-the-art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability.
Published in SIGGRAPH, 2024
In this work, we aim to make physics-based facial animation more accessible by proposing a generalized physical face model that we learn from a large 3D face dataset. Once trained, our model can be quickly fit to any unseen identity and produce a ready-to-animate physical face model automatically.
Published in European Conference on Computer Vision (ECCV), 2024
We introduce Spline-based Transformers, a new class of transformer models that do not require position encoding.
Published in Eurographics, 2025
We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments.
Published in SIGGRAPH, 2025
In this work, we propose to couple locally-defined facial expressions with 3D Gaussian splatting to enable creating ultra-high fidelity, expressive and photorealistic head avatars.
Published in International Conference on Computer Vision (ICCV), 2025
In this work, we present a new method for reconstructing the appearance properties of human faces from a lightweight capture procedure in an unconstrained environment.
Published in Workshop on Human-Interactive Generation and Editing, 2025
In this work, we propose to jointly learn the visual appearance and depth of faces in a diffusion-based portrait image generator. Our method embraces the end-to-end diffusion paradigm and introduces a new architecture suitable for learning this joint distribution, consisting of a reference network for the target identity and a channel-expanded diffusion backbone.
Published in Shape Modeling International, 2025
In this work, we present a new method for multimodal conditional 3D face geometry generation that allows user-friendly control over the output identity and expression via a number of different conditioning signals.
Course, SIGGRAPH Asia 2023, Sydney, 2023
This course goes over the history of face models used in computer animation. The course covers a wide variety of models, starting from linear blendshape models that provide intuitive artist control to more recent and powerful nonlinear neural shape models. [Link to course material]
Course, Eurographics 2024, Limassol, 2024
This course is a revised extension of the course I presented at SIGGRAPH Asia 2023, with added material on physics-based facial animation from Dr. Lingchen Yang.
[Link to course material]