Publications

Spline-Based Transformers

Published in European Conference on Computer Vision (ECCV), 2024

We introduce Spline-based Transformers, a new class of transformer models that do not require position encoding. [Project Page]


Cite

 @InProceedings{ChandranSerifi2024,
author={Chandran, Prashanth and Serifi, Agon and Gross, Markus and B{\"a}cher, Moritz},
editor={Leonardis, Ale{\v{s}} and Ricci, Elisa and Roth, Stefan and Russakovsky, Olga and Sattler, Torsten and Varol, G{\"u}l},
title={Spline-Based Transformers},
booktitle={Computer Vision -- ECCV 2024},
year={2025},
publisher={Springer Nature Switzerland},
pages={1--17},
isbn={978-3-031-73016-0}
}

Learning a Generalized Physical Face Model From Data

Published in SIGGRAPH, 2024

In this work, we aim to make physics-based facial animation more accessible by proposing a generalized physical face model that we learn from a large 3D face dataset. Once trained, our model can be quickly fit to any unseen identity and produce a ready-to-animate physical face model automatically. [Project Page]

Cite

 @article{Yang2024,
author = {Yang, Lingchen and Zoss, Gaspard and Chandran, Prashanth and Gross, Markus and Solenthaler, Barbara and Sifakis, Eftychios and Bradley, Derek},
title = {Learning a Generalized Physical Face Model From Data},
year = {2024},
issue_date = {July 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {43},
number = {4},
issn = {0730-0301},
doi = {10.1145/3658189},
month = jul,
articleno = {94},
numpages = {14},
journal = {ACM Trans. Graph.}
}

Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection

Published in Computer Graphics Forum, 2024

In this work, we examine three important issues in the practical use of state-of-the-art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. [Project Page]

Cite

 @article{Chandran2024b,
author = {Chandran, P. and Zoss, G. and Gotardo, P. and Bradley, D.},
title = {Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection},
journal = {Computer Graphics Forum},
volume = {43},
number = {6},
pages = {e15126},
doi = {10.1111/cgf.15126},
year = {2024}
}

Artist-Friendly Relightable and Animatable Neural Heads

Published in Computer Vision and Pattern Recognition (CVPR), 2024

In this work, we simultaneously tackle both the motion and illumination problem, proposing a new method for relightable and animatable neural heads. [Project Page]


Cite

 @INPROCEEDINGS{Xu2024,
author = {Xu, Yingyan and Chandran, Prashanth and Weiss, Sebastian and Gross, Markus and Zoss, Gaspard and Bradley, Derek},
booktitle = {2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
title = {{Artist-Friendly Relightable and Animatable Neural Heads}},
year = {2024},
pages = {2457-2467},
doi = {10.1109/CVPR52733.2024.00238},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = jun
}

Anatomically Constrained Implicit Face Models

Published in Computer Vision and Pattern Recognition (CVPR), 2024

In this work, we present a novel use case for such implicit representations in the context of learning anatomically constrained face models. [Project Page]


Cite

 @INPROCEEDINGS{Chandran2024a,
author = {Chandran, Prashanth and Zoss, Gaspard},
booktitle = {2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
title = {{Anatomically Constrained Implicit Face Models}},
year = {2024},
pages = {2220-2229},
doi = {10.1109/CVPR52733.2024.00216},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = jun
}

Fast Dynamic Facial Wrinkles

Published in Eurographics, 2024

We present a new method to animate the dynamic motion of skin micro wrinkles under facial expression deformation. [Project Page]


Cite

 @inproceedings{Weiss2024A,
booktitle = {Eurographics 2024 - Short Papers},
editor = {Hu, Ruizhen and Charalambous, Panayiotis},
title = {{Fast Dynamic Facial Wrinkles}},
author = {Weiss, Sebastian and Chandran, Prashanth and Zoss, Gaspard and Bradley, Derek},
year = {2024},
publisher = {The Eurographics Association},
ISSN = {1017-4656},
ISBN = {978-3-03868-237-0},
DOI = {10.2312/egs.20241016}
}

Stylize My Wrinkles: Bridging the Gap from Simulation to Reality

Published in Eurographics, 2024

In this work, we aim to overcome the gap between synthetic simulation and real skin scanning by proposing a method that can be applied to large skin regions (e.g., an entire face) with the controllability of simulation and the organic look of real micro details. [Project Page]

Cite

 @article{Weiss2024B,
author = {Weiss, S. and Stanhope, J. and Chandran, P. and Zoss, G. and Bradley, D.},
title = {Stylize My Wrinkles: Bridging the Gap from Simulation to Reality},
journal = {Computer Graphics Forum},
volume = {43},
number = {2},
pages = {e15048},
doi = {10.1111/cgf.15048},
year = {2024}
}

An Implicit Physical Face Model Driven by Expression and Style

Published in SIGGRAPH Asia, 2023

We propose a new face model based on a data-driven implicit neural physics model that can be driven by both expression and style separately. At the core, we present a framework for learning implicit physics-based actuations for multiple subjects simultaneously, trained on a few arbitrary performance capture sequences from a small set of identities. [Project Page]

Cite

 @inproceedings{Yang2023,
author = {Yang, Lingchen and Zoss, Gaspard and Chandran, Prashanth and Gotardo, Paulo and Gross, Markus and Solenthaler, Barbara and Sifakis, Eftychios and Bradley, Derek},
title = {An Implicit Physical Face Model Driven by Expression and Style},
year = {2023},
isbn = {9798400703157},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3610548.3618156},
booktitle = {SIGGRAPH Asia 2023 Conference Papers},
articleno = {106},
numpages = {12},
location = {Sydney, NSW, Australia},
series = {SA '23}
}

A Perceptual Shape Loss for Monocular 3D Face Reconstruction

Published in Pacific Graphics, 2023

In this work, we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image. It is widely known that shading provides a strong indicator for 3D shape in the human visual system. [Project Page]

Cite

 @article{Otto2023,
journal = {Computer Graphics Forum},
title = {{A Perceptual Shape Loss for Monocular 3D Face Reconstruction}},
author = {Otto, Christopher and Chandran, Prashanth and Zoss, Gaspard and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
year = {2023},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14945}
}

ReNeRF: Relightable Neural Radiance Fields with Nearfield Lighting

Published in International Conference on Computer Vision (ICCV), 2023

In this paper, we target the application scenario of capturing high-fidelity assets for neural relighting in controlled studio conditions, but without requiring a dense light stage. Instead, we leverage a small number of area lights commonly used in photogrammetry. [Project Page]

Cite

 @InProceedings{Xu_2023_ICCV,
author = {Xu, Yingyan and Zoss, Gaspard and Chandran, Prashanth and Gross, Markus and Bradley, Derek and Gotardo, Paulo},
title = {ReNeRF: Relightable Neural Radiance Fields with Nearfield Lighting},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {22581-22591}
}

Graph-Based Synthesis for Skin Micro Wrinkles

Published in Eurographics Symposium on Geometry Processing, 2023

We present a novel graph-based simulation approach for generating micro wrinkle geometry on human skin, which can easily scale to the micrometer range and millions of wrinkles. [Project Page]

Cite

 @article{Weiss2023,
author = {Weiss, Sebastian and Moulin, Jonathan and Chandran, Prashanth and Zoss, Gaspard and Gotardo, Paulo and Bradley, Derek},
title = {Graph-Based Synthesis for Skin Micro Wrinkles},
journal={Computer Graphics Forum (Symposium on Geometry Processing)},
year = {2023},
month = {6},
}

Continuous Landmark Detection With 3D Queries

Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023

We propose the first facial landmark detection network that can predict continuous, unlimited landmarks, allowing the number and location of the desired landmarks to be specified at inference time. [Project Page]

Cite

 @InProceedings{Chandran_2023_CVPR,
author = {Chandran, Prashanth and Zoss, Gaspard and Gotardo, Paulo and Bradley, Derek},
title = {Continuous Landmark Detection With 3D Queries},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {16858-16867}
}

Production-Ready Face Re-Aging for Visual Effects

Published in SIGGRAPH Asia, 2022

We demonstrate how the simple U-Net, surprisingly, allows us to advance the state of the art for re-aging real faces on video, with unprecedented temporal stability and preservation of facial identity across variable expressions, viewpoints, and lighting conditions. [Project Page]

Cite

 @article{Zoss_2022,
author = {Zoss, Gaspard and Chandran, Prashanth and Sifakis, Eftychios and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
title = {Production-Ready Face Re-Aging for Visual Effects},
year = {2022},
issue_date = {December 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {41},
number = {6},
issn = {0730-0301},
doi = {10.1145/3550454.3555520},
journal = {ACM Trans. Graph.},
month = {nov},
articleno = {237},
numpages = {12},
keywords = {facial re-aging, image and video editing}
}

Learning Dynamic 3D Geometry and Texture for Video Face Swapping

Published in Pacific Graphics, 2022

We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders. [Project Page]

Cite

 @article{Ott22a,
journal = {Computer Graphics Forum},
title = {Learning Dynamic 3D Geometry and Texture for Video Face Swapping},
author = {Otto, Christopher and Naruniec, Jacek and Helminger, Leonhard and Etterlin, Thomas and Mignone, Graziana and Chandran, Prashanth and Zoss, Gaspard and Schroers, Christopher and Gross, Markus and Gotardo, Paulo and Bradley, Derek and Weber, Romann},
year = {2022},
pages={611-622},
month={Oct},
number={7},
volume={41},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14705}
}

Facial Animation with Disentangled Identity and Motion using Transformers

Published in ACM/Eurographics Symposium on Computer Animation, 2022

We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance. [Project Page]

Cite

 @article{Chandran2022a,
author = {Chandran, Prashanth and Zoss, Gaspard and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
title = {Facial Animation with Disentangled Identity and Motion using Transformers},
journal = {Computer Graphics Forum},
volume = {41},
number = {8},
pages = {267-277},
doi = {https://doi.org/10.1111/cgf.14641},
year = {2022}
}

MoRF: Morphable Radiance Fields for Multiview Neural Head Modeling

Published in SIGGRAPH, 2022

We demonstrate how MoRF is a strong new step towards 3D morphable neural head modeling. [Project Page]


Cite

 @article{Morf_2022,
author = {Wang, Daoye and Chandran, Prashanth and Zoss, Gaspard and Bradley, Derek and Gotardo, Paulo},
title = {MoRF: Morphable Radiance Fields for Multiview Neural Head Modeling},
year = {2022},
issue_date = {July 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
journal = {ACM Trans. Graph.},
month = {jul},
numpages = {9},
}

Facial Hair Tracking for High Fidelity Performance Capture

Published in SIGGRAPH, 2022

We demonstrate the proposed capture pipeline on a variety of different facial hair styles and lengths, ranging from sparse and short to dense full-beards. [Project Page]


Cite

 @article{10.1145/3528223.3530116,
author = {Winberg, Sebastian and Zoss, Gaspard and Chandran, Prashanth and Gotardo, Paulo and Bradley, Derek},
title = {Facial Hair Tracking for High Fidelity Performance Capture},
year = {2022},
issue_date = {July 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {41},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3528223.3530116},
doi = {10.1145/3528223.3530116},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {165},
numpages = {12},
keywords = {hair tracking, facial hair capture, performance capture}
}

Local Anatomically-Constrained Facial Performance Retargeting

Published in SIGGRAPH, 2022

We present a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone. [Project Page]


Cite

 @article{10.1145/3528223.3530114,
author = {Chandran, Prashanth and Ciccone, Lo{\"i}c and Gross, Markus and Bradley, Derek},
title = {Local Anatomically-Constrained Facial Performance Retargeting},
year = {2022},
issue_date = {July 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {41},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3528223.3530114},
doi = {10.1145/3528223.3530114},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {168},
numpages = {14},
keywords = {facial animation, expression transfer, facial performance retargeting}
}

Improved Lighting Models for Facial Appearance Capture

Published in Eurographics, 2022

We compare the results obtained with a state-of-the-art appearance capture method, with and without our proposed improvements to the lighting model. [Project Page]


Cite

 @inproceedings {10.2312:egs.20221019,
booktitle = {Eurographics 2022 - Short Papers},
editor = {Pelechano, Nuria and Vanderhaeghe, David},
title = {{Improved Lighting Models for Facial Appearance Capture}},
author = {Xu, Yingyan and Riviere, Jérémy and Zoss, Gaspard and Chandran, Prashanth and
Bradley, Derek and Gotardo, Paulo},
year = {2022},
publisher = {The Eurographics Association},
ISSN = {1017-4656},
ISBN = {978-3-03868-169-4},
DOI = {10.2312/egs.20221019}
}

Shape Transformers: Topology-Independent 3D Shape Models Using Transformers

Published in Eurographics, 2022

We present a new nonlinear parametric 3D shape model based on transformer architectures. [Project Page]

Cite

 @article{Chandran2022b,
author = {Chandran, Prashanth and Zoss, Gaspard and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
title = {Shape Transformers: Topology-Independent 3D Shape Models Using Transformers},
journal = {Computer Graphics Forum},
volume = {41},
number = {2},
pages = {195-207},
doi = {https://doi.org/10.1111/cgf.14468},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14468},
year = {2022}
}

Rendering with Style: Combining Traditional and Neural Approaches for High-Quality Face Rendering

Published in SIGGRAPH Asia, 2021

We propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. [Project Page]

Cite

 @article{10.1145/3478513.3480509, 
author = {Chandran, Prashanth and Winberg, Sebastian and Zoss, Gaspard and Riviere, Jeremy and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
title = {Rendering with Style: Combining Traditional and Neural Approaches for High-Quality Face Rendering},
year = {2021},
issue_date = {December 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {6},
issn = {0730-0301},
url = {https://doi.org/10.1145/3478513.3480509},
doi = {10.1145/3478513.3480509},
journal = {ACM Trans. Graph.},
month = {dec},
articleno = {223},
}

Adaptive Convolutions for Structure-Aware Style Transfer

Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

We propose adaptive convolutions, a generic extension of AdaIN that allows for the simultaneous transfer of both statistical and structural styles in real time. [Project Page]


Cite

 @InProceedings{Chandran_2021_CVPR,
author = {Chandran, Prashanth and Zoss, Gaspard and Gotardo, Paulo and Gross, Markus and Bradley, Derek},
title = {Adaptive Convolutions for Structure-Aware Style Transfer},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {7972-7981}
}

Semantic Deep Face Models

Published in International Conference on 3D Vision (3DV), 2020

We present a method for nonlinear 3D face modeling using neural architectures. [Project Page]



Cite

 @INPROCEEDINGS {9320344,
author = {P. Chandran and D. Bradley and M. Gross and T. Beeler},
booktitle = {2020 International Conference on 3D Vision (3DV)},
title = {Semantic Deep Face Models},
year = {2020},
pages = {345-354},
doi = {10.1109/3DV50981.2020.00044},
url = {https://doi.ieeecomputersociety.org/10.1109/3DV50981.2020.00044},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
month = {nov}
}

Attention-Driven Cropping for Very High Resolution Facial Landmark Detection

Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Facial landmark detection is a fundamental task for many consumer and high-end applications and is almost entirely solved by machine learning methods today. [Project Page]

Cite

 @InProceedings{Chandran_2020_CVPR,
author = {Chandran, Prashanth and Bradley, Derek and Gross, Markus and Beeler, Thabo},
title = {Attention-Driven Cropping for Very High Resolution Facial Landmark Detection},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Road tracking using particle filters for Advanced Driver Assistance Systems

Published in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014

Road segmentation and tracking is of prime importance in Advanced Driver Assistance Systems (ADAS) to either assist autonomous navigation or provide useful information to drivers operating semi-autonomous vehicles. The work reported herein describes a novel algorithm based on particle filters for segmenting and tracking the edges of roads in real world scenarios. [Paper]

Cite

 @INPROCEEDINGS{6957884, 
author={Chandran, Prashanth and John, Mala and Santhosh Kumar S and Mithilesh N S R},
booktitle={17th International IEEE Conference on Intelligent Transportation Systems (ITSC)},
title={Road tracking using particle filters for Advanced Driver Assistance Systems},
year={2014},
pages={1408-1414},
doi={10.1109/ITSC.2014.6957884}
}

Segmentation and grading of diabetic retinopathic exudates using error-boost feature selection method

Published in 2011 World Congress on Information and Communication Technologies, 2011

This paper proposes a method to segment the exudates and lesions in retinal fundus images and classify them using a selective brightness feature. [Paper]

Cite

 @INPROCEEDINGS{6141299,  
author={Pradeep Kumar, A.V. and Prashanth, C. and Kavitha, G.},
booktitle={2011 World Congress on Information and Communication Technologies},
title={Segmentation and grading of diabetic retinopathic exudates using error-boost feature selection method},
year={2011},
pages={518-523},
doi={10.1109/WICT.2011.6141299}
}