GRSI Best Paper Award 2021

01 Nov 2021 - Rozenn Dahyot

Our paper, "Model for predicting perception of facial action unit activation using virtual humans", published in the Computers & Graphics journal, won the Graphics Replicability Stamp Initiative (GRSI) Best Paper Award!

The Graphics Replicability Stamp Initiative offers additional recognition to authors who, in addition to publishing their paper, provide a complete open-source implementation.

The award committee noted:

"Model for predicting perception of facial action unit activation using virtual humans" makes an important contribution to the perception of human faces and facial expressions in the context of digital humans. Blendshape facial rigs are used extensively in games for creating facial animations of virtual humans. Storing and manipulating large sets of facial meshes (blendshapes) is costly in terms of memory and computation for gaming applications. Blendshape rigs include a set of semantically meaningful expressions that encode how expressive the face of a virtual human will be. The key contribution of this work is the development of models for predicting the perceptual importance of blendshapes, cross-validated through successful prediction on unseen data. This work provides a strong foundation for the development of a universal perceptual error metric suitable for facial rigs of human faces. The authors' GitHub repository includes data and models in R code, allowing others to investigate a broad range of faces, viewpoints, and facial expressions. This work is important for reducing the computational cost and memory demands for game developers in the creation of expressive digital human characters.

Co-authors Rachel McDonnell (Trinity College Dublin), Katja Zibrek (Inria), Emma Carrigan (Trinity College Dublin), and Rozenn Dahyot (Maynooth University) have made the data and code from their study available on GitHub.