CVPR 2023


Workshop

GAZE 2023: The 5th International Workshop on Gaze Estimation and Prediction in the Wild

Hyung Jin Chang · Xucong Zhang · Shalini De Mello · Thabo Beeler · Seonwook Park · Otmar Hilliges · Aleš Leonardis

West 115

Keywords: Face & gestures

The 5th International Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2023) at CVPR 2023 aims to encourage and highlight novel strategies for eye gaze estimation and prediction with a focus on robustness and accuracy in extended parameter spaces, both spatially and temporally. This is expected to be achieved by applying novel neural network architectures, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multi-modal training. Specifically, the workshop topics include (but are not limited to):

- Reformulating eye detection, gaze estimation, and gaze prediction pipelines with deep networks.
- Incorporating geometric and anatomical constraints into the training of (sparse or dense) deep networks.
- Leveraging additional cues such as context from the face region and head pose information.
- Developing adversarial methods to deal with conditions where current methods fail (illumination, appearance, etc.).
- Exploring attention mechanisms to predict the point of regard.
- Designing new accurate measures to account for rapid eye gaze movement.
- Novel methods for temporal gaze estimation and prediction including Bayesian methods.
- Integrating differentiable components into 3D gaze estimation frameworks.
- Robust estimation from different data modalities such as RGB, depth, head pose, and eye region landmarks.
- Generic gaze estimation methods for handling extreme head poses and gaze directions.
- Using temporal information in eye tracking to provide consistent gaze estimation on the screen.
- Personalization of gaze estimators with few-shot learning.
- Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel approaches towards improved representation learning from eye/face region images or gaze target region images.
