Tutorial

Contactless AI Healthcare using Cameras and Wireless Sensors

Wenjin Wang · Daniel McDuff · Xuyu Wang

Mon 17 Jun 5 a.m. PDT — 8 a.m. PDT

Abstract:

Understanding people and extracting health-related metrics is an emerging research topic in computer vision that has grown rapidly in recent years. Without any physical contact with the human body, cameras have been used to measure vital signs remotely (e.g., heart rate, heart rate variability, respiration rate, blood oxygen saturation, pulse transit time, body temperature) from image sequences of the skin or body, enabling contactless, continuous, and comfortable health monitoring. Cameras also enable the measurement of human behaviors/activities and high-level visual semantic/contextual information using computer vision and machine learning techniques, such as facial expression analysis for pain/discomfort/delirium detection, emotion recognition for depression assessment, body-motion analysis for sleep staging or bed-exit/fall detection, and activity recognition for patient actigraphy. Understanding the environment around people is another unique advantage of cameras over contact bio-sensors (e.g., wearables), as it facilitates a richer understanding of both person and scene for health monitoring.

In addition to camera-based approaches, Radio Frequency (RF) based methods for health monitoring have also been proposed, using radar, WiFi, RFID, and acoustic signals. Radar-based methods mainly use Doppler, UWB, or FMCW radar and can achieve high monitoring accuracy for applications such as sleep staging and posture estimation. Using off-the-shelf WiFi devices, for example RSS and CSI data from commodity WiFi NICs, breathing and heart rates can be monitored for single and multiple persons; WiFi signals can also support other health-related activity recognition based on machine learning algorithms. For acoustic vital-sign monitoring, the speaker and microphone of a smartphone can be used to build a sonar-based sensing system that monitors breathing and heart rates.
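The camera- and RF-based pipelines described above share a common final step: recovering a dominant periodic rate (pulse or breathing) from a one-dimensional motion or color trace. Below is a minimal sketch of that step using a spectral peak search, assuming only NumPy; the function name `estimate_rate_hz`, the synthetic 72 bpm trace, and all parameter values are illustrative assumptions, not material from the tutorial itself:

```python
import numpy as np

def estimate_rate_hz(signal, fs, lo_hz, hi_hz):
    """Estimate the dominant periodic rate in a 1-D trace via an FFT peak.

    Applicable to a camera-derived pulse trace (e.g., the mean green-channel
    value of a skin region per frame) or an RF-derived chest-motion trace.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                              # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # frequency of each bin
    band = (freqs >= lo_hz) & (freqs <= hi_hz)    # keep physiological band
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic example: a 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 s,
# with additive noise standing in for camera/RF measurement noise.
np.random.seed(0)
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)

# 0.7–4 Hz corresponds to roughly 42–240 bpm, a typical heart-rate band.
hr_hz = estimate_rate_hz(trace, fs, lo_hz=0.7, hi_hz=4.0)
print(round(hr_hz * 60))  # → 72 beats per minute
```

For breathing-rate estimation the same function applies with a lower band (e.g., 0.1–0.7 Hz); real systems typically add bandpass filtering and skin/region selection (for cameras) or subcarrier selection (for WiFi CSI) before this spectral step.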
The rapid development of computer vision and RF sensing also gives rise to new multi-modal learning techniques that expand sensing capability by combining the two modalities while minimizing the need for human labels. Such hybrid approaches can also improve monitoring performance, for example by using camera images as a supervisory beacon to guide human-activity learning from RF signals. Contactless camera and RF monitoring will enable a rich set of compelling healthcare applications, termed "AI health monitoring," that directly improve upon contact-based monitoring solutions and improve people's care experience and quality of life. This tutorial on "Contactless AI Healthcare using Cameras and Wireless Sensors" has three parts, covering camera-based monitoring, RF-based monitoring, and hybrid camera-RF health monitoring, to introduce the latest developments in this field thoroughly.
