A Multimodal Wearable and Camera-Based System for Behavior Monitoring and Personalized Intervention in Children with Autism
Project Description
This project builds a multimodal wearable and camera-based system for early screening and personalized intervention in children with Autism Spectrum Disorder (ASD). The project combines children's behavioral characteristics and physiological signal data, leveraging advanced multimodal data fusion, federated learning, and large language model (LLM) technologies to achieve precise early screening and personalized interventions, ultimately providing scientifically grounded and effective support strategies for children with ASD. We will (1) digitize clinical ASD scales into multi-dimensional symptoms, and (2) integrate wearable physiological sensing (heart rate, electrodermal activity, respiration) and camera-based behavioral capture (micro-expressions, actions) to detect these symptoms. We will also validate system performance in real-world deployments with children with ASD.
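As a rough illustration of the multimodal fusion step described above, the sketch below standardizes features from each modality separately, concatenates them, and applies a linear scorer. This is a minimal late-fusion example only: the feature names (gaze-aversion rate, repetitive-motion score) and the weights are hypothetical placeholders; a real system would learn the model (e.g., in PyTorch) from labeled data.

```python
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardize features per column; guard against zero variance."""
    std = x.std(axis=0)
    std[std == 0] = 1.0
    return (x - x.mean(axis=0)) / std

def fuse_and_score(physio: np.ndarray, vision: np.ndarray,
                   weights: np.ndarray, bias: float = 0.0) -> np.ndarray:
    """Late fusion: normalize each modality, concatenate, linear score + sigmoid."""
    fused = np.concatenate([zscore(physio), zscore(vision)], axis=1)
    logits = fused @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))  # per-window score in (0, 1)

# Toy example: 4 time windows, 3 physiological features (HR, EDA, respiration)
# and 2 hypothetical vision-derived features (gaze-aversion, repetitive motion).
physio = np.array([[80, 0.4, 16], [95, 0.9, 20],
                   [78, 0.3, 15], [110, 1.2, 24]], dtype=float)
vision = np.array([[0.2, 0.1], [0.7, 0.8], [0.1, 0.2], [0.9, 0.9]], dtype=float)
weights = np.full(5, 0.5)  # placeholder: one weight per fused feature

scores = fuse_and_score(physio, vision, weights)
```

In practice, per-modality normalization before fusion keeps high-variance sensors (e.g., heart rate in BPM) from dominating low-amplitude ones (e.g., EDA in microsiemens).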
Supervisor
OUYANG, Xiaomin
Quota
2
Course type
UROP1000
UROP1100
UROP2100
UROP3100
UROP3200
UROP4100
Applicant's Roles
1. Develop mobile/wearable apps and APIs for data collection (physiological, IMU, camera), e.g., with a Fitbit watch.
2. Implement baseline multimodal models (physio + vision + scales) and iteratively harden them for real-world use via robustness tests and domain adaptation.
3. Design and execute application scenario surveys (screening, progress tracking, social skills training); support data collection and pilot experiments with clear evaluation metrics.
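A common first step for role 1 above is segmenting continuous wearable streams into fixed-length windows before feature extraction. The sketch below shows this with a toy EDA-like signal; the window length, hop size, and sample values are hypothetical and would depend on the actual device and API (e.g., Fitbit's data endpoints).

```python
from statistics import mean, pstdev

def sliding_windows(samples, window, step):
    """Yield fixed-length windows over a 1-D sample list."""
    for start in range(0, len(samples) - window + 1, step):
        yield samples[start:start + window]

def window_features(samples, window, step):
    """Per-window mean and (population) std, a minimal common feature set."""
    return [(mean(w), pstdev(w)) for w in sliding_windows(samples, window, step)]

# Toy stream: 10 samples, window of 4, hop of 2 -> 4 overlapping windows
stream = [0.3, 0.4, 0.35, 0.5, 0.9, 1.1, 1.0, 0.8, 0.6, 0.5]
feats = window_features(stream, window=4, step=2)
```

Overlapping windows (hop smaller than window length) are a standard choice so that short physiological events near window boundaries are not missed.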
Applicant's Learning Objectives
1. Learn research workflows for mobile/IoT machine learning and clinically grounded evaluation.
2. Gain proficiency in PyTorch and transformer toolkits for multimodal fusion and deployment.
3. Understand open-source LLM and multimodal architectures, and their use for contextual integration and privacy-aware training.
Complexity of the project
Moderate