Key research themes
1. How do multimodal sensor integrations improve context-aware human activity recognition in smart living environments?
This research area investigates fusing heterogeneous sensor modalities, such as video, wearable inertial measurement units (IMUs), and ambient environmental sensors, to enhance the accuracy and robustness of human activity recognition (HAR) in smart living and ambient assisted living (AAL) contexts. It matters because single-modality systems often suffer from privacy concerns, occlusions, or incomplete data, whereas multimodal approaches leverage complementary information, improving resilience and personalization in real-world living environments (a minimal fusion sketch appears after this list).
2. What approaches enhance HAR model generalization across diverse users, sensor placements, and real-life scenarios through advanced machine learning techniques?
This theme focuses on overcoming the variability introduced by user behavior, sensor placement, and environmental conditions. The goal is to develop scalable, robust HAR systems that generalize across subjects and contexts using methods such as layered hidden Markov models (HMMs), recurrent neural networks (RNNs), deep learning architectures, and generative models with improved emission distributions (see the recurrent-classifier sketch after this list).
3. How can semantic structures and contextual information of activity labels enhance human activity recognition models?
This research investigates leveraging semantic relationships among activity label names, along with contextual information, to improve HAR accuracy, especially in limited-data and few-shot scenarios. By modeling label names as sequences with shared substructures (e.g., common verbs or objects) and using language models for label augmentation and embedding, systems can capture inter-activity similarities often overlooked by traditional classification, thus enriching the learned feature-label mappings (see the label-similarity sketch after this list).
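To make theme 1 concrete, here is a minimal sketch of feature-level multimodal fusion in PyTorch: one encoder per modality, with the resulting features concatenated before a shared classifier. All module names, dimensionalities, and the toy inputs are illustrative assumptions, not a specific system from the literature.

```python
import torch
import torch.nn as nn

class MultimodalHAR(nn.Module):
    """Illustrative two-branch fusion model: one encoder per modality,
    features concatenated before a shared classifier (feature-level fusion)."""

    def __init__(self, imu_channels=6, ambient_dim=16, hidden=64, num_classes=8):
        super().__init__()
        # IMU branch: 1-D convolutions over a fixed-length sensor window.
        self.imu_encoder = nn.Sequential(
            nn.Conv1d(imu_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> (batch, hidden, 1)
            nn.Flatten(),
        )
        # Ambient branch: a small MLP over aggregated environmental readings.
        self.ambient_encoder = nn.Sequential(
            nn.Linear(ambient_dim, hidden),
            nn.ReLU(),
        )
        # Shared classifier over the concatenated modality features.
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, imu_window, ambient_features):
        z = torch.cat([self.imu_encoder(imu_window),
                       self.ambient_encoder(ambient_features)], dim=1)
        return self.classifier(z)

# Toy forward pass: batch of 4 windows, 6 IMU channels x 100 samples,
# plus 16 aggregated ambient-sensor features per window.
model = MultimodalHAR()
logits = model(torch.randn(4, 6, 100), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 8])
```

Late fusion, where each modality produces its own prediction and the scores are combined, is the common alternative when a modality may drop out at runtime, which is exactly the occlusion and missing-data robustness this theme targets.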
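For theme 2, the sketch below shows the recurrent route in the same vein: an LSTM over raw sensor windows with a linear classification head. The architecture and dimensions are again illustrative assumptions; a layered HMM or generative variant would replace the network, not the evaluation protocol.

```python
import torch
import torch.nn as nn

class LSTMActivityClassifier(nn.Module):
    """Illustrative recurrent HAR model: an LSTM over raw sensor windows,
    with the final hidden state feeding a linear classifier."""

    def __init__(self, input_dim=6, hidden=128, num_layers=2, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):  # x: (batch, time, input_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # last layer's final hidden state

# Leave-one-subject-out evaluation is the usual way to probe the
# cross-user generalization this theme is about: hold out all windows
# from one subject per fold rather than splitting windows at random.
model = LSTMActivityClassifier()
logits = model(torch.randn(4, 100, 6))  # 4 windows, 100 timesteps, 6 channels
print(logits.shape)                     # torch.Size([4, 8])
```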
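For theme 3, a tiny pure-Python sketch of the shared-substructure idea: token overlap between label names stands in for the language-model embeddings the theme describes, and the labels are made-up examples.

```python
def label_tokens(label):
    """Split an activity label into lowercase tokens, e.g.
    'open fridge' -> {'open', 'fridge'}."""
    return set(label.lower().replace('_', ' ').split())

def label_similarity(a, b):
    """Jaccard overlap between token sets: labels sharing a verb or an
    object (e.g. 'open fridge' vs 'open drawer') score above zero."""
    ta, tb = label_tokens(a), label_tokens(b)
    return len(ta & tb) / len(ta | tb)

labels = ["open fridge", "close fridge", "open drawer", "drink water"]
for a in labels:
    scores = {b: round(label_similarity(a, b), 2) for b in labels if b != a}
    print(a, "->", scores)
# 'open fridge' scores closer to 'close fridge' and 'open drawer' than
# to 'drink water', mirroring the shared verb/object substructure.
```

In practice the similarity matrix would come from embedding the label names with a pretrained language model, and could serve as soft targets during training or for nearest-label matching in few-shot setups.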