Key research themes
1. How can multimodal sensor fusion improve the robustness and accuracy of human activity recognition in real-world and assisted living environments?
This theme investigates integrating heterogeneous data sources, such as vision, inertial measurement units (IMUs), and ambient sensors, to enhance human activity recognition (HAR) performance, especially in complex real-world settings like Ambient Assisted Living (AAL). By combining modalities, systems can compensate for the weaknesses of individual sensors, tolerate occlusions, and deliver the more reliable recognition required for healthcare, elderly monitoring, and smart home applications (see the late-fusion sketch after this list).
2. What machine learning techniques and feature engineering strategies enhance activity classification from wearable and motion data under real-world conditions?
This theme explores supervised learning algorithms, active learning, feature selection, and signal preprocessing techniques tailored to wearable sensor data (IMUs, accelerometers, gyroscopes) for efficient and accurate HAR. The focus includes improving sample efficiency on noisy, high-dimensional data with few labeled trials, personalizing models to individual users, and delivering the real-time classification that mobile and pervasive computing environments demand (see the windowed feature-extraction sketch after this list).
3. How can vision-based HAR systems address challenges such as occlusion, viewpoint variation, and temporal complexity using hierarchical and layered probabilistic models?
This theme focuses on computer vision data combined with probabilistic frameworks such as Hidden Markov Models (HMMs), Dynamic Bayesian Networks (DBNs), and layered hierarchical models, which support robust activity recognition despite occlusion and viewpoint variation. It also covers real-time multimodal fusion for modeling complex temporal sequences in smart office and domestic environments (see the Viterbi decoding sketch after this list).
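To make theme 1 concrete, here is a minimal late-fusion sketch in Python: each modality outputs class probabilities over a shared label set, and a weighted average combines them, skipping any modality that drops out (for example, an occluded camera). The activity labels, weights, and scores are invented placeholders, not values from any surveyed system.

```python
import numpy as np

ACTIVITIES = ["walking", "sitting", "falling"]  # illustrative label set

def late_fusion(modality_scores, modality_weights):
    """Combine per-modality probability vectors, skipping modalities
    that produced no output (e.g., an occluded camera)."""
    fused = np.zeros(len(ACTIVITIES))
    total_weight = 0.0
    for name, scores in modality_scores.items():
        if scores is None:           # modality unavailable this frame
            continue
        w = modality_weights[name]
        fused += w * np.asarray(scores)
        total_weight += w
    if total_weight == 0.0:
        raise ValueError("no modality available")
    return fused / total_weight      # renormalise over available modalities

# Example: the vision stream is occluded, so the decision falls back
# to the IMU and ambient sensors alone.
scores = {
    "vision": None,                  # occluded
    "imu": [0.10, 0.20, 0.70],
    "ambient": [0.20, 0.30, 0.50],
}
weights = {"vision": 0.5, "imu": 0.3, "ambient": 0.2}
fused = late_fusion(scores, weights)
print(ACTIVITIES[int(np.argmax(fused))])  # -> "falling"
```

Late fusion is only one design point: early fusion (concatenating raw or feature-level signals before classification) trades this graceful degradation under sensor dropout for the chance to model cross-modal correlations directly.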
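For theme 2, the sketch below illustrates a typical wearable-HAR pipeline: sliding windows of tri-axial accelerometer data are reduced to hand-crafted statistical features and fed to an off-the-shelf classifier (scikit-learn's RandomForestClassifier). The windows are synthetic stand-ins for labeled IMU trials, and the feature set is one plausible choice rather than a prescribed one.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(window):
    """Hand-crafted features for one accelerometer window of shape
    (n_samples, 3): per-axis mean/std plus signal magnitude area."""
    means = window.mean(axis=0)
    stds = window.std(axis=0)
    sma = np.abs(window).sum() / len(window)  # signal magnitude area
    return np.concatenate([means, stds, [sma]])

# Synthetic stand-in for labelled IMU trials: low-variance windows for
# "still" (label 0), high-variance windows for "active" (label 1).
windows = [rng.normal(0, 0.05, (128, 3)) for _ in range(50)] + \
          [rng.normal(0, 1.0, (128, 3)) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Classify a new high-variance window: expect label 1 ("active").
print(clf.predict(window_features(rng.normal(0, 1.0, (128, 3))).reshape(1, -1)))
```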
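For theme 3, a toy Viterbi decoder shows the core inference step such HMM-based systems rely on: recovering the most likely hidden activity sequence from a stream of noisy, discretised sensor observations. The states, observations, and probability tables below are illustrative assumptions only; in a layered architecture, the observations would themselves be outputs of a lower-level model.

```python
import numpy as np

STATES = ["typing", "meeting", "away"]   # hidden activities (assumed)
OBS = ["keyboard", "speech", "silence"]  # discretised sensor events (assumed)

start = np.array([0.6, 0.3, 0.1])
trans = np.array([[0.80, 0.15, 0.05],    # row: from-state, col: to-state
                  [0.20, 0.70, 0.10],
                  [0.20, 0.20, 0.60]])
emit = np.array([[0.8, 0.1, 0.1],        # row: state, col: observation
                 [0.1, 0.8, 0.1],
                 [0.1, 0.1, 0.8]])

def viterbi(obs_seq):
    """Most likely hidden state sequence (log-space for numerical stability)."""
    T, N = len(obs_seq), len(STATES)
    logp = np.full((T, N), -np.inf)      # best log-probability per (t, state)
    back = np.zeros((T, N), dtype=int)   # backpointers for path recovery
    logp[0] = np.log(start) + np.log(emit[:, obs_seq[0]])
    for t in range(1, T):
        for j in range(N):
            cand = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = np.argmax(cand)
            logp[t, j] = cand[back[t, j]] + np.log(emit[j, obs_seq[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

# keyboard, keyboard, speech, silence
print(viterbi([0, 0, 1, 2]))  # -> ['typing', 'typing', 'meeting', 'away']
```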