Papers by Kristof Van Laerhoven

Sensors
Focal onset epileptic seizures are highly heterogeneous in their clinical manifestations, and robust seizure detection across patient cohorts has to date not been achieved. Here, we assess and discuss the potential of supervised machine learning models for the detection of focal onset motor seizures by means of a wrist-worn wearable device, both in a personalized context and across patients. Wearable data were recorded in-hospital from patients with epilepsy at two epilepsy centers. Accelerometry, electrodermal activity, and blood volume pulse data were processed, and features for each of the biosignal modalities were calculated. Following a leave-one-out approach, a gradient tree boosting machine learning model was optimized and tested in an intra-subject and inter-subject evaluation. In total, 20 seizures from 9 patients were included, and we report sensitivities of 67% to 100% and false alarm rates down to 0.85 per 24 h in the individualized assessment. Conversely, for ...
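The abstract above mentions per-modality features feeding a gradient tree boosting model. The actual feature set is not given in this summary, so every feature below is our own assumption; as a rough illustration only, a per-window feature extractor for the three wrist modalities could look like this:

```python
import math
import statistics


def seizure_window_features(acc, eda, bvp):
    """Hypothetical per-window features for three wrist-worn modalities:
    acc -- list of (x, y, z) accelerometer samples
    eda -- list of electrodermal activity samples
    bvp -- list of blood volume pulse samples
    The feature choices are illustrative, not taken from the paper."""
    # Accelerometer magnitude removes orientation dependence.
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc]
    return {
        "acc_mean": statistics.fmean(mags),
        "acc_std": statistics.pstdev(mags),
        "eda_slope": (eda[-1] - eda[0]) / len(eda),  # crude trend estimate
        "bvp_range": max(bvp) - min(bvp),
    }
```

In a pipeline along the lines of the abstract, vectors like these, computed per sliding window, would then be fed to a gradient tree boosting classifier under leave-one-out evaluation.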
Proceedings of the 1st International Workshop on Earable Computing, 2019
Head motion-based interfaces for controlling robot arms in real time have been presented in both medical-oriented research and human-robot interaction. We present an especially minimal and low-cost solution that uses the eSense [1] ear-worn prototype as a small head-worn controller, enabling direct control of an inexpensive robot arm in the environment. We report on the hardware and software setup, as well as the experiment design and early results. CCS CONCEPTS • Computer systems organization → Robotics; • Human-centered computing → Ubiquitous and mobile computing.

Proceedings of the Third Workshop on Data: Acquisition To Analysis, 2020
Photoplethysmography is an optical measurement principle present in most modern wearable devices such as fitness trackers and smartwatches. As the analysis of physiological signals requires reliable but energy-efficient algorithms, suitable datasets are essential for their development, evaluation, and benchmarking. A broad variety of clinical datasets is available with recordings from medical pulse oximeters, which traditionally apply transmission-mode photoplethysmography at the fingertip or earlobe. However, only a few publicly available datasets utilize recent reflective-mode sensors, which are typically worn at the wrist and whose signals show different characteristics. Moreover, the recordings are often advertised as raw, but then turn out to be preprocessed and filtered, while the applied parameters are not stated. In this way, the heart rate and its variability can be extracted, but interesting secondary information from the non-stationary signal is often lost. Consequently, testing novel signal processing approaches for wearable devices usually means gathering one's own data or using inappropriate data. In this paper, we present a multifaceted method to analyze the suitability and applicability of presumably raw photoplethysmography signals. We present an analytical tool which applies 7 decision metrics to characterize 10 publicly available datasets, with a focus on lightly filtered or, ideally, unfiltered raw signals. Besides the review, we provide a guideline for future datasets, so that they suit digital signal processing and support the development and evaluation of algorithms for resource-limited wearable devices. CCS CONCEPTS • Human-centered computing → Ubiquitous and mobile devices; • Hardware → Digital signal processing; • Applied computing → Health informatics.
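The paper's seven decision metrics are not listed in this summary. Purely as a hypothetical example of the kind of check such a tool might apply: raw ADC output tends to revisit a coarse grid of values, while filtered floating-point signals rarely repeat exactly, which simple statistics can reveal:

```python
def unique_value_ratio(samples):
    """Share of distinct values among the samples. Raw integer ADC
    readings revisit the same grid of values, so the ratio stays low;
    filtered floating-point signals tend towards 1.0."""
    return len(set(samples)) / len(samples)


def smallest_step(samples):
    """Smallest nonzero difference between consecutive samples; for a
    truly raw signal this is (a multiple of) the ADC resolution."""
    diffs = {abs(b - a) for a, b in zip(samples, samples[1:]) if b != a}
    return min(diffs) if diffs else 0.0
```

These two heuristics are our own illustration of the idea, not a reimplementation of the paper's metrics.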

Proceedings of the 5th international Workshop on Sensor-based Activity Recognition and Interaction, 2018
Game play in the sport of basketball tends to combine highly dynamic phases, in which the teams strategically move across the field, with specific actions made by individual players. Analysis of basketball games usually focuses on the locations of players at particular points in the game, whereas capturing what actions the players were performing remains underrepresented. In this paper, we present an approach that monitors players' actions during a game, such as dribbling, shooting, blocking, or passing, with wrist-worn inertial sensors. In a feasibility study, inertial data from a sensor worn on the wrist were recorded during training and game sessions from three players. We illustrate that common features and classifiers are able to recognize short actions, with overall accuracies of around 83.6% (k-Nearest-Neighbor) and 87.5% (Random Forest). Some actions, such as jump shots, performed well (around 95% accuracy), whereas some types of dribbling achieved low recall (around 44%). CCS CONCEPTS • Human-centered computing → Ubiquitous and mobile computing; • Computing methodologies → Machine learning; Feature selection; Cross-validation;
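As a minimal sketch of the classical pipeline the abstract describes (windowed inertial features plus a k-Nearest-Neighbor classifier), with the concrete features and labels being our own simplification rather than the paper's feature set:

```python
import math


def window_features(acc):
    """Simple per-window features over a list of (x, y, z) accelerometer
    samples: mean and standard deviation of the magnitude signal."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, math.sqrt(var))


def knn_predict(train, query, k=1):
    """k-Nearest-Neighbor vote in feature space (Euclidean distance).
    train is a list of (feature_tuple, label) pairs."""
    dists = sorted((math.dist(feat, query), label) for feat, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

A Random Forest over the same windowed feature vectors would be the analogous second classifier from the abstract.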

ArXiv, 2018
Affect recognition aims to detect a person's affective state based on observables, with the goal of, e.g., providing reasoning for decision making or supporting mental wellbeing. Recently, besides approaches based on audio, visual, or text information, solutions relying on wearable sensors as observables (recording mainly physiological and inertial parameters) have received increasing attention. Wearable systems offer an ideal platform for long-term affect recognition applications due to their rich functionality and form factor. However, existing literature lacks a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods, and best practices of wearable affect and stress recognition. We summarise psychological models, and detail affect-related physiological changes and their measurement with wearables. We outline lab protocols ...

Sensors (Basel, Switzerland), 2021
In the past decade, inertial measurement sensors have found their way into many wearable devices, where they are used in a broad range of applications, including fitness tracking, step counting, navigation, activity recognition, and motion capturing. One of their key features, widely used in motion capturing applications, is their capability of estimating the orientation of the device and, thus, the orientation of the limb it is attached to. However, tracking a human's motion at reasonable sampling rates comes with the drawback that a substantial amount of data needs to be transmitted between devices or to an end point where all device data are fused into the overall body pose. The communication typically happens wirelessly, which severely drains battery capacity and limits the use time. In this paper, we introduce fastSW, a novel piecewise linear approximation technique that efficiently reduces the amount of data required to be transmitted between devices. It takes advantage of ...
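The fastSW algorithm itself is not detailed in this summary. To illustrate the general idea of piecewise linear approximation (this is a generic greedy variant, not the published fastSW technique), a signal can be split into segments whose endpoints suffice to reconstruct it within a tolerance:

```python
def pla_segments(samples, eps):
    """Greedily split a 1-D signal into (start, end) index segments whose
    maximum vertical deviation from the straight line connecting the
    segment endpoints stays within eps."""
    segments = []
    start = 0
    for end in range(2, len(samples) + 1):
        # Candidate line from samples[start] to samples[end - 1].
        x0, y0 = start, samples[start]
        x1, y1 = end - 1, samples[end - 1]
        ok = True
        for x in range(start + 1, end - 1):
            # Interpolated value on the candidate line at position x.
            y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            if abs(samples[x] - y) > eps:
                ok = False
                break
        if not ok:
            # Close the segment at the last point that still fitted.
            segments.append((start, end - 2))
            start = end - 2
    segments.append((start, len(samples) - 1))
    return segments
```

Only the segment endpoints need to be transmitted; the receiver reconstructs the signal by linear interpolation, trading a bounded error eps for a reduced data rate.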

Proceedings of the 5th international Workshop on Sensor-based Activity Recognition and Interaction, 2018
Photoplethysmography (PPG) sensors have become a prevalent feature in current wearables, as the cost and size of current PPG modules have dropped significantly. Research in the analysis of PPG data has recently expanded beyond the fast and accurate characterization of heart rate, into the adaptive handling of artifacts within the signal and even the capturing of respiration rate. In this paper, we instead explore using state-of-the-art PPG sensor modules for long-term wearable deployment and the observation of trends over minutes, rather than seconds. By focusing specifically on lowering the sampling rate and analyzing the spectrum of frequencies alone, our approach minimizes the costly illumination-based sensing and can be used to detect the dominant frequencies of heart rate and respiration rate, and also enables inferences about the activity of the sympathetic nervous system. We show in two experiments that such detections and measurements can still be achieved at low sampling ...
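A spectrum-only analysis along these lines can be sketched with a plain discrete Fourier transform: scan a frequency band (e.g. 0.5-3 Hz for heart rate, roughly 0.1-0.5 Hz for respiration) for the strongest component. The band limits here are illustrative assumptions, not values from the paper:

```python
import math


def dominant_frequency(samples, fs, f_lo, f_hi):
    """Return the frequency (Hz) in [f_lo, f_hi] with the largest DFT
    magnitude, computed with a naive discrete Fourier transform over a
    signal sampled at fs Hz."""
    n = len(samples)
    mean = sum(samples) / n  # remove the DC offset first
    best_f, best_mag = f_lo, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not (f_lo <= f <= f_hi):
            continue
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_mag, best_f = mag, f
    return best_f
```

In practice an FFT would replace the naive transform, but the principle, peak-picking in a physiologically plausible band, is the same.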

ArXiv, 2021
Activity recognition systems that are capable of estimating human activities from wearable inertial sensors have come a long way in the past decades. Not only have state-of-the-art methods moved away from feature engineering and fully adopted end-to-end deep learning approaches; best practices for setting up experiments, preparing datasets, and validating activity recognition approaches have similarly evolved. This tutorial was first held at the 2021 ACM International Symposium on Wearable Computers (ISWC'21) and International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp'21). The tutorial, after a short introduction to the research field of activity recognition, provides a hands-on and interactive walk-through of the most important steps in the data pipeline for the deep learning of human activities. All presentation slides shown during the tutorial, which also contain links to all code exercises, as well as the link to the GitHub page of the tutorial, can be ...

2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2018
Capturing and digitizing all nuances during presentations is notoriously difficult. At best, digital slides tend to be combined with audio, while video footage of the presenter's body language often turns out to be either too sensitive, occluded, or hard to achieve under common lighting conditions. If presentations require capturing what is written on the whiteboard, more expensive setups are usually needed. In this paper, we present an approach that complements the data from a wrist-worn inertial sensor with depth camera footage, to obtain an accurate posture representation of the presenter. A wearable inertial measurement unit complements the depth footage by providing more accurate arm rotations and wrist postures when the depth images are occluded, whereas the depth images provide an accurate full-body posture for indoor environments. In an experiment with 10 volunteers, we show that posture estimates from depth images and inertial sensors complement each other well, resulting in ...
ArXiv, 2021
Data augmentation is a widely used technique in classification to increase the data used in training. It improves generalization and reduces the amount of annotated human activity data needed for training, which in turn reduces the labour and time spent on the dataset. Sensor time-series data, unlike images, cannot be augmented by computationally simple transformation algorithms. State-of-the-art models like Recurrent Generative Adversarial Networks (RGAN) are used to generate realistic synthetic data. In this paper, transformer-based generative adversarial networks, which apply global attention over the data, are compared with RGAN on the PAMAP2 and RealWorld human activity recognition datasets. The newer approach provides improvements in time and savings in the computational resources needed for data augmentation compared to the previous approach.

Proceedings of the 5th international Workshop on Sensor-based Activity Recognition and Interaction, 2018
Depth cameras have been known to be capable of picking up the small changes in distance from users' torsos, to estimate respiration rate. Several studies have shown that under certain conditions, the respiration rate of a non-mobile user facing the camera can be accurately estimated from parts of the depth data. It is, however, to date not clear what factors might hinder the application of this technology in arbitrary settings, what areas of the torso need to be observed, and how readings are affected for persons at larger distances from the RGB-D camera. In this paper, we present a benchmark dataset that consists of the point cloud data from a depth camera, which monitors 7 volunteers at variable distances, for variable methods to pinpoint the person's torso, and at variable breathing rates. Our findings show that the respiration signal's signal-to-noise ratio degrades severely as the distance to the person approaches 4 metres, and that larger windows over the person's chest work particularly well. The sampling rate of the depth camera was also found to impact the signal's quality significantly.

Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 2018
PPG-based continuous heart rate estimation is challenging due to the effects of physical activity. Recently, methods based on time-frequency spectra emerged to compensate for motion artefacts. However, existing approaches are highly parametrised and optimised for specific scenarios. In this paper, we first argue that cross-validation schemes should be adapted to this topic, and show that the generalisation capabilities of current approaches are limited. We then introduce deep learning, specifically CNN models, to this domain. We investigate different CNN architectures (e.g. the number of convolutional layers, applying batch normalisation, or ensemble prediction), and report insights based on our systematic evaluation on two publicly available datasets. Finally, we show that our CNN-based approach performs comparably to classical methods.
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 2018
In affective computing (AC) field studies it is impossible to obtain an objective ground truth. Hence, self-reports in the form of ecological momentary assessments (EMAs) are frequently used in lieu of ground truth. Based on four paradigms, we formulate practical guidelines to increase the accuracy of labels generated via EMAs. In addition, we detail how these guidelines were implemented in a recent AC field study of ours. During our field study, 1081 EMAs were collected from 10 subjects over a duration of 148 days. Based on these EMAs, we perform a qualitative analysis of the effectiveness of our proposed guidelines. Furthermore, we present insights and lessons learned from the field study.

2021 International Symposium on Wearable Computers, 2021
Recent studies in Human Activity Recognition (HAR) have shown that Deep Learning methods are able to outperform classical Machine Learning algorithms. One popular Deep Learning architecture in HAR is the DeepConvLSTM. In this paper, we propose to alter the DeepConvLSTM architecture to employ a 1-layered instead of a 2-layered LSTM. We validate our architecture change on 5 publicly available HAR datasets by comparing the predictive performance with and without the change, employing varying numbers of hidden units within the LSTM layer(s). Results show that across all datasets, our architecture consistently improves on the original one: recognition performance increases by up to 11.7% in F1-score, and our architecture significantly decreases the number of learnable parameters. This improvement over DeepConvLSTM decreases training time by as much as 48%. Our results stand in contrast to the belief that one needs at least a 2-layered LSTM when dealing with sequential data. Based on our results, we argue that said claim might not be applicable to sensor-based HAR. CCS CONCEPTS • Human-centered computing → Ubiquitous and mobile computing design and evaluation methods; • Computing methodologies → Neural networks.
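The parameter savings from dropping one LSTM layer can be checked with simple arithmetic. Assuming a PyTorch-style parameterisation (four gates, input and recurrent weight matrices, two bias vectors per layer); the sizes used below are examples, not the paper's configuration:

```python
def lstm_param_count(input_size, hidden_size, num_layers):
    """Learnable parameters of a stacked unidirectional LSTM, counted
    PyTorch-style: per layer, four gates with input weights (W_ih),
    recurrent weights (W_hh), and two bias vectors (b_ih, b_hh)."""
    total = 0
    in_size = input_size
    for _ in range(num_layers):
        weights = 4 * hidden_size * (in_size + hidden_size)
        biases = 4 * hidden_size * 2
        total += weights + biases
        in_size = hidden_size  # deeper layers consume the previous layer's output
    return total
```

For a small sensor-channel input and a larger hidden state, the second layer (whose input is the full hidden state) typically contributes more parameters than the first, which is why removing it shrinks the model so noticeably.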

Proceedings of the 1st International Workshop on Earable Computing, 2019
Wearable activity recognition research needs benchmark data, which rely heavily on synchronizing and annotating the inertial sensor data, in order to validate the activity classifiers. Such validation studies become challenging when recording outside the lab, over longer stretches of time. This paper presents a method that uses an inconspicuous, ear-worn device that allows the wearer to annotate his or her activities as the recording takes place. Since the ear-worn device has integrated inertial sensors, we use cross-correlation over all wearable inertial signals to propagate the annotations over all sensor streams. In a feasibility study with 7 participants performing 6 different physical activities, we show that our algorithm is able to synchronize signals between sensors worn on the body using cross-correlation, typically within a second. A comfort rating scale study has shown that attachment is critical. Button presses can thus define markers in synchronized activity data, resulting in a fast, comfortable, and reliable annotation method. CCS CONCEPTS • Human-centered computing → Ubiquitous and mobile computing; • Computing methodologies → Machine learning.
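Propagating annotations via cross-correlation, as the abstract describes, boils down to finding the time lag at which two inertial streams line up best. A minimal sketch of that idea (our own simplification, not the paper's implementation):

```python
def estimate_lag(ref, sig, max_lag):
    """Return the shift (in samples) of sig relative to ref that
    maximises their mean cross-correlation over the overlapping part,
    searching lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score, n = 0.0, 0
        for i, r in enumerate(ref):
            j = i + lag
            if 0 <= j < len(sig):
                score += r * sig[j]
                n += 1
        if n and score / n > best_score:
            best_score, best_lag = score / n, lag
    return best_lag
```

Once the lag between the ear-worn device and each body-worn sensor is known, an annotation timestamp on one stream can be mapped onto all the others by adding the corresponding lag.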

Proceedings of the 20th ACM International Conference on Multimodal Interaction, 2018
Affect recognition aims to detect a person's affective state based on observables, with the goal of, e.g., improving human-computer interaction. Long-term stress is known to have severe implications for wellbeing, which calls for continuous and automated stress monitoring systems. However, the affective computing community lacks commonly used standard datasets for wearable stress detection which a) provide multimodal high-quality data, and b) include multiple affective states. Therefore, we introduce WESAD, a new publicly available dataset for wearable stress and affect detection. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects during a lab study. The following sensor modalities are included: blood volume pulse, electrocardiogram, electrodermal activity, electromyogram, respiration, body temperature, and three-axis acceleration. Moreover, the dataset bridges the gap between previous lab studies on stress and emotions by containing three different affective states (neutral, stress, amusement). In addition, self-reports of the subjects, which were obtained using several established questionnaires, are contained in the dataset. Furthermore, a benchmark is created on the dataset, using well-known features and standard machine learning methods. Considering the three-class classification problem (baseline vs. stress vs. amusement), we achieved classification accuracies of up to 80%. In the binary case (stress vs. non-stress), accuracies of up to 93% were reached. Finally, we provide a detailed analysis and comparison of the two device locations (chest vs. wrist) as well as the different sensor modalities.
Proceedings of the 6th international Workshop on Sensor-based Activity Recognition and Interaction, 2019
Figure 1: A video frame from our video-EEG epilepsy monitoring unit during a patient's focal motor seizure (left), with the time series over a 5 min segment from the patient's right wrist. The time series shows, from top to bottom: 3D accelerometry in x/y/z, blood volume pulse, and electrodermal activity, with the video frame's timestamp marked by the black line.

Sensors, 2019
Photoplethysmography (PPG)-based continuous heart rate monitoring is essential in a number of domains, e.g., for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios of small, public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close-to-real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain, and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised ...
IEEE Pervasive Computing, 2017
With wearable computing research recently passing the 20-year mark, this survey looks back at how the field developed and explores where it's headed. According to the authors, wearable computing is entering its most exciting phase yet, as it transitions from demonstrations to the creation of sustained markets and industries, which in turn should drive future research and innovation.