inproceedings by Kristof Van Laerhoven

Embedding Intelligent Features for Vibration-Based Machine Condition Monitoring
Today’s demands regarding workpiece quality in cutting machine tool processing require automated monitoring of both machine condition and the cutting process. Currently, best-performing monitoring approaches rely on high-frequency acoustic emission (AE) sensor data and the definition of advanced features, which involve complex computations. This approach is challenging for machine monitoring via embedded sensor systems with constrained computational power and energy budget. To cope with constrained energy, we rely on data recording with microelectromechanical system (MEMS) vibration sensors, which sample at lower frequencies. To clarify whether these lower-frequency signals bear information for typical machine monitoring prediction tasks, we evaluate data for the most generic machine monitoring task of tool condition monitoring (TCM). To cope with the computational complexity of advanced features, we introduce two intelligent preprocessing algorithms. First, we split non-stationary signals of recurrent structure into similar segments. Then, we identify the most discriminative spectral differences in the segmented signals that allow for the best separation of classes for the given TCM task. Subsequent feature extraction only in the most relevant signal segments and spectral regions enables high expressiveness even for simple features. Extensive evaluation of the outlined approach on multiple data sets of different combinations of cutting machine tools, tool types and workpieces confirms its soundness. Intelligent preprocessing enables reliable identification of stationary segments and of the most discriminative frequency bands. With subsequent extraction of simple but tailor-made features in these spectral-temporal regions of interest (RoIs), TCM, typically framed as a multi-feature classification problem, can be converted to a single-feature threshold comparison problem with an average F1 score of 97.89%.
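
The core idea of reducing TCM to a single-feature threshold comparison can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the sampling rate, frequency band and threshold value are made-up placeholders standing in for the discriminative RoI and decision boundary learned from training data.

```python
# Hypothetical sketch: single-feature threshold TCM on a pre-selected
# spectral region of interest (all parameter values are illustrative).
import numpy as np
from scipy.signal import welch

FS = 6600            # assumed MEMS vibration sensor sampling rate in Hz
BAND = (900, 1200)   # assumed discriminative frequency band in Hz
THRESHOLD = 3.2e-4   # assumed decision threshold learned on training data

def band_power(segment, fs=FS, band=BAND):
    """Mean power spectral density of one stationary signal segment
    within the selected frequency band."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 1024))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def tool_worn(segment):
    """Single-feature threshold comparison: True if classified as worn."""
    return band_power(segment) > THRESHOLD
```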

PresentPostures: A Wrist and Body Capture Approach for Augmenting Presentations
Capturing and digitizing all nuances during presentations is notoriously difficult. At best, digital slides tend to be combined with audio, while video footage of the presenter's body language often turns out to be either too sensitive, occluded, or hard to achieve under common lighting conditions. If presentations require capturing what is written on the whiteboard, more expensive setups are usually needed. In this paper, we present an approach that complements the data from a wrist-worn inertial sensor with depth camera footage, to obtain an accurate posture representation of the presenter. A wearable inertial measurement unit complements the depth footage by providing more accurate arm rotations and wrist postures when the depth images are occluded, whereas the depth images provide an accurate full-body posture for indoor environments. In an experiment with 10 volunteers, we show that posture estimates from depth images and inertial sensors complement each other well, resulting in far fewer occlusions and tracking of the wrist with an accuracy that supports capturing sketches.

Passive Link Quality Estimation for Accurate and Stable Parent Selection in Dense 6TiSCH Networks
Industrial applications are increasingly demanding low-power operation, deterministic communication and end-to-end reliability that approaches 100%. Recent standardization efforts focus on exploiting slow channel hopping while properly scheduling the transmissions to provide a strict Quality of Service for the Industrial Internet of Things (IIoT). By keeping nodes time-synchronized and by employing a channel hopping approach, IEEE 802.15.4-TSCH (Time-Slotted Channel Hopping) aims at providing high-level network reliability. For this, however, we need to construct an accurate schedule, able to exploit reliable paths. In particular, radio links with a high Packet Error Rate should not be exploited, since they are less energy-efficient (more retransmissions are required) and they negatively impact reliability. In this work, we take advantage of the advertisement packets continuously transmitted by the nodes to infer the link quality. We argue that the reception rate of broadcast packets provides a means to estimate the link quality to different neighbors, even when the data packets use different, collision-free transmission opportunities. Our experiments on a large-scale platform highlight that our approach improves the convergence delay, identifying the best routes to the sink during the bootstrapping (or reconverging) phase without adding any extra control packets.
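
The passive estimation idea can be illustrated with a short sketch. This is an assumption-laden simplification, not the paper's code: it merely shows how the reception ratio of periodic broadcast advertisements can serve as a proxy for per-neighbour link quality when choosing a routing parent.

```python
# Illustrative sketch: estimate per-neighbour link quality from how many
# periodic advertisement frames were overheard versus how many were expected.
from collections import defaultdict

ADV_PERIOD_S = 10.0   # assumed advertisement (beacon) period of every neighbour

class PassiveLinkEstimator:
    def __init__(self):
        self.received = defaultdict(int)
        self.window_start = {}

    def on_advertisement(self, neighbour_id, timestamp):
        self.window_start.setdefault(neighbour_id, timestamp)
        self.received[neighbour_id] += 1

    def estimated_pdr(self, neighbour_id, now):
        """Reception ratio of broadcast frames, used as a proxy for the
        delivery ratio of the (collision-free) unicast slots."""
        elapsed = now - self.window_start.get(neighbour_id, now)
        expected = max(1, int(elapsed / ADV_PERIOD_S))
        return min(1.0, self.received[neighbour_id] / expected)

    def best_parent(self, now):
        """Neighbour with the highest estimated link quality."""
        return max(self.received, key=lambda n: self.estimated_pdr(n, now),
                   default=None)
```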

PPG-based Heart Rate Estimation with Time-Frequency Spectra: A Deep Learning Approach
PPG-based continuous heart rate estimation is challenging due to the effects of physical activity. Recently, methods based on time-frequency spectra emerged to compensate for motion artefacts. However, existing approaches are highly parametrised and optimised for specific scenarios. In this paper, we first argue that cross-validation schemes should be adapted to this topic, and show that the generalisation capabilities of current approaches are limited. We then introduce deep learning, specifically CNN models, to this domain. We investigate different CNN architectures (e.g. the number of convolutional layers, applying batch normalisation, or ensemble prediction), and report insights based on our systematic evaluation on two publicly available datasets. Finally, we show that our CNN-based approach performs comparably to classical methods.
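
A minimal sketch of the kind of model discussed here is shown below; the layer counts, channel numbers and input dimensions are illustrative assumptions rather than the architecture evaluated in the paper.

```python
# Minimal PyTorch sketch: a small CNN with batch normalisation over a
# PPG/accelerometer time-frequency spectrum, regressing the heart rate.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(32, 1)   # predicted heart rate in bpm

    def forward(self, x):                   # x: (batch, channels, freq, time)
        h = self.features(x).flatten(1)
        return self.regressor(h).squeeze(-1)

# e.g. one PPG channel plus three accelerometer channels,
# 64 frequency bins and 8 time steps per window:
model = SpectrumCNN()
hr = model(torch.randn(2, 4, 64, 8))
```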

Labelling Affective States “in the wild”: Practical Guidelines and Lessons Learned
In affective computing (AC) field studies it is impossible to obtain an objective ground truth. Hence, self-reports in the form of ecological momentary assessments (EMAs) are frequently used in lieu of ground truth. Based on four paradigms, we formulate practical guidelines to increase the accuracy of labels generated via EMAs. In addition, we detail how these guidelines were implemented in a recent AC field study of ours. During our field study, 1081 EMAs were collected from 10 subjects over a duration of 148 days. Based on these EMAs, we perform a qualitative analysis of the effectiveness of our proposed guidelines. Furthermore, we present insights and lessons learned from the field study.

Respiration Rate Estimation with Depth Cameras: An Evaluation of Parameters
Depth cameras have been known to be capable of picking up the small changes in distance from users' torsos, to estimate respiration rate. Several studies have shown that under certain conditions, the respiration rate of a non-mobile user facing the camera can be accurately estimated from parts of the depth data. It is, however, not yet clear what factors might hinder the application of this technology in any setting, what areas of the torso need to be observed, and how readings are affected for persons at larger distances from the RGB-D camera. In this paper, we present a benchmark dataset that consists of the point cloud data from a depth camera, which monitors 7 volunteers at variable distances, for variable methods to pinpoint the person's torso, and at variable breathing rates. Our findings show that the respiration signal's signal-to-noise ratio becomes debilitating as the distance to the person approaches 4 metres, and that larger windows over the person's chest work particularly well. The sampling rate of the depth camera was also found to impact the signal's quality significantly.
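
A hedged sketch of the basic pipeline follows: average the depth values inside a chest window for every frame, then take the dominant frequency in a typical breathing band as the respiration rate. The frame rate, window and band limits are illustrative assumptions, not the benchmark's parameters.

```python
# Sketch: respiration rate from the mean depth of a torso region of interest.
import numpy as np

def respiration_rate_bpm(depth_frames, roi, fps=30.0):
    """depth_frames: array (n_frames, height, width) of depth values in metres.
    roi: (row_slice, col_slice) covering the person's chest."""
    rows, cols = roi
    signal = depth_frames[:, rows, cols].reshape(len(depth_frames), -1).mean(axis=1)
    signal = signal - signal.mean()              # remove the static offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.7)       # roughly 6 to 42 breaths/min
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```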

Using Wrist-Worn Activity Recognition for Basketball Game Analysis
Game play in the sport of basketball tends to combine highly dynamic phases, in which the teams strategically move across the field, with specific actions made by individual players. Analysis of basketball games usually focuses on the locations of players at particular points in the game, whereas the capture of what actions the players were performing remains underrepresented. In this paper, we present an approach that allows players' actions during a game, such as dribbling, shooting, blocking, or passing, to be monitored with wrist-worn inertial sensors. In a feasibility study, inertial data from a sensor worn on the wrist were recorded during training and game sessions of three players. We illustrate that common features and classifiers are able to recognize short actions, with overall accuracies of around 83.6% (k-Nearest-Neighbor) and 87.5% (Random Forest). Some actions, such as jump shots, performed well (around 95% accuracy), whereas some types of dribbling achieved low (around 44%) recall.
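
As an illustration of a "common features and classifiers" pipeline, the sketch below extracts simple sliding-window statistics from wrist acceleration and trains a Random Forest; window length, step size and feature choice are assumptions, not the study's exact setup.

```python
# Illustrative sketch: windowed statistical features + Random Forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=100, step=50):
    """acc: array (n_samples, 3) of wrist acceleration; returns one feature
    vector (mean, std, min, max per axis) per sliding window."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# X_train / y_train would hold windows labelled as dribbling, shooting, blocking, ...
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(window_features(acc_train), y_train)
# predictions = clf.predict(window_features(acc_test))
```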

Wearable affect and stress recognition: A review
Affect recognition aims to detect a person’s affective state based on observables, with the goal of, for example, providing reasoning for decision making or supporting mental wellbeing. Recently, besides approaches based on audio, visual or text information, solutions relying on wearable sensors as observables (recording mainly physiological and inertial parameters) have received increasing attention. Wearable systems offer an ideal platform for long-term affect recognition applications due to their rich functionality and form factor. However, the existing literature lacks a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods, and best practices of wearable affect and stress recognition. We summarise psychological models, and detail affect-related physiological changes and their measurement with wearables. We outline lab protocols eliciting affective states, and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain, and review common approaches to preprocessing, feature extraction, and classification. By providing a comprehensive summary of the state of the art and guidelines for various aspects, we would like to enable other researchers in the field of affect recognition to conduct and evaluate user studies and develop wearable systems.

Fewer Samples for a Longer Life Span: Towards Long-Term Wearable PPG Analysis
Photoplethysmography (PPG) sensors have become a prevalent feature in current wearables, as the cost and size of current PPG modules have dropped significantly. Research in the analysis of PPG data has recently expanded beyond the fast and accurate characterization of heart rate, into the adaptive handling of artifacts within the signal and even the capturing of respiration rate. In this paper, we instead explore using state-of-the-art PPG sensor modules for long-term wearable deployment and the observation of trends over minutes, rather than seconds. By focusing specifically on lowering the sampling rate and analysing the spectrum of frequencies alone, our approach minimizes the costly illumination-based sensing and can be used to detect the dominant frequencies of heart rate and respiration rate, but also enables inference of sympathetic nervous system activity. We show in two experiments that such detections and measurements can still be achieved at low sampling rates down to 10 Hz, within a power-efficient platform. This approach enables miniature sensor designs that monitor average heart rate, respiration rate, and sympathetic nerve activity over longer stretches of time.
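
A sketch under the paper's premise is given below: even at a 10 Hz sampling rate (5 Hz Nyquist limit), the PPG spectrum still resolves the heart rate and respiration bands. The band limits and window length are illustrative assumptions.

```python
# Sketch: dominant spectral peaks of a low-rate PPG window per minute.
import numpy as np
from scipy.signal import welch

def dominant_freq(ppg, fs, lo, hi):
    """Frequency of the strongest spectral peak within [lo, hi] Hz."""
    freqs, psd = welch(ppg, fs=fs, nperseg=min(len(ppg), 256))
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(psd[band])]

def minute_summary(ppg, fs=10.0):
    """Heart rate and respiration rate (per minute) from one PPG window."""
    return {
        "heart_rate_bpm": 60.0 * dominant_freq(ppg, fs, 0.7, 3.0),
        "respiration_rate_bpm": 60.0 * dominant_freq(ppg, fs, 0.1, 0.7),
    }
```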

Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection
Affect recognition aims to detect a person’s affective state based on observables, with the goal of, for example, improving human-computer interaction. Long-term stress is known to have severe implications for wellbeing, which call for continuous and automated stress monitoring systems. However, the affective computing community lacks commonly used standard datasets for wearable stress detection which a) provide multimodal high-quality data, and b) include multiple affective states. Therefore, we introduce WESAD, a new publicly available dataset for wearable stress and affect detection. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects during a lab study. The following sensor modalities are included: blood volume pulse, electrocardiogram, electrodermal activity, electromyogram, respiration, body temperature, and three-axis acceleration. Moreover, the dataset bridges the gap between previous lab studies on stress and emotions by containing three different affective states (neutral, stress, amusement). In addition, self-reports of the subjects, obtained using several established questionnaires, are contained in the dataset. Furthermore, a benchmark is created on the dataset using well-known features and standard machine learning methods. Considering the three-class classification problem (baseline vs. stress vs. amusement), we achieved classification accuracies of up to 80%. In the binary case (stress vs. non-stress), accuracies of up to 93% were reached. Finally, we provide a detailed analysis and comparison of the two device locations (chest vs. wrist) as well as the different sensor modalities.
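
A benchmark-style evaluation on such a dataset typically uses leave-one-subject-out cross-validation. The sketch below is a hypothetical illustration with placeholder arrays (it does not load the released WESAD files or reproduce the reported features).

```python
# Hypothetical sketch: windowed features, a standard classifier, and
# leave-one-subject-out evaluation for the three-class problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholders: X holds per-window features, y the labels
# (baseline / stress / amusement), groups the subject id of each window.
X = np.random.rand(300, 20)
y = np.random.randint(0, 3, 300)
groups = np.repeat(np.arange(15), 20)

scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X, y, groups=groups, cv=LeaveOneGroupOut())
print("mean LOSO accuracy:", scores.mean())
```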

Using the eSense Wearable Earbud as a Light-Weight Robot Arm Controller
Head motion-based interfaces for controlling robot arms in real time have been presented both in medically oriented research and in human-robot interaction. We present an especially minimal and low-cost solution that uses the eSense [1] ear-worn prototype as a small head-worn controller, enabling direct control of an inexpensive robot arm in the environment. We report on the hardware and software setup, as well as the experiment design and early results.
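
The control idea can be illustrated with a minimal mapping sketch. Function names, angle ranges and the joint assignment are assumptions made for illustration, not the authors' software.

```python
# Sketch: linearly map head pitch/yaw (from the ear-worn IMU) to two joint angles.
def head_to_arm_angles(pitch_deg, yaw_deg,
                       pitch_range=(-45, 45), yaw_range=(-60, 60),
                       joint_range=(0, 180)):
    """Map head orientation (degrees) to servo joint angles (degrees)."""
    def scale(value, src, dst):
        value = min(max(value, src[0]), src[1])   # clamp to the usable head range
        ratio = (value - src[0]) / (src[1] - src[0])
        return dst[0] + ratio * (dst[1] - dst[0])
    return (scale(pitch_deg, pitch_range, joint_range),
            scale(yaw_deg, yaw_range, joint_range))

# e.g. nodding 20 degrees down and turning 10 degrees to the right:
shoulder_angle, base_angle = head_to_arm_angles(-20, 10)
```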

Using Multimodal Biosignal Data from Wearables to Detect Focal Motor Seizures in Individual Epilepsy Patients
Epilepsy seizure detection with wearable devices is an emerging research field. As opposed to the currently practiced gold standard, consisting of simultaneous video and EEG monitoring of patients, wearables have the advantage that they put a lower burden on epilepsy patients. This paper reports on the first stages of a research effort dedicated to the development of a multimodal seizure detection system specifically for focal onset epileptic seizures. In a focused analysis of data from three in-hospital patients, each with six to nine recorded seizures, we show that this type of seizure can manifest very differently and can thus significantly impact classification. Using a Random Forest model on a rich set of features, we obtained overall precision and recall scores of up to 0.92 and 0.72, respectively. These results show that the approach is valid, but we identify the type of focal seizure as a critical factor for classification performance.

Using an in-ear wearable to annotate activity data across multiple inertial sensors
Wearable activity recognition research needs benchmark data, which rely heavily on synchronizing and annotating the inertial sensor data in order to validate activity classifiers. Such validation studies become challenging when recording outside the lab, over longer stretches of time. This paper presents a method that uses an inconspicuous, ear-worn device that allows the wearer to annotate his or her activities as the recording takes place. Since the ear-worn device has integrated inertial sensors, we use cross-correlation over all wearable inertial signals to propagate the annotations over all sensor streams. In a feasibility study with 7 participants performing 6 different physical activities, we show that our algorithm is able to synchronize signals between sensors worn on the body using cross-correlation, typically to within a second. A comfort rating scale study has shown that attachment is critical. Button presses can thus define markers in synchronized activity data, resulting in a fast, comfortable, and reliable annotation method.
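
The synchronisation step can be sketched as follows; this is a generic cross-correlation lag estimate under the assumption of equal sampling rates, not the paper's exact algorithm.

```python
# Sketch: estimate the time offset between two inertial streams from the lag
# that maximises their normalised cross-correlation.
import numpy as np

def estimate_offset(sig_a, sig_b, fs):
    """Offset of sig_b relative to sig_a in seconds (both sampled at fs Hz)."""
    a = (sig_a - np.mean(sig_a)) / (np.std(sig_a) + 1e-12)
    b = (sig_b - np.mean(sig_b)) / (np.std(sig_b) + 1e-12)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # lag in samples
    return lag / fs

# An annotation timestamped on the ear-worn device can then be shifted by the
# estimated offset before being propagated to the other sensor streams.
```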

Bayesian Estimation of Recurrent Changepoints for Signal Segmentation and Anomaly Detection
Signal segmentation is a generic task in many time series applications. We propose approaching it via Bayesian changepoint algorithms, i.e., by assigning segments between changepoints. When successive signals show a recurrent changepoint pattern, estimating changepoint recurrence is beneficial for two reasons: recurrent changepoints yield more robust signal segment estimates, while non-recurrent changepoints bear valuable information for unsupervised anomaly detection. This study introduces the changepoint recurrence distribution (CPRD) as an empirical estimate of the recurrent behavior of observed changepoints. Two generic methods for incorporating the estimated CPRD into the process of assessing the recurrence of future changepoints are suggested. The knowledge of non-recurrent changepoints arising from one of these methods allows additional unsupervised anomaly detection. The quality both of changepoint recurrence estimation via the CPRD and of changepoint-related signal segmentation and unsupervised anomaly detection is verified in a proof-of-concept study for two exemplary machine tool monitoring tasks.
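
A hedged sketch of the recurrence idea is given below, using an off-the-shelf changepoint detector (PELT from the ruptures library) as a stand-in for the Bayesian algorithm described here: build an empirical distribution of where changepoints fall within recurring cycles, then flag changepoints at unlikely positions as candidate anomalies.

```python
# Sketch: empirical changepoint recurrence distribution over recurring signals.
import numpy as np
import ruptures as rpt

def changepoints(signal, penalty=10):
    """Changepoint positions (sample indices) within one signal."""
    return rpt.Pelt(model="rbf").fit(signal).predict(pen=penalty)[:-1]

def recurrence_distribution(signals, n_bins=50):
    """CPRD-like histogram of relative changepoint positions over many cycles."""
    positions = [cp / len(sig) for sig in signals for cp in changepoints(sig)]
    hist, edges = np.histogram(positions, bins=n_bins, range=(0, 1), density=True)
    return hist, edges

def is_recurrent(cp, signal_length, hist, edges, min_density=0.5):
    """A changepoint counts as recurrent if it falls into a sufficiently dense bin;
    non-recurrent changepoints are candidate anomalies."""
    bin_idx = min(np.searchsorted(edges, cp / signal_length) - 1, len(hist) - 1)
    return hist[max(bin_idx, 0)] >= min_density
```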

Multi-target Affect Detection in the Wild: An Exploratory Study
Affective computing aims to detect a person's affective state (e.g. emotion) based on observables. The link between affective states and biophysical data, collected in lab settings, has been established successfully. However, the number of realistic studies targeting affect detection in the wild is still limited. In this paper we present an exploratory field study using physiological data of 11 healthy subjects. We aim to classify arousal, State-Trait Anxiety Inventory (STAI), stress, and valence self-reports, utilizing feature-based and CNN methods. In addition, we extend the CNNs to multi-task CNNs, classifying all labels of interest simultaneously. Comparing the F1 scores averaged over the different tasks and classifiers, the CNNs reach a 1.8% higher score than the classical methods. However, the F1 scores barely exceed 45%. In the light of these results, we discuss pitfalls and challenges for physiology-based affective computing in the wild.