Papers by Jeroen van der Hooft

2022 IEEE International Conference on Image Processing (ICIP), Oct 16, 2022
Point cloud video streaming is a fundamental application of immersive multimedia. In it, objects represented as sets of points are streamed and displayed to remote users. Given the high bandwidth requirements of this content, small changes in the network and/or encoding can affect the users' perceived quality in unexpected ways. To tackle service degradation as fast as possible, real-time Quality of Experience (QoE) assessment is needed. As subjective evaluations are not feasible in real time due to their inherent costs and duration, low-complexity objective quality assessment is a must. Traditional No-Reference (NR) objective metrics at the client side are best suited to fulfill this task; however, they lack accuracy with respect to human perception. In this paper, we present a cluster-based objective NR QoE assessment model for point cloud video. By means of Machine Learning (ML)-based clustering and prediction techniques combined with NR pixel-based features (e.g., blur and noise), the model shows high correlations (up to a 0.977 Pearson Linear Correlation Coefficient (PLCC)) and low Root Mean Squared Error (RMSE) (down to 0.077 on a zero-to-one scale) towards objective benchmarks after evaluation on an adaptive streaming point cloud dataset consisting of sixteen source videos and 453 sequences in total.
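The model described above combines unsupervised clustering with per-cluster prediction. The following sketch is a minimal illustration of that idea, not the paper's implementation: samples are clustered by their NR pixel-based features (blur and noise here, with an invented linear quality relation), and one least-squares quality predictor is fitted per cluster.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns centroids and per-sample cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def fit_cluster_models(X, y, labels, k):
    """Per-cluster least-squares affine model from features to a quality score."""
    A = np.hstack([X, np.ones((len(X), 1))])   # affine design matrix
    models = []
    for j in range(k):
        w, *_ = np.linalg.lstsq(A[labels == j], y[labels == j], rcond=None)
        models.append(w)
    return models

def predict(X, centroids, models):
    """Route each sample to its nearest cluster's model and predict quality."""
    labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.array([A[i] @ models[l] for i, l in enumerate(labels)])

# Synthetic demo: two NR pixel features (blur, noise) and a hypothetical
# ground-truth quality relation on a zero-to-one scale.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 2))
y = 1.0 - 0.6 * X[:, 0] - 0.3 * X[:, 1]

centroids, labels = kmeans(X, k=3)
models = fit_cluster_models(X, y, labels, k=3)
rmse = float(np.sqrt(np.mean((predict(X, centroids, models) - y) ** 2)))
print(round(rmse, 4))
```

Routing each sample through its nearest cluster lets each local model stay simple (affine here), which keeps the per-frame assessment cost low, in the spirit of the low-complexity requirement stated above.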

Interactive immersive media come with stringent network-level requirements such as high bandwidth (i.e., several Gbps) and low latency (i.e., five milliseconds). Today, most video-streaming applications leverage the transmission control protocol (TCP) for reliable end-to-end transmission. However, the reliability of TCP comes at the cost of additional delay due to factors such as connection establishment, head-of-line (HOL) blocking, and retransmissions under sub-optimal network conditions. Such behavior can lead to stalling events or freezes, which are highly detrimental to the user's Quality of Experience (QoE). Recently, QUIC has gained traction in the research community, as it promises to overcome the shortcomings of TCP without compromising on reliability. However, while QUIC vastly reduces the connection establishment time and HOL blocking, thus increasing interactivity, it still underperforms when delivering multimedia due to retransmissions under lossy conditions. To cope with these retransmissions, QUIC offers the possibility to support unreliable delivery, like that of the user datagram protocol (UDP). While live-video streaming applications usually opt for completely unreliable protocols, such an approach is not optimal for immersive media delivery, since losing certain data can severely affect the end user's QoE. In this paper, we propose a partially reliable QUIC-based data delivery mechanism that supports both reliable (streams) and unreliable (datagrams) delivery. To evaluate its performance, we have considered two immersive-video delivery use cases, namely tiled 360-degree video and volumetric point clouds. Our approach outperforms state-of-the-art protocols, especially in the presence of network losses and delay.
Even at a packet loss ratio as high as 5%, the number of freezing events for a 120-second video is almost zero, compared to 120 for TCP.
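The core idea, choosing between QUIC's reliable streams and unreliable DATAGRAM frames per unit of media, can be sketched as a simple policy function. The `MediaUnit` fields and the rule below are illustrative assumptions, not the paper's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class MediaUnit:
    name: str
    in_viewport: bool   # currently visible to the user
    base_layer: bool    # minimum data needed to decode at all

def choose_channel(unit: MediaUnit) -> str:
    """Partial-reliability policy: data whose loss would directly hurt the
    user's QoE travels over a reliable QUIC stream (retransmitted on loss);
    the rest goes as best-effort QUIC DATAGRAM frames (never retransmitted)."""
    if unit.base_layer or unit.in_viewport:
        return "stream"
    return "datagram"

units = [
    MediaUnit("viewport_tile_hq", in_viewport=True,  base_layer=False),
    MediaUnit("background_tile",  in_viewport=False, base_layer=False),
    MediaUnit("base_layer_all",   in_viewport=False, base_layer=True),
]
plan = {u.name: choose_channel(u) for u in units}
print(plan)
```

A real implementation would map "stream" and "datagram" onto the corresponding send paths of a QUIC library supporting the DATAGRAM extension; the policy itself stays this small.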

The Tactile Internet envisions the real-time, full-fledged control of remote infrastructures by means of tactile sensors (haptics), aided by audiovisual immersive inputs. End-to-end delays of at most 5 ms (the minimum detectable by the human eye) and ultra-high-speed transmissions will be needed by services such as telehealth or Industry 4.0. To comply with these requirements, both networks and applications need to be dramatically improved. On the one hand, in recent years a massive effort has been put into improving current network infrastructures. Examples of this are the 5G paradigm and advances in mechanisms for precise network control using network slicing, Software-Defined Networking solutions, or novel service-oriented protocols. On the other hand, Virtual Reality (VR) video streaming, the best-suited technology to provide the immersive visual feed of the Tactile Internet, is still far from the envisioned levels of real-timeliness and quality. The aim of this paper is to pinpoint the open challenges to enable VR video streaming technologies for the Tactile Internet. The paper first provides a thorough analysis of the state of the art in VR video streaming technologies, both theoretically and by means of an experimental demonstrator. Based on this, we present the different research areas and the key challenges within them. In addition, possible solutions and research directions are introduced. We believe this work opens new opportunities for research not only within the challenging VR arena, but also in the field of management and control of wireless networks.

Multimedia Tools and Applications, Aug 11, 2018
Remote video collaboration is common nowadays in conferencing, telehealth and remote teaching applications. To support these low-latency, interactive use cases, Real-Time Communication (RTC) solutions are generally used. WebRTC is an open-source project for real-time browser-based conferencing, developed with a peer-to-peer architecture in mind. In this architecture, each sending peer needs to encode a separate, independent stream for each receiving peer participating in the remote session, which makes the approach expensive in terms of encoders and unable to scale to a large number of users. This paper proposes a WebRTC-compliant framework that solves this scalability issue without impacting the quality delivered to the remote peers. In the proposed framework, each sending peer is equipped with only a limited number of encoders, much smaller than and independent of the number of receiving peers. Consequently, each encoder transmits to a multitude of receivers at the same time, improving scalability. A centralized node based on the Selective Forwarding Unit (SFU) principle, called the conference controller, forwards the best stream to each receiving peer based on its bandwidth conditions. Moreover, the conference controller dynamically recomputes the encoding bitrates of the sending peers to maximize the quality delivered to the receiving peers. This approach makes it possible to closely follow the long-term bandwidth variations of the receivers, even with a limited number of encoders at the sender side, and to increase the delivered video quality. An integer linear programming formulation of the bitrate recomputation problem is presented, which can be solved optimally when the number of receivers is small. An approximate, scalable method based on the K-means clustering algorithm is also proposed.
The gains brought by the proposed framework have been confirmed in both simulation and emulation, through a testbed implementation using the Google Chrome browser and the open-source Jitsi Videobridge software. In particular, we focus on a remote collaboration scenario where the interaction among the remote participants is dominated by a single peer, as in remote teaching. When a single sending peer equipped with three encoders transmits to 28 receiving peers, the proposed framework improves the average received video bitrate by up to 15%, compared to a static solution where the encoding bitrates do not change over time. Moreover, the dynamic bitrate recomputation is more efficient than a static association in terms of encoders used at the sender side: for the same configuration, the received bitrate obtained with four encoders in the static case is matched with three encoders in the dynamic case.
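The K-means-based approximation can be sketched in a few lines: cluster the receivers' reported bandwidths and let each encoder serve one cluster at that cluster's minimum bandwidth, so that no receiver assigned to the encoder stalls. This is a simplified reconstruction of the idea, not the paper's exact formulation:

```python
import numpy as np

def assign_encoder_bitrates(bandwidths, n_encoders, iters=30):
    """Cluster receiver bandwidths with 1-D k-means; each encoder takes the
    minimum bandwidth of its cluster so every assigned receiver can sustain
    the stream without stalling (a conservative choice)."""
    bw = np.sort(np.asarray(bandwidths, dtype=float))
    # Spread the initial centroids over the bandwidth quantiles.
    centroids = np.quantile(bw, [(i + 0.5) / n_encoders for i in range(n_encoders)])
    for _ in range(iters):
        labels = np.abs(bw[:, None] - centroids[None]).argmin(axis=1)
        for j in range(n_encoders):
            if np.any(labels == j):
                centroids[j] = bw[labels == j].mean()
    return [float(bw[labels == j].min()) for j in range(n_encoders)
            if np.any(labels == j)]

# Receivers report 1, 1.2, 3, 3.5 and 8 Mb/s; three encoders are available.
print(assign_encoder_bitrates([1.0, 1.2, 3.0, 3.5, 8.0], n_encoders=3))
```

In a live system the controller would rerun this assignment whenever receiver bandwidth estimates change, which is how the dynamic recomputation described above follows long-term bandwidth variations.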
Immersive and Interactive Subjective Quality Assessment of Dynamic Volumetric Meshes
2023 15th International Conference on Quality of Multimedia Experience (QoMEX)


HTTP Adaptive Streaming (HAS) is becoming the de facto standard for Over-The-Top video streaming. Even though today's results are promising, one drawback is that current implementations are generally hard-coded: fixed parameter values are used to provide a decent Quality of Experience (QoE) under all circumstances, resulting in suboptimal solutions. By adaptively changing these parameters, however, results can be significantly improved. In this master's dissertation, we show how the concept of reinforcement learning can be applied in HAS solutions to provide the user with both an acceptable QoE and a low play-out delay. A self-learning client is proposed that adaptively changes the parameter configuration of two existing rate adaptation algorithms: the Microsoft IIS Smooth Streaming algorithm and the Fair In-Network Enhanced Adaptive Streaming algorithm by Petrangeli et al. [1][2]. Results indicate that this approach is indeed useful when video is streamed under changing network conditions.
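A tabular Q-learning loop of the kind described, reduced here to a toy one-step problem with invented states, actions and rewards (not the dissertation's actual parameter space), looks as follows:

```python
import random

# Toy environment: states are bandwidth regimes, actions are hypothetical
# client parameter configurations (target buffer sizes).
STATES  = ["low_bw", "high_bw"]
ACTIONS = ["small_buffer", "large_buffer"]

def reward(state, action):
    """Hand-crafted QoE proxy: a large buffer avoids freezes on a bad link,
    a small buffer keeps the play-out delay low on a good link."""
    table = {
        ("low_bw",  "small_buffer"): 0.0,   # freezes dominate
        ("low_bw",  "large_buffer"): 1.0,
        ("high_bw", "small_buffer"): 1.0,   # low delay, no freezes
        ("high_bw", "large_buffer"): 0.5,   # needless delay
    }
    return table[(state, action)]

def train(episodes=2000, alpha=0.2, epsilon=0.3, seed=0):
    """Tabular Q-learning, myopic (discount = 0) for this one-step sketch."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                      # observed network state
        if rng.random() < epsilon:                  # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

The learned policy picks a large buffer under low bandwidth and a small one under high bandwidth, mirroring the QoE-versus-delay trade-off the self-learning client navigates.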

2018 14th International Conference on Network and Service Management (CNSM), 2018

IEEE Communications Magazine, Oct 1, 2020
Technological improvements are rapidly advancing holographic-type content distribution. Significant research efforts have been made to meet the low-latency and high-bandwidth requirements set forward by interactive applications such as remote surgery and virtual reality. Recent research has made six degrees of freedom (6DoF) possible for immersive media, where users may both move their head and change their position within a scene. In this article, we present the status and challenges of 6DoF applications based on volumetric media, focusing on the key aspects required to deliver such services. Furthermore, we present results from a subjective study to highlight relevant directions for future research.

Tile-based Adaptive Streaming for Virtual Reality Video
ACM Transactions on Multimedia Computing, Communications, and Applications, Nov 30, 2019
The increasing popularity of head-mounted devices and 360° video cameras allows content providers to offer virtual reality (VR) video streaming over the Internet, using a two-dimensional representation of the immersive content combined with traditional HTTP adaptive streaming (HAS) techniques. However, since only a limited part of the video (i.e., the viewport) is watched by the user, the available bandwidth is not used optimally. Recent studies have shown the benefits of adaptive tile-based video streaming: rather than sending the whole 360° video at once, the video is cut into temporal segments and spatial tiles, each of which can be requested at a different quality level. This allows prioritization of viewable video content and thus results in increased bandwidth utilization. Given the early stage of this research, a number of open challenges remain to unlock the full potential of adaptive tile-based VR streaming. The aim of this work is to answer several of these open research questions. Among others, we propose two tile-based rate adaptation heuristics for equirectangular VR video, which use the great-circle distance between the viewport center and the center of each tile to decide upon the most appropriate quality representation. We also introduce a feedback loop in the quality decision process, which allows the client to revise prior decisions based on more recent information on the viewport location. Furthermore, we investigate the benefits of parallel TCP connections and the use of HTTP/2 as an application-layer optimization. Through an extensive evaluation, we show that the proposed optimizations result in a significant improvement in video quality (more than twice the time spent on the highest quality layer) compared to non-tiled HAS solutions.
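The great-circle-distance heuristic can be illustrated directly: compute the central angle between the viewport center and each tile center with the haversine formula, then bucket tiles into quality levels by rank. The rank-based bucketing rule below is a simplified stand-in for the paper's two heuristics:

```python
import math

def great_circle(lat1, lon1, lat2, lon2):
    """Central angle (radians) between two points on the unit sphere,
    via the haversine formula; all angles are in radians."""
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 \
        + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2.0 * math.asin(math.sqrt(a))

def tile_qualities(viewport, tile_centers, n_levels=3):
    """Rank tiles by distance to the viewport center and bucket them into
    quality levels: the nearest tiles get the highest level (n_levels - 1)."""
    dist = [great_circle(*viewport, *c) for c in tile_centers]
    order = sorted(range(len(tile_centers)), key=lambda i: dist[i])
    per_level = math.ceil(len(tile_centers) / n_levels)
    quality = [0] * len(tile_centers)
    for rank, i in enumerate(order):
        quality[i] = n_levels - 1 - rank // per_level
    return quality

# Viewport at (0, 0); tiles straight ahead, to the side, and behind the user.
print(tile_qualities((0.0, 0.0), [(0.0, 0.0), (0.0, 1.0), (0.0, 3.0)]))
```

A client would rerun this each time the viewport prediction updates, which is where the feedback loop mentioned above comes in: recent viewport samples can overturn quality decisions made for not-yet-downloaded tiles.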

IEEE Communications Letters, Nov 1, 2016
In HTTP Adaptive Streaming (HAS), video content is temporally divided into multiple segments, each encoded at several quality levels. The client can adapt the requested video quality to network changes, generally resulting in smoother playback. Unfortunately, live streaming solutions still often suffer from playout freezes and a large end-to-end delay. By reducing the segment duration, the client can use a smaller temporal buffer and respond even faster to network changes. However, since segments are requested sequentially, this approach is susceptible to high round-trip times. In this letter, we discuss the merits of an HTTP/2 push-based approach. We present the details of a measurement study on the available bandwidth in real 4G/LTE networks, and analyze the induced bitrate overhead for HEVC-encoded video segments with a sub-second duration. Through an extensive evaluation with the generated video content, we show that the proposed approach results in a higher video quality (+7.5%) and a lower freeze time (-50.4%), and makes it possible to reduce the live delay compared to traditional solutions over HTTP/1.1.

IEEE Access, 2023
The increasing popularity of head-mounted displays (HMDs) and depth cameras has encouraged content providers to offer interactive immersive media content over the internet. Traditionally, dynamic adaptive streaming over HTTP (DASH) is the go-to standard for video streaming. However, HTTP is built on top of protocols such as the transmission control protocol (TCP), which prioritize reliability over latency, thereby inducing additional delay due to acknowledgments and retransmissions, especially on lossy networks. In addition, such reliable protocols suffer from the head-of-line (HOL) blocking problem at various levels, leading to playout interruptions of the video streaming application. The third generation of HTTP, i.e., HTTP/3, was recently introduced to deal with the issues posed by TCP. As a major change, HTTP/3 replaces TCP with QUIC at the transport layer, which solves HOL blocking at that layer. Moreover, the datagram extension of QUIC allows for unreliable data delivery, just like UDP, which can substantially reduce latency. Combining this feature of QUIC with the quality adaptation capabilities of DASH-based streaming could bring interactive immersive media delivery to the next level. This work proposes the integration of DASH with the partial reliability concept of QUIC to reduce playout interruptions and increase the quality of the delivered immersive content on lossy networks. Here, the DASH scheme takes quality and prioritization decisions based on the changing network conditions and the user's viewport, respectively. The part of the content with the highest priority, i.e., within the viewport, is then delivered reliably and the rest unreliably. To the best of our knowledge, this is the first work to combine adaptive streaming with partial reliability.
Herein, we provide an implementation of a headless player which supports HTTP/3 over partially reliable QUIC, as well as state-of-the-art protocols like HTTP/3 over reliable QUIC and HTTP/2 over TCP. We performed an extensive evaluation of our proposed solution using real-world 5G throughput traces and bursty packet loss conditions, with point cloud streaming as the use case. Firstly, our evaluation shows that HTTP/2 is highly intolerant to loss and not suitable for streaming immersive media. Furthermore, even at a packet loss ratio as high as 5%, the partially reliable framework achieves 46% higher throughput and delivers the content with 33% fewer playout interruptions compared to its reliable counterpart. Since current point cloud decoders are sensitive to loss, we applied a forward error correction (FEC) mechanism to the data sent unreliably, to ensure that the client decodes the content with a probability of 99.9%. Even accounting for this overhead, our solution provides a significant gain of 25% in throughput compared to the state of the art.
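The FEC dimensioning step can be sketched with a simple binomial model: assuming an ideal (MDS) erasure code and independent packet losses, find the smallest number of repair symbols that reaches the 99.9% decoding probability mentioned above. This model is an assumption for illustration; the paper's actual FEC scheme and loss process (bursty in the evaluation) may differ:

```python
import math

def repair_symbols(n_source, loss_rate, target=0.999):
    """Smallest number of repair symbols r, assuming an ideal (MDS) erasure
    code, such that P[at most r of the n_source + r packets are lost]
    >= target under independent Bernoulli(loss_rate) packet losses."""
    r = 0
    while True:
        n = n_source + r
        p_decode = sum(math.comb(n, k)
                       * loss_rate ** k * (1.0 - loss_rate) ** (n - k)
                       for k in range(r + 1))
        if p_decode >= target:
            return r
        r += 1

# 100 source packets at a 5% loss ratio: repair count and resulting overhead.
r = repair_symbols(100, 0.05)
print(r, round(r / (100 + r), 3))
```

The overhead printed here is the bandwidth price of making unreliable delivery safe for loss-sensitive point cloud decoders; the 25% net throughput gain reported above is measured after paying it.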

Integrated Network Management, May 17, 2021
The advent of softwarized networks has enabled the deployment of service chains of virtual network components on computational resources ranging from the cloud to the edge. The next generation of use cases (e.g., Virtual Reality (VR) content delivery services) puts even more stringent requirements on the infrastructure, calling for considerable advancements towards fully cloud-native architectures. This paper identifies important challenges for next-generation architectures to support low-latency applications throughout their execution life cycle. A flexible architecture named SRFog is presented for the support of VR content delivery in next-generation networks. The approach combines Fog Computing (FC) concepts, an extension of cloud computing, with Segment Routing (SR), which leverages the source routing paradigm. Service Function Chaining (SFC) is also discussed as a major functionality for the proper orchestration of emerging use cases. Early implementations show promising results when deploying container-based VR chains in a flexible FC architecture.

IEEE Access, 2018
News-based websites and portals provide significant amounts of multimedia content to accompany news stories and articles. In this context, HTTP adaptive streaming is generally used to deliver video over the best-effort Internet, allowing smooth video playback and an acceptable Quality of Experience (QoE). To stimulate user engagement with the provided content, such as browsing between videos, reducing the videos' startup time has become more and more important: while the current median load time is in the order of seconds, research has shown that user waiting times must remain below two seconds to achieve an acceptable QoE. In this paper, four complementary components are optimized and integrated into a comprehensive framework for low-latency delivery of news-related video content: 1) server-side encoding with short video segments; 2) HTTP/2 server push at the application layer; 3) server-side user profiling to identify relevant content for a given user; and 4) client-side storage to hold proactively delivered content. Using a large dataset of a major Belgian news provider, containing millions of text- and video-based article requests, we show that the proposed framework reduces the videos' startup time in different mobile network scenarios by over 50%, thereby improving user interaction and the skimming of available content.
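Components 3) and 4) can be sketched together: a profiler ranks candidate articles against the user's viewing history, and the top-ranked videos are pushed into client-side storage ahead of time. The tag-overlap scoring rule and data layout below are invented for illustration; the paper's profiling is built on real request logs:

```python
from collections import Counter

def rank_candidates(viewed_tags, candidates):
    """Score candidate articles by tag overlap with the user's viewing
    history (a deliberately simple stand-in for server-side profiling)."""
    profile = Counter(viewed_tags)
    return sorted(candidates,
                  key=lambda art: -sum(profile[t] for t in art["tags"]))

def prefetch_plan(viewed_tags, candidates, cache_slots):
    """Pick the articles whose videos should be proactively pushed into
    client-side storage, so play-out can start without a network round trip."""
    return [art["id"]
            for art in rank_candidates(viewed_tags, candidates)[:cache_slots]]

history = ["sports", "sports", "politics"]
articles = [
    {"id": "a1", "tags": ["economy"]},
    {"id": "a2", "tags": ["sports", "politics"]},
    {"id": "a3", "tags": ["sports"]},
]
print(prefetch_plan(history, articles, cache_slots=2))
```

Anything already sitting in client-side storage when the user taps a video starts instantly, which is what drives the reported startup-time reduction of over 50%.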

A Scalable Hierarchically Distributed Architecture for Next-Generation Applications
Journal of Network and Systems Management, Sep 14, 2021
The rigidity of traditional network architectures, with tightly coupled control and data planes, impairs their ability to adapt to the highly dynamic requirements of future application domains. While Software-Defined Networking (SDN) can provide the required dynamism, it suffers from scalability issues. Therefore, efforts have been made to propose alternative decentralized solutions, such as the flat distributed SDN architecture. Such alternatives address the scalability problem mainly for local flows, but are impaired by a substantial increase in overhead for cross-domain flow setup. To manage the trade-off between scalability and overhead, intermediate hierarchical solutions are needed; however, these have not yet been explored to their full potential. Furthermore, the Network Function Virtualization (NFV) paradigm complements SDN by offering computational and storage services in the form of Virtual Network Functions (VNFs). When integrated seamlessly, SDN and NFV together can offer solutions to the problems posed by highly dynamic application domains. Hence, this work proposes a scalable hierarchical SDN control plane architecture for SDN/NFV-based next-generation application domains, such as immersive media delivery systems. We have implemented the proposed architecture based on the well-known state-of-the-art ZeroSDN controller. To evaluate its performance, we have implemented an on-demand immersive media (point cloud) streaming application and varied the load on the control plane using background traffic. To benchmark our solution, we have compared its performance with centralized and flat distributed architectures. We show that the proposed architecture performs better than the others in terms of scalability, lost flows, and processing latency.
Our study shows that the proposed architecture, when distributed over three controllers, accepts 23% more flows with almost 70% lower processing latency compared to the state-of-the-art ONOS controller.

ACM Transactions on Multimedia Computing, Communications, and Applications, Nov 30, 2019
To cope with the massive bandwidth demands of Virtual Reality (VR) video streaming, both the scientific community and the industry have been proposing optimization techniques such as viewport-aware streaming and tile-based adaptive bitrate heuristics. As most VR video traffic is expected to be delivered through mobile networks, a major problem arises: both the network performance and the VR video optimization techniques can influence the video playout performance and the Quality of Experience (QoE). However, the interplay between them is neither trivial nor has it been properly investigated. To bridge this gap, in this article we introduce VR-EXP, an open-source platform for carrying out VR video streaming performance evaluation. Furthermore, we consolidate a set of relevant VR video streaming techniques and evaluate them under variable network conditions, contributing to an in-depth understanding of what to expect when different combinations are employed. To the best of our knowledge, this is the first work to propose a systematic approach, accompanied by a software toolkit, that allows one to compare different optimization techniques under the same circumstances. Extensive evaluations carried out using realistic datasets demonstrate that VR-EXP is instrumental in providing valuable insights into the interplay between network performance and VR video streaming optimization techniques.

ACM Transactions on Multimedia Computing, Communications, and Applications, Apr 30, 2018
Video streaming applications currently dominate Internet traffic. In particular, HTTP Adaptive Streaming (HAS) has emerged as the dominant standard for streaming videos over the best-effort Internet, thanks to its capability of matching the video quality to the available network resources. In HAS, the video client is equipped with a heuristic that dynamically decides the most suitable quality at which to stream the content, based on information such as the perceived network bandwidth or the video player's buffer status. The goal of this heuristic is to optimize the quality as perceived by the user, the so-called Quality of Experience (QoE). Despite the many advantages brought by the adaptive streaming principle, optimizing users' QoE is far from trivial. Current heuristics are still suboptimal when sudden bandwidth drops occur, especially in wireless environments, leading to freezes in the video playout, the main factor influencing users' QoE. This issue is aggravated for live events, where the player buffer has to be kept as small as possible in order to reduce the playout delay between the user and the live signal. In light of the above, several works have been proposed in recent years with the aim of extending the classical, purely client-based structure of adaptive video streaming in order to fully optimize users' QoE. In this paper, we present a survey of research on this topic, together with a classification based on where the optimization takes place. This classification goes beyond client-based heuristics to investigate the use of server- and network-assisted architectures and of new application- and transport-layer protocols. In addition, we outline the major challenges currently arising in the field of multimedia delivery, which will be of extreme relevance in the coming years.

HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. For each segment, a client selects and retrieves the most suitable quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, the IETF has standardized HTTP/2, a new version of the HTTP protocol that provides features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming, in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.
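A back-of-the-envelope model shows why removing the per-segment request helps. The formula below is our own simplification (one request RTT plus segment-completion wait for pull, one-way delivery of a ready segment for push), not the paper's analysis, and it will not reproduce the exact percentages reported above:

```python
def server_to_display_delay(seg_dur, rtt, dl_time, push):
    """Very simplified latency model (an assumption for illustration): a
    pull-based client spends one RTT on each segment request and must wait
    for the segment to be fully produced, while a push-based server sends a
    super-short segment the moment it is ready (one-way delay only)."""
    if push:
        return rtt / 2.0 + dl_time
    return rtt + seg_dur + dl_time

# Super-short 0.5 s segments, 300 ms RTT, 100 ms download time.
pull = server_to_display_delay(0.5, 0.3, 0.1, push=False)    # about 0.9 s
pushed = server_to_display_delay(0.5, 0.3, 0.1, push=True)   # about 0.25 s
print(pull, pushed)
```

Even in this crude model the push path avoids both the request RTT and the wait for segment completion, which is why shrinking segments only pays off once requests no longer gate each delivery.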
Adapting and tiling the streaming of virtual reality (VR) video content has the potential to reduce the ultra-high bandwidth requirements of this type of multimedia service. Towards that goal, the optimization of a number of aspects is currently being actively researched. Novel rate adaptation heuristics, sophisticated viewport prediction algorithms and streaming protocol optimizations have proven their value in improving certain aspects of the VR streaming chain. However, the interplay between all these optimizations, as well as their trade-offs, has not yet been explored in an experimental playground. The purpose of this demonstrator is to provide a full end-to-end adaptive tile-based VR video streaming system in which each of the optimization aspects can be tuned and its effect illustrated on-site.