Papers by Nalini Venkatasubramanian
Handbook of Energy-Aware and Green Computing, Volume 1, 2012

Proceedings of the 8th International Workshop on Adaptive and Reflective Middleware - ARM '09, 2009
On behalf of the organising committee we would like to welcome you to this 8th edition of the Workshop on Adaptive and Reflective Middleware, which this year is hosted in Urbana-Champaign, Illinois, USA. The goal of the ARM workshop series, for this and its seven prior incarnations, is to bring together researchers working on techniques and middleware platforms to engineer dynamic adaptations in distributed systems. This year we received ten research paper submissions on this topic; a rigorous review process followed, with each paper receiving three or four reviews from international experts. Based on this process, six papers were selected to appear in these proceedings and be presented at the workshop. The papers cover a range of topics related to the key issues of dynamic adaptation, including new techniques to perform and manage adaptation in distributed systems, as well as example application domains where the use of adaptive middleware is fundamentally important. We hope these papers will form the basis of lively discussion at the workshop.

Proceedings of the 15th International Middleware Conference on - Middleware '14, 2014
In this paper, we present CrowdWiFi, a novel vehicular middleware to identify and localize roadside WiFi APs that are located outside or inside buildings. Our work is motivated by the recent surge in availability of open WiFi access points (APs) that are enabling opportunistic data services to moving vehicles. Two key elements of CrowdWiFi provide vehicles with opportunistic WiFi access: (a) an online compressive sensing component and (b) an offline crowdsourcing module. Online compressive sensing (CS) techniques are primarily used for the coarse-grained estimation of nearby APs along the driving route; here, the received signal strength (RSS) values are recorded at runtime, and the number and locations of APs are recovered immediately based on limited RSS readings. The offline crowdsourcing mechanism assigns the online CS tasks to crowd-vehicles and aggregates answers using a bipartite graphical model. This offline crowdsourcing executes at a crowd-server that iteratively infers the reliability of each crowd-vehicle from the aggregated sensing results and refines the estimation of APs using weighted centroid processing. Extensive simulation results and real testbed experiments confirm that CrowdWiFi can successfully reduce the number of measurements needed for AP recovery, while maintaining satisfactory counting and localization accuracy. In addition, the impact of the CrowdWiFi middleware on WiFi handoff and data transmission applications is examined.
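As a rough illustration of the weighted centroid processing step, the sketch below estimates an AP's position from crowd-reported RSS readings. The `Reading` structure, the linear RSS-to-weight mapping, and the trust factor are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    x: float        # vehicle position (m) when the RSS sample was taken
    y: float
    rss_dbm: float  # received signal strength from the candidate AP
    trust: float    # inferred reliability of the reporting crowd-vehicle

def weighted_centroid(readings):
    """Estimate an AP location as a centroid of sample positions,
    weighted by signal strength (stronger = closer) and vehicle trust."""
    wx = wy = wsum = 0.0
    for r in readings:
        # Map RSS (roughly -90..-30 dBm) to a positive weight so that
        # stronger signals dominate; the linear mapping is an assumption.
        w = max(r.rss_dbm + 100.0, 1.0) * r.trust
        wx += w * r.x
        wy += w * r.y
        wsum += w
    return wx / wsum, wy / wsum

est = weighted_centroid([
    Reading(0.0, 0.0, -40.0, 1.0),   # strong signal, trusted vehicle
    Reading(10.0, 0.0, -80.0, 0.5),  # weak signal, less trusted
])
print(est)  # skewed toward (0, 0), where the signal was strongest
```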

MILCOM 2008 - 2008 IEEE Military Communications Conference, 2008
Communication networks are vulnerable to natural disasters, such as earthquakes or floods, as well as to human attacks, such as an electromagnetic pulse (EMP) attack. Such real-world events have geographical locations, and therefore, the geographical structure of the network graph affects the impact of these events. In this paper we focus on assessing the vulnerability of (geographical) networks to such disasters. In particular, we aim to identify the location of a disaster that would have the maximum effect on network capacity. We consider a geometric graph model in which nodes and links are geographically located on a plane. Specifically, we model the physical network as a bipartite graph (in the topological and geographical sense) and consider the set of all vertical line segment cuts. For that model, we develop a polynomial-time algorithm for finding a worst possible cut. Our approach has the potential to be extended to general graphs and provides a promising new direction for network design to avert geographical disasters or attacks.
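To make the vertical-cut search concrete, here is a minimal brute-force sketch of the idea: sweep candidate vertical lines across the plane and report the one that intersects the largest total link capacity. The graph representation and the choice of node x-coordinates as candidate positions are assumptions for illustration; the paper's polynomial-time algorithm is more refined.

```python
def worst_vertical_cut(nodes, links):
    """nodes: {id: (x, y)}; links: [(u, v, capacity)].
    Return the x position of a vertical line whose removal
    (cutting every link it crosses) destroys the most capacity."""
    # Candidate lines: one at each node's x-coordinate suffices, since
    # the set of links crossed only changes at node positions.
    candidates = sorted(x for x, _ in nodes.values())
    best_x, best_loss = None, -1.0
    for cx in candidates:
        loss = sum(cap for u, v, cap in links
                   if min(nodes[u][0], nodes[v][0]) <= cx
                   < max(nodes[u][0], nodes[v][0]))
        if loss > best_loss:
            best_x, best_loss = cx, loss
    return best_x, best_loss

nodes = {"a": (0, 0), "b": (2, 1), "c": (4, 0)}
links = [("a", "b", 10.0), ("b", "c", 5.0), ("a", "c", 1.0)]
print(worst_vertical_cut(nodes, links))  # cut at x=0 severs 11.0
```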

Proceedings of the fifth ACM international conference on Multimedia - MULTIMEDIA '97, 1997
In this paper, we address the issues in designing metrics that are important in evaluating the QoS of video transmission. There has been little work in determining effective metrics of QoS for video transmission that characterize both cost (revenue generated or service demand) and guaranteed service. The metrics of analysis and comparison for video transmission must be determined as an end-to-end measure of QoS from video server to end-user(s). By developing these metrics, we hope to enhance the client, server and networking components of a system with monitoring capabilities to measure and evaluate video characterizations. We propose a new metric for video QoS called the weighted cost-satisfaction ratio, based on requirements from two perspectives: the user and the service provider. To understand real video workload environments and user behavioral patterns, we obtained and analyzed empirical results from the VOSAIC (video-over-the-Web) system, a hierarchical video-on-demand (VOD) system and a remote VCR system. Based on these results, we define parameters of resource consumption (storage and network bandwidth, etc.) and user satisfaction (jitter, synchronization skew) and derive analytical interrelationships among the metric parameters. We also draw an economic relationship between the user-satisfaction and resource-consumption factors to solve metric optimization relations. This paper is organized as follows. In Section 2, we discuss a workload model for developing and understanding QoS metrics. Section 3 presents empirical studies and experimental justification for the metric selection based on three systems: VOSAIC, hierarchical VOD and the remote VCR system. Section 4 proposes a new integrated metric for measuring video QoS and the analytical framework to express the tradeoffs; we also propose a metric-based QoS architecture along with negotiation and reward protocols. In Section 5 we discuss related work, and we conclude with future research directions in Section 6.
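The abstract names the metric but does not give its closed form; a plausible shape, purely as an assumption for illustration, is a ratio of weighted user satisfaction to weighted resource cost:

```latex
% Hypothetical closed form for the weighted cost-satisfaction ratio
% (WCSR): s_i are normalized user-satisfaction factors (jitter,
% synchronization skew), c_j are resource-consumption factors (storage,
% network bandwidth), and the weights encode user/provider priorities.
\mathrm{WCSR} \;=\; \frac{\sum_i \alpha_i \, s_i}{\sum_j \beta_j \, c_j},
\qquad \sum_i \alpha_i \;=\; \sum_j \beta_j \;=\; 1
```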
Extending Database Technology, 2008
In this demonstration, we present the design and features of iDataGuard. iDataGuard is an interoperable security middleware that allows users to outsource their file systems to heterogeneous data storage providers available on the Internet. Examples of data storage providers include the Amazon S3 service, Rapidshare.de and Nirvanix. In the iDataGuard architecture, data storage providers are untrusted; iDataGuard therefore preserves the confidentiality and integrity of outsourced data using cryptographic techniques.

IEEE Network, 2004
In the future, applications will need to execute in a ubiquitous environment with varying network conditions (connectivity, bandwidth, etc.) and system constraints (e.g., power and storage). The distributed object paradigm is often used to facilitate the development of large-scale distributed applications. However, the traditional object messaging layer operates with limited awareness of underlying system and network conditions, whereas current system and network monitoring tools operate at the network layer with little awareness of application-level object communication requirements. This article explores the possibility, mechanisms, and benefits of filling the gap between object messaging and system monitoring. We introduce the connection abstraction as the mechanism for these two layers to communicate and exchange information. Through this integration, object messaging can proactively adapt to changing system conditions, and system monitoring policies and parameters can be optimized based on inter-object communication properties.
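A minimal sketch of what such a connection abstraction might look like, assuming a simple observer-style API; all names here are hypothetical, not the article's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LinkConditions:
    bandwidth_kbps: float
    latency_ms: float
    connected: bool

class Connection:
    """Bridges object messaging and system monitoring: the messaging
    layer registers callbacks, the monitor pushes condition updates."""
    def __init__(self):
        self._listeners: List[Callable[[LinkConditions], None]] = []
        self.conditions = LinkConditions(0.0, 0.0, False)

    def on_change(self, cb):       # messaging layer subscribes
        self._listeners.append(cb)

    def report(self, conditions):  # monitoring layer publishes
        self.conditions = conditions
        for cb in self._listeners:
            cb(conditions)

# The messaging layer can then adapt proactively, e.g. batch small
# messages whenever reported bandwidth drops below a threshold.
```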
Communications of the ACM, 2002
Look to composable middleware frameworks to ensure safe middleware interactions for ubiquitous computing applications.

Open distributed systems evolve dynamically, and their components interact with environments that are not under their control. A reflective model of distributed computation supports separation of concerns (for example, functionality and different QoS properties) and dynamic adaptation to changing environments or requirements. In such an ODS, a wide range of services and activities must execute concurrently and share resources. To avoid resource conflicts, deadlocks, inconsistencies, and incorrect execution semantics, the underlying resource management system (middleware) must ensure that concurrent system activities compose in a correct manner. Designers and programmers must consider interactions within and across reflective levels, clearly spell out the semantics of shared distributed resources, and develop new notions of overall system correctness that account for a dynamic, distributed, and reflective setting. To better understand the semantic issues involved in reflective distributed systems, we developed the TLAM [1-3], a two-level actor model based on the actor model of object-based distributed computation [4-6]. Actors is a model of distributed reactive objects with a built-in notion of encapsulation and interaction, making it well suited to represent evolution and coordination among interacting components in distributed applications. Traditional passive objects encapsulate the execution state and a set of procedures that manipulate it; an actor encapsulates a thread of control as well. Each actor potentially executes in parallel with other actors and interacts only by sending and receiving messages. As the name suggests, in the TLAM a system is composed of two kinds of actors, base-level and meta-level, distributed over a network of processing nodes. Base-level actors carry out application-level computation, while meta-level actors are part of the runtime system (middleware) that manages system resources and controls the base level's runtime semantics. The TLAM abstracts from the choice of a specific programming language or system architecture, providing a framework for reasoning about heterogeneous systems. It supports dynamic customizability and separation of concerns in designing and reasoning about ODS components. In addition, it uses reification (base-object state as data at the meta-object level) and reflection (meta-objects modify the base-object state), with support for implicit invocation of meta-objects in response to changes of base-level state. This provides for debugging, monitoring, and other services. We have used the TLAM framework in several case studies: distributed garbage collection [7,8], composition of migration and reachability services [1,3], and QoS-based resource management for multimedia.
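A toy rendering of the two-level idea, assuming message passing via plain Python method calls; the TLAM itself is a formal model, not an implementation, and the policy shown is invented for illustration.

```python
class BaseActor:
    """Application-level actor; state changes are reified to a meta-actor."""
    def __init__(self, name, meta):
        self.name, self.state, self.meta = name, {}, meta

    def receive(self, msg):
        self.state.update(msg)
        self.meta.notify(self, msg)   # implicit invocation on state change

class MetaActor:
    """Runtime-level actor: observes base state (reification) and may
    adjust it (reflection), e.g. for monitoring or resource management."""
    def notify(self, base, msg):
        if len(base.state) > 100:     # illustrative resource policy
            base.state.clear()        # reflection: modify base-object state

meta = MetaActor()
a = BaseActor("a1", meta)
a.receive({"x": 1})                   # base computation, observed by meta
```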

In this paper, we integrate components for (a) resource provisioning and (b) information collection that allow for cost-effective enforcement of application QoS while tolerating imprecision of system state information. We discuss a family of policies for each component and provide mechanisms for dynamically selecting appropriate combinations of these policies. Our objective is to select suitable management policies that maximize the number of concurrently supported users while minimizing the overhead needed to ensure QoS enforcement for these users. Our performance results indicate that the "best" combination of policies depends on existing system conditions. However, we generally observe that the highly adaptive dynamic range-based information collection mechanisms exhibit superior performance under most system conditions and user workloads. All of these observations are applied in our integrated middleware framework AutoSeC (Automatic Service Composition). AutoSeC provides support for dynamic service brokering that ensures effective utilization of system resources in a distributed environment.
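As a sketch of the dynamic selection idea (the policy names, the scoring rule, and the `evaluate` hook are assumptions, not AutoSeC's actual mechanism): enumerate provisioning/collection policy pairs and keep the one that maximizes admitted users net of enforcement overhead.

```python
def select_policies(pairs, evaluate):
    """pairs: iterable of (provisioning, collection) policy pairs.
    evaluate: returns (supported_users, overhead) for a pair under the
    current system conditions, e.g. from a short trial or a model."""
    def score(pair):
        users, overhead = evaluate(pair)
        return users - overhead      # illustrative utility function
    return max(pairs, key=score)

demo = {("static", "periodic"): (80, 10), ("dynamic", "range"): (95, 8)}
print(select_policies(demo.keys(), lambda p: demo[p]))
# -> ('dynamic', 'range'): score 87 beats 70
```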
In this paper, we present the design of the DataGuard middleware, which allows users to outsource their file systems to heterogeneous data storage providers available on the Internet. Examples of data storage providers include gmail.com, rapidshare.de and the Amazon S3 service. In the DataGuard architecture, data storage providers are untrusted; DataGuard therefore preserves the confidentiality and integrity of outsourced information using cryptographic techniques. DataGuard effectively builds a secure network drive on top of any data storage provider on the Internet. We propose techniques that realize a secure file system over the heterogeneous data models offered by the diverse storage providers. To evaluate the practicality of DataGuard, we implemented a version of the middleware layer and measured its performance, with satisfactory results.
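A minimal sketch of the client-side protection step, assuming AES-GCM authenticated encryption via the Python `cryptography` package; the paper's exact scheme and key management are not specified in the abstract, so treat this as one plausible instantiation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(plaintext: bytes, key: bytes, file_id: bytes) -> bytes:
    """Encrypt-and-authenticate a file block before uploading it to an
    untrusted provider; the file identifier is bound as associated data
    so blocks cannot be swapped between files undetected."""
    nonce = os.urandom(12)                 # fresh 96-bit GCM nonce
    ct = AESGCM(key).encrypt(nonce, plaintext, file_id)
    return nonce + ct                      # store the nonce alongside

def recover(blob: bytes, key: bytes, file_id: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, file_id)  # raises if tampered

key = AESGCM.generate_key(bit_length=256)
blob = protect(b"secret contents", key, b"file-42")
assert recover(blob, key, b"file-42") == b"secret contents"
```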

We consider the problem of evaluating continuous selection queries over sensor-generated values in the presence of faults. Small sensors are fragile, have finite energy and memory, and communicate over a lossy medium; hence, tuples produced by them may not reach the querying node, resulting in an incomplete and ambiguous answer, as any of the non-reporting sensors may have produced a tuple which was lost. We develop a protocol, FAult Tolerant Evaluation of Continuous Selection Queries (FATE-CSQ), which guarantees a user-requested level of quality in an efficient manner. When many faults occur, this may not be achievable; in that case, we aim for the best possible answer under the query's time constraints. FATE-CSQ is designed to be resilient to different kinds of failures. Our design decisions are based on an analytical model of different fault tolerance strategies based on feedback and retransmission. Additionally, we demonstrate the good performance of FATE-CSQ compared to competing protocols with realistic simulation parameters and under a variety of conditions.
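One piece of a feedback-and-retransmission model can be illustrated directly: under independent per-message loss probability p, a tuple retransmitted up to r extra times arrives with probability 1 - p^(r+1). The sketch below, a simplification assumed for illustration rather than the paper's full analytical model, picks the smallest retransmission budget that meets a requested answer quality.

```python
def min_retransmissions(p_loss: float, quality: float, r_max: int = 10):
    """Smallest r such that P(tuple delivered) = 1 - p^(r+1) >= quality.
    Returns None if unattainable within r_max, in which case a protocol
    would fall back to the best answer achievable within the deadline."""
    for r in range(r_max + 1):
        if 1.0 - p_loss ** (r + 1) >= quality:
            return r
    return None

print(min_retransmissions(0.3, 0.99))  # -> 3, since 0.3**4 = 0.0081 < 0.01
```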
This paper presents DataVault, an architecture designed for Web users that allows them to securely access their data from any machine connected to the Internet and also lets them selectively share their data with trusted peers. The DataVault architecture is built on the outsourced database model (ODB), where clients/users outsource their database to a remote service provider that provides data management services such as backup, recovery, transportability and data sharing. In DataVault, the service provider is untrusted. DataVault utilizes a novel PKI and encrypted storage model that allows data sharing to take place via an untrusted server. The confidentiality and integrity of the user's data are preserved using cryptographic techniques, so that the service provider manages encrypted data without having access to the content.
In this paper, we develop an energy-efficient approach to processing continuous aggregate queries in sensor networks with bounded quality constraints. Specifically, we exploit quality-aware in-network aggregation of query information to reduce communication costs. Given cluster structures over sensor nodes, we develop an adaptive approximate data processing protocol that aggregates data and processes queries within the error tolerances specified on user queries. The protocol enforces group quality constraints on clusters and adjusts error bounds to balance the workload of clusters and reduce communication cost. It also favors local communication over global communication in an effort to minimize overall communication overhead. Our experimental results indicate that significant benefits can be achieved by using our adaptive protocol.
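A sketch of the bound-adjustment idea, under the assumption that a total error budget is split across clusters in proportion to how frequently each cluster's aggregate changes, so busier clusters get looser bounds and report less often; the proportional rule is illustrative, not the paper's exact policy.

```python
def split_error_budget(total_eps: float, update_rates: dict) -> dict:
    """Allocate per-cluster error bounds so they sum to total_eps.
    update_rates: cluster id -> observed rate of aggregate change.
    Clusters that change more get a larger share of the budget,
    reducing how often they must push updates toward the root."""
    total_rate = sum(update_rates.values())
    return {cid: total_eps * rate / total_rate
            for cid, rate in update_rates.items()}

print(split_error_budget(10.0, {"c1": 5.0, "c2": 3.0, "c3": 2.0}))
# -> {'c1': 5.0, 'c2': 3.0, 'c3': 2.0}
```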
Information technology has the potential to improve the quality and the amount of information humans receive during emergency response. Testing this technology in realistic and flexible environments is a non-trivial task. DrillSim is an augmented-reality simulation environment for testing IT solutions: it provides an environment where scientists and developers can bring their IT solutions and test their effectiveness in the context of disaster response. The architecture of DrillSim is based on a multi-agent simulation, in which the disaster response activity is simulated by modeling each person involved as an agent. This finer granularity makes the system extensible, since new scenarios can be defined by defining new agents. This paper presents the architecture of DrillSim and explains in detail how DrillSim handles the editing and addition of agent roles.
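The role-based extensibility can be pictured as a small plug-in interface, where each agent role supplies its own per-step behavior; all names and the tuple-based action format below are hypothetical, invented to illustrate the idea.

```python
class AgentRole:
    """Base class for DrillSim-style agent roles; a new scenario is
    defined by registering new role subclasses, one per actor type."""
    def step(self, position, world):
        raise NotImplementedError

class Evacuee(AgentRole):
    def step(self, position, world):
        return ("move", world["nearest_exit"])      # head for an exit

class FirstResponder(AgentRole):
    def step(self, position, world):
        return ("move", world["nearest_casualty"])  # head for a victim

roles = {"evacuee": Evacuee(), "responder": FirstResponder()}
world = {"nearest_exit": (0, 0), "nearest_casualty": (5, 3)}
print(roles["responder"].step((1, 1), world))       # ('move', (5, 3))
```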

Optimizing user experience for streaming video applications on handheld devices is a significant research challenge. In this paper, we propose an integrated power management approach that unifies low-level architectural optimizations (CPU, memory, registers), OS power-saving mechanisms (Dynamic Voltage Scaling) and adaptive middleware techniques (admission control, optimal transcoding, network traffic regulation). Specifically, we identify interaction parameters between the different levels and optimize them to significantly reduce power consumption. With knowledge of device configurations, dynamic device parameters and changing system conditions, the middleware layer selects an appropriate video quality and fine-tunes the architecture for optimized delivery of video. Our performance results indicate that architectural optimizations that are cognizant of user-level parameters (e.g., transcoded video quality) can provide energy gains as high as 57.5% for the CPU and memory. Middleware adaptations to changing network noise levels can save as much as 70% of the energy consumed by the wireless network interface. Furthermore, we demonstrate how such an integrated framework, with tight coupling of inter-level parameters, can substantially enhance the user experience on a handheld device.
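The Dynamic Voltage Scaling piece of such a stack can be sketched as picking the lowest CPU frequency that still decodes each frame before its deadline; the frequency table and the reasoning about the power model are illustrative assumptions, not the paper's measured configuration.

```python
FREQS_MHZ = [200, 400, 600, 800]   # hypothetical DVS operating points

def pick_frequency(cycles_per_frame: float, deadline_s: float) -> int:
    """Lowest frequency meeting the frame deadline. Dynamic power scales
    roughly with f * V^2 and supply voltage scales with frequency, so
    running at the slowest adequate frequency saves energy."""
    for f in FREQS_MHZ:
        if cycles_per_frame / (f * 1e6) <= deadline_s:
            return f
    return FREQS_MHZ[-1]           # saturate: best effort at max speed

# e.g. 12e6 cycles per frame at 30 fps (deadline 1/30 s) -> 400 MHz
print(pick_frequency(12e6, 1 / 30))
```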

Streaming multimedia content to heterogeneous handheld devices is a significant research challenge, due to the diverse computation capabilities and battery lifetimes of these devices. A unified framework that integrates low-level architectural optimizations (CPU, memory), OS power-saving mechanisms (Dynamic Voltage Scaling) and adaptive middleware techniques (admission control, transcoding, network traffic regulation) can provide significant improvements in both system performance and user experience. In this paper, we present such an integrated framework and investigate the trade-offs involved in serving distributed clients simultaneously, while maintaining acceptable QoS levels for each client. We show that the power savings attained at both the CPU/memory and network levels can be aggregated for increased overall performance. Based on this, we demonstrate how an integrated framework that supports tight coupling of inter-level parameters can enhance user experience on handheld devices.