Sustainable Computing: Informatics and Systems, 2021
• TAGS, a job allocation algorithm, is modelled using a Markovian process algebra known as PEPA. • The working environment is assumed to be heterogeneous, and the job size distribution is assumed to be two-phase hyper-exponential. • The analysis of the results reveals that the TAGS algorithm is sensitive to the timeout value. • Concerning total energy consumption, TAGS is shown to consume more energy than shortest queue and weighted random in all the server combinations considered. • Energy per job can be used to identify the best timeout value for TAGS, i.e. that which produces the highest possible throughput with minimal impact on energy consumption.
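The energy-per-job selection criterion above can be sketched in a few lines: pick the timeout whose power-to-throughput ratio is smallest. The candidate timeouts and power/throughput figures below are invented for illustration, not taken from the paper's PEPA results.

```python
# Hypothetical illustration: choosing a TAGS timeout by energy per job.
# All numbers below are assumed, not results from the paper.

def energy_per_job(power_watts: float, throughput_jobs_per_s: float) -> float:
    """Energy consumed per completed job, in joules per job."""
    return power_watts / throughput_jobs_per_s

# candidate timeout (s) -> (total power draw in W, throughput in jobs/s)
candidates = {
    0.5: (400.0, 18.0),
    1.0: (420.0, 25.0),
    2.0: (450.0, 24.0),
}

# The best timeout minimises joules per job, balancing throughput
# gains against the extra power the servers draw.
best_timeout = min(candidates, key=lambda t: energy_per_job(*candidates[t]))
```

With these made-up figures, the middle timeout wins: it draws slightly more power than the smallest timeout but completes enough extra jobs to lower the per-job cost.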
INTRODUCTION Two aspects of our research concern the application of formal methods in human-computer interaction. The first aspect is the modelling and analysis of interactive devices with a particular emphasis on the user-device dyad. The second is the modelling and analysis of ubiquitous systems where there are many users, one might say crowds of users. The common thread of both is to articulate and prove properties of interactive systems, to explore interactive behaviour as it influences the user, with a particular emphasis on interaction failure. The goal is to develop systematic techniques that can be packaged in such a way that they can be used effectively by developers. This “white paper” will briefly describe the two approaches and their potential value as well as their limitations and development opportunities.
Electronic Notes in Theoretical Computer Science, 2020
We evaluate energy consumption under unknown service demands using three policies: task assignment based on guessing size (TAGS), the shortest queue strategy and random allocation in a homogeneous environment. We modelled these policies using Performance Evaluation Process Algebra (PEPA) to derive numerical solutions. Our results show that servers running under TAGS consume more energy than under the other policies in terms of total energy consumption. In contrast, TAGS consumes less energy than random allocation in terms of energy per job when the arrival rate is high and the job size is variable.
Electronic Notes in Theoretical Computer Science, 2020
In this paper we present two performance models of a web-based sales system, one without the presence of an attack and the other with the presence of a denial of service attack. Models are formulated using the PEPA formalism. The PEPA eclipse plug-in is used to support the creation of the PEPA models for the web-based sales system and the automatic calculation of the performance measures identified to evaluate the models. The evaluation of the models illustrates how the performance of the warehouse's sales is negatively affected by a denial of service attack, through preventing some or all customers' orders from being fulfilled. The resultant delay in selling perishable products would result in products being discarded.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Forshaw M, McGough AS, Thomas N. Energy-efficient Checkpointing in High-throughput Cycle-stealing Distributed Systems.
Abstract. Passage time densities are useful performance measurements in stochastic systems. With them the modeller can extract probabilistic quality-of-service guarantees such as: the probability that the time taken for a network header packet to travel across a heterogeneous network is less than 10ms must be at least 0.95. In this paper, we show how new tools can extract passage time densities and distributions from stochastic models defined in PEPA, a stochastic process algebra. In stochastic process algebras, the synchronisation policy is important for defining how different system components interact. We also show how these passage time results can vary according to which synchronisation strategy is used. We compare results from two popular strategies.
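A quality-of-service check of the kind quoted above can be illustrated with a deliberately simplified model. The sketch below assumes the passage time is a single exponential; real PEPA passage times are phase-type distributions built from many exponential stages, so this is a one-step special case with an assumed rate, not the paper's method.

```python
import math

def passage_prob_within(rate_per_s: float, deadline_s: float) -> float:
    """P(passage time <= deadline) when the passage time is a single
    exponential stage with the given rate. Real PEPA passage times are
    phase-type, so this is only an illustrative special case."""
    return 1.0 - math.exp(-rate_per_s * deadline_s)

# QoS target from the abstract: P(T < 10 ms) must be at least 0.95.
rate = 350.0                       # assumed completion rate, per second
p = passage_prob_within(rate, 0.010)
meets_target = p >= 0.95
```

At the assumed rate of 350 completions per second the guarantee holds; dropping the rate much below 300/s would violate it, which is exactly the kind of threshold a passage time tool lets the modeller locate.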
In this paper we present three formal performance models, using PEPA, for three types of misbehaving voters when using the DRE-i e-voting system. We use the constructed performance models to study the impact of the intervention of misbehaving voters on the throughput of four main actions of the DRE-i e-voting system. Our performance analysis reveals that the three types of misbehaving voters have a negative impact on the throughput of the DRE-i server actions.
In this paper we consider variations in performance between different communicating pairs of nodes within a restricted network topology. This scenario highlights potential unfairness in network access, leading to one or more pairs of communicating nodes being adversely penalised, potentially meaning that high bandwidth applications could not be supported. In particular we explore the effect that variable frame lengths can have on fairness, which suggests that reducing relative frame length variance at affected nodes might be one way to alleviate some of the effect of unfairness in network access.
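One standard way to quantify the unfairness described above is Jain's fairness index over the per-pair throughputs; the index is not mentioned in the abstract, so the sketch below is an assumed illustration of how such penalisation could be measured.

```python
def jains_index(throughputs):
    """Jain's fairness index over per-pair throughputs:
    1.0 means perfectly fair, 1/n means one pair takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

# Hypothetical per-pair throughputs (Mb/s) for four communicating pairs.
fair = jains_index([5.0, 5.0, 5.0, 5.0])    # equal shares
skewed = jains_index([9.0, 1.0, 1.0, 1.0])  # one pair dominates the medium
```

A value well below 1.0 for the skewed case flags exactly the situation the paper studies: some pairs are adversely penalised in their access to the network.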
Today, distributed server systems are widely used in many areas because they enhance computing power while being cost-effective and more efficient. Meanwhile, some novel scheduling strategies are employed to optimise the task assignment process. This project closely explored the performance of a novel scheduling strategy through computer simulation. The research simulated a novel scheduling policy (Task Assignment by Guessing Size) and two previous task assignment policies (Random and JSQ). The performance of the novel scheduling strategy (TAGS) is assessed by comparing the TAGS policy with the two preceding policies. Computer simulation is applied to perform the statistical measurements. The findings were, indeed, very interesting, showing that the novel scheduling strategy (TAGS) only obtains optimal performance in a heavy-tail distributed computing environment. The paper concludes by summarizing t...
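The TAGS dispatch rule simulated above can be sketched directly: a job of unknown size runs at each host in turn, and if it exceeds that host's cutoff it is killed and restarted from scratch at the next host. The Pareto sampler models the heavy-tailed job sizes under which TAGS does well; the cutoff value and parameters below are assumptions for the sketch, not the project's settings.

```python
import random

random.seed(0)

def pareto_job(alpha: float = 1.5, x_min: float = 1.0) -> float:
    """Heavy-tailed job size via inverse-transform sampling of a
    Pareto(alpha, x_min) distribution."""
    return x_min / (random.random() ** (1.0 / alpha))

def tags_dispatch(job: float, cutoffs):
    """TAGS: run the job at each host in turn; if it exceeds that host's
    cutoff, kill it and restart from scratch at the next host.
    Returns (index of host that completes it, total work including waste)."""
    wasted = 0.0
    for i, cutoff in enumerate(cutoffs[:-1]):
        if job <= cutoff:
            return i, wasted + job
        wasted += cutoff               # work thrown away before the kill
    return len(cutoffs) - 1, wasted + job

# Two-host system with an assumed cutoff of 2.0 at the first host.
cutoffs = [2.0, float("inf")]
jobs = [pareto_job() for _ in range(10_000)]
short_fraction = sum(1 for j in jobs if j <= cutoffs[0]) / len(jobs)
```

Under a heavy-tailed distribution most jobs are short and finish at the first host, while the few huge jobs pay a bounded restart penalty and stop blocking the short ones, which is the intuition behind TAGS.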
In this paper we constructed a formal performance model for a secure and scalable e-voting scheme known as the DRE-i voting scheme. The well-known formal stochastic performance evaluation process algebra (PEPA) language and the PEPA Eclipse plug-in were used to represent the voting scheme and analyse its performance characteristics. Timely responses of remote electronic voting protocols are important to increase voters’ confidence in e-voting systems. Therefore we evaluated the average response time that voters may observe when they cast their votes using remote electronic voting systems, such as DRE-i, and we also evaluated the throughput and queue length of the DRE-i server’s actions for different numbers of voters inside the DRE-i e-voting system. The performance evaluation of the DRE-i scheme reveals that the PEPA language is effective in investigating the performance properties of large-scale e-voting schemes.
This paper addresses the issue of how the behaviour of the model may be used to directly show product form results in general models without relying on additional insight from the modeller. To do this, properties of the model are defined which allow the identification of model decompositions, which are then shown to exhibit product form by employing the reversed process.
Grid computing offers many scientific and commercial benefits, but also many technical and organisational challenges. This has led to many research areas in traditional distributed systems directing their efforts on to grid systems. Amongst the challenges for grid computing is the need to provide reliable quality of service to end users. This paper seeks to identify some of the challenges and opportunities that exist in the field of grid computing relating to system performance and dependability. These challenges are extensive and vary considerably as to the amount of effort that they are currently attracting.
The performance overhead introduced by security properties of e-voting schemes needs to be investigated to gain insight into the average response times that voters will observe when they cast their votes using remote electronic voting systems. Timely responses of remote electronic voting protocols are important to increase voters’ confidence in e-voting systems. In this paper we study the impact of individual verifiability on the average response times of a large-scale e-voting scheme known as DRE-i, using the well-known formal stochastic performance evaluation process algebra language, PEPA. We present a PEPA model of the e-voting scheme and show the response time analysis when voters verify the integrity of their votes.
Virtual machine (VM) consolidation is one of the strategies implemented to accomplish energy efficiency in data centres. Data centres take advantage of VM live migration to reduce energy consumption without application downtime. However, the cost of VM live migration is not considered in some of the VM consolidation approaches. The key focus of this paper is to show how different workloads can impact the time of VM live migration. We demonstrate through live experiment the link between various workload characteristics and the time of VM live migration. We used the Kernel-based Virtual Machine (KVM) as a hypervisor and the SPECjvm2008 benchmark to generate various workloads. Our results show a link between VM migration time and the memory size of the VM as well as the speed of the network. We also provide a testing framework to facilitate automated experimentation and benchmarking of VM live migration by other researchers.
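The link between migration time, VM memory size and network speed can be made concrete with a rough pre-copy model: each round resends the pages dirtied during the previous round until the remainder is small enough for the final stop-and-copy. This is only an illustrative sketch with assumed parameters; KVM's actual migration algorithm is considerably more elaborate.

```python
def migration_time_estimate(mem_bytes: float, bandwidth_bps: float,
                            dirty_rate_bps: float, max_rounds: int = 30) -> float:
    """Rough pre-copy live-migration time: each round copies the pages
    dirtied during the previous round, until less than 1 MiB remains
    (final stop-and-copy) or the round limit is hit. Illustrative only."""
    remaining = float(mem_bytes)
    total = 0.0
    for _ in range(max_rounds):
        round_time = remaining / bandwidth_bps   # time to copy this round
        total += round_time
        remaining = dirty_rate_bps * round_time  # pages dirtied meanwhile
        if remaining < 1 << 20:                  # < 1 MiB left: stop-and-copy
            break
    return total + remaining / bandwidth_bps

# Assumed scenario: a 4 GiB VM over a 1 Gb/s link (125 MB/s) with a
# workload dirtying memory at 100 Mb/s (12.5 MB/s).
t = migration_time_estimate(4 * 2**30, 125e6, 12.5e6)
```

The model reproduces the paper's qualitative findings: migration time grows with memory size, shrinks with network bandwidth, and a memory-intensive workload (higher dirty rate) stretches the pre-copy phase.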
This paper reports on two strands of work that are being undertaken as part of the EPSRC funded DOPCHE project. The paper focuses on open software architectures for dynamic operating policies and a performance model used to find optimal operating policies.
Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering
This paper explores a type of non-repudiation protocol, called an anonymous and failure-resilient fair-exchange e-commerce protocol, which guarantees a fair exchange between two parties in an e-commerce environment. Models are formulated using the PEPA formalism to investigate the performance overheads introduced by the security properties and behaviour of the protocol. The PEPA eclipse plug-in is used to support the creation of the PEPA models for the security protocol and the automatic calculation of the performance measures identified for the protocol models.
This special issue brings together extended versions of papers selected from the 21st UK Performance Engineering Workshop (UKPEW), held in Newcastle in July 2005. UKPEW is the leading UK forum for the presentation of all aspects of performance modelling and analysis of computer and telecommunication systems. The workshop attracts papers from the leading research groups in the UK and several from overseas. The papers in this issue were selected as amongst the best research presented at UKPEW 2005. They also represent a broad cross-section of performance engineering research combining sound theory and practical relevance.
Proceedings of the 9th EAI International Conference on Performance Evaluation Methodologies and Tools, 2016
The adoption of the cloud computing paradigm is associated with increasing security concerns. Cloud computing service models (SaaS, PaaS and IaaS) are exposed to different security threats at each level of service. The Trusted Cloud Computing Platform (TCCP) proposes a security model that protects customers' VMs at the IaaS level. In this paper, we investigate methodologies for the specification and scalability of a performance model and evaluation of the VM Launch security protocol within the TCCP model. The Markovian process algebra PEPA has been used to specify and analyse the VM Launch protocol.
Papers by Nigel Thomas