Jordan University of Science and Technology
Computer Engineering
Cloud computing is an evolving and fast-spreading computing paradigm that has gained great interest from both industry and academia. Consequently, universities are actively integrating Cloud computing into their IT curricula. One major challenge facing Cloud computing instructors is the lack of a teaching tool for students to experiment with. To fill this gap, we introduce TeachCloud, a modeling and simulation environment for cloud computing. Students can use TeachCloud to experiment with different cloud components such as processing elements, data centers, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). TeachCloud is an extension of CloudSim, a research-oriented simulator used for the development and validation of cloud computing systems.
- by Moath Jarrah and +2
Cloud computing is an emerging and fast-growing computing paradigm that has gained great interest from both industry and academia. Consequently, many researchers are actively involved in cloud computing research projects. One major challenge facing cloud computing researchers is the lack of a comprehensive cloud computing experimental tool to use in their studies. This paper introduces CloudExp, a modeling and simulation environment for cloud computing. CloudExp can be used to evaluate a wide spectrum of cloud components such as processing elements, data centers, storage, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). Moreover, CloudExp introduces the Rain workload generator, which emulates real workloads in cloud environments. Also, the MapReduce processing model is integrated in CloudExp to handle big data processing problems.
Different negotiation engineering domains require the system designer to tailor the negotiation framework to the domain in which it will be used. This design process is time-consuming when supporting different geographically distributed and dynamic environments. Here we show a methodology for designing negotiation systems by integrating a domain-dependent message structure ontology with a domain-independent marketplace architecture. The methodology gives system designers a powerful modeling tool that can be used to tailor the framework to support different negotiation behaviors under different domains. The System Entity Structure (SES) formalism is used to build the domain-dependent ontology, while the Finite Deterministic Discrete Event System (FD-DEVS) formalism is used to build the marketplace model. The Discrete Event System with Service Oriented Architecture (DEVS/SOA) simulation environment was employed to demonstrate a proof of concept of applicability to web service domains.
Using the 2D multi-group, flux-limited diffusion version of the code VULCAN/2D, which also incorporates rotation, we have calculated the collapse, bounce, shock formation, and early post-bounce evolutionary phases of a core-collapse supernova for a variety of initial rotation rates. This is the first series of such multi-group calculations undertaken in supernova theory with fully multi-D tools. We find that though rotation generates pole-to-equator angular anisotropies in the neutrino radiation fields, the magnitude of the asymmetries is not as large as previously estimated. The finite width of the neutrino decoupling surfaces and the significant emissivity above the τ = 2/3 surface moderate the angular contrast. Moreover, we find that the radiation field is always more spherically symmetric than the matter distribution, with its plumes and convective eddies. The radiation field at a point is an integral over many sources from the different contributing directions. As such, its distribution is much smoother than that of the matter and has very little power at high spatial frequencies. We present the dependence of the angular anisotropy of the neutrino fields on neutrino species, neutrino energy, and initial rotation rate. Only for our most rapidly rotating model do we start to see qualitatively different hydrodynamics, but for the lower rates consistent with the pre-collapse rotational profiles derived in the literature the anisotropies, though interesting, are modest. This does not mean that rotation does not play a key role in supernova dynamics. The decrease in the effective gravity due to the centripetal effect can be quite important. Rather, it means that when a realistic mapping between initial and final rotational profiles and 2D multi-group radiation-hydrodynamics are incorporated into collapse simulations, the anisotropy of the radiation fields may be only a secondary, not a pivotal, factor in the supernova mechanism.
Environmental concerns and the high prices of fossil fuels increase the feasibility of using renewable energy sources in the smart grid. Smart grid technologies are currently being developed to provide efficient and clean power systems. Communication in the smart grid allows different components to collaborate and exchange information. Traditionally, the utility company uses a central management unit to schedule energy generation, distribution, and consumption. Using centralized management in a very large-scale smart grid forms a single point of failure and leads to serious scalability issues in terms of information delivery and processing. In this paper, a three-level hierarchical optimization approach is proposed to address the scalability and computational-overhead problems and to minimize daily electricity cost by maximizing the percentage of renewable energy used. At level one, a single home or a group of homes is combined to form an optimized power entity (OPE) that satisfies its load demand from its own renewable energy sources (RESs). At level two, a group of OPEs satisfies the energy requirements of all OPEs within the group. At level three, excess renewable energy from different groups, along with energy from the grid, is used to fulfill unsatisfied demands, and the remaining energy is sent to storage devices.
- by Moath Jarrah and +2
- Information Systems
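To make the three-level idea above concrete, here is a minimal sketch in Python of one plausible settlement pass: each OPE first serves demand from its own renewables, the group then pools surpluses, and inter-group exchange plus the grid cover what remains. The greedy matching and all numbers are illustrative assumptions, not the paper's optimization model.

```python
# Hypothetical sketch of the three-level settlement idea: each OPE serves
# its demand from local renewables (level 1), surpluses are pooled within
# a group (level 2), and residual demand across groups is met from
# inter-group surplus, then the grid (level 3). Numbers are illustrative.

def settle(opes):
    """opes: list of dicts with 'demand' and 'res' (renewable supply) in kWh."""
    # Level 1: local renewable consumption inside each OPE.
    for o in opes:
        used = min(o["demand"], o["res"])
        o["demand"] -= used
        o["res"] -= used

    # Level 2: pool the group's leftover renewables against leftover demand.
    surplus = sum(o["res"] for o in opes)
    for o in opes:
        share = min(o["demand"], surplus)
        o["demand"] -= share
        surplus -= share
    return surplus, sum(o["demand"] for o in opes)

def settle_groups(groups):
    """Level 3: excess renewables from all groups serve unmet demand; the
    grid covers the rest and any final surplus goes to storage."""
    results = [settle(g) for g in groups]
    total_surplus = sum(s for s, _ in results)
    unmet = sum(d for _, d in results)
    exchanged = min(total_surplus, unmet)
    from_grid = unmet - exchanged
    to_storage = total_surplus - exchanged
    return from_grid, to_storage

groups = [[{"demand": 8, "res": 5}, {"demand": 3, "res": 6}],
          [{"demand": 10, "res": 4}]]
print(settle_groups(groups))   # -> (6, 0): grid energy needed, energy to storage
```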
The complexity of today's applications has increased exponentially with the growing demand for unlimited computational resources. Cluster-based supercomputing systems and high-performance data centers have traditionally been the ideal assets to fulfill the ever-increasing computing demands. Such computing resources require a multi-million dollar investment to build, and the cost grows with the power needed to operate and cool the system, along with the maintenance. In this paper, we investigate Personal SuperComputing (PSC) as an emerging concept in high performance computing that provides an opportunity to overcome most of the aforementioned problems. We explore and evaluate a GPU-based Personal SuperComputing system, and we compare it with a conventional high-performance cluster-based computing system. Our evaluations show promising opportunities in using GPU-based clusters for high performance computing applications.
- by Moath Jarrah and +1
Medical applications are compute- and data-intensive. The complex mathematical models and differential equations in the medical domain require huge computations. Cardiac simulation is one example of such a compute-intensive application. In this work, we present an evaluation study of porting the cardiac simulator to high-performance GPU accelerators. We also conducted a comparative evaluation using conventional computing platforms: a single CPU and a CPU cluster system. Our study shows that a tremendous speed-up can be achieved using a GPU-based system over cluster-based or single-CPU systems.
- by Moath Jarrah and +1
- Cardiology, Biomedical Imaging, Electric Potential
The design of an effective last-level cache (LLC) is crucial to the overall processor performance and, consequently, continues to be the center of substantial research. Unfortunately, LLCs in modern high-performance processors are not used efficiently. One major problem suffered by LLCs is their low hit rates caused by the large fraction of cache blocks that do not get re-accessed after being brought into the LLC following a cache miss. These blocks do not contribute any cache hits and usually induce cache pollution and thrashing. Cache bypassing presents an effective solution to this problem. Cache blocks that are predicted not to be accessed while residing in the cache are not inserted into the LLC following a miss; instead, they bypass the LLC and are only inserted in the higher cache levels. This paper presents a simple, low-hardware-overhead, yet effective cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which to have bypass it following a miss, based on past access/bypass patterns. Our proposed algorithm is thoroughly evaluated using a detailed simulation environment where its effectiveness, performance-improvement capabilities, and robustness are demonstrated. Moreover, it is shown to outperform the state-of-the-art cache bypassing algorithm in both uniprocessor and multi-core processor settings.
- by Mazen Kharbutli and +2
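Since the abstract describes prediction from past access/bypass patterns without giving the exact mechanism, the following Python sketch shows one conventional way such a predictor can be organized: a table of saturating counters indexed by a hash of the block address. The table size, counter width, and update policy are assumptions for illustration, not the paper's algorithm.

```python
# A minimal, hypothetical bypass predictor in the spirit described above:
# a table of 2-bit saturating counters, indexed by a hash of the block
# address, votes on whether a missing block should bypass the LLC.

TABLE_SIZE = 4096
counters = [1] * TABLE_SIZE          # 0..3; >=2 means "predict bypass"

def index(block_addr):
    return (block_addr >> 6) % TABLE_SIZE   # drop the 64-byte offset bits

def should_bypass(block_addr):
    """Called on an LLC miss: True -> skip LLC insertion for this block."""
    return counters[index(block_addr)] >= 2

def train(block_addr, was_reused):
    """Feedback: a block reused while cached (or a bypassed block that
    missed again soon) proves it was worth caching; a block evicted
    untouched proves the opposite."""
    i = index(block_addr)
    if was_reused:
        counters[i] = max(counters[i] - 1, 0)   # bias toward inserting
    else:
        counters[i] = min(counters[i] + 1, 3)   # bias toward bypassing
```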
The ability to study complex systems has become feasible with new intensive computing resources such as GPUs, multi-core processors, clusters, and Cloud infrastructures. Many companies and scientific applications use multi-agent modeling and simulation platforms to study complex processes where an analytical approach is not feasible. In this paper, we use two negotiation protocols to generalize the interaction behaviors between agents in multi-agent environments. The negotiation protocols are enforced by a domain-independent marketplace agent. In order to provide the agents with a flexible language structure, a domain-dependent ontology is used. The integration of the domain-independent marketplace with the domain-dependent language ontology is accomplished through an automatic code generation tool. The tool simplifies deploying the framework for a specific domain of interest. Our methodology is implemented in the FD-DEVS simulation environment and the SES ontological framework.
- by Moath Jarrah
Environmental concerns and the high prices of fossil fuels increase the feasibility of using renewable energy sources in the smart grid. Nowadays, many homes adopt renewable energy sources to satisfy their load demand. In this paper, we propose a mechanism for scheduling the load demand of home appliances according to the availability of renewable energy and the varying price of grid energy. Binary linear programming is used to model the proposed mechanism. Two types of appliances are modeled: must-run appliances and schedulable appliances. The proposed mechanism aims to minimize the smart home's electricity cost by maximizing the usage of renewable energy. Simulations show that the proposed energy scheduling mechanism reduces total electricity cost by 48% and raises the renewable energy used to 65% of the total renewable energy generated.
- by Moath Jarrah and +2
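As a rough illustration of the binary-linear-programming formulation described above, here is a sketch using the PuLP library. The tariff, renewable forecast, and appliance data are invented, and handling grid purchases via a lower-bounded variable is one common linearization; the paper's exact model may differ.

```python
# A minimal sketch of a binary-programming appliance schedule, using PuLP.
# All data below are made up for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

T = range(24)                                    # hourly slots
price = [0.10] * 7 + [0.25] * 12 + [0.10] * 5    # grid tariff ($/kWh)
res   = [0.0] * 6 + [1.5] * 12 + [0.0] * 6       # forecast renewables (kWh)
must_run = [0.8] * 24                            # aggregate must-run load (kWh)
apps = {"washer": (2.0, 3), "dishwasher": (1.0, 2)}  # (kWh/slot, slots needed)

prob = LpProblem("home_schedule", LpMinimize)
on = {(a, t): LpVariable(f"on_{a}_{t}", cat=LpBinary) for a in apps for t in T}
grid = {t: LpVariable(f"grid_{t}", lowBound=0) for t in T}  # energy bought

for a, (_, slots) in apps.items():      # each appliance runs long enough
    prob += lpSum(on[a, t] for t in T) == slots

for t in T:                             # grid >= load - renewables (and >= 0)
    load = must_run[t] + lpSum(apps[a][0] * on[a, t] for a in apps)
    prob += grid[t] >= load - res[t]

prob += lpSum(price[t] * grid[t] for t in T)   # objective: total electricity cost
prob.solve()
schedule = {a: sorted(t for t in T if on[a, t].value() > 0.5) for a in apps}
print(schedule)
```

The pair of constraints grid[t] >= load - res[t] and grid[t] >= 0 makes grid[t] equal max(0, load - res[t]) at the optimum, because the objective pushes grid[t] down; this is what lets a linear program price only the energy not covered by renewables.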
The ability to manage and exploit geographically distributed systems of service providers is rather limited in today's engineering solutions. Existing techniques suffer from three main problems. First, current techniques cannot provide brokering when managing loosely coupled service providers. Second, the engineering design of existing management tools does not provide enough expressive capability for varying user behaviors or when different domains are encountered. Third, the lack of interaction between different requestors and providers yields inefficient and very costly agreements. In this dissertation, we present an automated Domain-Independent Marketplace architecture that allows user agents to interact with provider agents using two simple yet powerful negotiation protocols which define the rules of interaction in multi-agent environments.
- by Moath Jarrah
In this paper, we present a generic Domain-Independent Marketplace architecture that allows user agents to interact with service providers using two simple yet powerful negotiation protocols, which define the rules of interaction in multi-agent environments. Having a trusted third-party marketplace supports privacy and transparency among collaborating agents and service providers. Service providers have different capabilities depending on the domain of interest: they can be radar sensors as in oceanography surveillance systems, print servers in a distributed printing community, or online stores offering products on the Web in the e-commerce domain. To support negotiation in such different domains, a dynamic message structuring capability is needed. A key requirement for such expressive power is an ontology that contains specialization relations between the different domains of interest. Integrating the Domain-Dependent Ontology with the Domain-Independent Marketplace gives the designer a powerful tool with which systems can be tailored to their operational purposes.
- by Moath Jarrah
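The two negotiation protocols are not spelled out in this abstract, so the sketch below is purely hypothetical: a sealed-bid round in which the trusted marketplace collects offers from provider agents and awards the task to the best one. It is meant only to illustrate the mediated, domain-independent interaction style, not the paper's protocols.

```python
# Hypothetical marketplace-mediated interaction: requesters never talk to
# providers directly; the marketplace collects sealed offers and awards
# the task. Provider names and pricing functions are invented.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price: float

class Marketplace:
    """Trusted third party between user agents and provider agents."""
    def __init__(self, providers):
        self.providers = providers          # name -> pricing function

    def request(self, task):
        offers = [Offer(name, quote(task))
                  for name, quote in self.providers.items()]
        return min(offers, key=lambda o: o.price)   # award the cheapest offer

market = Marketplace({
    "print_server_A": lambda task: 0.05 * task["pages"],
    "print_server_B": lambda task: 2.0 + 0.02 * task["pages"],
})
print(market.request({"pages": 120}))   # awards to print_server_B
```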
The authors explore and evaluate GPU-based platforms and present a comparison against conventional high-performance cluster-based computing systems. The authors' evaluation shows potential advantages of using GPU-based systems for high performance computing applications while meeting different scaling granularities.
Fuzzy clustering is one of the most popular techniques in medical image segmentation. The fuzzy C-means (FCM) algorithm has been widely used as it provides better performance and more information than other algorithms. As the data set becomes large, the serial implementation of the FCM algorithm becomes too slow to accomplish the clustering task within an acceptable time. Hence, a parallel implementation [for example, using today's fast graphics processing units (GPUs)] is needed. In this paper, we implement the brFCM algorithm, a faster variant of the FCM algorithm, on two different GPU cards, Tesla M2070 and Tesla K20m. We compare our GPU-based brFCM implementation with its CPU-based sequential implementation. Moreover, we compare brFCM with the traditional version of the FCM algorithm. The experiments used lung CT and knee MRI images for clustering. The results show that our implementation has a significant improvement over the traditional CPU sequential implementation. GPU-parallel brFCM is 2.24 times faster than its CPU implementation, and 23.43 times faster than a GPU-parallel implementation of the traditional FCM algorithm.
- by Moath Jarrah and +3
- Distributed Computing
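For readers unfamiliar with FCM, the sketch below implements the standard membership and centroid updates in NumPy. brFCM's distinguishing step, clustering binned gray levels weighted by their frequency instead of raw pixels, is only indicated in a comment, since the paper's exact variant is not reproduced here.

```python
# A compact NumPy sketch of the standard FCM iteration the paper builds on.
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: 1-D array of pixel intensities; returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # weighted centroid update
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = d ** (-2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=0)           # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# brFCM-style reduction (sketch): cluster the distinct gray levels, weighting
# each by its pixel count, e.g. values, counts = np.unique(img, return_counts=True)
pixels = np.concatenate([np.full(500, 40.0), np.full(300, 128.0), np.full(200, 220.0)])
centers, u = fcm(pixels)
print(np.sort(centers))   # approximately [40, 128, 220]
```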
Smart sensor networks provide numerous opportunities for smart grid applications, including power monitoring, demand-side energy management, coordination of distributed storage, and integration of renewable energy generators. Because of their low cost and ease of deployment, smart sensor networks are likely to be used on a large scale in future smart power grids. The result is a huge volume and variety of data sets. Processing and analyzing these data reveals deeper insights that can help experts improve the operation of the power grid to achieve better performance. The technology to collect massive amounts of data is available today, but managing the data efficiently and extracting the most useful information out of it remains a challenge. This paper discusses and provides recommendations and practices for the future smart grid and the Internet of Things. We explore the different applications of smart sensor networks in the domain of the smart power grid. We also discuss the techniques used to manage the big data generated by sensors and meters for application processing.
- by Mahmoud Al-Ayyoub and +3
- Technology
The rapid increase in wired Internet speed and the constant growth in the number of attacks make network protection a challenge. Intrusion detection systems (IDSs) play a crucial role in discovering suspicious activities and also in preventing their harmful impact. Existing signature-based IDSs have significant overheads in terms of execution time and memory usage, mainly due to the pattern matching operation. Therefore, there is a need to design an efficient system to reduce this overhead. This research accelerates the pattern matching operation by parallelizing a matching algorithm on a multi-core CPU. In this paper, we parallelize a bit-vector algorithm, the Myers algorithm, on a multi-core CPU under the MapReduce framework. On average, we achieve a four times speedup with our multi-core implementations compared to the serial version. Additionally, we use two implementations of MapReduce, Phoenix++ and MAPCG, to parallelize the Myers algorithm. Our MapReduce parallel implementations of the Myers algorithm are compared with an earlier message passing interface (MPI)-based parallel implementation of the algorithm. The results show 1.3 and 1.7 times improvements for the Phoenix++ and MAPCG MapReduce implementations over MPI, respectively.
- by Moath Jarrah
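The Myers bit-vector algorithm referenced above is well documented (Myers, 1999); the sketch below implements its approximate-search form in Python and adds a chunked map step as a stand-in for the MapReduce layer. The multiprocessing pool and the chunk/overlap sizes are illustrative assumptions; the paper itself uses Phoenix++ and MAPCG rather than this toy.

```python
# Sketch of Myers' bit-vector approximate search plus a chunked "map" step.
from multiprocessing import Pool

def myers_search(pattern, text, k):
    """Return end positions in text where pattern matches with <= k edits."""
    m = len(pattern)
    mask, high = (1 << m) - 1, 1 << (m - 1)
    peq = {}
    for i, ch in enumerate(pattern):          # per-symbol match bitmasks
        peq[ch] = peq.get(ch, 0) | (1 << i)
    pv, mv, score, hits = mask, 0, m, []
    for j, ch in enumerate(text):
        eq = peq.get(ch, 0)
        xv = eq | mv
        xh = ((((eq & pv) + pv) ^ pv) | eq) & mask
        ph = mv | (~(xh | pv) & mask)
        mh = pv & xh
        if ph & high: score += 1              # track edit distance at row m
        if mh & high: score -= 1
        ph = (ph << 1) & mask                 # shift in 0: match may start anywhere
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
        if score <= k:
            hits.append(j)
    return hits

def search_chunks(pattern, text, k, chunk=1 << 16):
    """Map step: overlap chunks by m-1+k so no match straddles a boundary."""
    overlap = len(pattern) - 1 + k
    starts = range(0, len(text), chunk)
    jobs = [(pattern, text[s:s + chunk + overlap], k) for s in starts]
    with Pool() as pool:
        results = pool.starmap(myers_search, jobs)
    # Reduce step: shift hits to global positions and deduplicate overlaps.
    return sorted({s + j for s, r in zip(starts, results) for j in r})

if __name__ == "__main__":
    print(search_chunks("annual", "annealing_signals_annual_manual", 1, chunk=8))
```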