Performance Evaluation of Popular Cloud IaaS Providers
2011
Abstract
Cloud computing has become a compelling and rapidly emerging model for delivering and consuming on-demand computing resources. In this paper, we study and compare the performance of three popular Cloud IaaS (Infrastructure as a Service) providers: Amazon EC2, ElasticHosts, and BlueLock. Performance is studied in terms of the execution time of CPU-bound processes, memory bandwidth, and read/write disk I/O speed. To make the comparison fair, we strived to create virtual servers on each provider with similar hardware and system configurations. Experimental results show that the performance of the selected servers varies across the different benchmarks.
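The abstract does not name the benchmark suites the authors ran, so the following is only a minimal illustrative stand-in, in Python, for the three classes of measurement described (CPU-bound execution time, memory bandwidth, sequential disk write speed). The function names and workload sizes are assumptions for illustration, not the paper's harness.

```python
import os
import tempfile
import time


def cpu_seconds(n=2_000_000):
    """Wall-clock time of a CPU-bound arithmetic loop (smaller is better)."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return time.perf_counter() - start


def memory_bandwidth_mb_s(size_mb=32):
    """Approximate copy bandwidth by duplicating a large in-memory buffer."""
    buf = bytes(size_mb * 1024 * 1024)
    start = time.perf_counter()
    _ = bytes(buf)  # forces a full copy of the buffer
    elapsed = time.perf_counter() - start
    return size_mb / elapsed


def disk_write_mb_s(size_mb=16):
    """Sequential write bandwidth to a temp file, fsync'd so the OS
    page cache does not inflate the result."""
    chunk = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    with tempfile.NamedTemporaryFile(delete=True) as f:
        start = time.perf_counter()
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    return size_mb / elapsed
```

Running the same three probes on identically sized instances at each provider, and repeating each run several times, is the general shape of the comparison the paper describes.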
FAQs
How do CPU performance benchmarks compare among Cloud IaaS providers?
The study reveals that BlueLock exhibits superior CPU performance in single-instance runs, while ElasticHosts and Amazon EC2 perform similarly under two instances, indicating a variance in core utilization.
What does the memory bandwidth performance indicate across providers?
The results show that BlueLock achieves over double the memory bandwidth compared to Amazon EC2, highlighting significant variations in IaaS provider architectures.
How does storage I/O bandwidth differ across the evaluated IaaS providers?
Amazon EC2 leads in both read and write bandwidth metrics, surpassing ElasticHosts and BlueLock by a significant margin in sequential I/O tests.
When did the term 'Cloud Computing' emerge and how is it misinterpreted?
The term 'Cloud Computing' originated in 2005 and is often confused with grid computing and utility computing.
What challenges face cloud computing architectures according to recent findings?
Key challenges include transitioning from HDDs to SSDs for storage and managing data replication across geographically dispersed data centers.
Related papers
JASTT, 2020
The cloud provides many resources to us via the internet and has become a widely used method for the utilization and organization of data. Cloud computing systems employ many technologies, each using different kinds of protocols and methods, and can execute per second many tasks that could not run on a single local computer. The most popular technologies used in cloud systems are Hadoop, Dryad, and other MapReduce frameworks. There are also many tools used to optimize the performance of the cloud system, such as Cap3, HEP, and CloudBurst. This paper reviews in detail the cloud computing system, the technologies it uses, and the best technologies to use with it according to multiple factors and criteria, such as cost and speed, with their pros and cons. Moreover, a comprehensive comparison of the tools used for the utilization of cloud computing systems is presented.
IJSRD - International Journal for Scientific Research and Development, 2018
Cloud computing is a method for enabling convenient, on-demand network access to a shared pool of computing resources (such as computer networks, servers, applications, storage, and services) that can be rapidly provisioned and released with minimal management effort. In this paper we have used some of the existing performance monitoring tools and techniques popular among a variety of users. During this study a number of factors, such as response time, bandwidth, and latency, were determined. Considering the frequency with which these factors appear in the reviewed work, it has been inferred that cloud response time is crucial for cloud performance, and it was selected for further study.
2013
Cloud Computing is defined as a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers. It is based on service-level agreements established between service providers and consumers. Cloud Computing opens up many new possibilities for Internet application developers. VMware is used to integrate the virtual servers that are used for the performance analysis.
2016
Abstract: Cloud computing is a business infrastructure paradigm that promises to remove the need for organizations to maintain expensive computing hardware. Cloud computing provides users with various capabilities to store and process data in third-party data centers, and it maximizes the effectiveness of shared resources. Through time sharing and virtualization, the cloud addresses, with the same set of physical resources, a large user base with different needs. In this paper we evaluate the performance of the Platform-as-a-Service (PaaS) model and integrate mechanisms to capture virtual machine migrations. We study cloud services on different large applications. We present the performance of Platform-as-a-Service (PaaS) using a systematic model to perform end-to-end analysis of a cloud service. The systematic model is designed using a scheduling algorithm, Adaptive First Come First Serve, under different job sizes...
International Journal on Cloud Computing: Services and Architecture, 2012
Cloud computing has been growing today through new technologies and new business models. From a distributed-technology perspective, cloud computing most resembles client-server services, such as web-based or web-service applications, but it uses virtual resources for execution. Currently, cloud computing relies on the use of elastic virtual machines and on the network for data exchange. We conducted an experimental setup to measure the quality of service received by cloud computing customers, creating an HTTP service that runs on cloud computing infrastructure. We were interested in the impact of an increasing number of users on the average quality received by users. The quality received by users was measured by two parameters: the average response time and the number of requests that time out. Experimental results show that increasing the number of users increases the average response time; similarly, the number of request timeouts increases with the number of users. This means that the quality of service received by users decreases as well. We found that the impact of the number of users on the quality of service is no longer a linear trend. The results of this study can be used as a reference model for network operators to determine the number of users at which services of optimal quality can be obtained.
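The two QoS metrics in the study above, average response time and request timeouts under growing concurrency, can be gathered with a simple concurrent probe. This is an illustrative sketch, assuming a thread-pool client; the function names and approach are assumptions for illustration, not the authors' actual experimental setup.

```python
import concurrent.futures
import time
import urllib.error
import urllib.request


def probe(url, timeout=2.0):
    """Return the response time in seconds, or None on timeout/error."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return time.perf_counter() - start
    except (urllib.error.URLError, OSError):
        return None


def measure(url, n_users):
    """Fire n_users concurrent requests; report mean latency and timeout count."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(lambda _: probe(url), range(n_users)))
    ok = [r for r in results if r is not None]
    mean = sum(ok) / len(ok) if ok else float("nan")
    return mean, results.count(None)
```

Calling `measure(url, n)` for increasing values of `n` reproduces the shape of the experiment: as concurrency grows, the mean response time and the timeout count both rise, tracing the non-linear degradation the study reports.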
Abstract— The recent boom in the cloud computing industry has caused a shift in the information technology industry and has affected how information and data are stored and shared within the enterprise. The advent of social applications also demands the availability of resources that can be shared with others. Cloud-based architecture has made it possible for enterprises to utilize computation power that was not available in the past. This paper compares the top available service providers on the basis of the cost of each computing model, and also examines their performance by measuring response time. The comparison of all these service providers is elaborated based on their available architectures. Keywords— Comparison of Cloud Operators, Cloud Computing, Azure, RackSpace, Amazon Web Services https://sites.google.com/site/ijcsis/vol-13-no-12-dec-2015
2010
Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very different from those at traditional supercomputing centers. It is therefore critical to evaluate the performance of HPC applications in today's cloud environments to understand the tradeoffs inherent in migrating to the cloud. This work represents the most comprehensive evaluation to date comparing conventional HPC platforms to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Overall results indicate that EC2 is six times slower than a typical mid-range Linux cluster, and twenty times slower than a modern HPC system. The interconnect on the EC2 cloud platform severely limits performance and causes significant variability.
ABSTRACT With the increasing prevalence of and demand for large-scale cloud computing environments, researchers must pay more attention to the services provided by the cloud. As access to servers increases, centralized and distributed computing architectures will produce bottlenecks that affect the quality of cloud computing services and the support they provide to users. In this paper we propose certain vital aspects, such as memory utilization and storage capacity, for checking the efficiency and performance of various clouds in a cloud computing environment. This is based upon static data. The proposed mechanism enables users to access memory in various systems depending on predefined criteria; a selection method for accessing the memory of a resource is introduced in this paper. Our evaluation results show that the aggregation of various clouds is effective in achieving better efficiency and in reducing the network traffic sent over cloud networks.
Cloud Computing, 2010
Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community.
2013 IEEE Sixth International Conference on Cloud Computing, 2013
One of the main challenges faced by users of infrastructure-as-a-service (IaaS) clouds is the difficulty of adequately estimating the virtual resources necessary for their applications. Although many cloud providers offer programmatic ways to rapidly acquire and release resources, it is important that users have a prior understanding of the impact that each virtual resource type offered by the provider may have on application performance. This paper presents Cloud Crawler, a new declarative environment aimed at supporting users in describing and automatically executing application performance tests in IaaS clouds. To this end, the environment provides a novel declarative domain-specific language, called Crawl, which supports the description of a variety of performance evaluation scenarios in multiple IaaS clouds; and an extensible Java-based cloud execution engine, called Crawler, which automatically configures, executes, and collects the results of each performance evaluation scenario described in Crawl. To illustrate Cloud Crawler's potential benefits, the paper reports on an experimental evaluation of a social network application in two public IaaS cloud providers, in which the proposed environment was successfully used to investigate the application's performance for different virtual machine configurations and under different demand levels.
