Search results

1 – 10 of 744
Article
Publication date: 28 October 2014

Bao Rong Chang, Hsiu-Fen Tsai, Chi-Ming Chen and Chien-Feng Huang

The transition from physical servers to a virtualized infrastructure has encountered crucial problems such as server consolidation, virtual machine (VM) performance, workload…

Abstract

Purpose

The transition from physical servers to a virtualized infrastructure has encountered crucial problems such as server consolidation, virtual machine (VM) performance, workload density, total cost of ownership (TCO), and return on investment (ROI). To address these problems, the purpose of this paper is to analyze virtualized cloud servers together with shared storage, and to estimate the consolidation ratio and TCO/ROI in server virtualization.
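The abstract gives no formulas; purely as orientation, the sketch below shows how a consolidation ratio and a simple TCO/ROI figure could be computed. All names and numbers are hypothetical assumptions, not the authors' method (which relies on the VMware calculator).

```python
# Illustrative sketch only: generic consolidation-ratio and TCO/ROI arithmetic.
# The paper's own estimation uses the VMware calculator; the formulas and the
# example figures below are assumptions, not values from the study.

def consolidation_ratio(physical_servers: int, hosts_after: int) -> float:
    """Average number of consolidated workloads (VMs) per remaining host."""
    return physical_servers / hosts_after

def roi(total_savings: float, investment: float) -> float:
    """Return on investment expressed as a fraction of the initial investment."""
    return (total_savings - investment) / investment

if __name__ == "__main__":
    # Hypothetical example: 40 physical servers consolidated onto 5 virtualization hosts.
    ratio = consolidation_ratio(40, 5)                    # 8 VMs per host
    tco_before, tco_after, investment = 500_000, 200_000, 120_000
    savings = tco_before - tco_after                      # TCO reduction over the planning period
    print(f"consolidation ratio {ratio:.0f}:1, ROI {roi(savings, investment):.0%}")
```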

Design/methodology/approach

This paper introduces five distinct virtualized cloud computing servers (VCCSs) and provides an assessment of the five well-known hypervisors built into them. The methodology the authors propose gives insight into the problem of migrating physical servers to a virtualized infrastructure.

Findings

VM performance proves to be roughly the same across hypervisors, but the estimated VM density and TCO/ROI differ substantially among them. The authors therefore recommend the ESX server hypervisor for deployments that need higher ROI and lower TCO. Alternatively, Proxmox VE is the second choice for those who want to minimize the initial investment while still having a capable management console.

Research limitations/implications

In the performance analysis, the authors adopted ESXi 5.0, which is free software, instead of ESX 5.0; its functionality is limited and lacks full ESX server features such as distributed resource scheduling, high availability, consolidated backup, fault tolerance, and disaster recovery. Moreover, this paper does not discuss security on the VCCS, which concerns access control and cryptography in VMs and is left for future work.

Practical implications

In network virtualization, ESX/ESXi restricts which brands of physical network card the VM can detect; only certain cards, for instance Intel and Broadcom network cards, are supported. Versions ESXi 5.0.0 and above also support part of the Realtek series (Realtek 8186, Realtek 8169, and Realtek 8111E).

Originality/value

How to assess a hypervisor precisely for server/desktop virtualization is also a hard question that must be dealt with before deploying new IT with a VCCS on site. The authors utilized the VMware calculator and developed an approach covering server/desktop consolidation, virtualization performance, VM density, TCO, and ROI. The paper thus presents a comprehensive analysis of five well-known hypervisors and gives recommendations to help IT managers choose the right solution for server virtualization.

Details

Engineering Computations, vol. 31 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 8 September 2021

Senthil Kumar Angappan, Tezera Robe, Sisay Muleta and Bekele Worku M

Cloud computing services have gained huge attention in recent years, and many organizations have started moving their business data from traditional servers to cloud storage providers…

Abstract

Purpose

Cloud computing services have gained huge attention in recent years, and many organizations have started moving their business data from traditional servers to cloud storage providers. However, increased data storage introduces challenges such as inefficient usage of cloud storage resources. To meet user demands and maintain service level agreements with clients, the cloud server has to allocate physical machines to virtual machines as requested, but random resource allocation procedures lead to inefficient utilization of resources.

Design/methodology/approach

This work focuses on resource allocation for reasonable utilization of resources. The overall framework comprises cloudlets, a broker, a cloud information system, virtual machines, a virtual machine manager, and a data center. Existing first-fit and best-fit algorithms minimize the number of bins but do not consider the leftover capacity of each bin.
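For orientation, the sketch below shows the classical first-fit and best-fit placement heuristics the abstract uses as baselines, over a single capacity dimension. It is a generic illustration under assumed names and figures, not the authors' multi-objective algorithm, which additionally weighs CPU, bandwidth, RAM, and power.

```python
# Baseline first-fit vs. best-fit VM placement over one capacity dimension (e.g. RAM in GB).
# Generic illustration only; the paper's multi-objective bin-packing algorithm is not shown.

def first_fit(vm_sizes, host_capacity):
    """Place each VM on the first host that still has enough leftover capacity."""
    hosts = []  # leftover capacity per opened host
    for size in vm_sizes:
        for i, free in enumerate(hosts):
            if size <= free:
                hosts[i] -= size
                break
        else:
            hosts.append(host_capacity - size)  # no host fits: open a new one
    return hosts

def best_fit(vm_sizes, host_capacity):
    """Place each VM on the host whose leftover capacity it fills most tightly."""
    hosts = []
    for size in vm_sizes:
        candidates = [i for i, free in enumerate(hosts) if size <= free]
        if candidates:
            tightest = min(candidates, key=lambda i: hosts[i] - size)
            hosts[tightest] -= size
        else:
            hosts.append(host_capacity - size)
    return hosts

if __name__ == "__main__":
    demands = [4, 8, 1, 4, 2, 1]  # hypothetical RAM demands in GB, hosts of 8 GB each
    print("hosts used:", len(first_fit(demands, 8)), len(best_fit(demands, 8)))
```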

Findings

The proposed algorithm utilizes resources more effectively than the first-fit, best-fit, and worst-fit algorithms. The efficiency gain is visible in the central processing unit (CPU), bandwidth (BW), random access memory (RAM), and power consumption metrics, saving 15 kHz of CPU, 92.6 kbps of BW, 6 GB of RAM, and 3 kW of power compared with the first-fit and best-fit algorithms.

Originality/value

The proposed multi-objective bin-packing algorithm is better at packing VMs onto physical servers so as to better utilize parameters such as memory availability, CPU speed, power, and bandwidth availability in the physical machine.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 30 January 2009

Nijaz Bajgoric and Young B. Moon

The purpose of this paper is to present a framework for developing an integrated operating environment (IOE) within an enterprise information system by incorporating business…

Abstract

Purpose

The purpose of this paper is to present a framework for developing an integrated operating environment (IOE) within an enterprise information system by incorporating business continuity drivers. These drivers enable a business to continue with its operations even if some sort of failure or disaster occurs.

Design/methodology/approach

Development and implementation of the framework are based on a holistic, top-down approach. An IOE on the server side of contemporary business computing is investigated in depth.

Findings

Key disconnection points are identified, where systems integration technologies can be used to integrate platforms, protocols, data and application formats, etc. Downtime points are also identified and explained. A thorough list of main business continuity drivers (continuous computing (CC) technologies) for enhancing business continuity is identified and presented. The framework can be utilized in developing an integrated server operating environment for enhancing business continuity.

Originality/value

This paper presents a comprehensive framework including exhaustive handling of enabling drivers as well as disconnection points toward CC and business continuity.

Details

Industrial Management & Data Systems, vol. 109 no. 1
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 7 February 2020

Haiyan Zhuang and Babak Esmaeilpour Ghouchani

Virtual machines (VMs) are offered by cloud service providers as services to users over the internet. VM consolidation is the tactic for competent and…

Abstract

Purpose

Virtual machines (VMs) are offered by cloud service providers as services to users over the internet. VM consolidation is the tactic for competent and smart utilization of resources in cloud data centers. VM placement (VMP), the mapping of VMs onto physical machines, is one of the significant issues in cloud computing (CC). The basic target of the VMP problem is to reduce the number of running physical machines, or hosts, in cloud data centers. VMP methods play an important role in CC; however, there has been no systematic and complete way to discuss and analyze the algorithms. The purpose of this paper is to present a systematic survey of VMP techniques. The benefits and weaknesses of selected VMP techniques are also debated, and the significant issues of these techniques are addressed so that more efficient VMP techniques can be developed in the future.

Design/methodology/approach

Because of the importance of VMP in cloud environments, the articles and important mechanisms in this domain have been investigated systematically. The VMP mechanisms are categorized into two major groups: static and dynamic mechanisms.

Findings

The results indicate that appropriate VM placement can decrease the resource consumption rate, energy consumption, and carbon emission rate. VMP approaches still need improvement in reducing the related overhead, consolidating the cloud environment into a truly on-demand mechanism, balancing the load between physical machines, lowering power consumption, and refining performance.

Research limitations/implications

This study aimed to be comprehensive, but there are some limitations. Some excellent work may have been excluded by the filters applied to select the original articles, and surveying every paper on the topic of VMP is impossible. Nevertheless, the authors have tried to present a complete survey of VMP.

Practical implications

The consequences of this research will be valuable for academics, and it can provide good ideas for future research in this domain. By providing comparative information and analyzing contemporary developments in the area, the research directly supports academics and working professionals in understanding the growth of the VMP field.

Originality/value

The information gathered in this paper informs researchers of the state of the art in the VMP area. Overall, the principal intention of VMP, its current challenges, open issues, strategies, and mechanisms in cloud systems are summarized.

Details

Kybernetes, vol. 50 no. 2
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 8 January 2018

Felipe Abaunza, Ari-Pekka Hameri and Tapio Niemi

Data centers (DCs) are similar to traditional factories in many aspects like response time constraints, limited capacity, and utilization levels. Several indicators have been…

Abstract

Purpose

Data centers (DCs) are similar to traditional factories in many aspects, such as response time constraints, limited capacity, and utilization levels. Several indicators have been developed to monitor and compare productivity in manufacturing. However, the indicators most used in DCs focus on technical aspects of the infrastructure, not on the efficiency of operations. The purpose of this paper is to draw on operations management to define a commensurate and proportionate DC performance indicator: the energy-efficient utilization indicator (EEUI). EEUI makes an objective and comparative assessment of efficiency possible, independently of the operating environment and its constraints.

Design/methodology/approach

The authors followed a design science approach, which follows the practitioner’s initial steps of finding solutions to business-relevant problems prior to theory building. This approach therefore fits the research well, as it is primarily motivated by business and management needs. EEUI combines both the amount of energy consumed by the different components and their current energy efficiency (EE). It reaches its highest value when all server components are optimally loaded in the EE sense. The authors tested EEUI by collecting data from three scientific DCs and performing controlled laboratory tests.
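The published definition of EEUI is not reproduced in the abstract. Purely as an illustration of how per-component energy shares and per-component efficiencies could be combined into a single indicator, a sketch under those assumptions follows; it is not the authors' formula, and the component values are hypothetical.

```python
# Illustrative only: an energy-weighted efficiency indicator built from per-component
# energy draw and efficiency. This is NOT the published EEUI formula; consult the paper
# for the actual definition. Component names and figures are assumptions.

def illustrative_indicator(components):
    """components: iterable of (energy_watts, efficiency_in_[0,1]) per server component
    (e.g. CPU, memory, disk, network). Returns an energy-weighted efficiency in [0, 1]."""
    total_energy = sum(energy for energy, _ in components)
    if total_energy == 0:
        return 0.0
    return sum(energy * eff for energy, eff in components) / total_energy

if __name__ == "__main__":
    # Hypothetical server snapshot: (power draw in W, efficiency at the current load).
    snapshot = [(120, 0.85), (40, 0.60), (15, 0.40), (10, 0.55)]  # CPU, RAM, disk, NIC
    print(f"illustrative indicator: {illustrative_indicator(snapshot):.2f}")
```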

Findings

The results indicate that optimizing EEUI makes it possible to run computing resources more efficiently. This leads to higher EE and throughput of the DC while reducing the carbon footprint associated with DC operations. Both energy-related costs and the total cost of ownership are consequently reduced, since less energy and fewer hardware resources are needed, while DC sustainability improves.

Practical implications

In comparison with current DC operations, the results imply that using the EEUI could help increase the EE of DCs. In order to optimize the proposed EEUIs, DC managers and operators should use resource management policies that increase the resource usage variation of the jobs being processed in the same computing resources (e.g. servers).

Originality/value

The paper provides a novel approach to monitor the EE at which computing resources are used. The proposed indicator not only considers the utilization levels at which server components are used but also takes into account their EE and energy proportionality.

Details

International Journal of Productivity and Performance Management, vol. 67 no. 1
Type: Research Article
ISSN: 1741-0401

Keywords

Article
Publication date: 23 November 2018

Mohamed Amine Kaaouache and Sadok Bouamama

The purpose of this paper is to propose a novel hybrid genetic algorithm-based virtual machine (VM) placement method to improve energy efficiency in cloud data centers. How…

Abstract

Purpose

The purpose of this paper is to propose a novel hybrid genetic algorithm-based virtual machine (VM) placement method to improve energy efficiency in cloud data centers. How to place VMs on physical machines (PMs) to improve resource utilization and reduce energy consumption is one of the major concerns for cloud providers. Over the past few years, many approaches to VM placement (VMP) have been proposed; however, existing approaches consider only the energy consumed by PMs and not the energy consumed by the communication network of a data center.

Design/methodology/approach

This paper attempts to solve the energy consumption problem using a VM placement method in cloud data centers. The approach uses a repair procedure based on a best-fit decreasing heuristic to resolve violations caused by infeasible solutions that exceed the resource capacities during the evolution process.
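The repair procedure itself is not spelled out in the abstract. As a rough sketch under simplifying assumptions (a single capacity dimension per host, no network energy model), a best-fit decreasing repair of an over-committed placement might look like the following; the names and structure are illustrative, not the authors' code.

```python
# Illustrative best-fit decreasing repair of an infeasible VM-to-host assignment.
# Assumes one capacity dimension per host; the paper's procedure runs inside a genetic
# algorithm and also accounts for communication-network energy, which is not modeled here.

def repair(assignment, vm_size, host_capacity):
    """assignment: dict vm -> host. Returns a repaired copy with no host over capacity."""
    load = {host: 0 for host in host_capacity}
    for vm, host in assignment.items():
        load[host] += vm_size[vm]

    # Evict VMs from overloaded hosts, largest first (the "decreasing" part).
    evicted = []
    for host, used in load.items():
        for vm in sorted((v for v, h in assignment.items() if h == host),
                         key=lambda v: vm_size[v], reverse=True):
            if used <= host_capacity[host]:
                break
            used -= vm_size[vm]
            evicted.append(vm)
        load[host] = used

    # Re-place evicted VMs, largest first, on the feasible host they fill most tightly.
    repaired = dict(assignment)
    for vm in sorted(evicted, key=lambda v: vm_size[v], reverse=True):
        feasible = [h for h in host_capacity if host_capacity[h] - load[h] >= vm_size[vm]]
        if not feasible:
            raise ValueError(f"no host can accept {vm}")
        best = min(feasible, key=lambda h: host_capacity[h] - load[h] - vm_size[vm])
        repaired[vm] = best
        load[best] += vm_size[vm]
    return repaired

if __name__ == "__main__":
    sizes = {"vm1": 4, "vm2": 4, "vm3": 2}   # hypothetical VM demands
    caps = {"h1": 8, "h2": 4}                # hypothetical host capacities
    print(repair({"vm1": "h2", "vm2": "h2", "vm3": "h1"}, sizes, caps))
```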

Findings

In addition, by reducing energy consumption with the proposed technique, the number of VM migrations was reduced compared with existing techniques. Moreover, fewer service level agreement violations (SLAV) occurred in the communication network.

Originality/value

The proposed algorithm aims to minimize energy consumption in both the PMs and the communication networks of data centers. The hybrid genetic algorithm is scalable because its computation time increases nearly linearly as the number of VMs increases.

Details

Journal of Systems and Information Technology, vol. 20 no. 4
Type: Research Article
ISSN: 1328-7265

Keywords

Article
Publication date: 24 July 2007

Simon Forge

The paper aims to examine the contribution of information and communication technology (ICT) to climate change and the origins of ICT unsustainability, and explores some possible…

Abstract

Purpose

The paper aims to examine the contribution of information and communication technology (ICT) to climate change and the origins of ICT unsustainability, and to explore some possible remedies.

Design/methodology/approach

The paper draws on a variety of sources to survey the many problems of sustainable ICTs; their energy consumption trends; planned obsolescence; hazardous materials and hazardous disposal; and analyses the way forward.

Findings

Highlights the unsustainability of many ICT trends, e.g. power consumption in data centers, and the extent to which ICT affects progress towards an economy's environmental sustainability.

Originality/value

This paper provides a novel approach to ICT sustainability, highlighting unsustainability of current software technology and related hardware trends, especially the threat of operating systems to planetary sustainability, as well as the growing power consumption trends in data centers.

Details

Foresight, vol. 9 no. 4
Type: Research Article
ISSN: 1463-6689

Keywords

Book part
Publication date: 10 May 2023

Shazib Ahmad, Saksham Mishra and Vandana Sharma

Purpose: Green computing is a way of using computer resources in an eco-friendly manner while reducing harmful environmental impact. Minimising toxic materials…

Abstract

Purpose: Green computing is a way of using computer resources in an eco-friendly manner while reducing harmful environmental impact. Minimising toxic materials and reducing energy usage also make it easier to recycle the product.

Need for the Study: The motivation of the study is to use green computing resources to decrease carbon emissions and their adverse effect on the environment.

Methodology: The study uses a qualitative method of collecting resources and data to address the opportunities, challenges, and future trends in green computing for sustainable future technologies. The study focusses on multiple kinds of cloud computing services consolidated and executed on single remote servers. The service demand processor offers these services to clients according to their needs. Simultaneous requests to access the cloud services, and how the processors handle and expertly manage those requests, are discussed and analysed.

Findings: The findings suggest that green computing is an upcoming and most promising area. The resources employed for green computing can help lower e-waste so that computing becomes environmentally friendly and self-sustainable.

Practical Implications: Green computing applies across all industries and service sectors, such as healthcare, entertainment, tourism, and education. The convergence of technologies like Cloud Computing, AI, and the Internet of Things (IoT) is greatly impacting the Green Supply Chain Management (GSCM) market.

Details

Contemporary Studies of Risks in Emerging Technology, Part A
Type: Book
ISBN: 978-1-80455-563-7

Keywords

Article
Publication date: 5 August 2014

Mona A. Mohamed and Sharma Pillutla

The main aim of this paper is to investigate the potential of Cloud Computing as a multilayer integrative collaboration space for knowledge acquisition, nurturing and sharing. The…

Abstract

Purpose

The main aim of this paper is to investigate the potential of Cloud Computing as a multilayer integrative collaboration space for knowledge acquisition, nurturing and sharing. The paper will pinpoint benefits and challenges of Cloud Computing in satisfying the new techno-sociological requirements of the knowledge society through the provision of information technology (IT) green services. Furthermore, the article calls for the engagement of researchers to generate additional discussion and dialog in this emerging and challenging area.

Design/methodology/approach

The paper applies a conceptual analysis to explore the utilization of the Cloud ecosystem as a new platform for knowledge management (KM) technologies characterized by environmental and economic benefits.

Findings

This paper reveals the emergence of a new layer in the Cloud stack known as Knowledge Management-as-a-Service. The article discusses how KM has the opportunity to evolve in synergy with Cloud Computing technologies using the modified Metcalfe’s law, while simultaneously pursuing other benefits. This research reveals that, if successfully deployed, Cloud Computing will contribute to efficient use of under-utilized computing resources and enable a low-carbon economy. However, challenges such as security, information overload and legal issues must be addressed by researchers before Cloud Computing becomes the de facto KM platform.
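The abstract does not state which modification of Metcalfe’s law the authors adopt. For orientation only: classical Metcalfe’s law values a network of n participants at roughly n², and a commonly cited modification scales the value as n·log(n); the small sketch below compares the two under that assumption.

```python
# Orientation only: classical Metcalfe's law vs. a commonly cited n*log(n) modification.
# Which variant the paper actually uses is not stated in the abstract.
import math

def metcalfe_value(n: int) -> float:
    """Classical Metcalfe's law: network value grows roughly as n^2."""
    return float(n * n)

def modified_metcalfe_value(n: int) -> float:
    """A commonly cited modification: value grows roughly as n * log(n)."""
    return n * math.log(n) if n > 1 else 0.0

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, metcalfe_value(n), round(modified_metcalfe_value(n), 1))
```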

Originality/value

While the technical, legal and environmental complications of Cloud Computing have received the attention warranted, the KM concepts and implementation facets within the realm of the knowledge society have not yet received adequate consideration. This paper provides enterprise KM architects, planners, chief information officers (CIOs) and chief knowledge officers (CKOs) with a comprehensive review of the critical issues, many of which are often overlooked or treated in a fragmented manner within the Cloud environment.

Article
Publication date: 25 September 2009

Tugrul Daim, Jay Justice, Mark Krampits, Matthew Letts, Ganesh Subramanian and Mukundan Thirumalai

The purpose of this paper is to identify energy efficiency metrics that can be used by IT managers to measure and maintain the implementation of cost savings and green initiatives…

Abstract

Purpose

The purpose of this paper is to identify energy efficiency metrics that can be used by IT managers to measure and maintain the implementation of cost savings and green initiatives in data centers.

Design/methodology/approach

The paper looks at the background of the problem and explores the reasons why energy savings in the data center are an important issue. Included are interviews and survey results from IT professionals serving at four unique organizations. A model of the measurable components of a data center is created to provide a framework for organizing metrics and communicating results throughout the corporation. The strengths and weaknesses of two of the most common data center metrics, PUE and DCP, are examined closely.
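Of the two metrics named here, power usage effectiveness (PUE) has a widely used definition: total facility energy divided by IT equipment energy, so an ideal data center approaches a value of 1.0. A minimal sketch of that calculation follows; the figures are hypothetical, and the DCP metric examined in the paper is not reproduced.

```python
# Widely used PUE (power usage effectiveness) calculation; the example values are
# hypothetical, and the DCP metric discussed in the paper is not shown here.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

if __name__ == "__main__":
    # Hypothetical data center drawing 1200 kW overall, of which 750 kW feeds the IT load.
    print(f"PUE = {pue(1200, 750):.2f}")  # ~1.60: 0.6 W of overhead per watt of IT power
```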

Findings

The paper concludes with future metric recommendations and a proposed credit‐based system that could be applied to encourage closer management of these metrics.

Practical implications

The metric recommendations can be used by IT managers resulting in energy efficiency improvements in their data centers.

Originality/value

The paper provides a thorough overview of multiple approaches and makes recommendations for a platform metric that can be further developed and adopted as a standard.

Details

Management of Environmental Quality: An International Journal, vol. 20 no. 6
Type: Research Article
ISSN: 1477-7835

Keywords
