Search results

1 – 10 of over 3000
Article
Publication date: 31 January 2020

Guan-hong Zhang, Odbal and Karlo Abnoosian

Today, with the rapid growth of cloud computing (CC), many users need to execute their tasks on the available resources to obtain the best performance…

Abstract

Purpose

Today, with the rapid growth of cloud computing (CC), many users need to execute their tasks on the available resources to obtain the best performance, reduce response time and make good use of resources. However, despite the significance of the scheduling issue in CC, to the best of the authors' knowledge there is no systematic and comprehensive paper studying and analyzing the recent methods. This paper aims to review the current mechanisms and techniques in this area.

Design/methodology/approach

The central purpose of this paper is to offer a complete study of the state-of-the-art scheduling algorithms in the cloud, together with directions for future research. In addition, this paper offers a methodological analysis of the scheduling mechanisms in the cloud environment.

Findings

The central role of this paper is to summarize the present issues related to scheduling in the cloud environment, to provide a structured view of some popular techniques in the cloud scheduling scope and to identify key areas for the development of cloud scheduling techniques in future research.

Research limitations/implications

In this paper, scheduling mechanisms are classified into two main categories, deterministic and non-deterministic algorithms, although other classifications are also possible. In addition, the selection of all related papers could not be guaranteed: some appropriate and relevant papers may have been missed in the search process.

Practical implications

According to the results of this paper, more suitable algorithms are needed to allocate tasks to resources in cloud environments. In addition, some principal rules in cloud scheduling should be re-evaluated to achieve maximum productivity and to minimize wasted expense and effort. To avoid overloading and underloading of components and resources, a proposed method should execute workloads in an adaptable and scalable way. As the number of users in cloud environments increases, the number of tasks that need to be scheduled increases proportionally, so an efficient mechanism for scheduling tasks in these environments is needed.
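
To make the allocation problem concrete, the following is a minimal sketch of a greedy load-balancing scheduler in Python. It is an illustrative baseline only, not an algorithm from the surveyed papers: each task goes to the resource that becomes free earliest, which keeps some components from being overloaded while others sit idle.

```python
import heapq

def greedy_schedule(task_lengths, n_resources):
    """Assign each task to the resource that becomes free earliest.

    Returns the (task, resource) assignments and the resulting makespan.
    """
    free_at = [(0.0, r) for r in range(n_resources)]  # (finish time, resource)
    heapq.heapify(free_at)
    assignments = []
    for task, length in enumerate(task_lengths):
        finish, r = heapq.heappop(free_at)   # least-loaded resource so far
        heapq.heappush(free_at, (finish + length, r))
        assignments.append((task, r))
    return assignments, max(t for t, _ in free_at)

# Six tasks on three resources; the makespan here is 9.
print(greedy_schedule([4, 2, 7, 1, 3, 5], 3))
```

The surveyed heuristics can be compared against such a deterministic baseline on makespan and resource utilization.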

Originality/value

The general information gathered in this study acquaints researchers with the state of the art in cloud scheduling. Overall, the answers to the research questions summarize the main objectives of scheduling and the current challenges, mechanisms and methods in cloud systems. The authors hope that the results of this paper lead researchers to present more efficient scheduling techniques for cloud systems.

Details

Kybernetes, vol. 49 no. 12
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 24 December 2021

Aref Gholizadeh Manghutay, Mehdi Salay Naderi and Seyed Hamid Fathi

Heuristic algorithms have been widely used in different types of optimization problems. Their unique features in terms of running time and flexibility have made them superior to…

Abstract

Purpose

Heuristic algorithms have been widely used in different types of optimization problems. Their unique features in terms of running time and flexibility have made them superior to deterministic algorithms. To accurately compare different heuristic algorithms in solving optimization problems, the final optimal solution needs to be known. Existing deterministic methods such as Exhaustive Search and Integer Linear Programming can provide the final global optimal solution for small-scale optimization problems. However, as the system grows, the number of calculations and the required memory size increase enormously, so applying existing deterministic methods is no longer possible for medium- and large-scale systems. The purpose of this paper is to introduce a novel deterministic method with a short running time and a small memory size requirement for optimal placement of Micro Phasor Measurement Units (µPMUs) in radial electricity distribution systems to make the system completely observable.

Design/methodology/approach

First, the principle of the method is explained and the observability of the system is analyzed. Then, the algorithm's running time and memory usage when applied to some modified versions of the Institute of Electrical and Electronics Engineers 123-node test feeder are obtained and compared with those of its deterministic counterparts.

Findings

The innovative step-by-step placement of µPMUs yields a unique method. Simulation results show that the proposed method has the intended features of a short running time and a small memory size requirement.

Originality/value

While the mathematical background of the observability study of electricity distribution systems is very well presented in the referenced papers, the proposed step-by-step placement method of µPMUs, which shrinks the unobservable parts of the system in each step, has not been discussed before. The presented paper is directly applicable to typical problems in the field of power systems.
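
To give a feel for step-by-step placement, here is a generic greedy coverage sketch, not the authors' algorithm: assume a µPMU at a node makes that node and its immediate neighbors observable, and at each step place a unit where it covers the most still-unobservable nodes, shrinking the unobservable set until it is empty.

```python
def greedy_pmu_placement(adjacency):
    """Greedy µPMU placement on a radial network until full observability.

    adjacency: dict node -> list of neighbors. Assumes a µPMU observes its
    own node and every adjacent node. Returns the chosen installation nodes.
    """
    unobserved = set(adjacency)
    placed = []
    while unobserved:
        # Place where a unit renders the most still-unobserved nodes observable.
        best = max(adjacency,
                   key=lambda n: len(({n} | set(adjacency[n])) & unobserved))
        placed.append(best)
        unobserved -= {best} | set(adjacency[best])
    return placed

# Five-node radial feeder: node 0 is the substation.
feeder = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(greedy_pmu_placement(feeder))  # [0, 3]: two units observe all five nodes
```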

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 June 2000

P. Di Barba

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States the goal herein is one of identifying devices or systems able to provide prescribed…

Abstract

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States that the goal herein is one of identifying devices or systems able to provide prescribed performance. Notes that 18 papers from the Symposium are grouped in the area of automated optimal design. Describes the main challenges that condition the future development of computational electromagnetism. Concludes by itemizing the range of applications in this third chapter, from small actuators to the optimization of induction heating systems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 July 2006

Giuseppe Delvecchio, Claudio Lofrumento, Ferrante Neri and Marcello Sylos Labini

This paper aims to design an algorithm able to locate all the possible dangerous areas generated by the leakage of a fault current from a grounding system (i.e. the areas where the…

Abstract

Purpose

This paper aims to design an algorithm able to locate all the possible dangerous areas generated by the leakage of a fault current from a grounding system (i.e. the areas where the limits of the technical standards are not respected) and then to locate, inside each area, the point where the touch voltage locally reaches its maximum value.

Design/methodology/approach

A fast evolutionary-deterministic algorithm for solving constrained multimodal optimization problems is proposed. The algorithm is composed of three algorithmic blocks: a Quasi Genetic Algorithm to find a population of feasible solutions; a Fitness Sharing Selection to choose a subpopulation of feasible, fitter solutions with high diversity; and a Hooke-Jeeves Algorithm to find all the global and local feasible maxima.
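
The three-block structure can be sketched loosely in Python as below. This is an illustrative reconstruction under stated simplifications, not the authors' implementation: plain random feasible sampling stands in for the Quasi Genetic Algorithm, a distance filter stands in for the Fitness Sharing Selection, and a basic exploratory pattern search stands in for the Hooke-Jeeves block.

```python
import math
import random

def feasible_population(n, bounds, feasible, fitness):
    """Block 1 stand-in: random feasible points, sorted best first."""
    pop = []
    while len(pop) < n:
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        if feasible(x):
            pop.append(x)
    return sorted(pop, key=fitness, reverse=True)

def fitness_sharing(pop, radius):
    """Block 2 stand-in: keep fit individuals that are mutually distant."""
    kept = []
    for x in pop:  # pop is sorted best first
        if all(max(abs(a - b) for a, b in zip(x, y)) > radius for y in kept):
            kept.append(x)
    return kept

def pattern_search(x, fitness, feasible, step=0.1, tol=1e-4):
    """Block 3 stand-in: exploratory moves in the spirit of Hooke-Jeeves."""
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if feasible(y) and fitness(y) > fitness(x):
                    x, improved = y, True
        if not improved:
            step /= 2  # refine the mesh when no move helps
    return x

# Toy multimodal surface with box (safety-style) constraints.
f = lambda x: math.sin(3 * x[0]) + math.cos(2 * x[1])
ok = lambda x: all(0 <= v <= 4 for v in x)

seeds = fitness_sharing(feasible_population(200, [(0, 4)] * 2, ok, f), radius=0.8)
maxima = [pattern_search(s, f, ok) for s in seeds]  # one local maximum per seed
```

Because each diverse seed is refined separately, the pipeline returns several maxima instead of collapsing onto a single one, which is the point of the fitness-sharing stage.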

Findings

The proposed algorithm has been successfully applied to various current-field problems (i.e. to many shapes of grounding grids) to find the dangerous values of the touch voltages generated by grounding systems of any shape, and it has turned out to be fast and reliable.

Originality/value

For this kind of problem, the literature lacks multimodal optimization methods under safety constraints, and the application of classical methods (e.g. genetic algorithms or deterministic methods) would often be inadequate, since these methods are designed to converge towards a single maximum point and so unavoidably lose the information related to all the other possible maxima. On the contrary, a good application of the proposed algorithm overcomes these limits.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 25 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 7 December 2021

Alexander Zemliak

In this paper, the previously developed idea of generalized optimization of circuits for deterministic methods has been extended to the genetic algorithm (GA) to demonstrate new…

Abstract

Purpose

In this paper, the previously developed idea of generalized optimization of circuits for deterministic methods has been extended to the genetic algorithm (GA) to demonstrate new possibilities for solving an optimization problem that enhance accuracy and significantly reduce computing time.

Design/methodology/approach

The disadvantages of GAs are premature convergence to local minima and an increase in computation time when a sufficiently high accuracy is demanded of the minimum. The idea of generalized optimization of circuits, previously developed for deterministic optimization methods, is built into the GA and allows one to implement various optimization strategies based on the GA. The shape of the fitness function, as well as the length and structure of the chromosomes, is determined by a control vector artificially introduced within the framework of generalized optimization. This study found that changing the control vector that determines how the fitness function is calculated makes it possible to bypass local minima and find the global minimum with high accuracy and a significant reduction in central processing unit (CPU) time.
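
A loose sketch of the control-vector idea follows, with hypothetical names and a toy problem rather than the author's circuit formulation: each component of the control vector decides whether a given equation is treated as already handled or folded into the fitness function as a penalty term, so changing the vector changes the landscape the GA searches.

```python
import random

def make_fitness(objective, equations, control, penalty=1e3):
    """Build a fitness function from a control vector (illustrative only).

    control[i] == 0: equation i is assumed handled exactly elsewhere (ignored);
    control[i] == 1: equation i enters the fitness as a quadratic penalty term.
    """
    def fitness(x):
        value = objective(x)
        for c, g in zip(control, equations):
            if c:
                value += penalty * g(x) ** 2
        return value
    return fitness

def simple_ga(fitness, bounds, pop_size=40, generations=200, sigma=0.05):
    """Bare-bones real-coded GA: binary tournaments plus Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = min(a, b, key=fitness)  # minimizing
            child = [min(max(v + random.gauss(0, sigma), lo), hi)
                     for v, (lo, hi) in zip(parent, bounds)]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Toy "circuit": minimize a power-like objective under one node equation.
objective = lambda x: x[0] ** 2 + x[1] ** 2
equations = [lambda x: x[0] + x[1] - 1.0]  # residual that must vanish
fit = make_fitness(objective, equations, control=[1])
print(simple_ga(fit, [(-2, 2), (-2, 2)]))  # approaches (0.5, 0.5)
```

Sweeping over different control vectors here amounts to searching over problem formulations as well as over points, which is the sense in which the control vector defines an optimization strategy.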

Findings

A structure of the control vector is found that makes it possible to reduce the CPU time by several orders of magnitude and to increase the accuracy of the optimization process compared with the traditional approach for GAs.

Originality/value

It was demonstrated that incorporating the idea of generalized optimization into the body of a stochastic optimization method leads to qualitatively new properties of the optimization process, increasing the accuracy and minimizing the CPU time.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 21 August 2009

Beatriz Pontes, Federico Divina, Raúl Giráldez and Jesús S. Aguilar‐Ruiz

The purpose of this paper is to present a novel control mechanism for avoiding overlapping among biclusters in expression data.

Abstract

Purpose

The purpose of this paper is to present a novel control mechanism for avoiding overlapping among biclusters in expression data.

Design/methodology/approach

Biclustering is a technique used in the analysis of microarray data. One of the most popular biclustering algorithms was introduced by Cheng and Church (2000) (Ch&Ch). Even if this heuristic is successful at finding interesting biclusters, it presents several drawbacks. The main shortcoming is that it introduces random values into the expression matrix to control overlapping. The overlap control method presented in this paper is based on a matrix of weights that is used to estimate the overlap of a bicluster with those already found. In this way, the algorithm always works on real data, and so the biclusters it discovers contain only original data.
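
The weight-matrix mechanism might be sketched as follows. This is an illustration of the stated idea, not the authors' code, and it uses an assumed variance-based quality stand-in rather than the Cheng and Church mean-squared-residue score: a counter matrix records how often each cell has been covered by previously found biclusters, and a candidate's score is penalized by the accumulated weight of the cells it would reuse.

```python
import numpy as np

class OverlapControl:
    """Penalize candidate biclusters that reuse already-covered cells."""

    def __init__(self, shape):
        self.weights = np.zeros(shape)  # how often each cell has been covered

    def penalty(self, rows, cols):
        """Mean accumulated weight over the candidate's cells."""
        return self.weights[np.ix_(rows, cols)].mean()

    def register(self, rows, cols):
        """Record a found bicluster by incrementing its cells' weights."""
        self.weights[np.ix_(rows, cols)] += 1

# Candidates are scored on the original data, minus an overlap penalty,
# so no random values ever enter the expression matrix.
rng = np.random.default_rng(0)
data = rng.random((100, 40))                    # toy expression matrix
ctrl = OverlapControl(data.shape)
rows, cols = [1, 5, 9], [0, 3]
quality = -data[np.ix_(rows, cols)].var()       # stand-in quality measure
adjusted = quality - 0.5 * ctrl.penalty(rows, cols)
ctrl.register(rows, cols)                       # accept it; future reuse costs
```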

Findings

The paper shows that the original algorithm wrongly estimates the quality of the biclusters after some iterations, due to the random values it introduces. The empirical results show that the proposed approach is effective at improving the heuristic. It is also important to highlight that many interesting biclusters found using this approach would not have been obtained with the original algorithm.

Originality/value

The original algorithm proposed by Ch&Ch is one of the most successful algorithms for discovering biclusters in microarray data. However, it presents some limitations, the most relevant being the substitution phase adopted to avoid overlapping among biclusters. The modified version of the algorithm proposed in this paper improves on the original one, as shown in the experiments.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 May 2006

Jake M. Kosior and Doug Strong

The purpose of this research is to describe how the total cost concept with logistical based costing (LBC) is developed in detail and then used to build logistical models on the…

Abstract

Purpose

The purpose of this research is to describe how the total cost concept with logistical based costing (LBC) is developed in detail and then used to build logistical models on the Microsoft Excel platform that are integrated from the customer's factory to the supplier's door.

Design/methodology/approach

The models developed in this project are deterministic, event-based algorithms for comparing logistical conduits for bulk and containerized commodities. The demand chain approach is used to derive the pathways in reverse order, from the customer to the supplier. This methodology is necessary to find all possible conduits from origin to destination, including points where product may cross over between logistics systems. The approach is applied to the bulk and container systems, with disconnects (elevators, ports) serving as the demarcation points. The pathways from supplier to end-user must be identified before classification and costing techniques are applied. A goal of this research was to compare the per-unit cost of two different logistical systems, bulk versus container, in two case studies: the first for a miller in Northern China and the second for a mill in Helsinki, Finland.
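
The modular cost-function approach translates naturally into a small deterministic model. The sketch below uses entirely hypothetical stages and figures, not the paper's Excel models: each stage of a pathway contributes its own cost function, and a conduit's per-unit cost is the sum over its stages divided by the shipped tonnage.

```python
def per_unit_cost(stages, tonnes):
    """Per-tonne cost of one pathway: sum of modular stage costs / tonnage.

    stages: list of (stage_name, cost_fn) with cost_fn(tonnes) -> total cost.
    """
    return sum(cost_fn(tonnes) for _, cost_fn in stages) / tonnes

# Hypothetical pathways, traced customer-back-to-supplier; all figures invented.
bulk = [
    ("ocean bulk freight", lambda t: 38.0 * t),
    ("port elevation",     lambda t: 6.5 * t + 12_000),   # fixed handling charge
    ("rail to port",       lambda t: 21.0 * t),
]
container = [
    ("ocean container",    lambda t: 1_950.0 * t / 24),   # ~24 t per box
    ("transloading",       lambda t: 9.0 * t),
    ("truck to terminal",  lambda t: 14.0 * t),
]
for name, chain in (("bulk", bulk), ("container", container)):
    print(f"{name}: {per_unit_cost(chain, 10_000):.2f} per tonne")
```

Swapping a stage's cost function in or out is the customization the paper argues each unique supply/demand chain requires.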

Findings

The spreadsheet models produced results that were within 3 percent of real-world costs. Each demand chain was shown to be unique and required customized cost functions to configure the algorithms properly.

Research limitations/implications

The paper suggests that, while a core algorithm may exist for all supply/demand chains, no one particular algorithm configuration suffices. Each supply/demand chain is unique, in terms of both costs and performance. The use of modular cost functions provides the customization necessary to address this issue.

Practical implications

This project verifies that successful implementation of a model depends on following a set of procedures that begins with a clear statement of what the model is to measure, what is to be included and what constraints are imposed on the algorithm. Mapping the flow of goods through logistical systems provides visibility into where costs are incurred and how they are to be assigned to the supplier or customer. An improperly assigned variable in the early stages of a supply/demand chain reduces the accuracy of subsequent calculations. LBC increases the precision of models by properly establishing the configuration of cost drivers for each stage of the supply/demand chain and by avoiding the cost averaging used in statistical analysis.

Originality/value

This paper provides a standardized approach for mapping, costing and building global supply/demand chain models. The ultimate customer, once thought of as the “end of the line”, now dictates the cost and performance requirements of logistical conduits. While this paper encapsulates methods for building total cost models from the customer's perspective, other configurations can be readily constructed to examine physical and performance characteristics.

Details

Journal of Enterprise Information Management, vol. 19 no. 3
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 4 December 2019

Herbjørn Andresen

The purpose of this paper is to raise awareness within the records management community of evolving demands for explanations that make it possible to understand the content of…

Abstract

Purpose

The purpose of this paper is to raise awareness within the records management community of evolving demands for explanations that make it possible to understand the content of records, also when they reflect output from algorithms.

Design/methodology/approach

The methodological approach is a conceptual analysis based in records management theory and the philosophy of science. The concepts that are developed are thereafter applied to “the right to an explanation” and “an algorithmic ethics approach,” respectively, to further examine their viability.

Findings

Different forms of explanations, ranging from “certain” explanations to predictions, as well as varying degrees of control over the input data to algorithms, affect the nature of the explanations and what kinds of records the explanations may reside in.

Originality/value

This paper contributes to a conceptual frame for discussing where explanations to algorithms may be documented, within different kinds of records, emanating from different kinds of processes.

Details

Records Management Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0956-5698

Book part
Publication date: 15 January 2010

Emma Frejinger and Michel Bierlaire

This paper deals with choice set generation for the estimation of route choice models. Two different frameworks are presented in the literature: one aims at generating…

Abstract

This paper deals with choice set generation for the estimation of route choice models. Two different frameworks are presented in the literature: one aims at generating consideration sets and one samples alternatives from the set of all paths. Most algorithms are designed to generate consideration sets but in general fail to do so, because some observed paths are not generated. In the sampling approach, the observed path, as well as all considered paths, is in the choice set by design. However, few algorithms can actually be used in the sampling context.

In this paper, we present the two frameworks, with an emphasis on the sampling approach, and discuss the applicability of existing algorithms to each of the frameworks.
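
For a feel of the sampling framework (a generic illustration, not an algorithm evaluated in this chapter), paths can be drawn by repeated loop-free random walks from origin to destination, keeping the count of how often each distinct path is drawn so that sampling corrections can be applied during estimation:

```python
import random
from collections import Counter

def sample_paths(successors, origin, dest, n_draws=1000, max_len=50):
    """Draw paths by loop-free random walks; count how often each is drawn.

    successors: dict node -> list of next nodes. The draw counts are what a
    sampling-of-alternatives correction needs at estimation time.
    """
    drawn = Counter()
    for _ in range(n_draws):
        path, node = [origin], origin
        while node != dest and len(path) < max_len:
            options = [n for n in successors[node] if n not in path]
            if not options:
                break  # dead end; discard this walk
            node = random.choice(options)
            path.append(node)
        if node == dest:
            drawn[tuple(path)] += 1
    return drawn

net = {"A": ["B", "C"], "B": ["D"], "C": ["B", "D"], "D": []}
print(sample_paths(net, "A", "D", n_draws=200))
```

By construction, any observed path can be added to the sampled set, which is why the sampling framework never "loses" the chosen alternative the way consideration-set generators can.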

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 16 November 2017

Thomas Fischer

The scientific criterion of determinability (predictability) can be framed in realist or in constructivist terms. This can pose a challenge to design researchers who operate…

Abstract

Purpose

The scientific criterion of determinability (predictability) can be framed in realist or in constructivist terms. This can pose a challenge to design researchers who operate between scientific research (which favors a realist view of determinism/indeterminism) and design practice (which favors a constructivist view of determinability/indeterminability). This paper aims to develop a framework to navigate this challenge.

Design/methodology/approach

A critical approach to “scientific” design research is developed by examining the notion of (in)determinism, with particular attention to the observer-based projection of systemic boundaries, and the constructivist understanding of how such boundaries are constituted. This is illustrated using automata theory. A decision-making framework is then developed based on a diagram known as the epistemological triangle.

Findings

The navigation between determinism as a property of the observed, and determinability as a property of the observer follows the navigation between realist and constructivist perspectives, and thus has a bearing on the navigation of the kinds of design research distinguished by Frayling, and their implied primary evaluation criteria.

Research limitations/implications

The presented argument advocates a constructivist view, which, however, is not meant to imply a rejection of, but rather, an additional degree of freedom extending the realist view.

Originality/value

This discussion contributes to the establishment of observational determinability as observer-dependent. The proposed framework connects the navigation between deterministic observables and determining observers to the navigation between the design criteria form, meaning and utility. This may be of value within and beyond design research.

Details

Kybernetes, vol. 46 no. 9
Type: Research Article
ISSN: 0368-492X
