Search results

1 – 10 of 528
Article
Publication date: 20 February 2023

Zakaria Sakyoud, Abdessadek Aaroud and Khalid Akodadi

Abstract

Purpose

The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary efficiency. The authors have used the public university as the implementation field.

Design/methodology/approach

The design of the research work followed the design science research (DSR) methodology for information systems. DSR is a research paradigm wherein a designer answers questions relevant to human problems through the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The authors have adopted a techno-functional approach. The technical part consists of the development of an intelligent recommendation system that supports decision-makers in choosing optimal information technology (IT) equipment. This intelligent recommendation system relies on a set of functional and business concepts, namely the Moroccan normative laws and the Control Objectives for Information and Related Technologies (COBIT) guidelines on information system governance.

Findings

The modeling of business processes in public universities is established using business process model and notation (BPMN) in accordance with official regulations. The set of BPMN models constitutes a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary waste, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature. Implementation of artificial intelligence techniques can bring great value in terms of transparency and fluidity in purchasing business process execution.

Research limitations/implications

Business limitations: First, the proposed system was modeled to handle one type of product, namely computer-related equipment. Hence, the authors intend to extend the model to other types of products in future work. Moreover, the system proposes an optimal purchasing order and assumes that decision-makers will rely on it to choose between offers. As a perspective, the authors plan to work on complete automation of the workflow, including vendor selection and offer validation. Technical limitations: Natural language processing (NLP) is a widely used sentiment analysis (SA) technique that enabled the authors to validate the proposed system. Even when working on samples of the datasets, the authors noticed NLP's dependence on substantial computing power. The authors intend to experiment with learning-based and knowledge-based SA and to assess their computing power consumption and analysis accuracy compared with NLP. Another technical limitation is related to web scraping; the users' reviews are crucial for the authors' system, and to guarantee timely and reliable reviews the system has to search websites automatically, which confronts the authors with the limitations of web scraping, such as constantly changing website structures and scraping restrictions.
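
As an illustration of the knowledge-based SA alternative the authors plan to explore, the sketch below scores reviews against a small hand-built sentiment lexicon; the lexicon, review texts and scoring rule are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a knowledge-based (lexicon) sentiment scorer, one of the
# alternatives to NLP-based SA that the authors intend to experiment with.
# The lexicon and the review texts below are hypothetical illustrations.

POSITIVE = {"reliable", "fast", "excellent", "durable"}
NEGATIVE = {"slow", "defective", "noisy", "overpriced"}

def lexicon_sentiment(review: str) -> float:
    """Return a score in [-1, 1]: +1 fully positive, -1 fully negative."""
    tokens = review.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

reviews = ["Fast and reliable laptop", "Noisy fan and slow delivery"]
print([lexicon_sentiment(r) for r in reviews])  # e.g. [1.0, -1.0]
```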

Practical implications

The modeling of business processes in public universities is established using BPMN in accordance with official regulations. The set of BPMN models constitutes a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary waste, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature.

Originality/value

The adopted techno-functional approach enabled the authors to bring information system governance from a highly abstract level to a practical implementation where the theoretical best practices and guidelines are transformed to a tangible application.

Details

Kybernetes, vol. 53 no. 5
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 6 August 2018

Stephen Boakye Twum and Elaine Aspinwall

Abstract

Purpose

System reliability optimisation in today's world is critical to ensuring customer satisfaction, business competitiveness, secure and uninterrupted delivery of services and safety of operations. Among the many system configurations, complex systems are the most difficult to model for reliability optimisation. The purpose of this paper is to assess the performance of a novel optimisation methodology of the authors, developed to address these difficulties, in the context of a gas carrying system (GCS) exhibiting dual failure modes and high initial reliability.

Design/methodology/approach

The minimum cut sets involving components of the system were obtained using the fault tree approach, and their reliabilities were constituted into criteria to be maximised, while the associated cost of improving those reliabilities was minimised. Pareto optimal generic component and system reliabilities were subsequently obtained.
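
A minimal sketch of how minimum cut sets translate into a system-level reliability figure, assuming independent components and the usual rare-event approximation; the component reliabilities and cut sets below are hypothetical and do not reproduce the authors' GCS model.

```python
# Minimal sketch (not the authors' full model): approximating system
# unreliability from minimum cut sets under the usual rare-event bound,
# assuming independent components. Component reliabilities are hypothetical.

from math import prod

component_reliability = {"valve": 0.995, "sensor": 0.990, "pump": 0.985}

# Each minimum cut set is a set of components whose joint failure fails the system.
minimum_cut_sets = [{"valve", "sensor"}, {"pump"}]

def cut_set_unreliability(cut_set):
    return prod(1.0 - component_reliability[c] for c in cut_set)

# Rare-event upper bound on system unreliability: sum of cut-set unreliabilities.
q_sys = sum(cut_set_unreliability(cs) for cs in minimum_cut_sets)
print(f"System reliability is approximately {1.0 - q_sys:.6f}")
```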

Findings

The results indicate that the optimisation methodology could improve the system's reliability even from an initially high one, granted that the feasibility factor for improving a component's reliability was very high. The results obtained, in spite of the size (41 objective functions and 18 decision variables), the complexity (dual failure modes) and the high initial reliability values, provide confidence in the optimisation model and methodology and demonstrate their applicability to systems exhibiting multiple failure modes.

Research limitations/implications

The GCS was assumed to be either failed or operational, non-repairable, and to have precisely determined parameters. The components' failure rates were exponentially distributed and their failure modes independent. A single weight vector expressing a preference in which component reliabilities were weighted higher than cost was used, owing to the stability of the optimisation model to weight variations.

Practical implications

The high initial reliability values imply that reliability improvement interventions may not be a critical requirement for the GCS. The high levels could be sustained through planned and systematic inspection and maintenance activities. Even so, purely from an analytical standpoint, the results nevertheless show that there was some room for reliability improvement, however marginal. The improvement may be secured by: use of components with comparable levels of reliability to those achieved; use of redundancy techniques to achieve the desired levels of improvement in reliability; or redesign of the components.

Originality/value

The novelty of this work is in the use of a reliability optimisation model and methodology that focuses on a system’s minimum cut sets as criteria to be optimised in order to optimise the system’s reliability, and the specific application to a complex system exhibiting dual failure modes and high component reliabilities.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 7
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 22 October 2019

Ming Li, Lisheng Chen and Yingcheng Xu

Abstract

Purpose

A large number of questions are posted on community question answering (CQA) websites every day. Providing a set of core questions will ease the question overload problem. These core questions should cover the main content of the original question set. There should be low redundancy within the core questions and a consistent distribution with the original question set. The paper aims to discuss these issues.

Design/methodology/approach

In the paper, a method named QueExt for extracting core questions is proposed. First, questions are modeled using a biterm topic model. Then, these questions are clustered based on particle swarm optimization (PSO). With the clustering results, the number of core questions to be extracted from each cluster can be determined. Afterwards, a multi-objective PSO algorithm is proposed to extract the core questions. Both PSO algorithms are integrated with operators from genetic algorithms to avoid local optima.
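
A minimal sketch of the PSO-clustering ingredient described above (not the authors' QueExt implementation): each particle encodes k centroids over topic-vector representations of questions, and fitness is the within-cluster sum of distances to the nearest centroid. The data are random stand-ins for biterm-topic-model vectors.

```python
# Minimal PSO-based clustering sketch in the spirit described above.
# The data below are hypothetical stand-ins for topic representations.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 5))          # 100 questions, 5 topic dimensions (hypothetical)
k, n_particles, n_iter = 3, 20, 50
dim = k * X.shape[1]              # each particle encodes k centroids

def fitness(flat_centroids):
    centroids = flat_centroids.reshape(k, X.shape[1])
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()    # within-cluster sum of distances

pos = rng.random((n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best clustering objective:", fitness(gbest))
```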

Findings

Extensive experiments on real data collected from the well-known CQA website Zhihu have been conducted, and the experimental results demonstrate the method's superior performance over other benchmark methods.

Research limitations/implications

The proposed method provides new insight into and enriches research on information overload in CQA. It performs better than other methods in extracting core short text documents, and thus provides a better way to extract core data. PSO is used as a novel method for selecting core questions, expanding research on the application of the PSO model. The study also contributes to research on PSO-based clustering: with the integration of K-means++, the number of clusters, a key parameter, is optimized.

Originality/value

A novel core question extraction method for CQA is proposed, which provides an efficient way to alleviate question overload. The PSO model is extended and applied in a novel way to selecting core questions. The PSO model is also integrated with the K-means++ method to optimize the number of clusters, which is the key parameter in PSO-based text clustering, providing a new way to cluster texts.

Details

Data Technologies and Applications, vol. 53 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 11 November 2013

Giovanni Petrone, John Axerio-Cilies, Domenico Quagliarella and Gianluca Iaccarino

Abstract

Purpose

A probabilistic non-dominated sorting genetic algorithm (P-NSGA) for multi-objective optimization under uncertainty is presented. The purpose of this algorithm is to create a tight coupling between the optimization and uncertainty procedures, use all of the possible probabilistic information to drive the optimizer, and leverage high-performance parallel computing.

Design/methodology/approach

This algorithm is a generalization of a classical genetic algorithm for multi-objective optimization (NSGA-II) by Deb et al. The proposed algorithm relies on the use of all possible information in the probabilistic domain summarized by the cumulative distribution functions (CDFs) of the objective functions. Several analytic test functions are used to benchmark this algorithm, but only the results of the Fonseca-Fleming test function are shown. An industrial application is presented to show that P-NSGA can be used for multi-objective shape optimization of a Formula 1 tire brake duct, taking into account the geometrical uncertainties associated with the rotating rubber tire and uncertain inflow conditions.
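
A hedged sketch of the kind of CDF-based comparison such an algorithm can use (not necessarily the paper's exact ranking rule): design A is preferred to B for a minimization objective when A's empirical CDF lies on or above B's everywhere, i.e. first-order stochastic dominance. The samples below are hypothetical.

```python
# Minimal CDF-based comparison sketch in the spirit of P-NSGA, assuming
# first-order stochastic dominance as the preference rule (an assumption,
# not the paper's stated criterion). Objective samples are hypothetical.

import numpy as np

def ecdf(samples, grid):
    samples = np.sort(samples)
    return np.searchsorted(samples, grid, side="right") / len(samples)

def stochastically_dominates(a_samples, b_samples):
    grid = np.unique(np.concatenate([a_samples, b_samples]))
    fa, fb = ecdf(a_samples, grid), ecdf(b_samples, grid)
    return np.all(fa >= fb) and np.any(fa > fb)

rng = np.random.default_rng(1)
drag_a = rng.normal(0.30, 0.01, 200)   # uncertain objective samples for design A
drag_b = rng.normal(0.33, 0.01, 200)   # and for design B
print(stochastically_dominates(drag_a, drag_b))   # likely True
```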

Findings

This algorithm is shown to have deterministic consistency (i.e. it reverts to the original NSGA-II) when the objective functions are deterministic. When the quality of the CDF is increased (either using more points or higher fidelity resolution), the convergence behavior improves. Since all the information regarding uncertainty quantification is preserved, all the different types of Pareto fronts that exist in the probabilistic framework (e.g. mean value Pareto, mean value penalty Pareto, etc.) are shown to be generated a posteriori. An adaptive sampling approach and parallel computing (in both the uncertainty and optimization algorithms) are shown to provide a several-fold speed-up in selecting optimal solutions under uncertainty.

Originality/value

There are no existing algorithms that use the full probabilistic distribution to guide the optimizer. The method presented herein bases its sorting on real function evaluations, not merely measures (i.e. mean of the probabilistic distribution) that potentially do not exist.

Details

Engineering Computations, vol. 30 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 24 February 2012

Marisa da Silva Maximiano, Miguel A. Vega‐Rodríguez, Juan A. Gómez‐Pulido and Juan M. Sánchez‐Pérez

Abstract

Purpose

The purpose of this paper is to address a multiobjective FAP (frequency assignment problem) formulation. More precisely, two conflicting objectives – the interference cost and the separation cost – are considered to characterize FAP as an MO (multiobjective optimization) problem.

Design/methodology/approach

The contribution to this specific telecommunication problem in a real scenario follows a recent approach, for which the authors have already obtained some preliminary results. In this paper, a much more complete analysis is performed, including two well‐known algorithms (NSGA‐II and SPEA2), with new results, new comparisons and statistical studies. More concretely, five different algorithms are presented and compared. The popular multiobjective algorithms NSGA‐II and SPEA2 are compared against the Differential Evolution with Pareto Tournaments (DEPT) algorithm, the Greedy Multiobjective Variable Neighborhood Search (GMO‐VNS) algorithm and its variant Greedy Multiobjective Skewed Variable Neighborhood Search (GMO‐SVNS). Furthermore, the authors also contribute a new multiobjective metaheuristic, the Multiobjective Artificial Bee Colony (MO‐ABC), developed to address FAP and included in the comparison. The results were analysed using two complementary indicators: the hypervolume indicator and the coverage relation. Two large‐scale real‐world mobile networks were used to validate the performance comparison made among the multiobjective metaheuristics.
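
For readers unfamiliar with the coverage relation used above as one of the two quality indicators, the sketch below computes the standard set-coverage metric C(A, B): the fraction of solutions in front B weakly dominated by at least one solution in front A. The fronts are hypothetical interference/separation cost pairs, not the paper's data.

```python
# Minimal sketch of the set-coverage indicator C(A, B) for two Pareto fronts,
# both objectives minimized. The fronts below are hypothetical.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(front_a, front_b):
    covered = sum(any(dominates(a, b) or a == b for a in front_a) for b in front_b)
    return covered / len(front_b)

front_a = [(10.0, 4.0), (12.0, 3.0), (15.0, 2.0)]   # (interference, separation) costs
front_b = [(11.0, 4.5), (14.0, 2.5), (9.0, 6.0)]
print(coverage(front_a, front_b), coverage(front_b, front_a))
```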

Findings

The final results show that the multiobjective proposal is very competitive, clearly surpassing the results obtained by the well‐known multiobjective algorithms (NSGA‐II and SPEA2).

Originality/value

The paper provides a comparison among several multiobjective metaheuristics for solving FAP as a real‐life telecommunication engineering problem. A new multiobjective metaheuristic is also presented. The preliminary results were extended with two well‐known multiobjective algorithms which, to the authors' knowledge, had never before been investigated for FAP.

Article
Publication date: 17 July 2023

Youping Lin

Abstract

Purpose

The interval multi-objective optimization problems (IMOPs) are universal and vital uncertain optimization problems. In this study, an interval multi-objective grey wolf optimization (GWO) algorithm based on a fuzzy system is proposed to solve IMOPs effectively.

Design/methodology/approach

First, the classical genetic operators are embedded into the interval multi-objective GWO as local search strategies, which effectively balances the global search ability and the local development ability. Second, by constructing a fuzzy system, an effective local search activation mechanism is proposed to save computing resources as much as possible while ensuring the performance of the algorithm. The fuzzy system takes the hypervolume, the imprecision and the number of iterations as inputs and outputs the activation index, the local population size and the maximum number of iterations. Then, the fuzzy inference rules are defined. The activation index determines whether to activate the local search process, and the other outputs set the population size and the maximum number of iterations used in that process.
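
A simplified, hypothetical sketch of the kind of fuzzy activation mechanism described above (not the paper's actual membership functions or rule base): two normalized inputs drive a zero-order Sugeno-style inference whose output, an activation index, decides whether to trigger local search.

```python
# Simplified fuzzy activation sketch. Two inputs on [0, 1] (normalized
# hypervolume improvement and normalized imprecision) feed a zero-order
# Sugeno-style inference. All membership functions, rules and the trigger
# threshold are hypothetical assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activation_index(hv_improvement, imprecision):
    low_hv  = tri(hv_improvement, -0.001, 0.0, 0.3)    # stagnating hypervolume
    high_im = tri(imprecision,     0.4,   1.0, 1.001)  # wide interval objectives
    # Rule 1: IF hv improvement is low AND imprecision is high THEN activate strongly (1.0)
    # Rule 2: IF hv improvement is low THEN activate moderately (0.6)
    rules = [(min(low_hv, high_im), 1.0), (low_hv, 0.6)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return 0.0 if den == 0 else num / den

index = activation_index(hv_improvement=0.05, imprecision=0.8)
print(index, "-> activate local search" if index > 0.5 else "-> skip")
```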

Findings

The experimental results show that the proposed algorithm achieves optimal hypervolume results on 9 of the 10 benchmark test problems. The imprecision achieved on 8 test problems is significantly better than that of the other algorithms. This means that the proposed algorithm has better performance than the commonly used interval multi-objective evolutionary algorithms. Moreover, the experiments show that the local search activation mechanism based on the fuzzy system proposed in this study can effectively ensure that local search is activated at reasonable points throughout the algorithm's execution, and that computing resources are allocated reasonably by adaptively setting the population size and maximum number of iterations in the local search process.

Originality/value

This study proposes an interval multi-objective GWO, which can effectively balance global search ability and local development ability. An effective local search activation mechanism is then developed using a fuzzy inference system. It closely combines global optimization with local search, which improves the performance of the algorithm and saves computing resources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 4 January 2016

Gonggui Chen, Lilan Liu, Yanyan Guo and Shanwai Huang

Abstract

Purpose

Although minimizing power losses in power systems is a popular research topic, optimizing a single objective seems insufficient to fully improve the performance of power systems. Multi-objective VAR Dispatch (MVARD) generally minimizes two objectives simultaneously: power losses and voltage deviation. The purpose of this paper is to propose a Multi-Objective Enhanced PSO (MOEPSO) algorithm that achieves good performance when applied to the MVARD problem. Thus, the new algorithm deserves to be widely known.

Design/methodology/approach

Motivated by the differential evolution algorithm, a cross-over operator is introduced to increase particle diversity and reinforce the global searching capacity of conventional PSO. In addition, a constraint-handling approach based on Constrain-prior Pareto-Dominance (CPD) is presented to handle the inequality constraints on dependent variables. Constrain-prior Nondominated Sorting (CNS) and crowding-distance methods are used to maintain well-distributed Pareto optimal solutions. The method combining the CPD approach, the CNS technique and the cross-over operator is called the MOEPSO method.
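
A minimal sketch of a constraint-prior dominance comparison consistent with the CPD idea described above (not necessarily the authors' exact formulation): feasibility is judged before objective values, and ordinary Pareto dominance applies only when both solutions are feasible. The objective and violation values are hypothetical.

```python
# Constraint-prior dominance sketch for two objectives (power losses and
# voltage deviation, both minimized). A hedged illustration, not the paper's
# exact rule; the numeric values below are hypothetical.

def pareto_dominates(f_a, f_b):
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def cpd_better(f_a, viol_a, f_b, viol_b):
    """Return True if solution A is preferred to B under constraint-prior dominance."""
    if viol_a == 0.0 and viol_b > 0.0:
        return True                       # feasible beats infeasible
    if viol_a > 0.0 and viol_b > 0.0:
        return viol_a < viol_b            # less total violation wins
    if viol_a > 0.0 and viol_b == 0.0:
        return False
    return pareto_dominates(f_a, f_b)     # both feasible: usual Pareto dominance

# (losses in MW, voltage deviation in p.u.), total constraint violation
print(cpd_better((4.8, 0.11), 0.0, (4.5, 0.09), 0.02))   # True: A feasible, B not
print(cpd_better((4.8, 0.11), 0.0, (4.9, 0.12), 0.0))    # True: A dominates B
```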

Findings

The IEEE 30-node and IEEE 57-node power systems have been used to examine and test the presented method. The simulation results show that the MOEPSO method can achieve lower power losses, smaller voltage deviation and better-distributed Pareto optimal solutions compared with the Multi-Objective PSO approach.

Originality/value

The most original parts include the presented MOEPSO algorithm, the CPD approach used to handle constraints on dependent variables, and the CNS method used to maintain well-distributed Pareto optimal solutions. The performance of the proposed algorithm reflects the value of this paper.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 35 no. 1
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 3 July 2017

Anand Amrit, Leifur Leifsson and Slawomir Koziel

Abstract

Purpose

This paper aims to investigate several design strategies for solving multi-objective aerodynamic optimization problems using high-fidelity simulations. The purpose is to find strategies which reduce the overall optimization time while still maintaining accuracy at the high-fidelity level.

Design/methodology/approach

Design strategies are proposed that use an algorithmic framework composed of search space reduction, fast surrogate models constructed using a combination of physics-based surrogates and kriging and global refinement of the Pareto front with co-kriging. The strategies either search the full or reduced design space with a low-fidelity model or a physics-based surrogate.
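
A minimal sketch of the kriging ingredient of the framework above, using a generic Gaussian-process regressor in place of the authors' physics-based and co-kriging surrogates; the objective function, sample counts and design space are stand-ins for a high-fidelity aerodynamic evaluation.

```python
# Kriging (Gaussian process) surrogate sketch, not the authors' full
# physics-based + co-kriging setup. A cheap surrogate is fit on a handful of
# samples and then queried in place of the expensive solver during the search.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):
    # Stand-in for a high-fidelity CFD evaluation of one objective.
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.random((15, 2))            # 15 design samples in a reduced space
y_train = expensive_objective(X_train)

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
kriging.fit(X_train, y_train)

X_query = rng.random((5, 2))
y_pred, y_std = kriging.predict(X_query, return_std=True)
print(np.c_[y_pred, y_std])              # surrogate mean and uncertainty
```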

Findings

Numerical investigations of airfoil shapes in two-dimensional transonic flow are used to characterize and compare the strategies. The results show that searching a reduced design space produces the same Pareto front as when searching the full space. Moreover, as the reduced space is two orders of magnitude smaller (volume-wise), the number of required samples to setup the surrogates can be reduced by an order of magnitude. Consequently, the computational time is reduced from over three days to less than half a day.

Originality/value

The proposed design strategies are novel and holistic. The strategies render multi-objective design of aerodynamic surfaces using high-fidelity simulation data in moderately sized search spaces computationally tractable.

Article
Publication date: 5 March 2018

Stéphane Brisset and Tuan-Vu Tran

Abstract

Purpose

This paper aims to propose a multiobjective branch and bound (MOBB) algorithm with new criteria for branching and discarding nodes, based on Pareto dominance and a contribution metric.

Design/methodology/approach

A multiobjective branch and bound (MOBB) method is presented and applied to the bi-objective combinatorial optimization of a safety transformer. A comparison with exhaustive enumeration and non-dominated sorting genetic algorithm (NSGA2) confirms the solutions.
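
A minimal sketch of the dominance-based node-discarding test at the heart of a multiobjective branch and bound (the authors' MOBB additionally uses a contribution metric, omitted here): a node can be pruned when its ideal, lower-bound objective vector is already dominated by an incumbent non-dominated solution. The numeric values are hypothetical.

```python
# Dominance-based pruning sketch for a bi-objective branch and bound,
# both objectives minimized. Hypothetical values, not the paper's data.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def can_discard(node_lower_bound, incumbent_front):
    """Prune the node if its best possible outcome is dominated by the front."""
    return any(dominates(sol, node_lower_bound) for sol in incumbent_front)

# Hypothetical (transformer mass, losses) values, both minimized
incumbent_front = [(22.0, 180.0), (25.0, 150.0), (30.0, 130.0)]
print(can_discard((26.0, 170.0), incumbent_front))   # True: (25, 150) dominates the bound
print(can_discard((20.0, 140.0), incumbent_front))   # False: the node must be branched
```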

Findings

It appears that MOBB and NSGA2 are both sensitive to their control parameters. The parameters for the MOBB algorithm are the number of starting points and the number of solutions on the relaxed Pareto front. The parameters of NSGA2 are the population size and the number of generations.

Originality/value

The comparison with exhaustive enumeration confirms that the proposed algorithm is able to find the complete set of non-dominated solutions in about 235 times fewer evaluations. As exhaustive enumeration is exact, it provides a high level of confidence in this result.

Details

COMPEL - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 37 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 8 November 2018

Amos H.C. Ng, Florian Siegmund and Kalyanmoy Deb

Abstract

Purpose

Stochastic simulation is a popular tool among practitioners and researchers alike for quantitative analysis of systems. Recent advancement in research on formulating production systems improvement problems into multi-objective optimizations has provided the possibility to predict the optimal trade-offs between improvement costs and system performance before making the final decision for implementation. However, the fact that stochastic simulations rely on running a large number of replications to cope with the randomness and obtain accurate statistical estimates of the system outputs has posed a serious issue for using this kind of multi-objective optimization in practice, especially with complex models. Therefore, the purpose of this study is to investigate the performance enhancements of a reference point based evolutionary multi-objective optimization algorithm in practical production systems improvement problems, when combined with various dynamic re-sampling mechanisms.

Design/methodology/approach

Many algorithms consider the preferences of decision makers in order to converge to optimal trade-off solutions faster. There also exist advanced dynamic resampling procedures to avoid wasting a multitude of simulation replications on non-optimal solutions. However, very few attempts have been made to study the advantages of combining these two approaches to further enhance the performance of computationally expensive optimizations for complex production systems. Therefore, this paper proposes combinations of preference-based guided search with dynamic resampling mechanisms in an evolutionary multi-objective optimization algorithm, to lower both the computational cost of re-sampling and the total number of simulation evaluations.
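
A minimal sketch of the simplest of the three resampling mechanisms evaluated below, time-based dynamic resampling: the replication budget per solution grows with optimization progress, so early generations spend few replications and the final generations get the most accurate estimates. The bounds and the acceleration exponent are hypothetical, not the paper's settings.

```python
# Time-based dynamic resampling sketch. The minimum/maximum replication counts
# and the acceleration exponent are illustrative assumptions.

def time_based_replications(elapsed_evaluations, budget, b_min=2, b_max=20, accel=2.0):
    """Replications allotted to a solution at the current point of the run."""
    progress = min(elapsed_evaluations / budget, 1.0)
    return b_min + round((b_max - b_min) * progress ** accel)

for used in (0, 2_500, 5_000, 7_500, 10_000):
    print(used, "evaluations used ->", time_based_replications(used, budget=10_000))
```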

Findings

This paper shows the performance enhancements of the reference-point based algorithm, R-NSGA-II, when augmented with three dynamic resampling mechanisms of increasing statistical sophistication, namely time-based, distance-rank and optimal computing budget allocation, when applied to two real-world production system improvement studies. The results have shown that the more stochasticity the simulation models exhibit, the more the statistically advanced dynamic resampling mechanisms can enhance the performance of the optimization process.

Originality/value

Contributions of this paper include the combination of decision makers' preferences with dynamic resampling procedures; performance evaluations on two real-world production system improvement studies; and an illustration that statistically advanced dynamic resampling mechanisms are needed for noisy models.

Details

Journal of Systems and Information Technology, vol. 20 no. 4
Type: Research Article
ISSN: 1328-7265

Keywords
