Search results

1 – 10 of over 1000
Article
Publication date: 27 November 2020

Petar Jackovich, Bruce Cox and Raymond R. Hill

Abstract

Purpose

This paper aims to divide the class of fragment constructive heuristics used to compute feasible solutions for the traveling salesman problem (TSP) into edge-greedy and vertex-greedy subclasses. As heuristics in these subclasses can create subtours, two known subtour elimination methodologies for symmetric instances are reviewed and expanded to cover asymmetric problem instances. The paper then introduces a third, novel subtour elimination methodology, the greedy tracker (GT), and compares it to both known methodologies.
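
The GT itself is not described in the abstract; as a point of reference only, here is a minimal sketch of a generic edge-greedy fragment heuristic for the symmetric TSP, in which subtours are prevented with a union-find structure over fragments (the function name and data layout are illustrative assumptions, not the paper's method):

```python
import itertools

def edge_greedy_tour(dist):
    """Generic edge-greedy fragment heuristic for the symmetric TSP (sketch).

    Edges are examined in order of increasing length; an edge is kept only
    if it joins two fragment endpoints without creating a vertex of degree
    three or closing a premature subtour.
    """
    n = len(dist)
    degree = [0] * n
    frag = list(range(n))                  # union-find parent array over fragments

    def find(v):
        while frag[v] != v:
            frag[v] = frag[frag[v]]        # path halving
            v = frag[v]
        return v

    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: dist[e[0]][e[1]])
    tour_edges = []
    for u, v in edges:
        if degree[u] == 2 or degree[v] == 2:
            continue                       # would create a vertex of degree 3
        ru, rv = find(u), find(v)
        if ru == rv and len(tour_edges) < n - 1:
            continue                       # would close a premature subtour
        frag[ru] = rv
        degree[u] += 1
        degree[v] += 1
        tour_edges.append((u, v))
        if len(tour_edges) == n:           # final edge closes the full tour
            break
    return tour_edges
```

For the asymmetric case the paper extends to, the candidate set would presumably be ordered arcs with in- and out-degree tracked separately; that variant is not shown here.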

Design/methodology/approach

Computational results for all three subtour elimination methodologies are generated across 17 symmetric instances ranging in size from 29 vertices to 5,934 vertices, as well as 9 asymmetric instances ranging in size from 17 to 443 vertices.

Findings

The results demonstrate that the GT is the fastest method for preventing subtours on instances below 400 vertices. Additionally, distinguishing between a fragment constructive heuristic and the subtour elimination methodology used to ensure the feasibility of its solutions enables the introduction of a new vertex-greedy fragment heuristic called ordered greedy.

Originality/value

This research makes two main contributions: first, it introduces a novel subtour elimination methodology; second, it introduces the concept of ordered lists, which remaps the TSP into a new space, with promising initial computational results.

Book part
Publication date: 15 August 2006

Seamus M. McGovern and Surendra M. Gupta

Abstract

Disassembly takes place in remanufacturing, recycling, and disposal, with a line being the best choice for automation. The disassembly line balancing problem seeks a sequence that is feasible, minimizes the number of workstations, and ensures similar idle times, while also addressing other end-of-life-specific concerns. Finding the optimal balance is computationally intensive due to the exponential growth of the solution space. Combinatorial optimization methods hold promise for providing solutions to the problem, which is proven here to be NP-hard. Stochastic (genetic algorithm) and deterministic (greedy/hill-climbing hybrid heuristic) methods are presented and compared. Numerical results are obtained using a recent electronic product case study.
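
As a hedged illustration of the deterministic side, the sketch below pairs a first-fit greedy assignment of a task sequence to workstations with a pairwise-swap hill climb over sequences; the cycle time, the first-fit rule, and the swap neighborhood are assumptions for illustration, not the chapter's exact hybrid, and precedence and end-of-life constraints are omitted:

```python
import random

def station_count(seq, times, cycle_time):
    """Workstations needed when tasks are assigned first-fit in sequence order."""
    count, used = 1, 0.0
    for i in seq:
        if used + times[i] > cycle_time:
            count, used = count + 1, 0.0   # open a new workstation
        used += times[i]
    return count

def hill_climb(seq, times, cycle_time, iters=2000):
    """Pairwise-swap hill climbing on the task sequence (illustrative only)."""
    best = list(seq)
    for _ in range(iters):
        i, j = random.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        if station_count(cand, times, cycle_time) <= station_count(best, times, cycle_time):
            best = cand                    # accept non-worsening swaps
    return best

times = [4, 7, 3, 5, 6, 2]                 # hypothetical task times
print(station_count(hill_climb(range(6), times, 10), times, 10))
```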

Details

Applications of Management Science: In Productivity, Finance, and Operations
Type: Book
ISBN: 978-0-85724-999-9

Article
Publication date: 1 July 2014

Byung-Won On, Gyu Sang Choi and Soo-Mok Jung

Abstract

Purpose

The purpose of this paper is to collect and understand the nature of real cases of author name variants that have often appeared in bibliographic digital libraries (DLs) as a case study of the name authority control problem in DLs.

Design/methodology/approach

To find a sample of name variants across DLs (e.g. DBLP and ACM) and in a single DL (e.g. ACM), the approach is based on two bipartite matching algorithms: Maximum Weighted Bipartite Matching and Maximum Cardinality Bipartite Matching.
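
As a hedged sketch of how such a matching could be set up (the name-similarity weight used here is an assumption; the paper's own weighting scheme is not specified in the abstract), SciPy's assignment solver can compute a maximum weighted bipartite matching between two author lists:

```python
import numpy as np
from difflib import SequenceMatcher
from scipy.optimize import linear_sum_assignment

def match_name_lists(names_a, names_b, threshold=0.7):
    """Pair author names across two digital libraries via maximum weighted
    bipartite matching (hypothetical similarity weight; illustrative only)."""
    # Similarity matrix: one row per name in DL A, one column per name in DL B.
    sim = np.array([[SequenceMatcher(None, a.lower(), b.lower()).ratio()
                     for b in names_b] for a in names_a])
    rows, cols = linear_sum_assignment(sim, maximize=True)
    return [(names_a[r], names_b[c], sim[r, c])
            for r, c in zip(rows, cols) if sim[r, c] >= threshold]

pairs = match_name_lists(["J. D. Ullman", "A. V. Aho"],
                         ["Jeffrey D. Ullman", "Alfred V. Aho"])
```

Dropping the weights and matching on a 0/1 similarity matrix would give the maximum cardinality variant.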

Findings

First, the authors validated the effectiveness and efficiency of the bipartite matching algorithms. The authors also studied the nature of real cases of author name variants that had been found across DLs (e.g. ACM, CiteSeer and DBLP) and in a single DL.

Originality/value

To the best of the authors' knowledge, there has been little research effort to understand the nature of author name variants appearing in DLs. A thorough analysis can help focus research effort on the real problems that arise when duplicate detection methods are performed.

Details

Program, vol. 48 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 8 June 2015

Hossein Emari

Abstract

Purpose

This study aims to propose a new construct – prodigality – and to develop a measurement scale to support it.

Design/methodology/approach

Combining the paradigms of Churchill and of Malhotra and Birks, item generation and content validation yielded a modified scale. Three main steps in assessing the scale – dimensional structure, reliability and validity – led to the development of a prodigality scale. A total of 32 items were generated by assessing Qur’anic verses related to Muslim consumption patterns in Islam.

Findings

In total, 23 items remained after content validation. A pre-test using exploratory factor analysis on the 23-item scale produced a two-factor scale. According to the extracted validity and reliability scores, the prodigality scale was statistically supported. A pool of nine items is proposed for the eventual measurement of prodigality.

Research limitations/implications

The proposed measurement scale warrants further exploratory study. Future research should assess the validity across different Muslim geographies and Islamic schools of thought and practice.

Originality/value

Prodigality is proposed as a new construct that focuses primarily on the Qur’an and seeks to achieve relevance and acceptance by both Sunni and Shia denominations. The measurement scale is believed to extend the existing body of literature and contribute new knowledge on Muslim consumption.

Details

Journal of Islamic Marketing, vol. 6 no. 2
Type: Research Article
ISSN: 1759-0833

Article
Publication date: 1 February 2003

A. Kaveh and G.R. Roosta

Abstract

An improvement is presented for existing minimal cycle basis selection algorithms, increasing their efficiency. It consists of reducing the number of cycles considered as candidate elements of a minimal cycle basis, which makes practical use of the Greedy algorithm feasible. A modification is also included to form suboptimal-minimal cycle bases in place of minimal bases. An efficient algorithm is developed to form suboptimal cycle bases of graphs, in which the Greedy algorithm is applied twice: first a suboptimal minimal cycle basis is formed, and then, ignoring minimality, a basis whose elements have smaller overlaps is selected.
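
To illustrate why pruning the candidate set matters, note that a greedy basis selection must test every candidate cycle for independence over GF(2). The sketch below (a simplification under assumed inputs, not the authors' algorithm) performs that test by Gaussian elimination on edge-incidence bit vectors, so fewer candidates directly means fewer reductions:

```python
def greedy_cycle_basis(candidate_cycles, basis_size):
    """Greedy cycle-basis selection sketch.

    Each candidate cycle is a list or set of edge indices; cycles are
    examined in order of increasing length and kept only if independent
    over GF(2).  For a connected graph with m edges and n vertices,
    basis_size is m - n + 1.
    """
    pivots = {}                          # pivot edge index -> reduced vector
    basis = []
    for cycle in sorted(candidate_cycles, key=len):
        vec = 0
        for e in cycle:
            vec ^= 1 << e                # build the edge-incidence bit vector
        while vec:
            top = vec.bit_length() - 1
            if top in pivots:
                vec ^= pivots[top]       # reduce against already-chosen cycles
            else:                        # independent: keep this cycle
                pivots[top] = vec
                basis.append(cycle)
                break
        if len(basis) == basis_size:
            break
    return basis
```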

Details

Engineering Computations, vol. 20 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 31 December 2006

Tassos Dimitriou and Ioannis Krontiris

Abstract

Nodes in sensor networks do not have enough topology information to make efficient routing decisions. Geographic routing has been proposed as a solution for relaying messages through intermediate sensors. Its greedy nature, however, makes routing inefficient, especially in the presence of topology voids or holes. In this paper we present GRAViTy (Geographic Routing Around Voids In any TopologY of sensor networks), a simple greedy forwarding algorithm that combines compass routing with a mechanism that allows packets to explore the area around voids and bypass them without significant communication overhead. Through extensive simulations we show that our mechanism outperforms the right-hand rule for bypassing voids and that the resulting paths closely approximate the corresponding shortest paths. GRAViTy uses a cross-layered approach to improve routing paths for subsequent packets based on experience gained from former routing decisions. Furthermore, our protocol responds to topology changes, i.e. failure of nodes, and efficiently adjusts routing paths towards the destination.
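
The "greedy nature" referred to above is the plain greedy forwarding step, which fails at local minima around voids. A minimal sketch of that base step follows (GRAViTy's compass routing and void-bypass exploration are not reproduced; names and data layout are assumptions):

```python
import math

def greedy_forward(pos, neighbors, current, dest):
    """One step of plain greedy geographic forwarding (illustrative sketch).

    pos maps node id -> (x, y); neighbors maps node id -> list of node ids.
    Returns the neighbor strictly closer to the destination, or None when
    the packet is at a local minimum (a void) and a recovery mode such as
    the paper's void-bypass exploration must take over.
    """
    def d(a, b):
        return math.dist(pos[a], pos[b])

    best = min(neighbors[current], key=lambda v: d(v, dest), default=None)
    if best is None or d(best, dest) >= d(current, dest):
        return None                      # stuck at a void: switch to recovery
    return best
```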

Details

International Journal of Pervasive Computing and Communications, vol. 2 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 9 September 2013

Alexander Sommer, Ortwin Farle and Romanus Dyczij-Edlinger

Abstract

Purpose

The article aims to present an efficient numerical method for computing the far-fields of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles.

Design/methodology/approach

The suggested approach combines finite-element analysis, projection-based model-order reduction, and empirical interpolation.

Findings

The reduced-order models are highly accurate but significantly smaller than the underlying finite-element models. Thus, they enable a highly efficient numerical far-field computation of phased antenna arrays. The frequency-slicing greedy method proposed in this paper greatly reduces the computational costs for constructing the reduced-order models, compared to state-of-the-art methods.
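
The frequency-slicing variant is the paper's contribution and is not reproduced here. For orientation only, a generic greedy reduced-basis loop looks like the sketch below, where assemble(f) is an assumed callable returning the full-order finite-element system matrix at frequency f:

```python
import numpy as np

def greedy_rom(assemble, b, freqs, tol=1e-6, max_basis=30):
    """Generic greedy basis construction for projection-based model-order
    reduction (illustrative; not the paper's frequency-slicing method).

    The reduced basis V is grown with a full-order solution (snapshot) at
    whichever frequency the current reduced model has the largest residual.
    """
    V = np.linalg.qr(np.linalg.solve(assemble(freqs[0]), b).reshape(-1, 1))[0]
    while V.shape[1] < max_basis:
        worst, worst_res = None, tol
        for f in freqs:
            A = assemble(f)
            Ar, br = V.conj().T @ A @ V, V.conj().T @ b   # project the system
            x = V @ np.linalg.solve(Ar, br)               # reduced solution
            res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
            if res > worst_res:
                worst, worst_res = f, res
        if worst is None:
            break                        # all residuals below tolerance
        snap = np.linalg.solve(assemble(worst), b)        # new snapshot
        V = np.linalg.qr(np.column_stack([V, snap]))[0]   # re-orthogonalize
    return V
```

Each greedy iteration factorizes the full-order matrix at the selected frequency, which is consistent with the limitation noted below that the method targets matrix factorization rather than iterative solvers.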

Research limitations/implications

The frequency-slicing greedy method is intended for use with matrix factorization methods. It is not applicable when the underlying finite-element system is solved by iterative methods.

Practical implications

In contrast to conventional finite-element models of phased antenna arrays, reduced-order models are very cheap to evaluate. Hence, they provide an enabling technology for computing radiation patterns over broad frequency bands and wide ranges of steering angles.

Originality/value

The paper presents a two-step model-order reduction method for efficiently computing the far-field patterns of phased antenna arrays. The suggested frequency-slicing greedy method constructs the reduced-order models in a systematic fashion and improves computing times, compared to existing methods.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 32 no. 5
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 4 July 2016

Dilupa Nakandala, Henry Lau and Andrew Ning

Abstract

Purpose

When making sourcing decisions, both cost optimization and customer demand fulfillment are equally important for firm competitiveness. The purpose of this paper is to develop a stochastic search technique, hybrid genetic algorithm (HGA), for cost-optimized decision making in wholesaler inventory management in a supply chain network of wholesalers, retailers and suppliers.

Design/methodology/approach

This study develops an HGA that uses a mixture of greedy-based and randomly generated solutions in the initial population, together with gene slice and integration operators. A local search method (hill climbing) is applied both to individuals selected for crossover, before crossover is performed, and to the best individual in the population at the end of the HGA.
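
A hedged sketch of the initialization and pre-crossover local search described above follows; the population share, neighborhood function and iteration counts are assumptions, and the gene slice and integration operators are not reproduced:

```python
import random

def init_population(pop_size, greedy_solution, random_solution, greedy_share=0.3):
    """Seed the population with a mix of greedy-based and random individuals
    (the 30% greedy share is an assumed parameter)."""
    n_greedy = int(pop_size * greedy_share)
    return ([greedy_solution() for _ in range(n_greedy)] +
            [random_solution() for _ in range(pop_size - n_greedy)])

def improve_before_crossover(parent, neighbors, cost, iters=50):
    """Hill climbing applied to an individual selected for crossover;
    neighbors(sol) must return a list of candidate solutions."""
    best = parent
    for _ in range(iters):
        cand = random.choice(neighbors(best))
        if cost(cand) < cost(best):
            best = cand                  # keep strictly improving moves
    return best
```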

Findings

The application of the proposed HGA is illustrated by considering multiple scenarios and comparing it with other commonly adopted methods: the standard genetic algorithm, simulated annealing and tabu search. The simulation results demonstrate the capability of the proposed approach to produce more effective solutions.

Practical implications

This method has pragmatic importance for the inventory management of wholesaler operations, and it is scalable to real contexts with multiple wholesalers and multiple suppliers with variable lead times.

Originality/value

The proposed stochastic search techniques are capable of producing good-quality optimal or suboptimal solutions for large-scale problems within a reasonable time, using the ordinary computing resources available in firms.

Details

Business Process Management Journal, vol. 22 no. 4
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 11 May 2012

Mircea Ancău

Abstract

Purpose

The purpose of this paper is to outline the main features concerning the optimization of printed circuit board (PCB) fabrication by improving the manufacturing process productivity.

Design/methodology/approach

The author explored two different approaches to increase the manufacturing process productivity of PCBs. The first approach involved optimization of the PCB manufacturing process as a whole. The second approach was based on increasing the process productivity at the operational level.

Findings

To reduce the total manufacturing time, two heuristic algorithms for solving flowshop scheduling problems were designed. These algorithms were used to compute an optimal PCB manufacturing schedule. The case study shows both mono- and bi-criteria optimization of PCB manufacturing.
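
The paper's two algorithms are not given in the abstract. As a reference point, the standard makespan recurrence for a permutation flowshop, together with the well-known NEH insertion heuristic, can be sketched as follows (NEH is named here only as a familiar baseline, not as the paper's method):

```python
def makespan(order, proc):
    """Makespan of a permutation flowshop schedule.

    proc[j][m] is the processing time of job j on machine m; completion
    times follow C[j][m] = max(C[j-1][m], C[j][m-1]) + proc[j][m].
    """
    machines = len(proc[0])
    finish = [0.0] * machines            # completion time of last job per machine
    for j in order:
        for m in range(machines):
            prev = finish[m - 1] if m else 0.0
            finish[m] = max(finish[m], prev) + proc[j][m]
    return finish[-1]

def neh(proc):
    """NEH constructive heuristic: insert jobs, longest total time first,
    at the position minimizing the partial makespan."""
    jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, proc))
    return seq
```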

Research limitations/implications

The input data used in the case study were based on random numbers, and the mathematical considerations outline only the main directions for manufacturing process optimization.

Originality/value

The paper presents two original heuristic algorithms for solving the flowshop scheduling problem, with performance comparable to the best heuristics in the field. Besides their performance, these algorithms have the advantage of simplicity and ease of implementation on a computer. Using these algorithms, the optimal schedule for the PCB manufacturing process was calculated. For the bi-criteria optimization, a study of points belonging to the Pareto-optimal set is presented.

Article
Publication date: 30 March 2022

Farzad Shafiei Dizaji and Mehrdad Shafiei Dizaji

Abstract

Purpose

The purpose is to reduce round-off errors in numerical simulations. Different kinds of errors may arise during a numerical analysis, and round-off error is one of their sources. Handling numerical errors is sometimes challenging, but by applying appropriate algorithms these errors are manageable and can be reduced. In this study, five novel topological algorithms are proposed for setting up a structural flexibility matrix, and five different examples are used in applying the proposed algorithms. In doing so, round-off errors were reduced remarkably.

Design/methodology/approach

Five new algorithms were proposed to optimize the conditioning of structural matrices. Along with decreasing the size and duration of analyses, minimizing analytical errors is a critical factor in the optimal computer analysis of skeletal structures. Matrices that are sparse (have a greater number of zeros), well structured and well conditioned are advantageous for this objective. As a result, an optimization problem with several goals is addressed. This study seeks to minimize analytical errors, such as rounding errors, in skeletal structural flexibility matrices via more consistent and appropriate mathematical methods. These errors become more pronounced in designs with ill-conditioned flexibility matrices; structures with widely varying stiffness are a frequent example. Because of the use of weak elements, the flexibility matrix has a large number of non-diagonal terms, resulting in analytical errors. In numerical analysis, the ill-conditioning of a matrix may be resolved by moving or substituting rows; this study examined the definition and execution of these modifications prior to creating the flexibility matrix. Simple topological and algebraic features have mostly been utilized to find fundamental cycle bases with particular characteristics. In conclusion, appropriately conditioned flexibility matrices are obtained, and analytical errors are reduced accordingly.
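
The conditioning concern is concrete: the relative error a linear solver can introduce grows with the condition number of the flexibility matrix. A tiny NumPy illustration follows; the matrix is hypothetical and chosen only to show the effect of row scaling, not taken from the paper:

```python
import numpy as np

F = np.array([[1e6, 2e6],
              [1.0, 3.0]])               # hypothetical ill-scaled flexibility matrix

print(np.linalg.cond(F))                 # large condition number

# Row equilibration: scale each row by its largest entry before solving.
# (Scaling a row of F means scaling the corresponding equation and its
# right-hand-side entry by the same factor.)
scale = np.abs(F).max(axis=1, keepdims=True)
print(np.linalg.cond(F / scale))         # noticeably smaller
```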

Findings

(1) Five new algorithms were proposed to optimize the conditioning of structural flexibility matrices. (2) All five algorithms were implemented in the Java programming language, and a user-friendly GUI software tool was developed to visualize sub-optimal cycle bases. (3) Topological and algebraic features of the structures were utilized in this study.

Research limitations/implications

This is a multi-objective optimization problem, which means that the sparsity and the conditioning of a matrix cannot both be optimized simultaneously. In conclusion, well-conditioned flexibility matrices are obtained, and analytical errors are reduced accordingly.

Practical implications

Engineers constantly model real-world problems mathematically and make those models as simple as possible. In doing so, many errors are created, and these errors can render the mathematical models useless. Applying sound algorithms keeps the mathematical model as precise as possible.

Social implications

Errors in numerical simulations should be reduced, as they are detrimental to real-world applications and problems.

Originality/value

This is original research. The paper proposes five novel topological mathematical algorithms to optimize the structural flexibility matrix.

Details

Engineering Computations, vol. 39 no. 6
Type: Research Article
ISSN: 0264-4401
