Search results
1 – 10 of over 4000
Abstract
Purpose
Though the components and concepts of cost of quality (COQ) are well understood in the domain of manufacturing, only limited data are available from the construction industry for various reasons. The present study seeks to establish a relationship between the project defect score (pds), representing the quality of construction in a project, and the COQ in the building construction industry. The study also seeks to estimate the contributions of the various components to the overall COQ in the construction industry, along with their distribution and interrelationships.
Design/methodology/approach
A framework for estimating COQ was developed, and the data regarding prevention, appraisal and failure cost were collected from 122 projects. Various mathematical and statistical tools like Pearson's correlation, multiple linear regression (MLR) and curve fitting have been used for data analysis.
Findings
The prevention–appraisal–failure (PAF) model was found to be appropriate for estimating COQ. The prevention, appraisal, conformance (CC) and failure costs were found to vary between 0.19% and 8%, 0.05% and 5%, 0.3% and 10%, and 0.01% and 5% of project cost, respectively, whereas the overall COQ varied from 3.5% to 10.01% of the project cost. The correlations between the various components of COQ were found to be significant. MLR suggested that appraisal cost is more impactful than prevention cost in reducing failure cost. Using curve fitting, a cubic model appropriately represented all interrelationships. The optimal overall COQ was found to be 3.86%, and the reasons for low COQ have been explored.
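The curve-fitting step can be illustrated with a minimal sketch: fit a cubic to (pds, COQ%) pairs and read off the minimum of the fitted curve. The data points below are invented for illustration only; they are not the study's data, and the fitted optimum will not reproduce the paper's 3.86% figure.

```python
# Illustrative sketch (hypothetical data): cubic curve fitting between
# project defect score (pds) and COQ as a percentage of project cost,
# then locating the minimum of the fitted curve.
import numpy as np

pds = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
coq = np.array([6.0, 4.8, 4.0, 3.9, 4.1, 5.0, 6.5, 8.5])  # % of project cost (invented)

coeffs = np.polyfit(pds, coq, 3)   # cubic: a*x^3 + b*x^2 + c*x + d
poly = np.poly1d(coeffs)

# Evaluate the fitted curve on a fine grid and take its minimum
grid = np.linspace(pds.min(), pds.max(), 1000)
optimal_pds = grid[np.argmin(poly(grid))]
optimal_coq = poly(grid).min()
print(f"fitted optimum: pds~{optimal_pds:.2f}, COQ~{optimal_coq:.2f}%")
```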
Originality/value
The study evaluates the applicability of available models for COQ calculations for the construction industry and presents a framework to estimate its components. The study also explores the interrelationship between the various components of COQ and presents a generalized relationship between COQ and the pds.
Details
Keywords
Yiyo Kuo, Taho Yang, David Parker and Chin-Hsuan Sung
Abstract
Purpose
The purpose of this paper is to solve an integration of customer and supplier flexibility problem in a make-to-order (MTO) industry. The flexible strategies, where delivery leadtime and unit price (or raw material cost) can be negotiated, are provided by customers and suppliers. Its effectiveness is illustrated by a practical application.
Design/methodology/approach
The present study addresses a rolling decision-making problem, which is solved by a proposed combined mixed integer programming (MIP) and simulation approach. A simulation model was developed for evaluating the solutions of the MIP; it also serves as a virtual factory, providing the initial work-in-process status for evaluating each new incoming order.
Findings
The experimental results show that when either customers or suppliers provide flexible strategies to the manufacturer, total profits can be increased. Moreover, when both customers and suppliers provide flexibility strategies to the manufacturer simultaneously, total profits can be significantly increased.
Research limitations/implications
An expanded experiment would help clarify the relationship between flexibility and profit. Moreover, other price-sensitivity functions for both customers and suppliers could be considered.
Practical implications
A fishing-net manufacturing company was used for the case study to illustrate the effectiveness and the feasibility of the proposed methodology and its application to industry.
Originality/value
The proposed methodology innovatively solves a practical problem. Customer and supplier flexibility was investigated in an MTO production system that holds no raw material inventory. The experimental results are promising.
Details
Keywords
Marco A. Panduro and Carlos A. Brizuela
Abstract
Purpose
The purpose of this paper is to present the application of an efficient genetic algorithm to deal with the problem of computing the trade‐off curves for non‐uniform circular arrays. In order to answer questions related to the performance of the non‐uniform circular phased arrays, two criteria are considered to evaluate the design: the criteria of minimum main beam width and minimum side lobe level (SLL) during scanning.
Design/methodology/approach
The design of non‐uniform circular arrays is modeled as a multi‐objective optimization problem. The Non‐dominated Sorting Genetic Algorithm II (NSGA‐II) is employed to solve the resulting optimization problem. This algorithm is considered one of the best evolutionary optimizers for multi‐objective problems; it is chosen for its ease of implementation and its efficient computation of non‐dominated ranks. The method is based on the survival-of-the-fittest paradigm, where each individual in the population represents a feasible solution of the optimization problem being solved. The concept of fitness is adapted to account for solution quality in multi‐objective problems. This evolutionary method can be used effectively for computing the trade‐off curves between the SLL and the main beam width.
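The core operation NSGA-II builds on is non-dominated sorting: keeping only the designs that no other design beats on every objective. A minimal sketch of that filter (not the paper's NSGA-II implementation, and with hypothetical candidate designs) for a two-objective minimisation of SLL and main beam width:

```python
# Minimal sketch of a non-dominated filter for two objectives to be
# minimised: side lobe level (SLL, dB) and main beam width (degrees).
# The candidate designs are invented for illustration.
def non_dominated(points):
    """Return the points not dominated by any other point (minimisation)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (SLL dB, beamwidth deg) -- lower is better for both
designs = [(-20.0, 12.0), (-15.0, 9.0), (-25.0, 15.0), (-18.0, 14.0)]
front = non_dominated(designs)
print(front)  # (-18.0, 14.0) is dominated by (-20.0, 12.0)
```

The surviving points trace the trade-off curve: improving SLL beyond one of them necessarily widens the main beam, and vice versa.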
Findings
The NSGA‐II algorithm can effectively compute the trade‐off curve of different non‐uniform circular arrays. The simulation results presented in this paper show design options that maintain a low SLL and main beam width without pattern distortion during beam steering. Moreover, these trade‐off curves provide a more realistic approach to the solution of the design problem.
Originality/value
The design problem is to determine the best design configurations, that is, the separations between the antenna elements and the amplitude excitations, when a circular structure is employed. Owing to the complex feasible region and the non‐linear dependence of the optimization criteria on the decision variables, both simple traditional and more sophisticated mathematical programming approaches will lead to local optima, in the cases where they can be applied at all. To the best of our knowledge, this multi‐objective optimization problem has not been dealt with before when two or more conflicting design criteria are taken into account. Therefore, the solution to this problem constitutes the main contribution of our paper.
Details
Keywords
Abstract
Purpose
The purpose of this paper is to study the optimal long-run rate of inflation in the presence of a hybrid Phillips curve, which nests a purely backward-looking Phillips curve and the purely forward-looking New Keynesian Phillips curve (NKPC) as special limiting cases.
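A hybrid Phillips curve of this kind is commonly written as follows (an illustrative textbook form; the paper's exact specification and coefficients may differ):

```latex
% Hybrid Phillips curve (illustrative textbook form):
%   inflation depends on both lagged and expected future inflation
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, \mathbb{E}_t[\pi_{t+1}] + \kappa \, x_t
% \gamma_b = 0 recovers the purely forward-looking NKPC;
% \gamma_f = 0 gives the purely backward-looking Phillips curve;
% x_t denotes the output gap (or real marginal cost).
```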
Design/methodology/approach
This paper derives the long-run rate of inflation in a basic New Keynesian (NK) model, characterized by sticky prices and rule-of-thumb behavior by price setters. The monetary authority possesses commitment and its objective function stems from an approximation to the utility of the representative household.
Findings
The commitment solution for the monetary authority leads to steady-state outcomes in which inflation, albeit small, is positive. The optimal long-run rate of inflation rises from zero under the purely forward-looking NKPC and reaches its maximum under the purely backward-looking Phillips curve. In the latter case, an inflation bias arises, whereas under the hybrid Phillips curve positive long-run inflation is associated with an output gain.
Research limitations/implications
This paper serves as a clarification against the misperception that log-linearized models take the steady-state inflation rate as given rather than being capable of determining it. The analysis is sensitive to the basic NK setting, with its assumed rule-of-thumb behavior by price setters and price staggering.
Originality/value
The results are the first to quantify the optimal long-run rate of inflation in a fully microfounded model that nests different Phillips curves.
Details
Keywords
P. Di Barba and M.E. Mognaschi
Abstract
Purpose
The aim of this paper is to compare different methods for multiobjective optimisation with respect to the same benchmark problem.
Design/methodology/approach
The following methods are considered: equilibrium of gradients (GB), the multi‐individual multi‐objective evolution strategy (MOESTRA) and goal attainment (GATT). They are applied to the shape design of the pole pitch of an electrical machine, with the aim of maximizing the air‐gap induction and minimizing the stray field in the winding.
Findings
The same initial solution of the benchmark is considered for all methods. The final solutions of GB and MOESTRA dominate the initial one because both objectives are improved. GB is the most expensive method and is therefore not suited to identifying several non‐dominated solutions. Accordingly, the trade‐off curve was approximated by means of 15 non‐dominated solutions obtained with MOESTRA.
Originality/value
A multi‐objective multi‐individual evolution strategy of lowest order has proven to be cost‐effective in identifying a set of non‐dominated solutions, so helping the designer to investigate the structure of the objective space. The requirements in terms of timing and facilities appear to be compatible with the resources of an automated design centre of an industrial company.
Details
Keywords
Zehra Canan Araci, Ahmed Al-Ashaab and Cesar Garcia Almeida
Abstract
Purpose
This paper aims to present a process for generating physics-based trade-off curves (ToCs) to facilitate lean product development by enabling two key activities of the set-based concurrent engineering (SBCE) process model: comparing alternative design solutions and narrowing down the design set. The developed process of generating physics-based ToCs has been demonstrated via an industrial case study, which is a research project.
Design/methodology/approach
The research approach adopted for this paper consists of three phases: a review of the related literature; development of the process of generating physics-based ToCs in the context of lean product development; and implementation of the developed process in an industrial case study for validation through the SBCE process model.
Findings
The findings of this application showed that physics-based ToCs are an effective tool for enabling SBCE activities, as well as for saving time and providing the knowledge environment that designers require to support their decision-making.
Practical implications
The authors expect that this paper will guide companies that are implementing SBCE throughout their lean product development journey. Physics-based ToCs will facilitate accurate decision-making in comparing and narrowing down the design set by providing the right knowledge environment.
Originality/value
SBCE is a useful approach to developing a new product. It is essential to provide the right knowledge environment in a quick and visual manner, which has been addressed here by capturing physics knowledge in ToCs. Therefore, a systematic process has been developed and is presented in this paper. The research found that physics-based ToCs can help identify different physics characteristics of the product in the form of design parameters and visualise them in a single graph that all stakeholders can understand without an extensive engineering background, enabling designers to make decisions faster.
Details
Keywords
Marco Gallegati, James B. Ramsey, Mauro Gallegati and Willi Semmler
Abstract
Classification techniques have been applied in many fields of science. There are several ways of evaluating classification algorithms, and such metrics and their significance must be interpreted correctly when evaluating different learning algorithms. Most of these measures are scalar metrics, while some are graphical methods. This paper introduces a detailed overview of classification assessment measures, with the aim of providing their basics and showing how they work, to serve as a comprehensive source for researchers interested in this field. The overview starts by defining the confusion matrix for binary and multi-class classification problems. Many classification measures are then explained in detail, and the influence of balanced and imbalanced data on each metric is presented. An illustrative example shows (1) how to calculate these measures in binary and multi-class classification problems and (2) the robustness of some measures against balanced and imbalanced data. Moreover, graphical measures such as receiver operating characteristic (ROC), precision-recall (PR) and detection error trade-off (DET) curves are presented in detail. Additionally, different numerical examples demonstrate, step by step, the preprocessing steps of plotting ROC, PR and DET curves.
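The scalar metrics discussed here all derive from the four cells of a binary confusion matrix. A small sketch, with invented counts, showing how accuracy, precision, recall and F1 are computed from those cells:

```python
# Binary confusion matrix cells (counts invented for illustration):
# tp = true positives, fp = false positives,
# fn = false negatives, tn = true negatives
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of all correct predictions
precision = tp / (tp + fp)                    # correctness of positive predictions
recall    = tp / (tp + fn)                    # a.k.a. sensitivity / true positive rate
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, f1)
```

On imbalanced data, accuracy can look high while precision or recall is poor, which is why the overview stresses reporting several metrics together.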
Details
Keywords
William Copacino and Donald B. Rosenfield
Abstract
In this article the focus is upon analytic tools for strategic logistics planning. We discuss a broad set of tools roughly divided into two areas. One area is that of traditional tools such as functional cost analysis and the various modelling approaches and these are briefly reviewed. Principally, however, the discussion centres on newer tools which have not been as widely used and which can be highly effective for strategic planning and especially for strategic logistics planning. In addition, we will present a framework which outlines the various aspects or approaches for strategic logistics planning and identifies the analytic tools that are available and appropriate for each particular aspect of planning.
F.T.S. Chan, P. Humphreys and T.H. Lu
Abstract
A simulation approach to measuring supply chain performance, incorporating order release theory, is evaluated. Within manufacturing, a number of order release mechanisms have been developed. The importance of order release is first examined, and its applicability to monitoring the performance of the supply chain is proposed. A simulation model of a typical single-channel logistics network was developed. Using this model, each of the order release mechanisms was assessed, and close agreement was obtained with the work of previous researchers. A new order release approach is proposed which is found to be superior to those analysed previously and should lead to improved supply chain performance.
Details