Search results

1 – 10 of over 35000
Article
Publication date: 12 August 2014

Yu-Ting Cheng and Chih-Ching Yang

Abstract

Purpose

Constructing a fuzzy control chart with interval-valued fuzzy data is an important topic in the fields of medicine, sociology, economics, services and management, particularly when the data exhibit uncertainty, inconsistency and incompleteness, as is often the case with real data. Traditionally, a variables control chart is used to detect process shifts in real-valued data. However, when the data are composed of interval-valued fuzzy numbers, the traditional statistical process control (SPC) approach cannot be used to monitor the process. The purpose of this paper is to propose a designed standardized fuzzy control chart for interval-valued fuzzy data sets.

Design/methodology/approach

The general statistical principles underlying the standardized control chart are applied to a fuzzy control chart for interval-valued fuzzy data.

Findings

When the data are composed of interval-valued fuzzy numbers, the traditional SPC approach cannot be used to monitor the process. This study proposes a designed standardized fuzzy control chart for an interval-valued fuzzy data set of vegetable prices in Taiwan from January 2009 to September 2010, obtained from the Council of Agriculture, Executive Yuan. Empirical studies illustrate the application of the designed standardized fuzzy control chart, and this appropriate definition of the fuzzy control chart can explain more related practical phenomena.

Originality/value

This paper uses a simpler approach to construct the standardized interval-valued chart for fuzzy data, based on the traditional standardized control chart, which is easy and straightforward. Moreover, the control limits of the designed standardized fuzzy control chart form an interval (LCL, UCL) that contains the conventional range of the classical standardized control chart.
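The standardization idea can be sketched in a few lines. The following is a minimal illustrative simplification, not the authors' exact design: each observation is treated as a crisp interval (lo, hi), both endpoints are standardized against the mean and standard deviation of the interval midpoints, and a point signals only when its whole standardized interval falls outside the conventional (LCL, UCL) = (−3, +3) band. The function name and the midpoint-based standardization are assumptions for illustration.

```python
# Minimal sketch of a standardized control chart extended to interval data.
# Assumption: each observation is an interval (lo, hi); both endpoints are
# standardized against the mean/stdev of the interval midpoints, and a point
# is flagged only when its standardized interval lies wholly outside the band.
from statistics import mean, stdev

def standardized_interval_chart(intervals, k=3.0):
    mids = [(lo + hi) / 2 for lo, hi in intervals]
    mu, sigma = mean(mids), stdev(mids)
    lcl, ucl = -k, k                       # limits of the standardized chart
    signals = []
    for lo, hi in intervals:
        z_lo, z_hi = (lo - mu) / sigma, (hi - mu) / sigma
        signals.append(z_hi < lcl or z_lo > ucl)  # interval fully outside band
    return lcl, ucl, signals
```

With twenty in-control intervals around 10 and one outlying interval around 31, only the last point is flagged.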

Details

Management Decision, vol. 52 no. 7
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 13 November 2007

Ling T. He and Chenyi Hu

Abstract

Purpose

The purpose of this study is to investigate the impacts of interval measured data, rather than traditional point data, on economic variability studies.

Design/methodology/approach

The study uses interval measured data to forecast the variability of future stock market changes. The variability (interval) forecasts are then compared with point data‐based confidence interval forecasts.

Findings

Using interval measured data in stock market variability forecasting can significantly increase forecasting accuracy, compared with using traditional point data.

Originality/value

An interval forecast for stock prices essentially consists of predicted levels and a predicted variability, which can reduce the perceived uncertainty or risk embedded in future investments and, therefore, may influence required returns and capital asset prices.
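As a toy illustration of how interval measured data carry both a level and a variability, here is a minimal sketch (not the authors' model; the function name is invented for illustration): each period's market level is an interval (low, high), and the next period is forecast by extrapolating the average change of the lower and upper bounds separately, so the prediction is itself an interval rather than a single point.

```python
# Toy sketch: forecasting the next interval from a history of interval-valued
# observations. Lower and upper bounds are extrapolated separately by their
# average per-period change, so the forecast keeps an explicit width
# (variability) instead of collapsing to a point.

def forecast_next_interval(history):
    """history: list of (low, high) observations, oldest first."""
    d_low = [b[0] - a[0] for a, b in zip(history, history[1:])]
    d_high = [b[1] - a[1] for a, b in zip(history, history[1:])]
    lo, hi = history[-1]
    return (lo + sum(d_low) / len(d_low), hi + sum(d_high) / len(d_high))
```

For example, `forecast_next_interval([(100, 110), (102, 112), (104, 114)])` returns `(106.0, 116.0)`.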

Details

The Journal of Risk Finance, vol. 8 no. 5
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 4 April 2016

Qian Yu and Fujun Hou

Abstract

Purpose

The traditional data envelopment analysis (DEA) model, as a non-parametric technique, can measure the relative efficiencies of a set of decision-making units (DMUs) with exact values of inputs and outputs, but it cannot handle imprecise data. The purpose of this paper is to establish a super-efficiency interval data envelopment analysis (IDEA) model, an IDEA model based on cross-evaluation and a cross-evaluation-based super-efficiency IDEA model. The authors apply the proposed approach to data on 29 public secondary schools in Greece and thereby demonstrate its feasibility.

Design/methodology/approach

In this paper, based on the IDEA model, the authors establish a super-efficiency IDEA model and an IDEA model based on cross-evaluation, and then present a cross-evaluation-based super-efficiency IDEA model by combining the super-efficiency method with cross-evaluation. The proposed model can not only discriminate the performance of efficient DMUs from inefficient ones but also distinguish among the efficient DMUs. Using the proposed approach, the overall performance of all DMUs with interval data can be fully ranked.

Findings

A numerical example is presented to illustrate the application of the proposed methodology. The result shows that the proposed approach is an effective and practical method to measure the efficiency of the DMUs with imprecise data.

Practical implications

The proposed model overcomes the limitation of the original DEA model, which can only distinguish the performance of efficient DMUs from inefficient ones and cannot discriminate among the efficient DMUs.

Originality/value

This paper introduces an effective method for obtaining a complete ranking of all DMUs with interval data.

Details

Kybernetes, vol. 45 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 27 January 2012

Jiefang Wang and Sifeng Liu


Abstract

Purpose

The purpose of this paper is to solve the DEA model with grey interval data when the inputs/outputs have large interval lengths.

Design/methodology/approach

Several methods have been developed to calculate the interval efficiencies of decision-making units (DMUs) in DEA models with interval data, but they have two shortcomings. First, the evaluated DMU and the reference DMUs are not treated fairly, because they do not occupy counterpart locations of inputs and outputs within their possible ranges. Second, the efficiency intervals may be too wide to provide valuable information. This paper proposes hypotheses of data consistency in the DEA model. Under these hypotheses, linear programming (LP) models are established to solve for the upper and lower bounds of the interval efficiencies.
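For intuition, the two bounds can be illustrated in the simplest possible setting, one input and one output per DMU, where the conventional interval-efficiency bounds have a closed form: the upper bound places the evaluated DMU at its most favourable point (maximum output, minimum input) and all other DMUs at their least favourable points, and the lower bound reverses this. This is a sketch of the conventional approach the paper criticizes as unfair and wide, not of the paper's LP models; the function name is an assumption.

```python
# Sketch: conventional interval-efficiency bounds for ratio (CCR-type) DEA
# with a single input and a single output per DMU. Each DMU is a pair of
# intervals ((x_lo, x_hi), (y_lo, y_hi)). Upper bound: evaluated DMU at its
# best output/input ratio, all others at their worst; lower bound: reversed.

def interval_efficiencies(dmus):
    best = [yu / xl for (xl, xu), (yl, yu) in dmus]    # most favourable y/x
    worst = [yl / xu for (xl, xu), (yl, yu) in dmus]   # least favourable y/x
    n = len(dmus)
    bounds = []
    for o in range(n):
        others_worst = max((worst[j] for j in range(n) if j != o), default=0.0)
        others_best = max((best[j] for j in range(n) if j != o), default=0.0)
        e_up = best[o] / max(best[o], others_worst)
        e_lo = worst[o] / max(worst[o], others_best)
        bounds.append((e_lo, e_up))
    return bounds
```

With an exact DMU ((2, 2), (4, 4)) and an interval DMU ((1, 2), (2, 4)), both efficiency intervals come out as (0.5, 1.0), illustrating how wide and uninformative such intervals can be, which is the shortcoming the data-consistency hypotheses address.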

Findings

It is found that the lengths of the efficiency intervals under the hypotheses are shorter, which produces more reliable and informative evaluation results, and the DMUs are treated more fairly.

Practical implications

The method proposed in the paper can be used to evaluate the efficiencies of enterprises, governments, etc., when the classic methods are invalid because of highly uncertain evaluation results.

Originality/value

The paper succeeds in proposing hypotheses of data consistency and in solving the DEA model with interval grey data under them.

Details

Grey Systems: Theory and Application, vol. 2 no. 1
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 10 January 2018

Yao Wen, Qingxian An, Xuanhua Xu and Ya Chen

Abstract

Purpose

This paper aims to prioritize the most efficient Six Sigma project, the one that can generate the greatest benefit to the organization, according to the relative performance among a set of homogeneous projects (here, DMUs). The selection of a Six Sigma project is a multiple-criteria decision-making problem that is difficult in practice because the projects are not yet complete and the values of the evaluation indicators are often interval or imprecise data. Managers stress the need for an effective performance evaluation methodology for selecting a Six Sigma project.

Design/methodology/approach

This study proposes a modified model that handles interval or imprecise data, based on the common-weight data envelopment analysis (DEA) approach, to solve project-selection problems.

Findings

By comparing its findings with an example from a previous study, the new model obtained realistic and fair evaluation results and significantly reduced the difficulty of and time spent on calculation. Moreover, not only is the best project identified, but the exact indicator information is also obtained.

Originality/value

This study solves the problem of selecting the most efficient Six Sigma project in the presence of interval or imprecise data. Many studies have shown how a Six Sigma project is chosen, but only a few have integrated interval data into the selection process.

Details

Kybernetes, vol. 47 no. 7
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 23 September 2019

Zoubida Chorfi, Abdelaziz Berrado and Loubna Benabbou

Abstract

Purpose

Evaluating the performance of supply chains is a convoluted task because of the complexity inextricably linked to the structure of these chains. Therefore, the purpose of this paper is to present an integrated approach for evaluating and sizing real-life health-care supply chains in the presence of interval data.

Design/methodology/approach

To achieve this objective, the paper illustrates an approach called Latin hypercube sampling by replacement (LHSR) for identifying a set of precise data from the interval data; standard data envelopment analysis (DEA) models can then be used to assess the relative efficiencies of the supply chains under evaluation. A certain level of data aggregation is suggested to improve the discriminatory power of the DEA models, and an experimental design is conducted to size the supply chains under assessment.

Findings

The newly developed integrated methodology assists the decision-makers (DMs) in comparing their real-life supply chains against peers and sizing their resources to achieve a certain level of production.

Practical implications

The proposed integrated DEA-based approach has been successfully implemented to suggest an appropriate structure to the actual public pharmaceutical supply chain in Morocco.

Originality/value

The originality of the proposed approach comes from the development of an integrated methodology to evaluate and size real-life health-care supply chains while taking into account interval data. This developed integrated technique certainly adds value to the health-care DMs for modelling their supply chains in today's world.

Details

Journal of Modelling in Management, vol. 15 no. 1
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 6 September 2021

Rachel S. Rauvola, Cort W. Rudolph and Hannes Zacher

Abstract

In this chapter, the authors consider the role of time for research in occupational stress and well-being. First, they discuss temporal issues in studying occupational health longitudinally, focusing in particular on the role of time lags and their implications for observed results (e.g., effect detectability), analyses (e.g., handling unequal durations between measurement occasions) and interpretation (e.g., result generalizability, theoretical revision). They then discuss time-based assumptions when modeling lagged effects in occupational health research, providing a focused review of how research has handled (or ignored) these assumptions in the past, and the relative benefits and drawbacks of these approaches. Finally, they provide recommendations for readers, an accessible tutorial (including example data and code) and a discussion of a new structural equation modeling technique, continuous-time structural equation modeling, that can "handle" time in longitudinal studies of occupational health.

Details

Examining and Exploring the Shifting Nature of Occupational Stress and Well-Being
Type: Book
ISBN: 978-1-80117-422-0

Article
Publication date: 2 November 2015

Aanand Davé, Michael Oates, Christopher Turner and Peter Ball


Abstract

Purpose

This paper reports on the experimentation of an integrated manufacturing and building model to improve energy efficiency. Traditionally, manufacturing and building-facilities engineers work independently, with their own performance objectives, methods and software support. However, as resource reduction has progressed, further advances have become more challenging. Additional opportunities for energy efficiency require an expansion of scope across the functional boundaries of facility, utility and manufacturing assets.

Design/methodology/approach

The design of the methods that provide guidance on factory modelling is inductive. The literature review outlines techniques for the simulation of energy efficiency in manufacturing, utility and facility assets, and demonstrates that detailed guidance for modelling across these domains is sparse. Therefore, five experiments are undertaken in IES<VE>, an integrated manufacturing, utility and facility simulation software package. These evaluate the impact of time-step granularity on the modelling of a paint-shop process.

Findings

Experimentation demonstrates that time-step granularity can have a significant impact on the quality of simulation model results. Linear deterioration in results can be assumed from time intervals of 10 minutes and beyond. Therefore, an appropriate logging interval and time-step granularity should be chosen during the data-composition process. Time-step granularity is a vital factor in the modelling process, impacting the quality of the simulation results produced.

Practical implications

This work supports progress towards sustainable factories by clarifying the impact of time-step granularity on data composition, modelling and the quality of simulation results. A better understanding of this granularity factor will guide engineers to use an appropriate level of data and to understand the impact of the choices they are making.

Originality/value

This paper reports on the use of a simulation modelling tool that links the manufacturing, utilities and facilities domains, enabling their joint analysis to reduce factory resource consumption. Currently, few tools are available to link these areas together; hence, there is little or no understanding of how such combined factory analysis should be conducted to assess and reduce factory resource consumption.

Details

International Journal of Energy Sector Management, vol. 9 no. 4
Type: Research Article
ISSN: 1750-6220

Article
Publication date: 27 February 2009

Ling T. He, Chenyi Hu and K. Michael Casey


Abstract

Purpose

The purpose of this paper is to forecast variability in mortgage rates by using interval measured data and an interval computing method.

Design/methodology/approach

Variability (interval) forecasts generated by the interval computing are compared with lower‐ and upper‐bound forecasts based on the ordinary least squares (OLS) rolling regressions.

Findings

On average, 56 per cent of annual changes in mortgage rates may be predicted by OLS lower‐ and upper‐bound forecasts while the interval method improves forecasting accuracy to 72 per cent.

Research limitations/implications

This paper uses the interval computing method to forecast variability in mortgage rates. Future studies may expand variability forecasting into more risk‐managing areas.

Practical implications

Results of this study may be interesting to executive officers of banks, mortgage companies, and insurance companies, builders, investors, and other financial decision makers with an interest in mortgage rates.

Originality/value

Although it is well known that changes in mortgage rates can significantly affect the housing market and the economy, the literature contains little serious research attempting to forecast variability in mortgage rates. This study is the first endeavour in variability forecasting for mortgage rates.

Details

The Journal of Risk Finance, vol. 10 no. 2
Type: Research Article
ISSN: 1526-5943

Book part
Publication date: 23 June 2016

Ai Han, Yongmiao Hong, Shouyang Wang and Xin Yun

Abstract

Modelling and forecasting interval-valued time series (ITS) have received increasing attention in statistics and econometrics. An interval-valued observation contains more information than a point-valued observation in the same time period. The previous literature has mainly considered modelling and forecasting a univariate ITS. However, few works attempt to model a vector process of ITS. In this paper, we propose an interval-valued vector autoregressive moving average (IVARMA) model to capture the cross-dependence dynamics within an ITS vector system. A minimum-distance estimation method is developed to estimate the parameters of an IVARMA model, and consistency, asymptotic normality and asymptotic efficiency of the proposed estimator are established. A two-stage minimum-distance estimator is shown to be asymptotically most efficient among the class of minimum-distance estimators. Simulation studies show that the two-stage estimator indeed outperforms other minimum-distance estimators for various data-generating processes considered.
