Search results

1 – 10 of 320
Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and methods of multi-attribute index system design and to establish a systematic, standardized and scientific set of index systems for the design, optimization and inspection process. The research may form the basis for rational, comprehensive evaluation and provide an effective way of improving the quality of management decision-making. It is of practical significance for improving the rationality and reliability of index systems and for providing standardized, scientific reference standards and theoretical guidance for their design and construction.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification” and clarifies the classification standards and data quality characteristics of index data. A data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretive structural modeling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific and standardized classification and hierarchical design of the index system can be realized.
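
As a rough, hypothetical illustration of the classification-and-stratification step (the indicator names, data and number of clusters below are invented for the sketch and are not the paper's), K-means can group indicators whose series behave similarly into candidate layers of the index system:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical standardized indicator data: rows = indicators,
# columns = observations (e.g. yearly values of each indicator).
rng = np.random.default_rng(0)
indicators = ["GDP growth", "port throughput", "fish catch",
              "R&D spending", "patent count", "energy use"]
X = rng.normal(size=(len(indicators), 10))

# Group indicators with similar behaviour into k classes; each class
# is a candidate category/layer of the index system.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

for cls in range(k):
    members = [name for name, lab in zip(indicators, labels) if lab == cls]
    print(f"class {cls}: {members}")
```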

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual: aiming at the accuracy, reliability and feasibility of patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, modern unsupervised learning methods are used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 14 March 2022

Haruo H. Horaguchi

Abstract

Purpose

This article examines the accuracy and bias inherent in the wisdom of crowd effect. The purpose is to clarify what kind of bias crowds have when they make predictions. In the theoretical inquiry, the effect of the accumulated absolute deviation was simulated. In the empirical study, the observed biases were examined using data from forecasting foreign exchange rates.

Design/methodology/approach

In the theoretical inquiry, the effect of the accumulated absolute deviation was simulated based on mathematical propositions. In the empirical study, the data from 2004 to 2011 were provided by Nikkei, which holds the “Nikkei Yen Derby” competition. In total, 3,657 groups forecasted the foreign exchange rate, and the first prediction was done in early May to forecast the rate at the end of May. The second round took place in June in a similar manner.
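
A minimal simulation in the spirit of that theoretical inquiry might look as follows; the noise levels, the anchoring weight and the "observed" rate are purely illustrative assumptions, chosen so that the second round is biased toward an observable rate (the actuality bias) while its cross-sectional spread shrinks (fact convergence):

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 105.0      # realized end-of-period rate (hypothetical)
n_groups = 3657        # number of forecasting groups, as in the Nikkei data

# Round 1 (May): diverse, independent forecasts around the eventual rate.
round1 = true_rate + rng.normal(0.0, 3.0, n_groups)

# Round 2 (June): forecasts anchored to a rate observable near the
# forecast date, which shrinks the cross-sectional spread but adds bias.
observed_now = 100.0
round2 = 0.7 * observed_now + 0.3 * true_rate + rng.normal(0.0, 1.5, n_groups)

for name, fc in [("May", round1), ("June", round2)]:
    print(f"{name:4s} mean abs. deviation {np.abs(fc - true_rate).mean():6.3f}, "
          f"std of forecasts {fc.std():5.3f}")
```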

Findings

The average absolute deviation in May was smaller than that in June: the first round of prediction was more accurate than the second. Predictors were affected by the observable real exchange rate, modifying their forecasts by reference to the actual data in early June. An actuality bias existed when the participants lost the diversity of their views. Since the standard deviations of the June forecasts were smaller than those of May, the fact-convergence effect was supported.

Originality/value

This article reports novel findings that affect the wisdom of crowd effect, referred to as the actuality bias and the fact-convergence effect. The former is a forecasting bias toward the rate observable near the forecasting date. The latter implies that predictors, as a whole, show smaller forecast deviations after observing the realized foreign exchange rate.

Details

Review of Behavioral Finance, vol. 15 no. 5
Type: Research Article
ISSN: 1940-5979

Open Access
Article
Publication date: 21 August 2023

Yue Zhou, Xiaobei Shen and Yugang Yu

Abstract

Purpose

This study examines the relationship between demand forecasting error and retail inventory management in an uncertain supplier yield context. Replenishment is segmented into off-season and peak-season, with the former characterized by longer lead times and higher supply uncertainty. In contrast, the latter incurs higher acquisition costs but ensures certain supply, with the retailer's purchase volume aligning with the acquired volume. Retailers can replenish in both phases, receiving goods before the sales season. This paper focuses on the impact of the retailer's demand forecasting bias on their sales period profits for both phases.

Design/methodology/approach

This study adopts a data-driven research approach by drawing inspiration from real data provided by a cooperating enterprise to address research problems. Mathematical modeling is employed to solve the problems, and the resulting optimal strategies are tested and validated in real-world scenarios. Furthermore, the applicability of the optimal strategies is enhanced by incorporating numerical simulations under other general distributions.

Findings

The study's findings reveal that a greater disparity between predicted and actual demand distributions can significantly reduce the profits that a retailer-supplier system can earn, with the optimal purchase volume also being affected. Moreover, the paper shows that the mean of the forecasting error has a more substantial impact on system revenue than the variance of the forecasting error. Specifically, the larger the absolute difference between the predicted and actual means, the lower the system revenue. As a result, managers should focus on improving the quality of demand forecasting, especially the accuracy of mean forecasting, when making replenishment decisions.
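
A toy single-period newsvendor calculation (a deliberate simplification of the paper's two-stage model; all prices, costs and distributions here are assumptions of the sketch) illustrates the asymmetry the authors report: a shift in the forecast mean moves the order quantity, and hence profit, much more than a comparable inflation of the forecast standard deviation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
price, cost = 10.0, 6.0                  # hypothetical unit price and cost
ratio = (price - cost) / price           # newsvendor critical ratio
mu, sigma = 100.0, 20.0                  # true demand distribution
demand = rng.normal(mu, sigma, 200_000)  # Monte Carlo demand draws

def expected_profit(q):
    sales = np.minimum(q, demand)
    return (price * sales - cost * q).mean()

def order_quantity(mu_hat, sigma_hat):
    # The retailer orders against its (possibly wrong) forecast distribution.
    return mu_hat + sigma_hat * norm.ppf(ratio)

cases = {"perfect forecast":    (mu,        sigma),
         "mean shifted by +15": (mu + 15.0, sigma),
         "std inflated by +15": (mu,        sigma + 15.0)}
for name, (m, s) in cases.items():
    q = order_quantity(m, s)
    print(f"{name:20s} q = {q:7.2f}  expected profit = {expected_profit(q):8.2f}")
```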

Originality/value

This study established a two-stage inventory optimization model that simultaneously considers random yield and demand forecast quality, and provides explicit expressions for optimal strategies under two specific demand distributions. Furthermore, the authors focused on how forecast error affects the optimal inventory strategy and obtained interesting properties of the optimal solution. In particular, the property that the optimal procurement quantity no longer changes with increasing forecast error under certain conditions is noteworthy, and has not been previously noted by scholars. Therefore, the study fills a gap in the literature.

Details

Modern Supply Chain Research and Applications, vol. 5 no. 2
Type: Research Article
ISSN: 2631-3871

Open Access
Article
Publication date: 20 February 2023

Nuh Keleş

Abstract

Purpose

This study aims to apply new modifications to the method based on the removal effects of criteria (MEREC) by changing its nonlinear logarithmic calculation step. Geometric and harmonic means, as multiplicative functions, are used for the modifications while the effect of each criterion on the overall performance is removed one by one. Instead of the nonlinear logarithmic measure used in the MEREC method, the aim is to obtain results that are closer to the mean and have a lower standard deviation.

Design/methodology/approach

The MEREC method is based on the removal effects of the criteria on the overall performance. The method uses a logarithmic measure with a nonlinear function. MEREC-G using geometric mean and MEREC-H using harmonic mean are introduced in this study. The authors compared the MEREC method, its modifications and some other objective weight determination methods.
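
A compact sketch of the idea, on a made-up 4 x 3 decision matrix with benefit-type criteria: the original MEREC uses a logarithmic performance measure, and the variants below swap in geometric and harmonic means. The exact MEREC-G and MEREC-H formulations are the paper's; the measures here are a plausible reading of the abstract, not a transcription.

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 benefit-type criteria.
X = np.array([[450., 8.0, 54.],
              [380., 9.0, 48.],
              [510., 7.0, 60.],
              [420., 8.5, 51.]])

# MEREC normalization for benefit criteria (smaller normalized value = better).
N = X.min(axis=0) / X

def weights(perf):
    """Turn a row-performance measure into weights via removal effects."""
    m = N.shape[1]
    S = perf(N)                                     # performance per alternative
    S_without = np.column_stack(                    # performance w/o criterion j
        [perf(np.delete(N, j, axis=1)) for j in range(m)])
    E = np.abs(S_without - S[:, None]).sum(axis=0)  # removal effect per criterion
    return E / E.sum()

log_measure = lambda M: np.log1p(np.abs(np.log(M)).mean(axis=1))  # original MEREC
geo_measure = lambda M: M.prod(axis=1) ** (1.0 / M.shape[1])      # MEREC-G-style
har_measure = lambda M: M.shape[1] / (1.0 / M).sum(axis=1)        # MEREC-H-style

for name, f in [("MEREC  ", log_measure), ("MEREC-G", geo_measure),
                ("MEREC-H", har_measure)]:
    print(name, np.round(weights(f), 4))
```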

Findings

MEREC-G and MEREC-H, the modifications of the MEREC method, are shown to be effective in determining the objective weights of the criteria. The findings of the MEREC-G and MEREC-H variants are more convenient, simpler, more reasonable, closer to the mean and show smaller deviations. The MEREC-G variant was found to give findings more compatible with the entropy method.

Practical implications

Decision-making can occur at any time in any area of life, and there are various criteria and alternatives to weigh. In multi-criteria decision-making (MCDM) models, determining the criteria weights for the selection or ranking of alternatives is a crucial step. The MEREC method can be used to find more reasonable, or closer-to-average, results than other weight determination methods such as entropy, and it can be expected to see wider use in everyday problems and various application areas.

Originality/value

Objective weight determination methods evaluate the weights of the criteria according to the scores of the determined alternatives. In this study, the MEREC method, an objective weight determination method, has been extended. Although the literature uses a nonlinear measurement model, this study contributes by using multiplicative functions instead. As an important element of originality, the authors demonstrated the criterion-removal effect of the MEREC method in a sensitivity analysis by actually removing the alternatives one by one from the model.

Details

International Journal of Industrial Engineering and Operations Management, vol. 5 no. 3
Type: Research Article
ISSN: 2690-6090

Open Access
Article
Publication date: 17 December 2019

Yin Kedong, Shiwei Zhou and Tongtong Xu

Abstract

Purpose

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for the primary selection and optimization inspection of indicators. The purpose of this paper is to provide theoretical guidance and reference standards for the indicator system design process, laying a solid foundation for the application of the indicator system, by systematically exploring expert evaluation methods to optimize the index system: enhancing its credibility and reliability, improving its resolution and accuracy and reducing its subjectivity and randomness.

Design/methodology/approach

The paper is based on system theory and statistics, and it follows the main line of “relevant theoretical analysis – identification of indicators – expert assignment and quality inspection” to achieve the design and optimization of the indicator system. First, theoretical basis analysis, relevant factor analysis and physical process description are used to clarify the comprehensive evaluation problem and the correlation mechanism. Second, system structure analysis, hierarchical decomposition and indicator set identification are used to complete the initial establishment of the indicator system. Third, based on expert assignment methods such as the Delphi method, statistical analysis, t-tests and non-parametric tests are used to diagnose the assignment quality of single indicators; reliability and validity tests are used to correct single-indicator assignments; and consistency tests based on Kendall's coordination coefficient and the F-test are used to diagnose the quality of multi-indicator expert assignments.
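
As one concrete piece of the multi-indicator quality diagnosis, Kendall's coordination coefficient W measures how consistently a panel of experts ranks the indicators; the expert scores below are hypothetical, and the chi-squared approximation m(n-1)W gives a quick significance check (no tied ranks here, so no tie correction is applied):

```python
import numpy as np
from scipy.stats import rankdata, chi2

# Hypothetical expert scores: rows = 5 experts, columns = 6 indicators.
scores = np.array([[7, 5, 8, 4, 6, 3],
                   [8, 5, 7, 3, 6, 4],
                   [6, 4, 8, 5, 7, 3],
                   [7, 6, 8, 4, 5, 3],
                   [8, 4, 7, 5, 6, 2]])

ranks = np.vstack([rankdata(row) for row in scores])  # rank within each expert
m, n = ranks.shape
R = ranks.sum(axis=0)                                  # rank sum per indicator
S = ((R - R.mean()) ** 2).sum()
W = 12.0 * S / (m**2 * (n**3 - n))                     # Kendall's W

chi2_stat = m * (n - 1) * W                            # approximate test statistic
p_value = chi2.sf(chi2_stat, df=n - 1)
print(f"W = {W:.3f}, chi2 = {chi2_stat:.2f}, p = {p_value:.4f}")
```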

Findings

Compared with the traditional index system construction method, the optimization process used in the study standardizes the process of index establishment, reduces subjectivity and randomness, and enhances objectivity and scientific rigor.

Originality/value

The innovation points and value of the paper are embodied in three aspects. First, a systematic design process for the combined indicator system, with multi-dimensional index screening and system optimization, ensures that the index system is scientific, reasonable and comprehensive. Second, the experts' backgrounds are comprehensively evaluated, and the objectivity and reliability of the experts' assignments are analyzed and improved on the basis of traditional methods. Third, aiming at the quality of expert assignment, t-tests and non-parametric tests of single indicators, together with coordination and importance tests of multiple indicators, are conducted, which enhances the practicality of expert assignment and ensures its quality.

Details

Marine Economics and Management, vol. 2 no. 1
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 13 April 2022

Florian Schuberth, Manuel E. Rademaker and Jörg Henseler

Abstract

Purpose

This study aims to examine the role of overall model fit assessment in the context of partial least squares path modeling (PLS-PM). In doing so, it explains when it is important to assess the overall model fit and provides ways of assessing the fit of composite models. Moreover, it resolves major concerns about model fit assessment that have been raised in the literature on PLS-PM.

Design/methodology/approach

This paper explains when and how to assess the fit of PLS path models. Furthermore, it discusses the concerns raised in the PLS-PM literature about the overall model fit assessment and provides concise guidelines on assessing the overall fit of composite models.

Findings

This study explains that the model fit assessment is as important for composite models as it is for common factor models. To assess the overall fit of composite models, researchers can use a statistical test and several fit indices known through structural equation modeling (SEM) with latent variables.
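
One such index is the standardized root mean square residual (SRMR), the root of the mean squared discrepancy between the observed and the model-implied correlation matrix; a statistical test of that discrepancy plays the role of the exact fit test. A minimal sketch with two hypothetical 4 x 4 correlation matrices (the 0.08 cutoff is a convention from the SEM literature, not a rule from this paper):

```python
import numpy as np

def srmr(observed, implied):
    """SRMR over the unique off-diagonal elements of two correlation matrices."""
    p = observed.shape[0]
    iu = np.triu_indices(p, k=1)
    resid = observed[iu] - implied[iu]
    return np.sqrt((resid ** 2).mean())

# Hypothetical observed and model-implied correlation matrices.
S_obs = np.array([[1.00, 0.42, 0.35, 0.28],
                  [0.42, 1.00, 0.31, 0.25],
                  [0.35, 0.31, 1.00, 0.40],
                  [0.28, 0.25, 0.40, 1.00]])
S_hat = np.array([[1.00, 0.40, 0.33, 0.30],
                  [0.40, 1.00, 0.29, 0.27],
                  [0.33, 0.29, 1.00, 0.37],
                  [0.30, 0.27, 0.37, 1.00]])

print(f"SRMR = {srmr(S_obs, S_hat):.4f}")  # values below ~0.08 are often deemed acceptable
```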

Research limitations/implications

Researchers who use PLS-PM to assess composite models that aim to understand the mechanism of an underlying population and draw statistical inferences should take the concept of the overall model fit seriously.

Practical implications

To facilitate the overall fit assessment of composite models, this study presents a two-step procedure adopted from the literature on SEM with latent variables.

Originality/value

This paper clarifies that the necessity to assess model fit is not a question of which estimator will be used (PLS-PM, maximum likelihood, etc.) but of the purpose of statistical modeling. Whereas model fit assessment is paramount in explanatory modeling, it is not imperative in predictive modeling.

Details

European Journal of Marketing, vol. 57 no. 6
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 29 January 2024

Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin

Abstract

Purpose

Surface roughness has a serious impact on the fatigue strength, wear resistance and life of mechanical products, yet tracking the evolution of surface quality through theoretical modeling alone takes considerable effort. To predict the surface roughness of milling, this paper aims to construct a neural network based on deep learning and data augmentation.

Design/methodology/approach

This study proposes a method consisting of three steps. First, a multisource data acquisition platform for the machine tool is established, combining sensor monitoring with machine tool communication to collect processing signals. Second, feature parameters are extracted to reduce interference and improve the model's generalization ability. Third, for different expectations, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.

Findings

The adaptive synthetic sampling (ADASYN) algorithm improves the classification prediction accuracy of the DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy improved significantly further: for the classification model, prediction accuracy rises by another 5.77% over the ADASYN-optimized baseline. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or maximum absolute error (MaxAE), and the error is reduced by more than 40% compared to the original model.
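
A minimal sketch of the class-balancing step using the ADASYN implementation in the imbalanced-learn package; the synthetic data and the random-forest classifier standing in for the paper's Tent-SSA-optimized DBN are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import ADASYN

# Synthetic stand-in for imbalanced roughness-class data (few "rough" samples).
X, y = make_classification(n_samples=600, n_features=12,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def fit_and_score(Xt, yt):
    clf = RandomForestClassifier(random_state=0).fit(Xt, yt)
    return accuracy_score(y_te, clf.predict(X_te))

print("without ADASYN:", fit_and_score(X_tr, y_tr))
# ADASYN generates synthetic minority samples adaptively, concentrating on
# regions where the minority class is hardest to learn.
X_res, y_res = ADASYN(random_state=0).fit_resample(X_tr, y_tr)
print("with ADASYN:   ", fit_and_score(X_res, y_res))
```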

Originality/value

A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, with the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve the optimization performance.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 17 January 2020

Erkki Kalervo Laitinen

Abstract

Purpose

The purpose of this study is to introduce a matching function approach to analyze matching in financial reporting.

Design/methodology/approach

The matching function is first analyzed analytically. It is specified as a multiplicative Cobb-Douglas-type function of three categories of expenses (labor expense, material expense and depreciation). The specified matching function is solved by the generalized reduced gradient method (GRG) for 10-year time series from 8,226 Finnish firms. The coefficient of determination of the logarithmic model (CODL) is compared with the linear revenue-expense correlation coefficient (REC) that is generally used in previous studies.
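
In a Cobb-Douglas-type specification, the matching function and its estimable logarithmic form can be written as below (the symbols are assumptions based on the abstract; the paper's exact notation may differ):

```latex
% Revenue R_t matched against labor expense L_t, material expense M_t
% and depreciation D_t; alpha, beta, gamma act as matching elasticities.
R_t = A\,L_t^{\alpha} M_t^{\beta} D_t^{\gamma} e^{\varepsilon_t}
\qquad\Longrightarrow\qquad
\ln R_t = \ln A + \alpha \ln L_t + \beta \ln M_t + \gamma \ln D_t + \varepsilon_t
```

On this reading, CODL is the coefficient of determination of the logarithmic form, while REC is the simple linear correlation between revenue and total expenses.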

Findings

Empirical evidence showed that REC is outperformed by CODL. CODL was found independent of or weakly negatively dependent on the matching elasticity of labor expense, positively dependent on the material expense elasticity and negatively dependent on depreciation elasticity. Therefore, the differences in matching accuracy between industries emphasizing different expense categories are significant.

Research limitations/implications

The matching function is a general approach to assessing matching accuracy, but in this study it is specified multiplicatively for three categories of expenses. Moreover, only one algorithm is tested in the empirical estimation of the function. The analysis is concentrated on ten-year time series from a limited sample of Finnish firms.

Practical implications

The matching function approach provides a large set of important information for considering the matching process in practice. It can prove a useful method also to accounting standard-setters and other specialists such as managers, consultants and auditors.

Originality/value

This study is the first study to apply the new matching function approach.

Details

Journal of Financial Reporting and Accounting, vol. 18 no. 1
Type: Research Article
ISSN: 1985-2517

Open Access
Article
Publication date: 4 November 2020

Mahmoud Alsaid, Rania M. Kamal and Mahmoud M. Rashwan

Abstract

Purpose

This paper presents economic and economic–statistical designs of the adaptive exponentially weighted moving average (AEWMA) control chart for monitoring the process mean. It also aims to compare the effect of estimated process parameters on the economic performance of three charts, which are Shewhart, exponentially weighted moving average and AEWMA control charts with economic–statistical design.

Design/methodology/approach

The optimal parameters of the control charts are obtained by applying Lorenzen and Vance's (1986) cost function. Comparisons between the economic–statistical and economic designs of the AEWMA control chart are performed in terms of expected cost and statistical measures. Comparisons are also made between the economic performance of the three competing charts in terms of the average expected cost and the standard deviation of the expected cost.
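
For reference, the chart statistic follows the adaptive EWMA recursion of Capizzi and Masarotto (2003), which updates like an ordinary EWMA for small forecast errors and like a Shewhart chart for large ones. The sketch below uses a Huber-type score function; the smoothing constant, threshold and simulated shift are illustrative choices, whereas in the paper such parameters come out of the cost-function optimization:

```python
import numpy as np

def aewma_path(x, lam=0.1, k=3.0):
    """Adaptive EWMA: z_t = z_{t-1} + phi(e_t) with a Huber-type score phi."""
    def phi(e):
        # Small errors get the smooth EWMA weight lam; large errors are
        # passed through almost fully, giving Shewhart-like reactions.
        return lam * e if abs(e) <= k else e - np.sign(e) * (1 - lam) * k
    z, path = 0.0, []
    for xt in x:
        z += phi(xt - z)
        path.append(z)
    return np.array(path)

rng = np.random.default_rng(1)
# 100 in-control N(0, 1) observations, then a +1.5-sigma mean shift.
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 50)])
z = aewma_path(x)
print("max |z| before shift:", np.abs(z[:100]).max().round(3))
print("max |z| after shift: ", np.abs(z[100:]).max().round(3))
```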

Findings

This paper concludes that taking into account the economic factors and statistical properties in designing the AEWMA control chart leads to a slight increase in cost but in return the improvement in the statistical performance is substantial. In addition, under the estimated parameters case, the comparisons reveal that from the economic point of view the AEWMA chart is the most efficient chart when detecting shifts of different sizes.

Originality/value

The importance of the study stems from designing the AEWMA chart from both economic and statistical points of view because it has not been tackled before. In addition, this paper contributes to the literature by studying the effect of the estimated parameters on the performance of control charts with economic–statistical design.

Details

Review of Economics and Political Science, vol. 6 no. 2
Type: Research Article
ISSN: 2356-9980

Open Access
Article
Publication date: 13 March 2018

Teik-Kheong Tan and Merouane Lakehal-Ayat

Abstract

Purpose

The impact of volatility crush can be devastating to an option buyer and can result in a substantial capital loss, even with a directionally correct strategy. As a result, most volatility plays are for option sellers, but the profit they can achieve is limited and the sellers carry unlimited risk. This paper aims to demonstrate the dynamics of implied volatility (IV) as influenced by the effects of persistence, leverage, market sentiment and liquidity. From the exploratory factor analysis (EFA), the authors extract four constructs, and the results from the confirmatory factor analysis (CFA) indicate a good model fit for the constructs.

Design/methodology/approach

This section describes the methodology used for conducting the study. This includes the study area, study approach, sources of data, sampling technique and the method of data analysis.

Findings

Although there is extensive literature on methods for estimating IV dynamics during earnings announcements, few researchers have looked at the impact of the expected market maker move, the IV differential and IV Rank on the IV path after the earnings announcement. One reason for this research gap is the relatively recent introduction of weekly options for equities by the Chicago Board Options Exchange (CBOE) in late 2010. Even then, the CBOE only released weekly options for four individual equities: Bank of America (BAC.N), Apple (AAPL.O), Citigroup (C.N) and US-listed shares of BP (BP.L) (BP.N). The introduction of weekly options provided more trading flexibility and precision timing from shorter durations, which automatically expanded expiration choices and in turn offered greater access and flexibility for trading volatility during earnings announcements. This study has demonstrated the impact of including market sentiment and liquidity in the forecasting model for IV during earnings. This understanding in turn helps traders formulate strategies that can circumvent the undefined risk associated with options strategies such as writing strangles.

Research limitations/implications

The first limitation of the study is that the firms included in the study are relatively large, and the results of the study can therefore not be generalized to medium sized and small firms. The second limitation lies in the current sample size, which in many cases was not enough to be able to draw reliable conclusions on. Scaling the sample size up is only a function of time and effort. This is easily overcome and should not be a limitation in the future. The third limitation concerns the measurement of the variables. Under the assumption of a normal distribution of returns (i.e. stock prices follow a random walk process), which means that the distribution of returns is symmetrical, one can estimate the probabilities of potential gains or losses associated with each amount. This means the standard deviation of securities returns, which is called historical volatility and is usually calculated as a moving average, can be used as a risk indicator. The prices used for the calculations are usually the closing prices, but Parkinson (1980) suggests that the day’s high and low prices would provide a better estimate of real volatility. One can also refine the analysis with high-frequency data. Such data enable the avoidance of the bias stemming from the use of closing (or opening) prices, but they have only been available for a relatively short time. The length of the observation period is another topic that is still under debate. There are no criteria that enable one to conclude that volatility calculated in relation to mean returns over 20 trading days (or one month) and then annualized is any more or less representative than volatility calculated over 130 trading days (or six months) and then annualized, or even than volatility measured directly over 260 trading days (one year). Nonetheless, the guidelines adopted in this study represent the best practices of researchers thus far.
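
The two volatility estimators the passage contrasts are easy to state: annualized close-to-close volatility is the standard deviation of daily log returns scaled by the square root of the trading-day count, while Parkinson's (1980) estimator uses the daily high-low range. The sketch below compares them over the 20-, 130- and 260-day windows the passage mentions, on simulated prices (all data and the 252-day annualization factor are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 260                                    # one year of trading days
# Simulated daily close, high and low prices (geometric random walk).
ret = rng.normal(0.0, 0.012, n)
close = 100.0 * np.exp(np.cumsum(ret))
spread = np.abs(rng.normal(0.008, 0.003, n))
high, low = close * np.exp(spread), close * np.exp(-spread)

def close_to_close_vol(c, window):
    r = np.diff(np.log(c[-window - 1:]))
    return r.std(ddof=1) * np.sqrt(252)

def parkinson_vol(h, l, window):
    hl = np.log(h[-window:] / l[-window:]) ** 2
    return np.sqrt(hl.mean() / (4 * np.log(2))) * np.sqrt(252)

for window in (20, 130, 260):
    print(f"{window:3d} days: close-to-close {close_to_close_vol(close, window):.3f}, "
          f"Parkinson {parkinson_vol(high, low, window):.3f}")
```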

Practical implications

This study has indicated that an earnings announcement can provide a volatility mispricing opportunity to allow an investor to profit from a sudden, sharp drop in IV. More specifically, the methodology developed by Tan and Bing is now well supported both empirically and theoretically in terms of qualifying opportunities that can be profitable because of the volatility crush. Conventionally, the option strategy of shorting strangles carries unlimited theoretical risk; however, the methodology has demonstrated that this risk can be substantially reduced if followed judiciously. This profitable strategy relies on a set of qualifying parameters including liquidity, premium collection, volatility differential, expected market move and market sentiment. Building upon this framework, the understanding of the effects of persistence and leverage resulted in further reducing the risk associated with trading options during earnings announcements. As a guideline, the sentiment and liquidity variables help to qualify a trade and the effects of persistence and leverage help to close the qualified trade.

Social implications

The authors find a positive association among the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcements. These findings further substantiate the four factors that influence IV dynamics during earnings announcements and show that looking at persistence and leverage alone will not generate profitable trading opportunities.

Originality/value

The impact of volatility crush can be devastating to the option buyer, with substantial capital loss, even for a directionally correct strategy. As a result, most volatility plays are for option sellers; however, the profit is limited and the sellers carry unlimited risk. The authors demonstrate the dynamics of IV as influenced by the effects of persistence, leverage, market sentiment and liquidity. From the EFA, they extracted four constructs, and the results from the CFA indicated a good model fit for the constructs. Using EFA, CFA and Bayesian analysis, the authors demonstrate how this model can help investors formulate the right strategy to achieve the best risk/reward mix. Using Bayesian estimation and the IV differential to proxy for differences of opinion about term structures in option pricing, they find a positive association among the effects of market sentiment, liquidity, persistence and leverage in the dynamics of IV during earnings announcements.

Details

PSU Research Review, vol. 2 no. 1
Type: Research Article
ISSN: 2399-1747
