Search results

1 – 10 of over 19000
Article
Publication date: 30 March 2010

Ricardo de A. Araújo

Abstract

Purpose

The purpose of this paper is to present a new quantum-inspired evolutionary hybrid intelligent (QIEHI) approach to overcome the random walk dilemma in stock market prediction.

Design/methodology/approach

The proposed QIEHI method is inspired by Takens' theorem and performs a quantum-inspired evolutionary search for the minimum necessary dimension (time lags) embedded in the problem, determining the characteristic phase space that generates the financial time series phenomenon. The approach consists of a quantum-inspired intelligent model composed of an artificial neural network (ANN) with a modified quantum-inspired evolutionary algorithm (MQIEA), which is able to evolve the complete ANN architecture and parameters (pruning process), the ANN training algorithm (used to further improve the ANN parameters supplied by the MQIEA), and the most suitable time lags, to better describe the time series phenomenon.
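
As an illustration of the embedding idea behind this search, here is a minimal sketch (not the authors' code) of Takens-style time-delay embedding; the embedding dimension `dim` is the quantity the quantum-inspired evolutionary search would tune.

```python
import numpy as np

def delay_embed(series, dim, tau=1):
    """Build the phase-space matrix whose rows are the lag vectors
    [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this embedding")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Each row of X is one point in the reconstructed phase space; a predictor
# (e.g. an ANN) would map each row to the next value of the series.
X = delay_embed(np.arange(10.0), dim=3)
```

The evolutionary layer would then score candidate values of `dim` (and `tau`) by the forecasting error of a model trained on the embedded matrix.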

Findings

This paper finds that the proposed QIEHI method first chooses the better prediction model and then performs a behavioral statistical test to adjust time phase distortions that appear in financial time series. An experimental analysis is also conducted with the proposed approach using six real-world stock market time series, and the obtained results are discussed and compared, according to a group of relevant performance metrics, with results from multilayer perceptron networks and the previously introduced time-delay added evolutionary forecasting method.

Originality/value

The paper usefully demonstrates how the proposed QIEHI method chooses the best prediction model for the time series representation and performs a behavioral statistical test to adjust time phase distortions that frequently appear in financial time series.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 November 2019

Amitava Choudhury, Tanmay Konnur, P.P. Chattopadhyay and Snehanshu Pal

Abstract

Purpose

The purpose of this paper is to predict the various phases and crystal structures of multi-component alloys. Nowadays, the concept and strategies of the development of multi-principal element alloys (MPEAs) significantly increase the number of potential candidate alloy systems, which demands proper screening of a large number of alloy systems based on the nature of their phase and structure. Experimentally obtained data linking elemental properties and their resulting phases for MPEAs are abundant; hence, there is strong scope for the categorization/classification of MPEAs based on structural features of the resultant phase, along with distinctive connections between elemental properties and phases.

Design/methodology/approach

In this paper, several machine-learning algorithms have been used to recognize the underlying data patterns, using data sets to design MPEAs and classify them based on structural features of their resultant phase, such as single-phase solid solution, amorphous and intermetallic compounds. Further classification of MPEAs having a single-phase solid solution is performed based on crystal structure using an ensemble-based machine-learning method known as the random-forest algorithm.

Findings

The model developed by implementing the random-forest algorithm has achieved an accuracy of 91 per cent for phase prediction and 93 per cent for crystal structure prediction for the single-phase solid solution class of MPEAs. Five input parameters are used in the prediction model, namely valence electron concentration, difference in Pauling electronegativity, atomic size difference, mixing enthalpy and mixing entropy. Valence electron concentration has been found to be the most important feature for phase prediction. To avoid overfitting, fivefold cross-validation has been performed. To understand comparative performance, different algorithms such as k-nearest neighbors, support vector machines, logistic regression, naïve Bayes, decision trees and neural networks have also been applied to the data set.
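
A hypothetical sketch of this setup (the paper's alloy data, labels and tuning are not reproduced here; the feature values and the labeling rule below are synthetic placeholders): a random forest over the five stated inputs, scored with fivefold cross-validation and inspected for feature importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["VEC", "delta_electronegativity", "delta_atomic_size",
            "mixing_enthalpy", "mixing_entropy"]
X = rng.normal(size=(200, len(features)))
# Toy rule standing in for real phase labels (solid solution vs. not),
# dominated by the first feature so that VEC comes out most important:
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # fivefold cross-validation
importances = clf.fit(X, y).feature_importances_   # per-feature importance
```

With real MPEA data, `importances` is the quantity that would surface valence electron concentration as the dominant predictor.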

Originality/value

In this paper, the authors describe the phase selection and crystal structure prediction mechanism for the MPEA data set and achieve better accuracy using machine learning.

Details

Engineering Computations, vol. 37 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 16 October 2018

Nandkumar Mishra and Santosh B. Rane

Abstract

Purpose

The purpose of this technical paper is to explore the application of analytics and Six Sigma in the manufacturing processes for iron foundries. This study aims to establish a causal relationship between chemical composition and the quality of the iron casting to achieve the global benchmark quality level.

Design/methodology/approach

A case study-based exploratory research design is used in this study. Problem discovery is done through a literature survey and Delphi method-based expert opinions. The prediction model is built and deployed in 11 cases to validate the research hypothesis, and the analytics help achieve statistically significant business goals. The design includes the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) approach, benchmarking, historical data analysis, a literature survey and experiments for data collection. Data analysis is done through stratification and process capability analysis. Logistic regression-based analytics supports prediction model building and simulation.
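
An illustrative sketch only (the foundry's data and the fitted coefficients are not public; the composition ranges and rejection rule below are invented): logistic regression linking chemical composition to a pass/fail casting-quality label, as in the described analytics step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic composition data: columns stand in for %C, %Si, %Mn.
X = rng.uniform(low=[3.0, 1.5, 0.3], high=[3.8, 2.5, 0.8], size=(300, 3))
# Toy rule: castings with low carbon and low silicon tend to be rejected.
y = ((X[:, 0] < 3.3) & (X[:, 1] < 2.0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
# Estimated rejection probability for one hypothetical melt chemistry:
proba = model.predict_proba([[3.1, 1.7, 0.5]])[0, 1]
```

In the paper's setting, such predicted probabilities drive the root-cause simulations before a melt is poured.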

Findings

The application of the prediction model helped in quick root cause analysis and reduced rejections by over 99 per cent, saving over INR6.6m per year. This also enhanced the reliability of the production line and supply chain, with on-time delivery of 99.78 per cent, up from 80 per cent. The analytics with the Six Sigma DMAIC approach can be quickly and easily applied in the manufacturing domain as well.

Research limitations/implications

The limitation of the present analytics model is that it provides point estimates. The model can be further enhanced by incorporating range estimates through Monte Carlo simulation.

Practical implications

The increasing use of prediction models in the near future is likely to enhance the predictability and efficiency of various manufacturing processes with sensors and the Internet of Things.

Originality/value

Previous researchers have used design of experiments, artificial neural networks and technical simulations to optimise chemical composition, mould properties or melt shop parameters. This work, however, is based on comprehensive historical data-based analytics. It considers multiple human and temporal factors, sand and mould properties and melt shop parameters, along with their relative weights, which is unique. The prediction model is useful to practitioners for parameter simulation and quality enhancement. Researchers can use similar analytics models with a structured Six Sigma DMAIC approach in other manufacturing processes for simulation and optimisation.

Details

International Journal of Lean Six Sigma, vol. 10 no. 1
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 19 July 2022

Harish Kundra, Sudhir Sharma, P. Nancy and Dasari Kalyani

Abstract

Purpose

Bitcoin has been widely acknowledged as an investment asset in recent decades, following the boom and bust of cryptocurrency values. Because of its extreme volatility, accurate forecasts are required to support economic decisions. Although prior research has utilized machine learning to improve Bitcoin price prediction accuracy, few studies have examined the plausibility of applying multiple modeling approaches to datasets containing varying data types and volumetric attributes. Thus, this paper aims to propose a bitcoin price prediction model.

Design/methodology/approach

In this research work, a bitcoin price prediction model is introduced that follows three major phases: data collection, feature extraction and price prediction. Initially, the collected Bitcoin time-series data are preprocessed and the original features are extracted. To achieve a high level of accuracy, second-order technical-indicator-based features are also extracted, such as average true range (ATR), modified exponential moving average (M-EMA), relative strength index and rate of change, along with a proposed decomposed inter-day difference. Subsequently, these extracted features, together with the original features, are passed to the prediction phase, where the bitcoin price value is predicted by a constructed two-level ensemble classifier. The two-level ensemble classifier combines two classifiers: an optimized convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. To cope with the volatility of bitcoin prices, the weight parameters of the CNN are fine-tuned by a new hybrid optimization model. The proposed hybrid optimization model, referred to as the black widow updated rain optimization (BWURO) model, is a conceptual blend of the rain optimization algorithm and the black widow optimization algorithm.
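
A rough sketch of two of the technical-indicator features described, ATR (with Wilder-style smoothing, a common convention) and a standard EMA; the paper's exact M-EMA definition, the decomposed inter-day difference and the BWURO optimizer are specific to the paper and not reproduced here.

```python
import numpy as np

def true_range(high, low, close):
    """Per-bar true range: the largest of high-low and the gaps to the prior close."""
    prev_close = np.roll(close, 1)
    prev_close[0] = close[0]
    return np.maximum.reduce([high - low,
                              np.abs(high - prev_close),
                              np.abs(low - prev_close)])

def atr(high, low, close, period=14):
    """Average true range via exponential (Wilder) smoothing of the true range."""
    tr = true_range(high, low, close)
    out = np.empty_like(tr)
    out[0] = tr[0]
    alpha = 1.0 / period
    for i in range(1, len(tr)):
        out[i] = out[i - 1] + alpha * (tr[i] - out[i - 1])
    return out

def ema(x, period=10):
    """Standard exponential moving average (a base the paper's M-EMA modifies)."""
    alpha = 2.0 / (period + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = out[i - 1] + alpha * (x[i] - out[i - 1])
    return out
```

Features like these, computed over the raw price series, would form part of the input matrix fed to the CNN-BiLSTM ensemble.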

Findings

The proposed work is compared with existing models in terms of convergence, MAE, MAPE, MARE, MSE, MSPE, MRSE, root mean square error (RMSE), RMSPE and RMSRE. These evaluations cover both algorithmic performance and classifier performance. At LP = 50, the MAE of the proposed work is 0.023372, which is 59.8%, 72.2%, 62.14% and 64.08% better than BWURO + Bi-LSTM, CNN + BWURO, NN + BWURO and SVM + BWURO, respectively.

Originality/value

In this research work, a new modified EMA feature is extracted, which makes bitcoin price prediction more efficient. A two-level ensemble classifier is constructed in the price prediction phase by blending the Bi-LSTM and the optimized CNN. To deal with the volatility of bitcoin values, a novel hybrid optimization model is used to fine-tune the weight parameters of the CNN.

Details

Kybernetes, vol. 52 no. 11
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 26 February 2024

Chong Wu, Xiaofang Chen and Yongjie Jiang

Abstract

Purpose

While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.

Design/methodology/approach

In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen for EFS to select features. Since information may be lost when feeding the results of the base learners directly into the meta-learner, the features with high importance were fed into the meta-learner as well. A screening layer was added to select the meta-learner with the best performance. Finally, the learners' hyperparameters were tuned.
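
A loose sketch of the stacking idea using scikit-learn's built-in `StackingClassifier` (not the authors' implementation): base learners' out-of-fold predictions are augmented with the original features via `passthrough` before reaching the meta-learner. The paper's refinements, feeding only high-importance features and screening among candidate meta-learners, are omitted, and the data here are synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    passthrough=True,  # original features accompany the base predictions
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

Restricting `passthrough` to only the top-ranked EFS features, as the paper does, would require a column-selection step before the meta-learner.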

Findings

An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.

Originality/value

Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 13 November 2007

Tun Lin Moe, Fritz Gehbauer, Stefan Senitz and Marc Mueller

Abstract

Purpose

Recognizing the necessity of effectively and successfully managing natural disaster projects, to save human lives and to prevent and minimize the impacts of disasters on socio-economic development, this paper proposes a balanced scorecard (BSC) approach to maximize the possibility of desired outcomes from projects.

Design/methodology/approach

The BSC approach, which has been widely accepted and used in business organizations, can be adapted for natural disaster management projects. An application of this BSC approach to disaster management projects is discussed with a real flood disaster management project.

Findings

In the BSC approach, performance measures should be established in four areas: the donors' perspective; the target beneficiaries' perspective; the internal process perspective; and the learning and innovation perspective. Measures for the four areas in each of the five generic phases of managing natural disasters (i.e. preparedness, early warning, emergency relief, rehabilitation and recovery) allow project managers to identify problem areas and areas for further improvement. Ensuring success in one phase supports success in the next, because the output of each phase is the input to the following one.
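
As a toy illustration (not from the paper), the four perspectives crossed with the five phases form a 4 x 5 measure matrix; the sample measure name below is an invented placeholder.

```python
perspectives = ["donors", "beneficiaries", "internal_process", "learning_innovation"]
phases = ["preparedness", "early_warning", "emergency_relief",
          "rehabilitation", "recovery"]

# One list of performance measures per (phase, perspective) cell.
scorecard = {(ph, pe): [] for ph in phases for pe in perspectives}
scorecard[("early_warning", "beneficiaries")].append("warning lead time (hours)")
```

Filling every cell forces a project manager to confront phases or perspectives that currently have no measure at all.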

Research limitations/implications

In general, this study demonstrates an application of the balanced scorecard approach to natural disaster management projects and, in particular, to a real flood disaster management project in Hat Yai Municipality, Southern Thailand. Future research might focus on other types of natural disaster.

Practical implications

Using the balanced scorecard, project managers can understand problem areas as well as areas for improvement in current projects, which would enhance their abilities to take corrective actions that ensure and maximize the possibilities of successful outcomes from implemented projects.

Originality/value

This paper proposes the BSC approach for successfully managing natural disaster projects. This management approach can be applied to various natural disaster management projects.

Details

Disaster Prevention and Management: An International Journal, vol. 16 no. 5
Type: Research Article
ISSN: 0965-3562

Article
Publication date: 1 May 2006

Tun Lin Moe and Pairote Pathranarakul

Abstract

Purpose

With an aim to develop an integrated approach for effectively managing natural disasters, this paper has three research objectives. First, it provides a framework for effective natural disaster management from a public project management perspective. Second, it proposes an integrated approach for successfully and effectively managing disaster crises. Third, it specifies a set of critical success factors for managing disaster-related public projects.

Design/methodology/approach

A detailed case study of the tsunami was carried out to identify specific problems associated with managing natural disasters in Thailand.

Findings

The investigations reveal that the country lacked a master plan for natural disaster management (covering prediction, warning, mitigation and preparedness) and suffered from an unspecified responsible governmental authority, an unclear line of authority, ineffective collaboration among institutions at different levels, a lack of encouragement for the participation of local and international NGOs, a lack of tsunami education and knowledge in potentially affected communities, and the absence of an information management or database system.

Research limitations/implications

This study identifies the specific problems associated with natural disaster management based on a detailed case study of the 2004 tsunami disaster in Thailand.

Practical implications

The proposed integrated approach, which includes both proactive and reactive strategies, can be applied to managing natural disasters successfully in Thailand.

Originality/value

This paper highlights the importance of having proactive and reactive strategies for natural disaster management.

Details

Disaster Prevention and Management: An International Journal, vol. 15 no. 3
Type: Research Article
ISSN: 0965-3562

Article
Publication date: 8 June 2015

Shye-Nee Low, Shahrul Kamaruddin and Ishak Abdul Azid

Abstract

Purpose

The purpose of this paper is to investigate multiple criteria decision-making (MCDM) processes within a flow-line production-improvement activity. Such investigation can clarify how a process improvement framework influences decisions and whether it fulfills its potential to successfully change the operation process.

Design/methodology/approach

The improvement process selection (IPS) framework is built systematically by incorporating all related decision criteria with suitable tools required to select improvement alternatives. The process consists of three phases: identification, prediction, and selection. The IPS framework is validated through a case study of a company that was carrying out a flow-line production-improvement project.

Findings

The developed framework is used to prioritize the problem scope and select solutions from various options. The case study illustrates the process through which the developed framework provided a systematic approach to identifying solutions and achieving the desired performance improvement. Analysis of the prediction results shows that the framework achieved sustainable process improvements and shields management from the higher risks of failed improvement initiatives. Feedback from the case study verified the robustness of the framework.

Practical implications

Quantitative improvement tools, such as the MCDM methods employed in the IPS framework, are vital for a better understanding of the impact of improvement changes. The improvement alternatives can thus be analyzed more comprehensively by considering numerous performance metrics in order to select the best improvement alternative.
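
As an illustration of one common MCDM technique, a weighted-sum scoring step (the paper's IPS framework does not commit to this exact method; the alternatives, criteria and weights below are invented) for ranking improvement alternatives against several performance criteria:

```python
import numpy as np

# Rows: improvement alternatives; columns: normalized criteria scores
# (e.g. cost saving, throughput gain, implementation risk), all in [0, 1].
scores = np.array([[0.6, 0.8, 0.5],
                   [0.9, 0.4, 0.7],
                   [0.7, 0.7, 0.9]])
weights = np.array([0.5, 0.3, 0.2])  # criteria weights, summing to 1

totals = scores @ weights            # weighted-sum score per alternative
best = int(np.argmax(totals))        # index of the preferred alternative
```

The criteria scores must be normalized to a common scale before weighting, otherwise a criterion with large raw magnitudes dominates regardless of its weight.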

Originality/value

The IPS framework can assist the company in determining optimal decisions in relation to selection of improvement alternatives. As a result, production performance can be affected positively.

Details

International Journal of Productivity and Performance Management, vol. 64 no. 5
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 23 August 2018

Murtaza Nasir, Carole South-Winter, Srini Ragothaman and Ali Dag

Abstract

Purpose

The purpose of this paper is to formulate a framework that constructs a patient-specific risk score and classifies patients into risk groups, which medical decision makers can use as a decision support mechanism to augment their decision-making process and make optimal use of the limited resources available.

Design/methodology/approach

A conventional statistical model (logistic regression) and two machine learning-based data mining models (artificial neural networks (ANNs) and support vector machines) were employed, with five-fold cross-validation in the classification phase. To overcome the data imbalance problem, a random undersampling technique was utilized. After constructing the patient-specific risk score, the k-means clustering algorithm was employed to group patients into risk groups.
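
A simplified sketch of this pipeline on synthetic data (not the study's data; logistic regression stands in for the compared models, and the undersampling step is omitted): a classifier's predicted probability serves as the patient-specific risk score, and k-means then groups patients into risk tiers.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# Predicted probability of the adverse outcome = per-patient risk score.
risk = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# Cluster the 1-D risk scores into three tiers (e.g. low / medium / high).
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    risk.reshape(-1, 1))
```

Clustering on the score rather than on raw features keeps the resulting tiers directly interpretable as ordered risk bands.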

Findings

Results showed that the ANN model achieved the best results, with an area under the curve score of 0.867, while the sensitivity and specificity were 0.715 and 0.892, respectively. Also, the construction of patient-specific risk scores offers useful insights to medical experts, helping them find a trade-off between risks, costs and resources.

Originality/value

The study contributes to the existing body of knowledge by constructing a framework that can be utilized to determine the risk level of the targeted patient by employing a data mining-based predictive approach.

Details

Industrial Management & Data Systems, vol. 119 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 3 November 2021

Anteneh Ayanso, Mingshan Han and Morteza Zihayat

Abstract

Purpose

This paper aims to propose an automated mobile app labeling framework based on a novel app classification scheme that is aligned with users’ primary motivations for using smartphones. The study addresses the gaps in incorporating the needs of users and other context information in app classification as well as recommendation systems.

Design/methodology/approach

Based on a corpus of mobile app descriptions collected from the Google Play store, this study applies extensive text analytics and topic modeling procedures to profile mobile apps within the categories of the classification scheme. A sufficient number of representative, labeled app descriptions are then used to train a classifier using machine learning algorithms such as rule-based methods, decision trees and artificial neural networks.
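
A minimal sketch of the labeling step (the toy descriptions and labels below are invented, and naive Bayes stands in for the classifiers the paper compares): TF-IDF features extracted from app descriptions feed a text classifier that assigns each new app a category.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = ["track your runs and workouts",
                "edit photos and share with friends",
                "plan workouts and count calories",
                "apply filters to your pictures"]
labels = ["fitness", "photography", "fitness", "photography"]

# TF-IDF vectorization followed by a naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(descriptions, labels)
pred = clf.predict(["log your daily workouts"])[0]
```

At the paper's scale, the same pipeline shape applies, with topic-model-derived profiles replacing or augmenting the raw TF-IDF features.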

Findings

Experimental results of the classifiers show high accuracy in automatically labeling new apps based on their descriptions. The accuracy of the classification results suggests a feasible direction for facilitating app search and retrieval in different Web-based usage environments.

Research limitations/implications

As is common in textual data projects, data size and data quality issues persisted throughout the multiple phases of the experiments. Future research will extend the data collection scope in many aspects to address the issues that constrained the current experiments.

Practical implications

These empirical experiments demonstrate the feasibility of textual data analysis in profiling apps and user context information. This study also benefits app developers by improving app descriptions through a better understanding of user needs and context information. Finally, the classification framework can also guide practitioners in customizing products and services beyond mobile apps where context information and user needs play an important role.

Social implications

Given the widespread usage and applications of smartphones today, the proposed app classification framework will have broader implications for different Web-based application environments.

Originality/value

While there have been other classification approaches in the literature, to the best of the authors’ knowledge, this framework is the first study on building an automated app labeling framework based on primary motivations of smartphone usage.
