Search results

1 – 10 of over 126,000
Article
Publication date: 15 March 2011

Yi‐Hui Liang

Abstract

Purpose

The purpose of this study is to propose a time series decomposition approach to analyze and predict the failure data of repairable systems.

Design/methodology/approach

This study employs a nonhomogeneous Poisson process (NHPP) to model the failure data. First, Nelson's graphical method is used to estimate the mean number of repairs and the mean cumulative repair function (MCRF) for the repairable system. Second, the time series decomposition approach is used to predict future mean numbers of repairs and MCRF values.
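As a rough illustration of the first step, the sketch below computes a graphical mean cumulative function estimate from hypothetical failure times, under the simplifying assumption that all systems share one observation window (the data and the simplification are ours, not the paper's):

```python
import numpy as np

# Hypothetical failure times (hours) for three repairable systems,
# all observed over the same window; the common-window assumption
# keeps every system "at risk" at every failure time.
failure_times = {
    "sys1": [120, 450, 700],
    "sys2": [200, 480, 620, 910],
    "sys3": [330, 560],
}
n_systems = len(failure_times)

# Nelson's graphical estimate: at each failure time the mean
# cumulative (repair) function rises by 1 / (systems at risk).
events = sorted(t for times in failure_times.values() for t in times)
mcf = np.arange(1, len(events) + 1) / n_systems

for t, m in zip(events, mcf):
    print(f"t = {t:4d} h  MCF = {m:.3f}")
```

The resulting step function is the series that a decomposition into trend-cycle and seasonal components would then be applied to.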

Findings

The proposed method can analyze and predict the reliability of repairable systems, capturing the combined effect of the trend-cycle components and the seasonal component of the failure data.

Research limitations/implications

This study adopts only simulated data to verify the proposed method; future research may verify it with real products' failure data. In predicting the reliability of repairable systems, the proposed method is superior to ARIMA and neural network prediction techniques.

Practical implications

Results in this study can provide a valuable reference for engineers when constructing quality feedback systems for assessing current quality conditions, providing logistical support, correcting product design, facilitating optimal component‐replacement and maintenance strategies, and ensuring that products meet quality requirements.

Originality/value

The time series decomposition approach was used in 2007 to model and analyze software aging and software failure, but it has rarely been used to model and analyze the failure data of repairable systems. This study proposes the time series decomposition approach for analyzing and predicting the failure data of repairable systems and shows that it is better than the ARIMA model and neural networks in predictive accuracy.

Details

International Journal of Quality & Reliability Management, vol. 28 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 19 July 2019

Hossein Abbasimehr and Mostafa Shabani

Abstract

Purpose

The purpose of this paper is to propose a new methodology that accounts for the dynamic behavior of customers over time.

Design/methodology/approach

A new methodology based on time series clustering is presented to extract dominant behavioral patterns of customers over time. It is implemented on bank customers' transaction data, which take the form of time series comprising the recency (R), frequency (F) and monetary (M) attributes of businesses that use the bank's point-of-sale (POS) devices. These data were obtained from the data analysis department of the bank.
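A minimal sketch of the idea, clustering hypothetical monthly monetary (M) series with plain k-means (the data and clustering choices here are illustrative, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: 12 monthly monetary (M) values for 200 customers,
# i.e. one short time series per customer.
monetary = rng.gamma(shape=2.0, scale=100.0, size=(200, 12))

# z-normalize each customer's series so clusters reflect the shape of
# the trend rather than the absolute spending level.
series = (monetary - monetary.mean(axis=1, keepdims=True)) \
         / monetary.std(axis=1, keepdims=True)

# Each cluster centroid is a candidate dominant behavioral trend
# (e.g. growing, declining, stable).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(series)
for k, centroid in enumerate(kmeans.cluster_centers_):
    trend = "growing" if centroid[-1] > centroid[0] else "declining/flat"
    print(f"cluster {k}: {(kmeans.labels_ == k).sum()} customers, {trend}")
```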

Findings

After an empirical study on the acquired transaction data of 2,531 business customers using the bank's POS devices, the dominant behavioral trends were discovered with the proposed methodology and analyzed from a marketing viewpoint. Based on the monetary attribute, customers were divided into four main segments: high-value growing customers, middle-value growing customers, customers prone to churn and churners. For each group with a distinctive trend, effective and practical marketing recommendations were devised to improve the bank's relationship with that group. The prone-to-churn segment contains most of the customers; therefore, the bank should run attractive promotions to retain this segment.

Practical implications

The discovered trends of customer behavior and proposed marketing recommendations can be helpful for banks in devising segment-specific marketing strategies as they illustrate the dynamic behavior of customers over time. The obtained trends are visualized so that they can be easily interpreted and used by banks. This paper contributes to the literature on customer relationship management (CRM) as the proposed methodology can be effectively applied to different businesses to reveal trends in customer behavior.

Originality/value

In the current business environment, customer behavior changes continually over time and customers churn because of reduced switching costs. Choosing an effective customer segmentation methodology that can capture the dynamic behavior of customers is therefore essential for every business. This paper proposes a new methodology that captures customer dynamics using time series clustering on time-ordered data, an improvement over previous studies, which have often adopted static segmentation approaches. To the best of the authors' knowledge, this is the first study to combine the recency, frequency, monetary (RFM) model with time series clustering to reveal trends in customer behavior.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 11 January 2021

Kamalpreet Singh Bhangu, Jasminder Sandhu and Luxmi Sapra

Abstract

Purpose

This study analyses the coronavirus disease (COVID-19) pandemic using machine learning algorithms. The data set comes from the API provided by the Johns Hopkins University resource centre, and a web crawler was used to gather the data features: confirmed, recovered and death cases. Because no COVID-19 drug is currently available, the outbreak is not expected to end in the near future, so the case counts in this study are necessarily date specific. The analysis focuses on monthly confirmed, recovered and death cases, which helps identify the trend and seasonality in the data. The purpose of this study is to apply the essential concepts of time series algorithms to the infected cases worldwide and to forecast the spread of the virus over the next two weeks, thereby aiding health-care services. The low mean absolute percentage error obtained over the forecasting interval supports the model's credibility.

Design/methodology/approach

In this study, time series forecasting of the outbreak was performed with the auto-regressive integrated moving average (ARIMA) model and the seasonal auto-regressive integrated moving average with exogenous regressors (SARIMAX) model, both optimized to achieve better results.
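A minimal sketch of such a forecast with statsmodels, on synthetic case counts and with illustrative model orders (not the paper's tuned values):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily confirmed-case counts standing in for the Johns
# Hopkins data used in the paper.
rng = np.random.default_rng(1)
t = np.arange(120)
cases = 50 * np.exp(0.03 * t) + rng.normal(0, 20, size=t.size)
series = pd.Series(cases, index=pd.date_range("2020-03-01", periods=120))

# Seasonal ARIMA with a weekly cycle; the orders here are illustrative.
fit = SARIMAX(series, order=(1, 1, 1),
              seasonal_order=(1, 0, 1, 7)).fit(disp=False)
forecast = fit.forecast(steps=14)  # the paper's two-week horizon

# Rough in-sample check via mean absolute percentage error (MAPE).
mape = (np.abs((series - fit.fittedvalues) / series)).mean() * 100
print(forecast.round(1))
print(f"in-sample MAPE: {mape:.1f}%")
```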

Findings

The ARIMA and SARIMAX forecasting models efficiently produced close approximations of the observed series. The forecasts indicate an increasing trend and a steep rise in COVID-19 cases in many regions and countries, which may face some of their worst days unless measures are taken quickly to curb the spread of the disease. The pattern of the rise in such countries closely mimics that of countries with early COVID-19 onset, such as Italy and the USA. Further, the models' outputs are date specific, so the most recent execution would return more recent results. Future work involves analysis with other models, such as long short-term memory networks, and comparison with the time series models.

Originality/value

A time series is a time-stamped data set in which each data point corresponds to a set of observations made at a particular time. This work is novel in addressing COVID-19 through time series analysis, with ARIMA and SARIMAX forecasting models that efficiently produced close approximations of the observed series.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284


Details

Messy Data
Type: Book
ISBN: 978-0-76230-303-8

Details

Nonlinear Time Series Analysis of Business Cycles
Type: Book
ISBN: 978-0-44451-838-5

Book part
Publication date: 1 January 2004

Jessica Lin and Eamonn Keogh

Abstract

Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is completely meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work.
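The claim is easy to reproduce. In the sketch below (our construction, not the chapter's code), k-means centers computed from overlapping subsequences of a structureless random walk come out as smooth, sine-like shapes:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=5000))  # no repeated patterns by design

# Sliding-window subsequence extraction -- the setting being critiqued.
w = 64
subs = np.lib.stride_tricks.sliding_window_view(walk, w)
subs = (subs - subs.mean(axis=1, keepdims=True)) \
       / subs.std(axis=1, keepdims=True)

centers = KMeans(n_clusters=3, n_init=10,
                 random_state=0).fit(subs).cluster_centers_

# Plotting the rows of `centers` shows smooth sinusoid-like curves: an
# artifact of the overlapping windows, not structure in the data.
print(np.round(centers[:, ::8], 2))
```

Repeating this on any other long series yields essentially the same cluster centers, which is the sense in which the output is independent of the input.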

Details

Applications of Artificial Intelligence in Finance and Economics
Type: Book
ISBN: 978-1-84950-303-7

Article
Publication date: 2 February 2015

Songhao Shang

Abstract

Purpose

The purpose of this paper is to propose a new temporal disaggregation method for time series based on the accumulated and inverse accumulated generating operations of grey modeling, combined with interpolation.

Design/methodology/approach

This disaggregation method comprises three main steps: accumulation, interpolation and differentiation (AID). First, a low-frequency flow series is transformed into the corresponding stock series through the accumulated generating operation. Then, values of the stock series at unobserved times are estimated through an appropriate interpolation method. Finally, the disaggregated stock series is transformed back into a high-frequency flow series through the inverse accumulated generating operation.
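A minimal sketch of the three AID steps on a hypothetical quarterly sales series (the cubic-spline choice and the zero initial stock are our assumptions, not necessarily the paper's):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical quarterly sales: a low-frequency flow series.
quarterly = np.array([120.0, 150.0, 170.0, 160.0])

# Accumulation: the accumulated generating operation turns the flow
# series into a stock series (assume an empty stock at t = 0).
stock = np.concatenate(([0.0], np.cumsum(quarterly)))

# Interpolation: estimate the stock series at the unobserved monthly
# points with a cubic spline over the quarter boundaries 0..4.
spline = CubicSpline(np.arange(5), stock)
stock_monthly = spline(np.linspace(0, 4, 13))

# Differentiation: the inverse accumulated generating operation
# (first differences) recovers the high-frequency monthly flow series.
monthly = np.diff(stock_monthly)

print(monthly.round(1))
# The monthly values add back up to the observed quarterly totals.
print(np.allclose(monthly.reshape(4, 3).sum(axis=1), quarterly))
```

Because the spline interpolates the stock series exactly at the quarter boundaries, the disaggregated flows are automatically consistent with the observed totals.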

Findings

The AID method is tested on a sales series. Results show that the disaggregated sales data are satisfactory and reliable compared with both the original data and data disaggregated using a time series model. The AID method is applicable both to long time series and to grey series with insufficient information.

Practical implications

The AID method can be easily used to disaggregate low frequency flow series.

Originality/value

The AID method combines the grey modeling technique with interpolation. Compared with other disaggregation methods, it is simple and requires neither the auxiliary information nor the plausible minimizing criterion that those methods need.

Details

Grey Systems: Theory and Application, vol. 5 no. 1
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 18 November 2019

Muhammad Zahir Khan and Muhammad Farid Khan

Abstract

Purpose

A significant number of studies have analyzed the relationship between gas emissions and global temperature using conventional statistical approaches. However, these techniques rest on the assumptions of probabilistic modeling, so their results can carry large errors, and they cannot be applied to imprecise data. The purpose of this paper is to avoid strict assumptions when studying the complex relationships between variables by using three innovative, up-to-date statistical modeling tools: adaptive neuro-fuzzy inference systems (ANFIS), artificial neural networks (ANNs) and fuzzy time series models.

Design/methodology/approach

These three approaches enabled us to represent effectively the relationship between global carbon dioxide (CO2) emissions from the energy sector (oil, gas and coal) and the increase in average global temperature, using data for 1900-2012. The predictive power and performance of the fuzzy techniques were investigated against conventional methods and against one another.

Findings

A performance comparison of the ANFIS model against conventional techniques showed root mean square errors (RMSE) of 0.1157 and 0.1915, respectively. The correlation coefficients of the ANN and the conventional technique were 0.93 and 0.69, respectively. Furthermore, a fuzzy time series analysis of CO2 emissions and average global temperature using three fuzzy time series modeling techniques (Singh, Abbasov–Mamedova and NFTS) gave RMSEs of 110.51 for the fuzzy models and 1237.10 for the conventional time series models.
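As a toy illustration of this kind of RMSE comparison, the sketch below fits a conventional linear model and a small ANN to synthetic emission-temperature data (all values are made up; ANFIS and the fuzzy time series models are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: annual CO2 emissions (x) versus global average
# temperature anomaly (y), with a mildly nonlinear relationship.
x = np.linspace(2.0, 10.0, 113).reshape(-1, 1)   # 113 years, 1900-2012
y = 0.05 * x.ravel() ** 1.5 + rng.normal(0.0, 0.05, 113)

linear = LinearRegression().fit(x, y)            # conventional technique
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(x, y)     # ANN counterpart

for name, model in (("linear", linear), ("ANN", ann)):
    rmse = np.sqrt(mean_squared_error(y, model.predict(x)))
    print(f"{name} RMSE: {rmse:.4f}")
```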

Social implications

The paper raises awareness of the application of fuzzy techniques in studies of CO2 emissions.

Originality/value

These techniques can be extended to other models to assess the impact of CO2 emissions from other sectors.

Details

International Journal of Climate Change Strategies and Management, vol. 11 no. 5
Type: Research Article
ISSN: 1756-8692

Article
Publication date: 30 March 2010

Ricardo de A. Araújo

Abstract

Purpose

The purpose of this paper is to present a new quantum‐inspired evolutionary hybrid intelligent (QIEHI) approach, in order to overcome the random walk dilemma for stock market prediction.

Design/methodology/approach

The proposed QIEHI method is inspired by Takens' theorem and performs a quantum-inspired evolutionary search for the minimum dimension (time lags) embedded in the problem, determining the characteristic phase space that generates the financial time series phenomenon. The approach consists of a quantum-inspired intelligent model composed of an artificial neural network (ANN) with a modified quantum-inspired evolutionary algorithm (MQIEA), which is able to evolve the complete ANN architecture and parameters (pruning process), the ANN training algorithm (used to further improve the ANN parameters supplied by the MQIEA) and the most suitable time lags, to better describe the time series phenomenon.
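A minimal sketch of the time-lag (delay) embedding that underlies this search, with the dimension fixed by hand rather than evolved as in the paper:

```python
import numpy as np

def delay_embed(series, dim, lag=1):
    """Takens-style phase-space reconstruction: row t is
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

# Hypothetical daily closing prices; the QIEHI method would search for
# the minimum embedding dimension instead of fixing dim = 4 by hand.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))

X = delay_embed(prices, dim=4)
inputs, target = X[:, :-1], X[:, -1]  # lagged values predict the next one
print(inputs.shape, target.shape)      # (497, 3) (497,)
```

The `inputs`/`target` pairs form the training set for whatever predictor sits on top of the embedding, here an ANN in the paper's case.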

Findings

This paper finds that the proposed QIEHI method first chooses the better prediction model and then performs a behavioral statistical test to adjust the time-phase distortions that appear in financial time series. An experimental analysis is conducted with the proposed approach on six real-world stock market time series, and the results are discussed and compared, according to a group of relevant performance metrics, with results obtained with multilayer perceptron networks and the previously introduced time-delay added evolutionary forecasting method.

Originality/value

The paper usefully demonstrates how the proposed QIEHI method chooses the best prediction model for the time series representation and performs a behavioral statistical test to adjust the time-phase distortions that frequently appear in financial time series.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 6 November 2017

Chaw Thet Zan and Hayato Yamana

Abstract

Purpose

The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments; each segment is represented by its mean value and mapped to an alphabet symbol, where the number of adopted symbols is called the alphabet size. Both parameters control the data compression ratio and the accuracy of time series mining tasks, and their optimal values depend strongly on the application and data set. In practice, these parameters are selected iteratively by analyzing entire data sets, which limits the handling of very large numbers of time series and reduces the applicability of SAX.

Design/methodology/approach

The segment size is estimated based on the Shannon sampling theorem (autoSAXSD_S) and on adaptive hierarchical segmentation (autoSAXSD_M). The alphabet size is estimated from how the mean values of all the segments are distributed: a small alphabet size is set when the means are widely distributed, so that differences among segments are easily distinguished.
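For reference, a minimal sketch of the underlying SAX transform that these schemes parameterize (standard SAX, not the proposed estimators):

```python
import numpy as np
from scipy.stats import norm

def sax(series, n_segments, alphabet_size):
    """Standard SAX: z-normalize, reduce with piecewise aggregate
    approximation (PAA), then map each segment mean to a symbol via
    equiprobable breakpoints of the standard normal distribution."""
    x = (series - series.mean()) / series.std()
    usable = len(x) // n_segments * n_segments      # drop any remainder
    paa = x[:usable].reshape(n_segments, -1).mean(axis=1)
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    return "".join(chr(ord("a") + s) for s in np.searchsorted(breakpoints, paa))

rng = np.random.default_rng(0)
noisy_sine = np.sin(np.linspace(0, 4 * np.pi, 128)) + rng.normal(0, 0.1, 128)
print(sax(noisy_sine, n_segments=8, alphabet_size=4))  # 8 symbols over {a..d}
```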

Findings

Experimental evaluation on University of California Riverside (UCR) data sets shows that the proposed schemes select the parameters well, with high classification accuracy and efficiency comparable to the state-of-the-art methods SAX and auto_iSAX.

Originality/value

The originality of this paper lies in finding the optimal SAX parameters through the proposed estimation schemes: the segment size is estimated automatically by two approaches, and the alphabet size is estimated from the most frequent mean value among the segments.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084
