Search results

1 – 10 of over 127,000
Article
Publication date: 15 March 2011

Yi‐Hui Liang

Abstract

Purpose

The purpose of this study is to propose the time series decomposition approach to analyze and predict the failure data of repairable systems.

Design/methodology/approach

This study employs a nonhomogeneous Poisson process (NHPP) to model the failure data. First, Nelson's graph method is employed to estimate the mean number of repairs and the MCRF value for the repairable system. Second, the time series decomposition approach is employed to predict future mean numbers of repairs and MCRF values.
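
The abstract does not spell out the decomposition computation; as a minimal sketch of the general technique, a classical additive decomposition of a simulated monthly repair-count series might look like the following (the data, the 12-month seasonality and the library choice are all illustrative assumptions, not the authors' procedure):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly repair counts for a repairable system (a stand-in for
# the mean-repair and MCRF values estimated via Nelson's graph method).
rng = np.random.default_rng(0)
months = pd.date_range("2015-01", periods=60, freq="MS")
trend = 0.5 * np.arange(60)                            # growing repair intensity
season = 2.0 * np.sin(2 * np.pi * np.arange(60) / 12)  # yearly cycle
repairs = pd.Series(trend + season + rng.normal(0, 0.5, 60), index=months)

# Split into trend-cycle, seasonal and irregular components; extrapolating
# trend + season gives a naive predictor of future repair numbers.
parts = seasonal_decompose(repairs, model="additive", period=12)
print(parts.trend.dropna().tail())
print(parts.seasonal.tail(12))
```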

Findings

The proposed method can analyze and predict the reliability of repairable systems. It can analyze the combined effect of the trend-cycle components and the seasonal component of the failure data.

Research limitations/implications

This study uses only simulated data to verify the proposed method; future research may verify it against real products' failure data. The proposed method is superior to ARIMA and neural network prediction techniques in predicting the reliability of repairable systems.

Practical implications

Results in this study can provide a valuable reference for engineers when constructing quality feedback systems for assessing current quality conditions, providing logistical support, correcting product design, facilitating optimal component‐replacement and maintenance strategies, and ensuring that products meet quality requirements.

Originality/value

The time series decomposition approach was used to model and analyze software aging and software failure in 2007, but it has rarely been used to model and analyze failure data for repairable systems. This study proposes the time series decomposition approach for analyzing and predicting the failure data of repairable systems; the proposed method surpasses the ARIMA model and neural networks in predictive accuracy.

Details

International Journal of Quality & Reliability Management, vol. 28 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 30 July 2019

Hossein Abbasimehr and Mostafa Shabani

Abstract

Purpose

The purpose of this paper is to propose a new methodology that accounts for the dynamic behavior of customers over time.

Design/methodology/approach

A new methodology is presented based on time series clustering to extract the dominant behavioral patterns of customers over time. The methodology is implemented on bank customers' transaction data, which take the form of time series. The data include the recency (R), frequency (F) and monetary (M) attributes of businesses that use the bank's point-of-sale (POS) devices, and were obtained from the bank's data analysis department.
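
The abstract does not name the clustering algorithm, so the sketch below only illustrates the general idea: dynamic-time-warping k-means (from the tslearn library) applied to synthetic monetary (M) series, with four clusters mirroring the four reported segments. Shapes, sizes and the library choice are assumptions, not the authors' implementation:

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

# Toy stand-in for the paper's data: one monetary (M) series per customer,
# e.g. 24 months of POS turnover; R and F could be added as extra channels.
rng = np.random.default_rng(1)
monetary = rng.gamma(2.0, 1.0, size=(200, 24, 1)).cumsum(axis=1)

# z-normalise each series so clusters reflect shape (trend), not scale.
X = TimeSeriesScalerMeanVariance().fit_transform(monetary)

# Four clusters, mirroring the four segments reported in the findings.
km = TimeSeriesKMeans(n_clusters=4, metric="dtw", random_state=0)
labels = km.fit_predict(X)
print(np.bincount(labels))   # number of customers per behavioural segment
```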

Findings

After carrying out an empirical study on the acquired transaction data of 2,531 business customers using the bank's POS devices, the dominant behavioral trends were discovered with the proposed methodology and analyzed from a marketing viewpoint. Based on the analysis of the monetary attribute, customers were divided into four main segments: high-value growing customers, middle-value growing customers, prone-to-churn customers and churners. For each resulting group of customers with a distinctive trend, effective and practical marketing recommendations were devised to improve the bank's relationship with that group. The prone-to-churn segment contains most of the customers; the bank should therefore run appealing promotions to retain this segment.

Practical implications

The discovered trends of customer behavior and proposed marketing recommendations can be helpful for banks in devising segment-specific marketing strategies as they illustrate the dynamic behavior of customers over time. The obtained trends are visualized so that they can be easily interpreted and used by banks. This paper contributes to the literature on customer relationship management (CRM) as the proposed methodology can be effectively applied to different businesses to reveal trends in customer behavior.

Originality/value

In the current business climate, customer behavior changes continually over time and customers churn because of reduced switching costs. Choosing an effective customer segmentation methodology that can capture the dynamic behavior of customers is therefore essential for every business. This paper proposes a new methodology to capture customer dynamics using time series clustering on time-ordered data, an improvement over previous studies, which have typically adopted static segmentation approaches. To the best of the authors' knowledge, this is the first study to combine the recency, frequency and monetary (RFM) model with time series clustering to reveal trends in customer behavior.

Article
Publication date: 2 February 2015

Songhao Shang

Abstract

Purpose

The purpose of this paper is to propose a new temporal disaggregation method for time series based on the accumulated and inverse accumulated generating operations in grey modeling and the interpolation method.

Design/methodology/approach

This disaggregation method consists of three main steps: accumulation, interpolation and differentiation (AID). First, a low-frequency flow series is transformed into the corresponding stock series through the accumulated generating operation. Then, values of the stock series at unobserved times are estimated through an appropriate interpolation method. Finally, the disaggregated stock series is transformed back into a high-frequency flow series through the inverse accumulated generating operation.
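
Because the three AID steps are fully spelled out, a compact sketch is possible; the yearly totals and the cubic-spline interpolant below are illustrative choices (the abstract only says "appropriate interpolation method"):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Low-frequency flow series: four observed yearly sales totals (toy data).
yearly_flow = np.array([120.0, 150.0, 170.0, 210.0])

# Step 1 - Accumulation: the accumulated generating operation turns the
# flow series into a stock series (value at the end of each year).
stock = np.concatenate(([0.0], np.cumsum(yearly_flow)))
t_obs = np.arange(len(stock))                  # 0, 1, 2, 3, 4 (years)

# Step 2 - Interpolation: estimate the stock series at unobserved
# (quarterly) times with a cubic spline.
t_fine = np.linspace(0, len(yearly_flow), 4 * len(yearly_flow) + 1)
stock_fine = CubicSpline(t_obs, stock)(t_fine)

# Step 3 - Differentiation: the inverse accumulated generating operation
# (first differences) recovers the high-frequency quarterly flows.
quarterly_flow = np.diff(stock_fine)

# The quarters of each year sum back exactly to the original yearly totals,
# because the spline passes through the observed stock values.
print(quarterly_flow.reshape(4, 4).sum(axis=1))
```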

Findings

The AID method is tested on a sales series. Results show that the disaggregated sales data are satisfactory and reliable compared with the original data and with data disaggregated using a time series model. The AID method is applicable both to long time series and to grey series with insufficient information.

Practical implications

The AID method can be easily used to disaggregate low frequency flow series.

Originality/value

The AID method is a combination of the grey modeling technique and the interpolation method. Compared with other disaggregation methods, the AID method is simple and requires neither the auxiliary information nor the plausible minimizing criterion that other disaggregation methods demand.

Details

Grey Systems: Theory and Application, vol. 5 no. 1
Type: Research Article
ISSN: 2043-9377

Open Access
Article
Publication date: 14 November 2022

Simarjeet Singh, Nidhi Walia, Stelios Bekiros, Arushi Gupta, Jigyasu Kumar and Amar Kumar Mishra

Abstract

Purpose

This research study aims to design a novel risk-managed time-series momentum approach and examines the time-series momentum effect in the Indian equity market.

Design/methodology/approach

The study considers the adjusted monthly closing prices of the stocks listed on the Bombay Stock Exchange from January 1996 to December 2020 to formulate long-short portfolios. Newey–West t statistics were used to test the significance of momentum returns. The present research has considered standard risk factors, i.e. market, size and value, to evaluate the risk-adjusted performance of time-series momentum portfolios.
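
As a rough illustration of the mechanics (not the authors' exact portfolio construction or data), a 12-month time-series momentum signal and a Newey-West t-statistic on the resulting long-short returns can be computed as follows, entirely on simulated returns:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly returns for 50 stocks over 300 months (a stand-in
# for the adjusted closing prices of BSE-listed stocks).
rng = np.random.default_rng(2)
rets = pd.DataFrame(rng.normal(0.01, 0.06, size=(300, 50)))

# Time-series momentum: each stock's own trailing 12-month return decides
# its sign: long if positive, short if negative.
signal = np.sign(rets.rolling(12).sum().shift(1))

# Equal-weighted long-short TSMOM portfolio return per month.
tsmom = (signal * rets).mean(axis=1).dropna()

# Newey-West (HAC) t-statistic on the mean monthly return.
ols = sm.OLS(tsmom, np.ones(len(tsmom))).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12})
mean, tstat = np.asarray(ols.params)[0], np.asarray(ols.tvalues)[0]
print(f"mean monthly TSMOM return: {mean:.4f} (Newey-West t = {tstat:.2f})")
```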

Findings

The present research reports a substantial absolute momentum effect in the Indian equity market. However, absolute momentum strategies are exposed to occasional severe losses. The proposed risk-managed time-series momentum approach not only yields a 2.5 times higher return than the standard time-series momentum approach but also substantially improves downside risk and higher-order moments.

Practical implications

The study's outcomes offer valuable insights for professional investors, capital market regulators and asset management companies.

Originality/value

This study is one of the pioneers attempting to test the time-series momentum effect in emerging economies. Besides, current research contributes to the escalating literature on risk-managed momentum by suggesting a novel revised time-series momentum approach.

Details

Journal of Economics, Finance and Administrative Science, vol. 27 no. 54
Type: Research Article
ISSN: 2218-0648

Article
Publication date: 30 March 2010

Ricardo de A. Araújo

Abstract

Purpose

The purpose of this paper is to present a new quantum‐inspired evolutionary hybrid intelligent (QIEHI) approach, in order to overcome the random walk dilemma for stock market prediction.

Design/methodology/approach

The proposed QIEHI method is inspired by Takens' theorem and performs a quantum‐inspired evolutionary search for the minimum necessary dimension (set of time lags) embedded in the problem, determining the characteristic phase space that generates the financial time series phenomenon. The approach consists of a quantum‐inspired intelligent model composed of an artificial neural network (ANN) and a modified quantum‐inspired evolutionary algorithm (MQIEA), which is able to evolve the complete ANN architecture and parameters (pruning process), the ANN training algorithm (used to further improve the ANN parameters supplied by the MQIEA) and the most suitable time lags, to better describe the time series phenomenon.
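
The MQIEA itself is too involved for a short example, but the underlying task, searching for the lag set that best reconstructs the series, can be illustrated with a plain exhaustive search scored by a small neural network. This is only a stand-in for the quantum-inspired evolutionary search, and every name and parameter below is illustrative:

```python
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=500))      # toy random-walk "price" series

def lagged_design(x, lags):
    """Build (X, y) where each column of X holds x shifted by one lag."""
    m = max(lags)
    X = np.column_stack([x[m - lag:-lag] for lag in lags])
    return X, x[m:]

best = (np.inf, None)
for lags in combinations(range(1, 8), 3):     # all candidate lag triples
    X, y = lagged_design(series, lags)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                         random_state=0).fit(X[:-100], y[:-100])
    err = np.mean((model.predict(X[-100:]) - y[-100:]) ** 2)
    if err < best[0]:
        best = (err, lags)
print("best lags:", best[1], "validation MSE:", round(best[0], 4))
```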

Findings

This paper finds that the proposed QIEHI method first chooses the best prediction model and then performs a behavioral statistical test to adjust time-phase distortions that appear in financial time series. An experimental analysis is conducted with the proposed approach on six real-world stock market time series, and the obtained results are discussed and compared, according to a group of relevant performance metrics, with results from multilayer perceptron networks and the previously introduced time-delay added evolutionary forecasting method.

Originality/value

The paper usefully demonstrates how the proposed QIEHI method chooses the best prediction model for the time series representation and performs a behavioral statistical test to adjust the time-phase distortions that frequently appear in financial time series.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 January 2021

Kamalpreet Singh Bhangu, Jasminder Kaur Sandhu and Luxmi Sapra

Abstract

Purpose

This study analyses the coronavirus disease (COVID-19) epidemic using machine learning algorithms. The data set is API data provided by the Johns Hopkins University resource centre, and a Web crawler was used to gather the data features, namely confirmed, recovered and death cases. Because no COVID-19 drug was available at the time, the outbreak was not expected to end in the near future, so the case numbers in this study are date specific. The analysis demonstrated in this paper focuses on the monthly counts of confirmed, recovered and death cases, which helps identify the trend and seasonality in the data. The purpose of this study is to explore the essential concepts of time series algorithms, use those concepts to perform time series analysis on the infected cases worldwide, and forecast the spread of the virus over the next two weeks, thereby aiding health-care services. The low mean absolute percentage error obtained over the forecasting interval supports the model's credibility.

Design/methodology/approach

In this study, the outbreak was forecast using the autoregressive integrated moving average (ARIMA) model as well as the seasonal autoregressive integrated moving average with exogenous regressors (SARIMAX) model, both optimized to achieve better results.
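
A minimal statsmodels sketch of the kind of two-week-ahead SARIMAX forecast described, on simulated cumulative counts; the model orders and the weekly seasonal period are assumptions, not the paper's tuned configuration:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated daily cumulative confirmed-case counts (a stand-in for the
# Johns Hopkins feed used in the study).
rng = np.random.default_rng(4)
days = pd.date_range("2020-03-01", periods=120, freq="D")
cases = pd.Series(np.cumsum(rng.poisson(200, 120)), index=days)

# Illustrative orders; the seasonal period of 7 encodes a weekly
# reporting pattern in the daily data.
fit = SARIMAX(cases, order=(1, 2, 1),
              seasonal_order=(1, 0, 1, 7)).fit(disp=False)

# Two-week-ahead forecast, as in the study; its quality would be judged
# by the mean absolute percentage error on held-out observations.
print(fit.forecast(steps=14).round())
```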

Findings

The ARIMA and SARIMAX forecasting models efficiently produced close approximations of the observed data. The forecasting results indicate an increasing trend and a steep rise in COVID-19 cases; many regions and countries may face severe outbreaks unless measures are taken to curb the spread of the disease quickly. The pattern of the spread in such countries closely mimics countries hit early by COVID-19, such as Italy and the USA. Further, the models' outputs are date specific, so the most recent execution of the model returns the most recent results. The future scope of the study involves analysis with other models, such as long short-term memory, and comparison with the time series models.

Originality/value

A time series is a time-stamped data set in which each data point corresponds to a set of observations made at a particular time instance. This work is novel in addressing COVID-19 with the help of time series analysis, and its ARIMA and SARIMAX forecasting models efficiently produced close approximations of the observed data.

Details

World Journal of Engineering, vol. 19 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 6 November 2017

Chaw Thet Zan and Hayato Yamana

Abstract

Purpose

The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments. Each segment is represented by its mean value and mapped to an alphabet symbol, where the number of adopted symbols is called the alphabet size. Both parameters control the data compression ratio and the accuracy of time series mining tasks, and optimal parameter selection depends strongly on the application and data set. In practice, these parameters are selected iteratively by analyzing the entire data set, which limits the handling of huge numbers of time series and reduces the applicability of SAX.

Design/methodology/approach

The segment size is estimated based on the Shannon sampling theorem (autoSAXSD_S) and on adaptive hierarchical segmentation (autoSAXSD_M). The alphabet size is chosen according to how the mean values of all segments are distributed: a small alphabet size is set for a wide distribution so that differences among segments remain easy to distinguish.
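
Since SAX itself is standard, a compact reference implementation helps fix what the two estimated parameters do; this sketch is plain SAX with hand-picked segment and alphabet sizes, not the proposed autoSAXSD estimators:

```python
import numpy as np
from scipy.stats import norm

def sax(x, n_segments, alphabet_size):
    """Plain SAX: z-normalise, take segment means (PAA), map to symbols."""
    x = (x - x.mean()) / x.std()
    # Piecewise Aggregate Approximation: mean of each equal-sized segment.
    trimmed = x[: len(x) - len(x) % n_segments]
    paa = trimmed.reshape(n_segments, -1).mean(axis=1)
    # Breakpoints splitting the N(0, 1) density into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return "".join(chr(ord("a") + s) for s in np.searchsorted(breakpoints, paa))

rng = np.random.default_rng(5)
ts = np.sin(np.linspace(0, 4 * np.pi, 128)) + rng.normal(0, 0.2, 128)
print(sax(ts, n_segments=8, alphabet_size=4))   # an 8-symbol SAX word
```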

Findings

Experimental evaluation on University of California Riverside (UCR) data sets shows that the proposed schemes select the parameters well, achieve high classification accuracy and show efficiency comparable to the state-of-the-art methods SAX and auto_iSAX.

Originality/value

The originality of this paper lies in how the optimal SAX parameters are found with the proposed estimation schemes: the first parameter, segment size, is estimated automatically with two approaches, and the second parameter, alphabet size, is estimated from the most frequent mean value among segments.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 14 June 2013

Bojan Božić and Werner Winiwarter

Abstract

Purpose

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community building by the use of a dedicated language.

Design/methodology/approach

The authors have developed a new semantic time series processing language and prepared showcases to demonstrate its functionality. The assumption is an environmental setting with data measurements from different sensors to be distributed to different groups of interest. The data are represented as time series for water and air quality, while the user groups are, among others, the environmental agency, companies from the industrial sector and legal authorities.

Findings

A language for time series processing and several tools to enrich the time series with metadata and to support community building have been implemented in Python and Java. A GUI for demonstration purposes has also been developed in PyQt4. In addition, an ontology for validation has been designed, and a knowledge base for data storage and inference has been set up. Important features include dynamic integration of ontologies, time series annotation and semantic filtering.
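
TSSL's syntax is not shown in the abstract, but the metadata enrichment and semantic filtering it describes can be approximated in plain RDF with rdflib; the vocabulary, URIs and property names below are entirely hypothetical:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary; the paper's actual ontology is not given.
EX = Namespace("http://example.org/tssl/")

g = Graph()
series = URIRef("http://example.org/sensors/river-42/ph")
g.add((series, RDF.type, EX.TimeSeries))
g.add((series, EX.measures, Literal("water pH")))
g.add((series, EX.audience, EX.EnvironmentalAgency))
g.add((series, EX.samplingMinutes, Literal(15, datatype=XSD.integer)))

# Semantic filtering: select every series aimed at one user group.
query = """
    SELECT ?s WHERE {
        ?s <http://example.org/tssl/audience>
           <http://example.org/tssl/EnvironmentalAgency> .
    }"""
for row in g.query(query):
    print(row.s)
```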

Research limitations/implications

This paper focuses on the showcases of time series semantic language (TSSL), but also covers technical aspects and user interface issues. The authors are planning to develop TSSL further and evaluate it within further research projects and validation scenarios.

Practical implications

The research has a high practical impact on time series processing and provides new data sources for semantic web applications. It can also be used in social web platforms (especially for researchers) to provide a time-series-centric tagging and processing framework.

Originality/value

The paper presents an extended version of the paper presented at iiWAS2012.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 21 June 2019

Muhammad Zahir Khan and Muhammad Farid Khan

Abstract

Purpose

A significant number of studies have analyzed the relationship between gas emissions and global temperature using conventional statistical approaches. However, these techniques rest on the assumptions of probabilistic modeling, where results can carry large errors, and they cannot be applied to imprecise data. The purpose of this paper is to avoid such strict assumptions when studying the complex relationships between variables by using three innovative, up-to-date statistical modeling tools: adaptive neuro-fuzzy inference systems (ANFIS), artificial neural networks (ANNs) and fuzzy time series models.

Design/methodology/approach

These three approaches enabled us to effectively represent the relationship between global carbon dioxide (CO2) emissions from the energy sector (oil, gas and coal) and the increase in average global temperature over the period 1900-2012. Investigations were conducted into the predictive power and performance of the fuzzy techniques against conventional methods and among the fuzzy techniques themselves.
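
As an illustration of the ANN strand of the comparison (not the authors' data or architecture), a small multilayer perceptron can be fitted to synthetic emissions-temperature data and scored with the same RMSE and correlation metrics reported below:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the 1900-2012 annual data: CO2 emissions from
# oil, gas and coal as inputs, temperature anomaly as the target.
rng = np.random.default_rng(6)
co2 = rng.uniform(0, 10, size=(113, 3)).cumsum(axis=0)     # rising emissions
temp = 0.01 * co2.sum(axis=1) + rng.normal(0, 0.1, 113)    # noisy response

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(co2[:90], temp[:90])
pred = ann.predict(co2[90:])

rmse = np.sqrt(mean_squared_error(temp[90:], pred))
corr = np.corrcoef(temp[90:], pred)[0, 1]
print(f"RMSE: {rmse:.4f}  correlation: {corr:.2f}")
```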

Findings

A performance comparison of the ANFIS model against conventional techniques showed that the root mean square errors (RMSE) of ANFIS and the conventional techniques were 0.1157 and 0.1915, respectively. The correlation coefficients of the ANN and the conventional technique were computed to be 0.93 and 0.69, respectively. Furthermore, the fuzzy-based time series analysis of CO2 emissions and average global temperature using three fuzzy time series modeling techniques (Singh, Abbasov–Mamedova and NFTS) showed that the RMSE of the fuzzy and conventional time series models were 110.51 and 1237.10, respectively.

Social implications

The paper provides more awareness about fuzzy techniques application in CO2 emissions studies.

Originality/value

These techniques can be extended to other models to assess the impact of CO2 emissions from other sectors.

Details

International Journal of Climate Change Strategies and Management, vol. 11 no. 5
Type: Research Article
ISSN: 1756-8692

Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu

Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting over large-scale data sources, e.g. sensor networks, securities exchanges and electric power secondary systems. Concretely, the proposed framework must satisfy several difficult requirements: management of gigantic data sources, a fast self-adaptive algorithm, relatively accurate prediction of multiple time series and real-time operation.

Design/methodology/approach

First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed; it includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is presented to verify the proposed framework.
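
A single-machine sketch of the prediction layer: one ARIMA model per monitored series, run concurrently as a stand-in for the distributed Storm topology (HBase I/O is omitted, and all names and model orders are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for many monitored series (e.g. per-sensor load readings); in
# the paper these would be read from HBase and scheduled by Storm.
rng = np.random.default_rng(7)
series = {f"sensor-{i}": pd.Series(np.cumsum(rng.normal(size=200)))
          for i in range(8)}

def forecast_one(item):
    name, s = item
    fit = ARIMA(s, order=(1, 1, 1)).fit()     # order is illustrative
    return name, float(fit.forecast(steps=5).iloc[-1])

# Concurrent per-series prediction, a single-machine analogue of the
# distributed real-time platform.
with ThreadPoolExecutor(max_workers=4) as pool:
    for name, ahead in pool.map(forecast_one, series.items()):
        print(name, round(ahead, 3))
```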

Findings

Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are basically consistent with the actual data, processing efficiency is relatively high, and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time series, which meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X
