Search results

1 – 10 of over 124,000
Article
Publication date: 25 July 2018

Ke Yi Zhou and Shaolin Hu

Abstract

Purpose

Similarity measurement of time series is an important research topic in time series analysis, underpinning clustering, anomaly discovery, prediction and many other data mining problems. The purpose of this paper is to design a new similarity measurement algorithm that improves on existing ones. The proposed algorithm takes subsequence morphological information into account and represents the time series as a pattern, so the similarity measurement is more accurate.

Design/methodology/approach

Building on previous research on similarity measurement, an improved method is presented. The new method combines morphological representation with the dynamic time warping (DTW) technique to measure the similarity of time series. After the time series is segmented, three parameter values (median, point number and slope) are introduced into the improved distance measurement formula. The effectiveness of the morphological weighted DTW algorithm (MW-DTW) is demonstrated on momentum wheel data from an aircraft attitude control system.

Findings

The improved method is insensitive to distortion and expansion of the time axis and can detect morphological changes in time series data. Simulation results confirm that the proposed method measures similarity with high accuracy.

Practical implications

The improved method has been used to solve similarity measurement problems in time series, which arise widely across science and engineering, including control, measurement, monitoring, process signal processing and economic analysis.

Originality/value

In similarity measurement of time series, the distance between sequences is often used as the only detection index. The results should not be affected by longitudinal or transverse stretching or translation of the sequence, so the morphological changes of the sequence must be incorporated into the measurement; MW-DTW does this and is therefore better suited to practical situations. At the same time, the MW-DTW algorithm reduces computational complexity by operating on subsequences rather than individual points.
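The segment features named above (median, point number, slope) plug naturally into a DTW recursion. The sketch below assumes equal-width segmentation and an unweighted sum of feature differences; the paper's exact weighting formula is not reproduced here, so treat the distance function as a placeholder.

```python
# Sketch of morphological-feature DTW: segment the series, describe each
# segment by (median, point count, slope), then run DTW over the features.

def segment_features(series, width):
    """Split a series into fixed-width segments and describe each by
    (median, point count, slope) -- the three parameters named above."""
    feats = []
    for i in range(0, len(series), width):
        raw = series[i:i + width]
        seg = sorted(raw)
        n = len(seg)
        median = seg[n // 2] if n % 2 else (seg[n // 2 - 1] + seg[n // 2]) / 2
        slope = (raw[-1] - raw[0]) / (n - 1) if n > 1 else 0.0
        feats.append((median, n, slope))
    return feats

def dtw(a, b, dist):
    """Classic dynamic-time-warping distance over two feature sequences."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

def feature_dist(f, g):
    # Hypothetical equal weighting; the paper weights median, count, slope.
    return sum(abs(x - y) for x, y in zip(f, g))

s1 = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
s2 = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
d = dtw(segment_features(s1, 5), segment_features(s2, 5), feature_dist)
```

Because DTW runs over a handful of segment features instead of every raw point, the cost drops roughly from O(n²) to O((n/w)²) for segment width w, which is the complexity reduction the abstract refers to.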

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 15 March 2011

Yi‐Hui Liang

Abstract

Purpose

The purpose of this study is to propose the time series decomposition approach to analyze and predict the failure data of the repairable systems.

Design/methodology/approach

This study employs a non-homogeneous Poisson process (NHPP) to model the failure data. First, Nelson's graph method is employed to estimate the mean number of repairs and the MCRF value for the repairable system. Second, the time series decomposition approach is employed to predict the mean number of repairs and MCRF values.

Findings

The proposed method can analyze and predict the reliability of repairable systems. It can analyze the combined effect of the trend-cycle components and the seasonal component of the failure data.

Research limitations/implications

This study only adopts simulated data to verify the proposed method; future research may use real products' failure data for verification. In terms of predictive accuracy for the reliability of repairable systems, the proposed method is superior to ARIMA and neural network prediction techniques.

Practical implications

Results in this study can provide a valuable reference for engineers when constructing quality feedback systems for assessing current quality conditions, providing logistical support, correcting product design, facilitating optimal component‐replacement and maintenance strategies, and ensuring that products meet quality requirements.

Originality/value

The time series decomposition approach was used to model and analyze software aging and software failure in 2007. However, it has rarely been used for modeling and analyzing the failure data of repairable systems. This study proposes the time series decomposition approach to analyze and predict the failure data of repairable systems, and the proposed method outperforms the ARIMA model and neural networks in predictive accuracy.
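The decomposition step described above can be illustrated with a minimal additive trend-plus-seasonal sketch. The NHPP/Nelson-graph estimation that precedes it in the paper is omitted, and the repair counts below are invented for illustration.

```python
# Minimal additive time series decomposition: a centered moving-average
# trend plus per-phase seasonal averages of the detrended values.

def decompose(y, period):
    """Return (trend, seasonal). The period is assumed odd so the
    moving-average window is symmetric around each point."""
    half = period // 2
    trend = {}
    for t in range(half, len(y) - half):
        trend[t] = sum(y[t - half:t + half + 1]) / period
    seasonal = []
    for p in range(period):
        vals = [y[t] - trend[t] for t in trend if t % period == p]
        seasonal.append(sum(vals) / len(vals) if vals else 0.0)
    return trend, seasonal

# Hypothetical repair counts: a linear trend plus a 3-period seasonal pattern.
y = [t + (1.0, 0.0, -1.0)[t % 3] for t in range(12)]
trend, seasonal = decompose(y, period=3)
```

A forecast then extrapolates the trend component and adds back the seasonal index for the target phase, which is the "combined effect of trend-cycle and seasonal components" the findings mention.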

Details

International Journal of Quality & Reliability Management, vol. 28 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 11 January 2021

Kamalpreet Singh Bhangu, Jasminder Kaur Sandhu and Luxmi Sapra

Abstract

Purpose

This study analyses the prevalent coronavirus disease (COVID-19) epidemic using machine learning algorithms. The data set is API data provided by the Johns Hopkins University resource centre, with a Web crawler used to gather features such as confirmed, recovered and death cases. Because no COVID-19 drug was available at the time, the outbreak was not expected to end in the near future, so the case numbers in this study are highly date specific. The analysis focuses on monthly confirmed, recovered and death cases, which helps identify the trend and seasonality in the data. The purpose of this study is to explore the essential concepts of time series algorithms, apply them to the infected cases worldwide, forecast the spread of the virus over the next two weeks and thereby aid health-care services. The low mean absolute percentage error obtained over the forecasting interval supports the model's credibility.

Design/methodology/approach

In this study, the time series analysis of this outbreak was performed using the auto-regressive integrated moving average (ARIMA) model and the seasonal auto-regressive integrated moving average with exogenous regressors (SARIMAX) model, both optimized to achieve better results.

Findings

The ARIMA and SARIMAX forecasting models produced reasonably accurate approximations. The forecasting results indicate an increasing trend, with a sharp rise in COVID-19 cases in many regions and countries, which may face their worst days unless measures are taken quickly to curb the spread of the disease. The pattern of the spread in such countries closely mimics countries hit early by COVID-19, such as Italy and the USA. Further, the models' outputs are date specific, so a more recent execution of the model would return more recent results. The future scope of the study involves analysis with other models, such as long short-term memory networks, and comparison with the time series models.

Originality/value

A time series is a time-stamped data set in which each data point corresponds to a set of observations made at a particular time instance. This work is novel in addressing COVID-19 with the help of time series analysis, with the ARIMA and SARIMAX forecasting models producing reasonably accurate approximations.
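As an illustration of the differencing-plus-autoregression idea behind ARIMA, here is a least-squares AR(1) fit on first differences (roughly an ARIMA(1,1,0) without the maximum-likelihood machinery the study's models use). The case counts are hypothetical, not the Johns Hopkins data.

```python
# ARIMA(1,1,0) sketch: difference once, fit phi by least squares on the
# lagged differences, then integrate the AR forecasts back to levels.

def arima_110_forecast(y, steps):
    d = [y[t] - y[t - 1] for t in range(1, len(y))]   # first difference
    x, z = d[:-1], d[1:]                              # lagged pairs
    phi = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
    last, level = d[-1], y[-1]
    out = []
    for _ in range(steps):
        last = phi * last      # AR(1) step on the differenced series
        level += last          # undo the differencing (integration)
        out.append(level)
    return out

cases = [100, 130, 170, 220, 280, 350]   # hypothetical cumulative counts
fc = arima_110_forecast(cases, 3)
```

A real ARIMA/SARIMAX fit would also estimate moving-average and seasonal terms and choose orders by information criteria; this sketch only shows why differencing plus autoregression can extend an accelerating count series.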

Details

World Journal of Engineering, vol. 19 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 20 November 2020

Lydie Myriam Marcelle Amelot, Ushad Subadar Agathee and Yuvraj Sunecher

Abstract

Purpose

This study constructs time series models, artificial neural networks (ANNs) and statistical topologies to examine volatility and forecast foreign exchange rates. The Mauritian forex market is used as a case study, and daily nominal spot rates over five years (2014 to 2018) for EUR/MUR, GBP/MUR, CAD/MUR and AUD/MUR are used for the predictions.

Design/methodology/approach

Autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models are used as a basis for time series modelling for the analysis, along with the non-linear autoregressive network with exogenous inputs (NARX) neural network backpropagation algorithm utilizing different training functions, namely, Levenberg–Marquardt (LM), Bayesian regularization and scaled conjugate gradient (SCG) algorithms. The study also features a hybrid kernel principal component analysis (KPCA) using the support vector regression (SVR) algorithm as an additional statistical tool to conduct financial market forecasting modelling. Mean squared error (MSE) and root mean square error (RMSE) are employed as indicators for the performance of the models.

Findings

The results demonstrated that the GARCH model performed better than the ARIMA model in terms of volatility clustering and prediction. The NARX results indicated that the LM and Bayesian regularization training algorithms are the most appropriate for forecasting the different currency exchange rates, as they yielded the lowest MSE and RMSE among the training functions. Meanwhile, the NARX and KPCA–SVR topologies outperformed the linear time series models, owing to the underlying structural risk minimization principle. Finally, the comparison between the NARX model and KPCA–SVR showed that the NARX model outperformed the statistical prediction model. Overall, the study concluded that the NARX topology achieves better prediction performance than the time series and statistical models.

Research limitations/implications

The foreign exchange market is considered unstable owing to uncertainties in the economic environment of any country, so accurate forecasting of foreign exchange rates is crucial for any foreign exchange activity. The study has important economic implications, as it will help researchers, investors, traders, speculators, financial analysts, users of financial news in banking and financial institutions, money changers, non-banking financial companies and stock exchange institutions in Mauritius to make investment decisions on international portfolios. Moreover, currency rate instability can raise transaction costs and diminish returns from international trade. Exchange rate volatility raises the need for highly organized risk management measures that disclose the future trend and movement of foreign currencies, providing essential guidance for foreign exchange participants. In this way, participants will be better prepared before conducting forex transactions, including hedging, asset pricing or speculation, and can take corrective actions, preventing potential losses and increasing profit.

Originality/value

This is one of the first studies applying artificial intelligence (AI) while making use of time series modelling, the NARX neural network backpropagation algorithm and hybrid KPCA–SVR to predict forex using multiple currencies in the foreign exchange market in Mauritius.

Details

African Journal of Economic and Management Studies, vol. 12 no. 1
Type: Research Article
ISSN: 2040-0705

Article
Publication date: 30 July 2019

Hossein Abbasimehr and Mostafa Shabani

Abstract

Purpose

The purpose of this paper is to propose a new methodology that handles the issue of the dynamic behavior of customers over time.

Design/methodology/approach

A new methodology is presented based on time series clustering to extract dominant behavioral patterns of customers over time. The methodology is implemented using bank customers' transaction data, which take the form of time series. The data comprise the recency (R), frequency (F) and monetary (M) attributes of businesses using the point-of-sale (POS) devices of a bank and were obtained from the bank's data analysis department.

Findings

After carrying out an empirical study on the acquired transaction data of 2,531 business customers that are using POS devices of the bank, the dominant trends of behavior are discovered using the proposed methodology. The obtained trends were analyzed from the marketing viewpoint. Based on the analysis of the monetary attribute, customers were divided into four main segments, including high-value growing customers, middle-value growing customers, prone to churn and churners. For each resulted group of customers with a distinctive trend, effective and practical marketing recommendations were devised to improve the bank relationship with that group. The prone-to-churn segment contains most of the customers; therefore, the bank should conduct interesting promotions to retain this segment.

Practical implications

The discovered trends of customer behavior and proposed marketing recommendations can be helpful for banks in devising segment-specific marketing strategies as they illustrate the dynamic behavior of customers over time. The obtained trends are visualized so that they can be easily interpreted and used by banks. This paper contributes to the literature on customer relationship management (CRM) as the proposed methodology can be effectively applied to different businesses to reveal trends in customer behavior.

Originality/value

In the current business condition, customer behavior is changing continually over time and customers are churning due to the reduced switching costs. Therefore, choosing an effective customer segmentation methodology which can consider the dynamic behaviors of customers is essential for every business. This paper proposes a new methodology to capture customer dynamic behavior using time series clustering on time-ordered data. This is an improvement over previous studies, in which static segmentation approaches have often been adopted. To the best of the authors’ knowledge, this is the first study that combines the recency, frequency, and monetary model and time series clustering to reveal trends in customer behavior.
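The time series clustering step can be sketched with plain k-means over monthly monetary (M) trajectories. The customer values below are invented, since the bank's POS data are not public, and the paper's actual clustering algorithm may differ.

```python
# k-means over short time series: each customer is a sequence of monthly
# monetary values, and clusters group customers with similar trends.

def kmeans(series, k, iters=20):
    centers = [list(s) for s in series[:k]]          # naive initialisation
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in series:                             # assign to nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(s, centers[c])))
            groups[j].append(s)
        for c in range(k):                           # recompute centroids
            if groups[c]:
                centers[c] = [sum(v) / len(groups[c]) for v in zip(*groups[c])]
    return centers, groups

growing  = [[1, 2, 3, 4], [1, 2, 4, 5]]   # rising monetary trend
churning = [[4, 3, 2, 0], [5, 3, 1, 0]]   # declining, prone-to-churn trend
centers, groups = kmeans(growing + churning, k=2)
```

The resulting centroids play the role of the "dominant trends" in the findings: a rising centroid corresponds to growing customers and a declining one to the prone-to-churn segment.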

Article
Publication date: 14 June 2013

Bojan Božić and Werner Winiwarter

Abstract

Purpose

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community building by the use of a dedicated language.

Design/methodology/approach

The authors have developed a new semantic time series processing language and prepared showcases to demonstrate its functionality. The assumption is an environmental setting with data measurements from different sensors to be distributed to different groups of interest. The data are represented as time series for water and air quality, while the user groups are, among others, the environmental agency, companies from the industrial sector and legal authorities.

Findings

A language for time series processing and several tools to enrich the time series with meta‐data and for community building have been implemented in Python and Java. Also a GUI for demonstration purposes has been developed in PyQt4. In addition, an ontology for validation has been designed and a knowledge base for data storage and inference was set up. Some important features are: dynamic integration of ontologies, time series annotation, and semantic filtering.

Research limitations/implications

This paper focuses on the showcases of time series semantic language (TSSL), but also covers technical aspects and user interface issues. The authors are planning to develop TSSL further and evaluate it within further research projects and validation scenarios.

Practical implications

The research has a high practical impact on time series processing and provides new data sources for semantic web applications. It can also be used in social web platforms (especially for researchers) to provide a time series centric tagging and processing framework.

Originality/value

The paper presents an extended version of the paper presented at iiWAS2012.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu

Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting over large-scale data sources, e.g. sensor networks, securities exchanges and electric power secondary systems. Concretely, the proposed framework should handle several difficult requirements: management of gigantic data sources, a fast self-adaptive algorithm, relatively accurate prediction of multiple time series and real-time operation.

Design/methodology/approach

First, the autoregressive integrated moving average-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is presented to verify the proposed framework.

Findings

Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are basically consistent with the actual data, processing efficiency is relatively high and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time-series data that meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 26 October 2017

Okan Duru and Matthew Butler

Abstract

In the last few decades, there has been growing interest in forecasting with computer intelligence, and both fuzzy time series (FTS) and artificial neural networks (ANNs) have gained particular popularity. Unlike conventional methods (e.g., econometrics), FTS and ANN models are usually thought to be immune to fundamental concepts such as stationarity, theoretical causality and post-sample control. On the other hand, a number of studies have indicated that these fundamental controls are required by the theory of forecasting, and that applying such essential procedures substantially improves forecasting accuracy. The aim of this paper is to fill the existing gap on modeling and forecasting with the FTS and ANN methods and to set out the fundamental concepts in a comprehensive treatment of merits and common failures in the literature. Beyond these merits, the paper may also serve as a guideline for eliminating unethical empirical settings in forecasting studies.

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78743-069-3

Book part
Publication date: 4 July 2019

Utku Kose

Abstract

Artificial Intelligence-based systems are used effectively in many fields because they easily outperform traditional solutions or solve problems not previously solved. Prediction applications are widely used in research because they allow forecasting of future states. Logical inference mechanisms in the field of Artificial Intelligence allow for faster, more accurate and more powerful computation. Machine Learning, a sub-field of Artificial Intelligence, has been used as a tool for creating effective solutions to prediction problems.

In this chapter the authors focus on employing Machine Learning techniques to predict future economic states, using techniques that include Artificial Neural Networks, the Adaptive Neuro-Fuzzy Inference System, the Dynamic Boltzmann Machine, the Support Vector Machine, the Hidden Markov Model, Bayesian learning on a Gaussian process model, the Autoregressive Integrated Moving Average, the Autoregressive Model (Poggi, Muselli, Notton, Cristofari, & Louche, 2003) and the K-Nearest Neighbor algorithm. Findings revealed positive results in predicting economic data.

Book part
Publication date: 18 April 2018

Mohammed Quddus

Abstract

Purpose – Time-series regression models are applied to analyse transport safety data for three purposes: (1) to develop a relationship between transport accidents (or incidents) and various time-varying factors, with the aim of identifying the most important factors; (2) to develop a time-series accident model in forecasting future accidents for the given values of future time-varying factors and (3) to evaluate the impact of a system-wide policy, education or engineering intervention on accident counts. Regression models for analysing transport safety data are well established, especially in analysing cross-sectional and panel datasets. There is, however, a dearth of research relating to time-series regression models in the transport safety literature. The purpose of this chapter is to examine existing literature with the aim of identifying time-series regression models that have been employed in safety analysis in relation to wider applications. The aim is to identify time-series regression models that are applicable in analysing disaggregated accident counts.

Methodology/Approach – There are two main issues in modelling time-series accident counts: (1) a flexible approach in addressing serial autocorrelation inherent in time-series processes of accident counts and (2) the fact that the conditional distribution (conditioned on past observations and covariates) of accident counts follow a Poisson-type distribution. Various time-series regression models are explored to identify the models most suitable for analysing disaggregated time-series accident datasets. A recently developed time-series regression model – the generalised linear autoregressive and moving average (GLARMA) – has been identified as the best model to analyse safety data.

Findings – The GLARMA model was applied to a time-series dataset of airproxes (aircraft proximity events) that indicate airspace safety in the United Kingdom. The aim was to evaluate the impact of an airspace intervention (i.e., the introduction of reduced vertical separation minima, RVSM) on airspace safety while controlling for other factors, such as air transport movements (ATMs) and seasonality. The results indicate that the GLARMA model is more appropriate than a generalised linear model (e.g., Poisson or Poisson-Gamma), and that the introduction of RVSM has reduced airprox events by 15%. In addition, a 1% increase in ATMs within UK airspace was found to lead to a 1.83% increase in monthly airproxes.

Practical applications – The methodology developed in this chapter is applicable to many time-series processes of accident counts. The models recommended in this chapter could be used to identify different time-varying factors and to evaluate the effectiveness of various policy and engineering interventions on transport safety or similar data (e.g., crimes).

Originality/value of paper – The GLARMA model has not been properly explored in modelling time-series safety data. This new class of model has been applied to a dataset in evaluating the effectiveness of an intervention. The model recommended in this chapter would greatly benefit researchers and analysts working with time-series data.
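The reported effect sizes follow directly from the log-linear form of a GLARMA/Poisson-type model. The coefficients below are back-calculated from the reported 15% reduction and 1.83% elasticity, not taken from the chapter, to show how such coefficients translate into percentage effects.

```python
# In a log-linear count model, rate = exp(... + beta * log(ATM) + beta_rvsm * I),
# so the rate is proportional to ATM**beta, and the intervention dummy
# multiplies the rate by exp(beta_rvsm).

import math

beta_atm = 1.83                                   # elasticity on log(ATM)
pct_per_1pct_atm = (1.01 ** beta_atm - 1) * 100   # effect of a 1% ATM rise

beta_rvsm = math.log(0.85)                        # back-calculated: exp(beta) = 0.85
rvsm_effect = (math.exp(beta_rvsm) - 1) * 100     # percent change in airprox rate
```

A 1% ATM increase thus raises the expected airprox count by about 1.83%, and the RVSM dummy corresponds to a 15% reduction, matching the figures in the findings.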

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1
