Search results

1 – 10 of over 15,000
Article
Publication date: 14 June 2013

Bojan Božić and Werner Winiwarter

Abstract

Purpose

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community building by the use of a dedicated language.

Design/methodology/approach

The authors have developed a new semantic time series processing language and prepared showcases to demonstrate its functionality. The assumption is an environmental setting with data measurements from different sensors to be distributed to different groups of interest. The data are represented as time series for water and air quality, while the user groups are, among others, the environmental agency, companies from the industrial sector and legal authorities.

Findings

A language for time series processing, together with several tools for enriching the time series with metadata and for community building, has been implemented in Python and Java. A GUI for demonstration purposes has also been developed in PyQt4. In addition, an ontology for validation has been designed and a knowledge base for data storage and inference has been set up. Some important features are dynamic integration of ontologies, time series annotation and semantic filtering.
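
The feature list above can be made concrete with a small illustration. The sketch below is only a guess at what semantic annotation and filtering of a time series could look like in plain Python; the class and tag names (AnnotatedSeries, medium, audience) are invented for illustration and are not TSSL's actual vocabulary.

```python
# Hypothetical sketch of time series annotation and semantic filtering;
# names and tags are illustrative, not taken from TSSL.
from dataclasses import dataclass, field

@dataclass
class AnnotatedSeries:
    name: str
    values: list                                # raw sensor measurements
    tags: dict = field(default_factory=dict)    # semantic metadata, e.g. ontology terms

    def annotate(self, key, value):
        self.tags[key] = value

def semantic_filter(series_list, **criteria):
    """Return only the series whose metadata matches all given criteria."""
    return [s for s in series_list
            if all(s.tags.get(k) == v for k, v in criteria.items())]

# Example: distribute water-quality data to the environmental agency only.
ph = AnnotatedSeries("river_ph", [7.1, 7.3, 6.9])
ph.annotate("medium", "water")
ph.annotate("audience", "environmental_agency")

no2 = AnnotatedSeries("air_no2", [41.0, 38.5, 44.2])
no2.annotate("medium", "air")
no2.annotate("audience", "industry")

for s in semantic_filter([ph, no2], audience="environmental_agency"):
    print(s.name, s.tags)
```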

Research limitations/implications

This paper focuses on the showcases of time series semantic language (TSSL), but also covers technical aspects and user interface issues. The authors are planning to develop TSSL further and evaluate it within further research projects and validation scenarios.

Practical implications

The research has a high practical impact on time series processing and provides new data sources for semantic web applications. It can also be used in social web platforms (especially for researchers) to provide a time-series-centric tagging and processing framework.

Originality/value

The paper presents an extended version of the paper presented at iiWAS2012.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 23 August 2023

Guo Huafeng, Xiang Changcheng and Chen Shiqiang

Abstract

Purpose

This study aims to reduce data bias during human activity and increase the accuracy of activity recognition.

Design/methodology/approach

A convolutional neural network and a bidirectional long short-term memory model are used to automatically capture the feature information of time series from raw sensor data, and a self-attention mechanism is used to learn the potential relationships among essential time points. The proposed model has been evaluated on six publicly available data sets, verifying that performance is significantly improved by combining the self-attention mechanism with deep convolutional networks and recurrent layers.
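
As a rough illustration of the architecture described above (not the authors' exact model), the following PyTorch sketch stacks 1D convolutions, a bidirectional LSTM and a self-attention layer over raw sensor windows; all layer sizes, the number of sensor channels and the number of activity classes are assumptions.

```python
# Illustrative CNN + BiLSTM + self-attention classifier for activity windows.
import torch
import torch.nn as nn

class ConvBiLSTMAttn(nn.Module):
    def __init__(self, n_channels=9, n_classes=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=4,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                   # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)    # (batch, time, 64)
        h, _ = self.bilstm(h)                               # (batch, time, 2*hidden)
        h, _ = self.attn(h, h, h)                           # self-attention over time steps
        return self.fc(h.mean(dim=1))                       # pool over time, then classify

model = ConvBiLSTMAttn()
logits = model(torch.randn(8, 128, 9))    # 8 windows, 128 time steps, 9 sensor channels
print(logits.shape)                        # torch.Size([8, 6])
```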

Findings

The proposed method significantly improves accuracy over state-of-the-art methods across different data sets, demonstrating the superiority of the proposed method in intelligent sensor systems.

Originality/value

Using deep learning frameworks, and in particular self-attention mechanisms for activity recognition, greatly improves recognition accuracy.

Details

Sensor Review, vol. 43 no. 5/6
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 17 January 2020

Wei Feng, Yuqin Wu and Yexian Fan

Abstract

Purpose

The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because the conventional methods for the prediction of NSS, such as support vector machines and particle swarm optimization, lack accuracy, robustness and efficiency, in this study the authors propose a new method for the prediction of NSS based on a recurrent neural network (RNN) with gated recurrent units.

Design/methodology/approach

This method extracts internal and external information features from the original time-series network data for the first time. Then, the extracted features are applied to the deep RNN model for training and validation. After iteration and optimization, accurate predictions of NSS are obtained from the well-trained model, and the model is robust to unstable network data.
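
A minimal sketch of the modelling step is given below, under the assumption that the extracted feature windows are fed to a gated-recurrent-unit network that predicts the next situation value; the feature dimension, window length and layer sizes are illustrative, not taken from the paper.

```python
# Illustrative GRU-based predictor for network security situation values.
import torch
import torch.nn as nn

class GRUPredictor(nn.Module):
    def __init__(self, n_features=10, hidden=32, layers=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])    # predict from the last hidden state

model = GRUPredictor()
x = torch.randn(16, 24, 10)             # 16 samples, 24 time steps, 10 features
print(model(x).shape)                    # torch.Size([16, 1])
```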

Findings

Experiments on a benchmark data set show that the proposed method obtains more accurate and robust prediction results than conventional models. Although the deep RNN models require more training time, in return they deliver accurate and robust predictions in validation.

Originality/value

In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the deep RNN model outperforms the state-of-the-art models.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 1
Type: Research Article
ISSN: 1756-378X

Open Access
Article
Publication date: 5 March 2021

Xuan Ji, Jiachen Wang and Zhijun Yan

Abstract

Purpose

Stock price prediction is a hot topic, and traditional prediction methods are usually based on statistical and econometric models. However, these models have difficulty dealing with nonstationary time series data. With the rapid development of the internet and the increasing popularity of social media, online news and comments often reflect investors’ emotions and attitudes toward stocks and contain a lot of important information for predicting stock prices. This paper aims to develop a stock price prediction method that takes full advantage of social media data.

Design/methodology/approach

This study proposes a new prediction method based on deep learning technology, which integrates traditional stock financial index variables and social media text features as inputs of the prediction model. The study uses Doc2Vec to build long text feature vectors from social media and then reduces the dimensionality of the text feature vectors with a stacked auto-encoder to balance the dimensions between the text feature variables and the stock financial index variables. Meanwhile, the time-series data of the stock price are decomposed with a wavelet transform to eliminate the random noise caused by stock market fluctuation. Finally, the study uses a long short-term memory model to predict the stock price.
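
To make one step of this pipeline concrete, the sketch below shows wavelet-based denoising of a price series with the PyWavelets package before it would be passed, together with the text features, to an LSTM; the wavelet, decomposition level and threshold rule are assumptions, and the Doc2Vec/stacked auto-encoder branch is omitted.

```python
# Illustrative wavelet denoising of a price series (assumed 'db4' wavelet,
# soft thresholding of detail coefficients); not the paper's exact settings.
import numpy as np
import pywt

def wavelet_denoise(prices, wavelet="db4", level=2):
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    # Soft-threshold the detail coefficients to suppress market noise.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(prices)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(prices)]

prices = np.cumsum(np.random.randn(256)) + 100.0   # synthetic price path
smooth = wavelet_denoise(prices)
print(smooth[:5])
```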

Findings

The experimental results show that the method performs better than all three benchmark models on all evaluation indicators and can effectively predict the stock price.

Originality/value

This paper proposes a new stock price prediction model, based on deep learning technology, that incorporates traditional financial features and text features derived from social media.

Details

International Journal of Crowd Science, vol. 5 no. 1
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 13 May 2022

Qiang Zhang, Zijian Ye, Siyu Shao, Tianlin Niu and Yuwei Zhao

Abstract

Purpose

The current studies on remaining useful life (RUL) prediction mainly rely on convolutional neural networks (CNNs) and long short-term memories (LSTMs) and do not take full advantage of the attention mechanism, resulting in limited prediction accuracy. To further improve the performance of the above models, this study aims to propose a novel end-to-end RUL prediction framework, called convolutional recurrent attention network (CRAN), to achieve high accuracy.

Design/methodology/approach

The proposed CRAN is a CNN-LSTM-based model that effectively combines the powerful feature extraction ability of CNNs with the sequential processing capability of LSTMs. A channel attention mechanism, a spatial attention mechanism and an LSTM attention mechanism are incorporated in CRAN, assigning different attention coefficients to the CNN and the LSTM. First, features of the bearing vibration data are extracted from both the time and frequency domains. Next, the training and testing sets are constructed. Then, the CRAN is trained offline using the training set. Finally, online RUL estimation is performed by applying data from the testing set to the trained CRAN.
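
The channel attention idea can be sketched as follows; this is a generic squeeze-and-excitation-style module, an assumption about the general form rather than the paper's exact CRAN blocks, and the channel count and reduction ratio are illustrative.

```python
# Illustrative channel attention over 1D CNN feature maps (not the paper's CRAN).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, length)
        # Pool with both mean and max, then gate each channel with a small MLP.
        w = self.fc(x.mean(dim=2) + x.amax(dim=2))
        return x * w.unsqueeze(-1)              # reweight each channel

feats = torch.randn(4, 32, 100)                 # CNN features for 4 samples
attended = ChannelAttention(32)(feats)
print(attended.shape)                            # torch.Size([4, 32, 100])
```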

Findings

CNN-LSTM-based models have higher RUL prediction accuracy than CNN-based and LSTM-based models. Using a combination of max pooling and average pooling can reduce the loss of feature information, and the serial attention structure is superior to the parallel attention structure. Compared with six different state-of-the-art methods on the predicted results of two testing bearings, the proposed CRAN achieves an average reduction in root mean square error of 57.07/80.25%, an average reduction in mean absolute error of 62.27/85.87% and an average improvement in score of 12.65/6.57%.

Originality/value

This article provides a novel end-to-end rolling bearing RUL prediction framework, which can provide a reference for the formulation of bearing maintenance programs in the industry.

Details

Assembly Automation, vol. 42 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 29 April 2014

Ahmed Mosallam, Kamal Medjaher and Noureddine Zerhouni

Abstract

Purpose

The development of complex systems has increased the demand for condition monitoring techniques that maximize operational availability and safety while decreasing costs. Signal analysis is one of the methods used in condition monitoring to extract the important information contained in sensory signals, which can be used for health assessment. However, extracting such information from data collected in a practical working environment is always a great challenge, as sensory signals are usually multi-dimensional and obscured by noise. The paper aims to discuss this issue.

Design/methodology/approach

This paper presents a method for extracting trends from multi-dimensional sensory data, which are then used for machinery health monitoring and maintenance needs. The proposed method is based on extracting successive features from machinery sensory signals. Then, unsupervised feature selection is applied in the feature domain without making any assumptions concerning the source of the signals or the number of extracted features. Finally, the empirical mode decomposition (EMD) algorithm is applied to the projected features in order to follow the evolution of the data in a compact representation over time.
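
The final step can be illustrated with a short sketch that applies empirical mode decomposition to a single projected feature and keeps the slowest component as the trend; it assumes the PyEMD package and a synthetic signal, not the paper's sensory data.

```python
# Illustrative EMD trend extraction on a synthetic feature (assumes PyEMD).
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 10, 500)
feature = 0.05 * t**2 + 0.3 * np.sin(8 * t) + 0.1 * np.random.randn(500)

imfs = EMD()(feature)          # intrinsic mode functions, fast to slow
trend = imfs[-1]               # the last, slowest component approximates the trend
print(imfs.shape, trend[:3])
```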

Findings

The method is demonstrated on an accelerated degradation data set of bearings acquired from the PRONOSTIA experimental platform and a second data set acquired from the NASA repository.

Originality/value

The method is able to extract interesting signal trends which can be used for health monitoring and remaining useful life prediction.

Details

Journal of Manufacturing Technology Management, vol. 25 no. 4
Type: Research Article
ISSN: 1741-038X

Article
Publication date: 12 October 2023

R.L. Manogna and Aayush Anand

Abstract

Purpose

Deep learning (DL) is a new and relatively unexplored field that finds immense applications in many industries, especially ones that must make detailed observations, inferences and predictions based on extensive and scattered datasets. The purpose of this paper is to answer the following questions: (1) To what extent has DL penetrated the research being done in finance? (2) What areas of financial research have applications of DL, and what quality of work has been done in the niches? (3) What areas still need to be explored and have scope for future research?

Design/methodology/approach

This paper employs bibliometric analysis, a potent yet simple methodology with numerous applications in literature reviews. This paper focuses on citation analysis, author impacts, relevant and vital journals, co-citation analysis, bibliometric coupling and co-occurrence analysis. The authors collected 693 articles published in 2000–2022 from journals indexed in the Scopus database. Multiple software (VOSviewer, RStudio (biblioshiny) and Excel) were employed to analyze the data.
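
One of the listed analyses, keyword co-occurrence, can be sketched in a few lines of Python; the sample records are invented and do not come from the study's Scopus export.

```python
# Illustrative keyword co-occurrence counting over a few made-up article records.
from collections import Counter
from itertools import combinations

records = [
    ["deep learning", "stock prediction", "LSTM"],
    ["deep learning", "portfolio optimization"],
    ["LSTM", "stock prediction", "sentiment analysis"],
]

pairs = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pairs[(a, b)] += 1

for (a, b), n in pairs.most_common(3):
    print(f"{a} -- {b}: {n}")
```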

Findings

The findings reveal the impact of significant and renowned authors in the field. The analysis indicates that the application of DL in finance has been on an upward track since 2017. The authors find four broad research areas (neural networks and stock market simulations; portfolio optimization and risk management; time series analysis and forecasting; high-frequency trading) with different degrees of intertwining, as well as emerging research topics in the application of DL in finance. This article contributes to the literature by providing a systematic overview of DL developments, trajectories, objectives and potential future research topics in finance.

Research limitations/implications

The findings of this paper act as a guide to the literature for anyone interested in doing research at the intersection of finance and DL. The article also identifies multiple areas of research that have yet to be studied to a great extent and offer abundant scope for future work.

Originality/value

Very few studies have explored the applications in finance of DL, a much more specialized subset of machine learning (ML). The authors look at the problem from the aspect of the different DL techniques that have been used in finance. This is the first qualitative (content analysis) and quantitative (bibliometric analysis) assessment of current research on DL in finance.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Book part
Publication date: 11 November 2019

Manoj Kumar Jena and Brajaballav Kar

Abstract

Data, either in primary or secondary form, represent the core strength of quantitative research. However, there is a significant difference between the collected data and the final researchable data. Data collection is driven by the objectives of the research, and the data may come in various formats from different sources. The collected data in its original form may contain systematic and random errors. Such errors need to be removed from the data, a process termed data cleaning.

The present chapter discusses the different methodologies and steps that may be helpful for fine-tuning the data into a researchable format. The discussion is illustrated by applying the methodologies to a set of financial data of companies listed on the Bombay Stock Exchange. The various steps involved in transforming collected data into researchable data are presented. A schematic model covering data collection, data cleaning, working with variables, outlier treatment, and testing the assumptions of statistical tests, normality and heteroscedasticity is presented for the benefit of research scholars. Beyond this generic model, the chapter focuses exclusively on financial data of companies listed on the Bombay Stock Exchange and considers the challenges involved in the various sources, data gathering and other pre-analysis stages. The approach is also applicable to research based on secondary data sources in other fields.
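
A minimal sketch of the generic cleaning steps named above (outlier treatment, a normality check and a heteroscedasticity check) is shown below; it uses pandas, SciPy and statsmodels on made-up data, not the chapter's Bombay Stock Exchange sample.

```python
# Illustrative data-cleaning and assumption-checking steps on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

df = pd.DataFrame({"revenue": np.random.lognormal(4, 1, 200),
                   "assets": np.random.lognormal(5, 1, 200)})

# 1. Outlier treatment: winsorise values beyond the 1st/99th percentiles.
for col in df.columns:
    lo, hi = df[col].quantile([0.01, 0.99])
    df[col] = df[col].clip(lo, hi)

# 2. Normality: Shapiro-Wilk test (a log transform often helps financial data).
print("Shapiro p-value:", stats.shapiro(np.log(df["revenue"])).pvalue)

# 3. Heteroscedasticity: Breusch-Pagan test on a simple OLS fit.
fit = sm.OLS(df["revenue"], sm.add_constant(df["assets"])).fit()
print("Breusch-Pagan p-value:", het_breuschpagan(fit.resid, fit.model.exog)[1])
```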

Details

Methodological Issues in Management Research: Advances, Challenges, and the Way Ahead
Type: Book
ISBN: 978-1-78973-973-2

Article
Publication date: 5 November 2018

Iskandar Iskandar, Roger Willett and Shuxiang Xu

Abstract

Purpose

Government cash forecasting is central to achieving effective government cash management but research in this area is scarce. The purpose of this paper is to address this shortcoming by developing a government cash forecasting model with an accuracy acceptable to the cash manager in emerging economies.

Design/methodology/approach

The paper follows a "top-down" approach to develop a government cash forecasting model. It uses Indonesian Government expenditure data from 2008 to 2015 as an illustration. The study utilises ARIMA, neural network and hybrid models to investigate the best procedure for predicting government expenditure.
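
The comparison idea can be sketched as fitting an ARIMA model and a small neural network to the same series and scoring both out of sample; the synthetic series, ARIMA order and network size below are assumptions, not the paper's specification, and statsmodels/scikit-learn stand in for whatever tooling the authors used.

```python
# Illustrative ARIMA vs. neural network comparison on a synthetic expenditure series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error as mape

y = 100 + np.cumsum(np.random.randn(96))          # monthly expenditure, 8 years
train, test = y[:84], y[84:]

# ARIMA forecast
arima = ARIMA(train, order=(1, 1, 1)).fit()
arima_fc = arima.forecast(steps=len(test))

# Neural network on lagged values, forecasting recursively
lags = 12
X = np.array([train[i - lags:i] for i in range(lags, len(train))])
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, train[lags:])
history, nn_fc = list(train), []
for _ in test:
    nn_fc.append(nn.predict(np.array(history[-lags:]).reshape(1, -1))[0])
    history.append(nn_fc[-1])

print("ARIMA MAPE:", mape(test, arima_fc), "NN MAPE:", mape(test, nn_fc))
```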

Findings

The results show that the best method for building a government cash forecasting model depends on the forecasting performance measure and the data used.

Research limitations/implications

The study uses the data from one government only as its sample, which may limit the ability to generalise the results to a wider population.

Originality/value

This paper is novel in developing a government cash forecasting model in the context of emerging economies.

Details

Journal of Public Budgeting, Accounting & Financial Management, vol. 30 no. 4
Type: Research Article
ISSN: 1096-3367

Book part
Publication date: 29 March 2006

Kajal Lahiri and Fushang Liu

Abstract

We develop a theoretical model to compare forecast uncertainty estimated from time-series models with that available from survey density forecasts. The sum of the average variance of the individual densities and the disagreement is shown to approximate the predictive uncertainty from well-specified time-series models when the variance of the aggregate shocks is relatively small compared to that of the idiosyncratic shocks. Due to grouping error problems and compositional heterogeneity in the panel, individual densities are used to estimate aggregate forecast uncertainty. During periods of regime change and structural break, ARCH estimates tend to diverge from survey measures.
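
The decomposition can be illustrated numerically: aggregate forecast uncertainty is approximated by the average variance of the individual density forecasts plus the disagreement (the variance of their point forecasts). The survey numbers below are made up for illustration.

```python
# Illustrative uncertainty decomposition with made-up survey density forecasts.
import numpy as np

means = np.array([2.0, 2.4, 1.8, 2.2])           # individual forecasters' means
variances = np.array([0.30, 0.25, 0.35, 0.28])   # individual forecast variances

avg_individual_variance = variances.mean()
disagreement = means.var()                        # dispersion of point forecasts

aggregate_uncertainty = avg_individual_variance + disagreement
print(f"avg variance {avg_individual_variance:.3f} + "
      f"disagreement {disagreement:.3f} = {aggregate_uncertainty:.3f}")
```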

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-0-76231-274-0
