Search results

1 – 10 of over 136,000
Article
Publication date: 14 June 2013

Bojan Božić and Werner Winiwarter

Abstract

Purpose

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community building by the use of a dedicated language.

Design/methodology/approach

The authors have developed a new semantic time series processing language and prepared showcases to demonstrate its functionality. The assumption is an environmental setting with data measurements from different sensors to be distributed to different groups of interest. The data are represented as time series for water and air quality, while the user groups are, among others, the environmental agency, companies from the industrial sector and legal authorities.

Findings

A language for time series processing and several tools to enrich the time series with meta‐data and for community building have been implemented in Python and Java. Also a GUI for demonstration purposes has been developed in PyQt4. In addition, an ontology for validation has been designed and a knowledge base for data storage and inference was set up. Some important features are: dynamic integration of ontologies, time series annotation, and semantic filtering.
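
The time series annotation feature described above can be sketched in Python; the class, field and tag names below are invented for illustration, and TSSL's actual syntax is not shown here:

```python
from dataclasses import dataclass, field

# Illustrative sketch of attaching semantic meta-data to a time series.
# The class and tag names are invented; TSSL's real syntax differs.
@dataclass
class AnnotatedSeries:
    sensor: str
    values: list
    tags: dict = field(default_factory=dict)

    def annotate(self, key, value):
        """Attach a semantic tag and return self for chaining."""
        self.tags[key] = value
        return self

water = AnnotatedSeries("river_gauge_3", [7.1, 7.0, 6.8])
water.annotate("quantity", "pH").annotate("audience", "environmental agency")
```

Semantic filtering then reduces to selecting the series whose tags match a given group of interest.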

Research limitations/implications

This paper focuses on the showcases of time series semantic language (TSSL), but also covers technical aspects and user interface issues. The authors are planning to develop TSSL further and evaluate it within further research projects and validation scenarios.

Practical implications

The research has a high practical impact on time series processing and provides new data sources for semantic web applications. It can also be used in social web platforms (especially for researchers) to provide a time series centric tagging and processing framework.

Originality/value

The paper is an extended version of the paper presented at iiWAS2012.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu

Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios which require forecasting demand for large-scale data sources, e.g., sensor networks, securities exchange, electric power secondary system, etc. Concretely, the proposed framework should handle several difficult requirements including the management of gigantic data sources, the need for a fast self-adaptive algorithm, the relatively accurate prediction of multiple time series, and the real-time demand.

Design/methodology/approach

First, the autoregressive integrated moving average-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on the HBase, and a real-time distributed prediction platform based on Storm. Then, the work principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework.
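
The prediction step can be illustrated with a plain-NumPy autoregressive fit. This is a simplified stand-in for the paper's full ARIMA algorithm, with the HBase storage model and Storm platform out of scope; the synthetic series and AR order are assumptions:

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by least squares:
    y_t ~ a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    n = len(series)
    X = np.column_stack([series[p - 1 - i : n - 1 - i] for i in range(p)])
    y = series[p:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

def forecast_ar(series, coefs, steps):
    """Roll the fitted model forward step by step."""
    hist = list(series)
    preds = []
    for _ in range(steps):
        lags = hist[-1 : -len(coefs) - 1 : -1]    # y_{t-1}, ..., y_{t-p}
        nxt = float(np.dot(coefs, lags))
        hist.append(nxt)
        preds.append(nxt)
    return preds

# Synthetic "monitoring" series: a noisy sine wave.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.05, 200)
preds = forecast_ar(series, fit_ar(series, p=3), steps=10)
```

In the paper's framework, one such model runs per monitored series, with Storm distributing the work across nodes.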

Findings

Several tests based on Power Grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are largely consistent with the actual data, processing efficiency is relatively high, and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time-series data, which meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 26 October 2017

Okan Duru and Matthew Butler

Abstract

In the last few decades, there has been growing interest in forecasting with computer intelligence, and both fuzzy time series (FTS) and artificial neural networks (ANNs) have gained particular popularity, among others. Unlike conventional methods (e.g., econometrics), FTS and ANN are usually thought to be immune to fundamental concepts such as stationarity, theoretical causality and post-sample control. On the other hand, a number of studies have indicated that these fundamental controls are required by the theory of forecasting, and that applying such essential procedures substantially improves forecasting accuracy. The aim of this paper is to fill the existing gap on modeling and forecasting in the FTS and ANN methods and to lay out the fundamental concepts in a comprehensive work through merits and common failures in the literature. Beyond these merits, this paper may also serve as a guideline for eliminating unethical empirical settings in forecasting studies.
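
As a concrete instance of the stationarity controls the chapter argues for, first-differencing removes a linear trend; the series below is synthetic, chosen purely for illustration:

```python
import numpy as np

# A trending series is non-stationary: its mean drifts over time.
rng = np.random.default_rng(0)
t = np.arange(200)
series = 0.5 * t + rng.normal(0, 1, 200)      # linear trend + noise

# First differencing: y'_t = y_t - y_{t-1}. The result fluctuates
# around the constant slope (0.5) instead of drifting.
diffed = np.diff(series)
```

A model fitted to `diffed` rather than `series` is no longer chasing the trend, which is the kind of pre-modeling control the chapter recommends for FTS and ANN studies alike.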

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-1-78743-069-3

Open Access
Article
Publication date: 20 May 2022

Noemi Manara, Lorenzo Rosset, Francesco Zambelli, Andrea Zanola and America Califano

Abstract

Purpose

In the field of heritage science, especially applied to buildings and artefacts made by organic hygroscopic materials, analyzing the microclimate has always been of extreme importance. In particular, in many cases, the knowledge of the outdoor/indoor microclimate may support the decision process in conservation and preservation matters of historic buildings. This knowledge is often gained by implementing long and time-consuming monitoring campaigns that allow collecting atmospheric and climatic data.

Design/methodology/approach

Sometimes the collected time series may be corrupted, incomplete and/or subject to sensor errors because of the remoteness of the historic building's location, the natural aging of the sensors or the lack of a continuous check of the data downloading process. For this reason, this work proposes an innovative approach to reconstructing the indoor microclimate of heritage buildings from knowledge of the outdoor one alone. The methodology is based on machine learning tools known as variational autoencoders (VAEs), which are able to reconstruct time series and/or to fill data gaps.
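
A minimal sketch of the objective a VAE optimizes may clarify the tool: a reconstruction error plus a KL term pulling the latent distribution toward a standard normal. The network architecture and the Church data are out of scope; only the loss is shown:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """ELBO-style VAE training loss: mean squared reconstruction
    error plus the KL divergence of N(mu, sigma^2) from N(0, 1)."""
    recon = np.mean((x - x_recon) ** 2)
    kl = -0.5 * np.mean(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl

# Perfect reconstruction with a standard-normal posterior costs nothing.
zero = np.zeros(4)
print(vae_loss(zero, zero, zero, zero))   # 0.0
```

Once trained on paired outdoor/indoor data, the decoder can be run to produce the reconstructed indoor series, including over gaps in the record.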

Findings

The proposed approach is implemented using data collected in Ringebu Stave Church, a Norwegian medieval wooden heritage building. A realistic time series of the Church's natural internal climate has been successfully reconstructed for the vast majority of the year.

Originality/value

The novelty of this work is discussed in the framework of the existing literature. The work explores the potential of machine learning tools compared to traditional ones, providing a method that can reliably fill missing data in time series.

Details

International Journal of Building Pathology and Adaptation, vol. 42 no. 1
Type: Research Article
ISSN: 2398-4708

Article
Publication date: 28 June 2021

Mingyan Zhang, Xu Du, Kerry Rice, Jui-Long Hung and Hao Li

Abstract

Purpose

This study aims to propose a learning pattern analysis method which can improve a predictive model's performance, as well as discover hidden insights into micro-level learning patterns. Analyzing students' learning patterns can help instructors understand how their course design or activities shape learning behaviors; depict students' beliefs about learning and their motivation; and predict learning performance. Although time-series analysis is one of the most feasible predictive methods for learning pattern analysis, the literature indicates that current approaches cannot provide holistic insights about learning patterns for personalized intervention. This study identified at-risk students through micro-level learning pattern analysis and detected the pattern types, especially at-risk patterns, that existed in the case study. The connections among students' learning patterns, the corresponding self-regulated learning (SRL) strategies and learning performance were finally revealed.

Design/methodology/approach

The method used a long short-term memory (LSTM) encoder to process micro-level behavioral patterns for feature extraction and compression, so the students' behavior pattern information was saved into encoded series. The encoded time-series data were then used for pattern analysis and performance prediction. Time-series clustering was performed to demonstrate the unique strength of the proposed method.
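
The encode-then-cluster pipeline can be sketched as follows. The 8-dimensional vectors below are random stand-ins for LSTM-encoded behavior sequences, and plain k-means stands in for whatever clustering variant the authors used; both are assumptions:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on fixed-length encoded vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Stand-ins for encoded behavior sequences: two well-separated groups.
rng = np.random.default_rng(1)
encoded = np.vstack([rng.normal(0.0, 0.1, (20, 8)),
                     rng.normal(3.0, 0.1, (20, 8))])
labels = kmeans(encoded, k=2)
```

In the study's setting, the resulting clusters are what get inspected for at-risk pattern types.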

Findings

Successful students showed consistent participation levels and balanced behavioral frequency distributions. The successful students also adjusted their learning behaviors to meet course requirements. The three at-risk pattern types showed low-engagement (R1), low-interaction (R2) and non-persistent (R3) characteristics. Successful students showed more complete SRL strategies than failed students. Political Science had higher at-risk chances in all three at-risk types. Computer Science, Earth Science and Economics showed higher chances of having R3 students.

Research limitations/implications

The study identified multiple learning patterns which can lead to the at-risk situation. However, more studies are needed to validate whether the same at-risk types can be found in other educational settings. In addition, this case study found that the distributions of at-risk types varied across subjects. The relationship between subjects and at-risk types is worth further investigation.

Originality/value

This study found that the proposed method can effectively extract micro-level behavioral information to generate better prediction outcomes and to depict students' SRL strategies in online learning. The authors confirm that the research in their work is original and that all the data given in the paper are real and authentic. The study has not been submitted to peer review nor accepted for publication in another journal.

Details

Information Discovery and Delivery, vol. 50 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 1 June 1996

Steven E. Moss and Howard C. Schneider

Abstract

Tests for correlation between the NCREIF (NC) Index and EREIT Index. A multiple time series methodology is used to control for spurious correlation, allow for leading and lagging relationships, and to control for autoregressive moving average processes found in the time series. The underlying variables generating returns for the investor, current cash flow and capital appreciation, are analysed separately. Significant correlation is found between the NC cash flows and EREIT dividends. Significant correlation is not observed between the NC portfolio and EREIT when capital values are analysed. Suggests that one or both series are not a good measure of real estate returns.

Details

Journal of Property Finance, vol. 7 no. 2
Type: Research Article
ISSN: 0958-868X

Article
Publication date: 8 February 2016

Xinxia Liu, Anbing Zhang, Hefeng Wang and Haixin Liu

Abstract

Purpose

This paper aims to develop an integrated image processing method to investigate the spatiotemporal dynamics of Phragmites invasion in the Detroit River International Wildlife Refuge on the basis of publicly available sources.

Design/methodology/approach

This new approach integrates the standard time-series analysis of Landsat images with USDA National Agriculture Imagery Program (NAIP) imagery and USGS Digital Orthophoto Quarter Quads (DOQQ) datasets, which are either classified or manually interpreted with the aid of ground control points. Three different types of spatiotemporal dimensions are designed to test this integrated time-series image analysis method: selected sites and time-points with high spatial resolution and sufficient validation data points; an intermediate time series with continuous yearly images and periodic validation data; and a long time series with periodic images but without enough validation data. The support vector machine (SVM) method was used to classify the Landsat TM image sequence to detect the Phragmites invasion.
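
The per-pixel SVM classification step might look as follows with scikit-learn; the six "band" features and the class labels below are synthetic placeholders, not real Landsat reflectances:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic training pixels: 6 band-like features, two classes
# (0 = other cover, 1 = Phragmites) -- illustrative values only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (50, 6)),    # class 0
               rng.normal(0.6, 0.05, (50, 6))])   # class 1
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)
# Classify new pixels drawn from the class-1 distribution.
pred = clf.predict(rng.normal(0.6, 0.05, (5, 6)))
```

In practice the training labels would come from the manually interpreted NAIP/DOQQ reference data rather than synthetic draws.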

Findings

The habitat map produced from NAIP images and field-collected data shows that the total Phragmites area of the DRIWR in 2010 was 4,221.87 acres, excluding treatment areas, and is similar to the result of the non-vegetation removal method. It is confirmed that the pre-classification method obtains more accurate results.

Originality/value

The test results show that the Landsat-5 data can be used for long-term environmental management and monitoring of Phragmites invasion and can support rehabilitation of invaded areas.

Details

World Journal of Engineering, vol. 13 no. 1
Type: Research Article
ISSN: 1708-5284

Book part
Publication date: 10 February 2012

Wiesław Pietruszkiewicz

Abstract

Purpose — The chapter presents the practical applications of web search statistics analysis. The process description highlights the potential use of search queries and statistical data, and how they could be used in various forecasting situations. The presented case is an example of applied computational intelligence, and the main focus is on the decision support offered by the software mechanism and its capabilities to automatically gather, process and analyse data.

Methodology/approach — The statistics of the search queries as a source of prognostic information are analysed in a step-by-step process, starting from their content and scope, their processing and applications, and concluding with usage in a software-based intelligent framework.

Research implications — The analysis of search engine trends offers a great opportunity for many areas of research. In the future, deploying this information in forecasting will further advance intelligent data processing.

Practical implications — This functionality offers a unique, previously unavailable possibility to observe, estimate and predict various processes through broad, precise and accurate observations of behaviour. The scope and quality of the data allow practitioners to use it successfully in various prognostic problems (e.g. political, medical or economic).

Originality/value of paper — The chapter presents the practical implications of the technology and highlights potential areas that would benefit from the analysis of query statistics. Moreover, it introduces ‘WebPerceiver’, an intelligent platform built to make the analysis and usage of search trends easier and more generally available to a wide audience, including non-skilled users.

Article
Publication date: 6 October 2022

Xu Wang, Xin Feng and Yuan Guo

Abstract

Purpose

Research on social media-based academic communication has made great progress in the mobile Internet era. As a large number of research results have emerged, clarifying the topology of the knowledge label network (KLN) in this field and tracing the development of its knowledge labels and related concepts has become a pressing issue, which this study aims to address.

Design/methodology/approach

From a bibliometric perspective, 5,217 research papers in this field, published in CNKI from 2011 to 2021, are selected; the title and abstract of each paper are subjected to subword processing and topic model analysis, and extended labels are obtained by merging the results with the original keywords, so as to construct a conceptually expanded KLN. At the same time, appropriate time-window slicing is performed to observe the temporal evolution of the network topology. Specifically, the basic network topological parameters and the complex motif structure are analyzed empirically to explore the evolution pattern and inner mechanism of the KLN in this domain. In addition, the ARIMA time-series prediction model is used to predict and compare the changing trend of the network structure across disciplines, so as to compare the differences among them.
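
One simple way to realize a label network of this kind is to count label co-occurrences across papers. The label lists below are invented examples, and the study's actual construction (subword processing plus topic models) is considerably richer:

```python
from collections import Counter
from itertools import combinations

# Invented example papers, each with a list of knowledge labels.
papers = [
    ["social media", "academic communication", "altmetrics"],
    ["social media", "topic model", "academic communication"],
    ["topic model", "LDA"],
]

# Weighted edges: how often each pair of labels appears together.
edges = Counter()
for labels in papers:
    for a, b in combinations(sorted(set(labels)), 2):
        edges[(a, b)] += 1
```

The resulting weighted edge list is the raw material on which degree distributions, clustering coefficients and motifs are then computed.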

Findings

The results show that the degree sequence distribution of the KLN follows a power law during the growth process, performing better in the mature stage of network development, where the network shows more stable scale-free characteristics. At the same time, the network exhibits the "short path and high clustering" characteristics of a typical small-world network throughout the time series. The KLN consists of a small number of hub nodes occupying the core of the network, while a large number of label nodes are distributed at its periphery and formed around these hub nodes; its knowledge expansion pattern has a certain retrospective nature. More knowledge label nodes expand from the center to the periphery in a gradual and stable trend. In addition, there are certain differences between disciplines: the research directions and topics of library and information science (LIS) are more refined and deeper than those of journalism and media or computer science. The LIS discipline has shown better development momentum in this field.
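
The scale-free claim is typically checked by inspecting the degree distribution on log-log axes; a first-pass diagnostic is a straight-line fit, sketched here with synthetic degrees rather than the study's data:

```python
import numpy as np

# Synthetic heavy-tailed degree sequence (Pareto draws, rounded).
rng = np.random.default_rng(0)
degrees = np.round(rng.pareto(2.0, 2000) + 1).astype(int)

# Count how many nodes have each degree, then fit a line in
# log-log space: a power law p(k) ~ k^(-gamma) appears straight.
vals, counts = np.unique(degrees, return_counts=True)
slope, intercept = np.polyfit(np.log(vals), np.log(counts), 1)
```

A clearly negative slope over a wide degree range is the usual first signal of scale-free structure; rigorous confirmation needs goodness-of-fit tests beyond this sketch.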

Originality/value

KLN is constructed by using extended labels and empirically analyzed by using network frontier conceptual motifs, which reflects the innovation of the study to a certain extent. In future research, the influence of larger-scale network motifs on the structural features and evolutionary mechanisms of KLNs will be further explored.

Details

Aslib Journal of Information Management, vol. 75 no. 6
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 17 October 2008

Ralf Östermark

Abstract

Purpose

To demonstrate the scalability of the genetic hybrid algorithm (GHA) in monitoring a local neural network algorithm for difficult non‐linear/chaotic time series problems.

Design/methodology/approach

GHA is a general‐purpose algorithm, spanning several areas of mathematical problem solving. If needed, GHA invokes an accelerator function at key stages of the solution process, providing it with the current population of solution vectors in the argument list of the function. The user has control over the computational stage (generation of a new population, crossover, mutation etc) and can modify the population of solution vectors, e.g. by invoking special purpose algorithms through the accelerator channel. If needed, the steps of GHA can be partly or completely superseded by the special purpose mathematical/artificial intelligence‐based algorithm. The system can be used as a package for classical mathematical programming with the genetic sub‐block deactivated. On the other hand, the algorithm can be turned into a machinery for stochastic analysis (e.g. for Monte Carlo simulation, time series modelling or neural networks), where the mathematical programming and genetic computing facilities are deactivated or appropriately adjusted. Finally, pure evolutionary computation may be activated for studying genetic phenomena. GHA contains a flexible generic multi‐computer framework based on MPI, allowing implementations of a wide range of parallel models.
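
The accelerator-channel idea, a general-purpose evolutionary loop that hands its current population to user-supplied special-purpose code at key stages, can be sketched as follows. All names are invented for illustration and do not reflect GHA's real interface:

```python
import random

def evolve(fitness, pop_size=20, generations=30, accelerator=None, seed=0):
    """Toy evolutionary loop with an 'accelerator channel': at each
    generation the current population is handed to a user hook,
    which may modify it before evolution continues."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                          # selection: best first
        pop = pop[: pop_size // 2]
        pop += [x + rng.gauss(0, 0.5) for x in pop]    # mutation
        if accelerator is not None:
            pop = accelerator(pop)                     # special-purpose step
    return min(pop, key=fitness)

def local_search(pop):
    """Stand-in accelerator: contract every candidate toward zero."""
    return [x * 0.9 for x in pop]

# Minimize f(x) = x^2 with the accelerator plugged in.
best = evolve(lambda x: x * x, accelerator=local_search)
```

In GHA itself the hook could equally host a neural-network trainer or a classical solver, which is what lets the genetic machinery be partly or wholly superseded.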

Findings

The results indicate that GHA is scalable, yet due to the inherent stochasticity of neural networks and the genetic algorithm, the scalability evidence put forth in this paper is only indicative. The scalability of GHA follows from maximal node intelligence allowing minimal internodal communication in problems with independent computational blocks.

Originality/value

The paper shows that GHA can be effectively run on both sequential and parallel platforms. The multicomputer layout is based on maximizing the intelligence of the nodes – all nodes are provided with the same program and the available computational support libraries – and minimizing internodal communication, hence GHA does not limit the size of the mesh in problems with independent computational tasks.

Details

Kybernetes, vol. 37 no. 9/10
Type: Research Article
ISSN: 0368-492X
