Search results
1 – 10 of 80
Abstract
The discrete Fourier transform (dft) of a fractional process is studied. An exact representation of the dft is given in terms of the component data, leading to the frequency domain form of the model for a fractional process. This representation is particularly useful in analyzing the asymptotic behavior of the dft and periodogram in the nonstationary case when the memory parameter
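For illustration only (this is not the paper's exact representation), a minimal sketch of computing the dft and periodogram of a simulated fractionally integrated series at the Fourier frequencies; the MA(∞) simulation and the memory parameter d = 0.3 are assumptions chosen for the example:

```python
import numpy as np

def periodogram(x):
    """Periodogram I(lam_j) = |w(lam_j)|^2 at the Fourier frequencies
    lam_j = 2*pi*j/n, with w(lam_j) = (2*pi*n)**-0.5 * sum_t x_t e^{-i t lam_j}."""
    n = len(x)
    j = np.arange(1, n // 2 + 1)
    lam = 2.0 * np.pi * j / n
    w = np.fft.fft(x)[j] / np.sqrt(2.0 * np.pi * n)
    return lam, np.abs(w) ** 2

# Simulate a fractionally integrated (long-memory) series x_t = (1-L)^{-d} eps_t
# via the truncated MA(inf) weights psi_k = Gamma(k+d) / (Gamma(d) Gamma(k+1)),
# computed recursively: psi_0 = 1, psi_k = psi_{k-1} * (k-1+d) / k.
n, d = 512, 0.3
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
eps = np.random.default_rng(0).standard_normal(n)
x = np.convolve(eps, psi)[:n]   # truncated MA(inf) filter
lam, I = periodogram(x)
```

For 0 < d < 1/2 the periodogram of such a series diverges near the origin like lam^(-2d), which is what makes the low Fourier frequencies informative about the memory parameter.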
Vasileios Stamatis, Michail Salampasis and Konstantinos Diamantaras
Abstract
Purpose
In federated search, a query is sent simultaneously to multiple resources, and each of them returns a list of results. These lists are merged into a single list using the results merging process. In this work, the authors apply machine learning methods for results merging in federated patent search. Although several methods for results merging have been developed, none of them has been tested on patent data, nor have several machine learning models been considered. Thus, the authors experiment with state-of-the-art methods using patent data, and they propose two new methods for results merging that use machine learning models.
Design/methodology/approach
The methods are based on a centralized index containing samples of documents from all the remote resources, and they employ machine learning models to estimate comparable scores for the documents retrieved by different resources. The authors examine the new methods in cooperative and uncooperative settings, where document scores from the remote search engines are and are not available, respectively. For uncooperative environments, they propose two methods for assigning document scores.
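The score-mapping idea can be sketched minimally as follows: fit, for each resource, a least-squares linear map from its local scores to centralized-index scores using overlap documents, then rank all results by the estimated comparable scores. This is a simplified illustration with hypothetical scores, not the authors' actual models (which include random forests):

```python
import numpy as np

def fit_score_map(local_scores, central_scores):
    """Least-squares linear map from a resource's local scores to
    centralized-index scores, fitted on the overlap documents."""
    A = np.column_stack([local_scores, np.ones_like(local_scores)])
    coef, *_ = np.linalg.lstsq(A, central_scores, rcond=None)
    return lambda s: coef[0] * s + coef[1]

def merge(result_lists, maps):
    """Merge per-resource result lists [(doc_id, local_score), ...] into one
    list ranked by the estimated comparable scores."""
    merged = []
    for res_id, results in enumerate(result_lists):
        f = maps[res_id]
        merged += [(doc, float(f(score))) for doc, score in results]
    return sorted(merged, key=lambda t: t[1], reverse=True)

# Hypothetical overlap data: (local score, centralized score) pairs for
# documents appearing both in a resource's list and in the sample index.
maps = [
    fit_score_map(np.array([1.0, 2.0, 3.0]), np.array([0.1, 0.2, 0.3])),
    fit_score_map(np.array([10.0, 20.0]), np.array([0.15, 0.30])),
]
merged = merge([[("a", 3.0)], [("b", 10.0)]], maps)
```

A nonlinear regressor (such as the random forest the abstract reports as best) can replace `fit_score_map` without changing the merging step.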
Findings
The effectiveness of the new results merging methods was measured against state-of-the-art models and found to be superior in many cases, with significant improvements. The random forest model achieves the best results of all the models compared and offers new insights into the results merging problem.
Originality/value
In this article, the authors show that machine learning models can substitute for the standard methods and models that have been used for results merging for many years. The proposed methods outperformed state-of-the-art estimation methods for results merging and proved more effective for federated patent search.
Assad Mehmood, Kashif Zia, Arshad Muhammad and Dinesh Kumar Saini
Abstract
Purpose
A participatory wireless sensor network (PWSN) is an emerging paradigm that leverages existing sensing and communication infrastructures for the sensing task. Various applications for monitoring an environmental phenomenon P (such as noise pollution or road traffic), which require spatio-temporal data samples of P in the region of interest to capture its variations and construct its profile, can be enabled using PWSN. Because of the irregular distribution and uncontrollable mobility of people (with mobile phones), and their varying willingness to participate, complete spatio-temporal (CST) coverage of P may not be ensured. Therefore, unobserved data values must be estimated for the CST profile construction of P, as presented in this paper.
Design/methodology/approach
In this paper, the estimation of these missing data samples in both the spatial and temporal dimensions is discussed, and the paper shows that a non-parametric technique, kernel regression, provides better spatial estimation than parametric regression techniques in the PWSN context. Furthermore, preliminary results for estimation in the temporal dimension are provided. Deterministic and stochastic approaches toward estimation in the context of PWSN are also discussed.
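As an illustration of the non-parametric approach, a minimal Nadaraya-Watson kernel regression sketch with a Gaussian kernel: the estimate at an unobserved location is a kernel-weighted average of the observed samples. The bandwidth h and the toy observations are assumptions for the example, not the paper's setup:

```python
import numpy as np

def kernel_regress(obs_xy, obs_val, query_xy, h=1.0):
    """Nadaraya-Watson estimator: weight each observation by a Gaussian
    kernel of its distance to the query location, then average."""
    # Squared distances: (n_query, n_obs)
    d2 = np.sum((obs_xy[None, :, :] - query_xy[:, None, :]) ** 2, axis=2)
    w = np.exp(-d2 / (2.0 * h ** 2))              # Gaussian kernel weights
    return (w @ obs_val) / np.clip(w.sum(axis=1), 1e-12, None)

# Toy spatial samples: value 1.0 observed near the origin, 3.0 far away.
obs_xy = np.array([[0.0, 0.0], [10.0, 10.0]])
obs_val = np.array([1.0, 3.0])
est = kernel_regress(obs_xy, obs_val, np.array([[0.5, 0.5]]), h=1.0)
```

The estimator needs no parametric model of the spatial field; the bandwidth h controls the smoothing and is the main tuning choice in practice.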
Findings
For the task of spatial profile reconstruction, it is shown that the non-parametric estimation technique (kernel regression) gives a better estimate of the unobserved data points. For temporal estimation, a few preliminary techniques have been studied, and the results show that further investigation is required to find the estimation technique(s) that approximate the missing observations with considerably less error.
Originality/value
This study addresses the environmental informatics issues related to deterministic and stochastic approaches using PWSN.
Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen
Abstract
Purpose
Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and its results would serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. Thus, the main contributions of the paper are: investigating the point cloud registration error affecting the resulting deformation estimation; identifying an appropriate segmentation method for extracting data points of a deformed surface; investigating a methodology to determine an un-deformed or reference surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Design/methodology/approach
In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision about the state of the structure when uncertainty is present. To gain a more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1); methods used to extract point clouds of structure surfaces (Issue 2); selection of the reference surface Sref for measuring deformation (Issue 3); and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.
Findings
The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. Using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformation, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
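For reference, a minimal RANSAC plane-fitting sketch of the kind the findings refer to: repeatedly fit a candidate plane to three random points and keep the plane with the most inliers. The synthetic point cloud, tolerance and iteration count here are illustrative assumptions, not the study's data:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.01, rng=None):
    """RANSAC fit of a plane n . x = d to a point cloud (pts: (N, 3) array).
    Returns ((normal, d), inlier_mask) for the consensus plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best = None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)          # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                            # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((pts - p0) @ n)           # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, float(n @ p0))
    return best, best_inliers

# Synthetic cloud: 100 points on the plane z = 0 plus 5 gross outliers at z = 1.
gen = np.random.default_rng(1)
plane_pts = np.column_stack([gen.random(100), gen.random(100), np.zeros(100)])
outliers = np.column_stack([gen.random(5), gen.random(5), np.ones(5)])
pts = np.vstack([plane_pts, outliers])
(normal, d_plane), inliers = ransac_plane(pts)
```

This also makes the reported limitation visible: RANSAC rejects gross outliers against a primitive shape, but points lying near the fitted primitive (such as mixed pixels close to the surface) pass the inlier test and are not eliminated.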
Research limitations/implications
Due to logistic difficulties, an independent measurement could not be established to assess the accuracy of the deformation estimated from the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with an accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.
Practical implications
This study gives insight into the errors of these steps, and its results would serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.
Social implications
The results of this study would provide guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. In addition, a low-cost method can be applied for deformation analysis of structures.
Originality/value
Although a large number of studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, challenges remain in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret them to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of the presence of outliers, noisy data and/or mixed pixels.
Pedro Brinca, Nikolay Iskrev and Francesca Loria
Abstract
Since its introduction by Chari, Kehoe, and McGrattan (2007), Business Cycle Accounting (BCA) exercises have become widespread. Much attention has been devoted to the results of such exercises and to methodological departures from the baseline methodology. Little attention has been paid to identification issues within these classes of models. In this chapter, the authors investigate whether such issues are of concern in the original methodology and in an extension proposed by Šustek (2011) called Monetary Business Cycle Accounting. The authors resort to two types of identification tests in population. One concerns strict identification as theorized by Komunjer and Ng (2011) while the other deals both with strict and weak identification as in Iskrev (2010). Most importantly, the authors explore the extent to which these weak identification problems affect the main economic takeaways and find that the identification deficiencies are not relevant for the standard BCA model. Finally, the authors compute some statistics of interest to practitioners of the BCA methodology.