Search results

1 – 10 of 913
Open Access
Article
Publication date: 2 December 2016

Taylor Boyd, Grace Docken and John Ruggiero

The purpose of this paper is to improve the estimation of the production frontier in cases where outliers exist. We focus on the case when outliers appear above the true frontier…


Abstract

Purpose

The purpose of this paper is to improve the estimation of the production frontier in cases where outliers exist. We focus on the case when outliers appear above the true frontier due to measurement error.

Design/methodology/approach

The authors use stochastic data envelopment analysis (SDEA) to allow observed points above the frontier. They supplement SDEA with distributional assumptions on efficiency and show that the true frontier can be derived in the presence of outliers.

Findings

This paper finds that the authors’ maximum likelihood approach outperforms super-efficiency measures. Using simulations, this paper shows that SDEA is a useful model for outlier detection.

Originality/value

The model developed in this paper is original; the authors add distributional assumptions to derive the optimal quantile with SDEA to remove outliers. The authors believe the paper will be widely cited because real-world data are often subject to outliers.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599


Open Access
Article
Publication date: 13 October 2017

Ümit Erol

The purpose of this paper is to show that major reversals of an index (specifically BIST-30 index) can be detected uniquely on the date of reversal by checking the extreme outliers…

Abstract

Purpose

The purpose of this paper is to show that major reversals of an index (specifically BIST-30 index) can be detected uniquely on the date of reversal by checking the extreme outliers in the rate of change series using daily closing prices.

Design/methodology/approach

The extreme outliers are determined by checking whether the rate of change series, or the volatility of the rate of change series, deviates by more than two standard deviations on the date of reversal. Furthermore, wavelet analysis is also utilized for this purpose by checking the extreme outlier characteristics of the A1 (approximation level 1) and D3 (detail level 3) wavelet components.
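The two-standard-deviation screen on the rate of change series can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the simple z-score rule are assumptions, and the volatility and wavelet checks are omitted.

```python
import statistics

def extreme_outlier_days(closes, k=2.0):
    """Return day indices whose daily rate of change deviates from the
    sample mean by more than k standard deviations."""
    # Daily rate of change computed from consecutive closing prices
    roc = [(b - a) / a for a, b in zip(closes, closes[1:])]
    mu, sigma = statistics.mean(roc), statistics.stdev(roc)
    # Day i+1 is flagged when its rate of change is an extreme outlier
    return [i + 1 for i, r in enumerate(roc) if abs(r - mu) > k * sigma]
```

On a toy price series with a single large jump on the last day, only that day is flagged.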

Findings

The paper investigates ten major reversals of the BIST-30 index over a five-year period. It shows conclusively that all of these major reversals are characterized by the extreme outliers described above. The paper also checks whether these reversals are unique in the sense of being observed only on the date of reversal and not before; the empirical results confirm this uniqueness. The paper also demonstrates empirically that extreme outliers are associated only with major reversals, not minor ones.

Practical implications

The results are important for fund managers, for whom timely identification of the initial phase of a major bullish or bearish trend is crucial. Timely identification of major reversals also matters for hedging applications, since a major issue in the practical use of stock index futures as a hedging instrument is the correct timing of derivatives positions.

Originality/value

To the best of the author's knowledge, this is the first study dealing with the issue of major reversal identification. This is evidently so for the BIST-30 index, and the use of extreme outliers for this purpose is also a novelty: neither rate-of-change extremity nor wavelet decomposition has previously been used for this purpose in the international literature.

Details

Journal of Capital Markets Studies, vol. 1 no. 1
Type: Research Article
ISSN: 2514-4774


Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, reliability and accuracy of resulting deformation…


Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimates strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors introduced at these steps; the results can serve as guidelines for practitioners developing a new workflow, or refining an existing one, for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimate, identifying an appropriate segmentation method for extracting the data points of a deformed surface, investigating a methodology for determining an un-deformed (reference) surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimate, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of point clouds from multiple scanning stations (Issue 1), methods for extracting point clouds of structure surfaces (Issue 2), selection of the reference surface Sref for measuring deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings

The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane, in this study) subject to small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. CRG and VRG, on the other hand, are suitable for deformed, free-form surfaces. In addition, a reference surface of a structure is mostly unavailable in practice. Using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformation estimates, because outlier points and points from damaged areas affect the accuracy of the fit. The study therefore recommends using a reference surface determined from the design concept/specification. Finally, a smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
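The RANSAC consensus principle referred to in the findings can be sketched in two dimensions. This is a generic illustration under assumptions (a line model in place of the paper's plane primitive; hypothetical function name and parameters), not the authors' implementation.

```python
import random

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Fit y = m*x + c to 2D points by random sample consensus:
    repeatedly fit a candidate line to two randomly sampled points and
    keep the candidate supported by the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:            # vertical candidate: skipped for simplicity
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Inliers lie within `tol` (vertical distance) of the candidate line
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers
```

Points far from the dominant line (the analogue of mixed pixels or outliers) never join the consensus set, which is why the finding above notes RANSAC suits primitive surfaces but not free-form deformation.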

Research limitations/implications

Due to logistical difficulties, an independent measurement could not be established to assess the accuracy of the TLS-based deformation estimates in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors introduced at each step of the workflow; the results can serve as guidelines for practitioners developing a new workflow, or refining an existing one, for deformation estimation based on TLS point clouds.

Social implications

The results of this study provide guidelines for practitioners developing a new workflow, or refining an existing one, for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.

Originality/value

Although many studies over the last two decades have used laser scanning to measure structural deformation, the methods applied mainly measured change between two states (epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, challenges remain in establishing an appropriate procedure for collecting high-quality point clouds and in developing methods to interpret them so as to obtain reliable and accurate deformation estimates when uncertainty, including data quality and reference information, is present. This study therefore demonstrates the impact on deformation estimation of point cloud registration error, the methods selected for extracting surface point clouds, the identification of reference information, and the presence of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708


Open Access
Article
Publication date: 26 April 2024

Xue Xin, Yuepeng Jiao, Yunfeng Zhang, Ming Liang and Zhanyong Yao

This study aims to ensure reliable analysis of dynamic responses in asphalt pavement structures. It investigates noise reduction and data mining techniques for pavement dynamic…

Abstract

Purpose

This study aims to ensure reliable analysis of dynamic responses in asphalt pavement structures. It investigates noise reduction and data mining techniques for pavement dynamic response signals.

Design/methodology/approach

The paper first conducts time-frequency analysis on pavement dynamic response signals. It then uses two common noise reduction methods, low-pass filtering and wavelet decomposition and reconstruction, to evaluate their effectiveness in reducing noise in these signals. Furthermore, because these signals are generated in response to vehicle loading, they contain a substantial amount of data and are prone to environmental interference, potentially resulting in outliers. It therefore becomes crucial to extract dynamic strain response features (e.g. peaks and peak intervals) efficiently and in real time.

Findings

The study introduces an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm for identifying outliers in denoised data. The results demonstrate that low-pass filtering is highly effective in reducing noise in pavement dynamic response signals within the specified frequency ranges. Testing shows that the improved DBSCAN algorithm effectively identifies outliers in these signals. Furthermore, the peak detection process, using the enhanced findpeaks function, consistently performs well in identifying peak values, even for the complex strain signals of multi-axle heavy-duty trucks.
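The density-based idea that the improved algorithm builds on can be conveyed with a plain-vanilla DBSCAN-style noise check. This is a minimal one-dimensional sketch under assumed parameters; the paper's improvements are not reproduced.

```python
def dbscan_noise(values, eps=0.5, min_pts=3):
    """Flag noise points the DBSCAN way (1D sketch): a point is noise if
    it is neither a core point (>= min_pts neighbours within eps, itself
    included) nor within eps of any core point."""
    def neighbours(i):
        return [j for j, v in enumerate(values) if abs(values[i] - v) <= eps]

    # Core points sit in dense regions of the signal
    core = {i for i in range(len(values)) if len(neighbours(i)) >= min_pts}
    # Everything not density-reachable from a core point is noise
    return [v for i, v in enumerate(values)
            if i not in core and not any(j in core for j in neighbours(i))]
```

An isolated spike far from the dense run of samples is returned as noise, while the dense samples themselves are kept.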

Originality/value

The authors identified a suitable frequency domain range for low-pass filtering in asphalt road dynamic response signals, revealing minimal amplitude loss and effective strain information reflection between road layers. Furthermore, the authors introduced the DBSCAN-based anomaly data detection method and enhancements to the Matlab findpeaks function, enabling the detection of anomalies in road sensor data and automated peak identification.

Details

Smart and Resilient Transportation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2632-0487


Open Access
Article
Publication date: 11 June 2019

Taylor N. Allbright, Julie A. Marsh, Kate E. Kennedy, Heather J. Hough and Susan McKibben

There is a growing consensus in education that schools can and should attend to students’ social-emotional development. Emerging research and popular texts indicate that students’…


Abstract

Purpose

There is a growing consensus in education that schools can and should attend to students’ social-emotional development. Emerging research and popular texts indicate that students’ mindsets, beliefs, dispositions, emotions and behaviors can advance outcomes, such as college readiness, career success, mental health and relationships. Despite this growing awareness, many districts and schools are still struggling to implement strategies that develop students’ social-emotional skills. The purpose of this paper is to fill this gap by examining the social-emotional learning (SEL) practices in ten middle schools with strong student-reported data on SEL outcomes, particularly for African American and Latinx students.

Design/methodology/approach

Case study methods, including interviews, observations and document analysis, were employed.

Findings

The authors identify six categories of common SEL practices: strategies that promote positive school climate and relationships, support for positive behavior, use of elective courses and extracurricular activities, SEL-specific classroom practices and curricula, personnel strategies, and measurement and data use. The absence of a common definition of SEL and a lack of alignment among SEL practices were two challenges cited by respondents.

Originality/value

This is the first study to analyze SEL practices in outlier schools, with a focus on successful practices with schools that have a majority of African American and/or Latinx students.

Details

Journal of Research in Innovative Teaching & Learning, vol. 12 no. 1
Type: Research Article
ISSN: 2397-7604


Open Access
Article
Publication date: 1 December 2020

Sena Kimm Gnangnon

This paper investigates the effect of the volatility of resource revenue on the volatility of non-resource revenue.

Abstract

Purpose

This paper investigates the effect of the volatility of resource revenue on the volatility of non-resource revenue.

Design/methodology/approach

The empirical analysis utilizes an unbalanced panel data set comprising 54 countries over the period 1980–2015. The two-step system generalized method of moments (GMM) is the main econometric approach used to carry out the empirical analysis.

Findings

Results show that resource revenue volatility generates lower non-resource revenue volatility only when the share of resource revenue in total public revenue is lower than 18%. Otherwise, higher resource revenue volatility would result in a rise in non-resource revenue volatility.

Research limitations/implications

In light of the adverse effect of volatility of non-resource revenue on public spending, and hence on economic growth and development prospects, countries whose total public revenue is highly dependent on resource revenue should adopt appropriate policies to ensure the rise in non-resource revenue, as well as the stability of the latter.

Practical implications

Economic diversification in resource-rich countries (particularly in developing countries among them) could contribute to reducing the dependence of economies on natural resources, and hence the dependence of public revenue on resource revenue. Therefore, policies in favour of economic diversification would contribute to stabilizing non-resource revenue, which is essential for financing development needs.

Originality/value

To the best of our knowledge, this topic has not been addressed in the literature.

Details

Journal of Economics and Development, vol. 23 no. 2
Type: Research Article
ISSN: 1859-0020


Open Access
Article
Publication date: 28 July 2020

Prabhat Pokharel, Roshan Pokhrel and Basanta Joshi

Analysis of log messages is important for identifying suspicious system and network activity. This analysis requires the correct extraction of variable entities…


Abstract

Analysis of log messages is important for identifying suspicious system and network activity. This analysis requires the correct extraction of variable entities, which are extracted by comparing log messages against log patterns; each log pattern can be represented as a log signature. In this paper, we present a hybrid approach for log signature extraction. The approach consists of two modules. The first module identifies log patterns by generating log clusters. The second module uses Named Entity Recognition (NER) to extract signatures from the extracted log clusters. Experiments were performed on event logs from the Windows operating system, Exchange and Unix, and the results were validated by comparing the signatures and variable entities against the standard log documentation. The outcome of the experiments was that the extracted signatures were ready to use with a high degree of accuracy.
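The pattern/variable split behind log signatures can be sketched in a toy form by masking likely variable tokens; the paper's cluster-plus-NER pipeline is far more general. The function name and masking rules below are illustrative assumptions, not the authors' method.

```python
import re
from collections import defaultdict

def log_signatures(lines):
    """Group log lines by a crude signature in which likely variable
    entities (IPv4 addresses, hex constants, numbers) are masked."""
    groups = defaultdict(list)
    for line in lines:
        sig = re.sub(r"\d+(?:\.\d+){3}", "<IP>", line)   # IPv4 addresses
        sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", sig)    # hex constants
        sig = re.sub(r"\d+", "<NUM>", sig)               # remaining numbers
        groups[sig].append(line)
    return dict(groups)
```

Two login lines with different addresses and ports collapse to one signature, while an unrelated line forms its own group.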

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 5 September 2016

Qingyuan Wu, Changchen Zhan, Fu Lee Wang, Siyang Wang and Zeping Tang

The quick growth of web-based and mobile e-learning applications such as massive open online courses has created a large volume of online learning resources. Confronting such a…


Abstract

Purpose

The quick growth of web-based and mobile e-learning applications, such as massive open online courses, has created a large volume of online learning resources. Given such a large amount of learning data, it is important to develop effective clustering approaches for user group modeling and intelligent tutoring. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, a minimum spanning tree-based approach is proposed for clustering online learning resources. The novel clustering approach has two main stages: an elimination stage and a construction stage. During the elimination stage, the Euclidean distance is adopted as the metric for measuring the density of learning resources. Resources with very low densities are identified as outliers and removed. During the construction stage, a minimum spanning tree is built by initializing the centroids according to the degree of freedom of the resources. Online learning resources are then partitioned into clusters by exploiting the structure of the minimum spanning tree.
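The construction-stage idea can be sketched generically: build a minimum spanning tree over the points, then cut its longest edges so that clusters fall apart. This sketch omits the paper's density-based elimination and centroid initialization, and the function name and parameters are assumptions.

```python
import math

def mst_clusters(points, k=2):
    """Cluster 2D points: build a minimum spanning tree (Prim's
    algorithm), then cut its k-1 longest edges so that k connected
    components remain."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    # Prim's algorithm: repeatedly attach the closest outside point
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: dist(*e))
        in_tree.add(v)
        edges.append((dist(u, v), u, v))

    # Keep all but the k-1 longest edges
    edges.sort()
    kept = edges[: n - k]

    # Union-find to read off the connected components
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _, u, v in kept:
        parent[find(u)] = find(v)

    # Renumber component ids to 0..k-1 in order of first appearance
    labels = [find(i) for i in range(n)]
    order = {c: i for i, c in enumerate(dict.fromkeys(labels))}
    return [order[c] for c in labels]
```

Cutting the single longest MST edge of two well-separated groups recovers exactly those groups, which is the partitioning behaviour the paper exploits.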

Findings

Conventional clustering algorithms have a number of shortcomings and cannot handle online learning resources effectively. On the one hand, extant partitional clustering methods use a randomly assigned centroid for each cluster, which usually leads to ineffective clustering results. On the other hand, classical density-based clustering methods are computationally expensive and time-consuming. Experimental results indicate that the proposed algorithm outperforms traditional clustering algorithms on online learning resources.

Originality/value

The effectiveness of the proposed algorithm has been validated using several data sets. Moreover, the proposed clustering algorithm has great potential in e-learning applications, and the paper demonstrates how the technique can be integrated into various e-learning systems. For example, the clustering technique can classify learners into groups so that homogeneous grouping can improve the effectiveness of learning. Clustering of online learning resources is also valuable for decision making on tutorial strategies and instructional design for intelligent tutoring. Lastly, a number of directions for future research are identified in the study.

Details

Asian Association of Open Universities Journal, vol. 11 no. 2
Type: Research Article
ISSN: 1858-3431


Open Access
Article
Publication date: 13 June 2023

Blendi Gerdoçi, Nertila Busho, Daniela Lena and Marco Cucculelli

This paper explores the relationships between firm absorptive capacity, novel business model design (NBMD), product differentiation strategy and performance in a transition…


Abstract

Purpose

This paper explores the relationships between firm absorptive capacity, novel business model design (NBMD), product differentiation strategy and performance in a transition economy.

Design/methodology/approach

The study uses structural equation modeling (SEM) to analyze firm-level data from a unique sample of Albanian manufacturing and service firms.

Findings

The study shows that absorptive capacity enables and shapes the NBMD that, in turn, leads to performance gains. The authors also find that NBMD capacity mediates the impact of realized absorptive capacity on performance, whereas product differentiation strategy moderates the relationship between the new business model and performance.

Research limitations/implications

All variables were measured based on a self-assessed scale leading to potential method bias. Also, based on relevant literature, the study focuses on only one type of business model (BM) design.

Practical implications

Since dynamic capabilities are the foundation of NBMD, firms should invest carefully in developing such capabilities. Thus, the study results provide an integrative framework for understanding the role of absorptive capacity in NBMD adoption and for explaining the relationship between NBMD adoption and performance, an aspect that helps organizations in a dynamic environment.

Originality/value

This study strives to investigate the relationships between absorptive capacity, business model design, product strategies and performance by answering the call of Teece (2018) to “flesh out the details” of such relationships.

Details

European Journal of Innovation Management, vol. 26 no. 7
Type: Research Article
ISSN: 1460-1060


Open Access
Article
Publication date: 20 July 2020

E.N. Osegi

In this paper, an emerging state-of-the-art machine intelligence technique called the Hierarchical Temporal Memory (HTM) is applied to the task of short-term load forecasting…

Abstract

In this paper, an emerging state-of-the-art machine intelligence technique, the Hierarchical Temporal Memory (HTM), is applied to the task of short-term load forecasting (STLF). An HTM Spatial Pooler (HTM-SP) stage is used to continually form sparse distributed representations (SDRs) from univariate load time series data, a temporal aggregator transforms the SDRs into a sequential bivariate representation space, and an overlap classifier makes temporal classifications from the bivariate SDRs through time. The comparative performance of HTM on several daily electrical load time series, including the Eunite competition dataset and the Polish power system dataset from 2002 to 2004, is presented. The robustness of HTM is further validated using hourly load data from three more recent electricity markets. The results obtained from the Eunite and Polish datasets indicate that HTM performs better than the existing techniques reported in the literature. In general, the robustness test also shows that the error distribution of the proposed HTM technique is positively skewed for most of the years considered, with kurtosis values mostly below the base value of 3, indicating a reasonable level of outlier rejection.
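The skewness/kurtosis check referred to at the end follows directly from the moment definitions; a minimal sketch (the function name is an assumption, and population moments are used throughout):

```python
import statistics

def skew_kurtosis(errors):
    """Population skewness and (non-excess) kurtosis of an error series.
    Kurtosis below 3 (the Gaussian value) suggests light tails, i.e.
    few extreme outlier errors."""
    n = len(errors)
    mu = statistics.mean(errors)
    sd = statistics.pstdev(errors)
    # Third and fourth standardized moments
    skew = sum((e - mu) ** 3 for e in errors) / (n * sd ** 3)
    kurt = sum((e - mu) ** 4 for e in errors) / (n * sd ** 4)
    return skew, kurt
```

A symmetric error series gives zero skewness, and a flat (light-tailed) one gives kurtosis well below 3, the pattern the abstract reads as outlier rejection.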

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

