Search results

1 – 10 of 247
Content available
Book part
Publication date: 22 June 2021

John N. Moye

Abstract

Details

The Psychophysics of Learning
Type: Book
ISBN: 978-1-80117-113-7

Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, reliability and accuracy of resulting deformation…

Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation depend strongly on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and its results are intended as guidelines for practitioners who develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are investigating how point cloud registration error affects the resulting deformation estimation, identifying an appropriate segmentation method for extracting the data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the data point clouds and of the surface extraction strongly affects deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain a more comprehensive insight into those impacts, this study addresses four issues: data errors due to the registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings

The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based and voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG prove to be suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. Using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformation, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from the design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
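As an illustration of the RANSAC plane extraction and plane-based deformation step mentioned above, the following is a minimal sketch (assuming only NumPy). The synthetic point cloud, distance threshold and iteration count are illustrative assumptions; this is not the authors' CRG/VRG or smoothing implementation.

```python
# Minimal RANSAC plane-fitting sketch on a synthetic point cloud (NumPy only).
# Illustrative reconstruction, not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scanned" surface: a near-horizontal plane with 1 mm noise
# plus 5% gross outliers emulating mixed pixels.
n = 2000
xy = rng.uniform(0, 10, size=(n, 2))
z = 0.002 * xy[:, 0] + 0.001 * xy[:, 1] + rng.normal(0, 0.001, n)
outliers = rng.random(n) < 0.05
z[outliers] += rng.uniform(0.05, 0.2, outliers.sum())
pts = np.column_stack([xy, z])

def ransac_plane(points, threshold=0.003, iterations=500):
    """Return (normal, d, inlier_mask) of the best plane n·p + d = 0."""
    best_inliers, best_model = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

normal, d, inliers = ransac_plane(pts)
# Deformation proxy: signed distance of every point to the fitted plane.
deformation = pts @ normal + d
print(f"inlier ratio: {inliers.mean():.2%}, "
      f"max |deformation| of inliers: {np.abs(deformation[inliers]).max() * 1000:.2f} mm")
```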

Research limitations/implications

Owing to logistical difficulties, an independent measurement could not be established to assess the accuracy of the deformation estimated from the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with an accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors of these steps, and its results serve as guidelines for practitioners who develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications

The results of this study provide guidelines for practitioners who develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The low-cost method can be applied for deformation analysis of structures.

Originality/value

Although a large number of studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret them into reliable and accurate deformation when uncertainty, in both data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of the presence of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708

Open Access
Article
Publication date: 9 August 2023

Jie Zhang, Yuwei Wu, Jianyong Gao, Guangjun Gao and Zhigang Yang

This study aims to explore the formation mechanism of aerodynamic noise of a high-speed maglev train and understand the characteristics of dipole and quadrupole sound sources of…

Abstract

Purpose

This study aims to explore the formation mechanism of aerodynamic noise of a high-speed maglev train and understand the characteristics of dipole and quadrupole sound sources of the maglev train at different speed levels.

Design/methodology/approach

Based on the large eddy simulation (LES) method and the Kirchhoff–Ffowcs Williams and Hawkings (K-FWH) equations, the characteristics of the dipole and quadrupole sound sources of maglev trains at different speed levels were simulated and analyzed by constructing a reasonable penetrable integral surface.

Findings

The spatial disturbance resulting from the separation of the boundary layer in the streamlined area of the tail car is the source of the aerodynamic sound of the maglev train. The dipole sources of the train are mainly distributed around the radio terminals of the head and tail cars, the bottom of the arms of the streamlined parts of the head and tail cars and the nose tip area of the streamlined part of the tail car, while the quadrupole sources are mainly distributed in the wake area. When the train runs at the three speed levels of 400, 500 and 600 km·h−1, the radiated energy of the quadrupole source accounts for 62.4%, 63.3% and 71.7% of the total, respectively, exceeding that of the dipole sources.
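The growing quadrupole share at higher speeds is qualitatively consistent with classical aeroacoustic scaling, in which dipole radiated power grows roughly with the sixth power of flow speed and quadrupole power with the eighth. The sketch below is only a back-of-the-envelope illustration of that trend under a hypothetical calibration speed; the percentages reported in the paper come from the full LES/K-FWH computation.

```python
# Illustrative scaling check (not the paper's LES/K-FWH result): classical
# aeroacoustics gives dipole power ~ U^6 and quadrupole power ~ U^8, so the
# quadrupole share of radiated energy rises with train speed.
speeds_kmh = [400, 500, 600]

# Hypothetical calibration: assume the two source types radiate equally at 300 km/h.
U_ref = 300.0
for U in speeds_kmh:
    dipole = (U / U_ref) ** 6
    quadrupole = (U / U_ref) ** 8
    share = quadrupole / (dipole + quadrupole)
    print(f"{U} km/h: quadrupole share ≈ {share:.1%}")
```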

Originality/value

This study can help understand the aerodynamic noise characteristics generated by the high-speed maglev train and provide a reference for the optimization design of its aerodynamic shape.

Details

Railway Sciences, vol. 2 no. 3
Type: Research Article
ISSN: 2755-0907

Open Access
Article
Publication date: 3 December 2020

Tobias Otterbring, Christina Bodin Danielsson and Jörg Pareigis

This study aims to examine the links between office types (cellular, shared-room, small and medium-sized open-plan) and employees' subjective well-being regarding cognitive and…

Abstract

Purpose

This study aims to examine the links between office types (cellular, shared-room, small and medium-sized open-plan) and employees' subjective well-being regarding cognitive and affective evaluations, and the role that perceived noise levels at work play in these associations.

Design/methodology/approach

A survey with measures of office types, perceived noise levels at work and the investigated facets of subjective well-being (cognitive vs affective) was distributed to employees working as real estate agents in Sweden. In total, 271 useable surveys were returned and were analyzed using analyses of variance (ANOVAs) and a regression-based model mirroring a test of moderated mediation.
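As a rough illustration of this analysis pipeline, the sketch below runs a one-way ANOVA across office types and a simple regression-based mediation check on synthetic data (assuming NumPy, pandas, SciPy and statsmodels). The variable names, group effects and noise levels are hypothetical; the authors' actual model is a moderated mediation analysis of the survey responses.

```python
# Sketch of the analysis logic: ANOVA across office types, then a simple
# mediation check (office type -> perceived noise -> cognitive well-being).
# Synthetic data; not the authors' survey or model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
office = rng.choice(["cellular", "shared", "small_open", "medium_open"], size=271)
noise_penalty = {"cellular": 0.0, "shared": 0.3, "small_open": 0.8, "medium_open": 1.2}
perceived_noise = np.array([noise_penalty[o] for o in office]) + rng.normal(0, 0.5, 271)
cognitive_wb = 5.0 - 0.8 * perceived_noise + rng.normal(0, 0.7, 271)
df = pd.DataFrame({"office": office, "noise": perceived_noise, "cog_wb": cognitive_wb})

# One-way ANOVA: does cognitive well-being differ between office types?
groups = [g["cog_wb"].values for _, g in df.groupby("office")]
print(stats.f_oneway(*groups))

# Mediation logic via OLS regressions.
print(smf.ols("noise ~ C(office)", data=df).fit().params)           # path a
print(smf.ols("cog_wb ~ noise + C(office)", data=df).fit().params)  # paths b and c'
```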

Findings

A significant difference was found between office types on the well-being dimension related to cognitive, but not affective, evaluations. Employees working in cellular and shared-room offices reported significantly higher ratings on this dimension than employees working in open-plan offices, and employees in medium-sized open-plan offices reported significantly lower cognitive evaluation scores than employees working in all other office types. This pattern of results was mediated by perceived noise levels at work, with employees in open-plan (vs cellular and shared-room) offices reporting less satisfactory noise perceptions and, in turn, lower well-being scores, especially regarding the cognitive (vs affective) dimension.

Originality/value

This is one of the first studies to compare the relative impact of office types on both cognitive and affective well-being dimensions while simultaneously testing and providing empirical support for the presumed process explaining the link between such aspects.

Open Access
Article
Publication date: 29 July 2020

Abdullah Alharbi, Wajdi Alhakami, Sami Bourouis, Fatma Najar and Nizar Bouguila

We propose in this paper a novel reliable detection method to recognize forged inpainting images. Detecting potential forgeries and authenticating the content of digital images is…

Abstract

We propose in this paper a novel, reliable detection method to recognize images forged by inpainting. Detecting potential forgeries and authenticating the content of digital images is extremely challenging and important for many applications. The proposed approach involves developing new probabilistic support vector machine (SVM) kernels from a flexible generative statistical model named the "bounded generalized Gaussian mixture model". The developed learning framework has the advantage of properly combining the benefits of both discriminative and generative models and of including prior knowledge about the nature of the data. It can effectively recognize whether an image has been tampered with and identify both forged and authentic images. The obtained results confirm that the developed framework performs well on numerous inpainted images.
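A minimal sketch of the generative-discriminative idea is given below, assuming scikit-learn: image descriptors are passed through a fitted mixture model and the resulting posterior features are classified with an SVM. A standard Gaussian mixture stands in for the bounded generalized Gaussian mixture model used by the authors, and the descriptors are synthetic.

```python
# Generative-discriminative hybrid sketch: mixture-model features + SVM.
# A plain GaussianMixture replaces the authors' bounded generalized Gaussian
# mixture model; data are synthetic stand-ins for image descriptors.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic descriptors: "authentic" images near one cluster, "inpainted" near another.
X_auth = rng.normal(0.0, 1.0, size=(300, 16))
X_forg = rng.normal(0.8, 1.2, size=(300, 16))
X = np.vstack([X_auth, X_forg])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Generative stage: mixture model fitted on training descriptors.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X_tr)

# Discriminative stage: SVM on the mixture responsibilities (posterior features).
svm = SVC(kernel="rbf", C=1.0).fit(gmm.predict_proba(X_tr), y_tr)
print("test accuracy:", svm.score(gmm.predict_proba(X_te), y_te))
```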

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 4 December 2020

Fangli Mou and Dan Wu

In recent years, owing to rapidly increasing labor costs, the demand for robots in daily services and industrial operations has increased significantly. For further…

Abstract

Purpose

In recent years, owing to rapidly increasing labor costs, the demand for robots in daily services and industrial operations has increased significantly. For further applications and human–robot interaction in an unstructured open environment, fast and accurate tracking and a strong disturbance rejection ability are required. However, a conventional controller can make it difficult for the robot to meet these demands, and when a robot is required to perform at high speed over a large range of motion, conventional controllers may not perform effectively or may even lead to instability.

Design/methodology/approach

The main idea is to develop the control law by combining sliding mode control (SMC) feedback with the active disturbance rejection control (ADRC) architecture to improve the robustness and control quality of a conventional SMC controller. The problem is formulated and solved in the framework of ADRC. For better estimation and control performance, a generalized proportional integral observer (GPIO) technique is employed to estimate and compensate for unmodeled dynamics and other unknown time-varying disturbances. Benefiting from the use of the GPIO, a new SMC law can be designed by synthesizing the estimate and its history.
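The sketch below illustrates the combination on a toy second-order plant (NumPy only): an extended state observer estimates the total disturbance and a sliding-mode law is built on the estimates. A third-order linear observer stands in for the paper's GPIO, and the plant, disturbance and gains are illustrative assumptions rather than the authors' design.

```python
# Toy ADRC + sliding-mode simulation sketch (Euler integration, NumPy only).
# A third-order linear extended state observer approximates the GPIO role.
import numpy as np

dt, T = 1e-3, 5.0
steps = int(T / dt)
b0 = 1.0                      # nominal input gain
wo = 50.0                     # observer bandwidth
b1, b2, b3 = 3 * wo, 3 * wo**2, wo**3
c, k, eps = 5.0, 10.0, 0.05   # sliding-surface slope, switching gain, boundary layer

x1 = x2 = 0.0                 # plant states (position, velocity)
z1 = z2 = z3 = 0.0            # observer states (z3 estimates the total disturbance)

for i in range(steps):
    t = i * dt
    r, dr, ddr = np.sin(t), np.cos(t), -np.sin(t)        # reference trajectory
    d = 0.5 * np.sin(2 * t) + (2.0 if t > 2.5 else 0.0)  # unknown disturbance

    # Sliding-mode control law built on the observer estimates.
    e1, e2 = z1 - r, z2 - dr
    s = c * e1 + e2
    u = (ddr - c * e2 - k * np.tanh(s / eps) - z3) / b0

    # Plant (the -0.5*x2 term plays the role of unmodeled dynamics).
    x1 += dt * x2
    x2 += dt * (-0.5 * x2 + d + b0 * u)

    # Extended state observer update.
    e = z1 - x1
    z1 += dt * (z2 - b1 * e)
    z2 += dt * (z3 + b0 * u - b2 * e)
    z3 += dt * (-b3 * e)

print(f"tracking error at t = {T:.1f} s: {abs(x1 - np.sin(T)):.4f}")
```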

Findings

The employed methodology introduces a significant improvement in handling uncertainties of the system parameters without compromising the nominal control quality and intuitiveness of the conventional ADRC design. First, the proposed method combines the advantages of the ADRC and SMC methods and achieves the best tracking performance among the compared controllers. Second, the proposed controller is sufficiently robust to various disturbances and results in smaller tracking errors. Third, the proposed control method is insensitive to its control parameters, which indicates good application potential.

Originality/value

High-performance robot tracking control is the basis for further robot applications in open environments and human–robot interfaces, which require high tracking accuracy and strong disturbance rejection. However, both the varying dynamics of the system and its rapidly changing nonlinear coupling characteristics significantly increase the control difficulty. The proposed method provides a new replacement for the PID controller in robot systems: it does not require an accurate dynamic model of the system, is insensitive to its control parameters, and performs promisingly in terms of response rapidity and steady-state accuracy, as well as in the presence of strong unknown disturbances.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 1 no. 1
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model…

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
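A toy sketch of the link-prediction idea follows (NumPy only): a TransE-style embedding is trained on a tiny synthetic knowledge graph of experiments, and the missing uncertainty edge of one experiment is predicted by ranking candidate tails. The entities, relations and training scheme are hypothetical simplifications, not the authors' ontology or embedding model.

```python
# Toy TransE-style link prediction on a synthetic "experiments" knowledge graph.
# Hidden pattern: shock-tube experiments have low uncertainty, flow reactors high.
import numpy as np

rng = np.random.default_rng(3)

entities = ["exp1", "exp2", "exp3", "exp4",
            "shock_tube", "flow_reactor", "low_unc", "high_unc"]
relations = ["usesReactor", "hasUncertainty"]
E = {e: i for i, e in enumerate(entities)}
R = {r: i for i, r in enumerate(relations)}

triples = [("exp1", "usesReactor", "shock_tube"), ("exp1", "hasUncertainty", "low_unc"),
           ("exp2", "usesReactor", "shock_tube"), ("exp2", "hasUncertainty", "low_unc"),
           ("exp3", "usesReactor", "flow_reactor"), ("exp3", "hasUncertainty", "high_unc"),
           ("exp4", "usesReactor", "flow_reactor")]      # uncertainty edge is missing

dim, lr, margin = 8, 0.05, 1.0
ent = rng.normal(0, 0.1, (len(entities), dim))
rel = rng.normal(0, 0.1, (len(relations), dim))

def score(h, r, t):
    """Squared TransE distance ||h + r - t||^2 (smaller is better)."""
    diff = ent[h] + rel[r] - ent[t]
    return float(diff @ diff)

for epoch in range(2000):
    for h, r, t in triples:
        h, r, t = E[h], R[r], E[t]
        t_neg = rng.integers(len(entities))              # corrupted tail
        if margin + score(h, r, t) - score(h, r, t_neg) > 0:
            grad_pos = ent[h] + rel[r] - ent[t]
            grad_neg = ent[h] + rel[r] - ent[t_neg]
            ent[h] -= lr * (grad_pos - grad_neg)
            rel[r] -= lr * (grad_pos - grad_neg)
            ent[t] += lr * grad_pos
            ent[t_neg] -= lr * grad_neg
    # Keep entity embeddings on the unit sphere, as in standard TransE.
    ent /= np.maximum(np.linalg.norm(ent, axis=1, keepdims=True), 1e-9)

# Rank candidate tails for the missing (exp4, hasUncertainty, ?) edge.
for cand in ["low_unc", "high_unc"]:
    print(cand, round(score(E["exp4"], R["hasUncertainty"], E[cand]), 3))
```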

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embedding to predict the missing experimental uncertainty is a novel alternative to the current and more costly techniques in the literature. Such contribution permits a better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Open Access
Article
Publication date: 1 April 2021

Arunit Maity, P. Prakasam and Sarthak Bhargava

Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals is…

Abstract

Purpose

Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals is ever more significant.

Design/methodology/approach

A novel machine learning-based approach is proposed to detect DTMF tones affected by noise, frequency and time variations, employing the k-nearest neighbour (KNN) algorithm. The features required for training the proposed KNN classifier are extracted using Goertzel's algorithm, which estimates the absolute discrete Fourier transform (DFT) coefficient values at the fundamental DTMF frequencies, with or without their second harmonic frequencies. The proposed KNN classifier is configured in four different ways, which differ in being trained with or without augmented data as well as with or without the inclusion of the second harmonic frequency DFT coefficient values as features.
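The sketch below illustrates this pipeline (assuming NumPy and scikit-learn): Goertzel's algorithm extracts the |DFT| values at the eight DTMF frequencies and their second harmonics, and a KNN classifier labels noisy synthetic tones. The sampling rate, tone length, noise level and neighbour count are illustrative assumptions, not the paper's configuration.

```python
# DTMF detection sketch: Goertzel features + KNN classifier on synthetic tones.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 8000
LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477, 1633]
KEYPAD = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]
KEYS = {(LOW[r], HIGH[c]): KEYPAD[r][c] for r in range(4) for c in range(4)}

def goertzel(x, f, fs=FS):
    """Magnitude of the DFT coefficient of x closest to frequency f."""
    n = len(x)
    k = int(round(n * f / fs))
    coeff = 2 * np.cos(2 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
    return np.sqrt(max(power, 0.0))

def dtmf_tone(lo, hi, n=400, noise=0.2, rng=None):
    t = np.arange(n) / FS
    tone = np.sin(2 * np.pi * lo * t) + np.sin(2 * np.pi * hi * t)
    return tone + (rng.normal(0, noise, n) if rng is not None else 0.0)

def features(x):
    freqs = LOW + HIGH + [2 * f for f in LOW + HIGH]   # fundamentals + 2nd harmonics
    return [goertzel(x, f) for f in freqs]

rng = np.random.default_rng(4)
X, y = [], []
for (lo, hi), label in KEYS.items():
    for _ in range(20):                                # 20 noisy examples per key
        X.append(features(dtmf_tone(lo, hi, rng=rng)))
        y.append(label)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
test = features(dtmf_tone(770, 1336, rng=rng))         # digit "5" on a standard keypad
print(knn.predict([test]))
```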

Findings

It is found that the model trained on the augmented data set, which additionally includes as features the absolute DFT values at the second harmonics of the eight fundamental DTMF frequencies, achieved the best performance, with a macro classification F1 score of 0.980835, a five-fold stratified cross-validation accuracy of 98.47% and a test data set detection accuracy of 98.1053%.

Originality/value

The generated DTMF signals are classified and detected using the proposed KNN classifier, which utilizes the DFT coefficients of the fundamental and second harmonic frequencies for better classification. Additionally, the proposed KNN classifier is compared with existing models to ascertain its superiority and demonstrate its state-of-the-art performance.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Content available
Book part
Publication date: 31 January 2015

Abstract

Details

Bounded Rational Choice Behaviour: Applications in Transport
Type: Book
ISBN: 978-1-78441-071-1

Open Access
Article
Publication date: 9 July 2021

Jianran Liu, Bing Liang and Wen Ji

Artificial intelligence is gradually penetrating into human society. In the network era, the interaction between human and artificial intelligence, even between artificial…

Abstract

Purpose

Artificial intelligence is gradually penetrating human society. In the network era, the interaction between humans and artificial intelligence, and even between artificial intelligences, is becoming more and more complex. Therefore, it is necessary to describe and intervene in the evolution of crowd intelligence networks dynamically. This paper aims to detect abnormal agents at the early stage of intelligent evolution.

Design/methodology/approach

In this paper, differential evolution (DE) and K-means clustering are used to detect crowd intelligence with an abnormal evolutionary trend.

Findings

This study abstracts the evolution process of crowd intelligence into the solution process of DE and uses K-means clustering to identify, in the early stage of intelligent evolution, individuals who are not conducive to evolution.
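A minimal sketch of this combination is shown below (assuming NumPy and scikit-learn): a population evolves under differential evolution on a toy objective, and K-means on the agents' early fitness trajectories flags the cluster showing the weaker trend. The objective, population size and flagging criterion are illustrative assumptions, not the paper's crowd-intelligence model.

```python
# Differential evolution + K-means sketch: flag agents whose early fitness
# trajectories cluster around the worse outcomes. Toy objective, not the
# paper's crowd-intelligence formulation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

def fitness(x):                      # toy objective: sphere function (minimize)
    return np.sum(x**2)

dim, n_agents, early_gens = 5, 30, 15
F, CR = 0.6, 0.9
pop = rng.uniform(-5, 5, (n_agents, dim))
history = np.empty((n_agents, early_gens))

for g in range(early_gens):
    for i in range(n_agents):
        a, b, c = pop[rng.choice(n_agents, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):    # greedy selection
            pop[i] = trial
        history[i, g] = fitness(pop[i])

# Cluster early fitness trajectories; flag the cluster with the worse final fitness.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(history)
means = [history[labels == k, -1].mean() for k in range(2)]
abnormal = int(np.argmax(means))
print("flagged agents:", np.where(labels == abnormal)[0])
```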

Practical implications

Experiments show that the proposed method is able to identify, as early as possible, individual intelligences without an evolutionary trend, even in the complex crowd intelligence interaction environments of practical applications. As a result, it can avoid wasting time and computing resources.

Originality/value

In this paper, DE and K-means clustering are combined to analyze the evolution of crowd intelligent interaction.

Details

International Journal of Crowd Science, vol. 5 no. 2
Type: Research Article
ISSN: 2398-7294
