Search results

1 – 10 of 218
Open Access
Article
Publication date: 20 October 2022

Chongjun Wu, Dengdeng Shu, Hu Zhou and Zuchao Fu

Abstract

Purpose

In order to improve the robustness to noise in point cloud plane fitting, a combined model of improved Cook’s distance (ICOOK) and weighted total least squares (WTLS) is proposed by setting a modified Cook’s increment, which helps adaptively remove the noise points that exceed the threshold.

Design/methodology/approach

This paper proposes a robust point cloud plane fitting method based on ICOOK and WTLS to improve the robustness to noise in point cloud fitting. The ICOOK procedure used to denoise the initial point cloud was designed and verified with experiments. Meanwhile, the weighted total least squares (WTLS) method was adopted to fit a plane to the denoised point cloud set and obtain the plane equation.
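
For illustration only, a minimal Python sketch of the two ingredients, assuming a classic 4/n Cook’s distance cut-off in place of the paper’s modified Cook’s increment and a weighted-PCA plane fit as a stand-in for WTLS:

```python
import numpy as np

def cooks_distance_plane(points):
    """Cook's distance of each point w.r.t. an OLS fit z = a*x + b*y + c."""
    X = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    z = points[:, 2]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages (hat-matrix diagonal)
    p = X.shape[1]
    s2 = resid @ resid / (len(z) - p)
    return (resid**2 / (p * s2)) * h / (1.0 - h) ** 2

def weighted_plane_fit(points, weights):
    """WTLS-style plane fit via weighted PCA; returns (normal, d) with n.x + d = 0."""
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = points - centroid
    cov = (w[:, None] * centered).T @ centered
    _, vecs = np.linalg.eigh(cov)
    normal = vecs[:, 0]                      # eigenvector of the smallest eigenvalue
    return normal, -normal @ centroid

# Denoise with Cook's distance, then fit the surviving points.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (200, 3))
pts[:, 2] = 0.02 * rng.standard_normal(200)      # near-planar cloud
pts[:5, 2] += 0.5                                # a few injected noise points
D = cooks_distance_plane(pts)
kept = pts[D < 4.0 / len(pts)]                   # stand-in threshold, not the paper's
normal, d = weighted_plane_fit(kept, np.ones(len(kept)))
print("plane normal:", normal, "offset:", d)
```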

Findings

(a) A threshold-adaptive Cook’s distance method is designed, which can automatically match a suitable threshold. (b) The ICOOK is fused with the WTLS method, and simulation experiments and the actual fitting of the surface of a DD motor are carried out to verify the practical application. (c) The results show that the plane fitting accuracy and unit weight variance of the proposed algorithm are substantially improved.

Originality/value

The existing point cloud plane fitting methods are not robust to noise, so a robust point cloud plane fitting method based on a combined model of ICOOK and WTLS is proposed.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 3 no. 2
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 25 February 2020

Zsolt Tibor Kosztyán, Tibor Csizmadia, Zoltán Kovács and István Mihálcz

Abstract

Purpose

The purpose of this paper is to generalize the traditional risk evaluation methods and to specify a multi-level risk evaluation framework, in order to prepare customized risk evaluation and to enable the effective integration of the elements of risk evaluation.

Design/methodology/approach

A real case study of an electric motor manufacturing company is presented to illustrate the advantages of this new framework compared to the traditional and fuzzy failure mode and effect analysis (FMEA) approaches.
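
As a hedged illustration of the customization idea only (the TREF structure itself is not specified in this abstract), the sketch below contrasts the traditional FMEA risk priority number with a pluggable, firm-specific aggregation; the failure modes and weights are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FailureMode:
    name: str
    severity: int     # 1..10
    occurrence: int   # 1..10
    detection: int    # 1..10

def rpn(fm: FailureMode) -> float:
    """Traditional FMEA risk priority number."""
    return fm.severity * fm.occurrence * fm.detection

def weighted_risk(fm: FailureMode, w=(0.5, 0.3, 0.2)) -> float:
    """One possible firm-specific aggregation; the weights are illustrative."""
    return w[0] * fm.severity + w[1] * fm.occurrence + w[2] * fm.detection

def rank(modes, aggregate: Callable[[FailureMode], float]):
    """Order failure modes by whichever risk measure the firm adopts."""
    return sorted(modes, key=aggregate, reverse=True)

modes = [FailureMode("winding short", 8, 3, 4), FailureMode("bearing wear", 6, 6, 3)]
print([m.name for m in rank(modes, rpn)])            # traditional ordering
print([m.name for m in rank(modes, weighted_risk)])  # customized ordering differs
```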

Findings

The essence of the proposed total risk evaluation framework (TREF) is its flexible approach that enables the effective integration of firms’ individual requirements by developing tailor-made organizational risk evaluation.

Originality/value

Increasing product/service complexity has led to increasingly complex yet unique organizational operations; as a result, their risk evaluation is a very challenging task. Distinct structures, characteristics and processes within and between organizations require a flexible yet robust approach to evaluating risks efficiently. Most recent risk evaluation approaches are considered inadequate due to their lack of flexibility and an inappropriate structure for addressing unique organizational demands and contextual factors. To address this challenge effectively, this paper takes a crucial step toward the customization of risk evaluation.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 4
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 7 August 2019

Jinbao Zhang, Yongqiang Zhao, Ming Liu and Lingxian Kong

Abstract

Purpose

A generalized distribution with a wide range of skewness and elongation is well suited to data mining and compatible with misspecification of the distribution. Hence, the purpose of this paper is to present a distribution-based approach for estimating degradation reliability under these conditions.

Design/methodology/approach

Tukey’s g-and-h distribution with the quantile expression is introduced to fit the degradation paths of the population over time. The Newton–Raphson algorithm is used to approximately evaluate the reliability. Simulation verification of parameter estimation with particle swarm optimization (PSO) is carried out. The effectiveness and validity of the proposed approach for degradation reliability are verified by two-stage verification and comparison with other work.
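
A minimal sketch of the g-and-h quantile function and an RMSE-based parameter fit is shown below; SciPy’s differential evolution stands in for the paper’s PSO, since PSO is not part of SciPy, and the simulated sample is illustrative:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import differential_evolution

def gh_quantile(p, a, b, g, h):
    """Tukey g-and-h quantile: Q(p) = a + b*((e^{g z} - 1)/g)*e^{h z^2/2}, z = Phi^{-1}(p)."""
    z = norm.ppf(p)
    core = z if abs(g) < 1e-8 else (np.exp(g * z) - 1.0) / g
    return a + b * core * np.exp(h * z**2 / 2.0)

def fit_gh(sample):
    """Estimate (a, b, g, h) by matching empirical quantiles with minimum RMSE."""
    sample = np.sort(sample)
    p = (np.arange(1, len(sample) + 1) - 0.5) / len(sample)   # plotting positions
    def rmse(theta):
        return np.sqrt(np.mean((gh_quantile(p, *theta) - sample) ** 2))
    bounds = [(-10, 10), (1e-3, 10), (-2, 2), (0, 1)]         # a, b, g, h
    return differential_evolution(rmse, bounds, seed=1).x

rng = np.random.default_rng(1)
data = gh_quantile(rng.uniform(0.001, 0.999, 500), a=1.0, b=0.5, g=0.3, h=0.1)
print(fit_gh(data))    # should roughly recover (1.0, 0.5, 0.3, 0.1)
```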

Findings

Simulation studies have proved the effectiveness of PSO in the parameter estimation. Two degradation datasets, of GaAs laser devices and of crack growth, are analyzed with the proposed approach. The results show that it matches the initial failure time well and is more compatible than the normal and Weibull distributions.

Originality/value

Tukey’s g-and-h distribution is first proposed to investigate the influence of the tail and the skewness on degradation reliability. In addition, the parameters of Tukey’s g-and-h distribution are estimated by PSO with root-mean-square error as the objective function.

Details

Engineering Computations, vol. 36 no. 5
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 14 August 2017

Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang

Abstract

Purpose

This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.

Design/methodology/approach

In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query streams, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed.
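
As a hedged illustration of multi-valued truth discovery (a simple reliability-weighted voting baseline, not the paper’s graph-based algorithm), consider the sketch below; the claims and acceptance threshold are hypothetical:

```python
from collections import defaultdict

def truth_discovery(claims, iters=10, accept=0.5):
    """claims: list of (source, subject_predicate, value) triples."""
    reliability = {s: 0.8 for s, _, _ in claims}        # initial trust in each source
    scores = {}
    for _ in range(iters):
        # Score every candidate value by the reliability mass supporting it.
        votes, support = defaultdict(float), defaultdict(set)
        for s, key, v in claims:
            votes[(key, v)] += reliability[s]
            support[key].add(s)
        totals = {k: sum(reliability[s] for s in srcs) for k, srcs in support.items()}
        scores = {(k, v): votes[(k, v)] / totals[k] for (k, v) in votes}
        # A source is as reliable as the average score of the values it claims.
        per_source = defaultdict(list)
        for s, key, v in claims:
            per_source[s].append(scores[(key, v)])
        reliability = {s: sum(xs) / len(xs) for s, xs in per_source.items()}
    # Multi-valued: keep every value whose score clears the threshold.
    return {kv for kv, score in scores.items() if score >= accept}

claims = [("web",  "obama.child", "Malia"), ("web", "obama.child", "Sasha"),
          ("kb",   "obama.child", "Malia"), ("kb",  "obama.child", "Sasha"),
          ("spam", "obama.child", "Bo")]
print(truth_discovery(claims))    # both true values survive, the spam value does not
```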

Findings

Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed.

Originality/value

To revolutionize our modern society by using the wisdom of Big Data, considerable KBs have been constructed to feed massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been devoted to both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume that each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system is proposed, which consists of two phases, namely, knowledge extraction and truth discovery, to construct a broader KB, called GrandBase.

Details

PSU Research Review, vol. 1 no. 2
Type: Research Article
ISSN: 2399-1747

Open Access
Article
Publication date: 29 February 2024

Olfa Ben Salah and Anis Jarboui

Abstract

Purpose

The objective of this paper is to investigate the direction of the causal relationship between dividend policy (DP) and earnings management (EM).

Design/methodology/approach

This research utilizes panel data analysis to investigate the causal relationship between EM and DP. It provides empirical insights based on a sample of 280 French nonfinancial companies listed on the CAC All-Tradable index during the period 2008–2015. The study starts with a Granger causality examination on the unbalanced panel data and employs a dynamic panel approach with the generalized method of moments (GMM). It then estimates the empirical models simultaneously using the three-stage least squares (3SLS) method and the iterative three-stage least squares (iterative 3SLS) method.
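
A minimal sketch of the firm-by-firm Granger causality step is given below, using statsmodels; the GMM, 3SLS and iterative 3SLS estimations are not reproduced, and the toy panel and column names are assumptions, not the paper’s data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def panel_granger_pvalues(panel, caused, causing, maxlag=1):
    """p-value per firm for H0: `causing` does not Granger-cause `caused`.
    panel: DataFrame with columns ['firm', 'year', caused, causing]."""
    pvals = {}
    for firm, g in panel.sort_values("year").groupby("firm"):
        if len(g) <= 3 * maxlag + 1:          # series too short for the test
            continue
        res = grangercausalitytests(g[[caused, causing]].to_numpy(), maxlag)
        pvals[firm] = res[maxlag][0]["ssr_ftest"][1]
    return pd.Series(pvals)

# Hypothetical usage on a toy balanced panel (column names are assumptions):
rng = np.random.default_rng(2)
toy = pd.DataFrame({"firm": np.repeat(range(30), 8),
                    "year": np.tile(range(2008, 2016), 30),
                    "payout": rng.normal(size=240),
                    "accruals": rng.normal(size=240)})
print(panel_granger_pvalues(toy, caused="payout", causing="accruals").describe())
```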

Findings

The estimation of our various empirical models confirms the presence of a bidirectional causal relationship between DP and EM.

Practical implications

Our study highlights the prevalence of EM in the French context, particularly within DP. It underscores the need for regulatory bodies, the Ministry of Finance, external auditors and stock exchange organizers to prioritize governance mechanisms for improving the quality of financial information disclosed by companies.

Originality/value

This research is, to the best of our knowledge, the first to extensively investigate the reciprocal causal relationship between DP and EM in France. Previous studies have not placed significant emphasis on exploring this bidirectional link between these two variables.

Details

Journal of Economics, Finance and Administrative Science, vol. 29 no. 57
Type: Research Article
ISSN: 2077-1886

Open Access
Article
Publication date: 14 November 2023

Leiting Zhao, Kan Liu, Donghui Liu and Zheming Jin

Abstract

Purpose

This study aims to improve the availability of regenerative braking for urban metro vehicles by introducing a sensorless operational temperature estimation method for the braking resistor (BR) onboard the vehicle, which overcomes the vulnerability of a conventional temperature sensor.

Design/methodology/approach

In this study, an energy-model-based sensorless estimation method is developed. By analyzing the structure and the convection dissipation process of the BR onboard the vehicle, an energy-based operational temperature model of the BR and its cooling domain is established. By adopting Newton's law of cooling and the law of conservation of energy, the energy and temperature dynamics of the BR can be stated. To minimize the use of sensors of all kinds (both thermal and electrical), a novel regenerative braking power calculation method is proposed, which involves only the voltage of the DC traction network and the duty cycle of the chopping circuit, both of which are available to the traction control unit (TCU) of the vehicle. By running a real-time iterative calculation and updating the parameters of the energy model, the operational temperature of the BR can be obtained and monitored in a sensorless manner.
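
A minimal sketch of the underlying energy balance, assuming placeholder physical parameters rather than values from the paper: braking power is recovered from the DC-link voltage and chopper duty cycle, and the temperature follows a lumped Newton-cooling update:

```python
from dataclasses import dataclass

@dataclass
class BrakingResistorModel:
    R: float = 2.0        # BR resistance, ohm (placeholder)
    m_c: float = 50e3     # lumped heat capacity m*c of BR + cooling domain, J/K (placeholder)
    hA: float = 60.0      # convection coefficient x area, W/K (placeholder)
    T_amb: float = 25.0   # ambient temperature, deg C
    T: float = 25.0       # current temperature estimate, deg C

    def step(self, v_dc: float, duty: float, dt: float) -> float:
        """Advance the estimate by dt seconds using only TCU-available signals."""
        p_in = duty * v_dc ** 2 / self.R              # dissipated braking power, W
        p_out = self.hA * (self.T - self.T_amb)       # convective heat loss, W
        self.T += dt * (p_in - p_out) / self.m_c      # energy balance -> temperature
        return self.T

model = BrakingResistorModel()
for _ in range(30):                                   # 30 s of braking at 1500 V, 40% duty
    model.step(v_dc=1500.0, duty=0.4, dt=1.0)
print(f"estimated BR temperature: {model.T:.1f} degC")
```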

Findings

In this study, a sensorless estimation/monitoring method for the operational temperature of the BR is proposed. The results show that it is possible to use the existing electrical sensors that are mandatory for the traction unit’s operation to estimate the operational temperature of the BR, instead of adding dedicated thermal sensors. The results also validate that the effectiveness of the proposal is acceptable for engineering practice.

Originality/value

The proposal of this study provides a novel concept for the sensorless operational temperature monitoring of BRs onboard rolling stock. The proposed method involves only quasi-global electrical variables and the internal control signal within the TCU.

Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which is not fully addressed. This study aims to give insight into the errors of these steps, and the results of the study would be guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. Thus, the main contributions of the paper are investigating the point cloud registration error affecting the resulting deformation estimation, identifying an appropriate segmentation method to extract data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the data point clouds and of surface extraction strongly impacts the resulting deformation estimation based on laser scanning point clouds, which can cause an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). This investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.
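
For Issue 2, a minimal sketch of RANSAC plane extraction and signed-distance deformation is given below; the cell-based/voxel-based region growing methods (CRG/VRG) are not reproduced, and the synthetic cloud is purely illustrative:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, iterations=500, seed=0):
    """Return (normal, d, inlier_mask) for the best-supported plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-12:           # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

# Deformation of each point as its signed distance to the extracted plane:
rng = np.random.default_rng(3)
cloud = np.c_[rng.uniform(0, 5, (1000, 2)), 0.003 * rng.standard_normal(1000)]
cloud[::50, 2] += 0.05                          # simulated mixed pixels / outliers
n, d, inliers = ransac_plane(cloud)
deformation = cloud @ n + d
print(inliers.sum(), "inliers; max |deformation| =", np.abs(deformation).max())
```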

Findings

The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. The use of a fitting plane based on a point cloud of the current surface would cause unrealistic and inaccurate deformation because outlier data points and data points of damaged areas affect the accuracy of the fitting plane. This study recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Research limitations/implications

Due to logistical difficulties, an independent measurement could not be established to assess the deformation accuracy based on the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors of these steps, and the results of the study would be guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications

The results of this study would provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.

Originality/value

Although many studies have used laser scanning to measure structural deformation in the last two decades, the methods mainly applied were to measure change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative unit to acquire spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of identifying reference information, and of the presence of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708

Open Access
Article
Publication date: 17 December 2019

Yin Kedong, Shiwei Zhou and Tongtong Xu

Abstract

Purpose

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for indicator primary selection and optimization inspection. The purpose of this paper is to provide theoretical guidance and reference standards for the indicator system design process, laying a solid foundation for the application of the indicator system, by systematically exploring the expert evaluation method to optimize the index system so as to enhance its credibility and reliability, improve its resolution and accuracy, and reduce its subjectivity and randomness.

Design/methodology/approach

The paper is based on system theory and statistics, and it designs the main line of “relevant theoretical analysis – identification of indicators – expert assignment and quality inspection” to achieve the design and optimization of the indicator system. First, theoretical basis analysis, relevant factor analysis and physical process description are used to clarify the comprehensive evaluation problem and the correlation mechanism. Second, system structure analysis, hierarchical decomposition and indicator set identification are used to complete the initial establishment of the indicator system. Third, based on expert assignment methods such as Delphi assignment, statistical analysis, t-tests and non-parametric tests are used to complete the expert assignment quality diagnosis of a single index; reliability and validity tests are used to perform single-index assignment correction; and a consistency test based on the Kendall coordination coefficient and F-test is used for multi-indicator expert assignment quality diagnosis.
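
A minimal sketch of the multi-indicator concordance check via Kendall’s coordination coefficient W and its chi-square approximation (ties ignored for brevity; the expert rank matrix is hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def kendalls_w(ranks):
    """ranks: (m experts x n indicators) matrix, each row a ranking 1..n."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()
    W = 12.0 * S / (m ** 2 * (n ** 3 - n))
    chi_sq = m * (n - 1) * W                     # approximate significance test
    return W, chi2.sf(chi_sq, df=n - 1)

experts = np.array([[1, 2, 3, 4, 5],
                    [2, 1, 3, 5, 4],
                    [1, 3, 2, 4, 5],
                    [1, 2, 4, 3, 5]])
W, p = kendalls_w(experts)
print(f"Kendall's W = {W:.3f}, p = {p:.4f}")     # high W => experts broadly agree
```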

Findings

Compared with the traditional index system construction method, the optimization process used in the study standardizes the process of index establishment, reduces subjectivity and randomness, and enhances objectivity and scientific rigor.

Originality/value

The innovation and value of the paper are embodied in three aspects. First, in the system design process of the combined indicator system, multi-dimensional index screening and system optimization are carried out to ensure that the index system is scientific, reasonable and comprehensive. Second, the experts’ backgrounds are comprehensively evaluated; the objectivity and reliability of the experts’ assignments are analyzed and improved on the basis of traditional methods. Third, aiming at the quality of expert assignment, t-tests and non-parametric tests of single indicators, as well as coordination and importance tests of multiple indicators, are conducted, which enhances the practicality of expert assignment and ensures its quality.

Details

Marine Economics and Management, vol. 2 no. 1
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 18 October 2022

Ramy Shaheen, Suhail Mahfud and Ali Kassem

Abstract

Purpose

This paper aims to study irreversible conversion processes, which examine the spread of a one-way change of state (from state 0 to state 1) through a specified society (the spread of disease through populations, the spread of opinion through social networks, etc.), where the conversion rule is determined at the beginning of the study. These processes can be modeled as graph-theoretic models in which the vertex set V(G) represents the set of individuals over which the conversion spreads.

Design/methodology/approach

The irreversible k-threshold conversion process on a graph G = (V, E) is an iterative process which starts by choosing a set S_0 ⊆ V, and for each step t (t = 1, 2, …), S_t is obtained from S_(t−1) by adjoining all vertices that have at least k neighbors in S_(t−1). S_0 is called the seed set of the k-threshold conversion process, and it is called an irreversible k-threshold conversion set (IkCS) of G if S_t = V(G) for some t ≥ 0. The minimum cardinality over all IkCSs of G is referred to as the irreversible k-threshold conversion number of G and is denoted by C_k(G).
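
A direct simulation of this definition, plus a brute-force search for C_k(G) on a small graph, is sketched below; the example grid graph is illustrative only and not one of the graph families studied in the paper:

```python
import itertools
import networkx as nx

def k_threshold_conversion(G, seed, k):
    """Iterate the conversion rule until a fixed point; return the converted set."""
    converted = set(seed)
    while True:
        newly = {v for v in G if v not in converted
                 and sum(1 for u in G.neighbors(v) if u in converted) >= k}
        if not newly:
            return converted
        converted |= newly

def conversion_number(G, k):
    """Brute-force C_k(G) for small graphs: smallest seed set converting all of V(G)."""
    for size in range(len(G) + 1):
        for seed in itertools.combinations(G.nodes, size):
            if k_threshold_conversion(G, seed, k) == set(G.nodes):
                return size, set(seed)

G = nx.grid_2d_graph(4, 4)            # small 4 x 4 grid, for illustration only
print(conversion_number(G, k=2))
```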

Findings

In this paper the authors determine C_k(G) for the generalized Jahangir graph J_(s,m) for 1 < k ≤ m, with s and m arbitrary. The authors also determine C_k(G) for the strong grids P_2 ⊠ P_n when k = 4, 5. Finally, the authors determine C_2(G) for P_n ⊠ P_n when n is arbitrary.

Originality/value

This work is 100% original and has important applications in real-life problems such as anti-bioterrorism.

Details

Arab Journal of Mathematical Sciences, vol. 30 no. 1
Type: Research Article
ISSN: 1319-5166

Open Access
Article
Publication date: 18 April 2023

Worapan Kusakunniran, Pairash Saiviroonporn, Thanongchai Siriapisith, Trongtum Tongdee, Amphai Uraiverotchanakorn, Suphawan Leesakul, Penpitcha Thongnarintr, Apichaya Kuama and Pakorn Yodprom

Abstract

Purpose

Cardiomegaly can be determined by the cardiothoracic ratio (CTR), which can be measured in a chest x-ray image. It is calculated based on the relationship between the size of the heart and the transverse dimension of the chest. Cardiomegaly is identified when the ratio is larger than a cut-off threshold. This paper aims to propose a solution to calculate the ratio for classifying cardiomegaly in chest x-ray images.

Design/methodology/approach

The proposed method begins with constructing lung and heart segmentation models based on the U-Net architecture, using publicly available datasets with ground truth heart and lung masks. The ratio is then calculated using the sizes of the segmented lung and heart areas. In addition, Progressive Growing of GANs (PGAN) is adopted here to construct a new dataset containing chest x-ray images of three classes: male normal, female normal and cardiomegaly. This dataset is then used for evaluating the proposed solution. The proposed solution is also used to evaluate the quality of the chest x-ray images generated by PGAN.
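
Once heart and lung masks are available, the CTR step reduces to a width ratio. A simplified sketch, assuming the widest horizontal extent of each mask approximates the clinical measurement, with toy masks standing in for U-Net outputs:

```python
import numpy as np

def widest_extent(mask):
    """Maximum horizontal width (in pixels) over all rows of a binary mask."""
    widths = []
    for row in mask:
        xs = np.flatnonzero(row)
        if xs.size:
            widths.append(xs[-1] - xs[0] + 1)
    return max(widths) if widths else 0

def cardiothoracic_ratio(heart_mask, lung_mask, cutoff=0.50):
    """CTR = widest heart extent / widest thoracic (lung) extent; flag cardiomegaly."""
    ctr = widest_extent(heart_mask) / widest_extent(lung_mask)
    return ctr, ctr > cutoff

# Toy masks standing in for U-Net outputs (1 = segmented pixel):
heart = np.zeros((8, 10), int); heart[3:6, 3:8] = 1   # heart width 5 px
lungs = np.zeros((8, 10), int); lungs[1:7, 1:9] = 1   # thoracic width 8 px
print(cardiothoracic_ratio(heart, lungs))             # (0.625, True)
```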

Findings

In the experiments, the trained models are applied to segment regions of the heart and lungs in chest x-ray images from the self-collected dataset. The calculated CTR values are compared with values manually measured by human experts; the average error is 3.08%. The models are then also applied to segment regions of the heart and lungs for the CTR calculation on the dataset generated by PGAN, and cardiomegaly is determined using various cut-off threshold values. With the standard cut-off of 0.50, the proposed method achieves 94.61% accuracy, 88.31% sensitivity and 94.20% specificity.

Originality/value

The proposed solution is demonstrated to be robust across unseen datasets for the segmentation, CTR calculation and cardiomegaly classification, including the dataset generated by PGAN. The cut-off value can be set lower than 0.50 to increase the sensitivity: for example, a sensitivity of 97.04% can be achieved at a cut-off of 0.45. However, the specificity then decreases from 94.20% to 79.78%.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964
