Search results

1 – 10 of over 1000
Article
Publication date: 4 April 2024

Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu

Abstract

Purpose

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.

Design/methodology/approach

The method involves an alternating two-step framework comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences, covering both global and local features, are coupled with deterministic annealing to address the non-convexity of the registration problem. For transformation updating, an expectation-maximization iteration scheme is introduced to iteratively refine the correspondence and transformation estimates until convergence.
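
The alternating two-step scheme described above can be illustrated with a toy sketch. This is not the paper's method: it is a pure-Python, translation-only simplification without the integrated feature divergences or the non-rigid model. The E-step computes soft correspondences from Gaussian posteriors, the M-step re-estimates the transformation in closed form, and the Gaussian width is shrunk each pass (deterministic annealing).

```python
import math

def em_register(X, Y, sigma=1.0, anneal=0.95, iters=50):
    """Toy EM point-set alignment: estimate a 2D translation mapping Y onto X.

    E-step: soft correspondences from Gaussian posteriors.
    M-step: closed-form translation update.
    sigma shrinks each iteration (deterministic annealing).
    """
    t = [0.0, 0.0]  # current translation estimate
    for _ in range(iters):
        # E-step: posterior P[n][m] that target x_n matches moved source y_m + t
        P = []
        for x in X:
            w = [math.exp(-((x[0] - y[0] - t[0])**2 + (x[1] - y[1] - t[1])**2)
                          / (2 * sigma**2)) for y in Y]
            s = sum(w) or 1e-12
            P.append([wi / s for wi in w])
        # M-step: weighted least-squares translation update
        num = [0.0, 0.0]
        den = 0.0
        for n, x in enumerate(X):
            for m, y in enumerate(Y):
                num[0] += P[n][m] * (x[0] - y[0])
                num[1] += P[n][m] * (x[1] - y[1])
                den += P[n][m]
        t = [num[0] / den, num[1] / den]
        sigma *= anneal  # annealing schedule sharpens the correspondences
    return t
```

As sigma shrinks, the soft correspondences harden and the estimate settles on the true offset; the paper replaces the plain Gaussian posterior with its integrated feature divergence and a non-rigid transformation model.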

Findings

The experiments confirm that the proposed registration approach is remarkably robust to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. The method is applied to stabilizing and securing intermodal containers loaded on ships. The results demonstrate that the proposed registration framework adapts well to real-scan point clouds and achieves comparatively superior alignments in a shorter time.

Originality/value

The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 28 February 2023

Meltem Aksoy, Seda Yanık and Mehmet Fatih Amasyali

Abstract

Purpose

When a large number of project proposals are evaluated to allocate available funds, grouping them based on their similarities is beneficial. Current approaches to group proposals are primarily based on manual matching of similar topics, discipline areas and keywords declared by project applicants. When the number of proposals increases, this task becomes complex and requires excessive time. This paper aims to demonstrate how to effectively use the rich information in the titles and abstracts of Turkish project proposals to group them automatically.

Design/methodology/approach

This study proposes a model that effectively groups Turkish project proposals by combining word embedding, clustering and classification techniques. The proposed model uses FastText, BERT and term frequency/inverse document frequency (TF/IDF) word-embedding techniques to extract terms from the titles and abstracts of project proposals in Turkish. The extracted terms were grouped using both clustering and classification techniques. Natural groups contained within the corpus were discovered using the k-means, k-means++, k-medoids and agglomerative clustering algorithms. Additionally, this study employs classification approaches to predict the target class for each document in the corpus. To classify project proposals, various classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), artificial neural networks (ANN), classification and regression trees (CART) and random forest (RF), were used. Empirical experiments were conducted to validate the effectiveness of the proposed method using real data from the Istanbul Development Agency.
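
As background for the TF/IDF step mentioned above, here is a minimal sketch (illustrative only, not the authors' code; a real pipeline would use a library vectorizer and proper Turkish tokenization). Documents become sparse term-weight vectors, and cosine similarity between vectors then drives clustering or classification.

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF vectorizer: docs are lists of tokens; returns term->weight dicts."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Terms appearing in every proposal (like a boilerplate word) get zero weight, so similarity is driven by the distinguishing vocabulary.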

Findings

The results show that the generated word embeddings can effectively represent proposal texts as vectors, and can be used as inputs for clustering or classification algorithms. Using clustering algorithms, the document corpus is divided into five groups. In addition, the results demonstrate that the proposals can easily be categorized into predefined categories using classification algorithms. SVM-Linear achieved the highest prediction accuracy (89.2%) with the FastText word embedding method. A comparison of manual grouping with automatic classification and clustering results revealed that both classification and clustering techniques have a high success rate.

Research limitations/implications

The proposed model automatically benefits from the rich information in project proposals and significantly reduces numerous time-consuming tasks that managers must perform manually. Thus, it eliminates the drawbacks of the current manual methods and yields significantly more accurate results. In the future, additional experiments should be conducted to validate the proposed method using data from other funding organizations.

Originality/value

This study presents the application of word embedding methods to effectively use the rich information in the titles and abstracts of Turkish project proposals. Existing research on the automatic grouping of proposals uses traditional frequency-based word embedding methods for feature extraction to represent project proposals. Unlike previous research, this study employs two high-performing neural network-based textual feature extraction techniques to obtain terms representing the proposals: BERT as a contextual word embedding method and FastText as a static word embedding method. Moreover, to the best of our knowledge, no research has been conducted on the grouping of project proposals in Turkish.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 12 October 2023

Xiaoyu Liu, Feng Xu, Zhipeng Zhang and Kaiyu Sun

Abstract

Purpose

Fall accidents can cause casualties and economic losses in the construction industry. Fall portents, such as loss of balance (LOB) and sudden sways, can precede fatal, nonfatal or attempted fall accidents. All of these are worth studying in order to take measures that prevent future accidents. Detecting fall portents can proactively and comprehensively help managers assess the risk to workers and the construction environment, and further prevent fall accidents.

Design/methodology/approach

This study focused on the postures of workers and aimed to directly detect fall portents using a computer vision (CV)-based noncontact approach. Firstly, a joint coordinate matrix generated from a three-dimensional pose estimation model is employed, and then the matrix is preprocessed by principal component analysis, K-means and pre-experiments. Finally, a modified fusion K-nearest neighbor-based machine learning model is built to fuse information from the x, y and z axes and output the worker's pose status into three stages.
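
The final step, a k-NN vote that outputs a status label plus a confidence probability, can be sketched generically. This is illustrative only and omits the paper's axis-fusion modification; the feature vectors and labels below are invented.

```python
from collections import Counter

def knn_pose_status(train, query, k=3):
    """Toy k-NN over pose feature vectors.

    train: list of (feature_vector, label) pairs, labels e.g.
    'steady' / 'unsteady' / 'fallen'.
    Returns (majority label, vote fraction as a confidence).
    """
    # Rank training examples by squared Euclidean distance to the query
    ranked = sorted(train,
                    key=lambda ex: sum((a - b)**2 for a, b in zip(ex[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    label, count = votes.most_common(1)[0]
    return label, count / k
```

The vote fraction plays the role of the per-category confidence probability mentioned in the findings; the paper additionally fuses distances computed separately along the x, y and z axes.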

Findings

The proposed model outputs the worker's pose status in three stages (steady–unsteady–fallen) and provides corresponding confidence probabilities for each category. Experiments conducted to evaluate the approach show that the model accuracy reaches 85.02% with threshold-based postprocessing. The proposed fall-portent detection approach can assess the fall risk of workers in both the pre- and post-event phases using a noncontact approach.

Research limitations/implications

First, three-dimensional (3D) pose estimation needs sufficient information, which means it may not perform well when applied in complicated environments or when the shooting distance is extremely large. Second, solely focusing on fall-related factors may not be comprehensive enough. Future studies can incorporate the results of this research as an indicator into the risk assessment system to achieve a more comprehensive and accurate evaluation of worker and site risk.

Practical implications

The proposed machine learning model determines whether the worker is in a steady, unsteady or fallen status using a CV-based approach. From the perspective of construction management, when detecting fall-related actions on construction sites, the noncontact CV-based approach has the irreplaceable advantages of no interruption to workers and low cost. It can make use of the surveillance cameras on construction sites to recognize both precursor events and accidents that have already occurred. The detection of fall portents can support worker risk assessment and safety management.

Originality/value

Existing studies using sensor-based approaches are high-cost and invasive for construction workers, while existing CV-based approaches either oversimplify the problem into a binary classification that ignores part of the fall process or detect fall portents only indirectly. Instead, this study detects fall portents directly from the worker's posture and divides the entire fall process into three stages using a CV-based noncontact approach. This can help managers carry out more comprehensive risk assessments and develop preventive measures.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 21 June 2023

Sudhaman Parthasarathy and S.T. Padmapriya

Abstract

Purpose

Algorithm bias refers to repetitive computer program errors that give some users more weight than others. The aim of this article is to provide deeper insight into algorithm bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it remains mostly anecdotal and scattered.

Design/methodology/approach

As guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may encounter with an enterprise resource planning (ERP) software customization algorithm. This study then presents an artificial intelligence (AI) version of the ERP customization algorithm using the k-nearest neighbours algorithm.

Findings

This study illustrates the possible bias when the prioritized requirements customization estimation (PRCE) algorithm available in the ERP literature is executed without any AI. The authors then present their newly developed AI version of the PRCE algorithm, which uses ML techniques, and discuss its accompanying algorithmic bias with an illustration. Further, the authors draw a roadmap for managing algorithmic bias during ERP customization in practice.

Originality/value

To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 3 no. 2
Type: Research Article
ISSN: 2633-7436

Article
Publication date: 28 February 2024

Magdalena Saldana-Perez, Giovanni Guzmán, Carolina Palma-Preciado, Amadeo Argüelles-Cruz and Marco Moreno-Ibarra

Abstract

Purpose

Climate change is a problem that concerns all of us. Despite the information produced by organizations such as the Expert Team on Climate Change Detection and Indices and the United Nations, only a few cities have been planned taking climate change indices into account. This paper aims to study climatic variations, how climate conditions might change in the future and how these changes will affect activities and living conditions in cities, focusing specifically on Mexico City.

Design/methodology/approach

In this approach, two distinct machine learning regression models, k-Nearest Neighbors and Support Vector Regression, were used to predict variations in climate change indices within selected urban areas of Mexico City. The calculated indices are based on maximum, minimum and average temperature data collected from the National Water Commission in Mexico and the Scientific Research Center of Ensenada. The methodology involves pre-processing the temperature data to create a training data set for the regression algorithms, computing predictions for each temperature parameter and, finally, assessing the performance of the algorithms using precision metrics.
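
The k-NN regression idea can be sketched in a few lines. This is a toy one-dimensional illustration with invented temperature values, not the paper's data or features: the prediction for a query point is simply the average target of its k closest training points.

```python
def knn_regress(train, x, k=3):
    """Toy k-NN regression: average the targets of the k training
    points whose (one-dimensional) feature is closest to x.

    train: list of (feature, target) pairs.
    """
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(target for _, target in nearest) / k
```

Unlike a parametric regression, the prediction adapts locally to whichever observations lie nearest the query, which is one reason k-NN can fit index data with a high R2 score.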

Findings

This paper combines a geospatial perspective with computational tools and machine learning algorithms. Of the two regression algorithms used, k-Nearest Neighbors produced superior results, achieving an R2 score of 0.99, in contrast to Support Vector Regression, which yielded an R2 score of 0.74.

Originality/value

The full potential of machine learning algorithms has not yet been harnessed for predicting climate indices. This paper also identifies the strengths and weaknesses of each algorithm and shows how the generated estimates can be considered in the decision-making process.

Details

Transforming Government: People, Process and Policy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 22 March 2024

Shahin Alipour Bonab, Alireza Sadeghi and Mohammad Yazdani-Asrami

Abstract

Purpose

The ionization of the air surrounding the phase conductor in high-voltage transmission lines results in a phenomenon known as the Corona effect. To avoid this, Corona rings are used to dampen the electric field imposed on the insulator. The purpose of this study is to present a fast and intelligent surrogate model for determining the electric field imposed on the surface of a 120 kV composite insulator in the presence of the Corona ring.

Design/methodology/approach

Usually, the structural design parameters of the Corona ring are selected through an optimization procedure combined with numerical simulations such as the finite element method (FEM). These methods are slow and computationally expensive, severely limiting the speed of the optimization. In this paper, a novel surrogate model is proposed that can calculate the maximum electric field imposed on a ceramic insulator in a 120 kV line. The surrogate model was created from different scenarios of the height, radius and inner radius of the Corona ring as the inputs of the model, with the maximum electric field on the body of the insulator as the output.
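
A surrogate model of this kind maps a few geometric inputs to a precomputed field value without re-running FEM. As a generic illustration (not the paper's model, which uses CFNN, SVR and KNN regressors), here is an inverse-distance-weighted interpolator over invented sample points; the three inputs stand for height, radius and inner radius.

```python
def idw_surrogate(samples, query, power=2.0):
    """Toy inverse-distance-weighted surrogate.

    samples: list of (params, field) pairs, where params is a tuple of
    design variables and field is the precomputed maximum electric field.
    Returns a weighted average of sampled fields, weighted by 1/distance^power.
    """
    num = den = 0.0
    for params, field in samples:
        d2 = sum((a - b)**2 for a, b in zip(params, query))
        if d2 == 0:
            return field  # query coincides with a simulated design point
        w = 1.0 / d2**(power / 2)
        num += w * field
        den += w
    return num / den
```

Evaluating such a surrogate takes microseconds, which is why embedding one inside an optimization loop is so much faster than calling FEM at every candidate design.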

Findings

The proposed model was based on artificial intelligence techniques that have high accuracy and low computational time. Three methods were used here to develop the AI-based surrogate model, namely, Cascade forward neural network (CFNN), support vector regression and K-nearest neighbors regression. The results indicated that the CFNN has the highest accuracy among these methods with 99.81% R-squared and only 0.045468 root mean squared error while the testing time is less than 10 ms.

Originality/value

To the best of the authors’ knowledge, this is the first surrogate method proposed for predicting the maximum electric field imposed on high-voltage insulators in the presence of a Corona ring, and it is faster than any conventional finite element method.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 9 November 2022

Meryem Uluskan and Merve Gizem Karşı

Abstract

Purpose

This study aims to emphasize utilization of Predictive Six Sigma to achieve process improvements based on machine learning (ML) techniques embedded in define, measure, analyze, improve, control (DMAIC). With this aim, this study presents selection and utilization of ML techniques, including multiple linear regression (MLR), artificial neural network (ANN), random forests (RF), gradient boosting machines (GBM) and k-nearest neighbors (k-NN) in the analyze and improve phases of Six Sigma DMAIC.

Design/methodology/approach

A data set containing 320 observations with nine input variables and one output variable is used. To achieve the objective, which was to decrease the number of fabric defects, five ML techniques were compared in terms of prediction performance and the best tools were selected. Next, the most important causes of defects were determined via these tools. Finally, parameter optimization was conducted to minimize the number of defects.
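
The improve-phase parameter optimization amounts to searching a fitted predictor over a parameter grid for the setting with the fewest predicted defects. A generic sketch follows, where predict stands in for any fitted model (the quadratic used in the test is invented for illustration, not the paper's fitted response).

```python
def grid_search_min(predict, speeds, widths):
    """Toy improve-phase search: evaluate a fitted defect predictor on a
    parameter grid and return the setting with the fewest predicted defects.

    predict: callable (speed, width) -> predicted defect count.
    Returns (best_speed, best_width, predicted_defects).
    """
    best = min((predict(s, w), s, w) for s in speeds for w in widths)
    return best[1], best[2], best[0]
```

In practice the same search would run against the ANN or regression response surface fitted in the analyze phase, and the returned setting becomes the recommended process parameter.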

Findings

Among the five ML tools, ANN, GBM and RF were found to be the best predictors. Out of nine potential causes, “machine speed” and “fabric width” were determined to be the most important variables using these tools. Optimum values of “machine speed” and “fabric width” for fabric defect minimization were then determined via both the regression response optimizer and ANN surface optimization. Ultimately, the average defect number was decreased from 13 per roll to 3 per roll, a considerable reduction attained through the utilization of ML techniques in Six Sigma.

Originality/value

Addressing an important gap in Six Sigma literature, in this study, certain ML techniques (i.e. MLR, ANN, RF, GBM and k-NN) are compared and the ones possessing best performances are used in the analyze and improve phases of Six Sigma DMAIC.

Article
Publication date: 23 June 2023

Rubel, Bijay Prasad Kushwaha and Md Helal Miah

Abstract

Purpose

This study aims to highlight the inconsistency between conventional knowledge push judgements and the cost of knowledge push, and proposes a three-way decision-based relevant knowledge push algorithm.

Design/methodology/approach

Using an 80–20% ratio, the experiment randomly splits the data into a training set and a test set. Each video is used as a knowledge unit (structure) in the research, and its category is used as a knowledge attribute. The threshold is then determined using the user’s overall rating. A fusion coefficient is needed to calculate the relevant knowledge obtained through the experiments. The effect of the push model is then examined in comparison with the conventional push models. In the experiment, relevant knowledge is compared across three-way push models and push models based on conventional and traditional international classification functioning (ICF). The average push cost, accuracy rate, recall rate and coverage rate are the metrics used to assess the push effect.

Findings

The three-way knowledge push models perform better on average than the other push models in this research in terms of push cost, accuracy rate and recall rate. However, the three-way knowledge push models proposed in this study have a lower coverage rate than the two-way push model; they thus condense the knowledge push at the expense of some coverage. As a result, the improved knowledge push achieves higher accuracy rates and lower push costs.

Practical implications

This research has practical ramifications for the quick expansion of knowledge and its hegemonic status in value creation as the main methodology for knowledge services.

Originality/value

To the best of the authors’ knowledge, this is the first theory developed on the three-way decision-making process of knowledge push services to increase organizational effectiveness and efficiency.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Book part
Publication date: 15 May 2023

Birol Yıldız and Şafak Ağdeniz

Abstract

Purpose: The main aim of the study is to provide a tool for using non-financial information in decision-making. We analysed the non-financial data in annual reports in order to show the use of this information in financial decision processes.

Need for the Study: Main financial reports such as balance sheets and income statements can be analysed with statistical methods. However, an expanded financial reporting framework requires new analysis methods suited to unstructured and big data. The study offers a solution to the analysis problem that comes with non-financial reporting, an essential communication tool in corporate reporting.

Methodology: Text mining analysis of annual reports is conducted using the R software. To simplify the problem, we try to predict companies’ corporate governance qualifications using text mining. The k-nearest neighbor, naive Bayes and decision tree machine learning algorithms were used.
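
Of the classifiers named above, naive Bayes is simple enough to sketch from scratch. This is a generic multinomial naive Bayes with invented tokens and labels, not the study's R code or data: each class gets a smoothed log-probability per token, and the class with the highest total log-score wins.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Toy multinomial naive Bayes for text classification.

    docs: list of (tokens, label) pairs. Returns a predict(tokens) function.
    Uses Laplace (add-one) smoothing over the training vocabulary.
    """
    counts = defaultdict(Counter)  # per-label token counts
    labels = Counter()             # per-label document counts
    for tokens, label in docs:
        labels[label] += 1
        counts[label].update(tokens)
    vocab = {t for c in counts.values() for t in c}
    total_docs = sum(labels.values())

    def predict(tokens):
        def log_score(label):
            total = sum(counts[label].values())
            s = math.log(labels[label] / total_docs)  # log prior
            for t in tokens:
                # add-one smoothed log likelihood
                s += math.log((counts[label][t] + 1) / (total + len(vocab)))
            return s
        return max(labels, key=log_score)
    return predict
```

In the study's setting, the tokens would be terms mined from the annual reports and the labels the corporate governance qualifications being predicted.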

Findings: Our analysis shows that k-nearest neighbor achieved the highest rate of correct classifications, at 85%, compared with 50% for a random walk. The empirical evidence suggests that text mining can be used by all stakeholders as a financial analysis method.

Practical Implications: Combining financial statement analyses with financial reporting analyses will decrease the information asymmetry between the company and its stakeholders, so stakeholders can make more accurate decisions. Analysis of non-financial data with text mining will provide a decisive competitive advantage, especially for investors making decisions. This method will lead to allocating scarce resources more effectively. Another contribution of the study is that stakeholders can predict a company’s corporate governance qualification from its annual reports even if the company is not included in the Corporate Governance Index (CGI).

Details

Contemporary Studies of Risks in Emerging Technology, Part B
Type: Book
ISBN: 978-1-80455-567-5

Article
Publication date: 22 February 2024

Yumeng Feng, Weisong Mu, Yue Li, Tianqi Liu and Jianying Feng

Abstract

Purpose

For a better understanding of the preferences and differences of young consumers in emerging wine markets, this study aims to propose a clustering method to segment super-new generation wine consumers based on their sensitivity to wine brand, origin and price, and then to construct user profiles for the segmented consumer groups from the perspectives of demographic attributes, eating habits and wine sensory attribute preferences.

Design/methodology/approach

We first proposed a consumer clustering perspective based on consumers' sensitivity to wine brand, origin and price and then developed an adaptive density peak and label propagation layer-by-layer (ADPLP) clustering algorithm to segment consumers, which addresses the traditional density peak clustering (DPC) algorithm's issues of incorrect center selection and inaccurate classification of the remaining sample points. We then built a consumer profile system for the segmented consumer groups from the perspectives of demographic attributes, eating habits and wine sensory attribute preferences.
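
The density-peak idea underlying DPC can be sketched briefly (a generic illustration, not the ADPLP algorithm itself): each point receives a local density rho and a distance delta to the nearest denser point, and points where rho times delta is large are candidate cluster centers. The incorrect-center problem the paper addresses arises exactly when this score misranks points.

```python
import math

def dpc_scores(points, dc=0.5):
    """Toy density-peak scoring for 2D points.

    rho_i: number of other points within cutoff distance dc.
    delta_i: distance to the nearest point of higher density
             (for the globally densest points, the max distance to any point).
    Returns rho_i * delta_i per point; large values mark center candidates.
    """
    dist = lambda a, b: math.dist(a, b)
    rho = [sum(1 for j, q in enumerate(points) if j != i and dist(p, q) < dc)
           for i, p in enumerate(points)]
    delta = []
    for i, p in enumerate(points):
        higher = [dist(p, q) for j, q in enumerate(points) if rho[j] > rho[i]]
        delta.append(min(higher) if higher
                     else max(dist(p, q) for q in points))
    return [r * d for r, d in zip(rho, delta)]
```

Points deep inside a dense group score high on both factors, while isolated points score low on density, which is how centers are separated from the remaining sample points before label assignment.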

Findings

In this study, 10 typical public datasets and 6 baseline algorithms were used to evaluate the proposed method, and the results showed that the ADPLP algorithm was optimal or suboptimal on the 10 datasets, with accuracy above 0.78 and an average improvement in accuracy over the base DPC algorithm of 0.184. As an outcome of the wine consumer profiles, sensitive consumers prefer wines with medium prices of 100–400 CNY and more personalized brands and origins, while casual consumers are fond of popular brands, popular origins and low prices within 50 CNY. The wine sensory attributes preferred by super-new generation consumers are red, semi-dry, semi-sweet, still, fresh-tasting, fruity, floral and low in acid.

Practical implications

Young Chinese consumers are the main driver of future wine consumption. This paper provides a tool for decision-makers and marketers to quickly identify the preferences of young consumers, which is meaningful and helpful for wine marketing.

Originality/value

In this study, the ADPLP algorithm was introduced for the first time. Subsequently, the user profile label system was constructed for segmented consumers to highlight their characteristics and demand partiality from three aspects: demographic characteristics, consumers' eating habits and consumers' preferences for wine attributes. Moreover, the ADPLP algorithm can be considered for user profiles on other alcoholic products.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X
