Search results

1 – 5 of 5
Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although existing models handle such problems reasonably well, they still fall short in several respects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three stages are used to improve the classification performance of LSTM, so that financial institutions can more accurately identify borrowers at risk of default. In the first stage, the K-Means-SMOTE algorithm is used to mitigate class imbalance. In the second, ResNet extracts features and a two-layer LSTM then learns from them, strengthening the network's ability to mine and exploit deep information. Finally, the IDWPSO algorithm is used to optimize the network during tuning, further improving model performance.
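
A minimal sketch of this three-stage pipeline in Python, assuming tabular credit features: KMeansSMOTE comes from the imbalanced-learn package, while the residual block design, layer sizes, toy data and the use of a standard Adam optimizer (in place of the paper's IDWPSO tuning) are illustrative assumptions rather than the authors' implementation.

```python
# Stage 1: K-Means-SMOTE resampling; Stages 2-3: a ResNet-style
# feature extractor feeding a two-layer LSTM. Hyperparameters are
# fixed here; the paper tunes them with IDWPSO.
import numpy as np
from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, units):
    """Two dense layers with a skip connection (ResNet-style)."""
    shortcut = x
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units)(h)
    if shortcut.shape[-1] != units:          # project the skip path if needed
        shortcut = layers.Dense(units)(shortcut)
    return layers.Activation("relu")(layers.Add()([h, shortcut]))

def build_resnet_lstm(n_features):
    inp = keras.Input(shape=(n_features,))
    x = residual_block(inp, 64)
    x = residual_block(x, 64)
    x = layers.Reshape((64, 1))(x)           # treat extracted features as a sequence
    x = layers.LSTM(32, return_sequences=True)(x)
    x = layers.LSTM(16)(x)                   # second LSTM layer
    out = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Imbalanced toy data standing in for a credit dataset (10:1 ratio here;
# the paper's datasets are 700:1 and 3:1).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9],
                           random_state=42)

# Resample before training; the cluster threshold is lowered so the toy
# data yields clusters eligible for oversampling.
X_res, y_res = KMeansSMOTE(cluster_balance_threshold=0.1,
                           random_state=42).fit_resample(X, y)
model = build_resnet_lstm(X_res.shape[1])
model.fit(X_res, y_res, epochs=10, batch_size=256, validation_split=0.1)
```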

Findings

On two imbalanced datasets (class ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. The results demonstrate that the multi-stage improved model holds a statistically significant advantage on imbalanced credit datasets.
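
For the Wilcoxon comparison, a minimal illustration with SciPy, using made-up paired per-fold F-measures rather than the paper's results:

```python
# Paired, nonparametric comparison of two models' per-fold scores.
from scipy.stats import wilcoxon

improved = [0.91, 0.89, 0.93, 0.90, 0.92]   # illustrative F-measures
baseline = [0.90, 0.87, 0.90, 0.86, 0.87]
stat, p = wilcoxon(improved, baseline)      # H0: no difference between models
print(f"Wilcoxon statistic={stat}, p={p:.4f}")
```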

Originality/value

In this paper, the parameters of a ResNet-LSTM hybrid neural network, which can fully mine and exploit deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the model's classification performance.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use machine learning (ML) approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. ML models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) supplemented by low-touch additional training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
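
A hedged sketch of this kind of low-touch transfer learning in Keras: the backbone choice (MobileNetV2), tile size and the cutouts/ directory layout are illustrative assumptions, not the authors' setup.

```python
# Frozen ImageNet backbone + small binary head, trained on
# MOUND/NOT_MOUND cutouts, then applied to fixed-size tiles.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TILE = 224  # cutout/tile side length in pixels (assumed)

base = keras.applications.MobileNetV2(input_shape=(TILE, TILE, 3),
                                      include_top=False, weights="imagenet")
base.trainable = False  # "low-touch": only the new head is trained

model = keras.Sequential([
    keras.Input(shape=(TILE, TILE, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),      # P(tile contains a mound)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder with MOUND/ and NOT_MOUND/ subdirectories of cutouts.
train_ds = keras.utils.image_dataset_from_directory(
    "cutouts/", image_size=(TILE, TILE), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)

def tiles(scene):
    """Slice a large scene (H x W x 3 array) into non-overlapping tiles."""
    h, w, _ = scene.shape
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            yield top, left, scene[top:top + TILE, left:left + TILE]

# scene = ...  # the satellite image as a numpy array
# mound_probs = {(t, l): model.predict(tile[None].astype("float32"),
#                                      verbose=0)[0, 0]
#                for t, l, tile in tiles(scene)}
```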

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and with the CNN assessing fixed-size tiles, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, and true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
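
A small worked example of the tile-based rates quoted above, using made-up per-tile probabilities and field-verified labels; the definitions (false positives and true positives as fractions of tagged tiles) follow the text.

```python
import numpy as np

probs = np.array([0.72, 0.55, 0.61, 0.30, 0.58, 0.90, 0.40, 0.65])  # model output
truth = np.array([1,    1,    0,    0,    1,    0,    0,    0])     # field data
tagged = probs >= 0.60                    # 60% identification threshold

fn_rate = np.mean(~tagged[truth == 1])    # share of real mounds missed
fp_rate = np.mean(truth[tagged] == 0)     # tagged tiles with no mound
tp_rate = np.mean(truth[tagged] == 1)     # tagged tiles with a mound
print(f"FN {fn_rate:.0%}, FP {fp_rate:.0%}, TP {tp_rate:.0%}")
```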

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with "crossing the chasm" from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 2 May 2024

Mikias Gugssa, Long Li, Lina Pu, Ali Gurbuz, Yu Luo and Jun Wang

Abstract

Purpose

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers' safety. However, implementing automated safety monitoring in near real time, or in a time-efficient manner, remains challenging in real construction practice. This study therefore developed a novel solution that achieves near-real-time safety glove detection while preserving data privacy.

Design/methodology/approach

The developed method comprises two primary components: (1) transfer learning methods to detect safety gloves and (2) edge computing to improve time efficiency and data privacy. To compare the developed edge computing-based method with the currently widely used cloud computing-based methods, a comprehensive comparative analysis was conducted from both the implementation and theory perspectives, providing insights into the developed approach’s performance.
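
A hedged sketch of the implementation-side comparison: the same frame is timed through on-device inference (edge) and through an HTTP round trip to a remote endpoint (cloud). The stand-in detector and the endpoint URL are placeholders, not the authors' artifacts.

```python
import time
import numpy as np
import requests
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in for the trained glove detector (any deployed model would do).
edge_model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
CLOUD_URL = "https://example.com/v1/detect"   # placeholder endpoint

frame = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in camera frame

def edge_latency(n=20):
    """Average seconds per frame when inference stays on-device;
    the image never leaves the device, preserving data privacy."""
    t0 = time.perf_counter()
    for _ in range(n):
        edge_model.predict(frame, verbose=0)
    return (time.perf_counter() - t0) / n

def cloud_latency(n=20):
    """Average seconds per frame when each image is uploaded for
    remote inference (the network round trip adds latency)."""
    t0 = time.perf_counter()
    for _ in range(n):
        requests.post(CLOUD_URL, data=frame.tobytes(), timeout=5)
    return (time.perf_counter() - t0) / n

print(f"edge:  {edge_latency() * 1e3:.1f} ms/frame")
print(f"cloud: {cloud_latency() * 1e3:.1f} ms/frame")
```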

Findings

Three DL models achieved mean average precision (mAP) scores ranging from 74.92% to 84.31% for safety glove detection. Two further methods combining object detection and classification achieved an mAP of 89.91% for hand detection and 100% for glove classification. From both the implementation and theory perspectives, the edge computing-based method detected gloves faster than the cloud computing-based method: in the implementation experiments, its detection latency was 36%–68% shorter. The findings highlight edge computing's potential for near-real-time detection with improved data privacy.

Originality/value

This study implemented and evaluated DL-based safety monitoring methods on different computing infrastructures to investigate their time efficiency. This study contributes to existing knowledge by demonstrating how edge computing can be used with DL models (without sacrificing their performance) to improve PPE-glove monitoring in a time-efficient manner as well as maintain data privacy.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 1 March 2024

Quoc Duy Nam Nguyen, Hoang Viet Anh Le, Tadashi Nakano and Thi Hong Tran

Abstract

Purpose

In the wine industry, maintaining superior quality standards is crucial to meet the expectations of both producers and consumers. Traditional approaches to assessing wine quality involve labor-intensive processes and rely on the expertise of connoisseurs proficient in identifying taste profiles and key quality factors. In this research, we introduce an innovative and efficient approach centered on the analysis of volatile organic compound (VOC) signals using an electronic nose, thereby enabling nonexperts to accurately assess wine quality.

Design/methodology/approach

To devise an optimal algorithm for this purpose, we conducted four computational experiments, culminating in a specialized deep learning network that integrates 1D-convolutional and long short-term memory (LSTM) layers tailored to the task. The design was validated using a leave-one-out cross-validation methodology.
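
A minimal sketch of such a network and the leave-one-out loop, assuming the e-nose recordings arrive as fixed-length multichannel time series; the shapes, layer sizes and class count are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from tensorflow import keras
from tensorflow.keras import layers

def build_model(timesteps, channels, n_classes):
    """1D-convolutional front end followed by an LSTM summary layer."""
    return keras.Sequential([
        keras.Input(shape=(timesteps, channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.LSTM(64),                       # temporal summary of the VOC signal
        layers.Dense(n_classes, activation="softmax"),
    ])

# Toy stand-ins: 40 recordings, 120 timesteps, 8 e-nose sensor channels,
# 4 quality classes (all assumed for illustration).
X = np.random.rand(40, 120, 8).astype("float32")
y = np.random.randint(0, 4, size=40)

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):  # one sample held out per fold
    model = build_model(120, 8, 4)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=8, verbose=0)
    pred = model.predict(X[test_idx], verbose=0).argmax(axis=1)[0]
    correct += int(pred == y[test_idx][0])
print(f"LOOCV accuracy: {correct / len(X):.2%}")
```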

Findings

Evaluation of these experiments demonstrates that the proposed architecture consistently attains recognition accuracies ranging from 87.8% to 99.41%, with recognition completed within 4 seconds. These findings promise to improve the assessment and tracking of wine quality through VOC signal analysis, benefiting the wine industry and its stakeholders.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 27 February 2024

Shefali Arora, Ruchi Mittal, Avinash K. Shrivastava and Shivani Bali

Abstract

Purpose

Deep learning (DL) is on the rise because it can make predictions and judgments from unseen data. Blockchain technologies are being combined with DL frameworks in various industries to provide a safe and effective infrastructure. This review covers literature on the most recent techniques used in these application sectors. We examine current research trends across several fields and evaluate the literature in terms of its advantages and disadvantages.

Design/methodology/approach

The integration of blockchain and DL has been explored in several application domains over the past five years (2018–2023). Our review is guided by five research questions, which focus it on key application domains such as Internet of Things (IoT) applications, healthcare and cryptocurrency price prediction. We analyze the main challenges and opportunities of blockchain technologies, discuss the methodologies used in the pertinent publications and contrast the research trends of the previous five years. Additionally, we compare the blockchain frameworks most widely used to build blockchain-based DL frameworks.

Findings

By answering the five research questions, the study highlights and assesses the effectiveness of published work combining blockchain and DL. Our findings indicate that IoT applications (such as smart cities and cars), healthcare and cryptocurrency are the key areas of research. The primary focus of current research is the enhancement of existing systems, with data analysis, storage and sharing via decentralized systems being the main motivation for this integration. Among the frameworks employed, Ethereum and Hyperledger are popular among researchers in the IoT and healthcare domains, whereas Bitcoin is popular for cryptocurrency research.

Originality/value

There is a lack of literature that summarizes the state-of-the-art methods incorporating blockchain and DL in popular domains such as healthcare, IoT and cryptocurrency price prediction. We analyze the existing research done in the past five years (2018–2023) to review the issues and emerging trends.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X
