Search results

1 – 10 of 21
Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

Abstract

Purpose

This study aims to obtain methods to identify and locate damage, a long-standing topic in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to monitor cracks in the concrete. Meanwhile, current concrete crack detection methods rely on complex and computationally heavy calculations.

Design/methodology/approach

This paper presents a new lightweight deep learning architecture for crack classification in concrete structures. The proposed architecture identifies and classifies cracks in less time and with higher accuracy than other established architectures for crack detection. A standard dataset was used for both two-class and multi-class crack detection.
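
The abstract gives no implementation details; as a rough sketch of what a lightweight crack-classification CNN of this kind might look like (layer sizes, input shape and dataset handling are assumptions, not the authors' architecture), consider the following Keras snippet:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_lightweight_cnn(input_shape=(227, 227, 3), num_classes=2):
        # A short stack of small conv blocks keeps the parameter count,
        # and therefore the execution time, low.
        model = models.Sequential([
            tf.keras.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),  # cheaper than a large dense layer
            layers.Dense(num_classes, activation="softmax"),
        ])
        # The findings report that Adam outperformed the other optimizers tried.
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_lightweight_cnn(num_classes=2)  # 2 = cracked/uncracked; >2 for multi-class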

Findings

Results show that two-class images were recognized with 99.53% accuracy by the proposed method, and multi-class images were classified with 91% accuracy. The proposed architecture also had a lower execution time than other established deep learning architectures on the same hardware platform. The Adam optimizer performed better than other optimizers in this research.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive monitoring of structural health to optimize the calculation costs and reduce execution time in processing.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 24 March 2022

Elavaar Kuzhali S. and Pushpa M.K.

Abstract

Purpose

COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. The main purpose of this work is early-stage detection of COVID-19, since infected patients must be identified at the beginning stage and given special attention. The fastest way to detect COVID-19-infected patients is through radiology and radiography images. A few early studies describe particular abnormalities of infected patients in chest radiograms. Even though some challenges arise in identifying traces of the viral infection in X-ray images, a convolutional neural network (CNN) can determine patterns that distinguish normal from infected X-rays, increasing the detection rate. Therefore, researchers are focusing on developing deep learning-based detection models.

Design/methodology/approach

The main intention of this proposal is to develop enhanced lung segmentation and classification for diagnosing COVID-19. The main processes of the proposed model are image pre-processing, lung segmentation and deep classification. Initially, image enhancement is performed by contrast enhancement and filtering approaches. Once the image is pre-processed, optimal lung segmentation is done by the adaptive fuzzy-based region growing (AFRG) technique, in which the constant function for fusion is optimized by the modified deer hunting optimization algorithm (M-DHOA). Further, a well-performing deep learning algorithm termed adaptive CNN (A-CNN) is adopted for the classification, in which the hidden neurons are tuned by the proposed DHOA to enhance detection accuracy. Simulation results illustrate that the proposed model has strong potential to improve COVID-19 testing methods on publicly available data sets.
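
The AFRG technique and its M-DHOA tuning are specific to the paper; as a point of reference only, a plain intensity-based region-growing baseline (not the adaptive fuzzy variant) can be sketched as follows:

    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10.0):
        # Generic region growing: flood out from `seed`, absorbing 4-connected
        # pixels whose intensity is within `tol` of the running region mean.
        # A plain baseline, not the paper's adaptive fuzzy (AFRG) variant.
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        region_sum, region_n = float(image[seed]), 1
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    if abs(float(image[ny, nx]) - region_sum / region_n) <= tol:
                        mask[ny, nx] = True
                        region_sum += float(image[ny, nx])
                        region_n += 1
                        queue.append((ny, nx))
        return mask

    # Toy usage: a bright square on a dark background stands in for a lung field.
    img = np.zeros((64, 64)); img[16:48, 16:48] = 200
    lung_mask = region_grow(img, seed=(32, 32), tol=25)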

Findings

From the experimental analysis, the accuracy of the proposed M-DHOA–CNN was 5.84%, 5.23%, 6.25% and 8.33% superior to recurrent neural networks, neural networks, support vector machines and K-nearest neighbors, respectively. Thus, the segmentation and classification performance of the developed COVID-19 diagnosis by AFRG and A-CNN outperformed the existing techniques.

Originality/value

This paper adopts the latest optimization algorithm called M-DHOA to improve the performance of lung segmentation and classification in COVID-19 diagnosis using adaptive K-means with region growing fusion and A-CNN. To the best of the authors’ knowledge, this is the first work that uses M-DHOA for improved segmentation and classification steps for increasing the convergence rate of diagnosis.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were applied to a data set of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector classifiers and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
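
No model code appears in the abstract; a minimal sketch of the simplest classical baseline named above, logistic regression over TF-IDF features (the toy corpus and hyperparameters are placeholders, not the study's setup), might look like:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in corpus; the study used 1.3 million labelled social media comments.
    comments = [
        "Oh great, another Monday. Just what I needed.",
        "I really enjoyed the concert last night.",
    ]
    labels = [1, 0]  # 1 = sarcastic, 0 = non-sarcastic

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams as features
        LogisticRegression(max_iter=1000),
    )
    clf.fit(comments, labels)
    print(clf.predict(["What a fantastic traffic jam this morning."]))

The BERT-based model that ultimately performed best is considerably heavier; this baseline only illustrates the classical end of the comparison.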

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.

Abstract

Purpose

Agriculture is the backbone of many countries, contributing a substantial share of economic output throughout the world. Precision agriculture is essential for evaluating crop conditions with the aim of selecting the proper pesticides. Conventional pest detection methods are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for accurate detection of pests using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on deep learning through a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is grouped into clusters. Each cluster comprises a cluster head (CH) and a number of nodes, where the CH transfers data to the sink node of the WSN and is selected using the fractional ant bee colony optimization (FABC) algorithm. The routing process is executed using the protruder optimization algorithm, which helps transfer image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collection of image data thus obtained forms the input database to be processed to find the type of pest in the agricultural field. The image data are pre-processed to remove artifacts and then subjected to feature extraction, through which the significant local directional pattern (LDP), local binary pattern (LBP), local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are then fed to a deep convolutional neural network (CNN) to detect the type of pests in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which is developed by combining the characteristics of navigating search agents and swarming search agents.
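
As an illustration of just the handcrafted-feature step, the following sketch computes a local binary pattern (LBP) histogram with scikit-image; the LDP, LOOP and LTP variants and the MFGHO-tuned deep CNN are not reproduced here, and the random image stands in for real pest imagery:

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_image, points=8, radius=1):
        # Uniform LBP gives values in [0, points + 1], hence points + 2 bins.
        lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                               density=True)
        return hist

    img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image patch
    features = lbp_histogram(img)
    print(features.shape)  # (10,) for 8-point uniform LBP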

Findings

The analysis using the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis proves that the proposed classifier using the FABC+protruder optimization-based data aggregation strategy obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is high compared to the existing methods.

Originality/value

The proposed MFGHO optimization-based deep CNN is used for the detection of pests in crop fields to ensure better selection of proper, cost-effective pesticides and thereby increase production. The proposed MFGHO algorithm is developed with the integrated characteristics of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters in the deep CNN classifier for pest detection in crop fields.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 9 April 2024

Shola Usharani, R. Gayathri, Uday Surya Deveswar Reddy Kovvuri, Maddukuri Nivas, Abdul Quadir Md, Kong Fah Tee and Arun Kumar Sivaraman

Abstract

Purpose

Automation of detecting cracked surfaces on buildings or on industrially manufactured products is emerging nowadays. Detection of cracked surfaces is a challenging task for inspectors, and image-based automatic inspection of cracks can be very effective compared to human-eye inspection. With the advancement of deep learning techniques, such methods can be used to automate this work in particular sectors of various industries.

Design/methodology/approach

In this study, an upgraded convolutional neural network-based crack detection method is proposed. The dataset consists of 3,886 images, including cracked and non-cracked images, split into training and validation data. To inspect cracks more accurately, data augmentation was performed on the dataset, and regularization techniques were utilized to reduce overfitting. In this work, the VGG19, Xception, Inception V3 and ResNet50 V2 CNN architectures were used to train on the data.
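
A minimal transfer-learning sketch along these lines uses a frozen pretrained Xception backbone with dropout regularization and augmentation layers; the head design and augmentation choices are assumptions, not the authors' exact setup:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Pretrained Xception backbone, frozen so that only the new head is trained.
    base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),                    # regularization against overfitting
        layers.Dense(1, activation="sigmoid"),  # cracked vs. non-cracked
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Augmentation expressed as layers, mirroring the augmentation step described above.
    augment = tf.keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ])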

Findings

A comparison between the trained models was performed, and from the obtained results, Xception performs better than the other algorithms, with 99.54% test accuracy. The results show that the Xception algorithm is very efficient at distinguishing cracked regions from non-cracked regions.

Originality/value

The proposed method paves the way toward automatic inspection of cracks in buildings with different design patterns, such as decorated historical monuments.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 29 August 2023

Hei-Chia Wang, Martinus Maslim and Hung-Yu Liu

Abstract

Purpose

A clickbait is a deceptive headline designed to boost ad revenue without presenting closely relevant content. There are numerous negative repercussions of clickbait, such as causing viewers to feel tricked and unhappy, causing long-term confusion and even attracting cyber criminals. Automatic detection algorithms for clickbait have been developed to address this issue. Existing technologies for detecting clickbait are limited by having only one semantic representation for the same term and by the scarcity of Chinese datasets. This study aims to overcome these limitations of automated clickbait detection for the Chinese dataset.

Design/methodology/approach

This study combines news headlines and news content to train the model to capture the probable relationship between clickbait headlines and news content. In addition, part-of-speech elements are used to generate the most appropriate semantic representation for clickbait detection, improving detection performance.
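
The abstract does not name the encoder or tagger; assuming a Chinese BERT tokenizer and the jieba part-of-speech tagger as plausible stand-ins, pairing a headline with its article body could look like:

    from transformers import BertTokenizer
    import jieba.posseg as pseg

    tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

    headline = "你绝对想不到这件事的真相"  # "You will never guess the truth about this"
    content = "记者调查发现,事件经过其实非常平常。"  # "Reporters found the event was in fact very ordinary."

    # Sentence-pair encoding lets one encoder see headline and body together,
    # so a classifier on top can learn whether the headline matches the content.
    enc = tokenizer(headline, content, truncation=True, max_length=128,
                    return_tensors="pt")

    # Part-of-speech tags can be appended as extra features for the
    # semantic representation.
    pos_tags = [(word, flag) for word, flag in pseg.cut(headline)]
    print(pos_tags)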

Findings

This research successfully compiled a dataset containing up to 20,896 Chinese clickbait news articles. This collection contains news headlines, articles, categories and supplementary metadata. The suggested context-aware clickbait detection (CA-CD) model outperforms existing clickbait detection approaches on many criteria, demonstrating the proposed strategy's efficacy.

Originality/value

The originality of this study resides in the newly compiled Chinese clickbait dataset and contextual semantic representation-based clickbait detection approach employing transfer learning. This method can modify the semantic representation of each word based on context and assist the model in more precisely interpreting the original meaning of news articles.

Details

Data Technologies and Applications, vol. 58 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 15 February 2024

Xinyu Liu, Kun Ma, Ke Ji, Zhenxiang Chen and Bo Yang

Abstract

Purpose

Propaganda is a prevalent technique used in social media to intentionally express opinions or actions with the aim of manipulating or deceiving users. Existing methods for propaganda detection primarily focus on capturing language features within its content. However, these methods tend to overlook the information presented within the external news environment from which propaganda news originated and spread. This news environment reflects recent mainstream media opinions and public attention and contains language characteristics of non-propaganda news. Therefore, the authors have proposed a graph-based multi-information integration network with an external news environment (abbreviated as G-MINE) for propaganda detection.

Design/methodology/approach

G-MINE comprises four parts: a textual information extraction module, an external news environment perception module, a multi-information integration module and a classifier. Specifically, the external news environment perception module and the multi-information integration module extract popularity and novelty signals, integrate them into the textual information and capture the high-order complementary information between them.

Findings

G-MINE achieves state-of-the-art performance on the TSHP-17, Qprop and PTC data sets, with accuracies of 98.24%, 90.59% and 97.44%, respectively.

Originality/value

An external news environment perception module is proposed to capture the popularity and novelty information, and a multi-information integration module is proposed to effectively fuse them with the textual information.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 November 2023

Juan Yang, Zhenkun Li and Xu Du

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their emotional states in daily communication. Therefore, how to achieve automatic and accurate audiovisual emotion recognition is significantly important for developing engaging and empathetic human–computer interaction environment. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from these two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN attempts to integrate key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, the local maximum–based content analysis is designed to extract key-frames from videos for the purpose of eliminating data redundancy. Two modules, including “Multi-head Attention-based Intra-modality Interaction Module” and “Multi-head Attention-based Cross-modality Interaction Module”, are proposed to mine and capture intra- and cross-modality interactions for further reducing data redundancy and producing more powerful multimodal representations.
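
A hedged sketch of the cross-modality idea follows: one modality queries the other through multi-head attention with a residual connection. Dimensions, head counts and the module layout are assumptions for illustration, not the published KE-AFN:

    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        # Cross-modality interaction in the spirit of KE-AFN: features from one
        # modality attend over the other to pick up complementary information.
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, query_mod, context_mod):
            # e.g. audio features attend over visual key-frame features
            attended, _ = self.attn(query_mod, context_mod, context_mod)
            return self.norm(query_mod + attended)  # residual connection

    audio = torch.randn(8, 20, 256)   # (batch, audio time steps, dim)
    visual = torch.randn(8, 12, 256)  # (batch, extracted key-frames, dim)
    fused_audio = CrossModalAttention()(audio, visual)
    print(fused_audio.shape)  # torch.Size([8, 20, 256])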

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy can enhance the performance by more than 2.79 per cent on accuracy. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion can lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environment.

Article
Publication date: 7 December 2022

Peyman Jafary, Davood Shojaei, Abbas Rajabifard and Tuan Ngo

Abstract

Purpose

Building information modeling (BIM) is a striking development in the architecture, engineering and construction (AEC) industry, which provides in-depth information on different stages of the building lifecycle. Real estate valuation, as a field closely interconnected with the AEC industry, can benefit from the 3D technical achievements of BIM technologies. Some studies have attempted to use BIM for real estate valuation procedures. However, there is still a limited understanding of appropriate mechanisms for utilizing BIM for valuation purposes and of the consequent impact BIM can have on reducing the existing uncertainties in valuation methods. Therefore, this paper aims to analyze the literature on BIM for real estate valuation practices.

Design/methodology/approach

This paper presents a systematic review to analyze existing utilizations of BIM for real estate valuation practices, discovers the challenges, limitations and gaps of the current applications and presents potential domains for future investigations. Research was conducted on the Web of Science, Scopus and Google Scholar databases to find relevant references that could contribute to the study. A total of 52 publications including journal papers, conference papers and proceedings, book chapters and PhD and master's theses were identified and thoroughly reviewed. There was no limitation on the starting date of research, but the end date was May 2022.

Findings

Four domains of application have been identified: (1) developing machine learning-based valuation models using the variables that could directly be captured through BIM and industry foundation classes (IFC) data instances of building objects and their attributes; (2) evaluating the capacity of 3D factors extractable from BIM and 3D GIS in increasing the accuracy of existing valuation models; (3) employing BIM for accurate estimation of components of cost approach-based valuation practices; and (4) extraction of useful visual features for real estate valuation from BIM representations instead of 2D images through deep learning and computer vision.
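
As a minimal illustration of domain (1), the sketch below pulls candidate valuation variables from an IFC model with ifcopenshell; the file name and the focus on IfcSpace entities are assumptions for illustration only:

    import ifcopenshell
    from ifcopenshell.util.element import get_psets

    # Hypothetical IFC file; in practice this comes from the BIM authoring tool.
    model = ifcopenshell.open("building.ifc")

    rows = []
    for space in model.by_type("IfcSpace"):
        rows.append({
            "name": space.Name,
            "long_name": getattr(space, "LongName", None),
            "psets": get_psets(space),  # attached property sets (areas, finishes, ...)
        })

    # `rows` could then be flattened into the feature table
    # for a machine learning valuation model.
    print(len(rows), "spaces extracted")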

Originality/value

This paper contributes to research efforts on utilization of 3D modeling in real estate valuation practices. In this regard, this paper presents a broad overview of the current applications of BIM for valuation procedures and provides potential ways forward for future investigations.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 4
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 4 July 2023

Benjamin Scott

Abstract

Purpose

This paper aims to examine the history of data leaks and investigative journalism, the techniques and technology that enable them and their influence in Australia and abroad. It explores the ethical and professional considerations of investigative journalists, how they approach privacy and information-sharing and how this differs from intelligence practice in government and industry. The paper assesses the strengths and limitations of Collaborative Investigative Reporting based on Information Leaks (CIRIL) as a kind of public-facing intelligence practice.

Design/methodology/approach

This study draws on academic literature, source material from investigations by the International Consortium of Investigative Journalists and the Organised Crime and Corruption Reporting Project, and a survey of financial crime compliance professionals conducted in 2022.

Findings

The paper identifies three key causal factors that have enabled the rise of CIRIL even as traditional journalism has declined: the digital storage of information; increasing public interest in offshore finance and tax evasion; and “virtual newsrooms” enabled by internet communications. It concludes that the primary strength of CIRIL is its creation of complex global narratives to inform the public about corruption and tax evasion, while its key weakness is that the scale and breadth of the data released makes it difficult to focus on likely criminal activity. Results of a survey of industry and government professionals indicate that CIRIL is generally more effective as public information than as an investigative resource, owing to the volume, age and quality of information released. However, the trends enabling CIRIL are likely to continue, and this means that governments and financial institutions need to become more effective at using leaked information.

Originality/value

Over the past decade, large-scale, data-driven investigative journalism projects such as the Pandora Papers and the Russian Laundromat have had a significant public impact by exposing money laundering, financial crime and corruption. These projects share certain hallmarks: the use of human intelligence, often sourced from anonymous leaks; inventive fusion of this intelligence with data from open sources; and collaboration among a global collective of investigative journalists to build a narrative. These projects prioritise informing the public. They are also an important information source for government and private sector organisations working to investigate and disrupt financial crime.

Details

Journal of Financial Crime, vol. 31 no. 3
Type: Research Article
ISSN: 1359-0790
