Search results

1 – 10 of 25
Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.

Abstract

Purpose

Agriculture is the backbone of many countries, contributing a major share of economic output throughout the world. Precision agriculture is essential for evaluating crop conditions with the aim of selecting the proper pesticides. Conventional pest detection methods are unstable and offer limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on deep learning, operating over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is partitioned into clusters. Each cluster comprises a cluster head (CH) and a number of member nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional artificial bee colony (FABC) algorithm. The routing process is executed using the protruder optimization algorithm, which helps transfer image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collection of image data thus obtained forms the input database that is processed to determine the type of pest in the agricultural field. The image data are pre-processed to remove artifacts and then subjected to feature extraction, through which the significant local directional pattern, local binary pattern (LBP), local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are fed to a deep convolutional neural network (CNN) to detect the type of pest in the field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which combines the characteristics of navigating search agents and swarming search agents.
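
As a hedged illustration of one of the texture descriptors named above, the following is a minimal NumPy sketch of the basic 8-neighbour local binary pattern (LBP); the function names are illustrative, and the FABC, protruder and MFGHO components are specific to the paper and not reproduced here.

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP code for each interior pixel of a grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    # Neighbour offsets, ordered clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """256-bin normalised LBP histogram, usable as one input feature vector."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```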

Findings

The analysis using the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis shows that the proposed classifier, using the FABC + protruder optimization-based data aggregation strategy, obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, higher than those of the existing methods.
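
For reference, the three reported metrics follow directly from a binary confusion matrix; this short sketch uses invented counts, not the paper's results.

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> tuple:
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

print(metrics(tp=93, tn=95, fp=5, fn=7))  # -> (0.94, 0.93, 0.95)
```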

Originality/value

The proposed MFGHO optimization-based deep CNN detects pests in crop fields so that proper, cost-effective pesticides can be selected and production increased. The MFGHO algorithm integrates the characteristic features of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters of the deep CNN classifier for pest detection.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 24 March 2022

Elavaar Kuzhali S. and Pushpa M.K.

Abstract

Purpose

COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. It must be detected at an early stage so that infected patients can be given special attention. The fastest way to detect COVID-19-infected patients is through radiology and radiography images. A few early studies describe particular abnormalities of infected patients in chest radiograms. Although it is challenging to establish traces of the viral infection from X-ray images alone, a convolutional neural network (CNN) can learn the patterns that distinguish normal from infected X-rays, increasing the detection rate. Therefore, this work focuses on developing a deep learning-based detection model.

Design/methodology/approach

The main intention of this proposal is to develop enhanced lung segmentation and classification for diagnosing COVID-19. The main processes of the proposed model are image pre-processing, lung segmentation and deep classification. Initially, image enhancement is performed using contrast enhancement and filtering approaches. Once the image is pre-processed, optimal lung segmentation is done by the adaptive fuzzy-based region growing (AFRG) technique, in which the constant function for fusion is optimized by the modified deer hunting optimization algorithm (M-DHOA). Further, a well-performing deep learning algorithm termed adaptive CNN (A-CNN) is adopted for the classification, in which the hidden neurons are tuned by the proposed DHOA to enhance detection accuracy. The simulation results illustrate that the proposed model can strengthen COVID-19 testing on publicly available data sets.
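
To make the segmentation step concrete, here is a plain region-growing sketch; the paper's adaptive fuzzy fusion and M-DHOA tuning are not reproduced, and the fixed tolerance `tol` is an assumption standing in for the optimised inclusion criterion.

```python
from collections import deque
import numpy as np

def region_grow(img: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Grow a mask from `seed`, adding 4-connected pixels within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

In practice the seed would be placed inside the lung field and the tolerance adapted to local intensity statistics, which is the part the fuzzy fusion and M-DHOA optimisation address.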

Findings

From the experimental analysis, the accuracy of the proposed M-DHOA–CNN was 5.84%, 5.23%, 6.25% and 8.33% higher than that of recurrent neural networks, neural networks, support vector machines and K-nearest neighbour, respectively. Thus, the segmentation and classification performance of the developed COVID-19 diagnosis by AFRG and A-CNN outperforms existing techniques.

Originality/value

This paper adopts the latest optimization algorithm called M-DHOA to improve the performance of lung segmentation and classification in COVID-19 diagnosis using adaptive K-means with region growing fusion and A-CNN. To the best of the authors’ knowledge, this is the first work that uses M-DHOA for improved segmentation and classification steps for increasing the convergence rate of diagnosis.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 2 May 2024

Bikesh Manandhar, Thanh-Canh Huynh, Pawan Kumar Bhattarai, Suchita Shrestha and Ananta Man Singh Pradhan

Abstract

Purpose

This research is aimed at preparing landslide susceptibility using spatial analysis and soft computing machine learning techniques based on convolutional neural networks (CNNs), artificial neural networks (ANNs) and logistic regression (LR) models.

Design/methodology/approach

Using a Geographical Information System (GIS), a spatial database including topographic, hydrologic, geological and land use data is created for the study area. The data are randomly divided into a training set (70%), a validation set (10%) and a test set (20%).
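
A minimal sketch of this split and the LR baseline with scikit-learn is shown below; the feature matrix and labels are synthetic placeholders, not the study's GIS-derived conditioning factors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # placeholder conditioning factors (slope, aspect, ...)
y = rng.integers(0, 2, size=1000)     # placeholder labels: 1 = landslide cell, 0 = stable cell

# 70% train, then split the remaining 30% into 10% validation and 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=2 / 3, random_state=42)

lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
susceptibility = lr.predict_proba(X_test)[:, 1]  # probability of landslide occurrence per cell
```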

Findings

The validation findings demonstrate that the CNN model (89% success rate, 84% prediction rate) and the ANN model (84% success rate, 81% prediction rate) predict landslides better than the LR model (82% success rate, 79% prediction rate). As the most accurate of the three, the CNN is used to produce the final susceptibility map.

Research limitations/implications

Land cover and geological data are limited at large scales, making it challenging to develop accurate and comprehensive susceptibility maps.

Practical implications

It helps to identify areas with a higher likelihood of experiencing landslides. This information is crucial for assessing the risk posed to human lives, infrastructure and properties in these areas. It allows authorities and stakeholders to prioritize risk management efforts and allocate resources more effectively.

Social implications

The social implications of a landslide susceptibility map are profound, as it provides vital information for disaster preparedness, risk mitigation and land-use planning. Communities can utilize these maps to identify vulnerable areas, implement zoning regulations and develop evacuation plans, ultimately safeguarding lives and property. Additionally, access to such information promotes public awareness and education about landslide risks, fostering a proactive approach to disaster management. However, reliance solely on these maps may also create a false sense of security, necessitating continuous updates and integration with other risk assessment measures to ensure effective disaster resilience strategies are in place.

Originality/value

Landslide susceptibility mapping provides a proactive approach to identifying areas at higher risk of landslides before any significant events occur. Researchers continually explore new data sources, modeling techniques and validation approaches, leading to a better understanding of landslide dynamics and susceptibility factors.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 May 2024

Xinzhe Li, Qinglong Li, Dasom Jeong and Jaekyeong Kim

Abstract

Purpose

Most previous studies predicting review helpfulness ignored the significance of deep features embedded in review text and relied instead on hand-crafted features. Hand-crafted and deep features offer complementary advantages: high interpretability and high predictive accuracy, respectively. This study aims to propose a novel review helpfulness prediction model that uses deep learning (DL) techniques to exploit the complementarity between hand-crafted and deep features.

Design/methodology/approach

First, an advanced convolutional neural network was applied to extract deep features from unstructured review text. Second, hand-crafted features known from previous studies to affect review helpfulness were extracted to enhance interpretability. Third, the deep and hand-crafted features were incorporated into a review helpfulness prediction model whose performance was evaluated on the Yelp.com data set, using 2,417,796 restaurant reviews.
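
As a sketch of how such a fusion model can be wired up, the following PyTorch module concatenates CNN-derived text features with a hand-crafted feature vector; all dimensions and the single-branch CNN are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FusionHelpfulnessModel(nn.Module):
    """Fuse deep text features with hand-crafted features to score helpfulness."""
    def __init__(self, vocab_size=30_000, emb_dim=100, n_handcrafted=12):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Sequential(
            nn.Linear(64 + n_handcrafted, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, token_ids, handcrafted):
        x = self.emb(token_ids).transpose(1, 2)                  # (batch, emb, seq)
        deep = self.pool(torch.relu(self.conv(x))).squeeze(-1)   # (batch, 64) deep features
        fused = torch.cat([deep, handcrafted], dim=1)            # concatenate both feature sets
        return self.head(fused).squeeze(-1)                      # helpfulness score
```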

Findings

Extensive experiments confirmed that the proposed methodology performs better than traditional machine learning methods. Moreover, this study confirms through an empirical analysis that combining hand-crafted and deep features demonstrates better prediction performance.

Originality/value

To the best of the authors’ knowledge, this is one of the first studies to apply DL techniques and use structured and unstructured data to predict review helpfulness in the restaurant context. In addition, an advanced feature-fusion method was adopted to better use the extracted feature information and identify the complementarity between features.

Article
Publication date: 27 February 2024

Feng Qian, Yongsheng Tu, Chenyu Hou and Bin Cao

Abstract

Purpose

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems with wide application prospects. Although many deep learning-based AMR methods have been proposed, they cannot be applied directly to actual wireless communication scenarios, because recognizing real modulated signals usually involves two difficulties: long sequences and noise. This paper aims to effectively process the in-phase/quadrature (IQ) sequences of very long signals interfered by noise.

Design/methodology/approach

This paper proposes a general model for a modulation classifier based on a two-layer nested structure of long short-term memory (LSTM) networks, called TLN-LSTM. The model exploits the time sensitivity of LSTM and the ability of the nested network structure to extract more features, and can effectively process ultra-long signal IQ sequences, interfered by noise, collected from real wireless communication scenarios.
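
The sketch below shows one plausible reading of such a two-level structure: an inner LSTM summarises fixed-size chunks of the IQ sequence and an outer LSTM runs over the chunk summaries. Chunk length, hidden sizes and the chunking scheme are assumptions, not the paper's exact TLN-LSTM.

```python
import torch
import torch.nn as nn

class NestedLSTMClassifier(nn.Module):
    """Two-level LSTM over very long IQ sequences: chunks -> summaries -> logits."""
    def __init__(self, chunk_len=128, hidden=64, n_classes=5):
        super().__init__()
        self.chunk_len = chunk_len
        self.inner = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.outer = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, iq):                       # iq: (batch, seq_len, 2) I/Q pairs
        b, t, _ = iq.shape
        n_chunks = t // self.chunk_len
        chunks = iq[:, :n_chunks * self.chunk_len].reshape(b * n_chunks, self.chunk_len, 2)
        _, (h_inner, _) = self.inner(chunks)     # summarise each chunk
        summaries = h_inner[-1].reshape(b, n_chunks, -1)
        _, (h_outer, _) = self.outer(summaries)  # model the sequence of summaries
        return self.fc(h_outer[-1])              # logits over the modulation classes
```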

Findings

Experimental results show that the proposed model has higher recognition accuracy for five types of modulation signals collected from real wireless communication scenarios: amplitude modulation, frequency modulation, Gaussian minimum shift keying, quadrature phase shift keying and differential quadrature phase shift keying. The overall classification accuracy of the proposed model for these signals reaches 73.11%, compared with 40.84% for the baseline model. Moreover, the model also achieves high classification performance for analog signals with the same modulation methods in the public data set HKDD_AMC36.

Originality/value

Although many deep learning-based AMR methods have been proposed, these works evaluate signal recognition performance on the classification of modulated signals from public AMR data sets rather than on real modulated signals collected in actual wireless communication scenarios, so the proposed methods cannot be applied directly to such scenarios. This paper therefore proposes a new AMR method dedicated to the effective processing of collected ultra-long signal IQ sequences that are interfered by noise.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 19 March 2024

Mingke Gao, Zhenyu Zhang, Jinyuan Zhang, Shihao Tang, Han Zhang and Tao Pang

Abstract

Purpose

Because of the various advantages of reinforcement learning (RL), this study uses RL to train unmanned aerial vehicles (UAVs) to perform two tasks: target search and cooperative obstacle avoidance.

Design/methodology/approach

This study draws inspiration from the recurrent state-space model and recurrent models (RPM) to propose a simpler yet highly effective model called the unmanned aerial vehicles prediction model (UAVPM). The main objective is to assist in training the UAV representation model with a recurrent neural network, using the soft actor-critic algorithm.

Findings

This study proposes a generalized actor-critic framework consisting of three modules: representation, policy and value. This architecture serves as the foundation for training the UAVPM, which is designed to aid in training the recurrent representation using a transition model, a reward recovery model and an observation recovery model. Unlike traditional approaches that rely solely on reward signals, RPM incorporates temporal information and allows the inclusion of extra knowledge or information from virtual training environments. This study designs UAV target search and UAV cooperative obstacle avoidance tasks, and the algorithm outperforms the baselines in both environments.
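
A hedged skeleton of the three-module actor-critic framework is sketched below; the GRU encoder and all dimensions are assumptions, and UAVPM's auxiliary recovery models are omitted.

```python
import torch
import torch.nn as nn

class Representation(nn.Module):
    """Recurrent encoder mapping an observation history to a latent state."""
    def __init__(self, obs_dim=32, latent=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, latent, batch_first=True)

    def forward(self, obs_seq):                  # (batch, time, obs_dim)
        _, h = self.rnn(obs_seq)
        return h[-1]                             # (batch, latent)

class Policy(nn.Module):
    """Actor: latent state -> bounded UAV control action."""
    def __init__(self, latent=64, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, z):
        return torch.tanh(self.net(z))

class Value(nn.Module):
    """Critic: Q-value of a latent state and action."""
    def __init__(self, latent=64, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))
```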

Originality/value

It is important to note that UAVPM does not play a role in the inference phase. This means that the representation model and policy remain independent of UAVPM. Consequently, this study can introduce additional “cheating” information from virtual training environments to guide the UAV representation without concerns about its real-world existence. By leveraging historical information more effectively, this study enhances UAVs’ decision-making abilities, thus improving the performance of both tasks at hand.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 23 January 2024

Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista

Abstract

Purpose

This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client's expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually, with little to no computer aid and under very austere time constraints, even though the results determine the company's bid quality and are contractually binding.

Design/methodology/approach

This paper seeks to compile applications of machine learning (ML) and NLP in the architecture, engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
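
To illustrate the kind of TC support tool the review surveys, here is a minimal scikit-learn sketch that maps free-text task descriptions to internal cost codes; the texts and labels are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["supply and lay concrete slab 200mm",
               "excavate trench for foundation",
               "install interior gypsum partition"]
train_codes = ["CONCRETE", "EARTHWORKS", "PARTITIONS"]  # hypothetical cost codes

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_codes)

# Suggest a cost code for a new bill-of-quantities line (illustrative output).
print(clf.predict(["dig foundation trench 1.2m deep"]))
```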

Findings

This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.

Originality/value

Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 6 February 2024

Lin Xue and Feng Zhang

Abstract

Purpose

With the increasing number of Web services, correct and efficient classification of Web services is crucial to improving the efficiency of service discovery. However, existing Web service classification approaches ignore the class overlap in Web services, resulting in poor classification accuracy in practice. This paper aims to provide an approach that addresses this issue.

Design/methodology/approach

This paper proposes a Web service classification approach based on label confusion and prior correction. First, functional semantic representations of Web service descriptions are obtained based on BERT. Then, label confusion learning techniques enhance the model's ability to recognize and classify overlapping instances. Finally, the predictive results are corrected based on the label prior distribution to further improve service classification effectiveness.
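
One simple form the final correction step can take is rescaling the predicted class probabilities by the label prior and renormalising, as in the sketch below; the exact correction used in the paper may differ.

```python
import numpy as np

def prior_correct(probs: np.ndarray, prior: np.ndarray) -> np.ndarray:
    """probs: (n_samples, n_classes) model outputs; prior: (n_classes,) label frequencies."""
    corrected = probs * prior                       # up-weight classes common in the data
    return corrected / corrected.sum(axis=1, keepdims=True)

probs = np.array([[0.55, 0.45]])                    # near-tie from the classifier
prior = np.array([0.2, 0.8])                        # class 1 is far more frequent
print(prior_correct(probs, prior))                  # -> [[0.2340..., 0.7659...]]
```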

Findings

Experiments on the ProgrammableWeb data set show that the proposed model improves the Macro-F1 value by 4.3%, 3.2% and 1% compared to ServeNet-BERT, BERT-DPCNN and CARL-NET, respectively.

Originality/value

This paper proposes a Web service classification approach for the overlapping categories of Web services and improves the accuracy of Web service classification.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is literally said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, both sarcastic and non-sarcastic. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector classifiers and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
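
As a hedged sketch of one of the classical baselines named above, the snippet below trains a linear support vector classifier on TF-IDF features; the two comments and their labels are invented stand-ins for the 1.3 million-comment data set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

comments = ["Oh great, another Monday. Just what I needed.",
            "Congratulations on the well-deserved promotion!"]
labels = [1, 0]   # 1 = sarcastic, 0 = not sarcastic

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(comments, labels)

# Predicted label for an unseen comment (output depends on the training data).
print(model.predict(["Wow, what a fantastic traffic jam."]))
```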

Findings

The performance of machine and deep learning models was compared on the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of various machine and deep learning models on the task of sarcasm detection using a data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 18 April 2024

Joseph Nockels, Paul Gooding and Melissa Terras

Abstract

Purpose

This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI). With HTR now achieving high levels of accuracy, we consider its potential impact on our near-future information environment and knowledge of the past.

Design/methodology/approach

In undertaking a more constructivist analysis, we identified gaps in the current literature through a Grounded Theory Method (GTM). This guided an iterative process of concept mapping through writing sprints in workshop settings. We identified, explored and confirmed themes through group discussion and a further interrogation of relevant literature, until reaching saturation.

Findings

Catalogued as part of our GTM, 120 published texts underpin this paper. We found that HTR enables accurate transcription and data set cleaning while facilitating access to a wide variety of historical material. HTR contributes to a virtuous cycle of data set production and can inform the development of online cataloguing. However, current limitations include dependency on digitisation pipelines, potential omission of archival history and entrenchment of bias. We also cite near-future HTR considerations, including encouraging open access; integrating advanced AI processes and metadata extraction; legal and moral issues surrounding copyright and data ethics; crediting individuals' transcription contributions; and HTR's environmental costs.

Originality/value

Our research produces a set of best practice recommendations for researchers, data providers and memory institutions, surrounding HTR use. This forms an initial, though not comprehensive, blueprint for directing future HTR research. In pursuing this, the narrative that HTR’s speed and efficiency will simply transform scholarship in archives is deconstructed.
