Search results

1 – 10 of 121
Article
Publication date: 2 May 2024

Xin Fan, Yongshou Liu, Zongyi Gu and Qin Yao

Ensuring the safety of structures is important. However, when a structure possesses both an implicit performance function and an extremely small failure probability, traditional…

Abstract

Purpose

Ensuring the safety of structures is important. However, when a structure possesses both an implicit performance function and an extremely small failure probability, traditional methods struggle to conduct a reliability analysis. Therefore, this paper proposes a reliability analysis method aimed at enhancing the efficiency of rare event analysis, using the widely recognized Relevance Vector Machine (RVM).

Design/methodology/approach

Drawing from the principles of importance sampling (IS), this paper employs Harris Hawks Optimization (HHO) to ascertain the optimal design point. This approach not only guarantees precision but also facilitates the RVM in approximating the limit state surface. When the U learning function, designed for Kriging, is applied to RVM, it results in sample clustering in the design of experiments (DoE). Therefore, this paper proposes an FU learning function, which is more suitable for RVM.
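The importance-sampling core of the approach can be illustrated with a minimal sketch, assuming a toy explicit limit state whose design point is known analytically; the paper's RVM surrogate and HHO search (which locate the design point when the performance function is implicit) are not reproduced here.

```python
import math
import random

random.seed(0)

# Toy explicit limit state: g(x) = 4.5 - x under a standard normal input.
# Failure (g <= 0) has probability Phi(-4.5), roughly 3.4e-6 -- far too
# rare for plain Monte Carlo with modest sample sizes.
def g(x):
    return 4.5 - x

def normal_pdf(x, mu=0.0):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def importance_sampling(n=100_000, x_star=4.5):
    """Estimate P(g <= 0) by sampling around the design point x_star
    and reweighting each failure by the density ratio."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(x_star, 1.0)
        if g(x) <= 0.0:
            total += normal_pdf(x) / normal_pdf(x, mu=x_star)
    return total / n

p_fail = importance_sampling()
```

Centering the sampling density at the design point is what makes the rare event observable: roughly half the draws fall in the failure region, and the density ratio corrects the bias.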

Findings

Three numerical examples and two engineering problems demonstrate the effectiveness of the proposed method.

Originality/value

By employing the HHO algorithm, this paper innovatively applies RVM in IS reliability analysis, proposing a novel method termed RVM-HIS. RVM-HIS demonstrates exceptional computational efficiency, making it eminently suitable for rare-event reliability analysis with implicit performance functions. Moreover, the computational efficiency of RVM-HIS has been significantly enhanced through the improvement of the U learning function.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 27 February 2024

Feng Qian, Yongsheng Tu, Chenyu Hou and Bin Cao

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems and has wide application prospects. At present, although many AMR methods…

Abstract

Purpose

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems and has wide application prospects. Although many deep learning-based AMR methods have been proposed, they cannot be directly applied to actual wireless communication scenarios, because two difficulties usually arise when recognizing real modulated signals: long sequences and noise. This paper aims to effectively process in-phase quadrature (IQ) sequences of very long signals interfered with by noise.

Design/methodology/approach

This paper proposes a general modulation classifier based on a two-layer nested structure of long short-term memory (LSTM) networks, called TLN-LSTM. The model exploits the time sensitivity of LSTM and the ability of the nested network structure to extract more features, and can effectively process ultra-long signal IQ sequences collected from real wireless communication scenarios that are interfered with by noise.
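The nested idea, an inner model summarizing fixed-size windows of the ultra-long IQ sequence while an outer model consumes the per-window summaries, can be sketched as pure data flow; plain averages stand in for the trained inner and outer LSTM layers, and every name below is illustrative rather than the paper's API.

```python
def chunk(seq, size):
    """Split a long IQ sequence into consecutive fixed-size windows."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def inner_step(window):
    """Inner level: compress one window of (I, Q) pairs into a summary."""
    n = len(window)
    return (sum(i for i, _ in window) / n, sum(q for _, q in window) / n)

def outer_step(summaries):
    """Outer level: fold the per-window summaries into one feature vector."""
    n = len(summaries)
    return (sum(a for a, _ in summaries) / n, sum(b for _, b in summaries) / n)

# A synthetic "ultra-long" IQ sequence: 4096 (I, Q) samples.
iq = [(float(k % 4), -float(k % 4)) for k in range(4096)]
summaries = [inner_step(w) for w in chunk(iq, 128)]  # 32 window summaries
features = outer_step(summaries)
```

The point of the two levels is that neither model ever sees the full sequence at once: the inner level bounds the length each recurrent pass must handle, and the outer level keeps the long-range context.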

Findings

Experimental results show that our proposed model has higher recognition accuracy for five types of modulation signals, including amplitude modulation, frequency modulation, Gaussian minimum shift keying, quadrature phase shift keying and differential quadrature phase shift keying, collected from real wireless communication scenarios. The overall classification accuracy of the proposed model for these signals can reach 73.11%, compared with 40.84% for the baseline model. Moreover, this model can also achieve high classification performance for analog signals with the same modulation method in the public data set HKDD_AMC36.

Originality/value

Although many deep learning-based AMR methods have been proposed, these works evaluate signal recognition performance using the model's classification results on public AMR data sets rather than by collecting and identifying real modulated signals in actual wireless communication scenarios. The methods proposed in these works therefore cannot be directly applied to actual wireless communication scenarios. This paper proposes a new AMR method dedicated to the effective processing of collected ultra-long signal IQ sequences that are interfered with by noise.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084


Open Access
Article
Publication date: 15 December 2023

Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto

The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on…

Abstract

Purpose

The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.

Design/methodology/approach

The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and to assess a within-group level of productivity of the retailers.
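The two-step idea can be sketched minimally: group retailers on a structural factor, then score productivity within each group against the group's best performer. A single output/input ratio stands in for the full DEA linear programs, and one cut on catchment size stands in for the paper's factorial analysis plus clustering; all data and field names are invented for illustration.

```python
# Hypothetical data: (name, catchment_population, sales).
retailers = [
    ("A", 10_000, 40.0), ("B", 12_000, 42.0),
    ("C", 90_000, 200.0), ("D", 95_000, 150.0),
]

def make_clusters(rows, threshold=50_000):
    """Step 1: split retailers into structurally homogeneous groups."""
    small = [r for r in rows if r[1] < threshold]
    large = [r for r in rows if r[1] >= threshold]
    return [small, large]

def within_group_scores(group):
    """Step 2: productivity relative to the group's best performer
    (a stand-in for the within-group DEA efficiency frontier)."""
    ratios = {name: sales / pop for name, pop, sales in group}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

scores = {}
for group in make_clusters(retailers):
    scores.update(within_group_scores(group))
# Each cluster gets its own frontier, so a small-catchment retailer is
# never penalized for the raw volume of a large-catchment one.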

Findings

The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity by reducing subjective bias and dimensionality, a reduction made possible by the use of Big Data.

Practical implications

The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.

Originality/value

This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.

Details

International Journal of Productivity and Performance Management, vol. 73 no. 11
Type: Research Article
ISSN: 1741-0401


Open Access
Article
Publication date: 23 January 2024

Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista

This study aims to review recent advances towards the implementation of ANN and NLP applications during the budgeting phase of the construction process. During this phase…

Abstract

Purpose

This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, under very tight time constraints, even though these results determine the company’s bid quality and are contractually binding.

Design/methodology/approach

This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.

Findings

This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.

Originality/value

Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 30 April 2024

Lin Kang, Junjie Chen, Jie Wang and Yaqi Wei

In order to meet the different quality of service (QoS) requirements of vehicle-to-infrastructure (V2I) and multiple vehicle-to-vehicle (V2V) links in vehicle networks, an…

Abstract

Purpose

In order to meet the different quality of service (QoS) requirements of vehicle-to-infrastructure (V2I) and multiple vehicle-to-vehicle (V2V) links in vehicle networks, an efficient V2V spectrum access mechanism is proposed in this paper.

Design/methodology/approach

A long short-term memory (LSTM)-based multi-agent hybrid proximal policy optimization (LSTM-H-PPO) algorithm is proposed, through which distributed spectrum access and continuous power control of V2V links are realized.

Findings

Simulation results show that compared with the baseline algorithm, the proposed algorithm has significant advantages in terms of total system capacity, payload delivery success rate of V2V link and convergence speed.

Originality/value

The LSTM layer uses time sequence information to estimate an accurate system state, which ensures that V2V spectrum access choices based on local observations remain effective. The hybrid PPO framework shares training parameters among agents, which speeds up the entire training process. The proposed algorithm adopts centralized training with distributed execution, so that each agent can achieve optimal spectrum access from local observation information with less signaling overhead.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 19 February 2024

Tauqeer Saleem, Ussama Yaqub and Salma Zaman

The present study distinguishes itself by pioneering an innovative framework that integrates key elements of prospect theory and the fundamental principles of electronic word of…

Abstract

Purpose

The present study distinguishes itself by pioneering an innovative framework that integrates key elements of prospect theory and the fundamental principles of electronic word of mouth (EWOM) to forecast Bitcoin/USD price fluctuations using Twitter sentiment analysis.

Design/methodology/approach

We utilized Twitter data as our primary data source. We meticulously collected a dataset consisting of over 3 million tweets spanning a nine-year period, from 2013 to 2022, covering a total of 3,215 days with an average daily tweet count of 1,000. The tweets were identified by utilizing the “bitcoin” and/or “btc” keywords through the snscrape Python library. Diverging from conventional approaches, we introduce four distinct variables, encompassing normalized positive and negative sentiment scores as well as sentiment variance. These refinements markedly enhance sentiment analysis within the sphere of financial risk management.
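One plausible construction of such daily variables from per-tweet sentiment scores can be sketched as follows; the toy scores are invented, and the paper's actual sentiment model and exact variable definitions may differ.

```python
from statistics import pvariance

# Invented per-tweet sentiment scores in [-1, 1] for one trading day.
tweet_scores = [0.8, 0.6, -0.4, 0.0, -0.9, 0.5]
n = len(tweet_scores)

positives = [s for s in tweet_scores if s > 0]
negatives = [-s for s in tweet_scores if s < 0]

daily = {
    "pos_norm": sum(positives) / n,       # normalized positive sentiment
    "neg_norm": sum(negatives) / n,       # normalized negative sentiment
    "variance": pvariance(tweet_scores),  # dispersion of opinion that day
}
```

Keeping positive and negative mass as separate variables, rather than a single compound score, is what lets the two directions of sentiment carry asymmetric predictive weight.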

Findings

Our findings highlight the substantial impact of negative sentiments in driving Bitcoin price declines, in contrast to the role of positive sentiments in facilitating price upswings. These results underscore the critical importance of continuous, real-time monitoring of negative sentiment shifts within the cryptocurrency market.

Practical implications

Our study holds substantial significance for both risk managers and investors, providing a crucial tool for well-informed decision-making in the cryptocurrency market. The implications drawn from our study hold notable relevance for financial risk management.

Originality/value

We present an innovative framework combining prospect theory and core principles of EWOM to predict Bitcoin price fluctuations through analysis of Twitter sentiment. Unlike conventional methods, we incorporate distinct positive and negative sentiment scores instead of relying solely on a single compound score. Notably, our pioneering sentiment analysis framework dissects sentiment into separate positive and negative components, advancing our comprehension of market sentiment dynamics. Furthermore, it equips financial institutions and investors with a more detailed and actionable insight into the risks associated not only with Bitcoin but also with other assets influenced by sentiment-driven market dynamics.

Details

The Journal of Risk Finance, vol. 25 no. 3
Type: Research Article
ISSN: 1526-5943


Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.

Agriculture is the backbone of a country, contributing to more than half of the economic sectors throughout the world. The need for precision agriculture is essential in evaluating…

Abstract

Purpose

Agriculture is the backbone of a country, contributing to more than half of the economic sectors throughout the world. The need for precision agriculture is essential in evaluating the conditions of the crops with the aim of determining the proper selection of pesticides. Conventional pest detection methods are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on deep learning over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is grouped into clusters. Each cluster comprises a cluster head (CH) and a number of nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional ant bee colony optimization (FABC) algorithm. The routing process is executed using the protruder optimization algorithm, which helps transfer image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collection of image data thus obtained forms the input database to be processed to find the type of pest in the agricultural field. The image data is pre-processed to remove artifacts and then subjected to feature extraction, through which the significant local directional pattern, local binary pattern, local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are then fed to a deep convolutional neural network (CNN) to detect the type of pests in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which is developed with the combined characteristics of navigating search agents and swarming search agents.
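Of the texture descriptors in the pipeline, the local binary pattern is the simplest to illustrate: each pixel is encoded by thresholding its eight neighbours against the centre value. A minimal sketch on a hypothetical 3×3 patch:

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code of pixel (r, c): each
    neighbour >= the centre contributes one bit, clockwise from the
    top-left neighbour."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

patch = [  # hypothetical 3x3 grayscale patch
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]
code = lbp_code(patch, 1, 1)  # 0b01111000 == 120
```

The resulting 8-bit codes are histogrammed over the image to form the texture feature vector; the LTP and LOOP variants used in the paper refine the same thresholding idea.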

Findings

The analysis using the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis proves that the proposed classifier using the FABC + protruder optimization-based data aggregation strategy obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is high compared with existing methods.

Originality/value

The proposed MFGHO optimization-based deep CNN is used for the detection of pests in crop fields to ensure a better selection of proper, cost-effective pesticides so as to increase production. The proposed MFGHO algorithm is developed with the integrated characteristics of navigating search agents and swarming search agents so as to facilitate the optimal tuning of the hyperparameters in the deep CNN classifier for the detection of pests in crop fields.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531


Article
Publication date: 30 April 2024

Xiaohan Kong, Shuli Yin, Yunyi Gong and Hajime Igarashi

The prolonged training time of neural networks (NN) has sparked considerable debate regarding their application in the field of optimization. The purpose of this paper is to…

Abstract

Purpose

The prolonged training time of neural networks (NN) has sparked considerable debate regarding their application in the field of optimization. The purpose of this paper is to explore the beneficial assistance of NN-based alternative models in inductance design, with a particular focus on multi-objective optimization and uncertainty analysis processes.

Design/methodology/approach

Under Gaussian-distributed manufacturing errors, this study predicts error intervals for Pareto points and selects robust solutions with minimal error margins. Furthermore, this study establishes correlations between manufacturing errors and inductance value discrepancies, offering a practical means of determining permissible manufacturing errors tailored to varying accuracy requirements.
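The error-interval idea can be sketched by propagating a Gaussian manufacturing error through a cheap response model and reading off the spread of the prediction; a toy quadratic stands in for the trained NN surrogate, and all numbers are invented.

```python
import random
from statistics import mean, stdev

random.seed(1)

# Toy response standing in for the NN surrogate of, e.g., inductance
# versus a geometric parameter (values invented for illustration).
def surrogate(x):
    return 2.0 * x + 0.1 * x * x

def error_interval(x_nominal, sigma, n=20_000, k=2.0):
    """Propagate a Gaussian manufacturing error through the surrogate
    and return an approximate mean +/- k*std output interval."""
    outputs = [surrogate(random.gauss(x_nominal, sigma)) for _ in range(n)]
    m, s = mean(outputs), stdev(outputs)
    return m - k * s, m + k * s

low, high = error_interval(x_nominal=5.0, sigma=0.1)
```

Because evaluating the surrogate is nearly free, this Monte Carlo sweep can be repeated for every Pareto point, which is exactly the kind of extensive data-driven uncertainty quantification that is impractical with direct finite element evaluations.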

Findings

The NN-assisted methods are demonstrated to offer a substantial time advantage in multi-objective optimization compared to conventional approaches, particularly in scenarios where the trained NN is repeatedly used. Also, NN models allow for extensive data-driven uncertainty quantification, which is challenging for traditional methods.

Originality/value

Three objectives, including saturation current, are considered in the multi-objective optimization, and the time advantages of the NN are thoroughly discussed by comparing scenarios involving single optimization, multiple optimizations, bi-objective optimization and tri-objective optimization. This study proposes direct error interval prediction on the Pareto front, using extensive data to predict the response of the Pareto front to random errors following a Gaussian distribution. This approach circumvents the compromises inherent in constrained robust optimization for inductance design and allows for a direct assessment of robustness that can be applied to account for manufacturing errors with complex distributions.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 6 May 2024

Issah Ibrahim and David Lowther

Evaluating the multiphysics performance of an electric motor can be a computationally intensive process, especially where several complex subsystems of the motor are coupled…

Abstract

Purpose

Evaluating the multiphysics performance of an electric motor can be a computationally intensive process, especially where several complex subsystems of the motor are coupled together. For example, evaluating acoustic noise requires the coupling of the electromagnetic, structural and acoustic models of the electric motor. Where skewed poles are considered in the design, the problem becomes a purely three-dimensional (3D) multiphysics problem, which could increase the computational burden astronomically. This study, therefore, aims to introduce surrogate models in the design process to reduce the computational cost associated with solving such 3D-coupled multiphysics problems.

Design/methodology/approach

The procedure involves using the finite element (FE) method to generate a database of several skewed rotor pole surface-mounted permanent magnet synchronous motors and their corresponding electromagnetic, structural and acoustic performances. Then, a surrogate model is fitted to the data to generate mapping functions that could be used in place of the time-consuming FE simulations.

Findings

It was established that the surrogate models showed promising results in predicting the multiphysics performance of skewed pole surface-mounted permanent magnet motors. As such, such models could be used to handle skewing, which has always been a major design challenge due to the scarcity of simulation tools with stepwise skewing capability.

Originality/value

The main contribution involves the use of surrogate models to replace FE simulations during the design cycle of skewed pole surface-mounted permanent magnet motors without compromising the integrity of the electromagnetic, structural, and acoustic results of the motor.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 28 August 2023

Julian Warner

The article extends the distinction of semantic from syntactic labour to comprehend all forms of mental labour. It answers a critique from de Fremery and Buckland, which required…

Abstract

Purpose

The article extends the distinction of semantic from syntactic labour to comprehend all forms of mental labour. It answers a critique from de Fremery and Buckland, which required envisaging mental labour as a differentiated spectrum.

Design/methodology/approach

The paper adopts a discursive approach. It first reviews the significance and extensive diffusion of the distinction of semantic from syntactic labour. Second, it integrates semantic and syntactic labour along a vertical dimension within mental labour, indicating analogies in principle with, and differences in application from, the inherited distinction of intellectual from clerical labour. Third, it develops semantic labour to the very highest level, on a consistent principle of differentiation from syntactic labour. Finally, it reintegrates the understanding developed of semantic labour with syntactic labour, confirming that they can fully and informatively occupy mental labour.

Findings

The article further validates the distinction of semantic from syntactic labour. It makes it possible to address Norbert Wiener's classic challenge of appropriately distributing activity between human and computer.

Research limitations/implications

The article transforms work in progress into knowledge for diffusion.

Practical implications

It has practical implications for determining what tasks to delegate to computational technology.

Social implications

The paper has social implications for the understanding of appropriate human and machine computational tasks and our own distinctive humanness.

Originality/value

The paper is highly original. Although based on preceding research, from the late 20th century, it is the first separately published full account of semantic and syntactic labour.

Details

Journal of Documentation, vol. 80 no. 3
Type: Research Article
ISSN: 0022-0418

