Search results

1 – 10 of over 3000
Article
Publication date: 8 April 2024

Hu Luo, Haobin Ruan and Dawei Tu

Abstract

Purpose

The purpose of this paper is to propose a complete set of methods for underwater target detection, because most underwater targets offer only small sample sets and underwater images suffer from quality problems such as detail loss, low contrast and color distortion, and to verify the feasibility of the proposed methods through experiments.

Design/methodology/approach

An improved RGHS algorithm is proposed to enhance the original underwater target images. The YOLOv4 deep learning network is then improved for small-sample underwater target detection by combining traditional data augmentation with the Mosaic algorithm, and by adding an SPP (spatial pyramid pooling) module after each feature extraction layer to expand the feature extraction capability and extract richer feature information.
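
As a rough illustration of the SPP idea mentioned above (a generic sketch, not the authors' YOLOv4 implementation), a spatial pyramid pooling block can be written in a few lines of PyTorch; the pool sizes 5, 9 and 13 are common choices and are assumptions here:

import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    """Spatial pyramid pooling sketch: concatenate the input with max-pooled
    copies of itself at several kernel sizes, keeping spatial resolution."""
    def __init__(self, pool_sizes=(5, 9, 13)):   # illustrative pool sizes
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in pool_sizes
        )

    def forward(self, x):
        # Channel count grows by a factor of len(pool_sizes) + 1.
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

features = torch.randn(1, 512, 13, 13)    # dummy backbone feature map
print(SPPBlock()(features).shape)          # torch.Size([1, 2048, 13, 13])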

Findings

The experimental results, using the official dataset, reveal a 3.5% increase in average detection accuracy for three types of underwater biological targets compared with the traditional YOLOv4 algorithm. In underwater robot application testing, the proposed method achieves 94.73% average detection accuracy for the three types of underwater biological targets.

Originality/value

Underwater target detection is an important task for underwater robot applications. However, most underwater targets provide only small sample sets, and the detection of small-sample targets is a compound problem because it is also affected by the quality of underwater images. This paper provides a complete set of methods to solve these problems, which is of great significance to the application of underwater robots.

Details

Robotic Intelligence and Automation, vol. 44 no. 2
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 5 April 2024

Fangqi Hong, Pengfei Wei and Michael Beer

Abstract

Purpose

Bayesian cubature (BC) has emerged as one of the most competitive approaches for estimating multi-dimensional integrals, especially when the integrand is expensive to evaluate, and alternative acquisition functions, such as the posterior variance contribution (PVC) function, have been developed for adaptive experimental design of the integration points. However, these sequential design strategies prevent BC from being implemented in a parallel scheme. Therefore, this paper aims to develop a parallelized adaptive BC method to further improve computational efficiency.

Design/methodology/approach

By theoretically examining the multimodal behavior of the PVC function, it is concluded that the multiple local maxima all make important contributions to the integration accuracy and can be selected as design points, providing a practical way to parallelize adaptive BC. Inspired by this finding, four multimodal optimization algorithms, including one newly developed in this work, are introduced for finding multiple local maxima of the PVC function in a single run, and hence for parallel implementation of adaptive BC.
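
The following sketch illustrates only the general pattern described above, collecting several distinct local maxima of an acquisition function in one iteration so that the integrand can be evaluated at them in parallel; the toy acquisition function, the multi-start search and the distance threshold are assumptions, not the four algorithms developed in the paper:

import numpy as np
from scipy.optimize import minimize

def distinct_local_maxima(acq, bounds, n_starts=50, tol=1e-2, seed=0):
    """Multi-start local search: return distinct local maxima of an
    acquisition function, usable as one parallel batch of design points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    maxima = []
    for x0 in rng.uniform(lo, hi, size=(n_starts, len(lo))):
        res = minimize(lambda x: -acq(x), x0, bounds=list(zip(lo, hi)))
        if res.success and all(np.linalg.norm(res.x - m) > tol for m in maxima):
            maxima.append(res.x)
    return np.array(maxima)

# Toy acquisition function with several local maxima (stand-in for PVC).
acq = lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]) + 1.0
batch = distinct_local_maxima(acq, bounds=[(0, np.pi), (0, np.pi)])
print(len(batch), "design points selected for one parallel iteration")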

Findings

The superiority of the parallel schemes and the performance of the four multimodal optimization algorithms are then demonstrated and compared with the k-means clustering method by using two numerical benchmarks and two engineering examples.

Originality/value

The multimodal behavior of the acquisition function for BC is comprehensively investigated. All the local maxima of the acquisition function contribute to the accuracy of adaptive BC. Parallelization of adaptive BC is realized with four multimodal optimization methods.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 5 April 2024

Ting Zhou, Yingjie Wei, Jian Niu and Yuxin Jie

Abstract

Purpose

Metaheuristic algorithms based on biology, evolutionary theory and physical principles have been widely developed for complex global optimization. This paper aims to present a new hybrid optimization algorithm that combines the characteristics of biogeography-based optimization (BBO), invasive weed optimization (IWO) and genetic algorithms (GAs).

Design/methodology/approach

The significant difference between the new algorithm and the original optimizers is a periodic selection scheme for offspring. The selection criterion is a function of cyclic discharge and the fitness of the populations. This differs from traditional optimization methods, where the elite always gains the advantage. With this method, fitter populations may still be rejected, while poorer ones may be retained. The selection scheme is applied to help escape local optima and maintain solution diversity.
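
Since the abstract does not give the exact formula, the following is only a hedged sketch of what a periodic (cyclic) selection rule could look like: the probability of keeping a worse offspring rises and falls with a cyclic "discharge" term, so the elite does not always win; the cosine schedule and the acceptance formula are assumptions, not the paper's criterion:

import math
import random

def periodic_selection(parent_fit, offspring_fit, generation, period=20):
    """Illustrative periodic selection (minimization assumed): acceptance of
    the offspring depends on its fitness AND a cyclic 'discharge' factor,
    so a worse solution can occasionally replace a better one."""
    discharge = 0.5 * (1 + math.cos(2 * math.pi * generation / period))  # cycles in [0, 1]
    if offspring_fit <= parent_fit:
        return True                               # better offspring is always kept
    # Worse offspring may still be kept when the cyclic discharge is high.
    accept_prob = discharge * math.exp(-(offspring_fit - parent_fit))
    return random.random() < accept_prob

random.seed(0)
kept = sum(periodic_selection(1.0, 1.3, g) for g in range(100))
print(f"worse offspring retained in {kept} of 100 generations")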

Findings

The efficiency of the proposed method is tested on 13 high-dimensional, nonlinear benchmark functions and a homogeneous slope stability problem. The results on the benchmark functions show that the new method performs well in terms of accuracy and solution diversity. The algorithm converges to within a magnitude of 10^-4, compared with 10^2 for BBO and 10^-2 for IWO. In the slope stability problem, the safety factor obtained by the analogy of slope erosion (ASE) is closer to the recommended value.

Originality/value

This paper introduces a periodic selection strategy and constructs a hybrid optimizer, which enhances the global exploration capacity of metaheuristic algorithms.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 8 March 2024

Sarah Jerasa and Sarah K. Burriss

Abstract

Purpose

Artificial intelligence (AI) has become increasingly important and influential in reading and writing. The rise of social media digital spaces, like TikTok, has also shifted the ways multimodal composition takes place alongside AI. This study aims to argue that within spaces like TikTok, human composers must attend to the ways they write for, with and against the AI-powered algorithm.

Design/methodology/approach

Data were drawn from a larger study on #BookTok (the TikTok subcommunity for readers) that included semi-structured interviews in which participants watched and reflected on a TikTok they had created. The authors grounded this study in critical posthumanist literacies to analyze and open code five #BookTok content creators’ interview transcripts. Using axial coding, the authors collaboratively determined three overarching and entangled themes: writing for, with and against.

Findings

Findings highlight the nuanced ways #BookTokers consider the AI algorithm in their compositional choices, namely, in how they want to disseminate their videos to a larger audience or a more niche-focused community. Throughout the interviews, participants revealed how the AI algorithm was variously situated as audience member, co-author and censor.

Originality/value

This study is grounded in critical posthumanist literacies and explores composition as a joint accomplishment between humans and machines. The authors argue that it is necessary to expand our human-centered notions of what it means to write for an audience, to co-author and to resist censorship or gatekeeping.

Details

English Teaching: Practice & Critique, vol. 23 no. 1
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 9 February 2024

Chengpeng Zhang, Zhihua Yu, Jimin Shi, Yu Li, Wenqiang Xu, Zheyi Guo, Hongshi Zhang, Zhongyuan Zhu and Sheng Qiang

Abstract

Purpose

Hexahedral meshing is one of the most important steps in performing an accurate simulation with finite element analysis (FEA). However, the current hexahedral meshing practice in industry is nonautomatic and inefficient: the model is manually decomposed into suitable blocks, and the hexahedral mesh is obtained from these blocks by mapping or sweeping algorithms. The purpose of this paper is to propose an almost automatic decomposition algorithm based on the 3D frame field and model features to replace this time-consuming and laborious manual decomposition.

Design/methodology/approach

The proposed algorithm is based on the 3D frame field and features, where features are used to construct feature-cutting surfaces and the 3D frame field is used to construct singular-cutting surfaces. The feature-cutting surfaces constructed from concave features first reduce the complexity of the model and decompose it into some coarse blocks. Then, an improved 3D frame field algorithm is performed on these coarse blocks to extract the singular structure and construct singular-cutting surfaces to further decompose the coarse blocks. In most modeling examples, the proposed algorithm uses both types of cutting surfaces to decompose models fully automatically. In a few examples with special requirements for hexahedral meshes, the algorithm requires manual input of some user-defined cutting surfaces and constructs different singular-cutting surfaces to ensure the effectiveness of the decomposition.
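
The decomposition itself is beyond a short example, but the downstream step it enables, meshing each singularity-free block by mapping or sweeping, can be sketched; the structured unit-cube block below is an assumed stand-in for one output block, not the authors' code:

import numpy as np

def sweep_hex_block(nx, ny, nz):
    """Structured hexahedral mesh of a unit block by sweeping a quad grid
    along z: returns node coordinates and 8-node hex connectivity."""
    xs, ys, zs = (np.linspace(0.0, 1.0, n + 1) for n in (nx, ny, nz))
    nodes = np.array([(x, y, z) for z in zs for y in ys for x in xs])
    nid = lambda i, j, k: i + j * (nx + 1) + k * (nx + 1) * (ny + 1)
    hexes = [[nid(i, j, k), nid(i + 1, j, k), nid(i + 1, j + 1, k), nid(i, j + 1, k),
              nid(i, j, k + 1), nid(i + 1, j, k + 1), nid(i + 1, j + 1, k + 1), nid(i, j + 1, k + 1)]
             for k in range(nz) for j in range(ny) for i in range(nx)]
    return nodes, np.array(hexes)

nodes, hexes = sweep_hex_block(4, 3, 2)
print(nodes.shape, hexes.shape)    # (60, 3) (24, 8)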

Findings

Benefiting from the feature decomposition and the 3D frame field algorithm, the output blocks of the proposed algorithm have no inner singular structure and are suitable for the mapping or sweeping algorithm. The introduction of internal constraints makes the 3D frame field generation in this paper more robust, and it can automatically correct some invalid 3–5 singular structures. In a few examples with special requirements, the proposed algorithm successfully generates valid blocks even though the singular structure of the model is modified by user-defined cutting surfaces.

Originality/value

The proposed algorithm takes advantage of feature decomposition and the 3D frame field to generate suitable blocks for a mapping or sweeping algorithm, which saves a lot of simulation time and requires less experience. The user-defined cutting surfaces enable the creation of special hexahedral meshes, which was difficult with previous algorithms. An improved 3D frame field generation method is proposed to correct some invalid singular structures and improve the robustness of previous methods.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 2 January 2024

Wenlong Cheng and Wenjun Meng

Abstract

Purpose

This study aims to solve the problem of job scheduling and multiple automated guided vehicle (AGV) cooperation in intelligent manufacturing workshops.

Design/methodology/approach

In this study, an algorithm for job scheduling and cooperative work of multiple AGVs is designed. In the first part, with the goal of minimizing the total processing time and total power consumption, a niche multi-objective evolutionary algorithm is used to determine the arrangement of processing tasks on the different machines. In the second part, AGVs are called to transport workpieces, and an improved ant colony algorithm is used to generate each AGV's initial path. In the third part, to avoid path conflicts between running AGVs, the authors propose a simple priority-based waiting strategy to avoid collisions.
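
As a hedged illustration of the third part only (the priority-based waiting idea, not the authors' implementation), the sketch below advances AGVs on a grid one time step at a time; the cell-reservation representation and the data layout are assumptions:

def step_agvs(agvs, reservations):
    """One time step of a priority-based waiting strategy:
    agvs is a list of dicts sorted by priority (index 0 = highest),
    each with a 'pos' and a 'path' (list of upcoming cells).
    Lower-priority AGVs wait if their next cell is already claimed."""
    claimed = set(reservations)
    for agv in agvs:                       # iterate in priority order
        if not agv["path"]:
            claimed.add(agv["pos"])        # finished AGVs still occupy their cell
            continue
        nxt = agv["path"][0]
        if nxt in claimed:                 # conflict: wait in place this step
            claimed.add(agv["pos"])
        else:                              # move and claim the next cell
            agv["pos"] = agv["path"].pop(0)
            claimed.add(agv["pos"])
    return claimed

agvs = [{"pos": (0, 0), "path": [(0, 1), (0, 2)]},
        {"pos": (1, 1), "path": [(0, 1), (0, 0)]}]   # both want cell (0, 1)
step_agvs(agvs, reservations=[])
print([a["pos"] for a in agvs])   # high-priority AGV advances to (0, 1); the other waits at (1, 1)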

Findings

The experiments show that the solution can effectively deal with job scheduling and multi-AGV operation problems in the workshop.

Originality/value

In this paper, a collaborative work algorithm is proposed that combines the job scheduling and AGV operation problems so that the research results adapt to the real job environment of the workshop.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 2 January 2024

Xiumei Cai, Xi Yang and Chengmao Wu

Abstract

Purpose

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of these algorithms lack robustness. The purpose of this paper is to investigate a new algorithm that can better segment noisy images while retaining as much detailed image information as possible.

Design/methodology/approach

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
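
To make the last point concrete, the sketch below shows a plain kernel FCM update in which the Gaussian-kernel distance 1 - K(x, v) replaces the Euclidean distance; the view-weight factor and the weighted fuzzy factor of the proposed algorithm are deliberately omitted, and sigma, m and the initialization are assumptions:

import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=2.0, iters=50):
    """Plain kernel FCM sketch: the Gaussian-kernel distance d = 1 - K(x, v)
    replaces the Euclidean distance in the membership and center updates.
    (The paper's view weights and weighted fuzzy factor are omitted here.)"""
    V = X[:: max(len(X) // c, 1)][:c].copy()            # spread-out initial centers
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None]) ** 2).sum(-1) / sigma ** 2)  # (n, c)
        d = np.clip(1.0 - K, 1e-12, None)               # kernel distance
        w = d ** (-1.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)            # memberships (n, c)
        W = (U ** m) * K
        V = (W[:, :, None] * X[:, None, :]).sum(0) / W.sum(0)[:, None]      # new centers
    return U, V

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
U, V = kernel_fcm(X)
print(np.round(V, 1))     # two centers, roughly near (0, 0) and (4, 4)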

Findings

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Originality/value

Most existing multi-view clustering algorithms are designed for general multi-view datasets, and current multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and better preserves the details of the original image.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 December 2023

Thanh-Nghi Do and Minh-Thu Tran-Nguyen

Abstract

Purpose

This study aims to propose novel edge device-tailored federated learning algorithms of local classifiers (stochastic gradient descent, support vector machines), namely, FL-lSGD and FL-lSVM. These algorithms are designed to address the challenge of large-scale ImageNet classification.

Design/methodology/approach

The authors’ FL-lSGD and FL-lSVM train in a parallel and incremental manner to build ensemble local classifiers on Raspberry Pis without requiring data exchange. The algorithms sequentially load small data blocks of the local training subset stored on each Raspberry Pi to train the local classifiers. Each data block is split into k partitions using the k-means algorithm, and models are trained in parallel on each data partition to enable local data classification.
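
As a loose sketch of the general pattern described above (not the authors' FL-lSGD/FL-lSVM code), the snippet below partitions one local data block with k-means and fits one linear SGD classifier per partition in parallel; the synthetic dataset, class count and hyperparameters are assumptions:

import numpy as np
from joblib import Parallel, delayed
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

def train_local_ensemble(X_block, y_block, k=4, n_jobs=4):
    """Split one local data block into k partitions with k-means and
    train a linear SGD classifier on each partition in parallel."""
    part = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_block)
    classes = np.unique(y_block)

    def fit_one(p):
        clf = SGDClassifier(loss="hinge", random_state=0)
        # One incremental pass over this partition; further blocks could follow.
        return clf.partial_fit(X_block[part == p], y_block[part == p], classes=classes)

    return Parallel(n_jobs=n_jobs)(delayed(fit_one)(p) for p in range(k))

# Stand-in for one small data block stored on a single Raspberry Pi.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
models = train_local_ensemble(X, y)
print(len(models), "local models form this device's ensemble")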

Findings

Empirical test results on the ImageNet data set show that the authors’ FL-lSGD and FL-lSVM algorithms with 4 Raspberry Pis (Quad core Cortex-A72, ARM v8, 64-bit SoC @ 1.5GHz, 4GB RAM) are faster than the state-of-the-art LIBLINEAR algorithm run on a PC (Intel(R) Core i7-4790 CPU, 3.6 GHz, 4 cores, 32GB RAM).

Originality/value

Efficiently addressing the challenge of large-scale ImageNet classification, the authors’ novel federated learning algorithms of local classifiers have been tailored to work on the Raspberry Pi. These algorithms can handle 1,281,167 images and 1,000 classes effectively.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 19 December 2023

Susan Gardner Archambault

Abstract

Purpose

Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most non-computer-science students are not taught about algorithms as part of the regular curriculum. This exploratory, qualitative study aims to explore subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations that could aid faculty in teaching algorithmic literacy to postsecondary students.

Design/methodology/approach

Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was manually performed on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software program Dedoose (2021) to determine frequency totals for occurrences of a code across all participants along with how many times specific participants mentioned a code. Then, findings were organized around the three themes of knowledge components, coping behaviors and pedagogy.

Findings

The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.

Originality/value

This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.

Details

Information and Learning Sciences, vol. 125 no. 1/2
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 16 October 2023

Maedeh Gholamazad, Jafar Pourmahmoud, Alireza Atashi, Mehdi Farhoudi and Reza Deljavan Anvari

Abstract

Purpose

A stroke is a serious, life-threatening condition that occurs when the blood supply to a part of the brain is cut off. The earlier a stroke is treated, the less damage is likely to occur. One of the methods that can lead to faster treatment is timely and accurate prediction and diagnosis. This paper aims to compare the binary integer programming-data envelopment analysis (BIP-DEA) model and the logistic regression (LR) model for diagnosing and predicting the occurrence of stroke in Iran.

Design/methodology/approach

In this study, the BIP-DEA and LR algorithms were introduced, and the key risk factors leading to stroke were extracted.

Findings

The study population consisted of 2,100 samples (patients) divided into six subsamples of different sizes. The classification table of each algorithm showed that the BIP-DEA model produced more reliable results than LR for small data sizes. After running each algorithm, the BIP-DEA and LR algorithms identified eight and five factors, respectively, as the more effective risk factors and causes of stroke. Finally, predictive models using the important risk factors were proposed.

Originality/value

The main objective of this study is to provide the integrated BIP-DEA algorithm as a fast, easy and suitable tool for evaluation and prediction. In fact, the BIP-DEA algorithm can be used as an alternative tool to the LR model when the sample size is small. These algorithms can be used in various fields, including the health-care industry, to predict and prevent various diseases before the patient’s condition becomes more dangerous.

Details

Journal of Modelling in Management, vol. 19 no. 2
Type: Research Article
ISSN: 1746-5664
