Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 2 April 2024

Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman

Abstract

Purpose

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.

Design/methodology/approach

On a sample of over 230,000 records with close to 12,000 distinct DDC classes, Annif, an open source tool developed by the National Library of Finland, was applied in the following implementations: lexical algorithm, support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted on a sample of 60 records to investigate the value and inter-rater agreement of automatically assigned classes.
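
As an illustration of the ensemble step described above, the following is a minimal sketch of a score-averaging ensemble, assuming four base classifiers that each return per-class confidence scores; the class codes, scores and weights are invented for the example, and this is not Annif's actual ensemble implementation.

```python
from collections import defaultdict

def ensemble_suggest(predictions, weights=None, top_k=3):
    """Combine per-class confidence scores from several base classifiers
    by (weighted) averaging, as in a simple score-averaging ensemble.

    predictions: list of dicts mapping DDC class -> confidence in [0, 1]
    weights:     optional list of per-classifier weights
    """
    weights = weights or [1.0] * len(predictions)
    combined = defaultdict(float)
    for scores, w in zip(predictions, weights):
        for ddc_class, score in scores.items():
            combined[ddc_class] += w * score
    total = sum(weights)
    ranked = sorted(((s / total, c) for c, s in combined.items()), reverse=True)
    return [(c, round(s, 3)) for s, c in ranked[:top_k]]

# Hypothetical outputs of the four base approaches for one record.
lexical  = {"025.04": 0.7, "020": 0.2}
svc      = {"025.04": 0.5, "004": 0.4}
fasttext = {"020": 0.6, "025.04": 0.3}
omikuji  = {"025.04": 0.8, "004": 0.1}

print(ensemble_suggest([lexical, svc, fasttext, omikuji]))
```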

Findings

The best results were obtained with the ensemble approach, which reached 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.

Originality/value

The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Open Access
Article
Publication date: 4 December 2023

Yonghua Li, Zhe Chen, Maorui Hou and Tao Guo

Abstract

Purpose

This study aims to reduce the redundant weight of the anti-roll torsion bar introduced by traditional empirical design and to improve its strength and stiffness.

Design/methodology/approach

Based on the finite element approach coupled with the improved beluga whale optimization (IBWO) algorithm, a collaborative optimization method is suggested to optimize the structure and weight of the anti-roll torsion bar. The dimensions and material properties of the torsion bar are defined as random variables, and the torsion bar's mass and strength are investigated using finite elements. Chaotic mapping and differential evolution (DE) operators are then introduced to improve the beluga whale optimization (BWO) algorithm, and case studies are run.
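
For readers unfamiliar with the two ingredients named above, the following is a hedged sketch of chaotic (logistic-map) initialization combined with a differential evolution mutation step on a toy objective; it is not the authors' IBWO implementation, and the bounds, population size and objective function are placeholders for the finite element mass/stress model.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_map_init(pop_size, dim, lower, upper):
    """Chaotic initialization: iterate the logistic map x <- 4x(1-x)
    and scale the chaotic sequence into the search bounds."""
    x = rng.random((pop_size, dim))
    for _ in range(10):                      # a few warm-up iterations
        x = 4.0 * x * (1.0 - x)
    return lower + x * (upper - lower)

def de_mutation(pop, f=0.5):
    """Differential-evolution style mutation: v = x_r1 + F * (x_r2 - x_r3)."""
    n = len(pop)
    idx = np.array([rng.choice(n, size=3, replace=False) for _ in range(n)])
    return pop[idx[:, 0]] + f * (pop[idx[:, 1]] - pop[idx[:, 2]])

def objective(x):
    # Placeholder objective standing in for the finite element mass/stress model.
    return np.sum(x ** 2, axis=1)

lower, upper = -5.0, 5.0
pop = logistic_map_init(pop_size=30, dim=4, lower=lower, upper=upper)
for _ in range(100):
    trial = np.clip(de_mutation(pop), lower, upper)
    better = objective(trial) < objective(pop)   # per-individual greedy selection
    pop[better] = trial[better]

print("best value:", objective(pop).min())
```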

Findings

The findings demonstrate that the IBWO achieves better solution set distribution uniformity, convergence speed, solution accuracy and stability than the BWO. The IBWO algorithm is used to optimize the anti-roll torsion bar design. The error between the optimization and finite element simulation results was less than 1%. The weight of the optimized anti-roll torsion bar was reduced by 4%, the maximum stress was reduced by 35% and the stiffness was increased by 1.9%.

Originality/value

The study provides a methodological reference for the simulation optimization process of the lateral anti-roll torsion bar.

Details

Railway Sciences, vol. 3 no. 1
Type: Research Article
ISSN: 2755-0907

Keywords

Open Access
Article
Publication date: 21 June 2023

Sudhaman Parthasarathy and S.T. Padmapriya

Abstract

Purpose

Algorithmic bias refers to repeated computer program errors that give some users more weight than others. The aim of this article is to provide deeper insight into algorithmic bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it remains mostly anecdotal and scattered.

Design/methodology/approach

Guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may experience with an enterprise resource planning (ERP) software customization algorithm. It then presents an artificial intelligence (AI) version of the ERP customization algorithm using the k-nearest neighbours algorithm.
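
As a rough illustration of the k-nearest neighbours step, the sketch below fits a k-NN classifier on a handful of invented historical customization requests and predicts a decision for a new one; the features, labels and neighbour count are hypothetical and do not reproduce the authors' PRCE algorithm.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical historical requests: [estimated effort, business priority, module criticality]
X_train = np.array([
    [5, 3, 2], [20, 1, 1], [8, 5, 4], [30, 2, 5], [3, 4, 1], [15, 5, 5],
])
# Hypothetical past decisions: 1 = customize the ERP, 0 = keep standard behaviour.
y_train = np.array([1, 0, 1, 0, 1, 1])

# With so little toy data, the prediction depends entirely on these few neighbours.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

new_request = np.array([[10, 4, 3]])
print("recommend customization:", bool(knn.predict(new_request)[0]))
```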

Findings

This study illustrates the possible bias that arises when the prioritized requirements customization estimation (PRCE) algorithm from the ERP literature is executed without any AI. The authors then present their newly developed AI version of the PRCE algorithm, which uses ML techniques, and discuss the algorithmic bias associated with it through an illustration. They also draw a roadmap for managing algorithmic bias during ERP customization in practice.

Originality/value

To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 3 no. 2
Type: Research Article
ISSN: 2633-7436

Keywords

Open Access
Article
Publication date: 15 July 2022

Jiansen Zhao, Xin Ma, Bing Yang, Yanjun Chen, Zhenzhen Zhou and Pangyi Xiao

Abstract

Purpose

Since many global path planning algorithms cannot produce planned paths that are both safe and economical, this study aims to propose a path planning method for unmanned vehicles in which the distance from obstacles is controllable.

Design/methodology/approach

First, satellite imagery is combined with the Voronoi field algorithm (VFA) to generate rasterized environmental information and establish the navigation area boundary. Second, a hazard function associated with the navigation area boundary is established to improve the evaluation function of the A* algorithm, and the improved A* algorithm is used for global path planning. Finally, node optimization and the gradient descent method (GDM) are used to reduce the number of redundant nodes in the planned path and to smooth the path, yielding a continuous, smooth path that meets the actual navigation requirements of unmanned vehicles.
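
The following is a simplified grid-based sketch of an A* search whose cost function adds a hazard penalty that grows near obstacles, standing in for the boundary-based hazard function described above; the grid size, obstacle layout and penalty shape are illustrative only.

```python
import heapq

def hazard(cell, obstacles, k=2.0):
    """Penalty that grows as a cell gets closer to the nearest obstacle
    (a stand-in for the boundary-based hazard function)."""
    d = min(abs(cell[0] - o[0]) + abs(cell[1] - o[1]) for o in obstacles)
    return k / d if d > 0 else float("inf")

def a_star_with_hazard(start, goal, obstacles, size):
    def h(c):                                  # admissible Manhattan heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_heap:
        _, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:                  # already expanded
            continue
        came_from[cell] = parent
        if cell == goal:                       # reconstruct the path
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in obstacles:
                continue
            new_g = g + 1.0 + hazard(nxt, obstacles)   # step cost + hazard penalty
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                heapq.heappush(open_heap, (new_g + h(nxt), new_g, nxt, cell))
    return None

obstacles = {(3, 3), (3, 4), (3, 5)}
print(a_star_with_hazard((0, 0), (7, 7), obstacles, size=8))
```

Raising the penalty weight k pushes the planned path further from obstacles, which mirrors the idea of controlling path-to-obstacle distance via the boundary setting.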

Findings

The simulation experiments show that the proposed global path planning method can control the distance between the planned path and obstacles by setting different navigation area boundaries. The node reduction rate is between 33.52% and 73.15%, and the smoothness meets the navigation requirements. The method is reasonable and effective for global path planning of unmanned vehicles and can serve as a reference for their autonomous obstacle avoidance decision-making.

Originality/value

This study establishes a navigation area boundary for the environment based on the VFA and uses the improved A* algorithm to generate a navigation path that takes both safety and economy into account. It also proposes a method for reducing redundant path nodes and large-angle steering in the grid environment and for smoothing the path, which improves the applicability of the proposed global path planning method. The proposed method thus meets the requirements of path safety and smoothness.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 18 November 2021

Eric Pettersson Ruiz and Jannis Angelis

Abstract

Purpose

This study aims to explore how to deanonymize cryptocurrency money launderers with the help of machine learning (ML). Money is laundered through cryptocurrencies by distributing funds across multiple accounts and then exchanging the cryptocurrency back. This exchange of currencies is carried out through cryptocurrency exchanges. Current preventive efforts are outdated, and ML may provide novel ways to identify illicit currency movements. Hence, this study investigates the applicability of ML for combating money laundering activities that use cryptocurrency.

Design/methodology/approach

Four supervised-learning algorithms were compared using the Bitcoin Elliptic Dataset. The method covered a quantitative analysis of algorithmic performance, capturing differences in three key evaluation metrics: F1-score, precision and recall. Two complementary qualitative interviews were conducted at cryptocurrency exchanges to assess the fit and applicability of the algorithms.
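
A sketch of this kind of comparison is shown below, using scikit-learn classifiers on a synthetic, imbalanced data set in place of the Bitcoin Elliptic Dataset; the four algorithms chosen here are illustrative and are not necessarily the ones compared in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced stand-in for labelled licit/illicit transactions.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "mlp": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p, r, f1, _ = precision_recall_fscore_support(
        y_te, model.predict(X_te), average="binary", pos_label=1
    )
    print(f"{name:>20}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```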

Findings

The study results show that the ML tools currently implemented for preventing money laundering at cryptocurrency exchanges are all too slow and need to be optimized for the task. The results also show that, while no single algorithm is best at detecting transactions related to money laundering, the decision tree algorithm is the most suitable for adoption by cryptocurrency exchanges.

Originality/value

Given the growth of cryptocurrency use, this study explores the newly developed field of algorithmic tools to combat illicit currency movement, in particular in the growing arena of cryptocurrencies. The study results provide new insights into the applicability of ML as a tool to combat money laundering using cryptocurrency exchanges.

Details

Journal of Money Laundering Control, vol. 25 no. 4
Type: Research Article
ISSN: 1368-5201

Keywords

Open Access
Article
Publication date: 7 July 2022

Sirilak Ketchaya and Apisit Rattanatranurak

Abstract

Purpose

Sorting is a fundamental algorithm for solving problems in computer science. The best-known divide-and-conquer sorting algorithm is quicksort, which divides the data into subarrays and then sorts them.

Design/methodology/approach

In this paper, the algorithm named Dual Parallel Partition Sorting (DPPSort) is analyzed and optimized. It consists of a partitioning algorithm named Dual Parallel Partition (DPPartition), which is analyzed and optimized here and combined with the standard sorting functions qsort and STLSort, implementations of the quicksort and introsort algorithms, respectively. The algorithm runs on any shared-memory/multicore system. The implementation uses the OpenMP library, which supports multiprocessing programming and is compatible with the C/C++ standard library functions. The authors' algorithm recursively divides an unsorted array into two equal halves in parallel with Lomuto's partitioning and merges without compare-and-swap instructions. Then, qsort/STLSort is executed in parallel once a subarray is smaller than the sorting cutoff.
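
The paper's implementation is in C/C++ with OpenMP; the single-threaded Python sketch below only illustrates two of the named ingredients, Lomuto's partition scheme and a cutoff below which a library sort (standing in for qsort/STLSort) takes over, and omits the parallel partitioning and merge.

```python
import random

def lomuto_partition(a, lo, hi):
    """Lomuto's scheme: partition a[lo..hi] around the last element as pivot
    and return the pivot's final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort_with_cutoff(a, lo=0, hi=None, cutoff=32):
    """Recurse with Lomuto partitioning; below the cutoff, fall back to the
    library sort (standing in for qsort/STLSort in the C/C++ version)."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= cutoff:
        a[lo:hi + 1] = sorted(a[lo:hi + 1])
        return
    p = lomuto_partition(a, lo, hi)
    quicksort_with_cutoff(a, lo, p - 1, cutoff)
    quicksort_with_cutoff(a, p + 1, hi, cutoff)

data = [random.randrange(1_000_000) for _ in range(10_000)]
quicksort_with_cutoff(data)
print(data == sorted(data))
```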

Findings

In the authors' experiments, a 4-core Intel i7-6770 system with Ubuntu Linux was used. DPPSort is up to 6.82× faster than qsort and up to 5.88× faster than STLSort on Uint64 random distributions.

Originality/value

The performance of the parallel sorting algorithm is improved by reducing the compare-and-swap instructions in the algorithm. This concept can be applied to related problems to increase the speedup of other algorithms.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 20 July 2020

Mehmet Fatih Uslu, Süleyman Uslu and Faruk Bulut

Abstract

Optimization algorithms can differ in performance for a specific problem, and hybrid approaches that exploit this difference can give higher performance in many cases. This paper presents a hybrid approach of the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) specifically for Integrated Process Planning and Scheduling (IPPS) problems. GA and ACO perform differently on different cases of IPPS problems: in some cases GA outperforms ACO, and in other cases ACO outperforms GA. The hybrid method can therefore be constructed as (I) GA improving ACO results or (II) ACO improving GA results, based on the performance of the algorithm pair at the given problem scale. The proposed hybrid GA-ACO approach (hAG) runs both GA and ACO simultaneously, and the better-performing one is selected as the primary algorithm in the hybrid approach. hAG also avoids premature convergence by resetting the parameters that cause the algorithms to converge to local optima, and this avoidance strategy allows it to obtain more accurate solutions. The new hybrid optimization technique merges a GA with a local search strategy based on the interior point method. The efficiency of hAG is demonstrated by solving a constrained multi-objective mathematical test case. Benchmarking results of experimental studies with AIS (Artificial Immune System), GA and ACO indicate that the proposed model outperforms the non-hybrid algorithms in different scenarios.
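
A schematic sketch of the selection logic only (run two candidate optimizers on a short trial budget, then promote the better performer) is given below; the two optimizers are deliberately simplified stand-ins for GA and ACO, and the objective is a toy function rather than an IPPS instance.

```python
import random

def sphere(x):
    # Toy objective standing in for an IPPS makespan evaluation.
    return sum(v * v for v in x)

def mutation_search(budget, dim=5, step=0.3):
    """Simplified GA-style stand-in: keep one parent, mutate it, keep the better."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(budget):
        child = [v + random.gauss(0, step) for v in best]
        if sphere(child) < sphere(best):
            best = child
    return sphere(best)

def random_search(budget, dim=5):
    """Simplified ACO stand-in: independent random samples."""
    return min(sphere([random.uniform(-5, 5) for _ in range(dim)]) for _ in range(budget))

# Run both candidates on a small trial budget, then promote the better one.
candidates = {"ga_like": mutation_search, "aco_like": random_search}
trial = {name: algo(budget=200) for name, algo in candidates.items()}
primary = min(trial, key=trial.get)
print("trial results:", trial, "-> primary algorithm:", primary)
print("best value after full run:", candidates[primary](budget=5000))
```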

Details

Applied Computing and Informatics, vol. 18 no. 1/2
Type: Research Article
ISSN: 2210-8327

Keywords

Open Access
Article
Publication date: 25 March 2021

Fareed Sheriff

Abstract

Purpose

This paper presents the Edge Load Management and Optimization through Pseudoflow Prediction (ELMOPP) algorithm, which aims to solve problems identified in previous algorithms. Through machine learning with nested long short-term memory (NLSTM) modules and graph theory, the algorithm attempts to predict the near future from past data and traffic patterns in order to inform its real-time decisions. It mitigates traffic by predicting future traffic flow based on past flow and using those predictions both to maximize present traffic flow and to decrease future traffic congestion.
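
As a rough sketch of the prediction component, the code below trains a small stacked LSTM, used here as a stand-in for the nested LSTM (NLSTM) modules, to forecast the next value of a synthetic traffic-flow series; the data, window length and network size are invented for illustration.

```python
import numpy as np
import tensorflow as tf

# Synthetic traffic-flow series standing in for per-approach vehicle counts.
rng = np.random.default_rng(0)
series = 50 + 20 * np.sin(np.linspace(0, 40, 2000)) + rng.normal(0, 2, 2000)

# Sliding windows of past flow used to predict the next flow value.
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

# Stacked LSTM used here as a stand-in for the paper's nested LSTM modules.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

next_flow = model.predict(X[-1:], verbose=0)[0, 0]
print("predicted next flow value:", float(next_flow))
```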

Design/methodology/approach

ELMOPP was tested against the ITLC and OAF traffic management algorithms using a single-intersection simulation modeled after the one presented in the ITLC paper.

Findings

The collected data supports the conclusion that ELMOPP statistically significantly outperforms both algorithms in throughput rate, a measure of how many vehicles are able to exit inroads every second.

Originality/value

Furthermore, while ITLC requires GPS transponders and OAF requires GPS, speed sensors and radio, ELMOPP uses only traffic light camera footage, which is almost always readily available, in contrast to GPS and speed sensors.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 5 September 2016

Qingyuan Wu, Changchen Zhan, Fu Lee Wang, Siyang Wang and Zeping Tang

Abstract

Purpose

The rapid growth of web-based and mobile e-learning applications, such as massive open online courses, has created a large volume of online learning resources. Confronted with such a large amount of learning data, it is important to develop effective clustering approaches for user group modeling and intelligent tutoring. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, a minimum spanning tree-based approach is proposed for clustering online learning resources. The novel clustering approach has two main stages, namely an elimination stage and a construction stage. During the elimination stage, the Euclidean distance is adopted as the metric for measuring the density of learning resources. Resources with very low densities are identified as outliers and removed. During the construction stage, a minimum spanning tree is built by initializing the centroids according to the degree of freedom of the resources. Online learning resources are subsequently partitioned into clusters by exploiting the structure of the minimum spanning tree.
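
A compact sketch of these two stages is given below: a density-based elimination step followed by MST construction and edge cutting to form clusters, using SciPy; the density threshold and the number of clusters are illustrative, and cutting the longest MST edges stands in for the paper's degree-of-freedom-based centroid initialization.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial import distance_matrix

rng = np.random.default_rng(0)
# Synthetic 2-D "learning resource" features: two groups plus a few stray outliers.
points = np.vstack([
    rng.normal([0, 0], 0.3, (40, 2)),
    rng.normal([3, 3], 0.3, (40, 2)),
    rng.uniform(-5, 8, (4, 2)),
])

# Elimination stage: density = number of neighbours within a radius; drop sparse points.
dist = distance_matrix(points, points)
density = (dist < 1.0).sum(axis=1) - 1
kept = points[density >= 3]

# Construction stage: build an MST over the kept points and cut the longest
# edges so that the remaining forest has the desired number of clusters.
mst = minimum_spanning_tree(distance_matrix(kept, kept)).toarray()
n_clusters = 2
edges = np.argwhere(mst > 0)
weights = mst[mst > 0]
for i, j in edges[np.argsort(weights)[::-1][:n_clusters - 1]]:
    mst[i, j] = 0.0

_, labels = connected_components(mst, directed=False)
print("cluster sizes:", np.bincount(labels))
```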

Findings

Conventional clustering algorithms have a number of shortcomings that prevent them from handling online learning resources effectively. On the one hand, existing partitional clustering methods use a randomly assigned centroid for each cluster, which usually leads to ineffective clustering results. On the other hand, classical density-based clustering methods are computationally expensive and time-consuming. Experimental results indicate that the proposed algorithm outperforms traditional clustering algorithms for online learning resources.

Originality/value

The effectiveness of the proposed algorithm has been validated using several data sets. Moreover, the proposed clustering algorithm has great potential in e-learning applications. It has been demonstrated how the novel technique can be integrated into various e-learning systems. For example, the clustering technique can classify learners into groups so that homogeneous grouping can improve the effectiveness of learning. In addition, clustering of online learning resources is valuable for decision-making in terms of tutorial strategies and instructional design for intelligent tutoring. Lastly, a number of directions for future research have been identified in the study.

Details

Asian Association of Open Universities Journal, vol. 11 no. 2
Type: Research Article
ISSN: 1858-3431

Keywords

Open Access
Article
Publication date: 11 April 2018

Mohamed A. Tawhid and Kevin B. Dsouza

Abstract

In this paper, we present a new hybrid binary version of the bat and enhanced particle swarm optimization algorithms to solve feature selection problems. The proposed algorithm is called the Hybrid Binary Bat Enhanced Particle Swarm Optimization Algorithm (HBBEPSO). The HBBEPSO algorithm combines the bat algorithm, whose echolocation mechanism helps explore the feature space, with an enhanced version of particle swarm optimization, which is able to converge to the best global solution in the search space. To investigate the general performance of the proposed HBBEPSO algorithm, it is compared with the original optimizers and with other optimizers that have been used for feature selection in the past. A set of assessment indicators is used to evaluate and compare the different optimizers over 20 standard data sets obtained from the UCI repository. The results prove the ability of the proposed HBBEPSO algorithm to search the feature space for optimal feature combinations.
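
The sketch below is not HBBEPSO itself, but a minimal binary-swarm feature-selection loop in the same spirit: each particle is a 0/1 feature mask, velocities are squashed through a sigmoid to update the masks, and fitness is a k-NN classifier's cross-validated accuracy on the selected features; the data set, classifier and swarm parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of a k-NN classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Binary swarm: positions are 0/1 feature masks, velocities are real-valued.
n_particles, n_iters = 10, 15
pos = rng.integers(0, 2, (n_particles, n_features))
vel = rng.normal(0, 1, (n_particles, n_features))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_features))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer: higher velocity -> higher probability of selecting the feature.
    pos = (rng.random((n_particles, n_features)) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"selected {int(gbest.sum())} of {n_features} features, cv accuracy {pbest_fit.max():.3f}")
```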

Details

Applied Computing and Informatics, vol. 16 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords
