Search results

1 – 10 of 12
Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern…

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across benchmarking data sets of different types and fields.

Design/methodology/approach

The proposed PPSM combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions.

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high-performance ability.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computing. The proposed model might be used as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems.
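
For context, the minimum cross-entropy criterion at the heart of MCET-style thresholding can be sketched in a few lines. The following is a generic single-threshold search under a simple class-mean model, not the authors' PPSM (which additionally fits lognormal and gamma distribution models and parallelizes the computation):

```python
import math

def mcet_threshold(hist):
    """Exhaustive minimum cross-entropy threshold search (Li's criterion).

    A generic single-threshold sketch over a grey-level histogram; the PPSM
    additionally fits lognormal and gamma models and runs the search in
    parallel.
    """
    total_mass = sum(hist)
    best_t, best_eta = None, float("inf")
    for t in range(1, len(hist)):
        w1 = sum(hist[:t])                 # class masses below / above t
        w2 = total_mass - w1
        if w1 == 0 or w2 == 0:
            continue
        s1 = sum(g * hist[g] for g in range(t))
        s2 = sum(g * hist[g] for g in range(t, len(hist)))
        mu1, mu2 = s1 / w1, s2 / w2        # class mean grey levels
        # cross-entropy objective: smaller eta = better separation
        eta = -(s1 * math.log(mu1 + 1e-12) + s2 * math.log(mu2 + 1e-12))
        if eta < best_eta:
            best_eta, best_t = eta, t
    return best_t

# bimodal histogram with peaks near grey levels 2 and 12
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 0, 5, 20, 5, 0, 0]
print(mcet_threshold(hist))
```

On this toy histogram the selected threshold falls between the two modes, which is the behaviour an optimum-threshold model is meant to deliver.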

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 15 February 2022

Martin Nečaský, Petr Škoda, David Bernhauer, Jakub Klímek and Tomáš Skopal

Semantic retrieval and discovery of datasets published as open data remains a challenging task. The datasets inherently originate in the globally distributed web jungle, lacking…

Abstract

Purpose

Semantic retrieval and discovery of datasets published as open data remains a challenging task. The datasets inherently originate in the globally distributed web jungle, lacking the luxury of centralized database administration, database schemes, shared attributes, vocabulary, structure and semantics. The existing dataset catalogs provide basic search functionality relying on keyword search in brief, incomplete or misleading textual metadata attached to the datasets. The search results are thus often insufficient. However, there exist many ways of improving the dataset discovery by employing content-based retrieval, machine learning tools, third-party (external) knowledge bases, countless feature extraction methods and description models and so forth.

Design/methodology/approach

In this paper, the authors propose a modular framework for rapid experimentation with methods for similarity-based dataset discovery. The framework consists of an extensible catalog of components prepared to form custom pipelines for dataset representation and discovery.

Findings

The study proposes several proof-of-concept pipelines including experimental evaluation, which showcase the usage of the framework.

Originality/value

To the best of the authors’ knowledge, there is no similar formal framework for experimentation with various similarity methods in the context of dataset discovery. The framework has the ambition to establish a platform for reproducible and comparable research in the area of dataset discovery. The prototype implementation of the framework is available on GitHub.
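
The component-and-pipeline idea the framework is built around can be illustrated in miniature. All names below are illustrative, not the framework's real API: a feature extractor and a similarity measure are drawn from a "catalog" and composed into a discovery pipeline.

```python
# A minimal sketch of a component-based dataset-discovery pipeline, in the
# spirit of the authors' framework (all names are illustrative, not the real API).

def title_tokens(dataset):
    """Feature-extraction component: bag of lowercase title words."""
    return set(dataset["title"].lower().split())

def jaccard(a, b):
    """Similarity component: Jaccard overlap of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def make_pipeline(extract, similarity):
    """Compose catalog components into a similarity-based discovery pipeline."""
    def discover(query, catalog, top_k=3):
        q = extract(query)
        scored = [(similarity(q, extract(d)), d["title"]) for d in catalog]
        return [t for s, t in sorted(scored, reverse=True)[:top_k] if s > 0]
    return discover

catalog = [
    {"title": "City air quality measurements"},
    {"title": "National rail timetables"},
    {"title": "Air pollution sensor readings"},
]
discover = make_pipeline(title_tokens, jaccard)
print(discover({"title": "air quality sensor data"}, catalog, top_k=2))
```

Swapping `title_tokens` for a content-based or embedding-based extractor, or `jaccard` for another similarity, yields a new pipeline without touching the rest, which is the rapid-experimentation property the paper emphasizes.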

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Open Access
Article
Publication date: 7 December 2022

T.O.M. Forslund, I.A.S. Larsson, J.G.I. Hellström and T.S. Lundström

The purpose of this paper is to present a fast and bare-bones implementation of a numerical method for quickly simulating turbulent thermal flows on GPUs. The work also validates…

Abstract

Purpose

The purpose of this paper is to present a fast and bare-bones implementation of a numerical method for quickly simulating turbulent thermal flows on GPUs. The work also validates earlier research showing that the lattice Boltzmann method (LBM) is suitable for complex thermal flows.

Design/methodology/approach

A dual lattice hydrodynamic (D3Q27) thermal (D3Q7) multiple-relaxation time LBM model capable of thermal DNS calculations is implemented in CUDA.

Findings

The model matches the computational performance of similar LBM solvers reported in earlier publications. The solver is validated against three benchmark cases for turbulent thermal flow with available data and is shown to be in excellent agreement.

Originality/value

The combination of a D3Q27 and D3Q7 stencil for a multiple-relaxation-time LBM has, to the authors’ knowledge, not been used for simulations of thermal flows. The code is made available in a public repository under a free license.
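
The collide-and-stream structure of such a solver is easy to show in miniature. Below is a stdlib-only, single-relaxation (BGK) D1Q3 sketch of the thermal lattice alone, far simpler than the paper's D3Q27/D3Q7 multiple-relaxation-time CUDA implementation, but with the same two-phase update:

```python
# D1Q3 thermal lattice Boltzmann sketch: collide toward equilibrium, then
# stream populations along their discrete velocities (periodic domain).
W = [2 / 3, 1 / 6, 1 / 6]   # lattice weights for rest, +x, -x populations
C = [0, 1, -1]              # discrete velocities
TAU = 0.8                   # relaxation time (sets the thermal diffusivity)

def step(f):
    n = len(f[0])
    # collision: relax each population toward local equilibrium w_i * T
    T = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    post = [[f[i][x] + (W[i] * T[x] - f[i][x]) / TAU for x in range(n)]
            for i in range(3)]
    # streaming: population i at site x arrives from site x - c_i
    return [[post[i][(x - C[i]) % n] for x in range(n)] for i in range(3)]

# hot spot in the middle of a periodic 1D domain diffuses outward
n = 20
T0 = [1.0 if x == n // 2 else 0.0 for x in range(n)]
f = [[W[i] * T0[x] for x in range(n)] for i in range(3)]
for _ in range(50):
    f = step(f)
T = [sum(f[i][x] for i in range(3)) for x in range(n)]
print(max(T), sum(T))
```

Total temperature is conserved exactly by both phases, and the initial spike spreads diffusively; the GPU implementations in the paper parallelize exactly this per-site update across threads.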

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 5
Type: Research Article
ISSN: 0961-5539

Keywords

Open Access
Article
Publication date: 28 March 2022

Yunfei Li, Shengbo Eben Li, Xingheng Jia, Shulin Zeng and Yu Wang

The purpose of this paper is to reduce the difficulty of model predictive control (MPC) deployment on FPGA so that researchers can make better use of FPGA technology for academic…

Abstract

Purpose

The purpose of this paper is to reduce the difficulty of model predictive control (MPC) deployment on FPGA so that researchers can make better use of FPGA technology for academic research.

Design/methodology/approach

In this paper, the MPC algorithm is implemented on an FPGA through hardware-software co-design. Experiments verify this method.

Findings

This paper implements a ZYNQ-based design method, which significantly reduces the difficulty of development. Comparison with a CPU-based solution shows that, through this method, the FPGA significantly accelerates the solution of MPC.

Research limitations/implications

Due to the limitations of practical conditions, this paper could not carry out a hardware-in-the-loop experiment for the time being; an open-loop experiment was conducted instead.

Originality/value

This paper proposes a new design method to deploy the MPC algorithm on the FPGA, reducing the development difficulty of implementing the algorithm on FPGA. It greatly facilitates FPGA hardware acceleration research by researchers in the field of autonomous driving.
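
As background on what such an accelerator computes: each receding-horizon MPC step minimizes a quadratic cost over a predicted trajectory and applies only the first control move. The toy below does this for a scalar system with plain gradient descent; first-order methods of this kind are common in embedded MPC, but this is not the paper's solver, and all constants are illustrative.

```python
# Toy receding-horizon MPC for the scalar integrator x' = A*x + B*u,
# solved by gradient descent on the control sequence.
A, B = 1.0, 0.5          # scalar integrator dynamics
Q, R = 1.0, 0.1          # state and input weights
N = 5                    # prediction horizon

def cost_grad(x0, u):
    # forward simulate the horizon, then backpropagate the quadratic cost
    x = [x0]
    for k in range(N):
        x.append(A * x[k] + B * u[k])
    grad = [0.0] * N
    lam = 0.0            # adjoint: lam_k = dJ/dx_k, running backward
    for k in reversed(range(N)):
        lam = 2 * Q * x[k + 1] + A * lam
        grad[k] = 2 * R * u[k] + B * lam
    return grad

def mpc_control(x0, iters=200, lr=0.05):
    u = [0.0] * N
    for _ in range(iters):
        g = cost_grad(x0, u)
        u = [ui - lr * gi for ui, gi in zip(u, g)]
    return u[0]          # receding horizon: apply only the first move

# closed loop: the controller drives the state toward the origin
x = 2.0
for _ in range(15):
    x = A * x + B * mpc_control(x)
print(x)
```

On an FPGA the per-iteration arithmetic (the simulate/backpropagate loops here) is what gets pipelined and parallelized, which is why hardware deployment pays off for short sampling periods.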

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 2
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 12 July 2022

Zheng Xu, Yihai Fang, Nan Zheng and Hai L. Vu

With the aid of naturalistic simulations, this paper aims to investigate human behavior during manual and autonomous driving modes in complex scenarios.

Abstract

Purpose

With the aid of naturalistic simulations, this paper aims to investigate human behavior during manual and autonomous driving modes in complex scenarios.

Design/methodology/approach

The simulation environment is established by integrating a virtual reality interface with a micro-simulation model. In the simulation, the vehicle autonomy is developed by a framework that integrates artificial neural networks and genetic algorithms. Human-subject experiments are carried out, and participants are asked to virtually sit in the developed autonomous vehicle (AV), which allows for both human driving and autopilot functions within a mixed traffic environment.
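
The coupling of neural networks and genetic algorithms mentioned above (neuroevolution) can be shown in a stdlib-only miniature: a single-neuron "controller" mapping gap distance to a braking signal, with a genetic algorithm evolving its weights to imitate a braking rule. The rule, network size and GA settings are all illustrative, not the paper's.

```python
import math
import random

random.seed(0)

def controller(w, gap):
    """One-neuron 'network': braking signal in (0, 1) from the gap in metres."""
    return 1 / (1 + math.exp(-(w[0] * gap + w[1])))

def fitness(w):
    # illustrative target rule: brake hard (~1) when gap < 10 m, else coast (~0)
    err = 0.0
    for gap in range(0, 40, 2):
        target = 1.0 if gap < 10 else 0.0
        err += (controller(w, gap) - target) ** 2
    return -err                         # higher fitness = lower imitation error

def evolve(pop_size=30, gens=60):
    """Elitist GA: keep the top third, refill with mutated copies of parents."""
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 3]
        pop = parents + [[p[0] + random.gauss(0, 0.3), p[1] + random.gauss(0, 0.3)]
                         for p in random.choices(parents, k=pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

The evolved weights give a braking signal that decreases with gap, i.e. the GA discovers the qualitative shape of the rule; the paper's framework does the same at the scale of a full driving policy.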

Findings

Not surprisingly, an inconsistency is identified between the two driving modes, in which the AV’s driving maneuvers cause cognitive bias and make participants feel unsafe. Even though the AV ended up in an accident in only a small portion of cases during the testing stage, participants still frequently intervened during AV operation. On a similar note, even though the statistical results reflect that the AV drives under perceived high-risk conditions, an actual crash rarely happens. This suggests that classic safety surrogate measurements, e.g. time-to-collision, may require adjustment for mixed traffic flow.
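
The surrogate measure mentioned above is easy to state precisely; a minimal sketch of the classic definition (thresholds such as the 1.5 s shown here are conventional, not taken from the paper):

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """Classic TTC surrogate: seconds until the following vehicle closes the
    gap, defined only while it is faster than the leader."""
    closing = v_follow - v_lead
    return gap_m / closing if closing > 0 else float("inf")

# 30 m gap, closing at 10 m/s -> 3 s to collision; opening gap -> no conflict
print(time_to_collision(30, 20, 10))
print(time_to_collision(30, 10, 20))

HIGH_RISK_TTC_S = 1.5   # conventional alert threshold, illustrative only
```

The paper's observation is that a fixed threshold on such a measure can flag AV maneuvers as high-risk even when no crash materializes, hence the suggested adjustment for mixed traffic.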

Research limitations/implications

Understanding the behavior of AVs and the behavioral difference between AVs and human drivers is important; the developed platform is only a first effort toward identifying the critical scenarios where AVs might fail to react.

Practical implications

This paper attempts to fill the existing research gap in preparing close-to-reality tools for AV experience and further understanding human behavior during high-level autonomous driving.

Social implications

This work aims to systematically analyze the inconsistency in driving patterns between manual and autopilot modes in various driving scenarios (i.e. multiple scenes and various traffic conditions) to facilitate user acceptance of AV technology.

Originality/value

The paper provides a close-to-reality tool for AV experience and AV-related behavioral study, a systematic analysis of the inconsistency in driving patterns between manual and autonomous driving, and a foundation for identifying the critical scenarios where AVs might fail to react.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Keywords

Open Access
Article
Publication date: 19 December 2023

Qinxu Ding, Ding Ding, Yue Wang, Chong Guan and Bosheng Ding

The rapid rise of large language models (LLMs) has propelled them to the forefront of applications in natural language processing (NLP). This paper aims to present a comprehensive…

Abstract

Purpose

The rapid rise of large language models (LLMs) has propelled them to the forefront of applications in natural language processing (NLP). This paper aims to present a comprehensive examination of the research landscape in LLMs, providing an overview of the prevailing themes and topics within this dynamic domain.

Design/methodology/approach

Drawing from an extensive corpus of 198 records published between 1996 and 2023, retrieved from a relevant academic database and encompassing journal articles, books, book chapters, conference papers and selected working papers, this study delves deep into the multifaceted world of LLM research. The authors employed the BERTopic algorithm, a recent advancement in topic modeling, to conduct a comprehensive analysis of the data after it had been meticulously cleaned and preprocessed. BERTopic leverages the power of transformer-based language models like bidirectional encoder representations from transformers (BERT) to generate more meaningful and coherent topics. This approach facilitates the identification of hidden patterns within the data, enabling the authors to uncover valuable insights that might otherwise have remained obscure.
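
BERTopic itself relies on transformer embeddings and class-based TF-IDF, so it cannot be reproduced in a few lines; as a stdlib-only miniature of the same "represent, then cluster into topics" idea, here is a greedy grouping of documents by cosine similarity of term-frequency vectors (purely illustrative, not the authors' method):

```python
import math
from collections import Counter

def tf_vector(doc):
    """Crude stand-in for an embedding: raw term frequencies."""
    return Counter(doc.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def greedy_topics(docs, threshold=0.3):
    """Assign each doc to the first existing topic it resembles, else open a new one."""
    topics = []                          # list of (centroid Counter, member indices)
    for i, d in enumerate(docs):
        v = tf_vector(d)
        for centroid, members in topics:
            if cosine(v, centroid) >= threshold:
                centroid.update(v)       # fold the doc into the topic centroid
                members.append(i)
                break
        else:
            topics.append((v, [i]))
    return [members for _, members in topics]

docs = [
    "language models for text generation",
    "large language models in education teaching",
    "clinical medical applications of language models",
    "speech recognition techniques",
]
print(greedy_topics(docs))
```

Even this crude representation separates the speech-recognition document from the language-model documents, which hints at how the richer BERT-based representation can recover the four clusters the study reports.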

Findings

The analysis revealed four distinct clusters of topics in LLM research: “language and NLP”, “education and teaching”, “clinical and medical applications” and “speech and recognition techniques”. Each cluster embodies a unique aspect of LLM application and showcases the breadth of possibilities that LLM technology has to offer. In addition to presenting the research findings, this paper identifies key challenges and opportunities in the realm of LLMs. It underscores the necessity for further investigation in specific areas, including the paramount importance of addressing potential biases, transparency and explainability, data privacy and security, and responsible deployment of LLM technology.

Practical implications

This classification offers practical guidance for researchers, developers, educators, and policymakers to focus efforts and resources. The study underscores the importance of addressing challenges in LLMs, including potential biases, transparency, data privacy, and responsible deployment. Policymakers can utilize this information to shape regulations, while developers can tailor technology development based on the diverse applications identified. The findings also emphasize the need for interdisciplinary collaboration and highlight ethical considerations, providing a roadmap for navigating the complex landscape of LLM research and applications.

Originality/value

This study stands out as the first to examine the evolution of LLMs across such a long time frame and such a diverse range of disciplines. It provides a unique perspective on the key areas of LLM research, highlighting the breadth and depth of the field’s evolution.

Details

Journal of Electronic Business & Digital Economics, vol. 3 no. 1
Type: Research Article
ISSN: 2754-4214

Keywords

Open Access
Article
Publication date: 3 August 2020

Maryam AlJame and Imtiaz Ahmad

The evolution of technologies has unleashed a wealth of challenges by generating massive amounts of data. Recently, biological data has increased exponentially, which has…

Abstract

The evolution of technologies has unleashed a wealth of challenges by generating massive amounts of data. Recently, biological data has increased exponentially, which has introduced several computational challenges. DNA short read alignment is an important problem in bioinformatics. The exponential growth in the number of short reads has increased the need for an ideal platform to accelerate the alignment process. Apache Spark is a cluster-computing framework that provides data parallelism and fault tolerance. In this article, we propose a Spark-based algorithm, called Spark-DNAligning, to accelerate the DNA short read alignment problem. Spark-DNAligning exploits Apache Spark’s performance optimizations such as broadcast variables, join after partitioning, caching and in-memory computations. Spark-DNAligning is evaluated in terms of performance by comparing it with the SparkBWA tool and a MapReduce-based algorithm called CloudBurst. All the experiments are conducted on Amazon Web Services (AWS). Results demonstrate that Spark-DNAligning outperforms both tools by providing a speedup in the range of 101–702 in aligning gigabytes of short reads to the human genome. Empirical evaluation reveals that Apache Spark offers promising solutions to the DNA short read alignment problem.
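
Spark-DNAligning's contribution is distributing the alignment across a cluster; the per-read core of short-read alignment is typically a seed-and-extend lookup, which the stdlib-only sketch below illustrates (a simplification for exposition, not the paper's algorithm). In Spark, the index would be a broadcast variable and `align_read` would be mapped over an RDD of reads.

```python
def build_index(genome, k=4):
    """Hash every k-mer of the reference to its positions (the 'seed' index)."""
    index = {}
    for i in range(len(genome) - k + 1):
        index.setdefault(genome[i:i + k], []).append(i)
    return index

def align_read(read, genome, index, k=4, max_mismatches=1):
    """Seed with the read's first k-mer, then extend and count mismatches."""
    for pos in index.get(read[:k], []):
        window = genome[pos:pos + len(read)]
        if len(window) == len(read):
            mismatches = sum(a != b for a, b in zip(read, window))
            if mismatches <= max_mismatches:
                return pos
    return -1          # no placement within the mismatch budget

genome = "ACGTACGTTAGCCGATAGGCT"
idx = build_index(genome)
print(align_read("TAGCCGAT", genome, idx))
```

The speedups reported in the abstract come from doing this cheap per-read work in parallel with the index held in memory on every worker, rather than from a cleverer alignment kernel.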

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 4 April 2024

Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face more and more complex environments, a sensing…

Abstract

Purpose

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face more and more complex environments, a sensing system with various types of sensors is essential for intelligent robots. To mimic human-like abilities, sensors with perception capabilities similar to those of humans are indispensable. However, most research has concentrated only on analyzing the literature on single-modal sensors and their robotic applications.

Design/methodology/approach

This study presents a systematic review of the five bioinspired senses, including a brief introduction to multimodal sensing applications, and predicts current trends and future directions of this field, which may offer continuing insight.

Findings

This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.

Originality/value

The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application scopes of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future research on intelligent robot sensors.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Keywords

Open Access
Article
Publication date: 4 January 2021

Radosław Wajman

Crystallization is a process widely used for component separation and solids purification. The systems applied so far for crystallization process evaluation involve numerous…

Abstract

Purpose

Crystallization is a process widely used for component separation and solids purification. The systems applied so far for crystallization process evaluation involve numerous non-invasive tomographic measurement techniques, which suffer from several reported problems. The purpose of this paper is to show the abilities of three-dimensional Electrical Capacitance Tomography (3D ECT) in the context of non-invasive and non-intrusive visualization of crystallization processes. Multiple aspects and problems of ECT imaging, as well as the design of a computer model able to work with high-relative-permittivity liquids, are pointed out.

Design/methodology/approach

To design the most efficient 3D ECT sensor structure from both a mechanical and an electrical point of view, a high-precision impedance meter was applied. Three types of sensors were designed, built and tested. To meet the requirements of the new concept, a dedicated ECT device was constructed.

Findings

It has been shown that the ECT technique can be applied to the diagnosis of crystallization, and that the crystal distribution can be identified using this technique. The achieved measurement resolution allows the localization of crystals to be detected. The use of stabilized electrodes improves the sensitivity of the sensor and provides images better suited to further analysis.

Originality/value

A dedicated 3D ECT sensor construction has been proposed to increase its sensitivity in the border area where the crystals grow. To support this feature, new algorithms for the potential field distribution and the sensitivity matrix calculation have been developed. The adaptation of the iterative 3D image reconstruction process is also described.
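
ECT image reconstruction is an ill-posed inverse problem, and a standard iterative scheme for the linearized sensitivity model g = S f (one plausible reading of the "iterative 3D image reconstruction process" above, though the paper's exact algorithm may differ) is Landweber iteration. A tiny dense sketch; the real problem is 3D and far larger:

```python
def landweber(S, g, alpha=0.1, iters=500):
    """Landweber iteration: f <- f + alpha * S^T (g - S f).

    S is the sensitivity matrix (capacitance measurements per permittivity
    pixel), g the measured data, f the reconstructed permittivity image.
    """
    m, n = len(S), len(S[0])
    f = [0.0] * n
    for _ in range(iters):
        # residual between measured and predicted data
        r = [g[i] - sum(S[i][j] * f[j] for j in range(n)) for i in range(m)]
        # gradient step along S^T r
        for j in range(n):
            f[j] += alpha * sum(S[i][j] * r[i] for i in range(m))
    return f

# toy 3-measurement, 2-pixel problem with true image f = [2, 3]
S = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
g = [2.0, 3.0, 5.0]
f = landweber(S, g)
print(f)
```

The step size must satisfy alpha < 2 / lambda_max(S^T S) for convergence; improved sensitivity matrices, as proposed in the paper, directly improve how well such iterations localize crystals near the sensor border.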

Details

Sensor Review, vol. 41 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Open Access
Article
Publication date: 1 June 2021

Ondřej Bublík, Libor Lobovský, Václav Heidler, Tomáš Mandys and Jan Vimmr

The paper aims to provide new experimental data for validation of the well-established mathematical models within the framework of the lattice Boltzmann method (LBM), which…

Abstract

Purpose

The paper aims to provide new experimental data for validating well-established mathematical models within the framework of the lattice Boltzmann method (LBM) that are applied to problems of casting processes in complex mould cavities.

Design/methodology/approach

An experimental campaign aiming at the free-surface flow within a system of narrow channels is designed and executed under well-controlled laboratory conditions. An in-house lattice Boltzmann solver is implemented. Its algorithm is described in detail and its performance is tested thoroughly using both the newly recorded experimental data and well-known analytical benchmark tests.

Findings

The benchmark tests prove the ability of the implemented algorithm to provide a reliable solution when the surface tension effects become dominant. The convergence of the implemented method is assessed. The two new experimentally studied problems are resolved well by simulations using a coarse computational grid.

Originality/value

A detailed set of original experimental data for validation of computational schemes for simulations of free-surface gravity-driven flow within a system of narrow channels is presented.
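
For surface-tension-dominated runs like those highlighted in the Findings, a standard analytical benchmark (likely among the "well-known analytical benchmark tests" mentioned above, though the paper's exact test suite is not listed here) is the Young-Laplace law for the pressure jump across a static droplet interface:

```python
def laplace_pressure_jump(sigma, radius, dim=3):
    """Young-Laplace law: dp = sigma/R for a 2D disc, 2*sigma/R for a 3D sphere.

    sigma: surface tension [N/m]; radius: droplet radius [m].
    """
    return (dim - 1) * sigma / radius

# e.g. a 1 mm droplet with sigma = 0.07 N/m (roughly water-air)
print(laplace_pressure_jump(0.07, 0.001))
```

A free-surface LBM solver is typically validated by initializing a static droplet, measuring the simulated pressure jump, and checking it against this value across several radii.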

Details

Engineering Computations, vol. 38 no. 10
Type: Research Article
ISSN: 0264-4401

Keywords
