Search results

1 – 4 of 4
Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart


Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing. It is a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model intended to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques, based on the Gaussian, lognormal and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Benchmark Data Set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques, based on the Gaussian, lognormal and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
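
As an illustration of the minimum cross-entropy thresholding (MCET) idea underlying the PPSM (not the authors' exact formulation, which additionally fits Gaussian, lognormal and gamma models), the sketch below applies the classic cross-entropy criterion to a grayscale histogram by brute-force search over candidate thresholds; the function name and the use of simple class means are assumptions for illustration only.

```python
import numpy as np

def mcet_threshold(hist, levels=None):
    """Brute-force minimum cross-entropy threshold for a grayscale histogram.

    hist   : 1-D array of pixel counts per gray level (e.g. 256 bins).
    levels : gray-level values per bin; defaults to 1..L so log() stays defined.
    Returns the bin index that minimises the cross-entropy criterion.
    """
    hist = np.asarray(hist, dtype=float)
    if levels is None:
        levels = np.arange(1, len(hist) + 1, dtype=float)

    best_t, best_cost = None, np.inf
    for t in range(1, len(hist)):                 # candidate thresholds
        lo_h, hi_h = hist[:t], hist[t:]
        lo_l, hi_l = levels[:t], levels[t:]
        if lo_h.sum() == 0 or hi_h.sum() == 0:    # skip empty classes
            continue
        mu1 = (lo_l * lo_h).sum() / lo_h.sum()    # mean of below-threshold class
        mu2 = (hi_l * hi_h).sum() / hi_h.sum()    # mean of above-threshold class
        cost = ((lo_l * lo_h) * np.log(lo_l / mu1)).sum() \
             + ((hi_l * hi_h) * np.log(hi_l / mu2)).sum()
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Usage sketch: hist, _ = np.histogram(img, bins=256, range=(0, 256))
#               mask = img >= mcet_threshold(hist)
```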

Findings

On the basis of the achieved results, the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is observed to be a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions with MCET. Moreover, to provide accurate, high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of the MCET computation. The proposed model might be used as a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control and surveillance systems.
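
The abstract notes that the PPSM parallelises the MCET computation to cut processing time. One hedged way such a parallel threshold search could look, using Python's standard process pool to score candidate thresholds concurrently, is sketched below; the job partitioning and function names are assumptions, not the authors' implementation.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def cost_at(args):
    """Cross-entropy cost of a single candidate threshold (see mcet_threshold above)."""
    hist, levels, t = args
    lo_h, hi_h = hist[:t], hist[t:]
    lo_l, hi_l = levels[:t], levels[t:]
    if lo_h.sum() == 0 or hi_h.sum() == 0:
        return t, np.inf
    mu1 = (lo_l * lo_h).sum() / lo_h.sum()
    mu2 = (hi_l * hi_h).sum() / hi_h.sum()
    cost = ((lo_l * lo_h) * np.log(lo_l / mu1)).sum() + \
           ((hi_l * hi_h) * np.log(hi_l / mu2)).sum()
    return t, cost

def parallel_mcet(hist, workers=4):
    """Evaluate all candidate thresholds in parallel and return the best one."""
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(1, len(hist) + 1, dtype=float)
    jobs = [(hist, levels, t) for t in range(1, len(hist))]
    # On spawn-based platforms this must run under `if __name__ == "__main__":`.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(cost_at, jobs))
    return min(results, key=lambda r: r[1])[0]   # threshold with the lowest cost
```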

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Article
Publication date: 18 October 2021

Anilkumar Chandrashekhar Korishetti and Virendra S. Malemath


Abstract

Purpose

High-efficiency video coding (HEVC) is the latest video coding standard and offers better coding efficiency than the H.264/advanced video coding (AVC) standard. The purpose of this paper is to design and develop an effective block search mechanism for the HEVC video compression standard so that the developed compression standard can be applied to communication applications.

Design/methodology/approach

In the proposed method, a rate-distortion (RD) trade-off, named the regressive RD trade-off, is used based on the conditional autoregressive value at risk (CaViar) model. The motion estimation (ME) is based on a new block search mechanism, developed by modifying the Ordered Tree-based Hex-Octagon (OrTHO)-search algorithm together with a chronological Salp swarm algorithm (SSA)-based deep recurrent neural network (deep RNN) that optimally decides the search shape, the search length of the tree and the dimension. The chronological SSA is developed by integrating the chronological concept into the SSA and is used to train the deep RNN for ME.
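
The OrTHO-search modification and the chronological SSA-trained deep RNN are specific to this paper and are not reproduced here. As background for the block search idea only, the sketch below shows a generic full-search block-matching routine that minimises a Lagrangian rate-distortion cost J = D + λR, with distortion D measured as the sum of absolute differences and the rate R crudely approximated by the motion-vector magnitude; all names and the rate proxy are illustrative assumptions.

```python
import numpy as np

def block_match(cur, ref, bx, by, bsize=16, radius=8, lam=4.0):
    """Generic full-search block matching with Lagrangian RD cost J = D + lam * R.

    cur, ref : 2-D grayscale frames (current and reference).
    (bx, by) : top-left corner of the block in the current frame.
    Returns the motion vector (dx, dy) with the lowest cost.
    """
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > ref.shape[1] or y + bsize > ref.shape[0]:
                continue                                 # candidate falls outside the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            d = np.abs(block - cand).sum()               # distortion: SAD
            r = abs(dx) + abs(dy)                        # crude rate proxy for the MV
            cost = d + lam * r                           # Lagrangian RD cost
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv
```

Fast search patterns, such as hexagon- or octagon-shaped grids, visit only a subset of these candidates, which is the kind of cost reduction that tree-structured search mechanisms like OrTHO aim for.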

Findings

The competing methods used for comparison with the proposed OrTHO-search-based RD + chronological salp swarm algorithm (RD + C-SSA)-based deep RNN are the support vector machine (SVM), the fast encoding framework, the wavefront-based high parallel (WHP) method and the OrTHO-search-based RD method. The proposed video compression method obtained a maximum peak signal-to-noise ratio (PSNR) of 42.9180 dB and a maximum structural similarity index measure (SSIM) of 0.9827.
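
For reference, the PSNR figure quoted above is the standard 10·log10(MAX²/MSE) measure; a minimal sketch assuming 8-bit frames is given below.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit frames."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                      # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```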

Originality/value

In this research, an effective block search mechanism was developed by modifying the OrTHO-search algorithm together with the chronological SSA-based deep RNN for the HEVC video compression standard.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531


Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht


Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables the digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and of reporting on that performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked with governing these technologies must understand where the technology fits into the business process and consider its limitations to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose for ensuring the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will help decision makers and those tasked with governing these technologies to understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations at the level of individual accounting tasks.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517


Article
Publication date: 25 January 2024

Inamul Hasan, Mukesh R., Radha Krishnan P., Srinath R. and Boomadevi P.


Abstract

Purpose

This study aims to find the characteristics of a supercritical airfoil in helicopter rotor blades for the hovering phase using numerical analysis, with validation against experimental results.

Design/methodology/approach

Using numerical analysis in the forward phase of the helicopter, the supercritical airfoil is compared with the conventional airfoil in terms of aerodynamic performance. The multiple reference frame method is used to produce the results for the rotational analysis. A grid independence test was carried out, and validation was performed using benchmark values from NASA data.

Findings

The hovering flight analysis of the supercritical airfoil shows that the NASA SC rotor produces 25%, 26% and 32% better thrust than the HH02 rotor at 5°, 12° and 8° of collective pitch, respectively. Helicopter performance parameters are also calculated based on momentum theory. The theoretical calculations show that the NASA SC rotor is better than the HH02 rotor, and the helicopter performance results show that the NASA SC rotor provides better aerodynamic efficiency than the HH02 rotor.
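
The momentum-theory quantities referred to here follow the standard rotor hover relations, induced velocity v_i = sqrt(T / (2·ρ·A)) and ideal induced power P = T·v_i; a minimal sketch with illustrative inputs (not the paper's values) is given below.

```python
import math

def hover_momentum_theory(thrust, rotor_radius, rho=1.225):
    """Standard momentum-theory hover estimates for a rotor.

    thrust       : rotor thrust in N
    rotor_radius : rotor radius in m
    rho          : air density in kg/m^3 (sea-level ISA by default)
    """
    disk_area = math.pi * rotor_radius ** 2
    v_induced = math.sqrt(thrust / (2.0 * rho * disk_area))   # induced velocity, m/s
    p_ideal = thrust * v_induced                              # ideal induced power, W
    disk_loading = thrust / disk_area                         # disk loading, N/m^2
    return {"induced_velocity": v_induced,
            "ideal_power": p_ideal,
            "disk_loading": disk_loading}

# Illustrative call (numbers are not from the paper):
# hover_momentum_theory(thrust=20000.0, rotor_radius=5.0)
```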

Originality/value

The novelty of the paper is that it demonstrates the aerodynamic performance of the supercritical airfoil to be better than that of the HH02 airfoil. The results are validated against experimental values and against theoretical calculations from momentum theory.

Details

Aircraft Engineering and Aerospace Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1748-8842

