Search results

1 – 10 of 109
Open Access
Article
Publication date: 26 March 2024

Manuel Rossetti, Juliana Bright, Andrew Freeman, Anna Lee and Anthony Parrish

Abstract

Purpose

This paper is motivated by the need to assess the risk profiles associated with the substantial number of items within military supply chains. The scale of supply chain management processes creates difficulties both in the complexity of the analysis and in performing risk assessments based on manual (human analyst) assessment methods. Thus, analysts require methods that can be automated and that can incorporate ongoing operational data on a regular basis.

Design/methodology/approach

The approach taken to address the identification of supply chain risk within an operational setting is based on aspects of multiobjective decision analysis (MODA). The approach constructs risk and importance indices for supply chain elements based on operational data. These indices are commensurate in value, leading to interpretable measures for decision-making.
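As a rough illustration of this type of construction (not the authors' prototype tool), the sketch below computes a weighted additive risk index and a separate importance index on a [0, 1] scale for a few hypothetical items; the criteria, value functions, weights and item data are assumptions for illustration only.

```python
# Hypothetical sketch of a MODA-style additive index; criteria, weights and
# data are illustrative, not the paper's actual model.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    lead_time_days: float      # longer lead time -> higher risk
    single_source: float       # 1.0 if sole-sourced, else 0.0
    demand_variability: float  # coefficient of variation of demand
    criticality: float         # 0..1 mission-impact score

def scale(value, lo, hi):
    """Linear value function mapping a raw measure onto [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def risk_index(item, weights=(0.35, 0.25, 0.40)):
    """Weighted additive risk index; weights sum to 1 so indices stay commensurate."""
    w_lt, w_ss, w_dv = weights
    return (w_lt * scale(item.lead_time_days, 0, 365)
            + w_ss * item.single_source
            + w_dv * scale(item.demand_variability, 0.0, 2.0))

def importance_index(item):
    """Here importance is simply the criticality score, already on [0, 1]."""
    return item.criticality

items = [Item("bearing-A", 120, 1.0, 0.8, 0.9), Item("gasket-B", 20, 0.0, 0.3, 0.2)]
for it in sorted(items, key=lambda i: risk_index(i) * importance_index(i), reverse=True):
    print(f"{it.name}: risk={risk_index(it):.2f} importance={importance_index(it):.2f}")
```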

Findings

Risk and importance indices were developed for the analysis of items within an example supply chain. Using the data on items, individual MODA models were formed and demonstrated using a prototype tool.

Originality/value

To better prepare risk mitigation strategies, analysts require the ability to identify potential sources of risk, especially in times of disruption such as natural disasters.

Details

Journal of Defense Analytics and Logistics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2399-6439

Open Access
Article
Publication date: 21 March 2024

Hedi Khedhiri and Taher Mkademi

Abstract

Purpose

This paper studies complex matrix quaternions (biquaternions) and develops abstract methods in complex matrix analysis.

Design/methodology/approach

We introduce and investigate the complex space $\mathbb{H}_{\mathbb{C}}$ consisting of all $2 \times 2$ complex matrices of the form
$$\xi = \begin{pmatrix} z_1 + i w_1 & z_2 + i w_2 \\ -z_2 + i w_2 & z_1 - i w_1 \end{pmatrix}, \qquad (z_1, w_1, z_2, w_2) \in \mathbb{C}^4.$$
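As a quick illustration of this parameterization (based on the matrix form reconstructed above, so the sign convention is an assumption), the sketch below builds ξ from a 2 × 2 matrix basis and checks that the basis satisfies the quaternion relations:

```python
import numpy as np

# Basis matrices for the quaternion units (one standard 2x2 complex representation;
# the exact convention used in the paper may differ).
I2 = np.eye(2, dtype=complex)
E1 = np.array([[1j, 0], [0, -1j]])                 # plays the role of the unit "i"
E2 = np.array([[0, 1], [-1, 0]], dtype=complex)    # "j"
E3 = np.array([[0, 1j], [1j, 0]])                  # "k"

def xi(z1, w1, z2, w2):
    """Matrix quaternion with complex coefficients (z1, w1, z2, w2) in C^4."""
    return z1 * I2 + w1 * E1 + z2 * E2 + w2 * E3

# Quaternion relations hold for the basis: all squares are -I and E1 @ E2 = E3.
assert np.allclose(E1 @ E1, -I2) and np.allclose(E2 @ E2, -I2) and np.allclose(E3 @ E3, -I2)
assert np.allclose(E1 @ E2, E3)

print(xi(1 + 2j, 0.5j, -1, 3))  # a sample element of the space
```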

Findings

We develop on $\mathbb{H}_{\mathbb{C}}$ a new matrix holomorphic structure for which we provide the fundamental operational calculus properties.

Originality/value

We give necessary and sufficient conditions, in terms of Cauchy–Riemann type quaternionic differential equations, for holomorphicity of a function of one complex matrix variable $\xi \in \mathbb{H}_{\mathbb{C}}$. In particular, we show that there is a rich supply of holomorphic functions of one matrix quaternion variable.

Details

Arab Journal of Mathematical Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1319-5166

Open Access
Article
Publication date: 8 April 2024

Oussama-Ali Dabaj, Ronan Corin, Jean-Philippe Lecointe, Cristian Demian and Jonathan Blaszkowski

Abstract

Purpose

This paper aims to investigate the impact of combining grain-oriented electrical steel (GOES) grades on specific iron losses and the flux density distribution within a single-phase magnetic core.

Design/methodology/approach

This paper presents the results of finite-element method (FEM) simulations investigating the impact of mixing two different GOES grades on losses of a single-phase magnetic core. The authors used different models: a 3D model with a highly detailed geometry including both saturation and anisotropy, as well as a simplified 2D model to save computation time. The behavior of the flux distribution in the mixed magnetic core is analyzed. Finally, the results from the numerical simulations are compared with experimental results.
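The FEM models themselves are too involved to reproduce here, but a back-of-envelope baseline helps frame the finding: if the two grades simply shared the flux uniformly, the mixed-core specific loss would be the mass-weighted sum of per-grade Steinmetz losses. The coefficients below are placeholder values, not measured data, and the paper's point is precisely that the real dependence deviates from this linear picture.

```python
# Naive mass-weighted estimate of mixed-core specific losses (W/kg); the Steinmetz
# coefficients k, alpha, beta are illustrative placeholders, not measured values.
def steinmetz_loss(k, alpha, beta, f_hz, b_peak_t):
    """Classical Steinmetz specific-loss estimate P = k * f^alpha * B^beta."""
    return k * f_hz**alpha * b_peak_t**beta

GRADES = {
    "conventional":     dict(k=0.012, alpha=1.6, beta=2.0),
    "high_performance": dict(k=0.008, alpha=1.5, beta=1.9),
}

def mixed_core_loss(mass_fraction_hp, f_hz=50.0, b_peak_t=1.7):
    """Linear mixing rule: assumes both grades see the same flux density, which
    the 2D/3D FEM results indicate is not what actually happens."""
    p_conv = steinmetz_loss(f_hz=f_hz, b_peak_t=b_peak_t, **GRADES["conventional"])
    p_hp = steinmetz_loss(f_hz=f_hz, b_peak_t=b_peak_t, **GRADES["high_performance"])
    return (1 - mass_fraction_hp) * p_conv + mass_fraction_hp * p_hp

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{frac:.2f} high-performance fraction -> {mixed_core_loss(frac):.3f} W/kg (linear estimate)")
```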

Findings

The specific iron losses of a mixed magnetic core exhibit a nonlinear decrease with respect to the proportion of the GOES grade with the lowest losses. Analyzing the magnetic core behavior using 2D and 3D FEM shows that the rolling direction of the GOES grades plays a critical role in the nonlinear variation of the specific losses.

Originality/value

The novelty of this research lies in achieving an optimum trade-off between manufacturing cost and core efficiency by combining conventional and high-performance GOES grades in a single-phase magnetic core.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Benchmark Data Set (BSDS) and Common Objects in Context (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
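For readers unfamiliar with the thresholding step, the following is a minimal single-threaded sketch of minimum cross-entropy thresholding (MCET) on a gray-level histogram; the full PPSM additionally fits Gaussian, lognormal and gamma models and parallelizes the computation, which is not reproduced here.

```python
import numpy as np

def mcet_threshold(image_u8):
    """Exhaustive minimum cross-entropy threshold search on an 8-bit image.
    Returns the gray level t that minimizes the Li cross-entropy criterion."""
    hist, _ = np.histogram(image_u8, bins=256, range=(0, 256))
    hist = hist.astype(float)
    levels = np.arange(256, dtype=float) + 1.0  # shift by 1 to avoid log(0)
    best_t, best_cost = 1, np.inf
    for t in range(1, 256):
        low_w, high_w = hist[:t], hist[t:]
        if low_w.sum() == 0 or high_w.sum() == 0:
            continue
        low_m, high_m = (low_w * levels[:t]).sum(), (high_w * levels[t:]).sum()
        mu1, mu2 = low_m / low_w.sum(), high_m / high_w.sum()
        # Equivalent form of the cross-entropy criterion (constant term dropped).
        cost = -(low_m * np.log(mu1) + high_m * np.log(mu2))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Bimodal synthetic "image" just to exercise the function.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 20, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print("estimated threshold:", mcet_threshold(img))
```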

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort in MCET computing. The proposed model might be used as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 13 March 2024

Keanu Telles

Abstract

Purpose

The paper provides a detailed historical account of Douglass C. North's early intellectual contributions and analytical developments in pursuing a Grand Theory for why some countries are rich and others poor.

Design/methodology/approach

The author approaches the discussion using a theoretical and historical reconstruction based on published and unpublished materials.

Findings

The systematic, continuous and profound attempt to answer the Smithian social coordination problem shaped North's journey from being a young serious Marxist to becoming one of the founders of New Institutional Economics. In the process, he was converted in the early 1950s into a rigid neoclassical economist, being one of the leaders in promoting New Economic History. The success of the cliometric revolution exposed the frailties of the movement itself, namely, the limitations of neoclassical economic theory to explain economic growth and social change. Incorporating transaction costs, the institutional framework in which property rights and contracts are measured, defined and enforced assumes a prominent role in explaining economic performance.

Originality/value

In the early 1970s, North adopted a naive theory of institutions and property rights still grounded in neoclassical assumptions. Institutional and organizational analysis was modeled as a social maximizing efficient equilibrium outcome. However, the increasing tension between the neoclassical theoretical apparatus and its failure to account for contrasting political and institutional structures, diverging economic paths and social change propelled the modification of its assumptions and progressive conceptual innovation. In the late 1970s and early 1980s, North abandoned the efficiency view and gradually became more critical of the objective rationality postulate. In this intellectual movement, North's avant-garde research program contributed significantly to the creation of New Institutional Economics.

Details

EconomiA, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1517-7580

Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 April 2021

Arunit Maity, P. Prakasam and Sarthak Bhargava

Abstract

Purpose

Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals has become increasingly significant.

Design/methodology/approach

A novel machine learning-based approach to detect DTMF tones affected by noise, frequency and time variations is proposed, employing the k-nearest neighbour (KNN) algorithm. The features required for training the proposed KNN classifier are extracted using Goertzel's algorithm, which estimates the absolute discrete Fourier transform (DFT) coefficient values for the fundamental DTMF frequencies, with or without their second harmonic frequencies. The proposed KNN classifier model is configured in four different ways, which differ in being trained with or without augmented data, as well as with or without the inclusion of second harmonic frequency DFT coefficient values as features.
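A compact sketch of the feature-extraction and classification idea described above: Goertzel's algorithm evaluates the absolute DFT coefficients only at the eight DTMF frequencies (optionally also at their second harmonics), and those magnitudes feed a KNN classifier. The sampling rate, frame length and classifier settings are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 8000  # sampling rate (Hz), illustrative
DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]

def goertzel_magnitude(samples, freq, fs=FS):
    """Absolute DFT coefficient at `freq` via the Goertzel recursion."""
    n = len(samples)
    k = round(n * freq / fs)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return np.sqrt(s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2)

def dtmf_features(samples, include_second_harmonics=True):
    freqs = DTMF_FREQS + ([2 * f for f in DTMF_FREQS] if include_second_harmonics else [])
    return np.array([goertzel_magnitude(samples, f) for f in freqs])

def synth_tone(low, high, n=205, fs=FS, noise=0.1, rng=None):
    """Noisy synthetic DTMF frame made of the two key frequencies."""
    rng = rng or np.random.default_rng()
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * low * t) + np.sin(2 * np.pi * high * t) + noise * rng.standard_normal(n)

# Tiny illustrative run: classify key "1" (697 + 1209 Hz) vs key "9" (852 + 1477 Hz).
rng = np.random.default_rng(1)
X = [dtmf_features(synth_tone(697, 1209, rng=rng)) for _ in range(40)] + \
    [dtmf_features(synth_tone(852, 1477, rng=rng)) for _ in range(40)]
y = ["1"] * 40 + ["9"] * 40
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([dtmf_features(synth_tone(697, 1209, rng=rng))]))  # expect ['1']
```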

Findings

It is found that the model which is trained using the augmented data set and additionally includes the absolute DFT values of the second harmonic frequency values for the eight fundamental DTMF frequencies as the features, achieved the best performance with a macro classification F1 score of 0.980835, a five-fold stratified cross-validation accuracy of 98.47% and test data set detection accuracy of 98.1053%.

Originality/value

The generated DTMF signal has been classified and detected using the proposed KNN classifier which utilizes the DFT coefficient along with second harmonic frequencies for better classification. Additionally, the proposed KNN classifier has been compared with existing models to ascertain its superiority and proclaim its state-of-the-art performance.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 25 July 2022

Fung Yuen Chin, Kong Hoong Lem and Khye Mun Wong

Abstract

Purpose

The number of features in handwritten digit data is often very large due to the different aspects of personal handwriting, leading to high-dimensional data. Therefore, the employment of a feature selection algorithm becomes crucial for successful classification modeling, because the inclusion of irrelevant or redundant features can mislead the modeling algorithms, resulting in overfitting and a decrease in efficiency.

Design/methodology/approach

The minimum redundancy and maximum relevance (mRMR) and the recursive feature elimination (RFE) are two frequently used feature selection algorithms. While mRMR is capable of identifying a subset of features that are highly relevant to the targeted classification variable, it still tends to capture redundant features. On the other hand, although RFE can effectively eliminate less important features and exclude redundant ones, the features it selects are not ranked by importance.
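A hedged sketch of this kind of hybrid: a simple greedy mRMR-style filter (relevance from mutual information, redundancy approximated by absolute feature correlation) prunes the feature set, and scikit-learn's RFE with a linear SVM then eliminates what remains. The dataset, feature counts and scoring details are illustrative and differ from the study's setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.svm import LinearSVC

def greedy_mrmr(X, y, n_select):
    """Greedy mRMR-style filter: maximize relevance (mutual information with y)
    minus mean redundancy (approximated by absolute feature correlation)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Binary task between two digit classes, loosely echoing the "4" vs "9" setting;
# sklearn's 8x8 digits stand in for the paper's multiple-features handwritten digits.
X, y = load_digits(return_X_y=True)
mask = np.isin(y, (4, 9))
X, y = X[mask], y[mask]
X = X[:, X.std(axis=0) > 0]          # drop constant pixels so correlations are defined

mrmr_idx = greedy_mrmr(X, y, n_select=30)                               # filter stage (mRMR)
rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=10)  # wrapper stage (SVM-RFE)
rfe.fit(X[:, mrmr_idx], y)
print("selected feature columns:", [mrmr_idx[i] for i in np.where(rfe.support_)[0]])
```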

Findings

The hybrid method was exemplified in a binary classification between digits "4" and "9" and between digits "6" and "8" from a multiple features dataset. The results showed that the hybrid mRMR + support vector machine recursive feature elimination (SVMRFE) approach is better than both the standalone support vector machine (SVM) and mRMR.

Originality/value

In view of the respective strengths and deficiencies of mRMR and RFE, this study combined the two methods and used an SVM as the underlying classifier, anticipating that mRMR would make an excellent complement to SVMRFE.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 14 March 2022

Luke McCully, Hung Cao, Monica Wachowicz, Stephanie Champion and Patricia A.H. Williams

Abstract

Purpose

A new research domain known as the Quantified Self has recently emerged and is described as gaining self-knowledge through using wearable technology to acquire information on self-monitoring activities and physical health-related problems. However, very little is known about the impact of time window models on discovering self-quantified patterns that can yield new self-knowledge insights. This paper aims to discover self-quantified patterns using multi-time window models.

Design/methodology/approach

This paper proposes a multi-time window analytical workflow developed to support the streaming k-means clustering algorithm, based on an online/offline approach that combines both sliding and damped time window models. An intervention experiment with 15 participants is used to gather Fitbit data logs and implement the proposed analytical workflow.
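A simplified sketch of the damped-window idea behind such a workflow: cluster weights are multiplied by a decay factor on every micro-batch, so older observations gradually lose influence on the centres. The decay rate, features and batch sizes are assumptions for illustration, not the study's parameters.

```python
import numpy as np

class DampedStreamingKMeans:
    """Toy streaming k-means with a damped (fading) time window: every update
    multiplies the accumulated cluster weights by `decay`, so old observations
    gradually lose influence. Illustrative only."""
    def __init__(self, k, dim, decay=0.9, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.centers = self.rng.standard_normal((k, dim))
        self.weights = np.ones(k)
        self.decay = decay

    def partial_fit(self, batch):
        self.weights *= self.decay                     # damp the contribution of the past
        for x in np.atleast_2d(batch):
            j = int(np.argmin(((self.centers - x) ** 2).sum(axis=1)))
            self.weights[j] += 1.0
            self.centers[j] += (x - self.centers[j]) / self.weights[j]
        return self

# Simulated daily micro-batches of (step count, resting heart rate); hypothetical Fitbit-like features.
rng = np.random.default_rng(42)
model = DampedStreamingKMeans(k=2, dim=2, rng=rng)
for day in range(30):
    active = rng.normal([12000, 60], [1500, 3], size=(5, 2))
    sedentary = rng.normal([3000, 75], [800, 4], size=(5, 2))
    model.partial_fit(np.vstack([active, sedentary]))
print(np.round(model.centers))
```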

Findings

The clustering results reveal the impact a time window model has on exploring the evolution of micro-clusters and the labelling of macro-clusters to accurately explain regular and irregular individual physical behaviour.

Originality/value

The preliminary results demonstrate the impact that the choice of time window model has on finding meaningful patterns.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 21 March 2024

Warisa Thangjai and Sa-Aat Niwitpong

Abstract

Purpose

Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.

Design/methodology/approach

The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
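As a small illustration of the GCI idea for a single SNR θ = μ/σ under normality, the sketch below simulates generalized pivotal quantities and takes their percentiles; the sample, simulation size and confidence level are illustrative, and the constructions for differences of SNRs and the common SNR are not reproduced here.

```python
import numpy as np

def gci_snr(sample, conf=0.95, n_sim=100_000, rng=None):
    """Generalized confidence interval for the signal-to-noise ratio mu/sigma
    of a normal sample, via Monte Carlo on generalized pivotal quantities."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(sample, dtype=float)
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)

    u = rng.chisquare(n - 1, size=n_sim)          # pivots for sigma^2
    z = rng.standard_normal(n_sim)                # pivots for mu
    r_sigma = np.sqrt((n - 1) * s2 / u)
    r_mu = xbar - z * r_sigma / np.sqrt(n)
    r_theta = r_mu / r_sigma                      # generalized pivot for SNR = mu / sigma

    alpha = 1.0 - conf
    return np.quantile(r_theta, [alpha / 2, 1 - alpha / 2])

# Illustrative use on simulated data with true SNR = 1.5 / 1.0.
rng = np.random.default_rng(7)
data = rng.normal(loc=1.5, scale=1.0, size=50)
print("95% GCI for SNR:", np.round(gci_snr(data, rng=rng), 3))
```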

Findings

The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.

Originality/value

This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821
