Search results

1 – 10 of 87
Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing application is a persistent problem. This paper develops a novel segmentation model intended to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as the PH2 dermoscopic image data set, the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in Context (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across benchmarking data sets of different types and fields.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
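
The criterion at the heart of the PPSM is minimum cross-entropy thresholding (MCET), named in the Findings below. As a rough illustration only, here is a minimal sketch of classic histogram-based MCET in Python; the distribution-specific estimators (Gaussian, lognormal, gamma) and the parallel boosting step are not reproduced, and the function name and toy image are illustrative assumptions.

```python
# Minimal sketch of minimum cross-entropy thresholding (MCET) on a grayscale
# histogram; not the authors' PPSM implementation.
import numpy as np

def mcet_threshold(image):
    """Return the gray level t that minimizes the cross-entropy criterion."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    levels = np.arange(1, 257)                 # shift by 1 to avoid log(0)
    h = hist.astype(float)
    best_t, best_eta = 1, np.inf
    for t in range(2, 256):
        lo_h, lo_g = h[:t], levels[:t]         # background class
        hi_h, hi_g = h[t:], levels[t:]         # foreground class
        if lo_h.sum() == 0 or hi_h.sum() == 0:
            continue
        mu1 = (lo_h * lo_g).sum() / lo_h.sum() # class means
        mu2 = (hi_h * hi_g).sum() / hi_h.sum()
        eta = (lo_h * lo_g * np.log(lo_g / mu1)).sum() + \
              (hi_h * hi_g * np.log(hi_g / mu2)).sum()
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # toy image
print(mcet_threshold(img))
```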

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions with MCET. Moreover, to provide accurate, high-performance thresholding at minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computation. The proposed model might be used as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 5 May 2023

Peter Wanke, Jorge Junio Moreira Antunes, Antônio L. L. Filgueira, Flavia Michelotto, Isadora G. E. Tardin and Yong Tan

Abstract

Purpose

This paper aims to investigate the performance of OECD countries' long-term productivity during the period of 1975–2018.

Design/methodology/approach

This study employed different approaches to evaluate how efficiency scores vary with changes in inputs and outputs: Data Envelopment Analysis (under CRS, VRS and FDH assumptions), TOPSIS and a TOPSIS aggregation of these scores.
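
For readers unfamiliar with the TOPSIS step, a minimal sketch follows. It assumes benefit-type criteria and equal (or user-supplied) weights, and it is not the authors' implementation; the DEA stages (CRS, VRS, FDH) are not reproduced, and the toy matrix is invented.

```python
# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(X, weights=None):
    """X: alternatives x criteria (benefit direction). Returns closeness scores."""
    X = np.asarray(X, dtype=float)
    w = np.ones(X.shape[1]) / X.shape[1] if weights is None else np.asarray(weights)
    V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient in [0, 1]

# toy example: 4 countries x 3 productivity-related outputs
scores = topsis([[1.0, 0.8, 0.9], [0.7, 1.0, 0.6], [0.9, 0.9, 1.0], [0.5, 0.6, 0.7]])
print(np.argsort(-scores))  # indices sorted from best to worst
```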

Findings

The findings suggest that, during the period of this study, higher freedom of religion and presidential democratic regimes are positively associated with higher productivity.

Originality/value

To the best of the authors’ knowledge, this is the first study that uses efficiency models to assess the productivity levels of OECD countries based on several contextual variables that can potentially affect it.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 21 June 2023

Brad C. Meyer, Daniel Bumblauskas, Richard Keegan and Dali Zhang

Abstract

Purpose

This research fills a gap in process science by defining and explaining entropy and the increase of entropy in processes.

Design/methodology/approach

This is a theoretical treatment that begins with a conceptual understanding of entropy in thermodynamics and information theory and extends it to the study of degradation and improvement in a transformation process.
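
As a point of reference for the information-theoretic starting point, the following toy sketch (not from the paper) computes Shannon entropy over a distribution of process states; the "aligned" and "drifted" distributions are invented for illustration, in the spirit of the authors' claim that lack of alignment is entropy.

```python
# Shannon entropy H = -sum(p * log2 p): a process concentrated on one dominant
# path has low entropy; one whose work spreads across many paths has high entropy.
import math

def shannon_entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

aligned = [0.97, 0.01, 0.01, 0.01]   # work flows through one dominant path
drifted = [0.40, 0.30, 0.20, 0.10]   # variation has crept into the process
print(f"aligned: {shannon_entropy(aligned):.3f} bits")
print(f"drifted: {shannon_entropy(drifted):.3f} bits")
```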

Findings

A transformation process with three inputs (demand volume, throughput and product design) utilizes a system composed of processors, stores, configuration, human actors, stored data and controllers to provide a product. Elements of the system are aligned with the inputs and with each other for the purpose of raising the standard of living. Lack of alignment is entropy. The primary causes of increased entropy are changes in inputs and the disordering of system components. Secondary causes result from changes made to cope with the primary causes. Improvement and innovation reduce entropy by providing better alignments and new ways of aligning resources.

Originality/value

This is the first detailed theoretical treatment of entropy in a process science context.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 27 February 2024

Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo

Abstract

Purpose

Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, and thus cannot meet the needs of knowledge users. To address these shortcomings in the existing research, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.

Design/methodology/approach

An attribute simplification algorithm based on a forward search strategy in the neighborhood decision information system is implemented to realize vertical dimensionality reduction of the case base, and a fuzzy C-means (FCM) clustering algorithm based on a simulated annealing genetic algorithm (SAGA) is implemented to compress the case base horizontally into multiple decision classes. Then, the subspace K-nearest neighbors (KNN) algorithm is used to induce decision rules for the set of adapted cases, completing the optimization of the adaptation model.
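
A minimal sketch of the plain FCM update loop may help situate the horizontal compression step. The SAGA wrapper the paper uses to escape poor initializations, the neighborhood-rough-set reduction and the subspace KNN rule induction are not reproduced here, and the toy data are illustrative.

```python
# Minimal fuzzy C-means (FCM) sketch: alternate center and membership updates.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Return cluster centers and the fuzzy membership matrix U (cases x clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per case
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))              # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centers, U = fcm(X)
print(centers.round(2))
```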

Findings

The findings suggest that the rapid growth of data, information and tacit knowledge in the field of practice has led to low efficiency and low utilization of knowledge dissemination, and that this algorithm can effectively alleviate users' “knowledge disorientation” in the era of the knowledge economy.

Practical implications

This study provides a model with case knowledge that meets users’ needs, thereby effectively improving the application of the tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.

Social implications

The adaptation model can serve as a stable and efficient prediction model to predict the effects of logistics and e-commerce enterprises' plans.

Originality/value

This study designs a multi-decision-class case-adaptation optimization approach based on a forward attribute selection strategy with neighborhood rough sets (FASS-NRS) and a simulated annealing genetic algorithm with fuzzy C-means (SAGA-FCM) for exogenous cases containing tacit knowledge. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithm models established in this study open theoretical directions for multi-decision-class case-adaptation optimization of tacit knowledge.

Details

Journal of Advances in Management Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0972-7981

Article
Publication date: 25 October 2022

Xu Wang

Abstract

Purpose

Against the background of open science, this paper integrates altmetrics data and combines multiple evaluation methods to analyze and evaluate the characteristics of discourse-leading indicators for academic journals, which is of great significance for enriching and improving the evaluation theory and indicator system of academic journals.

Design/methodology/approach

This paper obtained 795,631 citations and 10.3 million altmetrics indicator data points for 126,424 papers published in 151 academic journals in the medicine, general and internal category. First, descriptive statistical analysis and distribution rules of the evaluation indicators are examined at the macro level, and the distribution characteristics of the evaluation indicators under different international collaboration conditions are analyzed at the micro level. Second, according to the characteristics and connotation of the evaluation indicators, the evaluation indicator system is constructed. Third, correlation analysis, factor analysis, the entropy weight method and the TOPSIS method are adopted to evaluate and analyze discourse leading in these journals by integrating altmetrics. The paper also verifies the reliability of the evaluation results.
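
As one example of the listed methods, here is a minimal sketch of the entropy weight method under its usual formulation: indicators whose values vary more across journals carry more information and receive larger weights. The matrix below is invented, and the factor analysis and TOPSIS stages are not reproduced.

```python
# Minimal entropy weight method sketch.
import numpy as np

def entropy_weights(X):
    """X: journals x indicators, nonnegative. Returns one weight per indicator."""
    P = X / X.sum(axis=0)                    # each journal's share per indicator
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)  # entropy of each indicator, in [0, 1]
    d = 1 - e                                # degree of diversification
    return d / d.sum()

# toy matrix: 4 journals x 3 indicators (e.g. citations, mentions, readers)
X = np.array([[120, 3.1, 45], [80, 2.9, 50], [200, 3.0, 20], [50, 3.2, 90]], float)
print(entropy_weights(X).round(3))
```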

Findings

Six features of discourse leading integrated with altmetrics indicators are obtained. In the era of open science, online academic exchange is becoming more and more popular, and evaluation activities based on altmetrics have fine-grained and procedural advantages. It is feasible and necessary to integrate altmetrics indicators and combine the advantages of multiple methods to evaluate the discourse leading of academic journals within a diversified academic ecosystem.

Originality/value

This paper uses descriptive statistical analysis to analyze the distribution characteristics and distribution rules of discourse-leading indicators of academic journals and to explore the availability of altmetrics indicators and the effectiveness of constructing an evaluation system. Then, combining the advantages of multiple evaluation methods, the author integrates altmetrics indicators to comprehensively evaluate the discourse leading of academic journals and verifies the reliability of the evaluation results. This paper aims to provide references for enriching and improving the evaluation theory and indicator system of academic journals.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 8 June 2023

Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Shuwei Zhang and Longfei He

Abstract

Purpose

This study aims to deal with the case adaptation problem associated with continuous data by providing a non-zero base solution for knowledge users in solving a given situation.

Design/methodology/approach

First, the neighbourhood transformation of the initial case base and the view similarity between the problem and the existing cases are examined; the cases whose view similarity meets or exceeds a predefined threshold are used as the adaptation cases. Second, on the decision rule set of the decision space, a deterministic decision model of the distance between the problem and the set of lower-approximation objects under each choice class of the adaptation set is applied to extract the decision rule set of the case condition space. Finally, the solution elements of the problem are reconstructed using the rule set and the values of the problem's conditional elements.
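
A minimal sketch of the first (retrieval) step follows, under simplifying assumptions: a generic distance-based similarity stands in for the paper's game-theoretic view-similarity calculation, and the threshold and toy cases are illustrative.

```python
# Minimal retrieval sketch: keep cases at or above a similarity threshold
# as the adaptation set.
import numpy as np

def retrieve(problem, case_base, threshold=0.8):
    """problem: 1-D condition vector; case_base: cases x conditions (normalized)."""
    d = np.linalg.norm(case_base - problem, axis=1)
    sim = 1.0 / (1.0 + d)                 # map distance to a (0, 1] similarity
    return np.flatnonzero(sim >= threshold)

cases = np.array([[0.2, 0.7, 0.1], [0.25, 0.65, 0.15], [0.9, 0.1, 0.4]])
print(retrieve(np.array([0.22, 0.68, 0.12]), cases, threshold=0.9))  # -> [0 1]
```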

Findings

The findings suggest that the classic knowledge-matching approach presents the user with the most similar knowledge/cases but yields relatively low satisfaction. They also reveal that non-zero adaptation based on human–computer interaction suffers from strong subjectivity and low adaptation efficiency.

Research limitations/implications

In this study, the multi-case inductive adaptation of the problem to be solved is carried out by analyzing and extracting the law of the effect of the centralized conditions on adaptation decision-making. The adaptation process is more rigorous, with less subjective influence, better reliability and higher application value. The approach described in this research can directly change the original data set, which is more beneficial to enhancing problem-solving accuracy while broadening the application area of the adaptation mechanism.

Practical implications

The examination of the calculation cases confirms the innovation of this study in comparison to the traditional method of matching cases with tacit knowledge extrapolation.

Social implications

The algorithm models established in this study develop theoretical directions for a multi-case induction adaptation study of tacit knowledge.

Originality/value

This study designs a multi-case induction adaptation scheme by combining NRS and CBR for exogenous cases containing tacit knowledge. A game-theoretic combinatorial assignment method is applied to calculate the case view and the view similarity based on threshold screening.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 February 2024

Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee

Abstract

Purpose

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact resulting from the interconnectedness of these algorithms, which can provide a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and analyze the academic influence of algorithms in the group based on the features of the network.

Design/methodology/approach

We use deep learning models to extract algorithm entities from articles and construct the whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of algorithm networks and then use various centrality metrics to obtain the score and ranking of group influence for each algorithm in the whole domain and each year. Finally, we analyze the influence evolution of different representative algorithms.
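
A minimal sketch of the network-construction and centrality stage, assuming algorithm entities have already been extracted per paper (the deep learning extraction step is not reproduced); the paper list is invented, and the networkx metrics shown are of the kind used to score group influence, not necessarily the authors' exact set.

```python
# Build an algorithm co-occurrence graph and compute centrality metrics.
from itertools import combinations
import networkx as nx

# toy input: algorithm entities mentioned in each paper (extraction not shown)
papers = [{"LSTM", "CRF", "word2vec"}, {"LSTM", "Transformer"},
          {"Transformer", "BERT"}, {"LSTM", "BERT", "CRF"}]

G = nx.Graph()
for mentions in papers:
    for a, b in combinations(sorted(mentions), 2):   # one edge per co-mentioned pair
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# centrality metrics of the sort used to rank group influence
print(nx.degree_centrality(G))                       # popularity
print(nx.betweenness_centrality(G))                  # control over paths
print(nx.eigenvector_centrality(G, max_iter=1000))   # centrality of neighbors
```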

Findings

The results indicate that algorithm networks also have the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. For different algorithms, algorithms that are classic, high-performing and appear at the junctions of different eras can possess high popularity, control, central position and balanced influence in the network. As an algorithm gradually diminishes its sway within the group, it typically loses its core position first, followed by a dwindling association with other algorithms.

Originality/value

To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 2 October 2023

Deergha Sharma and Pawan Kumar

Abstract

Purpose

Growing concern over sustainability adoption has presented an array of challenges to businesses. While vital to an economy's success, banking is not immune to the societal, environmental and economic consequences of business practices. The study examines the sustainable performance of banking institutions on a suggested multidimensional framework comprising economic, environmental, social, governance and financial dimensions and 52 sustainability indicators. The study benchmarks the significant performance indicators of leading banks that are indispensable to sustainable banking performance. The findings address research questions concerning the extent of sustainable banking performance, the ranking of sustainability dimensions and indicators, and the standardization of sustainability adoption metrics.

Design/methodology/approach

To determine the responsiveness of the banking industry to sustainability dimensions, content analysis was conducted using NVivo software for the year 2021–2022. Furthermore, a hybrid multicriteria decision-making (MCDM) approach is used, integrating the entropy method, the technique for order preference by similarity to ideal solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), to assign relative weights to performance indicators and prioritize banks based on their sustainable performance. Sensitivity analysis is used to ensure the robustness of the results.
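
Since TOPSIS and the entropy weighting are sketched earlier in these results, here is a minimal VIKOR sketch under common assumptions (benefit-direction indicators, compromise weight v = 0.5); the decision matrix and weights below are invented, not the study's data.

```python
# Minimal VIKOR sketch: compromise ranking of banks; lower Q is better.
import numpy as np

def vikor(X, w, v=0.5):
    """X: banks x indicators (benefit direction); w: indicator weights."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    norm = (f_best - X) / (f_best - f_worst)   # normalized regret per indicator
    weighted = w * norm
    S, R = weighted.sum(axis=1), weighted.max(axis=1)  # group utility, max regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

Q = vikor([[0.9, 0.7, 0.8], [0.6, 0.9, 0.5], [0.8, 0.6, 0.9]], w=[0.4, 0.3, 0.3])
print(np.argsort(Q))   # banks ranked from best compromise to worst
```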

Findings

In the context of the Indian banking industry, the pattern of sustainability reporting is inconsistent and concentrated on addressing environmental and social concerns. The results of the entropy methodology prioritized the “Environmental” dimension over the other selected dimensions, while the “Financial” dimension was assigned the least priority in the ranking order. The significant sustainable performance indicators delineated in this study should be used as standards to ensure the accountability and credibility of the sustainable banking industry. Additionally, the research findings provide valuable inputs to policymakers and regulators to assure a better contribution of the banking sector to meeting sustainability goals.

Originality/value

Considering the paucity of studies on sustainable banking performance, this study makes two significant contributions to the literature. First, the suggested multidimensional disclosure model integrating financial and nonfinancial indicators would facilitate banking institutions in addressing the five aspects of sustainability. As one of the first studies in the context of the Indian banking industry, the findings would pave the way for better diffusion of sustainability practices. Second, the inclusion of MCDM techniques prioritizes the significance of sustainability indicators and benchmarks the performance of leading banks to achieve better profits and more substantial growth.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 21 February 2024

Nehal Elshaboury, Tarek Zayed and Eslam Mohammed Abdelkader

Abstract

Purpose

Water pipes degrade over time owing to a variety of pipe-related, soil-related, operational and environmental factors. Hence, municipalities need to implement effective maintenance and rehabilitation strategies for water pipes based on reliable deterioration models and cost-effective inspection programs. In light of the foregoing, the paramount objective of this research study is to develop condition assessment and deterioration prediction models for saltwater pipes in Hong Kong.

Design/methodology/approach

As a prerequisite to the development of condition assessment models, the spherical fuzzy analytic hierarchy process (SFAHP) is harnessed to analyze the relative importance weights of deterioration factors. Afterward, the relative importance weights of deterioration factors, coupled with their effective values, are leveraged using the measurement of alternatives and ranking according to the compromise solution (MARCOS) algorithm to analyze the performance condition of water pipes. A condition rating system is then designed based on the generalized entropy-based probabilistic fuzzy C-means (GEPFCM) algorithm. A set of fourth-order multiple regression functions is constructed to capture the degradation trends in the condition of pipelines over time, covering their disparate characteristics.
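
A minimal sketch of the final stage, fitting a fourth-order regression of condition against pipe age; the condition scores below are synthetic, and the SFAHP, MARCOS and GEPFCM stages that produce such scores in the study are not reproduced.

```python
# Fourth-order polynomial fit of a deterioration curve (condition vs. age).
import numpy as np

age = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], float)            # years in service
condition = np.array([5.0, 4.8, 4.5, 4.0, 3.6, 3.0, 2.5, 1.8, 1.2])  # rating, 5 = best

coeffs = np.polyfit(age, condition, deg=4)   # fourth-order least-squares fit
model = np.poly1d(coeffs)

pred = model(age)
rmse = np.sqrt(np.mean((condition - pred) ** 2))
print(f"RMSE on the toy data: {rmse:.3f}")
print(f"predicted condition at 45 years: {model(45.0):.2f}")
```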

Findings

Analytical results demonstrated that the most influential deterioration factors include age, material, traffic and soil corrosivity. In addition, the developed deterioration models achieved a correlation coefficient, mean absolute error and root mean squared error of 0.8, 1.33 and 1.39, respectively.

Originality/value

It can be argued that the generated deterioration models can assist municipalities in formulating accurate and cost-effective maintenance, repair and rehabilitation programs.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 4 January 2024

Zicheng Zhang

Abstract

Purpose

Advanced big data analysis and machine learning methods are used concurrently to unleash the value of the data generated by the government hotline and to help devise intelligent applications, including automated process management, standards construction and more accurate order dispatching, so as to build high-quality government service platforms as data-driven methods become more widely adopted.

Design/methodology/approach

In this study, machine learning tools are implemented and compared to optimize the classification and dispatching of work orders, taking into account the influence of the record specifications of work-order texts generated by the government hotline. Exploratory studies on the hotline work-order text are performed, including linguistic analysis for text feature processing, new word discovery, text clustering and text classification.

Findings

The complexity of the work-order content is reduced by applying more standardized writing specifications combined with numerical text-grammar features. As a result, the prediction accuracy for order-dispatch success reaches 89.6 per cent with the LSTM model.
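
A minimal sketch of an LSTM text classifier of the kind these findings refer to; the vocabulary size, layer widths and binary dispatch-success target are assumptions, and the hotline work-order data are stood in for by random tokens.

```python
# Minimal LSTM classifier for tokenized work-order texts.
import numpy as np
from tensorflow import keras

vocab_size, max_len = 5000, 120        # assumed tokenizer settings

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64),       # token embeddings
    keras.layers.LSTM(64),                        # sequence encoder for the order text
    keras.layers.Dense(1, activation="sigmoid"),  # P(dispatch succeeds)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy stand-in for tokenized work-order texts and dispatch outcomes
X = np.random.randint(1, vocab_size, size=(256, max_len))
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:2], verbose=0))
```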

Originality/value

The proposed method can help improve the current dispatching processes run by the government hotline, better guide staff to standardize the writing format of work orders, improve the accuracy of order dispatching and provide innovative support to the current mechanism.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288
