Search results

1 – 10 of 174
Article
Publication date: 29 November 2022

Mina Safizadeh, Mohammad Javad Maghsoodi Tilaki, Massoomeh Hedayati Marzbali and Aldrin Abdullah

The emerging concept of the smart city aims at sustainable urban development. One of the requirements for a smart city is to address accessibility inequalities. This study…

Abstract

Purpose

The emerging concept of the smart city aims at sustainable urban development, and one of its requirements is to address accessibility inequalities. This study investigates accessibility issues in urban transformation before and after the combination of different street networks, taking Penang, Malaysia, as a case study, to reveal greater insight into mobility and accessibility inequalities for future smart city planning.

Design/methodology/approach

Using DepthmapX software, two main quantitative space syntax methodologies, namely spatial integration accessibility (SIA) and angular segment analysis by metric distance (ASDMA), were employed to analyse the level of accessibility of the main streets of the George Town site before and after combination with contemporary networks. Integration, choice and entropy values were calculated for the analysis.

Findings

Results revealed the implications of combining old irregular gridiron structures with the existing planned grid structures. George Town appears to have gained a higher capacity for pedestrian accessibility, whereas vehicle accessibility has declined. The findings further suggest that combining irregular and grid structures is essential for urban growth in similar historical contexts to improve accessibility and address mobility inequalities.

Originality/value

The study concludes by highlighting the importance of the analysis of street structure transformation to predict consequences and promote the potential to reduce current inequalities in vehicle accessibility.

Details

Open House International, vol. 48 no. 3
Type: Research Article
ISSN: 0168-2601


Article
Publication date: 29 November 2022

H.D. Arora and Anjali Naithani

The purpose of this paper is to create a numerical technique to tackle the challenge of selecting software reliability growth models (SRGMs).

Abstract

Purpose

The purpose of this paper is to create a numerical technique to tackle the challenge of selecting software reliability growth models (SRGMs).

Design/methodology/approach

A real-time case study, with five SRGMs tested against a set of four selection indexes, was utilised to show the functionality of the TOPSIS approach. As a result of the current research, a rating of the different SRGMs is generated based on their comparative closeness.
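
As an illustrative sketch of this pipeline — not the authors' Pythagorean fuzzy formulation — the classical crisp version can be written as entropy-derived criterion weights feeding a TOPSIS closeness ranking. The decision matrix below is hypothetical: five models scored on four indexes, one of which is treated as cost-type.

```python
import numpy as np

def entropy_weights(X):
    # Normalize columns to probabilities and derive entropy-based weights.
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                      # degree of divergence per criterion
    return d / d.sum()

def topsis(X, w, benefit):
    # Vector-normalize, weight, then score by closeness to the ideal solution.
    R = X / np.sqrt((X ** 2).sum(axis=0))
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)   # relative closeness in [0, 1]

# Hypothetical scores of five SRGMs on four selection indexes.
X = np.array([[0.92, 0.10, 0.85, 0.70],
              [0.88, 0.15, 0.80, 0.75],
              [0.95, 0.08, 0.90, 0.60],
              [0.85, 0.20, 0.70, 0.80],
              [0.90, 0.12, 0.88, 0.65]])
benefit = np.array([True, False, True, True])  # second index is cost-type
w = entropy_weights(X)
closeness = topsis(X, w, benefit)
ranking = np.argsort(-closeness)   # best model first
```

Higher relative closeness means the model sits nearer the ideal and farther from the anti-ideal across all weighted indexes.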

Findings

An innovative approach has been developed for the current SRGM selection in a TOPSIS environment by blending the entropy technique with the distance-based approach.

Originality/value

In any multi-criteria decision-making process, ambiguity is a crucial issue. To deal with the uncertain environment of decision-making, various devices and methodologies have been explained. Pythagorean fuzzy sets (PFSs) are perhaps the most contemporary device for dealing with ambiguity. This article addresses novel tangent distance-entropy measures under PFSs. Additionally, numerical illustration is utilized to ascertain the strength and authenticity of the suggested measures.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 7
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 5 May 2023

Peter Wanke, Jorge Junio Moreira Antunes, Antônio L. L. Filgueira, Flavia Michelotto, Isadora G. E. Tardin and Yong Tan

This paper aims to investigate the performance of OECD countries' long-term productivity during the period of 1975–2018.

Abstract

Purpose

This paper aims to investigate the performance of OECD countries' long-term productivity during the period of 1975–2018.

Design/methodology/approach

This study employed different approaches to evaluate how efficiency scores vary with changes in inputs and outputs: Data Envelopment Analysis (under CRS, VRS and FDH assumptions), TOPSIS, and a second-stage TOPSIS applied to these scores.
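
Of these frontier models, FDH is the simplest to sketch: it drops the convexity assumption, so each unit is benchmarked only against observed dominating units and no linear-programming solver is needed. The sketch below uses hypothetical country data (two inputs, one output), not the study's OECD dataset.

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    # Input-oriented Free Disposal Hull (FDH) efficiency: for each unit,
    # find the smallest radial input contraction achievable by imitating
    # any observed unit that produces at least as much of every output.
    n = X.shape[0]
    scores = np.ones(n)
    for k in range(n):
        feasible = np.all(Y >= Y[k], axis=1)          # dominating candidates
        ratios = np.max(X[feasible] / X[k], axis=1)   # per-candidate contraction
        scores[k] = ratios.min()                      # 1.0 means FDH-efficient
    return scores

# Hypothetical inputs (labour, capital) and output (productivity index).
X = np.array([[2.0, 3.0], [4.0, 5.0], [3.0, 4.0]])
Y = np.array([[4.0], [4.0], [6.0]])
eff = fdh_input_efficiency(X, Y)
```

Here the second unit could reach the same output with 60% of its inputs by imitating the first, so its score is 0.6; the other two are FDH-efficient.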

Findings

The findings suggest that, during the period of this study, higher freedom of religion and presidential democracy regimes are positively associated with higher productivity.

Originality/value

To the best of the authors’ knowledge, this is the first study that uses efficiency models to assess the productivity levels of OECD countries based on several contextual variables that can potentially affect it.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 21 June 2023

Brad C. Meyer, Daniel Bumblauskas, Richard Keegan and Dali Zhang

This research fills a gap in process science by defining and explaining entropy and the increase of entropy in processes.

Abstract

Purpose

This research fills a gap in process science by defining and explaining entropy and the increase of entropy in processes.

Design/methodology/approach

This is a theoretical treatment that begins with a conceptual understanding of entropy in thermodynamics and information theory and extends it to the study of degradation and improvement in a transformation process.
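
The information-theoretic starting point can be made concrete with a minimal Shannon entropy computation over hypothetical process outcomes: a perfectly aligned process yields one predictable outcome (zero entropy), while a disordered one spreads over many.

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    # H = -sum(p * log2(p)); higher entropy = less predictable, less aligned.
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly aligned process yields one outcome; a disordered one, many.
aligned = ["on_spec"] * 8
disordered = ["on_spec", "rework", "scrap", "late"] * 2
```

Four equally likely outcomes give exactly 2 bits of entropy; a single certain outcome gives 0 — the quantitative sense in which disorder is "lack of alignment".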

Findings

A transformation process with three inputs (demand volume, throughput and product design) utilizes a system composed of processors, stores, configuration, human actors, stored data and controllers to provide a product. Elements of the system are aligned with the inputs and with each other for the purpose of raising the standard of living; lack of alignment is entropy. The primary causes of increased entropy are changes in inputs and the disordering of system components. Secondary causes result from changes made to cope with the primary causes. Improvement and innovation reduce entropy by providing better alignments and new ways of aligning resources.

Originality/value

This is the first detailed theoretical treatment of entropy in a process science context.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401


Article
Publication date: 27 February 2024

Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo

Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, and cannot meet the needs of knowledge users. To address the shortcomings of…

Abstract

Purpose

Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, and thus cannot meet the needs of knowledge users. To address these shortcomings of the existing research in the industry, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.

Design/methodology/approach

The attribute simplification algorithm based on the forward search strategy in the neighborhood decision information system is implemented to realize vertical dimensionality reduction of the case base, and the fuzzy C-means (FCM) clustering algorithm based on the simulated annealing genetic algorithm (SAGA) is implemented to compress the case base horizontally across multiple decision classes. Then, the subspace K-nearest neighbors (KNN) algorithm is used to induce decision rules for the set of adapted cases, completing the optimization of the adaptation model.
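
The final KNN decision step can be sketched minimally: after the upstream reduction and compression (omitted here), a query case is assigned the majority decision class of its nearest stored cases. The case base and class names below are hypothetical.

```python
import math
from collections import Counter

def knn_predict(cases, query, k=3):
    # cases: list of (feature_vector, decision_class) pairs.
    # Majority vote among the k nearest cases by Euclidean distance.
    by_dist = sorted(cases, key=lambda c: math.dist(c[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical reduced case base: two attributes, two decision classes.
case_base = [((0.1, 0.2), "plan_A"), ((0.2, 0.1), "plan_A"),
             ((0.9, 0.8), "plan_B"), ((0.8, 0.9), "plan_B")]
decision = knn_predict(case_base, (0.15, 0.15), k=3)
```

A query near the first two cases inherits their decision class by a 2-to-1 vote.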

Findings

The findings suggest that the rapid enrichment of data, information and tacit knowledge in the field of practice has led to low efficiency and low utilization of knowledge dissemination, and that this algorithm can effectively alleviate the problem of users falling into “knowledge disorientation” in the era of the knowledge economy.

Practical implications

This study provides a model with case knowledge that meets users’ needs, thereby effectively improving the application of the tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.

Social implications

The adaptation model can serve as a stable and efficient prediction model to predict the effects of logistics and e-commerce enterprises' plans.

Originality/value

This study designs a multi-decision-class case-adaptation optimization approach based on the forward attribute selection strategy-neighborhood rough sets (FASS-NRS) and the simulated annealing genetic algorithm-fuzzy C-means (SAGA-FCM) for exogenous cases containing tacit knowledge. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithm models established in this study develop theoretical directions for multi-decision-class case-adaptation optimization of tacit knowledge.

Details

Journal of Advances in Management Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0972-7981


Article
Publication date: 8 June 2023

Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Shuwei Zhang and Longfei He

This study aims to deal with the case adaptation problem associated with continuous data by providing a non-zero base solution for knowledge users in solving a given situation.

Abstract

Purpose

This study aims to deal with the case adaptation problem associated with continuous data by providing a non-zero base solution for knowledge users in solving a given situation.

Design/methodology/approach

Firstly, the neighbourhood transformation of the initial case base and the view similarity between the problem and the existing cases are examined. Multiple cases with view similarity at or above a predefined threshold are used as the adaptation cases. Secondly, on the decision rule set of the decision space, the deterministic decision model of the corresponding distance between the problem and the set of lower-approximation objects under each choice class of the adaptation set is applied to extract the decision rule set of the case condition space. Finally, the solution elements of the problem are reconstructed using the rule set and the values of the problem's conditional elements.
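
The first step — retaining every case whose view similarity to the problem meets a threshold, rather than only the single most similar case — can be sketched as follows. Plain cosine similarity stands in for the paper's game-theoretically weighted view similarity, and the case base is hypothetical.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_adaptation_set(case_base, problem, threshold=0.9):
    # Keep every case at or above the similarity threshold, not just the
    # top match, so several cases can jointly drive the adaptation.
    return [(case, sim) for case in case_base
            if (sim := cosine_similarity(case["features"], problem)) >= threshold]

# Hypothetical case base with three condition attributes per case.
case_base = [{"id": 1, "features": (1.0, 0.9, 0.8)},
             {"id": 2, "features": (0.2, 0.9, 0.1)},
             {"id": 3, "features": (0.9, 1.0, 0.7)}]
adaptation_set = retrieve_adaptation_set(case_base, (1.0, 1.0, 0.8))
```

Here two of the three cases clear the 0.9 threshold and together form the multi-case adaptation set.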

Findings

The findings suggest that the classic knowledge-matching approach presents the user with the most similar knowledge/cases but yields relatively low satisfaction. It also reveals that non-zero adaptation based on human–computer interaction suffers from strong subjectivity and low adaptation efficiency.

Research limitations/implications

In this study, the multi-case inductive adaptation of the problem to be solved is carried out by analyzing and extracting the law of the effect of the centralized conditions on the decision-making of the adaptation. The adaptation process is more rigorous, with less subjective influence, better reliability and higher application value. The approach described in this research can directly change the original data set, which is more beneficial to enhancing problem-solving accuracy while broadening the application area of the adaptation mechanism.

Practical implications

The examination of the calculation cases confirms the innovation of this study in comparison to the traditional method of matching cases with tacit knowledge extrapolation.

Social implications

The algorithm models established in this study develop theoretical directions for a multi-case induction adaptation study of tacit knowledge.

Originality/value

This study designs a multi-case induction adaptation scheme combining NRS and CBR for exogenous cases containing implicit knowledge. A game-theoretic combinatorial assignment method is applied to calculate the case view and the view similarity based on threshold screening.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 7 July 2023

Rongying Zhao and Weijie Zhu

This paper aims to conduct a comprehensive analysis to evaluate the current situation of journals, examine the factors that influence their development, and establish an…

Abstract

Purpose

This paper aims to conduct a comprehensive analysis to evaluate the current situation of journals, examine the factors that influence their development, and establish an evaluation index system and model. The objective is to enhance the theory and methodologies used for journal evaluation and provide guidance for their positive development.

Design/methodology/approach

This study uses empirical data from economics journals to analyse their evaluation dimensions, methods, index system and evaluation framework. This study then assigns weights to journal data using single and combined evaluations in three dimensions: influence, communication and novelty. It calculates several evaluation metrics, including the explanation rate, information entropy value, difference coefficient and novelty degree. Finally, this study applies the concept of fuzzy mathematics to measure the final results.
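
The entropy-weighting step named here (information entropy value, difference coefficient, weight) follows a standard recipe, sketched below with hypothetical journal indicator data — not the study's empirical economics-journal dataset.

```python
import numpy as np

def entropy_weighting(X):
    # X: rows = journals, columns = positive benefit-type indicators.
    P = X / X.sum(axis=0)                              # each journal's share
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy value per indicator
    d = 1.0 - e                                        # difference coefficient
    w = d / d.sum()                                    # normalized weights
    return e, d, w

# Hypothetical data: impact factor, social media mentions, topic novelty.
X = np.array([[2.1, 120.0, 0.30],
              [1.8,  45.0, 0.55],
              [3.0, 300.0, 0.20],
              [2.4,  80.0, 0.40]])
e, d, w = entropy_weighting(X)
```

Indicators that discriminate more strongly between journals have lower entropy, a larger difference coefficient, and hence a larger objective weight.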

Findings

The use of affiliation degree and fuzzy Borda number can synthesize ranking and score differences among evaluation methods. It combines internal objective information and improves model accuracy. The novelty of journal topics positively correlates with both the journal impact factor and social media mentions. In addition, journal communication power indicators compensate for the shortcomings of traditional citation analysis. Finally, the three-dimensional representative evaluation index serves as a reminder to academic journals to avoid the vortex of the Matthew effect.

Originality/value

This paper proposes a journal evaluation model comprising academic influence, communication power and novelty dimensions. It uses fuzzy Borda evaluation to address issues related to the weighing of single evaluation methods. This study also analyses the relationship of the three dimensions and offers insights for journal development in the new media era.

Details

The Electronic Library, vol. 41 no. 4
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 2 January 2024

Xiumei Cai, Xi Yang and Chengmao Wu

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of them lack robustness. The purpose of this paper is to…

Abstract

Purpose

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of them lack robustness. The purpose of this paper is to investigate a new algorithm that segments noisy images better while retaining as much image detail as possible.

Design/methodology/approach

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
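
The kernel-distance idea can be illustrated in isolation with a minimal single-view kernel FCM on 1-D grayscale values — the view-weight factor and spatial fuzzy factor of the full algorithm are omitted, and the noisy two-region "image" below is hypothetical.

```python
import numpy as np

def kernel_fcm(x, c=2, m=2.0, sigma=50.0, iters=50):
    # Kernel fuzzy c-means on 1-D grayscale values: a Gaussian kernel
    # replaces the Euclidean distance, damping the pull of outliers.
    v = np.array([x.min(), x.max()])          # deterministic spread init
    for _ in range(iters):
        K = np.exp(-((x[None, :] - v[:, None]) ** 2) / sigma ** 2)  # c x n
        dist = np.clip(1.0 - K, 1e-12, None)  # kernel-induced distance
        u = dist ** (-1.0 / (m - 1))
        u /= u.sum(axis=0)                    # memberships: columns sum to 1
        w = (u ** m) * K
        v = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)  # centre update
    return u, v

# Hypothetical noisy image: dark pixels near 40, bright near 200, one outlier.
x = np.array([38., 42., 40., 41., 39., 198., 202., 200., 199., 255.])
u, v = kernel_fcm(x)
labels = u.argmax(axis=0)
```

Because the kernel decays with distance, the 255 outlier still joins the bright cluster but exerts only a damped pull on its centre.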

Findings

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Originality/value

Most of the existing multi-view clustering algorithms are for multi-view datasets, and the multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and can better preserve the details of the original image.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 22 February 2024

Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers…


Abstract

Purpose

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact resulting from the interconnectedness of these algorithms, which can provide a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and analyze the academic influence of algorithms in the group based on the features of the network.

Design/methodology/approach

We use deep learning models to extract algorithm entities from articles and construct the whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of algorithm networks and then use various centrality metrics to obtain the score and ranking of group influence for each algorithm in the whole domain and each year. Finally, we analyze the influence evolution of different representative algorithms.
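
The network-construction and centrality-scoring steps can be sketched on toy data: given per-paper lists of extracted algorithm entities (the deep-learning extraction itself is assumed done), build a weighted co-occurrence network and rank algorithms by normalized degree centrality, one classic influence proxy. The papers and algorithm names below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(papers):
    # Edge weight = number of papers in which two algorithms co-occur.
    weights = defaultdict(int)
    for algos in papers:
        for a, b in combinations(sorted(set(algos)), 2):
            weights[(a, b)] += 1
    return weights

def degree_centrality(weights):
    # Normalized degree: distinct neighbours / (n - 1).
    neighbours = defaultdict(set)
    for a, b in weights:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    return {v: len(nb) / (n - 1) for v, nb in neighbours.items()}

# Hypothetical extracted algorithm mentions, one list per paper.
papers = [["LSTM", "CRF"], ["LSTM", "Transformer"],
          ["Transformer", "BERT"], ["LSTM", "CRF", "HMM"]]
centrality = degree_centrality(build_cooccurrence(papers))
top = max(centrality, key=centrality.get)
```

In practice the same networks support richer centrality metrics (betweenness, closeness, eigenvector) for the multi-faceted influence scores the study computes.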

Findings

The results indicate that algorithm networks also exhibit the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. Algorithms that are classic, high-performing and appear at the junctions of different eras can possess high popularity, control, a central position and balanced influence in the network. As an algorithm's sway within the group gradually diminishes, it typically loses its core position first, followed by a dwindling association with other algorithms.

Originality/value

To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 2 October 2023

Deergha Sharma and Pawan Kumar

Growing concern over sustainability adoption has presented an array of challenges to businesses. While vital to an economy's success, banking is not immune to societal…

Abstract

Purpose

Growing concern over sustainability adoption has presented an array of challenges to businesses. While vital to an economy's success, banking is not immune to the societal, environmental and economic consequences of business practices. The study examines the sustainable performance of banking institutions against a suggested multidimensional framework comprising economic, environmental, social, governance and financial dimensions and 52 sustainability indicators, and benchmarks the significant performance indicators of leading banks indispensable to sustainable banking performance. The findings address research questions concerning the extent of sustainable banking performance, the ranking of sustainability dimensions and indicators, and the standardization of sustainability adoption metrics.

Design/methodology/approach

To determine the responsiveness of the banking industry to sustainability dimensions, content analysis was conducted using NVivo software for the year 2021–2022. Furthermore, a hybrid multicriteria decision-making (MCDM) approach is used by integrating entropy, the technique for order preference by similarity to ideal solution (TOPSIS) and VlseKriterijumska Optimizacija KOmpromisno Resenje (VIKOR) to provide relative weights to performance indicators and prioritize banks based on their sustainable performance. Sensitivity analysis is used to ensure the robustness of results.
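
The VIKOR component of this hybrid can be sketched on its own: it blends a weighted group-utility score with a worst-case regret score into a single compromise index Q (lower is better). The bank scores are hypothetical, and the weights stand in for those produced upstream by the entropy method.

```python
import numpy as np

def vikor(X, w, benefit, v=0.5):
    # VIKOR compromise ranking: S = weighted group utility, R = worst
    # individual regret, Q = blend of both (lower Q = better compromise).
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    norm = w * (best - X) / (best - worst)
    S = norm.sum(axis=1)
    R = norm.max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

# Hypothetical bank scores on four benefit-type sustainability dimensions,
# with weights assumed to come from the entropy step.
X = np.array([[0.8, 0.6, 0.7, 0.5],
              [0.6, 0.9, 0.5, 0.7],
              [0.9, 0.7, 0.8, 0.6]])
w = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, True, True])
Q = vikor(X, w, benefit)
ranking = np.argsort(Q)   # best-compromise bank first
```

The parameter v trades off majority utility against individual regret; v = 0.5 is the conventional balanced choice, and sensitivity analysis typically varies it.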

Findings

In the context of the Indian banking industry, the pattern of sustainability reporting is inconsistent and concentrated on addressing environmental and social concerns. The results of the entropy methodology prioritized the “Environmental” dimension over the other selected dimensions, while the “Financial” dimension was assigned the lowest priority in the ranking order. The significant sustainable performance indicators delineated in this study should be used as standards to ensure the accountability and credibility of the sustainable banking industry. Additionally, the research findings will provide valuable inputs to policymakers and regulators to ensure a better contribution of the banking sector to meeting sustainability goals.

Originality/value

Considering the paucity of studies on sustainable banking performance, this study makes two significant contributions to the literature. First, the suggested multidimensional disclosure model integrating financial and nonfinancial indicators would facilitate banking institutions in addressing the five aspects of sustainability. As one of the first studies in the context of the Indian banking industry, the findings would pave the way for better diffusion of sustainability practices. Second, the inclusion of MCDM techniques prioritizes the significance of sustainability indicators and benchmarks the performance of leading banks to achieve better profits and more substantial growth.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

