Search results

1 – 10 of 608
Article
Publication date: 7 October 2013

Atri Sengupta, D.N. Venkatesh and Arun K. Sinha

Abstract

Purpose

The aims of the article are not only to review existing competency models and offer a comprehensive performance-linked competency model for sustaining competitive advantage, but also to validate the proposed model in an Indian textile organisation.

Design/methodology/approach

The article operationalises the term “competency” and intends to develop a comprehensive performance-linked competency model after analysing the existing models with respect to competitive advantage; and the model has been validated empirically in an Indian textile company using data envelopment analysis (DEA), cross-efficiency DEA, and rank order centroid (ROC) methods.

Findings

The study reveals that the comprehensive performance-linked competency model focuses on competency identification, competency scoring and aligning competency with other strategic HR functions in a three-phase systematic method, which will subsequently help the organisation sustain its competitive position. It is further shown how, using DEA, cross-efficiency DEA and ROC, an organisation can align individual performances and competencies in terms of efficiency.
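The rank order centroid step has a simple closed form, so it can be sketched directly; the competency names below are hypothetical, and this is a generic ROC weighting illustration, not the authors' full DEA pipeline:

```python
def roc_weights(n):
    """Rank order centroid weights for n criteria ranked 1..n.

    w_i = (1/n) * sum_{k=i}^{n} 1/k, so the top-ranked criterion
    receives the largest weight and the weights sum to 1.
    """
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# Hypothetical example: three competencies ranked by importance.
competencies = ["domain knowledge", "teamwork", "adaptability"]
weights = roc_weights(len(competencies))
for name, w in zip(competencies, weights):
    print(f"{name}: {w:.4f}")
# For n = 3 the weights are 11/18, 5/18 and 2/18 (about 0.611, 0.278, 0.111).
```

ROC needs only the ordinal ranking of criteria, which is why it pairs naturally with DEA-style efficiency scores when exact criterion weights are unknown.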

Research limitations/implications

If the number of competencies increases, DEA cannot be used.

Practical implications

The model can be applied in industry as a more efficient and effective performance measurement tool.

Originality/value

The paper enables organizations to systematically manage their employee competences to ensure high-performance level and competitive advantage.

Details

International Journal of Organizational Analysis, vol. 21 no. 4
Type: Research Article
ISSN: 1934-8835

Article
Publication date: 9 December 2020

Fatma Pakdil, Pelin Toktaş and Gülin Feryal Can

Abstract

Purpose

The purpose of this study is to develop a methodology in which alternate Six Sigma projects are prioritized and selected using appropriate multi-criteria decision-making (MCDM) methods in healthcare organizations. This study addresses a particular gap in implementing a systematic methodology for Six Sigma project prioritization and selection in the healthcare industry.

Design/methodology/approach

This study develops a methodology in which alternate Six Sigma projects are prioritized and selected using a modified Kemeny median indicator rank accordance (KEMIRA-M), an MCDM method, based on a case study in healthcare organizations. The case study was developed hypothetically in the healthcare industry and is presented to demonstrate the proposed framework's applicability and validity for future decision-makers who will take part in Six Sigma project selection processes.

Findings

The study reveals that the Six Sigma project prioritization by KEMIRA-M assigns the highest ranks to the patient satisfaction, revenue enhancement and sigma level benefit criteria, while resource utilization and process cycle time receive the lowest ranks.

Practical implications

The methodology developed in this paper proposes an MCDM-based approach for practitioners to prioritize and select Six Sigma projects in the healthcare industry. The findings regarding patient satisfaction and revenue enhancement mesh with the current trends that dominate and regulate the industry. KEMIRA-M provides flexibility for Six Sigma project selection and uses multiple criteria in two-criteria groups, simultaneously. In this study, a more objective KEMIRA-M method was suggested by implementing two different ranking-based weighting approaches.

Originality/value

This is the first study to implement KEMIRA-M in the Six Sigma project prioritization and selection process in the healthcare industry. To overcome previous KEMIRA-M shortcomings, the criteria weighting procedure of KEMIRA-M was developed using two different ranking-based weighting methods, the first such implementation of the KEMIRA-M weighting procedure. The study provides decision-makers with a methodology that considers both benefit- and cost-type criteria for the alternates and gives weight to experts' rankings of the criteria and to the alternates' performance values on those criteria.
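The Kemeny-median idea at the heart of KEMIRA can be illustrated with a minimal sketch: for a handful of criteria, the median ranking (the ordering minimizing total pairwise disagreement with the experts' rankings) can be found by exhaustive search. The criteria names are hypothetical, and this is a generic Kemeny median, not the full KEMIRA-M procedure:

```python
from itertools import combinations, permutations

def kendall_distance(r1, r2):
    """Number of item pairs that the two rankings order differently."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(
        (pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
        for a, b in combinations(r1, 2)
    )

def kemeny_median(rankings):
    """Exhaustive Kemeny median: feasible only for a handful of items."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda cand: sum(kendall_distance(cand, r) for r in rankings),
    )

# Hypothetical expert rankings of three project-selection criteria.
experts = [
    ("patient satisfaction", "revenue", "cycle time"),
    ("patient satisfaction", "cycle time", "revenue"),
    ("revenue", "patient satisfaction", "cycle time"),
]
print(kemeny_median(experts))
# → ('patient satisfaction', 'revenue', 'cycle time')
```

Exhaustive search is factorial in the number of items; practical KEMIRA-style methods rely on heuristics once more than a few criteria are involved.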

Details

International Journal of Lean Six Sigma, vol. 12 no. 3
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 16 January 2023

Renan Alves Viegas and Ana Paula Cabral Seixas Costa

Abstract

Purpose

Over the years, several business process management maturity models (BPM-MMs) have been proposed. Despite great advances, some issues concerning the effectiveness of their practical functionality still need to be addressed. These relate to three important aspects of BPM maturity assessment and improvement: the mechanisms for evaluating maturity (clarity, availability and accuracy), flexibility (compliance) and structure (path to maturity). The main goal of this article is to address these issues by introducing a new concept for evaluating and improving BPM maturity.

Design/methodology/approach

The authors proceed in accordance with a design science research (DSR) approach, integrating multi-criteria decision-making (MCDM) with intuitionistic fuzzy sets (IFSs).

Findings

The authors’ proposal provides a practical BPM maturity framework and its assessment procedure to support organizations in determining and improving their initiatives appropriately, which means that it fully or partially addresses all the issues raised. To demonstrate the applicability of this framework, a real application was conducted, and a parallel with existing BPM-MMs is presented to emphasize its advances.

Originality/value

It is the first time that the MCDM approach has been used to support BPM maturity assessment. This approach not only takes into account the uncertainties and subjectivities inherent to this type of decision problem but also allows it to be treated quantitatively, thus making it possible to obtain more accurate results even with less experienced teams.
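The intuitionistic-fuzzy side of such an assessment can be sketched with the standard score function s = mu − nu, which turns hedged expert judgments into a ranking; the maturity criteria and values below are hypothetical, and the paper's actual MCDM aggregation is richer than this:

```python
def ifs_score(mu, nu):
    """Score of an intuitionistic fuzzy value (membership mu, non-membership nu).

    Requires 0 <= mu + nu <= 1; the remainder pi = 1 - mu - nu is hesitancy.
    """
    assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0
    return mu - nu

# Hypothetical panel assessments of two BPM maturity criteria:
# each is (membership, non-membership) on [0, 1].
assessments = {
    "documentation": (0.7, 0.2),  # fairly mature, little disagreement
    "monitoring": (0.5, 0.3),     # less mature, more hesitancy
}

ranked = sorted(assessments.items(), key=lambda kv: ifs_score(*kv[1]), reverse=True)
print(ranked[0][0])  # → documentation (score 0.5 beats 0.2)
```

The hesitancy margin pi is what lets less experienced assessors express uncertainty explicitly instead of forcing a crisp score.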

Details

Business Process Management Journal, vol. 29 no. 2
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 1 March 2022

Elisabetta Colucci, Francesca Matrone, Francesca Noardo, Vanessa Assumma, Giulia Datola, Federica Appiotti, Marta Bottero, Filiberto Chiabrando, Patrizia Lombardi, Massimo Migliorini, Enrico Rinaldi, Antonia Spanò and Andrea Lingua

Abstract

Purpose

The study, within the Increasing Resilience of Cultural Heritage (ResCult) project, aims to support civil protection to prevent, lessen and mitigate disaster impacts on cultural heritage using a unique standardised 3D geographical information system (GIS) that includes both heritage and risk and hazard information.

Design/methodology/approach

A top-down approach, starting from existing standards (an INSPIRE extension integrated with other parts of the standardised and shared structure), was completed with a bottom-up integration according to current requirements for disaster prevention procedures and risk analyses. The results were validated and tested in case studies (differentiated by hazard and type of protected heritage) and refined during user forums.

Findings

Besides the resulting reusable database structure, populating it with case-study data underlined the tough challenges involved and allowed a sample of workflows and possible guidelines to be proposed. Interfaces are provided for using the resulting knowledge base.

Originality/value

The increasing number of natural disasters could severely damage cultural heritage, causing permanent damage to movable and immovable assets and to tangible and intangible heritage. The study provides an original tool that properly relates the (spatial) information on cultural heritage to the risk factors in a unique archive, a standards-based European tool for coping with these frequent losses and preventing risk.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 14 no. 2
Type: Research Article
ISSN: 2044-1266

Article
Publication date: 26 January 2010

Esmaeil Mehdizadeh

Abstract

Purpose

The aim of this paper is to present a fuzzy centroid-based method for ranking customer requirements that takes competition into consideration. The proposed method not only focuses on normal fuzzy numbers, but also considers non-normal fuzzy numbers to capture the true customer requirements.

Design/methodology/approach

This paper proposes a new customer requirements ranking method using QFD that not only focuses on the voice of the customer, but also considers the competitive environment. The method uses fuzzy mathematics instead of crisp numbers; this is known as the fuzzy centroid‐based method.

Findings

A numerical example demonstrates that when the fuzzy numbers are non-normal, previous ranking methods give incorrect results and have led to some misapplications. To avoid further misapplications from spreading in the future, the correct centroid formula for fuzzy numbers is derived and simplified expressions for non-normal fuzzy numbers are given.
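The centroid point of a non-normal triangular fuzzy number can be checked numerically: for a triangle (a, b, c) with height w, the x-coordinate of the centroid is (a + b + c)/3 regardless of w, while the y-coordinate is w/3. This height dependence is exactly why methods that assume normal fuzzy numbers (w = 1) go wrong. A small sketch (not the paper's derivation), using midpoint-rule integration:

```python
def triangular_mu(x, a, b, c, w):
    """Membership of a non-normal triangular fuzzy number (a, b, c; height w)."""
    if a < x <= b:
        return w * (x - a) / (b - a)
    if b < x < c:
        return w * (c - x) / (c - b)
    return 0.0

def centroid(a, b, c, w, n=100_000):
    """Centroid (x, y) of the region under mu, via midpoint-rule sums."""
    dx = (c - a) / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    mus = [triangular_mu(x, a, b, c, w) for x in xs]
    area = sum(mus) * dx
    x_bar = sum(x * m for x, m in zip(xs, mus)) * dx / area
    # centroid height of the region under the curve: integral of mu^2/2 over area
    y_bar = sum(m * m / 2 for m in mus) * dx / area
    return x_bar, y_bar

x_bar, y_bar = centroid(1.0, 2.0, 4.0, w=0.6)
print(round(x_bar, 3), round(y_bar, 3))  # ≈ 2.333 and 0.2, i.e. (1+2+4)/3 and w/3
```

Normalizing the height before ranking (as some earlier methods implicitly do) would move the y-centroid from w/3 to 1/3 and can reorder the alternatives.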

Originality/value

Various methods have been developed to rate and rank customer needs; however, few consider the competitive environment. In addition, in real applications, fuzzy mathematics is usually more appropriate than crisp models. Many previous methods are misleading and have led to misapplications when the fuzzy numbers are non-normal. The paper contributes to theory and practice by explaining the reasons for using the fuzzy centroid-based method.

Details

International Journal of Quality & Reliability Management, vol. 27 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 3 October 2016

Muhammet Enis Bulak, Ali Turkyilmaz, Metin Satir, Muhammad Shoaib and Muhammad Shahbaz

Abstract

Purpose

The purpose of this paper is to measure and evaluate the performance efficiency of electrical machinery manufacturing small- and medium-sized enterprises (SMEs) in Turkey. The industry-based efficiency evaluation provides management with information that identifies the relatively best-practice firms in the observation set and locates the relatively inefficient firms by comparison with the frontier.

Design/methodology/approach

In this study, an evaluation model based on the previous literature and a recent industry SWOT analysis is proposed to carry out efficiency analysis for electrical machinery manufacturing SMEs, and the output-oriented CCR data envelopment analysis methodology is used to identify frontier SMEs. The proposed efficiency measurement model is applied to 93 SMEs from the electrical machinery manufacturing sector.

Findings

Based on the model results, the efficiency scores of the firms are compared and the enhancements required to become an efficient unit are identified. This study builds on a previous research model that was applied to ten different industries. The results indicate that 39 of the 93 companies performed efficiently from a general perspective. The analysis also showed that firms have significant resource excesses and output shortages.

Originality/value

What distinguishes this study is that it evaluates Turkish electrical machinery manufacturing companies’ resources using a performance efficiency model that comprises strategic competitive priorities; the model also offers SMEs enhancement opportunities for becoming more competitive in the sector. The characteristic features of the firms are presented from demographic, financial and quality perspectives.

Details

Benchmarking: An International Journal, vol. 23 no. 7
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 9 May 2016

Chao-Lung Yang and Thi Phuong Quyen Nguyen

Abstract

Purpose

Class-based storage has been studied extensively and proved to be an efficient storage policy. However, little of the literature has addressed how to cluster stored items for class-based storage. The purpose of this paper is to develop a constrained clustering method integrated with principal component analysis (PCA) to meet the need of clustering stored items while considering practical storage constraints.

Design/methodology/approach

In order to consider item characteristics and the associated storage restrictions, must-link and cannot-link constraints were constructed to meet the storage requirements. The cube-per-order index (COI), which has been used for location assignment in class-based warehouses, was analyzed by PCA. The proposed constrained clustering method utilizes the principal component loadings as item sub-group features to identify the COI distribution of item sub-groups. The clustering results are then used for allocating storage via a heuristic assignment model based on COI.
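The COI step can be sketched as follows: COI is an item's required storage space divided by its order frequency, and the classic heuristic assigns the items with the lowest COI to the locations closest to the depot. The item data below are hypothetical, and the paper's PCA-based constrained clustering happens before this assignment step:

```python
def coi(space, order_freq):
    """Cube-per-order index: storage space needed per unit of order frequency."""
    return space / order_freq

# Hypothetical items: (name, required space in cubic metres, orders per week).
items = [("bolts", 2.0, 40), ("panels", 12.0, 6), ("wire", 3.0, 30)]
# Storage locations sorted by travel distance from the depot, closest first.
locations = ["A1", "A2", "B1"]

# Classic COI rule: lowest COI gets the closest location.
by_coi = sorted(items, key=lambda it: coi(it[1], it[2]))
assignment = {name: loc for (name, _, _), loc in zip(by_coi, locations)}
print(assignment)
# bolts (COI 0.05) -> A1, wire (0.10) -> A2, panels (2.0) -> B1
```

Low-COI items are small but fast-moving, so placing them near the depot minimizes travel per unit of demand; the must-link and cannot-link constraints restrict which items may share a class before this rule is applied.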

Findings

The clustering results showed that the proposed method provides better compactness among item clusters. Simulation also showed that the new location assignment produced by the proposed method improves retrieval efficiency by 33 percent.

Practical implications

When the number of items in a warehouse is tremendously large, revealing the storage constraints through human intervention becomes impossible. The developed method can easily be applied to solve the problem regardless of the size of the data.

Originality/value

The case study demonstrates an example of a practical location assignment problem with constraints. This paper also sheds light on developing a data clustering method that can be directly applied to solving practical data analysis issues.

Details

Industrial Management & Data Systems, vol. 116 no. 4
Type: Research Article
ISSN: 0263-5577

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, systems theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”, clarifying the classification standards and data quality characteristics of index data. A data quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretive structural modelling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific, standardized classification and hierarchical design of the index system can be realized.

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality inspection process for missing data is designed, based on systematic thinking about the whole and the individual: aiming at the accuracy, reliability and feasibility of the patched data, a quality inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Article
Publication date: 14 August 2020

Wei Chen, Wally Smieliauskas and Sifeng Liu

Abstract

Purpose

An important issue in online auditing is how to improve the reliability of online auditing in order to reduce the overall audit risk. In this paper, a reliability assessment and early-warning method of online auditing based on RC (rank centroid), AHP (analytic hierarchy process) and GM (1,1) is proposed from the perspective of information technology (IT) audit risk control.

Design/methodology/approach

The paper begins by structuring the AHP hierarchy for the reliability assessment of online auditing as used in China. Then, RC is used to rank the importance of the assessment criteria. Pairwise comparisons of the criteria are made based on the RC rank results, leading to a comparison matrix. Next, the comparison matrices are translated into weights, and the reliability assessment and early-warning model of online auditing is constructed using the GM (1,1) model. A case illustration analyzes the application of this method.
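A minimal GM (1,1) sketch, assuming the standard grey-model formulation (accumulated series, background values z, least-squares parameters) rather than the authors' exact variant; the reliability series below is made up:

```python
from math import exp

def gm11_forecast(x0, steps=1):
    """Fit a grey GM(1,1) model to series x0 and forecast `steps` values ahead.

    x0 is accumulated into x1, background values z_k = (x1_k + x1_{k-1}) / 2
    are formed, and the parameters a, b of x0_k = -a*z_k + b are estimated by
    ordinary least squares (closed form for one slope plus an intercept).
    """
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]
    z = [(x1[i] + x1[i - 1]) / 2 for i in range(1, n)]
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)  # slope = -a
    a, b = -slope, (sy - slope * sz) / m
    # x1_hat(k) = (x0_1 - b/a) * exp(-a*k) + b/a, with k = 0 giving x0_1.
    x1_hat = [(x0[0] - b / a) * exp(-a * k) + b / a for k in range(n + steps)]
    return [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + steps)]

# Made-up reliability scores for four audit periods; forecast the fifth.
series = [2.0, 2.2, 2.42, 2.662]
print(round(gm11_forecast(series, steps=1)[-1], 3))
# forecast for the fifth period, close to the underlying 2 * 1.1**4 ≈ 2.928
```

GM (1,1) is attractive for early warning precisely because it fits a usable exponential trend from as few as four observations.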

Findings

Research results show that the evaluation method designed in this paper is rigorous and effective. The reliability assessment and early-warning method of online auditing based on RC/AHP/GM (1,1) can assess and give effective early warning of reliability changes in an online auditing system, meeting the needs of current online auditing projects.

Practical implications

The results of this study have good potential for widespread future implementation in online auditing projects.

Originality/value

An effective reliability assessment and early-warning method of online auditing is proposed from the perspective of IT audit risk control in this study.

Details

Grey Systems: Theory and Application, vol. 11 no. 3
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

Abstract

Purpose

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness, efficiency and local optima issues. Numerous initialization strategies have been proposed to overcome these problems through the random or deterministic selection of initial centroids. The random initialization strategy suffers from local optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs high computational cost. Big data clustering aims to reduce computation costs and improve cluster efficiency. The objective of this study is to obtain better initial centroids for big data clustering on business management data without using random or deterministic initialization, avoiding local optima and improving clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density K-means (NDPDKM) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm resolves the KM initialization problem using the probability density of each data point. It first identifies the most probable density data points by applying the normal probability density with the mean and standard deviation of the dataset. Thereafter, NDPDKM determines the K initial centroids using sorting and linear systematic sampling heuristics.
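A rough sketch of this seeding idea (not the authors' exact NDPDKM code): score each point by its normal probability density under the sample mean and standard deviation, sort by that density, then take K seeds by linear systematic sampling over the sorted order, so the seeds come from different density strata rather than from random draws:

```python
from math import exp, pi, sqrt

def ndpd_seeds(data, k):
    """Pick k initial centroids for one-dimensional data: rank points by their
    normal pdf value, then sample systematically over the sorted order (a
    sketch of the NDPDKM seeding idea)."""
    n = len(data)
    mean = sum(data) / n
    std = sqrt(sum((x - mean) ** 2 for x in data) / n) or 1.0
    pdf = lambda x: exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))
    ranked = sorted(data, key=pdf)   # least to most probable under N(mean, std)
    step = n // k                    # linear systematic sampling stride
    return [ranked[min(i * step + step // 2, n - 1)] for i in range(k)]

# Made-up one-dimensional business data (e.g. order values) with three groups.
data = [1.0, 1.2, 1.1, 5.0, 5.2, 5.1, 9.0, 9.2, 9.1]
seeds = ndpd_seeds(data, k=3)
print(seeds)  # three seeds drawn from different density strata of the data
```

Because both the density scoring and the systematic sampling are deterministic given the data, the method avoids the run-to-run variance of random seeding without the pairwise-distance cost of methods like k-means++.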

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms through the Davies-Bouldin score, Silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing costs, and improves cluster performance, effectiveness and efficiency with stable convergence compared to the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining techniques that extract hidden knowledge, patterns and trends for decision-making strategies in business data. Business analytics is one of the applications of big data clustering where KM clustering is useful for the various subcategories of business analytics such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.
