Search results

1 – 10 of 324
Open Access
Article
Publication date: 1 August 2021

Dara O. Connor and Kathryn Cormican

Abstract

Purpose

There is compelling evidence that organisations are failing to reap the full benefits of lean initiatives. While much work has been conducted on what factors are critical to the success of lean initiatives, there is a dearth of empirical evidence relating to whether team leaders implement critical success factors (CSFs) in practice. Therefore, this study aims to explore the extent to which functional team leaders implement lean practices, focussing on the role of leadership, empowerment and culture.

Design/methodology/approach

The research analysed team leaders in a single-site manufacturing organisation. A state-of-the-art analysis was conducted to isolate relevant themes and an instrument was developed to capture data. Empirical data was collected and analysed from 34 team leaders in engineering, quality and manufacturing.

Findings

The study found that while many good managerial practices to support lean are implemented, significant challenges relating to cultural issues remain to be addressed. The findings illuminate a latent gap in commitment and communication from senior management, as well as an underlying discrepancy in time and resource allocation.

Originality/value

The study’s findings provide new knowledge concerning the extent to which CSFs are implemented by functional team leaders in a real-world environment. The enquiry makes a valuable departure from previous research that focusses on leadership at a senior and middle manager level. It bridges the gap between academia and practice and provides tangible and concise results to management on how CSFs relating to leadership, empowerment and culture impact team leaders to drive lean methodologies.

Details

International Journal of Lean Six Sigma, vol. 13 no. 2
Type: Research Article
ISSN: 2040-4166

Open Access
Article
Publication date: 20 February 2018

Ben Clegg

Abstract

Purpose

The purpose of this paper is to identify which growth-impeding constraints are perceived to act upon the operations of small- to medium-sized (SME) companies by their owner-managers and to recommend transitionary paths to elevate constraints and increase the contribution levels made by SMEs’ operations. To do so, this research has been founded primarily upon Hayes et al.’s (2005) operations contribution model for differentiating between different levels of operations’ contribution, and secondarily on the theory of constraints philosophy to explain the perceptions of constraints found at each level – current and future.

Design/methodology/approach

An open-ended survey and a series of group workshops gathered new empirical data about these perceptions, which were coded using relational content analysis to identify a parsimonious set of perceptual growth-impeding constraint categories. The most popular transitions were identified, a correlation of frequency rank orders between “perceived current” and “perceived future” constraint categories was calculated, and likely transitionary paths for growth are discussed. Three SME case studies were documented in related action research to contextualise the survey findings.
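The rank-order correlation step described above can be illustrated with Spearman's rho computed over two frequency rankings. This is a minimal, tie-free sketch; the function name is an assumption for illustration, not taken from the paper:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length sequences
    without ties: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(values):
        # Rank positions (1-based) of each element when sorted ascending.
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d_sq / (n * (n ** 2 - 1))
```

Identical rank orders give rho = 1, reversed orders give rho = −1; a high rho between "perceived current" and "perceived future" rankings would indicate that the same constraint categories are expected to persist.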

Findings

The most popular transition was from “neutral” to “leading”. A lack of people capability was perceived to be the most commonly reported growth-impeding constraint category, followed by a combined lack of process competence and product and service innovation, further followed by a lack of skills in information technology automation. In addition, a new conceptual model has been generated inductively to address shortcomings found in the original operations contribution model (Hayes et al., 2005) during its application to UK SMEs. The new model is referred to in this paper as the “Operations Growth Rocket”.

Research limitations/implications

This research only used data from UK SMEs.

Practical implications

This work should help SME owner-managers to overcome growth-impeding constraints that act upon their operations and assist them to develop more effective actions and paths to increase the contribution levels made by their operations. This in turn should support growth of their organisations. Findings will also inform teaching about more effective operations management in SMEs.

Social implications

This work should help UK SMEs to grow, which in turn will strengthen the UK economy.

Originality/value

A novel approach and new data from 208 SMEs modify a classical operations contribution model (Hayes et al., 2005). This is achieved by treating transitionary paths as continua of meta-categories abstracted from the constraint categories, combined with case data, for moving towards higher levels of operations contribution, rather than using discrete growth-impeding and growth-constraining “levels”. This research has inductively generated a new version of the classical contribution model that should be more suitable for stimulating growth in (UK) SMEs.

Details

International Journal of Operations & Production Management, vol. 38 no. 3
Type: Research Article
ISSN: 0144-3577

Content available
Book part
Publication date: 15 May 2023

Abstract

Details

Contemporary Studies of Risks in Emerging Technology, Part B
Type: Book
ISBN: 978-1-80455-567-5

Open Access
Article
Publication date: 7 June 2021

Adriana Soares Ito, Torbjörn Ylipää, Per Gullander, Jon Bokrantz and Anders Skoogh

Abstract

Purpose

Manufacturing companies struggle to manage production disturbances. One step of such management deals with prioritising those disturbances which should undergo root cause analysis. The focus of this work is on two areas. First, investigating current challenges faced by manufacturing companies when prioritising root cause analysis of production disturbances. Second, identifying the stakeholders and factors impacted by production disturbances. Understanding the current challenges and identifying impacted stakeholders and factors allows the development of more efficient prioritisation strategies and, thus, contributes to the reduction of frequency and impact of disturbances.

Design/methodology/approach

To achieve the intended purpose of this research, a qualitative approach was chosen. A series of interviews was conducted with practitioners to identify current challenges, and a series of focus groups was held to identify the stakeholders and factors impacted by disturbances.

Findings

Manufacturing companies face various challenges when prioritising production disturbances, relating to the time needed, the criteria used, centralisation of the process, the perspective considered and data support. It was also found that a wide range of stakeholders is impacted by production disturbances, surpassing the limits of the production and maintenance departments. Furthermore, the most critical factors impacted are quality, work environment, safety, time, company results, customer satisfaction, productivity, deliverability, resource utilisation, profit, process flow, plannability, machine health and reputation.

Originality/value

The current situation regarding root cause analysis prioritisation has not been identified in previous works. Moreover, there has been no prior systematic identification of the various stakeholders and factors impacted by production disturbances.

Details

International Journal of Quality & Reliability Management, vol. 39 no. 5
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 23 June 2021

Bart A. Lameijer, Wilmer Pereira and Jiju Antony

Abstract

Purpose

The purpose of this research is to develop a better understanding of the hurdles in implementing Lean Six Sigma (LSS) for operational excellence in digital emerging technology companies.

Design/methodology/approach

We have conducted case studies of LSS implementations in six US-based companies in the digital emerging technology industry.

Findings

Critical success factors (CSF) for LSS implementations in digital emerging technology companies are: (1) organizational leadership that is engaged with the implementation, (2) LSS methodology that is rebranded to fit existing shared values in the organization, (3) restructuring of the traditional LSS training program to include a more incremental, prioritized, on-the-job training approach and (4) a modified LSS project execution methodology that includes (a) condensing the phases and tools applied in LSS projects and (b) adopting more iterative project management methods compared to the standard phased LSS project approach.

Research limitations/implications

The qualitative nature of our analysis and the geographic coverage of our sample limit the generalizability of our findings.

Practical implications

The findings raise awareness and knowledge of the critical success factors and LSS methodology modifications specifically relevant to digital emerging technology companies, and to companies with a similar focus on product development, innovation and growth, such as R&D departments in high-tech manufacturing companies.

Originality/value

Research on industry-specific enablers for successful LSS implementation in the digital emerging technology industry is virtually absent. Our research informs practitioners on how to implement LSS in this and alike industries, and points to aspects of such implementations that are worthy of further attention from the academic community.

Details

Journal of Manufacturing Technology Management, vol. 32 no. 9
Type: Research Article
ISSN: 1741-038X

Open Access
Article
Publication date: 6 September 2022

Rose Clancy, Ken Bruton, Dominic T.J. O’Sullivan and Aidan J. Cloonan

Abstract

Purpose

Quality management practitioners have yet to seize the potential of digitalisation. Furthermore, there is a lack of tools, such as frameworks, guiding practitioners in the digital transformation of their organisations. The purpose of this study is to provide a framework to guide quality practitioners with the implementation of digitalisation in their existing practices.

Design/methodology/approach

A review of the literature assessed how quality management and digitalisation have been integrated. Findings from the literature review highlighted the success of integrating Lean manufacturing with digitalisation. A comprehensive list of Lean Six Sigma tools was then reviewed in terms of their effectiveness and relevance for the hybrid digitisation approach to process improvement (HyDAPI) framework.

Findings

The implementation of the proposed HyDAPI framework in an industrial case study led to increased efficiency, reduced waste, standardised work, mistake proofing and the ability to root-cause non-conforming products.

Research limitations/implications

The activities and tools in the HyDAPI framework are not inclusive of all techniques from Lean Six Sigma.

Practical implications

The HyDAPI framework is a flexible guide for quality practitioners to digitalise key information from manufacturing processes. The framework allows organisations to select the appropriate tools as needed. This is required because of the varying and complex nature of organisation processes and the challenge of adapting to the continually evolving Industry 4.0.

Originality/value

This research proposes the HyDAPI framework as a flexible and adaptable approach for quality management practitioners to implement digitalisation. This was developed because of the gap in research regarding the lack of procedures guiding organisations in their digital transition to Industry 4.0.

Details

International Journal of Lean Six Sigma, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-4166

Abstract

Details

Journal of Global Operations and Strategic Sourcing, vol. 17 no. 2
Type: Research Article
ISSN: 2398-5364

Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for the assessment of relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function and presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single score of efficiency for each assessed organization is obtained; this technique ranks organizations based on relative efficiency; and finally, it yields benchmarking information. DEA models provide both benchmarking information and efficiency scores for each of the evaluated units when applied to a dataset of observations and variables (inputs and outputs). Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA). Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piece-wise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets that are determined by the “furthest” efficient projection to the evaluated DMU. The projected point on the efficient frontier obtained as such may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining closest targets instead.
The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application on airlines, that it is possible to find substantial differences between the targets provided by applying the criterion used by the traditional DEA models and those obtained when the criterion of closeness is utilized for determining projection points on the efficient frontier. The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
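To make the contrast concrete, the traditional radial (furthest-target) measure that the closest-target literature reacts against can be sketched as a small linear program: the input-oriented CCR model. This is an illustrative sketch assuming SciPy; the function name and data layout are assumptions, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (radial) efficiency of DMU `o`.

    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    Returns theta in (0, 1]; theta = 1 means DMU `o` lies on the frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0  # minimise theta
    # Input constraints:  sum_j lambda_j * x_j <= theta * x_o
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output constraints: sum_j lambda_j * y_j >= y_o  (written as <=)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(
        c,
        A_ub=np.vstack([A_in, A_out]),
        b_ub=np.concatenate([np.zeros(m), -Y[o]]),
        bounds=[(None, None)] + [(0.0, None)] * n,
    )
    return res.fun
```

With one input and one output, a DMU using twice the input of an efficient peer for the same output scores theta = 0.5; its radial target halves the input, regardless of whether a closer (less demanding) efficient target exists.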

Findings

All the interesting features of the determination of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). So, in this paper, we present a general classification of published contributions, mainly from a methodological perspective, and additionally, we indicate avenues for further research on this topic. The approaches that we cite in this paper differ in the way that the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize DEA model slacks to determine the closest efficient targets. However, as we will show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Content available
Article
Publication date: 1 June 2021

Albert Vasso, Richard Cobb, John Colombi, Bryan Little and David Meyer

Abstract

Purpose

The US Government is challenged to maintain pace as the world’s de facto provider of space object cataloging data. Augmenting capabilities with nontraditional sensors presents an expeditious and low-cost improvement. However, the large tradespace and unexplored system-of-systems performance requirements pose a challenge to successful capitalization. This paper aims to better define and assess the utility of augmentation via a multi-disciplinary study.

Design/methodology/approach

Hypothetical telescope architectures are modeled and simulated on two separate days, then evaluated against performance measures and constraints using multi-objective optimization in a heuristic algorithm. Decision analysis and Pareto optimality identify a set of high-performing architectures while preserving decision-maker design flexibility.
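The Pareto-optimality step above reduces to filtering out dominated candidates: an architecture is kept only if no other candidate is at least as good on every performance measure and strictly better on at least one. A minimal sketch, maximising every objective (the function name is an assumption for illustration):

```python
def pareto_front(points):
    """Indices of non-dominated points, maximising every objective.

    A point is dominated if some other point is >= on all
    objectives and strictly > on at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p)))
            and any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

For example, among candidates scored on (capacity, coverage), a point that is best on both dominates the rest, while two points that each win on a different objective are both retained, which is what preserves the decision-maker's design flexibility.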

Findings

Capacity, coverage and maximum time unobserved are recommended as key performance measures. A total of 187 out of 1017 architectures were identified as top performers. Some 29% of the sensors considered appear in over 80% of the top architectures. Additional considerations further reduce the tradespace to 19 best choices, which collect an average of 49–51 observations per space object with a 595–630 min average maximum time unobserved, providing redundant coverage of the Geosynchronous Orbit belt. This represents a three-fold increase in capacity and coverage and a 2 h (16%) decrease in the maximum time unobserved compared to the baseline government-only architecture as modeled.

Originality/value

This study validates the utility of an augmented network concept using a physics-based model and modern analytical techniques. It objectively responds to policy mandating cataloging improvements without relying solely on expert-derived point solutions.

Details

Journal of Defense Analytics and Logistics, vol. 5 no. 1
Type: Research Article
ISSN: 2399-6439

Open Access
Article
Publication date: 31 August 2018

Gangani Sureka, Yapa Mahinda Bandara and Deepthi Wickramarachchi

Abstract

The purpose of this research is to identify the current reverse logistics practices adopted by soft drink companies and the prominent factors that determine the efficiency and effectiveness of the entire reverse logistics channel. The paper employs Pareto analysis and the Analytical Hierarchy Process (AHP) method on data collected, via two questionnaires, from logistics professionals involved in the soft drink industry in Sri Lanka. Transportation, accidents, packaging, method of storage, the cleaning process and the sorting process were identified as the prominent factors, and the first four have a higher influence on both efficiency and effectiveness. The analysis also identifies external factors, arising from outsourced dealers, that can create inefficiencies. A lack of previous literature on the subject and difficulty in accessing field data were the main limitations of this study. The identified factors will help to pinpoint the correct root causes of inefficiencies in current reverse logistics practices, and concentrating on these factors will give soft drink industry players an opportunity to implement a sustainable green supply chain that reduces waste at each stage of its forward and reverse logistics processes. Transportation, accidents, packaging and storage have previously been identified as considerations in reverse logistics processes, and the current study showed that they have a higher impact on both the efficiency and effectiveness of reverse logistics and should be given specific consideration in operations.
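The AHP step mentioned above derives priority weights for the candidate factors from pairwise-comparison judgments. The abstract does not specify the weighting procedure, so this sketch uses one common approximation, the geometric-mean (row products) method; the function name and the SciPy-free NumPy dependency are assumptions:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    via the geometric-mean (row products) approximation.

    pairwise[i][j] states how much more important factor i is
    than factor j (reciprocal matrix, diagonal of ones).
    """
    M = np.asarray(pairwise, dtype=float)
    gm = M.prod(axis=1) ** (1.0 / M.shape[1])  # row geometric means
    return gm / gm.sum()                       # normalise to sum to 1
```

For a consistent 2×2 matrix stating that transportation is twice as important as packaging, the weights come out as 2/3 and 1/3; with more factors, an inconsistency check (e.g. Saaty's consistency ratio) would normally accompany this step.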

Details

Journal of International Logistics and Trade, vol. 16 no. 2
Type: Research Article
ISSN: 1738-2122
