Search results

1 – 10 of over 10000
Article
Publication date: 22 November 2010

Eisenhower C. Etienne


Abstract

Purpose

This paper aims to show that the convergence/divergence of a company's quality policies and practices towards/away from Six Sigma benchmark policies and practices mirrors and anticipates the convergence/divergence of its sigma metrics (SMs) from quantitative Six Sigma benchmarks. Further, the paper evaluates the robustness of the quality processes of the three companies studied, comparing them to that of the Six Sigma benchmark by subjecting these processes to the twin performance shocks of the benchmark Six Sigma 1.5σ allowance for process drift and a 25 percent tightening of customer requirements.

Design/methodology/approach

Using a novel methodology more appropriate to the critical quality characteristics of typical service industry companies, the paper computes a set of SMs for each company that is richer and broader than the metrics found in standard Six Sigma tables. This new methodology is based on the empirically observed defect rates currently being generated by a service process. Further, based on the available empirical data, the paper compares these metrics to the Six Sigma benchmarks.
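The abstract does not spell out the computation, but the standard conversion from an observed defect rate to a sigma metric is well established; a minimal sketch in Python, assuming the conventional 1.5σ long-term drift allowance mentioned in the Purpose section:

```python
from statistics import NormalDist

def sigma_metric(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Convert an observed defect rate into a sigma metric, applying the
    conventional Six Sigma 1.5-sigma allowance for long-term process drift."""
    defect_rate = defects / opportunities
    z_short_term = NormalDist().inv_cdf(1.0 - defect_rate)  # short-term z-score
    return z_short_term + shift

# By convention, 3.4 defects per million opportunities is a "six sigma" process.
print(round(sigma_metric(34, 10_000_000), 2))  # 6.0
```

Defect rates observed in a running service process plug in directly, which is the sense in which such metrics can be computed for service businesses without a measurable, normally distributed product characteristic.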

Findings

First, the paper shows that it is possible to compute a broad array of Six Sigma metrics for service businesses based on defect rate data. Second, the results confirm the central proposition of the research: the convergence/divergence of the qualitative characteristics of a company's quality system towards/away from benchmark Six Sigma policies and practices mirrors and anticipates the convergence/divergence of the company's quality metrics from the Six Sigma benchmark. Third, the research produced the unanticipated result that the quantitative quality performance of high‐performing service businesses on the Six Sigma metrics is much lower than anticipated and below what is normally achieved by their manufacturing counterparts. The results were also used to evaluate the Taguchi robustness of service processes.

Originality/value

First, the paper demonstrates that traditional Six Sigma computational methodology for generating Six Sigma metrics that is prevalent in manufacturing applies equally to service businesses. Second, the parallel convergence of the qualitative characteristics of a company's quality system towards Six Sigma practices and its quantitative metrics towards the Six Sigma benchmark means that primacy must be given to quality practices as the drivers of quality improvement. Third, the fact that high‐performing service businesses achieve Six Sigma measures that are so low compared to their manufacturing counterparts seems to point either to some key measurement challenges in deploying Six Sigma in service industries or to the need to further change Six Sigma methodology to make it more applicable to these businesses.

Details

International Journal of Lean Six Sigma, vol. 1 no. 4
Type: Research Article
ISSN: 2040-4166

Keywords

Article
Publication date: 30 March 2010

Yahaya Makarfi Ibrahim


Abstract

Purpose

The desire to improve efficiency has led both academics and practitioners to embrace various technologies to help managers discharge their functions. Recently, there has been growing interest amongst construction researchers in the use of computer vision and image‐processing techniques to automatically capture work in progress. Reported findings are promising; however, those previous studies fall short of providing a reporting mechanism to aid decision making. The purpose of this paper is to develop a reporting model based on progress captured using computer vision.

Design/methodology/approach

The paper first presents trends in research relating to the use of computer vision in the monitoring of work in progress. It then employs the Unified Modelling Language (UML) to present the conceptual development of the model. The computerised reporting model is developed using the Visual Basic programming language.

Findings

The key elements of the model are computations of cost‐schedule variances, payments and cash flows. Results of a test on a hypothetical case show that the model accurately computes the metrics.
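The model's exact formulas are not given in the abstract, but cost-schedule variances conventionally follow standard earned-value definitions; a minimal sketch (function name and figures hypothetical):

```python
def progress_report(bcws: float, bcwp: float, acwp: float) -> dict:
    """Standard earned-value metrics from three inputs: BCWS (planned value),
    BCWP (earned value) and ACWP (actual cost of work performed)."""
    return {
        "cost_variance": bcwp - acwp,      # > 0 means under budget
        "schedule_variance": bcwp - bcws,  # > 0 means ahead of schedule
        "cpi": bcwp / acwp,                # cost performance index
        "spi": bcwp / bcws,                # schedule performance index
    }

report = progress_report(bcws=100_000, bcwp=90_000, acwp=95_000)
print(report["cost_variance"])   # -5000 (over budget)
print(report["spi"])             # 0.9 (behind schedule)
```

In the reported model, the earned value (BCWP) would come from progress captured by computer vision rather than manual measurement.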

Originality/value

The reporting model serves to provide managers with a quick and easy means of interpreting work progress captured using computer vision. It reinforces the value of already existing work on the application of computer vision techniques to the measurement of work in progress on construction sites.

Details

Journal of Engineering, Design and Technology, vol. 8 no. 1
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 26 January 2018

Amin Mahmoudi, Mohd Ridzwan Yaakub and Azuraliza Abu Bakar

Abstract

Purpose

Users are the key players in an online social network (OSN), so the behavior of the OSN is strongly related to their behavior. User weight refers to the influence of the users on the OSN. The purpose of this paper is to propose a method to identify the user weight based on a new metric for defining the time intervals.

Design/methodology/approach

The behavior of an OSN changes over time, so the user weight in the OSN differs in each time frame. A good estimate of user weight in an OSN therefore depends on the accuracy of the metric used to define the time interval. The new metric for defining the time intervals is based on the standard deviation, and the user weight is identified with a simple exponential smoothing model.
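The paper's exact formulas are not reproduced in the abstract; a minimal sketch of simple exponential smoothing applied to a user's per-interval activity, with a hypothetical standard-deviation-based interval length (both function names and the α value are assumptions):

```python
import statistics

def smoothed_user_weight(activity: list[float], alpha: float = 0.5) -> float:
    """Simple exponential smoothing over per-interval activity counts.
    Recent intervals dominate: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = activity[0]
    for x in activity[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def interval_length(inter_event_times: list[float]) -> float:
    """Hypothetical interval choice tied to the spread (standard deviation)
    of inter-event times, so intervals track behavioural change."""
    return statistics.mean(inter_event_times) + statistics.stdev(inter_event_times)

# A user whose activity bursts recently gets a weight pulled toward the burst.
print(smoothed_user_weight([2, 4, 8]))  # 5.5
```

The smoothing step is what gives priority to recent time intervals, which the Originality section identifies as the gap in earlier work.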

Findings

The results show that the proposed method covers the maximum behavioral changes of the OSN and is able to identify the influential users in the OSN more accurately than existing methods.

Research limitations/implications

In event detection, when a terrorist attack occurs as an event, knowing the influential users helps to identify the leader of the attack. Knowing the influential user in each time interval, as identified in this study, can help to detect the communities that form around these people. Finally, in marketing, this enables targeted advertising.

Practical implications

User effect is a significant issue in many OSN domain problems, such as community detection, event detection and recommender systems.

Originality/value

Previous studies do not give priority to the recent time intervals in identifying the relative importance of users. Thus, defining a metric to compute a time interval that covers the maximum changes in the network is a major shortcoming of earlier studies. Some experiments were conducted on six different data sets to test the performance of the proposed model in terms of the computed time intervals and user weights.

Details

Data Technologies and Applications, vol. 52 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 16 March 2012

Kabir C. Sen


Abstract

Purpose

Although the PGA Tour provides a wide array of statistics, no single measure has successfully been able to predict a player's success during the season, either in terms of earnings per tournament or weighted average scores. The purpose of this paper is to present a metric that attempts to predict annual player rankings based on these two criteria.

Design/methodology/approach

The metric is computed from available statistics and attempts to encapsulate a player's unique strengths and weaknesses in a single number.

Findings

Deviations in rankings based on the metric are compared to those based on earnings per event and adjusted scoring averages. The results suggest that in addition to the average annual performance on the greens, the mix of tournaments played and the incidence of heroics or consistency have an important impact on the chances of success on the Tour.

Research limitations/implications

The metric's predictions can be negatively affected if a golfer records a large proportion of double eagles or double bogeys.

Practical implications

The KCS (Key Criterion of Success) metric provides a quick route to succinctly summarizing a golfer's unique strengths and weaknesses in a single number.

Originality/value

Previous literature has mentioned the gap between statistics and success in golf. For the first time, possible reasons behind this divergence are identified in this paper.

Details

Sport, Business and Management: An International Journal, vol. 2 no. 1
Type: Research Article
ISSN: 2042-678X

Keywords

Article
Publication date: 1 February 2021

Narasimhulu K, Meena Abarna KT and Sivakumar B

Abstract

Purpose

The purpose of the paper is to study the multiple viewpoints required to access more informative similarity features among tweet documents, which is useful for achieving robust tweets data clustering results.

Design/methodology/approach

Let N be the number of tweet documents for topic extraction. In the initial pre-processing step, unwanted text, punctuation and other symbols are removed, and tokenization and stemming are performed. Bag-of-features representations are determined for the tweets, and the tweets are then modelled with these features during topic extraction. Approximate topic features are extracted for every tweet document, and the sets of topic features of the N documents are treated as multi-viewpoints. The key idea of the proposed work is to use these multi-viewpoints in the similarity computation. As an illustration, consider five tweet documents (N = 5) defined in a projected space with five viewpoints, say v1, v2, v3, v4 and v5. Similarity features between two documents (viewpoints v1 and v2) are computed with respect to the other three viewpoints (v3, v4 and v5), unlike the single viewpoint of the traditional cosine metric.
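A minimal sketch of the multi-viewpoint idea as described: the similarity of two documents is measured after translating both by each remaining viewpoint, then averaged, rather than measured once from the origin as in the plain cosine metric (the vectors are hypothetical):

```python
import math

def cosine(a, b):
    """Plain cosine similarity, measured from the origin."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def multi_viewpoint_similarity(a, b, viewpoints):
    """Average the cosine of (a - v, b - v) over every other viewpoint v."""
    sims = []
    for v in viewpoints:
        a_shifted = [x - y for x, y in zip(a, v)]
        b_shifted = [x - y for x, y in zip(b, v)]
        sims.append(cosine(a_shifted, b_shifted))
    return sum(sims) / len(sims)

d1, d2 = [2.0, 0.0], [0.0, 2.0]       # the two documents being compared
print(cosine(d1, d2))                  # 0.0 -- orthogonal from the origin
print(round(multi_viewpoint_similarity(d1, d2, [[-1.0, -1.0]]), 2))  # 0.6
```

Shifting the frame to other documents can reveal similarity that the single-origin cosine misses, which is the motivation for using multi-viewpoints in clustering.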

Findings

The approach is applied to healthcare problems with tweets data. Topic models play a crucial role in the classification of health-related tweets by finding topics (or health clusters), instead of computing term frequency-inverse document frequency (TF-IDF) for unlabelled tweets.

Originality/value

Topic models play a crucial role in the classification of health-related tweets with finding topics (or health clusters) instead of finding TF-IDF for unlabelled tweets.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 28 April 2014

Jonas Johansson, Ilja Belov, Erland Johnson and Peter Leisner

Abstract

Purpose

The purpose of this paper is to introduce a novel computational method to evaluate damage accumulation in a solder joint of an electronic package, when exposed to operating temperature environment. A procedure to implement the method is suggested, and a discussion of the method and its possible applications is provided in the paper.

Design/methodology/approach

Methodologically, interpolated response surfaces based on specially designed finite element (FE) simulation runs are employed to compute a damage metric at regular time intervals of an operating temperature profile. The developed method has been evaluated on a finite-element model of a lead-free PBGA256 package, with accumulated creep strain energy density chosen as the damage metric.
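The exact form of the response surfaces is not given in the abstract, but the run-time look-up step they enable can be sketched as interpolation in a table precomputed from FE runs; here a bilinear example over two hypothetical axes (all grid values are made up for illustration):

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Look up a damage metric by bilinear interpolation in a response
    surface sampled on a grid: table[i][j] holds the value at (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# Hypothetical surface: mean joint temperature (°C) x temperature swing (K)
# -> accumulated creep strain energy density per interval (arbitrary units).
temps = [25.0, 75.0]
swings = [0.0, 50.0]
damage = [[0.0, 1.0], [0.5, 2.0]]
print(bilinear(temps, swings, damage, 50.0, 25.0))  # 0.875
```

Evaluating such a table at each interval of a temperature profile is far cheaper than re-running the FE model, which is consistent with the two-orders-of-magnitude speed-up reported below.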

Findings

The method has proven to be two orders of magnitude more computationally efficient than FE simulation. A general agreement within 3 percent has been found between the results predicted with the new method and FE simulations when tested on a number of temperature profiles from an avionic application. The solder joint temperature ranges between +25 and +75°C.

Practical implications

The method can be implemented as part of reliability assessment of electronic packages in the design phase.

Originality/value

The method enables increased accuracy in thermal fatigue life prediction of solder joints. Combined with other failure mechanisms, it may contribute to the accuracy of reliability assessment of electronic packages.

Article
Publication date: 13 June 2019

Arthur Piquet, Boubakr Zebiri, Abdellah Hadjadj and Mostafa Safdari Shadloo

Abstract

Purpose

This paper aims to present the development of a highly parallel finite-difference computational fluid dynamics code in generalized curvilinear coordinates system. The objectives are to handle internal and external flows in fairly complex geometries including shock waves, compressible turbulence and heat transfer.

Design/methodology/approach

The code is equipped with high-order discretization schemes to improve the computational accuracy of the solution algorithm. Besides, a new method to deal with geometrical singularities, the so-called domain decomposition method (DDM), is implemented. The DDM consists of two different meshes communicating with each other, where the base mesh is Cartesian and the overlapping one is a hollow cylinder.

Findings

The robustness of the implemented code is appraised through several numerical test cases, including vortex advection, supersonic compressible flow over a cylinder, Poiseuille flow, and turbulent channel and pipe flows. The results obtained here are in excellent agreement with the experimental data and previous direct numerical simulation (DNS). The DDM strategy also proved successful: simulation time is clearly decreased, and the connection between the two subdomains does not create spurious oscillations.

Originality/value

In sum, the developed solver is capable of solving, accurately and with high precision, two- and three-dimensional compressible flows in fairly complex geometries. DNS data for supersonic pipe flows are not abundant in the literature and will therefore be made available online for the community.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 30 no. 1
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 20 August 2018

Corren G. McCoy, Michael L. Nelson and Michele C. Weigle

Abstract

Purpose

The purpose of this study is to present an alternative to university ranking lists published in U.S. News & World Report, Times Higher Education, Academic Ranking of World Universities and Money Magazine. A strategy is proposed to mine a collection of university data obtained from Twitter and publicly available online academic sources to compute social media metrics that approximate typical academic rankings of US universities.

Design/methodology/approach

The Twitter application programming interface (API) is used to rank 264 universities using two easily collected measurements. The University Twitter Engagement (UTE) score is the total number of primary and secondary followers affiliated with the university. The authors mine other public data sources related to endowment funds, athletic expenditures and student enrollment to compute a ranking based on the endowment, expenditures and enrollment (EEE) score.

Findings

In rank-to-rank comparisons, the authors observed a significant, positive rank correlation (τ = 0.6018) between UTE and an aggregate reputation ranking, which indicates UTE could be a viable proxy for ranking atypical institutions normally excluded from traditional lists.
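The rank-to-rank comparison above uses the Kendall τ statistic; a minimal self-contained sketch of the naive computation, with hypothetical five-university rankings (the data here are illustrative, not the paper's):

```python
from itertools import combinations

def kendall_tau(rank_a: list[int], rank_b: list[int]) -> float:
    """Naive Kendall rank correlation between two rankings of the same items:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Two hypothetical rankings of five universities (1 = best): the lists agree
# except that the second and third places are swapped.
ute_rank = [1, 2, 3, 4, 5]
reputation_rank = [1, 3, 2, 4, 5]
print(kendall_tau(ute_rank, reputation_rank))  # 0.8
```

A value near +1 indicates the two lists order the universities similarly, which is the sense in which τ = 0.6018 supports UTE as a proxy ranking.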

Originality/value

The UTE and EEE metrics offer distinct advantages because they can be calculated on-demand rather than relying on an annual publication and they promote diversity in the ranking lists, as any university with a Twitter account can be ranked by UTE and any university with online information about enrollment, expenditures and endowment can be given an EEE rank. The authors also propose a unique approach for discovering official university accounts by mining and correlating the profile information of Twitter friends.

Details

Information Discovery and Delivery, vol. 46 no. 3
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 1 May 1998

W.F. Spotz

Abstract

Considers the extension of a new class of higher‐order compact methods to nonuniform grids and examines the effect of pollution that arises with differencing the associated metric coefficients. Numerical studies for the standard model convection diffusion equation in 1D and 2D are carried out to validate the convergence behaviour and demonstrate the high‐order accuracy.
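The compact schemes themselves are beyond a short sketch, but the metric coefficients in question already arise in the simplest nonuniform-grid difference; a three-point first-derivative formula in which the unequal spacings play that role (the grid and function are illustrative):

```python
def nonuniform_first_derivative(x, f):
    """Three-point first derivative on a nonuniform grid (interior points only).
    The unequal spacings h_minus and h_plus enter the weights as the metric
    coefficients that a uniform-grid formula does not need."""
    d = []
    for i in range(1, len(x) - 1):
        hm = x[i] - x[i - 1]      # spacing to the left neighbour
        hp = x[i + 1] - x[i]      # spacing to the right neighbour
        num = hm * hm * f[i + 1] - hp * hp * f[i - 1] - (hm * hm - hp * hp) * f[i]
        d.append(num / (hm * hp * (hm + hp)))
    return d

# The formula is exact for quadratics even on a stretched grid:
# f(x) = x**2 has f'(0.5) = 1.0.
x = [0.0, 0.5, 1.5]
f = [v * v for v in x]
print(nonuniform_first_derivative(x, f))  # [1.0]
```

Differencing errors in such spacing-dependent coefficients are the "pollution" the abstract refers to: inaccuracies in the metric terms feed directly into the discrete derivative.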

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 8 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 15 December 2020

Francisco Villarreal-Valderrama, Carlos Santana Delgado, Patricia Del Carmen Zambrano-Robledo and Luis Amezquita-Brooks

Abstract

Purpose

Reducing fuel consumption of unmanned aerial vehicles (UAVs) during transient operation is a cornerstone to achieve environment-friendly operations. The purpose of this paper is to develop a control scheme that improves the fuel economy of a turbojet in its full operating envelope.

Design/methodology/approach

A novel direct-thrust linear quadratic integral (LQI) approach, comprising an optimal observer/controller satisfying specified performance parameters, is presented. The thrust estimator, based on a Wiener model, is validated with experimental data from a micro-turbojet. Model uncertainty is characterized by analyzing variations between the identified model and measured data, and the resulting uncertainty range is used to verify closed-loop stability with the circle criterion. The proposed controller provides stable responses with the specified performance over the whole operating range, even after considering plant nonlinearities. Finally, the direct-thrust LQI is compared with a standard thrust controller to assess fuel economy and performance.

Findings

The direct-thrust LQI approach reduced fuel consumption by 2.1090 percent in the most realistic scenario. The controllers were also evaluated using the environmental effect parameter (EEP) and transient-thrust-specific fuel consumption (T-TSFC), two novel metrics proposed to evaluate the environmental impact during transient-thrust operations; the direct-thrust LQI approach has more efficient fuel consumption according to these metrics. The results also show that isolating the thrust dynamics within the feedback loop has an important impact on fuel economy.

Originality/value

This study shows the design of an effective direct-thrust control approach that minimizes fuel consumption, ensures stable responses for the full operation range, allows isolating the thrust dynamics when designing the controller and is compatible with classical robustness and performance metrics. Finally, the study shows that a simple controller can reduce the fuel consumption of the turbojet during transient operation in scenarios that approximate realistic operating conditions.

Details

Aircraft Engineering and Aerospace Technology, vol. 93 no. 3
Type: Research Article
ISSN: 1748-8842

Keywords
