Search results

1 – 10 of over 301,000
Article
Publication date: 1 March 2001

Sandi Mann

Downloads: 111

Abstract

Details

Leadership & Organization Development Journal, vol. 22 no. 2
Type: Research Article
ISSN: 0143-7739

Keywords

Article
Publication date: 20 September 2021

Marwa Kh. Hassan

Abstract

Purpose

The purpose of this study is to obtain the modified maximum likelihood estimator of the stress–strength model under ranked set sampling, to obtain the asymptotic and bootstrap confidence intervals of P[Y < X], to compare the performance of the author’s estimates with the estimates under simple random sampling and to apply the estimates to head and neck cancer data.

Design/methodology/approach

The maximum likelihood estimator of R = P[Y < X], where X and Y are two independent inverse Weibull random variables with a common shape parameter (which governs the shape of the distribution) and different scale parameters (which govern its dispersion), is given under ranked set sampling, together with the asymptotic and bootstrap confidence intervals. Monte Carlo simulation shows that this estimator performs better than the estimator under simple random sampling, and that the asymptotic and bootstrap confidence intervals under ranked set sampling are better than the corresponding interval estimators under simple random sampling. The application to head and neck cancer data shows that treatment with radiotherapy is more efficient than treatment with combined radiotherapy and chemotherapy, with the ranked set sampling estimators again outperforming those under simple random sampling.
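
As an illustration of the comparison being made, the following Python sketch runs a small Monte Carlo study: it draws inverse Weibull samples under both schemes and plugs naive shape-known maximum likelihood estimates into R = λx/(λx + λy), the closed form for common-shape inverse Weibull variables. The parameter values are assumed, and the plug-in estimator stands in for the paper’s modified MLE.

```python
# Monte Carlo sketch: estimating R = P(Y < X) for inverse Weibull data
# under simple random sampling (SRS) vs ranked set sampling (RSS).
# Naive plug-in estimator, not the paper's modified MLE; parameters assumed.
import numpy as np

rng = np.random.default_rng(0)
beta, lam_x, lam_y = 2.0, 3.0, 1.5        # common shape, different scales
R_true = lam_x / (lam_x + lam_y)          # closed form for common-shape IW

def rvs_iw(lam, size):
    """Inverse Weibull draws via inversion of F(x) = exp(-lam * x**-beta)."""
    u = rng.uniform(size=size)
    return (-np.log(u) / lam) ** (-1.0 / beta)

def rss_sample(lam, n_sets, n_cycles):
    """RSS: per cycle and rank r, draw n_sets values, keep the r-th order statistic."""
    return np.array([np.sort(rvs_iw(lam, n_sets))[r]
                     for _ in range(n_cycles) for r in range(n_sets)])

def estimate_R(x, y):
    """Plug shape-known MLEs lam_hat = n / sum(x**-beta) into R."""
    lx = len(x) / np.sum(x ** -beta)
    ly = len(y) / np.sum(y ** -beta)
    return lx / (lx + ly)

n_sets, n_cycles, reps = 5, 6, 2000       # 30 observations per sample
n = n_sets * n_cycles
se_srs = [(estimate_R(rvs_iw(lam_x, n), rvs_iw(lam_y, n)) - R_true) ** 2
          for _ in range(reps)]
se_rss = [(estimate_R(rss_sample(lam_x, n_sets, n_cycles),
                      rss_sample(lam_y, n_sets, n_cycles)) - R_true) ** 2
          for _ in range(reps)]
print(f"true R = {R_true:.3f}")
print(f"MSE under SRS = {np.mean(se_srs):.5f}")
print(f"MSE under RSS = {np.mean(se_rss):.5f}")   # typically smaller
```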

Findings

Ranked set sampling is more effective than simple random sampling for inference on the stress–strength model based on the inverse Weibull distribution.

Originality/value

This study sheds light on the application of the author’s estimates to head and neck cancer data.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 22 September 2021

Uchenna Uzo

Abstract

Purpose

This study aims to investigate how and why retailers and resellers in sampled firms of the informal economy set prices, and the performance implications of the firms’ pricing efforts.

Design/methodology/approach

The author generated their insights through an inductive qualitative study of four organizations operating within the informal economy in the Nigerian retailing sector.

Findings

The study found that some organizations within the informal economy set prices in different ways, namely negotiated pricing and fixed pricing. The contracting criteria between retailers and resellers determine the pricing strategy: contractual terms based on relational ties between the two parties facilitate negotiated price-setting, while contractual terms based on non-relational ties promote fixed pricing. The type of price-setting arrangement a sampled retailer adopts relates to the organization’s performance within its industry. In particular, the study found that retailers that adopted negotiated pricing performed above the industry average for their product category, whereas retailers that adopted fixed pricing performed below it.

Originality/value

As far as the author knows, this is the first study to investigate pricing methods within the informal economy. This is also the first known study to investigate price-setting arrangements between retailers and resellers within the informal economy. Another unique contribution of this paper is that it is the first study that focuses on pricing interactions among business-to-business firms within the informal economy. The study contributes to the work on relational embeddedness, relational contracting and informal economies.

Details

Qualitative Market Research: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1352-2752

Keywords

Article
Publication date: 22 September 2021

Samar Ali Shilbayeh and Sunil Vadera

Abstract

Purpose

This paper aims to describe the use of a meta-learning framework for recommending cost-sensitive classification methods with the aim of answering an important question that arises in machine learning, namely, “Among all the available classification algorithms, and in considering a specific type of data and cost, which is the best algorithm for my problem?”

Design/methodology/approach

The framework is based on the idea of applying machine learning techniques to discover knowledge about the performance of different machine learning algorithms. It includes components that repeatedly apply different classification methods to data sets and measure their performance. The characteristics of the data sets, combined with the algorithms and their measured performance, provide the training examples. A decision tree algorithm is applied to these examples to induce the knowledge, which can then be used to recommend algorithms for new data sets. The paper contributes to both meta-learning and cost-sensitive machine learning; neither field is new, but building a recommender that recommends the optimal cost-sensitive approach for a given data problem is the contribution. The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system, and, unlike the compared system, takes the misclassification cost into consideration during the learning process.
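
The loop this paragraph describes (apply classifiers to many data sets, record costs, learn a recommender from data set characteristics) can be sketched in a few lines. The sketch below uses scikit-learn rather than WEKA, with toy meta-features and an assumed cost setup; it illustrates the framework’s structure, not the authors’ implementation.

```python
# Meta-learning sketch: run candidate classifiers on many data sets, label
# each data set with the algorithm that minimises misclassification cost,
# then induce a decision tree that recommends an algorithm from data-set
# characteristics. Meta-features and costs are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
COST_FP, COST_FN = 1.0, 5.0               # assumed asymmetric misclassification costs
candidates = {"naive_bayes": GaussianNB(),
              "knn": KNeighborsClassifier(),
              "tree": DecisionTreeClassifier(max_depth=5)}

def meta_features(X, y):
    """Toy data-set characterisation: size, dimensionality, class balance."""
    return [X.shape[0], X.shape[1], float(np.mean(y))]

def total_cost(y_true, y_pred):
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return COST_FP * fp + COST_FN * fn

meta_X, meta_y = [], []
for _ in range(40):                        # 40 synthetic "training" data sets
    X, y = make_classification(
        n_samples=int(rng.integers(100, 400)),
        n_features=int(rng.integers(5, 20)),
        weights=[float(rng.uniform(0.5, 0.9))],
        random_state=int(rng.integers(10**6)))
    costs = {name: total_cost(y, cross_val_predict(clf, X, y, cv=3))
             for name, clf in candidates.items()}
    meta_X.append(meta_features(X, y))
    meta_y.append(min(costs, key=costs.get))   # cheapest algorithm is the label

recommender = DecisionTreeClassifier(max_depth=3).fit(meta_X, meta_y)
X_new, y_new = make_classification(n_samples=250, n_features=10, random_state=7)
print("recommended:", recommender.predict([meta_features(X_new, y_new)])[0])
```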

Findings

The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that a developed meta-learning solution produces better results than METAL, a well-known meta-learning system.

Originality/value

The paper presents a major piece of new work for the first time: meta-learning work has been done before, but this paper presents a new meta-learning framework that is cost-sensitive.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Keywords

Article
Publication date: 26 July 2021

Pengcheng Li, Qikai Liu, Qikai Cheng and Wei Lu

Abstract

Purpose

This paper aims to identify data set entities in scientific literature. To address poor recognition caused by a lack of training corpora in existing studies, a distant supervised learning-based approach is proposed to identify data set entities automatically from large-scale scientific literature in an open domain.

Design/methodology/approach

First, the authors used a dictionary combined with a bootstrapping strategy to create a labelled corpus for supervised learning. Second, a bidirectional encoder representations from transformers (BERT)-based neural model was applied to identify data set entities in the scientific literature automatically. Finally, two data augmentation techniques, entity replacement and entity masking, were introduced to enhance the model’s generalisability and improve the recognition of data set entities.
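
The two augmentation strategies are straightforward to sketch on BIO-tagged token sequences. The tag names, tiny dictionary and tokenisation below are illustrative assumptions, not the authors’ corpus format.

```python
# Sketch of the two augmentation strategies on BIO-tagged tokens:
# (1) entity replacement swaps a data-set mention for another name from the
# dictionary; (2) entity masking replaces the mention's tokens with [MASK].
import random

DATASET_NAMES = [["ImageNet"], ["SQuAD"], ["MS", "COCO"]]   # assumed dictionary

def entity_spans(tags):
    """Yield (start, end) spans of B-DATASET/I-DATASET runs."""
    start = None
    for i, t in enumerate(tags + ["O"]):                    # "O" sentinel at end
        if t == "B-DATASET":
            if start is not None:
                yield start, i
            start = i
        elif t != "I-DATASET" and start is not None:
            yield start, i
            start = None

def replace_entity(tokens, tags):
    spans = list(entity_spans(tags))
    if not spans:
        return tokens, tags
    s, e = random.choice(spans)
    new = random.choice(DATASET_NAMES)
    new_tags = ["B-DATASET"] + ["I-DATASET"] * (len(new) - 1)
    return tokens[:s] + new + tokens[e:], tags[:s] + new_tags + tags[e:]

def mask_entity(tokens, tags):
    spans = list(entity_spans(tags))
    if not spans:
        return tokens, tags
    s, e = random.choice(spans)
    return tokens[:s] + ["[MASK]"] * (e - s) + tokens[e:], tags

toks = ["We", "train", "on", "CIFAR-10", "."]
tags = ["O", "O", "O", "B-DATASET", "O"]
print(replace_entity(toks, tags))
print(mask_entity(toks, tags))
```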

Findings

In the absence of training data, the proposed method can effectively identify data set entities in large-scale scientific papers. The BERT-based vectorised representation and data augmentation techniques enable significant improvements in the generality and robustness of named entity recognition models, especially in long-tailed data set entity recognition.

Originality/value

This paper provides a practical research method for automatically recognising data set entities in scientific literature. To the best of the authors’ knowledge, this is the first attempt to apply distant learning to the study of data set entity recognition. The authors introduce a robust vectorised representation and two data augmentation strategies (entity replacement and entity masking) to address the problem inherent in distant supervised learning methods, which the existing research has mostly ignored. The experimental results demonstrate that our approach effectively improves the recognition of data set entities, especially long-tailed data set entities.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 17 August 2021

Stuti Saxena

Abstract

Purpose

The purpose of this paper is to present an evaluation of the national Open Government Data (OGD) portal of India (www.data.gov.in) and underline the significance of maintaining the quality of the data sets published online.

Design/methodology/approach

The research approach is based on adapted versions of embeddedness theory and the cybernetic model, together with the data set usability framework proposed in recent literature (Machova et al., 2018).

Findings

Findings from this study indicate that an OGD initiative needs to be embedded in the social fabric of the country to ensure that the data sets are reused by a myriad of stakeholders for deriving social and economic value. Likewise, the linkages between the stakeholders (for instance, government, citizens, non-governmental bodies and the private sector) should be fortified to enable appropriate reuse of the data sets.

Originality/value

Maintaining the quality of the data sets is of paramount importance. Implicitly, all the stakeholders concerned should make efforts to ensure that the data sets are qualitatively and quantitatively adequate. The paper concludes with limitations and pointers for further research.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 24 August 2021

Zehra Canan Araci, Ahmed Al-Ashaab and Cesar Garcia Almeida

Abstract

Purpose

This paper aims to present a process for generating physics-based trade-off curves (ToCs) to facilitate lean product development by enabling two key activities of the set-based concurrent engineering (SBCE) process model: comparing alternative design solutions and narrowing down the design set. The developed process of generating physics-based ToCs is demonstrated via an industrial case study conducted within a research project.

Design/methodology/approach

The research approach adopted for this paper consists of three phases: reviewing the related literature, developing the process of generating physics-based ToCs in the context of lean product development, and implementing the developed process in an industrial case study for validation through the SBCE process model.

Findings

Findings of this application showed that physics-based ToCs are an effective tool for enabling SBCE activities, saving time and providing the knowledge environment that designers require to support their decision-making.
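
As a toy illustration of what a physics-based ToC looks like, the sketch below plots tip deflection against beam length for three candidate materials using the Euler-Bernoulli cantilever formula; curves that cross an assumed deflection limit are narrowed out of the design set. The load, section and limit values are invented for illustration and are not from the case study.

```python
# Toy physics-based trade-off curves: cantilever tip deflection
# d = F * L**3 / (3 * E * I) for three candidate materials. A deflection
# limit line shows how the design set is narrowed. All values assumed.
import numpy as np
import matplotlib.pyplot as plt

F, I = 500.0, 8e-9                        # load [N], second moment of area [m^4]
materials = {"steel": 200e9, "aluminium": 69e9, "titanium": 116e9}  # E [Pa]
L = np.linspace(0.2, 1.0, 50)             # design parameter: beam length [m]

for name, E in materials.items():
    deflection = F * L**3 / (3 * E * I)   # Euler-Bernoulli tip deflection [m]
    plt.plot(L, 1e3 * deflection, label=name)

plt.axhline(5.0, ls="--", c="k", label="5 mm limit")   # narrowing criterion
plt.xlabel("beam length L [m]")
plt.ylabel("tip deflection [mm]")
plt.legend()
plt.title("Trade-off curves: deflection vs length")
plt.show()
```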

Practical implications

The authors expect that this paper will guide companies that are implementing SBCE processes throughout their lean product development journey. Physics-based ToCs will facilitate accurate decision-making in comparing and narrowing down the design set by providing the right knowledge environment.

Originality/value

SBCE is a useful approach to developing a new product. It is essential to provide the right knowledge environment in a quick and visual manner, which has been addressed here by capturing physics knowledge in ToCs; accordingly, a systematic process has been developed and presented in this paper. The research found that physics-based ToCs can capture different physics characteristics of the product in the form of design parameters and visualise them in a single graph that all stakeholders can understand without an extensive engineering background, enabling designers to make decisions faster.

Details

International Journal of Lean Six Sigma, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-4166

Keywords

Article
Publication date: 19 July 2021

Johanna Gummerus, Jacob Mickelsson, Jakob Trischler, Tuomas Härkönen and Christian Grönroos

Abstract

Purpose

This paper aims to develop and apply a service design method that allows for stronger recognition and integration of human activities into the front-end stages of the service design process.

Design/methodology/approach

Following a discussion of different service design perspectives and activity theory, the paper develops a method called activity-set mapping (ActS). ActS is applied to an exploratory service design project to demonstrate its use.

Findings

Three broad perspectives on service design are suggested: (1) the dyadic interaction, (2) the systemic interaction and (3) the customer activity perspectives. The ActS method draws on the latter perspective and focuses on the study of human activity sets. The application of ActS shows that the method can help identify and visualize sets of activities.

Research limitations/implications

The ActS method opens new avenues for service design by zooming in on the micro level and capturing the set of activities linked to a desired goal achievement. However, the method is limited to activities reported by research participants and may exclude unconscious activities. Further research is needed to validate and refine the method.

Practical implications

The ActS method will help service designers explore activities in which humans engage to achieve a desired goal/end state.

Originality/value

The concept of “human activity set” is new to service research and opens analytical opportunities for service design. The ActS method contributes a visualization tool for identifying activity sets and uncovering the benefits, sacrifices and frequency of activities.

Details

Journal of Service Management, vol. 32 no. 6
Type: Research Article
ISSN: 1757-5818

Keywords

Article
Publication date: 8 July 2021

Johann Eder and Vladimir A. Shekhovtsov

Abstract

Purpose

Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data of unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which have to be protected carefully and are available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture that supports researchers in efficiently and effectively identifying relevant collections of material and data with documented quality for their research projects while observing strict privacy rules.

Design/methodology/approach

Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.

Findings

This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotations. The paper discusses in depth the quality dimensions of data sets for medical research and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.
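
A minimal sketch of the kind of metadata-with-quality-annotation structure such a conceptual model implies is shown below. The class and attribute names are assumptions for illustration, not the BBMRI-ERIC schema or the authors’ model.

```python
# Sketch: metadata for a biobank collection with per-attribute quality
# annotations. Only the schema-level metadata is stored; the sensitive
# medical data itself stays behind the access policy. Names are assumed.
from dataclasses import dataclass, field

@dataclass
class QualityAnnotation:
    dimension: str          # e.g. "completeness", "accuracy", "timeliness"
    value: float            # score in [0, 1]
    method: str             # how the score was assessed

@dataclass
class AttributeMeta:
    name: str
    datatype: str
    quality: list[QualityAnnotation] = field(default_factory=list)

@dataclass
class CollectionMeta:
    biobank: str
    collection_id: str
    access_policy: str      # privacy/ethics constraints on the underlying data
    attributes: list[AttributeMeta] = field(default_factory=list)

sample = CollectionMeta(
    biobank="Example Biobank", collection_id="C-001",
    access_policy="ethics approval required",
    attributes=[AttributeMeta("diagnosis", "ICD-10",
                [QualityAnnotation("completeness", 0.97, "null-count")])])
print(sample.attributes[0].quality[0])
```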

Originality/value

This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 17 June 2021

Pengyue Guo, Zhijing Zhang, Lingling Shi and Yujun Liu

Abstract

Purpose

The purpose of this study was to solve the problem of pose measurement of various parts for a precision assembly system.

Design/methodology/approach

A novel alignment method that can achieve high-precision pose measurement of microparts based on a monocular microvision system was developed. To obtain the precise pose of parts, an area-based contour point set extraction algorithm and a point set registration algorithm were developed. First, the part positioning problem was transformed into a probability-based two-dimensional point set rigid registration problem. Then, a Gaussian mixture model was fitted to the template point set, and the contour point set was represented by hierarchical data. Maximum likelihood estimation and the expectation-maximization algorithm were used to estimate the transformation parameters between the two point sets.
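
The registration step can be sketched as a small EM loop in the spirit of coherent point drift: the template points act as Gaussian mixture centroids, the E-step computes responsibilities for each contour point, and the M-step solves a weighted Procrustes problem for the rigid transform. This is a simplified two-dimensional illustration, not the authors’ exact algorithm.

```python
# EM sketch for rigid 2-D point set registration: template points act as
# Gaussian mixture centroids; E-step computes responsibilities, M-step
# solves a weighted Procrustes problem (Kabsch/SVD) for rotation R and
# translation t. Simplified illustration, not the authors' exact method.
import numpy as np

def register_rigid(template, contour, iters=50, sigma2=1.0):
    """Estimate R, t such that template @ R.T + t aligns with contour."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = template @ R.T + t                              # (M, 2)
        # E-step: responsibility of each centroid for each contour point
        d2 = ((contour[:, None, :] - moved[None, :, :]) ** 2).sum(-1)  # (N, M)
        P = np.exp(-d2 / (2.0 * sigma2))
        P /= P.sum(axis=1, keepdims=True) + 1e-12
        # M-step: weighted means and cross-covariance, then Kabsch
        w = P.sum(axis=0)                                       # (M,) weights
        mu_t = (w @ template) / w.sum()                         # template mean
        mu_c = (P.sum(axis=1) @ contour) / w.sum()              # contour mean
        A = (P.T @ contour - np.outer(w, mu_c)).T @ (template - mu_t)
        U, _, Vt = np.linalg.svd(A)
        D = np.diag([1.0, np.linalg.det(U @ Vt)])               # no reflections
        R = U @ D @ Vt
        t = mu_c - R @ mu_t
        sigma2 = max((P * d2).sum() / (2.0 * P.sum()), 1e-8)    # shrink noise
    return R, t

# Toy check: recover a known rotation and translation
rng = np.random.default_rng(2)
pts = rng.normal(size=(40, 2))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
R_est, t_est = register_rigid(pts, pts @ R_true.T + np.array([0.5, -0.2]))
print(np.round(R_est, 3), np.round(t_est, 3))
```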

Findings

The method has been validated for accelerometer assembly on a customized assembly platform through experiments. The results reveal that the proposed method can complete letter-pedestal assembly and swing piece-basal part assembly with a minimum gap of 10 µm. In addition, the experiments reveal that the proposed method is robust to noise and disturbance.

Originality/value

Owing to its good accuracy and robustness for the pose measurement of complex parts, this method can be easily deployed to assembly systems.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords
