Search results

1–10 of over 115,000
Article
Publication date: 5 June 2009

Agata Filipowska, Monika Kaczmarek, Marek Kowalkiewicz, Xuan Zhou and Matthias Born

Abstract

Purpose

The purpose of this paper is to present a methodology for evaluating business process management (BPM) methodologies. As this area lacks properly formalized approaches, the publication aims to promote the new approach proposed by the authors.

Design/methodology/approach

The authors analyse related methodologies and the theoretical background. On this basis, they propose an evaluation approach, which is used to verify the correctness of a semantic BPM (SBPM) methodology.

Findings

The proposed evaluation methodology has been tested in practice. Additional interviews were conducted, and interviewees stressed the high value of the approach. The evaluation methodology was validated on the example of the SBPM methodology used in a European integrated project.

Research limitations/implications

The results of this paper can be used to guide development and verify correctness of new BPM methodologies.

Practical implications

The paper demonstrates how validation of BPM methodologies can be conducted in practice.

Originality/value

The approach presented in the paper is the first comprehensive methodology for evaluating BPM methodologies.

Details

Business Process Management Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 8 April 2014

Sue Cooper

Abstract

Purpose

This research paper presents an innovative evaluation methodology which was developed as part of a doctoral research study in a voluntary sector youth organisation in England.

Design/methodology/approach

The transformative methodology synthesises aspects of appreciative inquiry, participatory evaluation and transformative learning, and engages the whole organisation in evaluating impact. Using an interpretive paradigm, data were collected from youth workers via semi-structured interviews before and after implementation of the transformative evaluation methodology.

Findings

Drawing on thematic analysis of the youth workers' experiences, it is argued that the illuminative and transformative nature of the methodology enabled the learning and development functions of evaluation to be realised. Further, it is argued that this form of evaluation not only supports the collection of evidence to demonstrate impact externally, but that the process itself has the potential to enhance practice, improve outcomes “in the moment” and promote organisational learning.

Research limitations/implications

The research findings are limited by the small-scale nature of the project. Further research is needed to investigate the supporting and enabling factors that underpin participatory practices in organisational evaluation, and in particular the experiences of managers and trustees, as these were not the focus of this research.

Originality/value

This article makes a significant contribution to knowledge regarding the design and use of participatory evaluation. It evidences the benefits in relation to generating practice improvements and, for practitioners themselves, in terms of countering the negative effects of performativity. Transformative evaluation offers an innovative structure and process through which organisational learning can be realised.

Details

The Learning Organization, vol. 21 no. 2
Type: Research Article
ISSN: 0969-6474

Article
Publication date: 13 July 2015

Razieh Dehghani and Raman Ramsin

Abstract

Purpose

This paper aims to provide a criteria-based evaluation framework for assessing knowledge management system (KMS) development methodologies.

Design/methodology/approach

The evaluation criteria have been elicited based on the features expected from a successful KMS. Furthermore, a number of prominent KMS development methodologies have been scrutinized based on the proposed evaluation framework.

Findings

It was demonstrated that the proposed evaluation framework is detailed and comprehensive enough to reveal the strengths and weaknesses of KMS development methodologies. It was also revealed that even though the evaluated methodologies possess certain strong features, they suffer from several shortcomings that need to be addressed.

Research limitations/implications

The evaluation framework has not been applied to all existing KMS development methodologies; however, the evaluation does cover the most comprehensive methodologies which exist in the research context.

Practical implications

The results of this research can be used for the following purposes: organizational goal-based selection of KMS development methodologies, evolution of existing KMS development methodologies and engineering of tailored-to-fit KMS development methodologies.

Originality/value

The proposed evaluation framework provides a comprehensive and detailed set of criteria for assessing general, area-specific and context-specific features of KMS development methodologies. KMS developers can select the methodology which best fits their requirements based on the evaluation results. Furthermore, method engineers can extend existing methodologies or engineer new ones so as to satisfy the specific requirements of the project at hand.

Details

Journal of Knowledge Management, vol. 19 no. 4
Type: Research Article
ISSN: 1367-3270

Article
Publication date: 31 October 2018

Ludek Broz and Tereza Stöckelová

Abstract

Purpose

The purpose of this paper is to contribute to the body of knowledge on how research evaluation in different national and organisational contexts affects, often in unintended ways, research and publication practices. In particular, it looks at the development of book publication in the social sciences and humanities (SSH) in the Czech Republic since 2004, when a performance-based system of evaluation was introduced, up to the present.

Design/methodology/approach

The paper builds upon ethnographic research complemented by the analysis of Czech science policy documents, data available in the governmental database “Information Register of R&D results” and formal and informal interviews with expert evaluators and other stakeholders in the research system. It further draws on the authors’ own experience as scholars, who have also over the years participated in a number of evaluation procedures as peers and experts.

Findings

The number of books published by SSH researchers based at Czech institutions has risen considerably in reaction to the pressure for productivity inscribed in the evaluation methodology. This has resulted in the rise of in-house publishing by researchers' own institutions, in “fake internationalisation” using low-quality foreign presses as publication venues, and in a culture of orphaned books that have no readers.

Practical implications

In the Czech Republic robust and internationally harmonised bibliometric data regarding books would definitely help to create a form of research evaluation that would stimulate meaningful scholarly book production. At the same time, better-resourced and better-designed peer review evaluation is needed.

Originality/value

This is the first attempt to analyse in detail the conditions and consequences that the Czech performance-based research evaluation system has for SSH book publication. The paper demonstrates that the often-discussed harm that performance-based, IF-centred research evaluation does to SSH, and to book-writing in particular, does not necessarily manifest in declining numbers of publications. On the contrary, the number of books published may increase at the cost of producing more texts of questionable scholarly quality.

Details

Aslib Journal of Information Management, vol. 70 no. 6
Type: Research Article
ISSN: 2050-3806

Content available
Article
Publication date: 21 April 2020

Tonderai Washington Shumba, Desderius Haufiku and Kabwebwe Honoré Mitonga

Abstract

Purpose

For the past four decades, there has been no consensus on suitable community-based rehabilitation (CBR) evaluation methodologies. To this end, the purpose of this study is to provide a narrative review of CBR evaluations and of the potential of the photovoice method when used alone and in combination with quality-of-life assessment tools as CBR evaluation methodologies.

Design/methodology/approach

A narrative review was undertaken, incorporating some aspects of scoping review methodology.

Findings

Thirty-three full-text articles were included for review. Three key findings emerged: an overview of the evolution of CBR evaluation; the use of the photovoice method in CBR evaluation; and the use of the photovoice method in combination with quality-of-life assessment tools in CBR evaluation.

Research limitations/implications

Photovoice methodology was found to be participatory in nature and to have the potential to elicit the experiences of persons with disabilities. However, photovoice falls short of measuring the quality of life of persons with disabilities and thus needs to be combined with another assessment tool. A combination of photovoice with the World Health Organization Quality of Life (WHOQOL)-BREF and WHOQOL-Dis assessments has the potential to give an adequate representation of the voices of persons with disabilities and their quality of life.

Originality/value

There is a need for changes in CBR evaluation methodologies in response to the evolution of disability models from the medical model to the human rights model. CBR evaluation methodologies should therefore embrace the diversity among persons with disabilities in interpreting life experiences and quality of life.

Details

Journal of Health Research, vol. 34 no. 6
Type: Research Article
ISSN: 0857-4421

Article
Publication date: 1 August 2006

Lefteris Moussiades and Anthi Iliopolou

Abstract

The evolution of Internet technology has influenced the basis of education by introducing new methodologies into teaching and giving a new dimension to distance learning. On the other hand, there is an emerging need on the users' part to define standards for judging the quality and effectiveness of educational websites. In this project, an evaluation methodology for Virtual Learning Environments (VLEs) is presented, along with basic guidelines that must be followed when evaluating a VLE. This report emphasizes the importance of evaluation in educational practice and presents a review of the current literature, aimed at helping VLE practitioners find their own way through the evaluation process.

Details

Interactive Technology and Smart Education, vol. 3 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 28 March 2008

Greta Kelly

Abstract

Purpose

This paper seeks to propose a collaborative process for evaluating, piloting and selecting new and emerging educational technologies. It aims to promote discussion about how such an evaluative process can be inclusive of interdisciplinary stakeholders and envision the actual application of these technologies in real teaching and learning contexts across disciplines.

Design/methodology/approach

The methodology applied in the piloting and evaluation of new educational technologies involves the design and identification of learning activities, the development of evaluation criteria which map to the goals of the learning activities and stakeholders' needs, and the incorporation of the technology‐enabled activities into the total course design.

Findings

Evaluation methodologies that involve interdisciplinary stakeholders collaborating on a software pilot expose participants to multiple perspectives and divergent views. The evaluation of new educational technologies within a teaching and learning context is more effective in exposing the benefits and weaknesses of the technology than a conventional software pilot.

Research limitations/implications

The new educational technologies evaluation methodology proposed in this paper has only been fully applied in three product pilots and is still in its developmental stage. The research is limited to the evaluation of educational software, not the implementation of new educational technologies.

Practical implications

The iterative and time-consuming process of piloting and evaluating new educational technologies within a course context is one in which academics require pedagogical, technological and administrative support. This paper presents a methodology that ensures each of these varieties of support is included.

Originality/value

With the rapid expansion of new, sometimes costly educational technologies, universities can benefit from employing evaluation techniques based within an educational context, and ensure that their investments in these tools make an effective contribution to the enhancement of teaching and learning.

Details

Campus-Wide Information Systems, vol. 25 no. 2
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 6 December 2019

Gianpaolo Iazzolino, Domenico Greco, Saverino Verteramo, Andrea Luca Attanasio, Gilda Carravetta and Teresa Granato

Abstract

Purpose

This paper aims to propose an integrated methodology for evaluating academic spin-offs (ASOs) for supporting both the development phase and performance evaluation. The ASOs have peculiar characteristics compared to other start-up companies and the debate on their evaluation is still open.

Design/methodology/approach

The proposed methodology, adopting a lean approach, addresses the typical problems that characterize the growth of an ASO: excessive attention to technological aspects at the expense of commercial and managerial ones, and the need for evaluation systems that assess all risk areas and highlight any misalignment. The methodology also builds on the results of an Erasmus+ research project, co-funded by the European Commission, called Spin-off Lean Acceleration.

Findings

The methodology proposes to monitor the main risk areas (market, technological, implementation, governance and financial). For each of these areas, a framework and a checklist are first proposed to support a qualitative assessment of the area's potential. A set of metrics is then proposed for monitoring performance and understanding whether the spin-off is developing in the right direction. Moreover, the methodology was applied to the spin-offs at the University of Calabria (Italy), and the paper reports the first results obtained.

Originality/value

A new canvas model (the lean acceleration canvas), more specific and better suited to the context of ASOs, was developed and tested. A lean approach was also adopted to understand the weaknesses of traditional methods. The proposed methodology could be used by technology transfer offices in their institutional activity of supporting ASOs.

Details

Measuring Business Excellence, vol. 24 no. 1
Type: Research Article
ISSN: 1368-3047

Article
Publication date: 26 September 2019

Bello Abdullahi, Yahaya Makarfi Ibrahim, Ahmed Ibrahim and Kabir Bala

Abstract

Purpose

The revolution brought about by the internet and the World Wide Web has led to the development of numerous e-Tendering systems for public sector tendering that have automated various aspects of the manual tendering processes, which are known to experience numerous problems. However, one key area that has not been fully addressed is the automation of the evaluation of public tenders based on group decision-making. This paper presents part of the development of a Web-based e-Tendering system called Nigerian Public Sector eTender (NPS-eTender) that automates the evaluation of public sector tenders based on group decision-making.

Design/methodology/approach

The system was developed using object-oriented methodologies. Specifically, the Ripple and Unified Process methodologies were adopted.

Findings

The results of the system validation showed that NPS-eTender has an average rating of 74% with respect to correct and accurate modelling of the existing tendering domain and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Based on the results of the validation, it can be concluded that the automation of the tender evaluation process can lead to a more proficient tendering process.

Originality/value

This research has contributed to the development of an e-Tendering system for the public sector that supports the whole tendering lifecycle including the automation of evaluation of public tenders based on group decision-making.

Details

Journal of Engineering, Design and Technology, vol. 18 no. 1
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 11 July 2019

Betteke Van Ruler

Abstract

Purpose

The purpose of this paper is to analyze what the concept of agility means for communication evaluation and measurement and to challenge assumptions of goal-oriented and organization-centric approaches to evaluation and measurement.

Design/methodology/approach

This paper is a development debate based on a literature review covering agility, evaluation theory, communication evaluation approaches and what agility means for communication measurement.

Findings

Agility teaches that what works is more important than what was agreed upon in advance, placing more emphasis on needs than on objectives. Regarding evaluation, the findings show that in today's communication evaluation theory, evaluation is equated with summative evaluation of smartly designed, fixed objectives. In agility, evaluation is always formative, fostering development and improvement within an ongoing activity. Consequently, smart objectives are no longer valid as fixed benchmarks, and ex ante and ex post evaluations do not exist; instead, evaluation is an ongoing, forward-looking activity during action. Regarding measurement, agility's basic focus on user needs implies that qualitative methods are more obvious than quantitative ones. The classic Weberian idea of “Verstehen” is helpful for understanding how to focus on needs rather than objectives. The paper finally explores the merits of action research and sense-making methodology as applicable measurement approaches in which “Verstehen” is the basis.

Research limitations/implications

Agility is a very radical concept. The practical and theoretical implications of agile evaluation and measurement mean a total change for practice as well as for communication measurement and evaluation theory building.

Originality/value

The value of this paper is that it is the first to include agility into communication evaluation and measurement and that it, consequently, moves beyond organization-centric concepts of evaluation and measurement by bringing the often overlooked user needs into the game.

Details

Journal of Communication Management, vol. 23 no. 3
Type: Research Article
ISSN: 1363-254X
