Search results

1 – 10 of over 99000
Article
Publication date: 26 April 2022

Hero Khezri, Peyman Rezaei, Fateme Askarian and Reza Ferdousi


Abstract

Purpose

Evaluating health information systems is an integral part of the life cycle and development of information systems as it can improve the quality of health care. The purpose of this paper is to introduce a bilingual Web-based repository of health-related software products evaluation tools.

Design/methodology/approach

The present paper is an applied-developmental study that includes analysis, design, implementation and evaluation stages. By searching valid databases and holding focus group meetings with a group of experts, the necessary elements for designing a Web-based repository were identified, and unified modelling language (UML) diagrams were designed using the Visual Paradigm tool. The coding (programming) was then conducted based on the Gantlet Web Systems Development Framework. Finally, after implementing and testing the system, the content was added to the repository, and the repository was evaluated in terms of usability.

Findings

The health informatics evaluation tools (HIET) repository provides a functional and selective environment that facilitates the sharing, online storage and retrieval of assessment tools by the scientific community. The HIET repository is easily accessible at www.hiet.ir. The website is implemented with MySQL (structured query language database) and PHP (Hypertext Preprocessor) on a LEMP (Linux, Nginx, MySQL and PHP) stack and supports all major browsers.

Originality/value

The HIET repository, as mentioned earlier, serves as an application environment for the sharing, storage and online retrieval of assessment tools for health information systems. Therefore, this tool not only facilitates the searching, retrieval and study of many evaluation-related papers, tasks that are time-consuming and stressful for researchers and students, but can also lead to a faster and more scientific evaluation of information systems.

Details

The Electronic Library, vol. 40 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 22 December 2020

Stamatios Papadakis


Abstract

Purpose

This study, by critically analyzing material from multiple sources, aims to provide an overview of what is available on evaluation tools for educational apps for children. To realize this objective, a systematic literature review was conducted to search all English literature published after January 2010 in multiple electronic databases and internet sources. Various combinations of search strings were used due to database construction differences, while the results were cross-referenced to discard repeated references, obtaining those that met the criteria for inclusion.

Design/methodology/approach

The present study was conducted according to the methods provided by Khan et al. (2003) and Thomé et al. (2016). The whole procedure included four stages: planning the review, identifying relevant studies in the literature, critically analyzing the literature, and summarizing and interpreting the findings (Figure 1). Furthermore, in this analysis, a well-known checklist, PRISMA, was also used as a recommendation (Moher et al., 2015).

Findings

These review results reveal that, although there are several evaluation tools, the majority are not considered adequate to help teachers and parents evaluate the pedagogical affordances of educational apps correctly and easily. Indeed, most of these tools are considered outdated. With the emergence of new issues such as the General Data Protection Regulation, the quality criteria and methods for assessing children's products need to be continuously updated and adapted (Stoyanov et al., 2015). Some of these tools might be considered good beginnings, but their "limited dimensions make generalizable considerations about the worth of apps" (Cherner, Dix and Lee, 2014, p. 179). Thus, there is a strong need for effective evaluation tools to help parents and teachers when choosing educational apps (Callaghan and Reich, 2018).

Research limitations/implications

Even though this work was performed by following the systematic mapping guideline, threats to the validity of the presented results still exist. Although custom strings that contained a rich collection of data were used to search for papers, potentially relevant publications might have been missed by the advanced search. It is recommended that at least two different reviewers independently review titles, abstracts and later full papers for exclusion (Thomé et al., 2016). In this study, only one reviewer – the author – selected the papers and did the review. In the case of a single researcher, Kitchenham (2004) recommends that the single reviewer discuss included and excluded papers with an expert panel. Following this recommendation, the researcher discussed the inclusion and exclusion procedure with an expert panel of two professionals with research experience from the Department of (removed for blind review). To deal with publication bias, the researcher, in conjunction with the expert panel, used the search strategies identified by Kitchenham (2004), including grey literature, conference proceedings and communication with experts working in the field for any unpublished literature.

Practical implications

The purpose of this study was not to advocate any evaluation tool. Instead, the study aims to make parents, educators and software developers aware of the various evaluation tools available and to focus on their strengths, weaknesses and credibility. This study also highlights the need for standardized app evaluation (Green et al., 2014) via reliable tools, which will allow anyone interested to evaluate apps with relative ease (Lubniewski et al., 2018). Parents and educators need a reliable, fast and easy-to-use tool for the evaluation of educational apps that is more than a general guideline (Lee and Kim, 2015). A new generation of evaluation tools could also serve as a reference for software developers and designers to create educational apps with real educational value.

Social implications

The results of this study point to the necessity of creating new evaluation tools based on research, either in the form of rubrics or checklists to help educators and parents to choose apps with real educational value.

Originality/value

However, to date, no systematic review has been published summarizing the available app evaluation tools. This study, by critically analyzing material from multiple sources, aims to provide an overview of what is available on evaluation tools for educational apps for children.

Details

Interactive Technology and Smart Education, vol. 18 no. 1
Type: Research Article
ISSN: 1741-5659

Keywords

Article
Publication date: 11 April 2016

Maria del Carmen Suarez-Torrente, Patricia Conde-Clemente, Ana Belén Martínez and Aquilino A. Juan



Abstract

Purpose

The purpose of this paper is to improve and facilitate the work of developers and usability evaluators by providing adaptable and effective support. A well-defined set of criteria and a range of evaluation values for each criterion, as well as a complete classification of websites, will guide evaluators. A usability percentage and a list of prioritized criteria, adapted to the type of website by a new usability metric, will help developers to improve the website. This improvement will increase the degree of web user satisfaction.
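As an illustration only of how a criterion-based usability percentage and a repair-priority list might be combined, here is a minimal sketch; the criterion names, weights and formula below are invented for this sketch and are not taken from the paper:

```python
def usability_percentage(scores, weights):
    """Weighted usability percentage over the evaluated criteria.

    scores  -- criterion -> score in [0, 1]
    weights -- criterion -> weight reflecting the website type
    """
    total = sum(weights[c] for c in scores)
    return 100 * sum(scores[c] * weights[c] for c in scores) / total

def prioritized_criteria(scores):
    """Criteria listed worst-first, i.e. highest repair priority first."""
    return sorted(scores, key=scores.get)

# Hypothetical weights for one website type (invented for illustration).
weights = {"navigation": 3, "readability": 2, "accessibility": 1}
scores = {"navigation": 0.9, "readability": 0.5, "accessibility": 0.8}

print(usability_percentage(scores, weights))   # approximately 75.0
print(prioritized_criteria(scores))            # readability first
```

The point of the sketch is only the shape of such a metric: per-criterion scores, website-type-dependent weights, a single percentage for developers and an ordered repair list for evaluators.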

Design/methodology/approach

Having established and validated a new usability evaluation framework, several usability tools were analyzed. None of them totally fulfills the requirements of the evaluation framework, and none could be customized to do so, so a new tool was developed. A study of 42 enterprise websites in an economically depressed region of Europe was performed using the new tool. This study involved 42 evaluators and 118 web users. Users evaluated the websites before and after the redesign. An end-user computing satisfaction model-based questionnaire was used to collect data about end-user satisfaction. The results validate the proposal.

Findings

The study confirms that the proposed tool provides valuable information during the process of web development, evaluation and redesign. In addition, it reveals that improving website usability by ensuring criteria compliance has a positive effect on web users' satisfaction.

Originality/value

Unlike previous proposals, the proposed tool allows any type of website to be evaluated with a well-defined set of evaluation criteria and specific criteria values. As outcomes, the tool provides the website's usability degree and a list of criteria ordered by repair priority. These results are adapted to the specific type of website, which makes the redesign of the evaluated website easier and more effective.

Details

Online Information Review, vol. 40 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 5 December 2016

Soroush Maghsoudi, Colin Duffield and David Wilson


Abstract

Purpose

This paper aims to develop a practical tool to evaluate the outcomes of innovative practices in the building and construction industry.

Design/methodology/approach

A practical tool was proposed. It is an online tool programmed in a JavaScript environment. A previously developed and tested framework was the basis for this tool. Six case projects were used to test and validate the reliability of the tool. The outcomes of the building projects were categorized into six categories of economic, quality, social, environmental, satisfaction and soft and organizational impacts.

Findings

The most important finding of this research was that the evaluation of innovation in building and construction is possible only if subjective assessment is tolerated, so that non-monetary outcomes can be included in the evaluation alongside monetary outcomes.

Research limitations/implications

The findings of this research are limited to the domestic and medium density building projects; thus, the outcomes might be generalized with appropriate care. The developed tool would assist practitioners in the field of building and construction to realize the impacts of innovation introduced into their projects. The project owners and developers could be the main audience of this tool.

Practical implications

The main contribution of the current study to the literature is the consideration of the tangible and intangible outcomes of innovation together. In other words, this tool not only evaluates monetary outcomes but also takes into account non-monetary outcomes. It has been stated in the literature that 80 per cent of firms choose "non-numeric" project selection models (Meredith and Mantel, 2006). To provide a full representation of reality, this model considers both numeric and non-numeric measures by applying both quantitative and qualitative evaluation methods. It is worth mentioning that this tool is the first attempt of its kind for building and construction projects, and it is applicable and fully practical.

Originality/value

This tool is the first attempt of its kind to practically evaluate the outcomes of innovation in the building and construction industry. The tool's practicality and applicability in real-world projects lend more reliability and credibility to the proposed approach to innovation evaluation.

Details

International Journal of Innovation Science, vol. 8 no. 4
Type: Research Article
ISSN: 1757-2223

Keywords

Article
Publication date: 14 January 2022

Yael Cohen-Azaria


Abstract

Purpose

In 2012, the Israeli Ministry of Education and its Testing and Evaluation Department introduced a new tool to evaluate the quality of kindergarten teachers' work. This paper aims to identify how kindergarten teachers perceive the new kindergarten teachers' multiple domains performance tool (KT-MDPT).

Design/methodology/approach

The study applied a qualitative paradigm of data collection and analysis. Data collection consisted of semi-structured in-depth interviews conducted with 36 kindergarten teachers.

Findings

Findings indicated that most kindergarten teachers perceive their work plan and the kindergarten climate as the most important evaluation domains, while perceiving parental involvement as the least important and even an unnecessary domain. One-third of them indicated that an innovation domain should be added. Also, the kindergarten teachers perceived the use of the KT-MDPT as both positive and negative.

Originality/value

There is a clear dearth in scholarly literature dealing with the evaluation of the quality of kindergarten teachers’ work. This study is the first to reveal Israeli kindergarten teachers' attitudes regarding this new tool for work quality evaluation.

Details

Quality Assurance in Education, vol. 30 no. 2
Type: Research Article
ISSN: 0968-4883

Keywords

Article
Publication date: 19 July 2011

Anu Kajamaa


Abstract

Purpose

The aim of this article is to examine whether the boundary between the separate worlds of evaluation and frontline work in a hospital can be overcome. The study provides an example of a rare, innovative creation process of an assessment tool in which the tool users and the tool producer participated. The article aims to widen the understanding of employee initiated organizational change efforts, co‐creation of boundary objects and organizational boundary breaking, which may lead to expansive learning.

Design/methodology/approach

The article takes an activity‐theoretical approach to organizational boundaries, viewing them as tension‐laden triggers for learning and change. The analysis of expansive learning actions is based on longitudinal ethnographic field data on a collaboration effort between nurses and evaluation professionals in a Finnish hospital.

Findings

The collaboration effort between two organizational worlds led to boundary breaking. Initially, expansive learning actions were taken, but then obstacles started to emerge, and the collaboration between the two worlds was not sustained. To be sustainable, the collaboration would have required both a shared assessment tool (i.e. a boundary object) and management support. In this case the latter was missing.

Originality/value

The study analyzes a solid boundary, which delimited organizational learning and development. Rather than boundary crossing, as understood in the current literature, the challenging collaboration effort took the shape of conflictual boundary breaking. The study contributes to the literature on organizational boundaries and learning and has implications for the management of employee-initiated change efforts, collective tool creation processes and the development of quality work in the public sector.

Details

The Learning Organization, vol. 18 no. 5
Type: Research Article
ISSN: 0969-6474

Keywords

Article
Publication date: 14 September 2015

Iftekhar Ahmed and Esther Ruth Charlesworth


Abstract

Purpose

The purpose of this paper is to discuss the utility of a tool for assessing resilience of housing. After disasters, maximum resources are often allocated for housing reconstruction, and most initiatives on disaster resilient housing have arisen after disasters. With widespread claims by agencies of having “built back better”, it is important to establish an evaluation framework that allows understanding to what extent resilience has been successfully achieved in such housing projects. This paper discusses such a tool developed by the authors.

Design/methodology/approach

In a study commissioned by the Australian Shelter Reference Group, the authors have developed an evaluation tool for assessing resilience in housing and tested it in several housing reconstruction projects in the Asia-Pacific region. Various evaluation frameworks were reviewed to develop the tool. An approach derived from the log frame was adapted in alignment with other key approaches. The tool is practical and targeted for agency staff involved in housing projects, evaluators of housing reconstruction projects and communities to assess their housing in terms of resilience. It comprises three main stages of an assessment process with guided activities at each stage.

Findings

The tool was tested in the Cook Islands and Sri Lanka, and the key findings of the test assessments are presented to demonstrate the prospects of the tool. While the case study projects all indicated achievement of a level of resilience, problems were evident in terms of design issues and external factors.

Originality/value

Such a tool has the potential to be used more widely through advocacy to prioritise resilience in post-disaster housing reconstruction.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 6 no. 3
Type: Research Article
ISSN: 1759-5908

Keywords

Book part
Publication date: 13 July 2020

Loretta Newman-Ford, Sophie Leslie and Sue Tangney


Abstract

This chapter discusses the pilot study of an Education for Sustainable Development Self-Evaluation Tool (ESD-SET), created by the Quality Enhancement Directorate (formerly the Learning and Teaching Development Unit) at Cardiff Metropolitan University, as both a means of auditing the extent to which academic programs embed ESD and a catalyst for curriculum development.

The chapter evaluates the effectiveness and usefulness of the self-evaluation for both auditing ESD and curriculum development. Responses to the self-evaluation questions by Programme Directors were analyzed and follow-up interviews carried out with the Programme Directors to explore their experiences of the tool.

Results indicate that the self-evaluation tool is fit-for-purpose as a means of auditing the integration of ESD within academic programs. The self-evaluation exercise promoted team discussion around sustainability issues and raised staff awareness and understanding of the concept of ESD and how to effectively embed sustainability-related themes within their discipline. The exercise had a transformative impact on the way some program teams approached curriculum design and delivery. There was evidence that engagement with the tool contributed to further embedding of sustainability within curricula across all disciplines involved in the pilot study.

Details

Introduction to Sustainable Development Leadership and Strategies in Higher Education
Type: Book
ISBN: 978-1-78973-648-9

Keywords

Article
Publication date: 26 March 2010

Edeltraud Guenther and Vera Greschner Farkavcová



Abstract

Purpose

The environmental impacts of transportation are rarely brought into decision making frameworks. The purpose of this paper is to introduce a framework and tool to help in these organizational decisions.

Design/methodology/approach

A survey analysis was carried out amongst waste‐management companies within the project ETIENNE. Using this information, drivers and hurdles to the environmentally motivated optimization of transport were identified.

Findings

This research framework presents results of perceived hurdles in transport process optimization. Results of the research are grounded with definitions and discussion of the relevance of logistic processes.

Research limitations/implications

Analysis provided in this paper shows that there is no practicable tool to consider environmental impacts in management decisions in transportation. Therefore, a software tool, the so‐called ETIENNE‐Tool, for the evaluation of transportation alternatives, is developed. Theoretical principles for the environmental evaluation of transport processes are also proposed.

Practical implications

Organizational tasks – such as the selection and planning of transportation relations, the employment of means of transportation, the planning of locations and, finally, the performance of operative business – can be fulfilled in a more environmentally sound manner.

Originality/value

Based on the ETIENNE‐Tool and taking into consideration stakeholders' requirements, different transportation chains for one transportation task can be easily compared and transportation can be planned efficiently.

Details

Management Research Review, vol. 33 no. 4
Type: Research Article
ISSN: 2040-8269

Keywords

Article
Publication date: 23 May 2022

Nedra Ibrahim, Anja Habacha Chaibi and Henda Ben Ghézala


Abstract

Purpose

Given the magnitude of the literature, a researcher must be selective of research papers and publications in general. In other words, only papers that meet strict standards of academic integrity and adhere to reliable and credible sources should be referenced. The purpose of this paper is to approach this issue from the prism of scientometrics according to the following research questions: Is it necessary to judge the quality of scientific production? How do we evaluate scientific production? What are the tools to be used in evaluation?

Design/methodology/approach

This paper presents a comparative study of scientometric evaluation practices and tools. A systematic literature review was conducted based on articles published in the field of scientometrics between 1951 and 2022. To analyze the data, the authors performed three aspects of analysis: usage analysis, based on classification of and comparison between the different scientific evaluation practices; type and level analysis, based on classifying different scientometric indicators according to their types and application levels; and similarity analysis, based on studying the correlation between different quantitative metrics to identify similarity between them.

Findings

This comparative study leads to the classification of different scientific evaluation practices into externalist and internalist approaches. The authors categorized the different quantitative metrics according to their types (impact, production and composite indicators), their levels of application (micro, meso and macro) and their use (internalist and externalist). Moreover, the similarity analysis revealed a high correlation between several scientometric indicators, such as author h-index, author publications, citations and journal citations.
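Of the indicators named above, the h-index has a simple operational definition; a minimal sketch (standard definition, not code from the paper) of how it is computed from an author's citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # this paper still clears the threshold
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Correlating per-author values of this kind across a corpus is the sort of similarity analysis the abstract refers to; the function itself is the standard Hirsch definition and is not specific to this study.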

Originality/value

The value of this study lies in identifying the strengths and weaknesses of research groups and in guiding their actions. This evaluation contributes to the advancement of scientific research and to the motivation of researchers. Moreover, this paper can serve as a complete, in-depth guide to help new researchers select appropriate measurements to evaluate scientific production. The selection of evaluation measures is made according to their types, usage and levels of application. Furthermore, the analysis shows the similarity between the different indicators, which can limit the overuse of similar measures.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Keywords
