Search results

1 – 10 of over 145,000
Article
Publication date: 1 December 1996

Abdullah M.Y. Sirajuddin and Farhan K. Al‐Bulaihed

Abstract

The evaluation of maintenance tenders involves not only the prices offered but also the financial and technical expertise of the tenderers. The evaluation can be highly complicated and lengthy for large projects with numerous tenderers. Presents an evaluation methodology, in the form of a tabular procedure, for use by the evaluators of maintenance contracts. Develops seven tables that provide quantitative values for the submitted tenders. The first five tables evaluate the technical efficiency of the tenderers, while the last two cover the summary and final evaluation, including prices. The first table assesses the tender's compliance with the requirements of the tender document. The second table evaluates the support and maintenance plan. The third table evaluates the experience and financial status of the tenderer. The fourth and fifth tables evaluate the tenderer's staffing proposal and how it relates to the requirements of the project. The sixth table summarizes, for each evaluator, the scores from the previous tables. The seventh table presents the final evaluation of each tenderer, from which a decision can be made. By adding each tenderer's average technical score to his price score, the evaluator can rate each tenderer's ability to carry out the work rather than relying on price alone, which is not always a good guide.
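The final-table arithmetic described above can be sketched in a few lines. The scoring formula, point scales and bid figures below are illustrative assumptions, not the authors' actual tables.

```python
# Hypothetical sketch of the final table: each tenderer's average technical
# score (from several evaluators) is added to a price score, so the award
# need not go to the lowest price alone.

def price_score(price, lowest_price, max_points=30.0):
    """Score a bid price relative to the lowest bid (illustrative formula)."""
    return max_points * lowest_price / price

def final_ranking(tenderers):
    """tenderers: dict name -> (list of technical scores, quoted price)."""
    lowest = min(price for _, price in tenderers.values())
    totals = {}
    for name, (tech_scores, price) in tenderers.items():
        avg_tech = sum(tech_scores) / len(tech_scores)
        totals[name] = avg_tech + price_score(price, lowest)
    # Highest combined score wins
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

bids = {
    "Tenderer A": ([62.0, 58.0, 60.0], 1_000_000),
    "Tenderer B": ([45.0, 50.0, 47.5], 800_000),
}
print(final_ranking(bids))
```

Here Tenderer A wins despite the higher price, because the combined technical-plus-price score favours capability over price alone.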

Details

Journal of Quality in Maintenance Engineering, vol. 2 no. 4
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 9 March 2010

Seyhan Sipahi and Oner Esen

Abstract

Purpose

The purpose of this paper is to provide a multi‐criteria bidding evaluation model, based on the Istanbul 2010 PR selection problem, to strike a balance among conflicting criteria and to aggregate the opinions held by a group of decision makers.

Design/methodology/approach

In the study, analytic hierarchy process (AHP) methodology was used to resolve the conflict among criteria. The evaluation criteria were arranged in a hierarchy, and their relative weights were calculated and synthesized for the final ranking of the bidders. A linear interpolation‐based spreadsheet model was then combined with the findings of the AHP to select the best bidders fairly.
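As a rough illustration of the two techniques named above (not the authors' actual model), the sketch below derives AHP-style priority weights from a pairwise comparison matrix using the row geometric-mean approximation, and scores a quoted price by linear interpolation between the best and worst bids. The matrix and prices are invented.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priorities via the row geometric-mean method."""
    geo = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

def interpolated_price_score(price, best_price, worst_price):
    """Linearly map a quoted price onto [0, 1]: best bid -> 1, worst -> 0."""
    if worst_price == best_price:
        return 1.0
    return (worst_price - price) / (worst_price - best_price)

# Three criteria compared pairwise (criterion i vs j); the matrix is reciprocal.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])               # priority weights, sum to 1
print(interpolated_price_score(90, 80, 120))  # -> 0.75
```

The geometric-mean approximation is a common shortcut for the principal eigenvector of a consistent pairwise matrix; a full AHP implementation would also compute a consistency ratio.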

Findings

The paper demonstrates that the hierarchical structure of the AHP methodology can resolve the conflict among evaluation criteria and measure the relative importance of the criteria while taking into account the preferences of the decision makers. Moreover, a linear interpolation methodology can evaluate quoted bid prices fairly and help in making the best decision.

Originality/value

In all areas of business management, there is a great need for fair bid evaluation systems. The method presented in the paper will help future studies in designing more intriguing systems and resolving conflicts in the area of bid evaluation.

Details

Management Decision, vol. 48 no. 2
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 1 March 2004

Ty A. Randall, Heidi S. Brothers and Daniel T. Holt

Abstract

Competitive sourcing is the government's term for transferring the operation of an internal process or function to either an external supplier or a reengineered government team. The competitively sourced function is managed through performance metrics. These metrics must be thorough, appropriate and well designed to ensure the government receives the level of service required to fulfill its various missions. This research effort develops a performance metric evaluation system synthesized from the metric design literature, Total Quality Management concepts, and the Government Performance Results Act. Use of the system in a case study is discussed, along with how to evaluate the results. Results indicate that some Air Force functions are managed with insufficient and improperly designed performance metrics.

Details

Journal of Public Procurement, vol. 4 no. 2
Type: Research Article
ISSN: 1535-0118

Book part
Publication date: 30 October 2009

Barbara J. Stites

Abstract

Changes in the format of library materials, increased amounts of information, and the speed at which information is being produced have created an unrelenting need for training for library staff members. Additionally, library employees are retiring in greater numbers and their accompanying expertise is being lost. The purpose of this study was to document evaluation practices currently used in library training and continuing education programs for library employees, including metrics used in calculating return-on-investment (ROI). This research project asked 272 library training professionals to identify how they evaluate training, what kind of training evaluation practices are in place, how they select programs to evaluate for ROI, and what criteria are important in determining an effective method for calculating ROI.
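The basic training-ROI metric the survey asks about can be stated in one line. The figures below are invented for illustration.

```python
# A minimal sketch of the conventional return-on-investment formula for a
# training programme; numbers are hypothetical, not the study's data.

def training_roi(net_benefits, costs):
    """ROI (%) = (net programme benefits / programme costs) * 100."""
    return net_benefits / costs * 100.0

# e.g. a course costing 10,000 whose measured benefits total 14,000
print(training_roi(14_000 - 10_000, 10_000))  # -> 40.0
```

The hard part the study examines is not this arithmetic but deciding which programmes to evaluate and how to measure the benefits term credibly.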

Details

Advances in Library Administration and Organization
Type: Book
ISBN: 978-1-84950-580-2

Article
Publication date: 15 June 2015

Li Si, Yueting Li, Xiaozhe Zhuang, Wenming Xing, Xiaoqin Hua, Xin Li and Juanjuan Xin

Abstract

Purpose

The purpose of this paper is to evaluate the performance of eight main scientific data sharing platforms in China and identify existing problems, thereby providing a reference for maximizing the value of scientific data and enhancing scientific research efficiency.

Design/methodology/approach

First, the authors built an evaluation indicator system for the performance of scientific data sharing platforms. Next, the analytic hierarchy process was employed to set indicator weights. Then, an expert grading method was used to score each indicator, and the overall scores for the platform performance evaluation were calculated. Finally, the results were analysed.
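The scoring step described above reduces to a weighted sum. The indicator names, weights and expert grades below are illustrative assumptions, not the study's data.

```python
# Hedged sketch: AHP-derived indicator weights combined with averaged
# expert grades into a single platform score F.

def platform_score(weights, expert_grades):
    """F = sum over indicators of weight * mean expert grade (0-100 scale)."""
    f = 0.0
    for indicator, w in weights.items():
        grades = expert_grades[indicator]
        f += w * (sum(grades) / len(grades))
    return f

weights = {"data resource": 0.5, "platform function": 0.3, "service": 0.2}
grades = {
    "data resource": [80, 70, 75],
    "platform function": [60, 70, 65],
    "service": [90, 85, 80],
}
print(platform_score(weights, grades))  # -> 74.0
```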

Findings

The performance evaluation of the eight platforms, arranged in descending order of the value of F, is: the Data Sharing Infrastructure of Earth System Science (76.962), the Basic Science Data Sharing Center (76.595), the National Scientific Data Sharing Platform for Population and Health (71.577), the China Earthquake Data Center (66.296), the China Meteorological Data Sharing Service System (65.159), the Chinese Forestry Science Data Center (56.894), the National Agricultural Scientific Data Sharing Center (55.068) and the National Scientific Data Sharing & Service Network on Material Environmental Corrosion (Aging) (52.528). Several shortcomings were also identified: the relevant policies and regulations, the standards for data description and organization, data availability and the services all need improvement.

Originality/value

This paper discusses a performance evaluation system covering the operation management, data resources, platform functions, service efficiency and influence of eight scientific data sharing centers, and presents a comparative analysis. It reflects the actual state of development of scientific data sharing in China.

Details

Library Hi Tech, vol. 33 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 3 August 2015

Lili-Anne Kihn and Eeva-Mari Ihantola

Abstract

Purpose

This paper aims to address the reporting of validation and evaluation criteria in qualitative management accounting studies, which is a topic of critical debate in qualitative social science research. The objective of this study is to investigate the ways researchers have reported the use of evaluation criteria in qualitative management accounting studies and whether they are associated with certain paradigmatic affiliations.

Design/methodology/approach

Building on the work of Eriksson and Kovalainen (2008, Qualitative Methods in Business Research, London: Sage), the following three approaches are examined: the adoption of the classic concepts of validity and reliability, the use of alternative concepts and the abandonment of general evaluation criteria. A content analysis of 212 case and field studies published from 2006 to February 2015 was conducted to offer an analysis of the most recent frontiers of knowledge.

Findings

The key empirical results of this study provide partial support for the theoretical expectations. They specify and refine Eriksson and Kovalainen’s (2008) classification system, first, by identifying a new approach to evaluation and validation and, second, by showing mixed results on the paradigmatic consistency in the use of evaluation criteria.

Research limitations/implications

This paper is not necessarily exhaustive or representative of all the evaluation criteria developed; the authors focused on the explicit reporting of criteria only and the findings cannot be generalized. Somewhat different results might have been obtained if other journals, other fields of research or a longer period were considered.

Practical implications

The findings of this study enhance the knowledge of alternative approaches and criteria to validation and evaluation. The findings can aid both in the evaluation of management accounting research and in the selection of appropriate evaluation approaches and criteria.

Originality/value

This paper presents a synthesis of the literature (Table I) and new empirical findings that are potentially useful for both academic scholars and practitioners.

Details

Qualitative Research in Accounting & Management, vol. 12 no. 3
Type: Research Article
ISSN: 1176-6093

Article
Publication date: 1 August 2016

Anoop Kumar Sahu, Saurav Datta and S.S. Mahapatra

Abstract

Purpose

The purpose of this paper is to adapt an integrated hierarchical evaluation platform (associated with "green" performance indices) to the evaluation and selection of alternative suppliers under the green supply chain (GSC) philosophy.

Design/methodology/approach

In this context, the incompleteness, vagueness, imprecision and inconsistency associated with subjective evaluation information on ill-defined supplier assessment indices are tackled through the logical exploration of fuzzy set theory. A fuzzy-based multi-level multi-criteria decision-making (FMLMCDM) approach, as proposed by Chu and Varma (2012), has been applied in an empirical case study of green supplier selection.
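As a minimal illustration of how fuzzy set theory absorbs vague subjective ratings (a generic sketch, not the FMLMCDM model of Chu and Varma), linguistic terms can be mapped to triangular fuzzy numbers, averaged with criterion weights, and defuzzified by the centroid. All terms and numbers below are invented.

```python
# Linguistic terms mapped to triangular fuzzy numbers (l, m, u) on a 0-10 scale.
TERMS = {
    "poor": (0.0, 2.0, 4.0),
    "fair": (3.0, 5.0, 7.0),
    "good": (6.0, 8.0, 10.0),
}

def weighted_fuzzy_average(ratings, weights):
    """Component-wise weighted average of triangular fuzzy ratings."""
    l = sum(w * TERMS[r][0] for r, w in zip(ratings, weights))
    m = sum(w * TERMS[r][1] for r, w in zip(ratings, weights))
    u = sum(w * TERMS[r][2] for r, w in zip(ratings, weights))
    return (l, m, u)

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(tfn) / 3.0

# A supplier rated on two criteria, weighted 0.6 and 0.4
score = weighted_fuzzy_average(["good", "fair"], [0.6, 0.4])
print(tuple(round(x, 2) for x in score), round(centroid(score), 2))
```

Ranking suppliers by the defuzzified value is the simplest option; fuzzy-TOPSIS, which the paper uses as a benchmark, instead ranks by closeness to fuzzy ideal solutions.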

Findings

The result obtained has been compared with that of fuzzy-TOPSIS to validate the application potential of the aforementioned FMLMCDM approach.

Originality/value

The proposed method has proved useful from the viewpoint of its managerial implications.

Details

Benchmarking: An International Journal, vol. 23 no. 6
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 1 July 2002

Ahmad Al‐Athari and Mohamed Zairi

Abstract

This paper is based on a study which examined the current training evaluation activity and the challenges facing Kuwaiti organisations. The study sample comprised five UK organisations (recognised as best-practice organisations in their training and development activities) and 77 Kuwaiti organisations (40 government and 37 private). Interviews and questionnaires were used. The study reveals that the majority of respondents, in both the government and private sectors, evaluate their training programmes only occasionally. The most popular evaluation tool and technique in both sectors was the questionnaire. The most common model used by Kuwaiti organisations is the Kirkpatrick model, while the most common level of evaluation in both sectors is the reaction level.

Details

Journal of European Industrial Training, vol. 26 no. 5
Type: Research Article
ISSN: 0309-0590

Article
Publication date: 23 May 2008

Ahmad A. Abu‐Musa

Abstract

Purpose

The purpose of this paper is to investigate empirically the impact of emerging information technology (IT) on internal auditors' (IA) activities, and to examine whether the IT evaluations performed in Saudi organizations vary, based on evaluation objectives and organizational characteristics.

Design/methodology/approach

A survey, using a self‐administered questionnaire, was conducted to achieve these objectives. About 700 questionnaires were randomly distributed to a sample of Saudi organizations located in five main Saudi cities. In total, 218 valid and usable questionnaires (representing a 30.7 percent response rate) were collected and analyzed using the Statistical Package for the Social Sciences (SPSS), version 15.

Findings

The results of the study reveal that IA need to enhance their knowledge and skills in computerized information systems (CIS) for the purposes of planning, directing, supervising and reviewing the work performed. The results are consistent with Hermanson et al.'s finding that IA focus primarily on traditional IT risks and controls, such as IT data integrity, privacy and security, asset safeguarding and application processing. Less attention has been directed to system development and acquisition activities. The IA's performance of IT evaluations is associated with several factors, including the audit objectives, industry type, the number of IT audit specialists on the internal audit staff, and the existence of new CIS.

Practical implications

From a practical standpoint, managers, IA, IT auditors, and practitioners alike stand to gain from the findings of this study.

Originality/value

The findings of this study have important implications for managers and IA, enabling them to better understand and evaluate their computerized accounting systems.

Details

Managerial Auditing Journal, vol. 23 no. 5
Type: Research Article
ISSN: 0268-6902

Article
Publication date: 1 January 2006

Vaughan C. Judd, Lucy I. Farrow and Betty J. Tims

Abstract

Purpose

The purpose of this paper is to discuss the attempt to find an evaluation instrument for undergraduate students to use to evaluate public web sites, the analysis of the variety of instruments discovered, the subsequent development of an appropriate instrument, and the application of the instrument in workshops with students.

Design/methodology/approach

The instrument was created based on criteria that the authors determined would meet the students' needs: it focuses exclusively on the information aspect of a web site; has some basis in theory or in an accepted model; is parsimonious; is quantitative, with both absolute and relative measures; and indicates whether the information should be accepted or rejected. The instrument was also developed with the goal of focusing on the process rather than the outcome.

Findings

Although a number of diverse evaluation instruments from the literature and from web‐based sources were examined, none was deemed suitable for students to use so the authors developed their own.

Originality/value

The authors concurred that, based on their assessment of the learning environment, the focus of an instrument should be on evaluation as a process.

Details

Reference Services Review, vol. 34 no. 1
Type: Research Article
ISSN: 0090-7324
