Search results

1–10 of over 81,000
Article
Publication date: 1 January 1994

Susan Voge

Requests for tests and measuring instruments for use in class assignments and faculty and student research are both familiar and frustrating to most academic librarians. In…

Abstract

Requests for tests and measuring instruments for use in class assignments and faculty and student research are both familiar and frustrating to most academic librarians. In typical scenarios, an education student wants to measure aggression in children or a nursing student needs a test for patient mobility. Even the faculty member who may know the name of a scale may not know its author or how to obtain a copy. All are looking for a measure applicable to a specific situation and each has come to the library in hopes of walking away with a copy of the measure that day. Those familiar with measurement literature know that accessing measures can be time-consuming, circuitous, and sometimes impossible. The standard test reference books, such as the Mental Measurements Yearbook and Tests in Print (both of which are published by the Buros Institute, University of Nebraska, Lincoln, Nebraska), are of limited use. These books typically do not include actual instruments or noncommercial tests from the journal and report literature. While these standard reference books are essential to a test literature collection, sole use of them would mean bypassing large numbers of instruments developed and published only in articles, reports, papers, and dissertations. Sources are available to locate additional measurements, tests, and instruments, but they are widely dispersed in the print and electronic literature.

Details

Reference Services Review, vol. 22 no. 1
Type: Research Article
ISSN: 0090-7324

Article
Publication date: 15 November 2011

Paulina Palttala and Marita Vos

The purpose of this paper is to test a measurement system with performance indicators to improve organizational learning about crisis communication by public organizations…

Abstract

Purpose

The purpose of this paper is to test a measurement system with performance indicators to improve organizational learning about crisis communication by public organizations that enhance public safety in large-scale emergencies. The tool can be used to conduct a preparedness audit or to evaluate communication performance in a real situation or in an emergency exercise. Evaluation is part of the strategic planning and development of crisis communication.

Design/methodology/approach

The construction of the instrument and its theoretical underpinnings are first explained, after which the series of empirical tests implemented to scrutinize the clarity and appropriateness of the indicators, as well as the usability of the instrument, is presented. The paper applies the process approach to crisis management, in which the various phases of a crisis are seen as a continuum, and the stakeholder perspective, in which both the diversity of public groups and the network of response organizations are taken into account.

Findings

The tests of the instrument revealed much interest in its use, and it was seen as a potential tool for public organizations to improve their crisis communication. The tests led to improvements in the structure as well as in the phrasing of the individual performance indicators and their explanations. The indicators were considered relevant and important but too many in number. Therefore, the option to use the instrument in three separate parts, relating respectively to the periods before, during and after a crisis, should be offered.

Research limitations/implications

This study addresses the main factors relevant to crisis communication with respect to the approach chosen, but does not report all the literature and empirical findings that validate the individual indicators, as this has been done in other publications. It also presents a first series of test findings, but not yet the results of improvements initiated by using the instrument.

Practical implications

The instrument developed shows weak and strong points in crisis communication at the level of single indicators, but also allows comparison of performance across different phases and for the various stakeholder groups, showing where more attention is needed. The instrument will be made available on an open website, and users will be asked to make their anonymized measurement results available for its further improvement.
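As an illustration of how such indicator scores could be broken down by phase and stakeholder group, the sketch below assumes a hypothetical scoring sheet in which each indicator is rated on a 1–5 scale and tagged with a crisis phase and a stakeholder group; the indicator names, column layout and rating scale are assumptions for illustration, not part of the instrument itself.

```python
# Hypothetical sketch: aggregating indicator ratings by crisis phase and by
# stakeholder group, as the instrument described above allows. The indicator
# names, column layout and 1-5 rating scale are assumptions for illustration.
import pandas as pd

ratings = pd.DataFrame(
    [
        ("warning messages",   "before", "citizens",         3),
        ("media coordination", "during", "news media",       4),
        ("rumour monitoring",  "during", "citizens",         2),
        ("evaluation debrief", "after",  "response network", 5),
    ],
    columns=["indicator", "phase", "stakeholder", "score"],
)

# Mean score per phase highlights where in the crisis continuum attention is needed.
print(ratings.groupby("phase")["score"].mean())

# Mean score per stakeholder group shows which audiences are served least well.
print(ratings.groupby("stakeholder")["score"].mean())
```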

Social implications

The paper contributes to the effectiveness of emergency management by testing an instrument to facilitate learning about crisis communication.

Originality/value

Much of the crisis communication literature focuses on reputation crises. This paper discusses crisis communication supporting crisis management in the case of disasters and other emergencies that are handled by a response network instead of a single organization. It provides a clear framework for analysing and assessing the quality of crisis communication, and thus stimulates and enables learning and further improvement.

Article
Publication date: 1 February 2002

Maryellen Allen

Usability testing of Web interfaces for virtual libraries is a crucial factor in the continuing development and improvement of the user interface. In 1999, the University of South…

Abstract

Usability testing of Web interfaces for virtual libraries is a crucial factor in the continuing development and improvement of the user interface. In 1999, the University of South Florida Libraries decided to embark on a usability study to coincide with the rollout of a new interface design. Because this type of study had not been conducted with the initial interface, its implementation and completion were paramount in the development of the new design. The article details the preliminary activities, testing methodologies and results of usability testing carried out by the USF virtual library project’s usability study group.

Details

Online Information Review, vol. 26 no. 1
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 21 June 2019

Arturo Briseño, Ana R. Leal, Eduardo Aguiñaga and Alfonso López-Lira

In this paper, empirical evidence is presented regarding the translation of the learning tactics inventory (LTI) instrument that measures learning versatility in entrepreneurs…

Abstract

Purpose

In this paper, empirical evidence is presented regarding the translation of the learning tactics inventory (LTI), an instrument that measures learning versatility in entrepreneurs across four main learning strategies: acting, thinking, feeling and accessing others. The purpose of this paper is to show that translating instruments from other languages for cross-cultural studies is not sufficient to achieve instrument validity, and that the use of structural equation modeling can help to strengthen the process.

Design/methodology/approach

After iterative, multi-technique strategies involving close translation and adaptation were applied, structural equation modeling was performed to validate whether relationships exist among the constructs and their variables, using a confirmatory analysis.
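As a rough sketch of the kind of confirmatory analysis described above, the snippet below specifies a four-factor measurement model and estimates it with the semopy package; the item names (lti1–lti12), the data file name and the exact factor structure are hypothetical placeholders rather than the authors' actual specification.

```python
# Illustrative confirmatory factor analysis for a translated instrument,
# assuming the semopy package. The item names (lti1..lti12), the data file and
# the four-factor structure are hypothetical placeholders, not the authors'
# actual model specification.
import pandas as pd
import semopy

model_desc = """
Acting    =~ lti1 + lti2 + lti3
Thinking  =~ lti4 + lti5 + lti6
Feeling   =~ lti7 + lti8 + lti9
Accessing =~ lti10 + lti11 + lti12
"""

responses = pd.read_csv("lti_spanish_responses.csv")  # hypothetical survey data

model = semopy.Model(model_desc)
model.fit(responses)

print(model.inspect())           # factor loadings and their significance
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```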

Findings

Even after careful translation, the Spanish version of the LTI instrument does not measure the intended constructs. This evidence was uncovered by contrasting the dimensions of the English version with those of the translated Spanish version.

Research limitations/implications

Instruments in cross-cultural studies require more than translation strategies to adapt them to the new context. Using structural equation modeling, this paper shows that constructs may change across international contexts and that misinterpretations of the instrument can occur if additional validity tests are ignored.

Originality/value

Consistent with the extant literature, the findings suggest that, when studying a complex phenomenon such as learning through a survey developed in a different country and language, cultural factors should be explained to maintain construct validity. Hence, in entrepreneurship and management research, instruments such as the LTI need to be validated with confirmatory analysis to accurately reflect the different learning strategies of entrepreneurs across cultures.

Details

Management Research: Journal of the Iberoamerican Academy of Management, vol. 17 no. 1
Type: Research Article
ISSN: 1536-5433

Keywords

Cross-cultural, Mexico, Entrepreneurs, Learning, Tests, Translation

Article
Publication date: 4 November 2013

Jonathan Lough and Kathryn Von Treuer

The purpose of this paper is to critically examine the instruments used in the screening process, with particular attention given to supporting research validation. Psychological…

Abstract

Purpose

The purpose of this paper is to critically examine the instruments used in the screening process, with particular attention given to supporting research validation. Psychological screening is a well-established process used in the selection of employees across public safety industries, particularly in police settings. Screening in and screening out are both possible, with screening out being the most commonly used method. Little attention, however, has been given to evaluating the comparative validities of the instruments used.

Design/methodology/approach

This review investigates literature supporting the use of the Minnesota Multiphasic Personality Inventory (MMPI), the California Personality Inventory (CPI), the Inwald Personality Inventory (IPI), the Australian Institute of Forensic Psychology's test battery (AIFP), and some other less researched tests. Research supporting the validity of each test is discussed.

Findings

It was found that no test possesses unequivocal research support, although the CPI and AIFP tests show promise. Most formal research into the validity of the instruments lacks appropriate experimental structure and is therefore less powerful as “evidence” of the utility of the instrument(s).

Practical implications

This research raises the notion that many current screening practices are likely adding minimal value to the selection process because they rely on instruments that are not “cut out” for the job. This has implications for policy and practice at the recruitment stage of police employment.

Originality/value

This research provides a critical overview of the instruments and their validity studies rather than examining the general process of psychological screening. As such, it is useful to those working in selection who are facing the choice of psychological instrument. Possibilities for future research are presented, and development opportunities for a best practice instrument are discussed.

Details

Policing: An International Journal of Police Strategies & Management, vol. 36 no. 4
Type: Research Article
ISSN: 1363-951X

Article
Publication date: 20 June 2023

Daramola Thompson Olapade, Tajudeen Bioye Aluko, Ademola Lateef Adisa and Adewale Adebanjo Abobarin

The Customary Land Delivery Institutions (CLDIs) provide the platform for the supply of developable land in most cities in sub-Saharan African countries. While there is a need to…

Abstract

Purpose

The customary land delivery institutions (CLDIs) provide the platform for the supply of developable land in most cities in sub-Saharan African countries. While there is a need to measure the effectiveness of CLDIs in order to compare their performance with that of others, or with their own over time, there is a dearth of evidence-based frameworks that could be adopted for such an assessment. This study therefore developed a framework for evaluating the effectiveness of CLDIs, with a view to providing a tool for measuring the performance of land governance.

Design/methodology/approach

A total of 46 good-governance criteria for measuring the various dimensions of CLDIs, generated from the literature, were transformed into a measurable scale, which was validated by a panel of 16 experts through a modified Delphi approach. A pilot study was also conducted with 42 land-based professionals to assess the reliability of the framework. A content validity index (CVI) was calculated from the relevance scores, while clarity was measured by a clarity score. Cronbach's alpha was employed to measure the reliability of the framework.
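For reference, the item-level content validity index is simply the proportion of experts who rate an item as relevant; the sketch below assumes the common convention of a 4-point relevance scale on which ratings of 3 or 4 count as relevant, since the abstract does not state the exact scale used.

```python
# Sketch of an item-level content validity index (I-CVI) calculation from a
# panel of expert relevance ratings. The 4-point scale and the rule that
# ratings of 3 or 4 count as "relevant" are common conventions assumed here.
import numpy as np

# rows = items, columns = experts, values = relevance ratings on a 1-4 scale
ratings = np.array([
    [4, 3, 4, 4],
    [2, 3, 4, 3],
    [4, 4, 4, 4],
])

relevant = ratings >= 3        # True where an expert judged the item relevant
i_cvi = relevant.mean(axis=1)  # proportion of experts rating each item relevant
mean_i_cvi = i_cvi.mean()      # scale-level average (S-CVI/Ave)

print("I-CVI per item:", i_cvi)
print("mean I-CVI:", round(mean_i_cvi, 2))
```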

Findings

The expert validation of the 46 criteria revealed that 89.5% of the items in the developed instrument had an item-level content validity index (I-CVI) equal to or greater than the 0.85 threshold, with a mean I-CVI of 0.90. Based on the CVI scores and the analysis of the experts' comments, six items were removed from the instrument and six new items were added, so that the final instrument, after a further iteration, contained a total of 46 items. The reliability test revealed a Cronbach's alpha of 0.82.
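For completeness, a Cronbach's alpha of the kind reported here follows the standard formula α = k/(k − 1) · (1 − Σ item variances / variance of total scores); the sketch below applies it to made-up pilot responses.

```python
# Sketch of the standard Cronbach's alpha computation on made-up pilot data:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total scores).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: 2-D array with rows = respondents and columns = items."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

pilot = np.array([
    [4, 5, 4, 3],
    [3, 4, 4, 3],
    [5, 5, 4, 4],
    [2, 3, 3, 2],
])
print(round(cronbach_alpha(pilot), 2))
```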

Research limitations/implications

This paper provides a framework that is useful for developing countries, especially in the development of land delivery policies, and for the analysis of the important aspects thereof.

Originality/value

This paper demonstrates the development of a holistic framework for the assessment of CLDIs, which hitherto did not exist.

Details

Property Management, vol. 41 no. 5
Type: Research Article
ISSN: 0263-7472

Article
Publication date: 16 May 2016

Nicholas G. Dagalakis, Jae-Myung Yoo and Thomas Oeste

The purpose of this paper is a description of DITCI, its drop loads and sensors, the impact tools, the robot dynamic impact safety artifacts, data analysis, and modeling of test…

Abstract

Purpose

The purpose of this paper is to describe the dynamic impact testing and calibration instrument (DITCI), its drop loads and sensors, the impact tools, the robot dynamic impact safety artifacts, the data analysis and the modeling of test results. DITCI is a simple instrument with a significant data collection and analysis capability that is used for the testing and calibration of biosimulant human tissue artifacts. These artifacts may be used to measure the severity of injuries caused in the case of a robot impact with a human.

Design/methodology/approach

In this paper, we describe the DITCI adjustable impact and flexible foundation mechanism, which allows the selection of a variety of impact force levels and foundation stiffness. The instrument can accommodate arrays of a variety of sensors and impact tools, simulating both real manufacturing tools and the testing requirements of standards setting organizations.

Findings

A computer data acquisition system may collect impact motion, force and torque data, which are used to develop a variety of mathematical model representations of the artifacts. Finally, we describe the fabrication and testing of human abdomen soft-tissue artifacts with embedded markers, used to display the severity of impact-injury tissue deformation.
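As one example of the kind of mathematical model that such data can support, the sketch below fits a simple linear spring-damper (Kelvin–Voigt) model, F = k·x + c·v, to synthetic force, displacement and velocity samples by least squares; both the model form and the numbers are illustrative assumptions, not the specific artifact models developed in the paper.

```python
# Illustrative sketch: fitting a linear spring-damper (Kelvin-Voigt) model
# F = k*x + c*v to impact force, displacement and velocity samples by least
# squares. The model form and the synthetic numbers are assumptions for
# illustration; the paper develops its own artifact models from measured data.
import numpy as np

x = np.array([0.001, 0.002, 0.003, 0.004, 0.005])  # displacement (m)
v = np.array([0.50, 0.40, 0.30, 0.20, 0.10])       # velocity (m/s)
F = np.array([35.0, 48.0, 60.0, 71.0, 81.0])       # measured force (N)

A = np.column_stack([x, v])                    # design matrix for [k, c]
(k, c), *_ = np.linalg.lstsq(A, F, rcond=None)

print(f"stiffness k = {k:.0f} N/m, damping c = {c:.1f} N*s/m")
```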

Research limitations/implications

DITCI and the use of biosimulant human tissue artifacts will permit a better understanding of the severity of injury that would be caused by a robot impact with a human, without the use of expensive cadaver parts. The limitations are set by the ability to build artifacts with material properties similar to those of the various parts of the human body.

Practical implications

This technology will be particularly useful for small manufacturing companies that cannot afford the use of expensive instrumentation and technical consultants.

Social implications

Impact tests were performed at maximum impact force and average pressure levels that are below, at and above the levels recommended by a proposed International Organization for Standardization standard. These test results will be used to verify whether the adopted safety standards will protect the human operators of interactive robots across various robot tools and control modes.

Originality/value

Various research groups have used human subjects to collect data on pain induced by industrial robots. Unfortunately, human safety testing is not an option for human–robot collaboration in industrial applications every time there is a change of a tool or control program, so the use of biosimulant artifacts is expected to be a good alternative.

Details

Industrial Robot: An International Journal, vol. 43 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 February 2024

Winston T. Su, Zach W.Y. Lee, Xinming He and Tommy K.H. Chan

The global market for cloud gaming is growing rapidly. How gamers evaluate the service quality of this emerging form of cloud service has become a critical issue for both…

Abstract

Purpose

The global market for cloud gaming is growing rapidly. How gamers evaluate the service quality of this emerging form of cloud service has become a critical issue for both researchers and practitioners. Building on the literature on service quality and software as a service, this study develops and validates a gamer-centric measurement instrument for cloud gaming service quality.

Design/methodology/approach

A three-step measurement instrument development process, including item generation, scale development and instrument testing, was adopted to conceptualize and operationalize cloud gaming service quality.

Findings

Cloud gaming service quality consists of two second-order constructs of support service quality and technical service quality with seven first-order dimensions, namely rapport, responsiveness, reliability, compatibility, ubiquity, smoothness and comprehensiveness. The instrument exhibits desirable psychometric properties.
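The hierarchy described above can be written down as a second-order measurement model; the sketch below expresses one possible specification in lavaan-style syntax via the semopy package, with hypothetical item names and with the assignment of the seven dimensions to the two second-order constructs assumed for illustration, since the abstract does not spell out that mapping.

```python
# Sketch of a second-order measurement model for cloud gaming service quality,
# in lavaan-style syntax (here via semopy). Item names (rp1..cp3) and the
# assignment of the seven dimensions to the two second-order constructs are
# assumptions for illustration; the abstract does not give that mapping.
import pandas as pd
import semopy

model_desc = """
Rapport           =~ rp1 + rp2 + rp3
Responsiveness    =~ rs1 + rs2 + rs3
Reliability       =~ rl1 + rl2 + rl3
Compatibility     =~ cb1 + cb2 + cb3
Ubiquity          =~ ub1 + ub2 + ub3
Smoothness        =~ sm1 + sm2 + sm3
Comprehensiveness =~ cp1 + cp2 + cp3

SupportServiceQuality   =~ Rapport + Responsiveness + Reliability
TechnicalServiceQuality =~ Compatibility + Ubiquity + Smoothness + Comprehensiveness
"""

def fit_service_quality_model(survey_responses: pd.DataFrame) -> semopy.Model:
    """Estimate the second-order model on a DataFrame of gamer item responses."""
    model = semopy.Model(model_desc)
    model.fit(survey_responses)
    return model
```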

Practical implications

Practitioners can use this new measurement instrument to evaluate gamers' perceptions toward their service and to identify areas for improvement.

Originality/value

This study contributes to the service quality literature by utilizing qualitative and quantitative approaches to develop and validate a new measurement instrument of service quality in the context of cloud gaming and by identifying new dimensions (compatibility, ubiquity, smoothness and comprehensiveness) specific to it.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 July 2014

Valerie Priscilla Goby and Catherine Nickerson

This paper aims to focus on the successful efforts made at a university business school in the Gulf region to develop an assessment tool to evaluate the communication skills of…

Abstract

Purpose

This paper aims to focus on the successful efforts made at a university business school in the Gulf region to develop an assessment tool to evaluate the communication skills of undergraduate students as part of satisfying the Association to Advance Collegiate Schools of Business (AACSB) accreditation requirements. We do not consider the validity of establishing learning outcomes or meeting these according to AACSB criteria. Rather, we address ourselves solely to the design of a testing instrument that can measure the degree of student learning within the parameters of university-established learning outcomes.

Design/methodology/approach

The testing of communication skills, as opposed to language, is notoriously complex, and we describe our identification of constituent items that make up the corpus of knowledge that business students need to attain. We discuss our development of a testing instrument which reflects the learning process of knowledge, comprehension and application.

Findings

The resulting instrument acted as a valid indicator of the effectiveness of teaching and learning, as well as a component of accreditation requirements.

Originality/value

The challenge to obtain accreditation, supported by appropriate assessment procedures, is now a high priority for more and more universities in emerging, as well as in developed, economies. For business schools, the accreditation provided by AACSB remains perhaps the most sought after global quality assurance program, and our work illustrates how the required plotting and assessment of learning objectives can be accomplished.

Details

Quality Assurance in Education, vol. 22 no. 3
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 22 September 2021

Artur Meerits, Kurmet Kivipõld and Isaac Nana Akuffo

The purpose of this paper is twofold: to test existing Authentic Leadership (AL) instruments simultaneously in the same environment, and based on these, to propose an extended…

Abstract

Purpose

The purpose of this paper is twofold: to test existing Authentic Leadership (AL) instruments simultaneously in the same environment, and based on these, to propose an extended instrument for the assessment of AL intrapersonal and interpersonal competencies.

Design/methodology/approach

Three existing instruments of AL – Authentic Leadership Questionnaire (ALQ) (Walumbwa et al., 2008), Authentic Leadership Inventory (ALI) (Neider and Schriesheim, 2011) and the Three Pillar Model (TPM) (Beddoes-Jones and Swailes, 2015) – were tested, and an extended instrument was proposed based on the results. Two different samples were used – a homogeneous sample (N = 1021) from the military and a heterogeneous sample (N = 547) from retail, catering, public services and logistics industries. Construct validity for the instruments was assessed using a confirmatory factor analysis, and the internal consistency of the factors was analysed using Cronbach’s alpha.
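A minimal sketch of testing several candidate instruments on the same sample follows, assuming the semopy package; the factor specifications below are hypothetical placeholders, not the published structures of the ALQ, ALI or TPM.

```python
# Sketch: fitting several candidate authentic leadership measurement models on
# the same sample and collecting their fit indices, assuming semopy. The model
# specifications are hypothetical placeholders, not the published factor
# structures of the ALQ, ALI or TPM.
import pandas as pd
import semopy

candidate_models = {
    "ALQ": "SelfAwareness =~ alq1 + alq2 + alq3\nTransparency =~ alq4 + alq5 + alq6",
    "ALI": "BalancedProcessing =~ ali1 + ali2 + ali3\nMoralPerspective =~ ali4 + ali5 + ali6",
    "TPM": "SelfAwareness =~ tpm1 + tpm2 + tpm3\nSelfRegulation =~ tpm4 + tpm5 + tpm6",
}

def compare_model_fit(sample: pd.DataFrame) -> pd.DataFrame:
    """Fit each candidate model on the same sample and tabulate its fit indices."""
    rows = {}
    for name, desc in candidate_models.items():
        model = semopy.Model(desc)
        model.fit(sample)
        rows[name] = semopy.calc_stats(model).iloc[0]
    return pd.DataFrame(rows).T  # one row of fit indices per instrument
```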

Findings

Of the existing instruments, two out of three show issues with internal factor consistency and model fit. The internal consistency of the factors and the model fit of the extended instrument developed here are satisfactory, making it suitable for assessing authentic leadership competencies in a single organisation or industry.

Originality/value

This paper sees AL as the behaviour of leaders affected by leadership competencies. Three existing AL instruments were tested alongside a proposed extended instrument to assess AL intrapersonal and interpersonal competencies in the same context.
