Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 10 May 2022

Lai Ma

Abstract

Purpose

This paper examines the socio-political affordances of metrics in research evaluation and the consequences of epistemic injustice in research practices and recorded knowledge.

Design/methodology/approach

First, the use of metrics is examined as a mechanism that promotes competition and social acceleration. Second, it is argued that the use of metrics in a competitive research culture reproduces systemic inequalities and leads to epistemic injustice. The conceptual analysis draws on works of Hartmut Rosa and Miranda Fricker, amongst others.

Findings

The use of metrics is largely driven by competition, such as university rankings and league tables. Not only are metrics not designed to enrich academic and research culture, they also suppress the visibility and credibility of work by minorities. As such, metrics perpetuate epistemic injustice in knowledge practices; at the same time, the reliability of metrics for bibliometric and scientometric studies is put into question.

Social implications

As metrics influence who can speak and who will be heard, epistemic injustice is reflected in recorded knowledge and in what we consider to be information.

Originality/value

This paper contributes to the discussion of metrics beyond bibliometric studies and research evaluation. It argues that metrics-induced competition is antithetical to equality and diversity in research practices.

Details

Journal of Documentation, vol. 78 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 6 July 2020

Basma Makhlouf Shabou, Julien Tièche, Julien Knafou and Arnaud Gaudinat

Abstract

Purpose

This paper aims to describe an interdisciplinary and innovative research project conducted in Switzerland, at the Geneva School of Business Administration HES-SO and supported by the State Archives of Neuchâtel (Office des archives de l'État de Neuchâtel, OAEN). The problem to be addressed is one of the most classical ones: how to extract and discriminate relevant data in a huge amount of diversified and complex data record formats and contents. The goal of this study is to provide a framework and a proof of concept for software that helps in taking defensible decisions on the retention and disposal of records and data proposed to the OAEN. For this purpose, the authors designed two axes: the archival axis, to propose archival metrics for the appraisal of structured and unstructured data, and the data mining axis, to propose algorithmic methods as complementary and/or additional metrics for the appraisal process.

Design/methodology/approach

Based on these two axes, this exploratory study designs and tests the feasibility of archival metrics that are paired with data mining metrics, to advance, as much as possible, the digital appraisal process in a systematic or even automatic way. Under Axis 1, the authors took three steps: first, the design of a conceptual framework for records and data appraisal with a detailed three-dimensional approach (trustworthiness, exploitability, representativeness); in addition, the authors defined the main principles and postulates to guide the operationalization of the conceptual dimensions. Second, the operationalization proposed metrics expressed in terms of variables supported by a quantitative method for their measurement and scoring. Third, the authors shared this conceptual framework, with its dimensions and operationalized variables (metrics), with experienced professionals to validate them. The experts' feedback gave the authors an indication of the relevance and the feasibility of these metrics; these two aspects may demonstrate the acceptability of such a method in real-life archival practice. In parallel, Axis 2 proposes functionalities to cover not only macro analysis of data but also the algorithmic methods that enable the computation of digital archival and data mining metrics. Based on that, three use cases were proposed to imagine plausible and illustrative scenarios for the application of such a solution.

Findings

The main results demonstrate the feasibility of measuring the value of data and records with a reproducible method. More specifically, for Axis 1, the authors applied the metrics in a flexible and modular way. The authors also defined the main principles needed to enable a computational scoring method. The results obtained through the experts' consultation on the relevance of 42 metrics indicate an acceptance rate above 80%. In addition, the results show that 60% of all metrics can be automated. Regarding Axis 2, 33 functionalities were developed and proposed under six main types: macro analysis, micro analysis, statistics, retrieval, administration and, finally, decision modeling and machine learning. The relevance of the metrics and functionalities is based on the theoretical validity and computational character of their method. These results are largely satisfactory and promising.
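
The operationalized scoring idea can be sketched as a weighted combination of the three conceptual dimensions. The weights, scores and function below are invented for illustration; they are not the paper's actual 42 metrics.

```python
# Hypothetical weighted-scoring sketch for archival appraisal.
# Dimension names come from the abstract; weights and scores are invented.

def appraise(scores, weights):
    """Combine per-dimension metric scores (0-1) into one appraisal score."""
    total_weight = sum(weights.values())
    return sum(scores[d] * w for d, w in weights.items()) / total_weight

record_scores = {"trustworthiness": 0.9, "exploitability": 0.6, "representativeness": 0.8}
dimension_weights = {"trustworthiness": 2.0, "exploitability": 1.0, "representativeness": 1.0}

print(round(appraise(record_scores, dimension_weights), 3))  # weighted mean of the three dimensions
```

A computable score of this kind is what would let the appraisal process run "in a systematic or even automatic way", with the manual expert judgment moved into the choice of weights.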

Originality/value

This study offers a valuable aid to improve the validity and performance of archival appraisal processes and decision-making. Transferability and applicability of these archival and data mining metrics could be considered for other types of data. An adaptation of this method and its metrics could be tested on research data, medical data or banking data.

Details

Records Management Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0956-5698

Open Access
Article
Publication date: 30 March 2023

Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi

Abstract

Purpose

In the last few years, the amount of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art) historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on the icon aspects.

Design/methodology/approach

This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
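
A content completeness check of the kind described can be sketched over a toy triple set; the predicate and entity names below are invented, not taken from any specific KG schema.

```python
# Toy completeness check over a handful of triples. The predicate
# "depicts" stands in for whatever property a KG uses for icon subjects.

triples = [
    ("artwork:1", "depicts", "iconclass:73D64"),  # has icon information
    ("artwork:2", "creator", "person:7"),         # no icon statement
    ("artwork:3", "depicts", "iconclass:11H"),
]

artworks = {s for s, _, _ in triples}
with_icon = {s for s, p, _ in triples if p == "depicts"}
completeness = len(with_icon) / len(artworks)
print(f"icon completeness: {completeness:.2f}")  # 2 of 3 artworks covered
```

In a real evaluation this ratio would be computed with SPARQL queries against the selected KGs rather than an in-memory list.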

Findings

This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.

Originality/value

The main contribution of this work is an overview of the current landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 21 March 2022

Sergio Olavarrieta

Abstract

Purpose

Despite the general recommendation of using a combination of multiple criteria for research assessment and faculty promotion decisions, the rise of quantitative indicators is generating an emerging trend in business schools of using single journal impact factors (IFs) as key (even sole) drivers for those relevant school decisions. This paper aims to investigate the effects of using single Web of Science (WoS)-based journal impact metrics when assessing research from two related disciplines, Business and Economics, and the potential impact on the strategic sustainability of a business school.

Design/methodology/approach

This study collected impact indicator data for Business and Economics journals from the Clarivate Web of Science database, concentrating on the IF indicators, the Eigenfactor and the article influence score (AIS). The study examined the correlations between these indicators and then ranked disciplines and journals using the different impact metrics.

Findings

Consistent with previous findings, this study finds positive correlations among these metrics. The study then ranks the disciplines and journals using each impact metric, finding relevant and substantial differences depending on the metric used. Using AIS instead of the IF raises the relative ranking of Economics, while Business remains at basically the same rank.
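
The sensitivity of rankings to the chosen metric can be illustrated with a small sketch; the journal names and metric values below are invented and do not come from the study's WoS data.

```python
# Ranking the same journals by two different impact metrics.
# Journal names and metric values are invented for the example.

journals = {
    "J1": {"IF": 5.0, "AIS": 2.0},
    "J2": {"IF": 3.0, "AIS": 3.5},
    "J3": {"IF": 4.0, "AIS": 1.0},
}

rank_by_if = sorted(journals, key=lambda j: journals[j]["IF"], reverse=True)
rank_by_ais = sorted(journals, key=lambda j: journals[j]["AIS"], reverse=True)
print(rank_by_if)   # ['J1', 'J3', 'J2']
print(rank_by_ais)  # ['J2', 'J1', 'J3'] -- a different order under AIS
```

Even with positively correlated metrics, the orderings can disagree, which is the risk the paper identifies for single-metric tenure and funding decisions.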

Research limitations/implications

This study contributes to the research assessment literature by adding substantial evidence that given the sensitivity of journal rankings to particular indicators, the selection of a single impact metric for assessing research and hiring/promotion and tenure decisions is risky and too simplistic. This research shows that biases may be larger when assessment involves researchers from related disciplines – like Business and Economics – but with different research foundations and traditions.

Practical implications

Consistent with the literature, given the sensitivity of journal rankings to particular indicators, relying on a single impact metric for assessing research, assigning research funds and making hiring/promotion and tenure decisions is risky and simplistic. Moreover, this research shows that risks and biases may be larger when assessment involves researchers from related disciplines – like Business and Economics – but with different research foundations and trajectories. The use of multiple criteria is advised for such purposes.

Originality/value

This is an applied work using real data from WoS that addresses a practical case of comparing the use of different journal IFs to rank-related disciplines like Business and Economics, with important implications for faculty tenure and promotion committees and for research funds granting institutions and decision-makers.

Details

Journal of Economics, Finance and Administrative Science, vol. 27 no. 53
Type: Research Article
ISSN: 2218-0648

Open Access
Article
Publication date: 28 November 2017

Mansoor Alghamdi and William Teahan

Abstract

Purpose

The aim of this paper is to experimentally evaluate the effectiveness of the state-of-the-art printed Arabic text recognition systems to determine open areas for future improvements. In addition, this paper proposes a standard protocol with a set of metrics for measuring the effectiveness of Arabic optical character recognition (OCR) systems to assist researchers in comparing different Arabic OCR approaches.

Design/methodology/approach

This paper describes an experiment to automatically evaluate four well-known Arabic OCR systems using a set of performance metrics. The evaluation experiment is conducted on a publicly available printed Arabic dataset comprising 240 text images with a variety of resolution levels, font types, font styles and font sizes.
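
One standard OCR performance metric of the kind such a protocol might include is the character error rate (CER), i.e. edit distance normalized by reference length; the sketch below is a generic implementation, not the paper's specific metric set.

```python
# Character error rate (CER): Levenshtein edit distance between the OCR
# output and the ground truth, normalized by the reference length.

def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    return edit_distance(reference, hypothesis) / len(reference)

print(f"{cer('recognition', 'recogmtion'):.3f}")  # 2 edits over 11 characters
```

Character-level distances like this are well suited to Arabic script evaluation, where ligatures and diacritics make word-level accuracy too coarse.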

Findings

The experimental results show that the field of character recognition for printed Arabic still requires further research to reach an efficient text recognition method for Arabic script.

Originality/value

To the best of the authors’ knowledge, this is the first work that provides a comprehensive automated evaluation of Arabic OCR systems with respect to the characteristics of Arabic script and, in addition, proposes an evaluation methodology that can be used as a benchmark by researchers and therefore will contribute significantly to the enhancement of the field of Arabic script recognition.

Details

PSU Research Review, vol. 1 no. 3
Type: Research Article
ISSN: 2399-1747

Open Access
Article
Publication date: 15 January 2021

Clare Edwards and Dominic Gilroy

Abstract

Purpose

This paper aims to demonstrate the approach taken in delivering the quality and impact elements of Knowledge for Healthcare, the strategic development framework for National Health Service (NHS) library and knowledge services in England. It examines the work undertaken to enhance quality and demonstrate the value and impact of health library and knowledge services. It describes the interventions developed and implemented over the five-year period 2015–2020 and the move towards an outcome-based rather than process-based approach to impact and quality.

Design/methodology/approach

The case study illustrates a range of interventions that have been developed, including the outcomes of implementation to date. The methodology behind each intervention is informed by the evidence base and includes professional engagement.

Findings

The outcomes approach to the development and implementation of quality and impact interventions and assets provides evidence to demonstrate the value of library and knowledge staff to the NHS in England to both high-level decision-makers and service users.

Originality/value

The interventions are original concepts developed within the NHS to demonstrate system-wide impacts and change. The Evaluation Framework has been developed based on the impact planning and assessment (IPA) methodology. The interventions can be applied to other healthcare systems, and the generic learning is transferable to other library and knowledge sectors, such as higher education.

Details

Performance Measurement and Metrics, vol. 22 no. 2
Type: Research Article
ISSN: 1467-8047

Open Access
Article
Publication date: 11 July 2022

Afreen Khan, Swaleha Zubair and Samreen Khan

Abstract

Purpose

This study aimed to assess the potential of the Clinical Dementia Rating (CDR) Scale in the prognosis of dementia in elderly subjects.

Design/methodology/approach

Dementia severity staging is clinically an essential task, so the authors used machine learning (ML) on magnetic resonance imaging (MRI) features to locate and study the impact of various MR readings on the classification of demented and nondemented patients. The authors used cross-sectional MRI data in this study. The designed ML approach established the role of CDR in the prognosis of afflicted and normal patients. Moreover, the pattern analysis indicated CDR to be a strong attribute amongst the various features, with a significant value of p < 0.01. The authors employed 20 ML classifiers.
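
The bagging idea behind the best-performing classifier can be illustrated with a deliberately tiny, stdlib-only sketch: bootstrap resamples each fit a one-feature threshold stump, and their votes are aggregated. The data, feature and stump learner below are invented stand-ins for the real MRI/CDR features and the random forest base estimator.

```python
# Tiny bagging ensemble: bootstrap resamples, each fitting a one-feature
# threshold stump, combined by majority vote. Data and the single
# CDR-like severity feature are invented, not the paper's MRI data.
import random

def fit_stump(xs, ys):
    """Pick the threshold on the single feature that minimises training error."""
    best_t, best_err = xs[0], float("inf")
    for t in xs:
        preds = [1 if x >= t else 0 for x in xs]
        err = sum(p != y for p, y in zip(preds, ys)) / len(ys)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging_predict(thresholds, x):
    """Majority vote over the stumps' predictions."""
    votes = sum(1 for t in thresholds if x >= t)
    return 1 if 2 * votes >= len(thresholds) else 0

random.seed(0)
xs = [0.0, 0.5, 0.5, 1.0, 2.0, 2.0, 3.0]  # severity scores
ys = [0, 0, 0, 1, 1, 1, 1]                # 1 = demented
stumps = []
for _ in range(5):                        # 5 bootstrap resamples
    idx = [random.randrange(len(xs)) for _ in xs]
    stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
print(bagging_predict(stumps, 10.0))  # a clearly high score votes to class 1
```

A random forest generalizes this by fitting full decision trees on random feature subsets rather than single-feature stumps.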

Findings

The mean prediction accuracy varied with the ML classifier used, with the bagging classifier (random forest as the base estimator) achieving the highest (93.67%). A series of ML analyses demonstrated that the model including the CDR score had better prediction accuracy and other related performance metrics.

Originality/value

The results suggest that the CDR score, a simple clinical measure, can be used in real community settings. It can be used to predict dementia progression with ML modeling.

Details

Arab Gulf Journal of Scientific Research, vol. 40 no. 1
Type: Research Article
ISSN: 1985-9899

Open Access
Article
Publication date: 20 July 2020

Mohd Faizan, Raees Ahmad Khan and Alka Agrawal

Abstract

Cryptomarkets on the dark web have emerged as a hub for the sale of illicit drugs. They have made it easier for customers to access illicit drugs online while ensuring their anonymity. The easy availability of potentially harmful drugs has had a significant impact on public health. Consequently, law enforcement agencies put a lot of effort and resources into shutting down online markets on the dark web. Much research has also been conducted to understand the behaviour of the customers and vendors involved in cryptomarkets, which may help law enforcement agencies. In this research, we present a ranking methodology to identify and rank the top markets dealing in harmful illicit drugs. Using named entity recognition, a harm score is calculated for each drug market to indicate its degree of threat, and the markets are then ranked; the top-ranked markets are the ones selling the most harmful drugs. The rankings thus obtained can help law enforcement agencies locate specific markets selling harmful illicit drugs and monitor them further.
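
The harm-score-then-rank idea can be sketched as follows; the drug names, harm weights and market listings are invented for illustration and are not the paper's actual scoring scheme.

```python
# Harm score from recognised drug entities, then a ranking of markets.
# Drug names, weights and listings are invented for the example.

HARM_WEIGHTS = {"cannabis": 1, "cocaine": 3, "fentanyl": 5}

def harm_score(drug_mentions):
    """Sum harm weights over drug entities recognised in a market's listings."""
    return sum(HARM_WEIGHTS.get(drug, 0) for drug in drug_mentions)

markets = {
    "market_a": ["cannabis", "cannabis", "cocaine"],  # harm score 5
    "market_b": ["fentanyl", "cocaine"],              # harm score 8
}
ranking = sorted(markets, key=lambda m: harm_score(markets[m]), reverse=True)
print(ranking)  # market_b ranks first: it sells the most harmful drugs
```

In the actual methodology, the drug mentions would come from a named entity recognition model run over scraped market listings rather than a hand-written list.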

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 23 May 2023

Myungjoo Kang, Inwook Song and Seiwan Kim

Abstract

This study aims to empirically analyze the asset allocation capabilities of Outsourced Chief Investment Officers (OCIOs) in Korea. The empirical analysis used data from 35 funds that were evaluated by the Ministry of Strategy and Finance from 2012 to 2020. The results of the analysis are summarized as follows. First, this study found that funds that adopted an OCIO improved their asset allocation performance. Second, the sensitivity between risk-taking and performance decreased for funds that adopted an OCIO. Third, OCIO adoption was found to improve a fund's asset management execution (tactical capabilities). This study has methodological limitations, in that its methodology is based not on theoretical prior research but on practical applications. However, considering the need to clearly analyze the capabilities of OCIOs and the timeliness of the topic, this study is valuable and can provide meaningful information to funds that are considering adopting an OCIO in the future.

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 31 no. 2
Type: Research Article
ISSN: 1229-988X

Open Access
Article
Publication date: 1 December 2023

Francois Du Rand, André Francois van der Merwe and Malan van Tonder

Abstract

Purpose

This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without the need for specialised computational hardware. The idea is to develop this system by making use of more traditional machine learning (ML) models instead of using computationally intensive deep learning (DL) models.

Design/methodology/approach

The approach that is used by this study is to use traditional image processing and classification techniques that can be applied to captured layer images to detect and classify defects without the need for DL algorithms.
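
The traditional route of hand-crafted image features plus a simple classifier can be sketched as below; the 4x4 "layer images", features and threshold rule are invented for illustration and are not the authors' actual pipeline.

```python
# Hand-crafted features plus a threshold classifier on toy 4x4 "images".
# A real system would use richer features and a trained ML model.

def features(img):
    """Mean intensity and contrast (max - min) of a grayscale image."""
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels), max(pixels) - min(pixels)

def classify(img, contrast_threshold=100):
    """Flag a powder-bed image as defective when contrast is abnormally high."""
    _, contrast = features(img)
    return "defect" if contrast > contrast_threshold else "ok"

smooth = [[120, 122, 121, 119]] * 4  # uniform powder bed
streak = [[120, 122, 240, 10]] * 4   # bright/dark streak across the bed
print(classify(smooth), classify(streak))  # ok defect
```

Because each feature is a cheap pass over the pixels, classification of this kind can run per layer on commodity hardware, which is the motivation for avoiding DL models.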

Findings

The study showed that a defect classification algorithm could be developed with a high degree of accuracy using traditional ML models, and that images could be processed at higher speeds than typically reported in the literature for DL models.

Originality/value

This paper addresses a need that has been identified for a high-speed defect classification algorithm that can detect and classify defects without the need for specialised hardware that is typically used when making use of DL technologies. This is because when developing closed-loop feedback systems for these additive manufacturing machines, it is important to detect and classify defects without inducing additional delays to the control system.

Details

Rapid Prototyping Journal, vol. 29 no. 11
Type: Research Article
ISSN: 1355-2546
