Search results

1 – 8 of 8
Article
Publication date: 5 July 2024

Nouhaila Bensalah, Habib Ayad, Abdellah Adib and Abdelhamid Ibn El Farouk

The paper aims to enhance Arabic machine translation (MT) by proposing novel approaches: (1) a dimensionality reduction technique for word embeddings tailored for Arabic text…

Abstract

Purpose

The paper aims to enhance Arabic machine translation (MT) by proposing novel approaches: (1) a dimensionality reduction technique for word embeddings tailored for Arabic text, optimizing efficiency while retaining semantic information; (2) a comprehensive comparison of meta-embedding techniques to improve translation quality; and (3) a method leveraging self-attention and Gated CNNs to capture token dependencies, including temporal and hierarchical features within sentences, and interactions between different embedding types. These approaches collectively aim to enhance translation quality by combining different embedding schemes and leveraging advanced modeling techniques.

Design/methodology/approach

Recent works on MT in general and Arabic MT in particular often pick one type of word embedding model. In this paper, we present a novel approach to enhance Arabic MT by addressing three key aspects. Firstly, we propose a new dimensionality reduction technique for word embeddings, specifically tailored for Arabic text. This technique optimizes the efficiency of embeddings while retaining their semantic information. Secondly, we conduct an extensive comparison of different meta-embedding techniques, exploring the combination of static and contextual embeddings. Through this analysis, we identify the most effective approach to improve translation quality. Lastly, we introduce a novel method that leverages self-attention and Gated convolutional neural networks (CNNs) to capture token dependencies, including temporal and hierarchical features within sentences, as well as interactions between different types of embeddings. Our experimental results demonstrate the effectiveness of our proposed approach in significantly enhancing Arabic MT performance. It outperforms baseline models with a BLEU score increase of 2 points and achieves superior results compared to state-of-the-art approaches, with an average improvement of 4.6 points across all evaluation metrics.

Findings

The proposed approaches significantly enhance Arabic MT performance. The dimensionality reduction technique improves the efficiency of word embeddings while preserving semantic information. Comprehensive comparison identifies effective meta-embedding techniques, with the contextualized dynamic meta-embeddings (CDME) model showcasing competitive results. Integration of Gated CNNs with the transformer model surpasses baseline performance, leveraging both architectures' strengths. Overall, these findings demonstrate substantial improvements in translation quality, with a BLEU score increase of 2 points and an average improvement of 4.6 points across all evaluation metrics, outperforming state-of-the-art approaches.

Originality/value

The paper’s originality lies in its departure from simply fine-tuning the transformer model for a specific task. Instead, it introduces modifications to the internal architecture of the transformer, integrating Gated CNNs to enhance translation performance. This departure from traditional fine-tuning approaches demonstrates a novel perspective on model enhancement, offering unique insights into improving translation quality without solely relying on pre-existing architectures. The originality in dimensionality reduction lies in the tailored approach for Arabic text. While dimensionality reduction techniques are not new, the paper introduces a specific method optimized for Arabic word embeddings. By employing independent component analysis (ICA) and a post-processing method, the paper effectively reduces the dimensionality of word embeddings while preserving semantic information, which has not been investigated before, especially for the MT task.
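The ICA-based reduction described above can be sketched as follows. This is a minimal illustration using scikit-learn's FastICA on toy vectors; the array shapes, component count and mean-centering step are assumptions for illustration, not the authors' actual settings or post-processing method:

```python
# Minimal sketch: ICA-based dimensionality reduction of word embeddings.
# `embeddings` is a toy stand-in for real (vocab_size, dim) word vectors.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 100))  # toy stand-in for real vectors

# Simple post-processing stand-in: remove the mean vector before reduction.
centered = embeddings - embeddings.mean(axis=0)

# Reduce 100-dimensional vectors to 40 independent components.
ica = FastICA(n_components=40, random_state=0, max_iter=500)
reduced = ica.fit_transform(centered)
print(reduced.shape)  # (500, 40)
```

The reduced vectors could then feed any downstream MT model in place of the original embeddings; whether 40 components preserve enough semantic information would need to be checked empirically.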

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 2 July 2024

Pedro Arturo Flores-Gómez and Héctor Hugo Pérez-Villarreal

This paper aims to focus on the evolution of nonprofit cultural institutions in Mexico and their relationship with Spain, regarding the four traditional elements of a marketing…

Abstract

Purpose

This paper aims to focus on the evolution of nonprofit cultural institutions in Mexico and their relationship with Spain, regarding the four traditional elements of a marketing mix. Specifically, this paper examines marketing advancements in the digital environment, placing emphasis on the virtual exhibition Códices de México: Memorias y Saberes, as well as the marketing activities related to prehispanic and novohispanic codices between 2010 and 2022.

Design/methodology/approach

The first part of the present study provides a chronological framework based on the four components of a marketing mix, illustrating the transition of Mexican and Spanish public cultural institutions from their foundations to current times. It particularly provides insight into their recent accomplishments in the digital environment, underscoring potential networking areas. The second part offers an in-depth examination of the exhibition Códices de México: Memorias y Saberes (INAH 2015) and a review of digital sources from Mexican government entities to investigate marketing activities related to prehispanic and novohispanic codices.

Findings

Due to the historical approach used to document the transition of nonprofit cultural institutions in Mexico and Spain to the digital era, this article sheds light on joint efforts in the digital marketing domain around prehispanic and novohispanic codices. Additionally, it illustrates the activities used by Mexican cultural institutions over the past two decades to disseminate knowledge on codices.

Research limitations/implications

Regarding the methodological aspects of using historical resources through digital archives, this study solely comprised marketing activities reported in the records available on the official portal of cultural institutions.

Originality/value

This study argues for the utility of the four components rooted in a traditional marketing mix as a tool to illustrate the evolution of marketing practices within the cultural heritage domain. It also highlights the role played by cultural institutions in Mexico and Spain in the digital environment to strategically network around cultural heritage. Additionally, it sheds light on the implementation of methods for presenting Mexican codices grounded in virtual terrain.

Details

Journal of Historical Research in Marketing, vol. 16 no. 3
Type: Research Article
ISSN: 1755-750X

Article
Publication date: 19 July 2024

A.K. Mahbubul Hye, Nurakmal Ahmad Ahmad and Md. Mamun Habib

This exploratory study illustrated an integrated academic library supply chain (IALSC) model to design the strategic planning management tool of the academic library. The supply…

Abstract

Purpose

This exploratory study illustrated an integrated academic library supply chain (IALSC) model to serve as a strategic planning management tool for the academic library. The supply chain (SC) model has been widely used in manufacturing industries and has also been applied in many service industries with the same objectives. However, very few studies have addressed academic libraries, particularly the implementation of an integrated SC model, even though SC management in practice has been proven to enhance stakeholder satisfaction, increase revenues and decrease total costs. The academic library also needs to succeed in providing quality products, services and information to fulfil library users’ needs within the library budget. This research aims to develop a verified model of the integrated SC for the academic library.

Design/methodology/approach

This research used both qualitative and quantitative approaches to achieve its objectives. The proposed conceptual SC model for the academic library, named IALSC, was developed using the systems thinking method and subsequently validated through the fuzzy Delphi method, an expert judgement technique.

Findings

The research findings could contribute to academic library management in planning and formulating a roadmap for the library to increase its quality services for all stakeholders.

Originality/value

The conceptual model would have a high potential to be proposed as the strategic decision-making tool for an academic library, i.e. the flow of funds through the operations of the library, the library stakeholders’ satisfaction measurement, the decision process currently made by the library management team on the purchase of new library resources, the library resource suppliers, etc.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 18 September 2024

Cheryl J. Craig

Located at the place where excessive entitlement and the “best-loved self” intersect, this research illustrates what happens when the excessive entitlement of one educator trumps…

Abstract

Located at the place where excessive entitlement and the “best-loved self” intersect, this research illustrates what happens when the excessive entitlement of one educator trumps that of another. Then, in a perverse sort of way, those who are excessively entitled may even imply that the other is acting excessively entitled. This is how the “not getting your due is your due” theme emerged in the two exemplary cases that are spotlighted. Excessive entitlement is the belief that one's voice, opinion and assessment hold more weight than others', whereas the best-loved self is the image to which educators ideally aspire. Given the contested nature of universities, it is not surprising that tensions occur around due, with due being the scholarly attention one legitimately expects to receive. The two featured narratives of experience present “amalgams of experience” lived in multiple academic contexts, with neither narrative account turning out as expected. The first story chronicles the choosing of an outstanding doctoral student for a prestigious award; the second tells how a professor who received two national honors was celebrated at her institution. Using narrative inquiry as both a research method and a form of representation, the researcher was also able to suggest how people might move beyond excessive entitlement. Narrative inquiry's well-known interpretive tools of fictionalization, broadening, burrowing, and storying and restorying, employed repeatedly throughout this chapter, produced deeper meanings and richer understandings that could result in more generous and informed actions for everyone involved.

Details

After Excessive Teacher and Faculty Entitlement
Type: Book
ISBN: 978-1-83797-877-9

Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data…

Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best practice guides and identify opportunities for innovation in the DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for the Core Trustworthy Data Repositories, interview transcripts of 32 curators and repository managers and data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs, summarized as a conceptual model. The authors identified three DQA activities (evaluation, intervention and communication) and their structures, including activity motivations, roles played, mediating tools, and rules and standards. When defining data quality, study participants went beyond the traditional definition of data quality and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first data quality theory guided examination of DQA practices in RDRs.

Details

Journal of Documentation, vol. 80 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 17 September 2024

Iván Manuel De la Vega Hernández, Juan Díaz Amorin and Rodolfo Fernández-Gomez

The purpose of this study focused on a global longitudinal bibliometric mapping of research in the field of health biotechnology between 1990 and 2023 to determine who is leading…

Abstract

Purpose

The purpose of this study focused on a global longitudinal bibliometric mapping of research in the field of health biotechnology between 1990 and 2023 to determine who is leading this field of knowledge and to estimate the sub-disciplines that are emerging and project those that will prevail in the future.

Design/methodology/approach

The study identified the most relevant countries, institutions and researchers, as well as the types of scientific collaboration. The steps applied in the study were the following: identification and selection of keyword terms by a panel of experts; design and application of an algorithm to identify these selected keywords in titles, abstracts and keywords, using Web of Science terms to contrast them; and JCR data processing during 2023 using R, Python and VOSviewer.
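The keyword-matching step described above (scanning titles, abstracts and author keywords for expert-selected terms) can be sketched as follows. This is a hypothetical illustration assuming records as simple dicts; the example terms and field names are invented, and the authors' actual Web of Science tooling is not shown:

```python
# Hypothetical keyword-matching step: flag bibliographic records whose
# title, abstract or keyword fields contain any expert-selected term.
KEYWORDS = {"gene therapy", "crispr", "monoclonal antibody"}  # illustrative terms

def matches(record: dict) -> bool:
    """Return True if any selected keyword appears in the record's
    title, abstract or keywords (case-insensitive substring match)."""
    text = " ".join(
        record.get(field, "") for field in ("title", "abstract", "keywords")
    ).lower()
    return any(term in text for term in KEYWORDS)

records = [
    {"title": "CRISPR screening in oncology", "abstract": "", "keywords": ""},
    {"title": "Solar cell efficiency", "abstract": "", "keywords": "perovskite"},
]
selected = [r for r in records if matches(r)]
print(len(selected))  # 1
```

A real pipeline would additionally handle phrase variants and export the matched records for processing in R, Python or VOSviewer, as the study describes.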

Findings

Among the most relevant conclusions of the study are the following: exponential growth was observed over the study period; new branches of knowledge have emerged in which the subjects have been acquiring their own autonomous capabilities; and R&D in this field remains concentrated in a small group of core countries, a trend likely to persist given the capacity requirements involved.

Originality/value

This contribution seeks to systematize the existing scientific knowledge in the field of biotechnology, specifically in the area of health, using the technique of scientific mapping based on a logical model of indicators that aims to determine potential thematic ramifications.

Details

Journal of Science and Technology Policy Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2053-4620

Open Access
Article
Publication date: 18 April 2024

Joseph Nockels, Paul Gooding and Melissa Terras

This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI)…

Abstract

Purpose

This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI). With HTR now achieving high levels of accuracy, we consider its potential impact on our near-future information environment and knowledge of the past.

Design/methodology/approach

In undertaking a more constructivist analysis, we identified gaps in the current literature through a Grounded Theory Method (GTM). This guided an iterative process of concept mapping through writing sprints in workshop settings. We identified, explored and confirmed themes through group discussion and a further interrogation of relevant literature, until reaching saturation.

Findings

Catalogued as part of our GTM, 120 published texts underpin this paper. We found that HTR facilitates accurate transcription and dataset cleaning while broadening access to a variety of historical material. HTR contributes to a virtuous cycle of dataset production and can inform the development of online cataloguing. However, current limitations include dependency on digitisation pipelines, potential omission of archival history and entrenchment of bias. We also cite near-future HTR considerations: encouraging open access; integrating advanced AI processes and metadata extraction; legal and moral issues surrounding copyright and data ethics; crediting individuals’ transcription contributions; and HTR’s environmental costs.

Originality/value

Our research produces a set of best practice recommendations for researchers, data providers and memory institutions, surrounding HTR use. This forms an initial, though not comprehensive, blueprint for directing future HTR research. In pursuing this, the narrative that HTR’s speed and efficiency will simply transform scholarship in archives is deconstructed.

Article
Publication date: 17 January 2023

Kevin K.W. Ho, Ning Li and Kristina C. Sayama

This research uses a multifaceted approach to develop an MPA/MPP curriculum to support a data science track within the existing MPA/MPP programs by identifying the core and…

Abstract

Purpose

This research uses a multifaceted approach to develop an MPA/MPP curriculum to support a data science track within the existing MPA/MPP programs by identifying the core and elective areas needed.

Design/methodology/approach

The approach includes (1) identifying a suitable structure for MPA/MPP programs that allows a program to develop its capacity to train students in the data science and general public administration skills needed to solve public policy problems, while leaving explicit space for local experimentation and modification; (2) defining bridging modules and required modules for the MPA/MPP programs; and (3) developing the data science track, suggesting suitable data science modules for inclusion and benchmarking the suggested modules against best practices developed by other professional bodies. The authors review 46 NASPAA-accredited MPA/MPP programs from 40 (or 22.7%) schools to identify suitable required modules and potential data science and analytics courses that MPA/MPP programs currently provide as electives.

Findings

The proposal includes a three-course bridging module (six to nine credits, counted as prerequisites rather than toward the program), a nine-course required module (27 credits) and a five-course data science track/concentration (15 credits).

Originality/value

This work can provide a starting point for the public administration education community to develop graduate programs focusing on data science to cater to the needs of both public managers and society at large.
