Search results

1 – 10 of 156
Article
Publication date: 17 August 2023

Allan Farias Fávaro, Roderval Marcelino and Cristian Cechinel

This paper presents a review of the state of the art on the application of blockchain and smart contracts to the peer-review process of scientific papers. The paper seeks to…

Abstract

Purpose

This paper presents a review of the state of the art on the application of blockchain and smart contracts to the peer-review process of scientific papers. The paper seeks to analyse the main characteristics of the existing blockchain solutions in this field in order to detect opportunities for the improvement of future applications.

Design/methodology/approach

A systematic review of the literature on the subject was carried out in three databases recognized by the research community (IEEE Xplore, Scopus and Web of Science) and the Frontiers in Blockchain journal. A total of 1,967 articles were initially found, and after the exclusion process, the 26 remaining articles were classified according to the following dimensions: System Type, Open Access, Review Type, Reviewer Incentive, Token Economy, Blockchain Access, Blockchain Identification, Blockchain Used, Paper Storage, Anonymity and Maturity of the solution.

Findings

Results show that the solutions are normally concerned with offering incentives for the reviewers' work (often monetary). Other common general preferences among the solutions are the adoption of open reviews, the use of Ethereum, the implementation of publishing ecosystems and the use of the InterPlanetary File System for the storage of the papers.

Originality/value

There are currently no studies covering the main aspects of blockchain solutions in the field of scientific peer review. The present study provides an overall review of the topic, summarizing important information on the current research and helping new adopters to develop solutions grounded on the existing literature.

Details

Data Technologies and Applications, vol. 58 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Details

The Impact of ChatGPT on Higher Education
Type: Book
ISBN: 978-1-83797-648-5

Content available
Article
Publication date: 28 June 2023

Javaid Ahmad Wani, Taseef Ayub Sofi, Ishrat Ayub Sofi and Shabir Ahmad Ganaie

Open-access repositories (OARs) are essential for openly disseminating intellectual knowledge on the internet and providing free access to it. The current study aims to evaluate…

Abstract

Purpose

Open-access repositories (OARs) are essential for openly disseminating intellectual knowledge on the internet and providing free access to it. The current study aims to evaluate the growth and development of OARs in the field of technology by investigating several characteristics such as coverage, OA policies, software type, content type, yearly growth, repository type and geographic contribution.

Design/methodology/approach

The Directory of Open Access Repositories (OpenDOAR) acts as the source for data harvesting, providing a quality-assured list of OARs across the globe.

Findings

The study found that 125 nations contributed a total of 4,045 repositories in the field of technology, with the USA leading the list with the most repositories. Most repositories were operated by institutions with multidisciplinary approaches. DSpace and EPrints were the preferred software types for repositories. The content types preferred by contributors were “research articles” and “electronic theses and dissertations”.

Research limitations/implications

The study is limited to the subject area technology as listed in OpenDOAR; therefore, the results may differ in other subject areas.

Practical implications

The work can benefit researchers across disciplines, and interested researchers can take this study as a base for evaluating online repositories. Moreover, policymakers and repository managers could also benefit from this study.

Originality/value

The study is the first of its kind, to the best of the authors’ knowledge, to investigate the repositories of subject technology in the open-access platform.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 17 April 2024

Dirk H.R. Spennemann, Jessica Biles, Lachlan Brown, Matthew F. Ireland, Laura Longmore, Clare L. Singh, Anthony Wallis and Catherine Ward

The use of generative artificial intelligence (genAi) language models such as ChatGPT to write assignment text is well established. This paper aims to assess to what extent genAi…

Abstract

Purpose

The use of generative artificial intelligence (genAi) language models such as ChatGPT to write assignment text is well established. This paper aims to assess to what extent genAi can be used to obtain guidance on how to avoid detection when commissioning and submitting contract-written assignments and how workable the offered solutions are.

Design/methodology/approach

Although ChatGPT is programmed not to provide answers that are unethical or that may cause harm to people, ChatGPT can be prompted to answer with inverted moral valence, thereby supplying unethical answers. The authors tasked ChatGPT to generate 30 essays that discussed the benefits of submitting contract-written undergraduate assignments and outlined the best ways of avoiding detection. The authors scored the likelihood that ChatGPT’s suggestions would be successful in avoiding detection by markers when submitting contract-written work.

Findings

While the majority of suggested strategies had a low chance of escaping detection, recommendations related to obscuring plagiarism and content blending, as well as techniques related to distraction, had a higher probability of remaining undetected. The authors conclude that ChatGPT can be used successfully as a brainstorming tool to provide cheating advice, but that its success depends on the vigilance of the assignment markers and the cheating student’s ability to distinguish between genuinely viable options and those that appear to be workable but are not.

Originality/value

This paper is a novel application of making ChatGPT answer with inverted moral valence, simulating queries by students who may be intent on escaping detection when committing academic misconduct.

Details

Interactive Technology and Smart Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-5659

Keywords

Article
Publication date: 1 April 2024

Xiaoxian Yang, Zhifeng Wang, Qi Wang, Ke Wei, Kaiqi Zhang and Jiangang Shi

This study aims to adopt a systematic review approach to examine the existing literature on law and large language models (LLMs). It involves analyzing and synthesizing relevant research papers, reports…

Abstract

Purpose

This study aims to adopt a systematic review approach to examine the existing literature on law and large language models (LLMs). It involves analyzing and synthesizing relevant research papers, reports and scholarly articles that discuss the use of LLMs in the legal domain. The review encompasses various aspects, including an analysis of LLMs, legal natural language processing (NLP), model tuning techniques, data processing strategies and frameworks for addressing the challenges associated with legal question-and-answer (Q&A) systems. Additionally, the study explores potential applications and services that can benefit from the integration of LLMs in the field of intelligent justice.

Design/methodology/approach

This paper surveys the state-of-the-art research on law LLMs and their application in the field of intelligent justice. The study aims to identify the challenges associated with developing Q&A systems based on LLMs and explores potential directions for future research and development. The ultimate goal is to contribute to the advancement of intelligent justice by effectively leveraging LLMs.

Findings

To apply a law LLM effectively, systematic research on LLMs, legal NLP and model adjustment technology is required.

Originality/value

This study contributes to the field of intelligent justice by providing a comprehensive review of the current state of research on law LLMs.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite…

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and with the CNN assessing tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles and true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
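To make the tile-based rates concrete, the minimal Python sketch below computes per-tile false negative and false positive shares from model probabilities and field-validated labels. This is an illustrative assumption, not the authors' code: the `tile_rates` helper and the example probabilities and labels are hypothetical.

```python
def tile_rates(probs, labels, threshold=0.60):
    """Tile-level detection rates at a probability threshold.

    probs  - model-reported mound probability per tile
    labels - ground truth from field data (True = mound present)
    Returns (false_negative_rate, false_positive_share_of_tagged_tiles),
    mirroring the per-tile rates reported in the abstract.
    """
    tagged = [p >= threshold for p in probs]
    tp = sum(1 for t, l in zip(tagged, labels) if t and l)
    fp = sum(1 for t, l in zip(tagged, labels) if t and not l)
    fn = sum(1 for t, l in zip(tagged, labels) if not t and l)
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    fp_share = fp / (fp + tp) if (fp + tp) else 0.0
    return fnr, fp_share

# Hypothetical four-tile scene: every tagged tile is a false positive
# and both real mounds are missed, as the field validation revealed.
fnr, fp_share = tile_rates([0.95, 0.70, 0.40, 0.10],
                           [False, False, True, True])
```

External validation with field data, rather than self-reported scores, drives these numbers.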

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 19 April 2024

Andrew Dudash and Jacob E. Gordon

The purpose of this case study was to complement existing weeding and retention criteria beyond the most used methods in academic libraries and to consider citation counts in the…

Abstract

Purpose

The purpose of this case study was to complement existing weeding and retention criteria beyond the most used methods in academic libraries and to consider citation counts in the identification of important scholarly works.

Design/methodology/approach

Using a small sample of items chosen for withdrawal from a small liberal arts college library, this case study looks at the use of Google Scholar citation counts as a metric for identification of notable monographs in the social sciences and mathematics.

Findings

Google Scholar citation counts are a quick indicator of classic, foundational or discursive monographs in a particular field and should be given more consideration in weeding and retention analysis decisions that impact scholarly collections. Higher citation counts can be an indicator of higher circulation counts.
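As a rough illustration of how citation counts might feed into a weeding decision, the sketch below splits withdrawal candidates by a citation threshold. The `flag_for_retention` helper, the 100-citation cutoff and the sample data are hypothetical; the study does not prescribe a specific threshold.

```python
def flag_for_retention(candidates, min_citations=100):
    """Split weeding candidates into retain/withdraw lists by citation count.

    candidates: list of (title, google_scholar_citations) pairs.
    """
    retain = [c for c in candidates if c[1] >= min_citations]
    withdraw = [c for c in candidates if c[1] < min_citations]
    # Highly cited items surface first for a librarian's review.
    retain.sort(key=lambda c: c[1], reverse=True)
    return retain, withdraw
```

The citation count is one signal alongside existing criteria (circulation, currency, condition), not a replacement for them.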

Originality/value

The authors found little indication in the literature that Google Scholar citation counts are being used as a metric for identification of notable works or for retention of monographs in academic libraries.

Details

Collection and Curation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9326

Keywords

Article
Publication date: 9 April 2024

Alexander O. Smith, Jeff Hemsley and Zhasmina Y. Tacheva

Our purpose is to reconnect memetics to information, a persistent and unclear association. Information can contribute across a span of memetic research. Its obscurity restricts…

Abstract

Purpose

Our purpose is to reconnect memetics to information, a persistent and unclear association. Information can contribute across a span of memetic research. Its obscurity restricts conversations about “information flow,” the connections between “form” and “content,” as well as many other topics. As information is involved in cultural activity, its clarification could focus memetic theories and applications.

Design/methodology/approach

Our design captures theoretical nuance by considering a long-standing conceptual issue in memetics: information. A systematic review of memetics is provided by tracing the use of the term “information” across the literature. We additionally provide a citation analysis and close readings of what “information” means within the corpus.

Findings

Our initial corpus is narrowed to 128 pivotal memetic publications. From these publications, we provide a citation analysis of memetic studies. Theoretical directions of memetics in the informational context are outlined and developed. We outline two main discussion spaces, survey theoretical interests and describe where and when information is important to memetic discussion. We also find that there are continuities in goals which connect Dawkins’s meme with internet meme studies.

Originality/value

To our knowledge, this is the broadest, most inclusive review of memetics conducted, making use of a unique approach to studying information-oriented discourse across a corpus. In doing so, we provide information researchers with areas in which they might contribute theoretical clarity to diverse memetic approaches. Additionally, we borrow the notion of “conceptual troublemakers” to contribute a corpus collection strategy which might be valuable for future literature reviews with conceptual difficulties arising from interdisciplinary study.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 15 February 2024

Songlin Bao, Tiantian Li and Bin Cao

In the era of big data, various industries are generating large amounts of text data every day. Simplifying and summarizing these data can effectively serve users and improve…

Abstract

Purpose

In the era of big data, various industries are generating large amounts of text data every day. Simplifying and summarizing these data can effectively serve users and improve efficiency. Recently, zero-shot prompting in large language models (LLMs) has demonstrated remarkable performance on various language tasks. However, generating a very “concise” multi-document summary remains a difficult task for them. When conciseness is specified in zero-shot prompting, the generated multi-document summary still contains unimportant information, even with few-shot prompting. This paper aims to propose an LLM prompting method for the multi-document summarization task.

Design/methodology/approach

To overcome this challenge, the authors propose chain-of-event (CoE) prompting for multi-document summarization (MDS) task. In this prompting, the authors take events as the center and propose a four-step summary reasoning process: specific event extraction; event abstraction and generalization; common event statistics; and summary generation. To further improve the performance of LLMs, the authors extend CoE prompting with the example of summary reasoning.
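The four-step reasoning process can be sketched as a single prompt scaffold. The wording below is an illustrative assumption, not the authors' actual CoE prompt; the `build_coe_prompt` helper and the step phrasings are hypothetical.

```python
# Sketch of a chain-of-event (CoE) style prompt for multi-document
# summarization, following the four steps described in the abstract.
COE_STEPS = [
    "Step 1: Extract the specific events mentioned in each document.",
    "Step 2: Abstract and generalize the extracted events.",
    "Step 3: Identify which generalized events are common across documents.",
    "Step 4: Generate a concise summary centered on the common events.",
]

def build_coe_prompt(documents):
    """Assemble one prompt that walks an LLM through the CoE steps."""
    numbered_docs = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    instructions = "\n".join(COE_STEPS)
    return (
        "Summarize the following documents using event-centered reasoning.\n"
        f"{instructions}\n\n{numbered_docs}\n\nSummary:"
    )

prompt = build_coe_prompt(["Fire reported downtown on Monday.",
                           "Crews contained a downtown fire by evening."])
```

Extending the prompt with a worked example of the summary reasoning, as the authors do, would turn this zero-shot scaffold into a few-shot one.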

Findings

Summaries generated by CoE prompting are more abstractive, concise and accurate. The authors evaluate the proposed prompting on two data sets. The experimental results over ChatGLM2-6b show that the authors’ proposed CoE prompting consistently outperforms other typical prompting methods across both data sets.

Originality/value

This paper proposes CoE prompting to solve MDS tasks with LLMs. CoE prompting can not only identify the key events but also ensure the conciseness of the summary. With this method, users can access the most relevant and important information quickly, improving their decision-making processes.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Book part
Publication date: 8 April 2024

Jana Janoušková and Šárka Sobotovičová

It is important to consider economic and political factors when designing the tax mix and setting the level of corporate taxation. Increasing corporate taxation can be seen as an…

Abstract

It is important to consider economic and political factors when designing the tax mix and setting the level of corporate taxation. Increasing corporate taxation can be seen as an inefficient way to raise revenue for the state, as it can have a negative impact on investment and the competitiveness of firms. However, lowering corporate taxation can encourage investment and job creation, but it can also be perceived as supporting large corporations. The aim of this chapter is to evaluate corporate taxation, its position in the tax mix and its potential impact on economic growth. Revenues from corporate income tax (CIT) show an increasing tendency even though the tax rate was reduced from 41% to 19%. Revenues are influenced by both legislative changes and economic cycles. The level of taxation is also influenced by deductions, which include asset depreciation, research and development expenses, and loss deductions. The Pearson correlation coefficient was used to examine the correlation between the selected factors. A moderately strong positive correlation was found between GDP growth and CIT as a percentage of total taxes, as well as between GDP growth and CIT as a percentage of GDP.
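As a sketch of the statistic used, the Pearson correlation coefficient can be computed as below. The `pearson_r` helper and the illustrative series are assumptions for demonstration, not the chapter's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustrative series (not the chapter's data):
gdp_growth = [2.1, 4.5, 2.7, 5.3, 3.0, 4.4]          # annual GDP growth, %
cit_share_of_taxes = [8.9, 10.2, 9.1, 10.8, 9.4, 10.1]  # CIT as % of taxes
r = pearson_r(gdp_growth, cit_share_of_taxes)
```

A value of `r` between roughly 0.4 and 0.7 would correspond to the "moderately strong positive correlation" the chapter reports.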

Details

Modeling Economic Growth in Contemporary Czechia
Type: Book
ISBN: 978-1-83753-841-6

Keywords
