Search results
1 – 10 of 108
Ankush Jasyal, Khushi Rawat, Atasi Sinhababu and Rupak Chakravarty
Abstract
Purpose
This paper explores the adoption of cloud computing for digital preservation, focusing on the case of Preservica, a leading provider of software-as-a-service (SaaS)-based digital preservation solutions. This study aims to analyse Preservica’s approach to long-term active digital preservation (LTADP) based on internationally accepted standards.
Design/methodology/approach
The research paper conducts an in-depth analysis of Preservica’s SaaS-based cloud computing solution in relation to ISO 14721:2012 (OAIS) and ISO 9001 standards. It examines the steps followed by Preservica for LTADP and highlights the importance of adherence to relevant standards.
Findings
Preservica’s adoption of cloud computing and the SaaS model offers scalable, cost-effective and flexible solutions for digital preservation. By adhering to internationally accepted standards and following a comprehensive set of LTADP steps, including file format identification, compatibility checks, checksum creation, secure storage, preservation status tracking, change monitoring, format migration and periodic testing, Preservica ensures the long-term preservation and accessibility of digital content.
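Two of the LTADP steps listed above, file format identification and checksum creation, can be sketched in a few lines of Python. This is a minimal illustration only, not Preservica's actual implementation: the signature table and helper names are our own, and a production system would use a full format registry such as PRONOM.

```python
import hashlib

# Illustrative magic-byte signatures; a real registry is far larger.
MAGIC_SIGNATURES = {
    b"%PDF": "application/pdf",
    b"\x89PNG": "image/png",
    b"PK\x03\x04": "application/zip",
}


def identify_format(data: bytes) -> str:
    """Identify a file format by matching its leading bytes."""
    for magic, mime in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"


def fixity_checksum(data: bytes) -> str:
    """Create a SHA-256 checksum for later integrity (fixity) checks."""
    return hashlib.sha256(data).hexdigest()
```

Re-computing the checksum during periodic testing and comparing it against the stored value is how change monitoring detects silent corruption.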
Originality/value
This research contributes to the understanding of how cloud computing can be effectively used for digital preservation by examining Preservica’s approach. It emphasizes the significance of adherence to internationally accepted standards and highlights the benefits of integrating acceptance models to optimize the implementation and user acceptance of Preservica.
Joseph Nockels, Paul Gooding and Melissa Terras
Abstract
Purpose
This paper focuses on image-to-text manuscript processing through Handwritten Text Recognition (HTR), a Machine Learning (ML) approach enabled by Artificial Intelligence (AI). With HTR now achieving high levels of accuracy, we consider its potential impact on our near-future information environment and knowledge of the past.
Design/methodology/approach
In undertaking a more constructivist analysis, we identified gaps in the current literature through a Grounded Theory Method (GTM). This guided an iterative process of concept mapping through writing sprints in workshop settings. We identified, explored and confirmed themes through group discussion and a further interrogation of relevant literature, until reaching saturation.
Findings
Catalogued as part of our GTM, 120 published texts underpin this paper. We found that HTR enables accurate transcription and dataset cleaning, while facilitating access to a variety of historical material. HTR contributes to a virtuous cycle of dataset production and can inform the development of online cataloguing. However, current limitations include dependency on digitisation pipelines, potential archival history omission and entrenchment of bias. We also cite near-future HTR considerations: encouraging open access; integrating advanced AI processes and metadata extraction; legal and moral issues surrounding copyright and data ethics; crediting individuals’ transcription contributions; and HTR’s environmental costs.
Originality/value
Our research produces a set of best practice recommendations for researchers, data providers and memory institutions, surrounding HTR use. This forms an initial, though not comprehensive, blueprint for directing future HTR research. In pursuing this, the narrative that HTR’s speed and efficiency will simply transform scholarship in archives is deconstructed.
Javaid Ahmad Wani, Taseef Ayub Sofi, Ishrat Ayub Sofi and Shabir Ahmad Ganaie
Abstract
Purpose
Open-access repositories (OARs) are essential for openly disseminating intellectual knowledge on the internet and providing free access to it. The current study aims to evaluate the growth and development of OARs in the field of technology by investigating several characteristics such as coverage, OA policies, software type, content type, yearly growth, repository type and geographic contribution.
Design/methodology/approach
The Directory of Open Access Repositories (OpenDOAR) acts as the source for data harvesting, providing a quality-assured list of OARs across the globe.
Findings
The study found that 125 nations contributed a total of 4,045 repositories in the field of technology, with the USA leading the list with the most repositories. Most repositories were operated by institutions with multidisciplinary approaches. DSpace and EPrints were the preferred software types for repositories. The content most frequently uploaded by contributors was “research articles” and “electronic theses and dissertations”.
Research limitations/implications
The study is limited to the subject area technology as listed in OpenDOAR; therefore, the results may differ in other subject areas.
Practical implications
The work can benefit researchers across disciplines, and interested researchers can take this study as a base for evaluating online repositories. Moreover, policymakers and repository managers could also benefit from this study.
Originality/value
The study is the first of its kind, to the best of the authors’ knowledge, to investigate the repositories of subject technology in the open-access platform.
Abstract
Purpose
The purpose of this study is to show that the neo-documentary – or complementary – approach in Library and Information Science is by no means conservative, but highly necessary in today's digitized media landscape. An example from a digitized photo archive is chosen to demonstrate the importance of a complementary analysis that considers material aspects as well as social and mental ones.
Design/methodology/approach
Taking Jenna Hartel's description of the neo-documentary turn as a point of departure, the paper focuses on one case, the portrait of Johannes Abrahamsen Motka taken by Sophus Tromholt in 1883, and discusses different versions of the photograph, from glass plate negatives to digitized versions in different contexts and media.
Findings
Many of the same paratextual elements can be found in the different versions, including the digitized ones, helping the viewer establish a historical context; nevertheless, the images exhibited today are no longer the same as those taken by Tromholt at the end of the 19th century. Not only have the material properties changed, but also – and in most cases probably even more importantly – the social and mental aspects. More re-contextualization is needed for today's audiences to recognize and understand a historical photograph taken in a colonial context. Focusing on documents' material elements is not novel within the LIS field, but the so-called neo-documentary turn was also a reaction to political and technological developments during the 1980s and 1990s. The increased focus on understanding a document in a complementary way has demonstrated its impact during the last decades and is, at the same time, still a work in progress.
Research limitations/implications
As a scholar in the humanities, the author can only relate to, and therefore analyze, what can be experienced and observed at screen level.
Originality/value
In providing a case study, this article illustrates the necessity of employing a complementary approach when analyzing documents. This also supports the claim that the neo-documentary turn – or complementary turn, as it should rather be called – is by no means conservative, but highly necessary in today's digitized media landscape.
Musediq Tunji Bashorun, Yusuf Ayodeji Ajani and Olaronke Oyinlola Fagbola
Abstract
Purpose
This paper aims to explore the deep Web as a solution for displacement and replacement challenges in libraries, addressing the challenges, benefits, strategies and case studies.
Design/methodology/approach
The paper synthesizes existing literature on deep Web integration in libraries, providing a comprehensive analysis of insights from scholarly articles, case studies and expert opinions.
Findings
The deep Web grants libraries access to unique content, improving information access, fostering collaboration and enabling personalized content. However, security, privacy, ethics and data protection must be considered.
Originality/value
This paper contributes to the literature by providing a comprehensive examination of deep Web integration in libraries, offering valuable recommendations for navigating the changing landscape and leveraging the deep Web’s potential.
Anna Smith, Jennifer Higgs, José Ramón Lizárraga and Vaughn W.M. Watson
Abstract
Purpose
To better optimize the internal management system of book publishing and to cope with changes in the external market environment, the purpose of this paper is to pursue cross-border publishing with the help of a transmedia storytelling model and thereby realize the transformation and upgrading of the industry. Focusing on the relationship between the book publishing transmedia storytelling model and business performance, the moderating effect of the innovation environment on the different variables is assessed.
Design/methodology/approach
This paper proposes several feasible hypotheses based on existing research. The research data came from 365 managers of Chinese book publishing organizations, and the scale was validated using Cronbach’s α, composite reliability (CR) and average variance extracted (AVE). Once reliability and validity were verified, correlation and regression analyses were used to test the impact of the book publishing transmedia storytelling model on business performance and to analyze the moderating role of the innovation environment.
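Of the scale-validation statistics mentioned above, Cronbach's α has a compact closed form: α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item and σ²ₜ the variance of respondents' total scores. A minimal sketch (our own helper, not the authors' analysis code, which the abstract does not provide):

```python
from statistics import pvariance


def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    `scores` is a list of respondent rows, each a list of item scores.
    """
    k = len(scores[0])                         # number of items
    items = list(zip(*scores))                 # column-wise item scores
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(row) for row in scores]      # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))
```

Perfectly correlated items yield α = 1; values above roughly 0.7 are conventionally taken to indicate acceptable scale reliability.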
Findings
The results show that the book publishing transmedia storytelling model (content production, technology integration, organizational innovation, marketing integration) helps to improve business performance (market performance, financial performance), and the innovation environment has a positive moderating effect on the relationship between the book publishing transmedia storytelling model and business performance, which provides a guarantee for the transformation and upgrading of book publishing. The market information reflected in the innovation environment has a certain role in promoting the innovation and business performance of the book publishing transmedia storytelling model.
Research limitations/implications
The empirical evidence provides a theoretical link between the book publishing transmedia storytelling model and business performance, but there are still some shortcomings, and more factors, such as equity structure, government subsidies and research and development investment, should be included in future research. In addition, the scope of the research should be broadened on this basis to make the results of the data analysis more objective.
Practical implications
This paper introduces the transmedia storytelling model and deeply analyzes the relationship between the book publishing transmedia storytelling model and business performance, which is of great practical significance for optimizing the application and service quality of book publishing, prolonging the industrial chain, enhancing the interaction and participation of users and perfecting the business management system of the book publishing industry.
Originality/value
Research on the application of the book publishing transmedia storytelling model remains imperfect. Therefore, this paper not only helps to promote the innovation of book publishing organizational structure and improve the management system of business performance, but may also help to improve the innovation environment of book publishing enterprises and promote the diversification of industrial structure.
Fayaz Ahmad Loan, Aasif Mohammad Khan, Syed Aasif Ahmad Andrabi, Sozia Rashid Sozia and Umer Yousuf Parray
Abstract
Purpose
The purpose of the present study is to identify the active and dead links of uniform resource locators (URLs) associated with web references and to compare the effectiveness of Chrome, Google and WayBack Machine in retrieving the dead URLs.
Design/methodology/approach
The web references of Library Hi Tech from 2004 to 2008 were selected for analysis to fulfill the set objectives. The URLs were extracted from the articles to verify their accessibility in terms of persistence and decay. The URLs were then executed directly in the internet browser (Chrome), the search engine (Google) and the Internet Archive (WayBack Machine). The collected data were recorded in an Excel file and presented in tables/diagrams for further analysis.
Findings
From the total of 1,083 web references, a maximum number was retrieved by the WayBack Machine (786; 72.6 per cent) followed by Google (501; 46.3 per cent) and the lowest by Chrome (402; 37.1 per cent). The study concludes that the WayBack Machine is more efficient, retrieves a maximum number of missing web citations and fulfills the mission of preservation of web sources to a larger extent.
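The Internet Archive exposes a public availability endpoint that automates the kind of lookup described above. The sketch below is a hedged illustration of that workflow, not the study's method: the endpoint and JSON shape are the Archive's documented availability API, while the helper names are our own.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Internet Archive's public Wayback availability endpoint.
API = "https://archive.org/wayback/available?url="


def closest_snapshot(payload: dict):
    """Extract the closest archived snapshot URL from an API response."""
    snap = payload.get("archived_snapshots", {}).get("closest", {})
    return snap.get("url") if snap.get("available") else None


def lookup(url: str):
    """Query the Wayback Machine for an archived copy of `url`."""
    with urlopen(API + quote(url, safe="")) as resp:
        return closest_snapshot(json.load(resp))
```

Running `lookup` over a list of dead web references and counting non-None results reproduces, in spirit, the retrieval comparison reported here.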
Originality/value
A good number of studies have been conducted to analyze the persistence and decay of web references; however, the present study is unique as it compares the dead URL retrieval effectiveness of an internet browser (Chrome), the search engine giant (Google) and the WayBack Machine of the Internet Archive.
Research limitations/implications
The web references of a single journal, namely, Library Hi Tech, were analyzed for a five-year period only. A broader study across disciplines and sources may yield better results.
Practical implications
URL decay is becoming a major problem in the preservation and citation of web resources. The study has some healthy recommendations for authors, editors, publishers, librarians and web designers to improve the persistence of web references.
Lucinda McKnight and Cara Shipp
Abstract
Purpose
The purpose of this paper is to share findings from empirically driven conceptual research into the implications for English teachers of understanding generative AI as a “tool” for writing.
Design/methodology/approach
The paper reports early findings from an Australian National Survey of English teachers and interrogates the notion of the AI writer as “tool” through intersectional feminist discursive-material analysis of the metaphorical entailments of the term.
Findings
Through this work, the authors have developed the concept of “coloniser tool-thinking” and juxtaposed it with First Nations and feminist understandings of “tools” and “objects” to demonstrate risks to the pursuit of social and planetary justice through understanding generative AI as a tool for English teachers and students.
Originality/value
Bringing together white and First Nations English researchers in dialogue, the paper contributes a unique perspective to challenge widespread and common-sense use of “tool” for generative AI services.
Abhijit Thakuria, Indranil Chakraborty and Dipen Deka
Abstract
Purpose
Websites, search engines, recommender systems, artificial intelligence and digital libraries have the potential to support serendipity – unexpected interaction with information and ideas that can lead to valuable information discoveries. This paper aims to explore the current state of research into serendipity, particularly as related to information encountering.
Design/methodology/approach
This study provides a bibliometric review of 166 studies on serendipity extracted from the Web of Science. Two bibliometric analysis tools, HistCite and RStudio (Biblioshiny), are used on 30 years of data. Citation counts and bibliographic records of the papers are assessed using HistCite, while prominent sources, countries, keywords and the collaborative networks of authors and institutions are visualized using the RStudio (Biblioshiny) software. A total of 166 papers on serendipity were found for the period 1989 to 2022, and the most influential authors, articles, journals, institutions and countries among these were determined.
Findings
The highest number of papers (11) was published in 2019. Makri and Erdelez are the most influential authors contributing studies on serendipity. The Journal of Documentation is the top-ranking journal, and University College London is the affiliation contributing the highest number of studies on serendipity. The UK and the USA are the nations contributing the most research. The authorship pattern reveals that a single author was involved in the majority of the studies. The Green OA model is the most preferred route for archiving research articles among authors working on serendipity. In addition, the majority of the research outputs have received between 0 and 50 citations.
Originality/value
To the best of the authors’ knowledge, this paper may be the first bibliometric analysis of serendipity research using bibliometric tools in library and information science. The paper should open new avenues for other serendipity researchers.