Search results

1 – 10 of over 5000
Article
Publication date: 24 January 2023

Li Si, Li Liu and Yi He

Abstract

Purpose

This paper aims to understand the current development situation of scientific data management policy in China, analyze the content structure of the policy and provide a theoretical basis for the improvement and optimization of the policy system.

Design/methodology/approach

China's scientific data management policies were collected through multiple channels, including searches of government websites and policy and legal databases; after screening and integration, 209 policies were retained as the sample for analysis. A three-dimensional framework was constructed from the perspective of policy tools, combined with stakeholder and lifecycle theories, and the content of the policy texts was coded and quantitatively analyzed according to this framework.
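
A minimal sketch of this kind of framework-based coding analysis, assuming hypothetical policy identifiers, codes and column names (not the authors' data or code): coded text segments are tallied and cross-tabulated across the policy-tool and lifecycle dimensions to reveal imbalances.

import pandas as pd

# Hypothetical coded units: each row is one coded segment from a policy text.
coded_units = pd.DataFrame([
    {"policy_id": "P001", "tool": "supply-side",        "stage": "collection",   "stakeholder": "research institution"},
    {"policy_id": "P001", "tool": "environmental-side", "stage": "preservation", "stakeholder": "government"},
    {"policy_id": "P002", "tool": "demand-side",        "stage": "sharing",      "stakeholder": "data center"},
    {"policy_id": "P003", "tool": "supply-side",        "stage": "sharing",      "stakeholder": "researcher"},
])

# Frequency of each policy-tool type (one dimension of the framework).
tool_counts = coded_units["tool"].value_counts()

# Cross-tabulation of policy tools against lifecycle stages: the kind of table
# used to spot imbalances in how tools are applied across the lifecycle.
tool_by_stage = pd.crosstab(coded_units["tool"], coded_units["stage"])

print(tool_counts)
print(tool_by_stage)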

Findings

China's scientific data management policies can be divided into four stages according to the time sequence: infancy, preliminary exploration, comprehensive promotion and key implementation. The policies use a combination of three types of policy tools: supply-side, environmental-side and demand-side, involving multiple stakeholders and covering all stages of the lifecycle. However, the policy tools and their application to stakeholders and lifecycle stages are imbalanced. Future scientific data management policy should strengthen the balance of policy tools, promote the participation of multiple subjects and focus on supervision across the whole lifecycle.

Originality/value

This paper constructs a three-dimensional analytical framework and uses content analysis to quantitatively analyze scientific data management policy texts, extending the research perspective and research content in the field of scientific data management. The study identifies policy focuses and proposes several strategies that will help optimize the scientific data management policy.

Details

Aslib Journal of Information Management, vol. 76 no. 2
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 25 March 2024

Yusuf Ayodeji Ajani, Emmanuel Kolawole Adefila, Shuaib Agboola Olarongbe, Rexwhite Tega Enakrire and Nafisa Rabiu

Abstract

Purpose

This study aims to examine Big Data and the management of libraries in the era of the Fourth Industrial Revolution and its implications for policymakers in Nigeria.

Design/methodology/approach

A qualitative methodology was used, involving the administration of open-ended questionnaires to librarians from six selected federal universities located in Southwest Nigeria.

Findings

The findings of this research highlight that a significant proportion of librarians are well-acquainted with the relevance of big data and its potential to positively revolutionize library services. Librarians generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences.

Research limitations/implications

This study exclusively focuses on the Nigerian context, overlooking insights from other African countries. As a result, it may not be possible to generalize the study’s findings to the broader African library community.

Originality/value

To the best of the authors' knowledge, this study is unique in reporting that Nigerian librarians generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Open Access
Article
Publication date: 11 April 2024

Anna Prenestini, Stefano Calciolari and Arianna Rota

Abstract

Purpose

During the 1990s, Italian healthcare organisations (HOs) underwent a process of corporatisation, and the most innovative HOs introduced the balanced scorecard (BSC) to address the need for broader accountability. Currently, there is a limited understanding of the dynamics and outcomes of such a process. Therefore, this study aims to explore whether the BSC is still considered an effective performance management tool and analyse the factors driving and hindering its evolution and endurance in public and non-profit HOs.

Design/methodology/approach

We conducted a retrospective longitudinal analysis of two pioneering cases in the adoption of the BSC: one in a public hospital and the other in a non-profit hospital. Data collection relied on accessing institutional documents and reports from the early 2000s to the present, as well as conducting semi-structured interviews with the internal sponsors of the BSC.

Findings

We found evidence of three main categories of factors that trigger or hinder the adoption and development of the BSC: (1) the role of the internal sponsor and professionals’ commitment; (2) information technology and the controller’s technological skills; and (3) the relationship between the management and professionalism logics during the implementation process. At the same time, there is no evidence to suggest that specific technical features of the BSC influence its endurance.

Originality/value

The paper contributes to the debate on the key factors for implementing and sustaining multidimensional control systems in professional organisations. It emphasises the importance of knowledge-based assets and distinctive internal capabilities for the success of the business. The implications of the BSC legacy are discussed, along with future developments of multidimensional control tools aimed at supporting strategy execution.

Details

Journal of Health Organization and Management, vol. 38 no. 9
Type: Research Article
ISSN: 1477-7266

Article
Publication date: 1 February 2024

Ismael Gómez-Talal, Lydia González-Serrano, José Luis Rojo-Álvarez and Pilar Talón-Ballestero

Abstract

Purpose

This study aims to address the global food waste problem in restaurants by analyzing customer sales information provided by restaurant tickets to gain valuable insights into directing sales of perishable products and optimizing product purchases according to customer demand.

Design/methodology/approach

A system based on unsupervised machine learning (ML) data models was created to provide a simple and interpretable management tool. This system performs analysis based on two elements: first, it consolidates and visualizes mutual and nontrivial relationships between information features extracted from tickets using multicomponent analysis, bootstrap resampling and ML domain description. Second, it presents statistically relevant relationships in color-coded tables that provide food waste-related recommendations to restaurant managers.
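
A minimal sketch of one ingredient of such a pipeline, assuming hypothetical column names, sample sizes and thresholds (this is not the authors' system): ticket lines are bootstrap-resampled, and product-month sales shares whose lower bootstrap bound still exceeds the product's overall share are flagged, the kind of statistically relevant cell that would be color-coded for managers.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical ticket lines: one row per product sold on a ticket.
tickets = pd.DataFrame({
    "product": rng.choice(["salad", "steak", "soup", "fish"], size=2000),
    "month": rng.choice(range(1, 13), size=2000),
})

# Observed share of each product within each month.
observed = pd.crosstab(tickets["product"], tickets["month"], normalize="columns")

# Bootstrap: resample ticket lines with replacement and recompute the shares.
boot = []
for _ in range(500):
    sample = tickets.sample(frac=1.0, replace=True)
    boot.append(pd.crosstab(sample["product"], sample["month"], normalize="columns"))

# 5th percentile of each product-month share across bootstrap replicates.
lower = pd.concat(boot).groupby(level=0).quantile(0.05)

# Flag cells whose lower bound still exceeds the product's overall share:
# candidates for "this product sells unusually well in this month".
overall_share = tickets["product"].value_counts(normalize=True)
flags = lower.gt(overall_share, axis=0)

print(observed.round(2))
print(flags)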

Findings

The study identified relationships between products and customer sales in specific months. Other ticket elements were also related, such as products with days, hours or functional areas, and products with other products (cross-selling). Big data (BD) technology helped analyze restaurant tickets and extract information on product sales behavior.

Research limitations/implications

This study addresses food waste in restaurants using BD and unsupervised ML models. Despite limitations in ticket information and lack of product detail, it opens up research opportunities in relationship analysis, cross-selling, productivity and deep learning applications.

Originality/value

The value and originality of this work lie in the application of BD and unsupervised ML technologies to analyze restaurant tickets and obtain information on product sales behavior. Better sales projection can adjust product purchases to customer demand, reducing food waste and optimizing profits.

Book part
Publication date: 23 April 2024

Omar Arabiat

Abstract

This study offers an in-depth examination of Google Bard, an advanced artificial intelligence chatbot created by Google, focusing specifically on its potential impact on academic research. The discussion aims to comprehensively explore the features of Google Bard, highlighting its capabilities in data management, facilitating collaborative discussions and enhancing access to complex research. Alongside these strengths, the study also examines the limitations and ethical considerations associated with this innovative tool. The system's functionality is constrained by its pre-established algorithms and training data. In addition, there are significant concerns regarding data privacy, potential biases in its responses stemming from its training data and the wider societal implications of heavy reliance on machine-generated content. Responsible and ethical use of Bard requires Google to communicate transparently about its development process. Given the prominent functionalities demonstrated by Google Bard, researchers must rigorously examine the information it presents to guard against inadvertently propagating misinformation or biased viewpoints. This will lay the groundwork for its effective integration into academic research methodology.

Details

Technological Innovations for Business, Education and Sustainability
Type: Book
ISBN: 978-1-83753-106-6

Article
Publication date: 22 February 2024

Ranjeet Kumar Singh

Abstract

Purpose

Although the challenges associated with big data are increasing, the question of the most suitable big data analytics (BDA) platform for libraries remains significant. The purpose of this study is to propose a solution to this problem.

Design/methodology/approach

The current study identifies relevant literature and provides a review of big data adoption in libraries. It also presents a step-by-step guide for the development of a BDA platform using the Apache Hadoop Ecosystem. To test the system, an analysis of library big data was performed using Apache Pig, a tool from the Apache Hadoop Ecosystem. This establishes the effectiveness of the Apache Hadoop Ecosystem as a powerful BDA solution for libraries.

Findings

It can be inferred from the literature that libraries and librarians have not taken the possibility of big data services in libraries very seriously. The literature also suggests that no significant effort has been made to establish any BDA architecture in libraries. This study establishes the Apache Hadoop Ecosystem as a possible solution for delivering BDA services in libraries.

Research limitations/implications

The present work suggests that libraries adopt the idea of providing various big data services by developing a BDA platform: for instance, assisting researchers in understanding big data, offering cleaning and curation of big data by skilled and experienced data managers, and providing the infrastructural support to store, process, manage, analyze and visualize big data.

Practical implications

The study concludes that Apache Hadoop's Hadoop Distributed File System (HDFS) and MapReduce components significantly reduce the complexity of big data storage and processing, respectively, and that Apache Pig, with its Pig Latin scripting language, processes big data efficiently and responds to queries with a quick response time.
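
The paper performs this analysis in Pig Latin on the Hadoop Ecosystem; as a stand-in sketch only, the following plain-Python snippet mimics the GROUP BY / COUNT pattern such a Pig script applies to circulation logs. The log format and field names here are hypothetical, not taken from the study.

from collections import defaultdict

# Hypothetical tab-separated circulation records: user_type, item_id, subject, timestamp.
log_lines = [
    "student\tBK1001\tchemistry\t2023-01-04T10:12:00",
    "faculty\tBK2040\thistory\t2023-01-04T11:03:00",
    "student\tBK1001\tchemistry\t2023-01-05T09:47:00",
]

# "Map" phase: emit (subject, 1) for every loan record.
mapped = []
for line in log_lines:
    user_type, item_id, subject, timestamp = line.split("\t")
    mapped.append((subject, 1))

# "Reduce" phase: sum the counts per subject, like Pig's GROUP ... BY / COUNT(...).
loans_per_subject = defaultdict(int)
for subject, count in mapped:
    loans_per_subject[subject] += count

for subject, total in sorted(loans_per_subject.items(), key=lambda kv: -kv[1]):
    print(subject, total)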

Originality/value

According to the study, significantly fewer efforts have been made to analyze big data from libraries. Furthermore, it has been discovered that acceptance of the Apache Hadoop Ecosystem as a solution to big data problems in libraries is not widely discussed in the literature, although Apache Hadoop is regarded as one of the best frameworks for big data handling.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 14 February 2024

Batuhan Kocaoglu and Mehmet Kirmizi

Abstract

Purpose

This study aims to develop a modular and prescriptive digital transformation maturity model whose constituent elements have conceptual integrity, and to reveal the priority weights of the maturity model's components.

Design/methodology/approach

A literature review with a concept-centric analysis clarifies the characteristics of the constituent parts and reveals the gaps in each component. The interdependency network among model dimensions and the priority weights are then identified using the decision-making trial and evaluation laboratory (DEMATEL)-based analytic network process (ANP) method with 19 industrial experts, and the results are robustly validated with three different analyses. Finally, the applicability of the developed maturity model and its constituent elements is validated in the manufacturing industry through two case applications following a strict protocol.
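
For readers unfamiliar with the technique, the core DEMATEL computation can be sketched in a few lines; the 4x4 direct-influence matrix below is illustrative only, not the study's expert data, and the full DEMATEL-based ANP adds further steps (thresholding, supermatrix weighting) beyond this sketch.

import numpy as np

# Average expert ratings of how strongly dimension i influences dimension j (0-4 scale).
D = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [3, 2, 1, 0],
], dtype=float)

# Normalize by the largest row or column sum.
s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s

# Total-relation matrix T = N (I - N)^(-1).
I = np.eye(len(D))
T = N @ np.linalg.inv(I - N)

# Prominence (r + c) and relation (r - c) for each dimension.
r = T.sum(axis=1)   # total influence given
c = T.sum(axis=0)   # total influence received
print("prominence:", r + c)
print("relation:  ", r - c)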

Findings

Results obtained from the DEMATEL-based ANP suggest that smart processes, with a priority weight of 17.91%, are the most important subdimension for reaching higher digital maturity. Customer integration and value, with a priority weight of 17.30%, is the second most important subdimension, and talented employee, with 16.24%, is the third.

Research limitations/implications

The developed maturity model enables companies to make factual assessments with a specially designed measurement instrument containing incrementally evolved questions, to prioritize action fields and investment strategies according to maturity index calculations, and to adapt to dynamic change in the environment through spiral maturity level identification.

Originality/value

A novel spiral maturity level identification is proposed, with conceptual consistency for evolutionary progress, to adapt to dynamic change. A measurement instrument incrementally structured with 234 statements, and a measurement method based on the priority weights that leads to a maturity index, are designed to assess digital maturity, create an improvement roadmap toward higher maturity levels, and prioritize actions and investments without any external support or assistance.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 19 April 2023

Aasif Mohammad Khan, Fayaz Ahmad Loan, Umer Yousuf Parray and Sozia Rashid

Abstract

Purpose

Data sharing is increasingly being recognized as an essential component of scholarly research and publishing. Sharing data improves results and propels research and discovery forward. Given the importance of data sharing, the purpose of the study is to unveil the present scenario of research data repositories (RDR) and to shed light on the strategies and tactics followed by different countries for the efficient organization and optimal use of scientific literature.

Design/methodology/approach

The data for the study were collected from the re3data registry of RDR (re3data.org), which covers RDR from different academic disciplines and provides the filtering options "Search" and "Browse" to access the repositories. Using these options, the researchers collected repository metadata, i.e. country-wise contribution, content type, repository language interface, software usage, metadata standards and data access type. The data were then exported to Google Sheets for analysis and visualization.
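
The study gathers this metadata via re3data's own interface and Google Sheets; as an illustration only, a comparable harvest can be scripted. The snippet below assumes the publicly documented re3data API endpoint https://www.re3data.org/api/v1/repositories returns an XML list with one repository element per entry; both the endpoint and the element names are assumptions to verify against the current API documentation.

import xml.etree.ElementTree as ET
import requests

# Assumed endpoint for the list of registered repositories (verify before use).
resp = requests.get("https://www.re3data.org/api/v1/repositories", timeout=30)
resp.raise_for_status()

root = ET.fromstring(resp.content)
# Assumed element name for one repository entry in the returned XML.
repositories = root.findall(".//repository")
print("repositories listed:", len(repositories))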

Findings

The re3data registry holds a rich and diverse collection of data repositories from the majority of countries all over the world. It is revealed that English is the dominant language, and the most widely used software for creating data repositories is "DataVerse", followed by "DSpace" and "MySQL". The most frequently used metadata standards are "Dublin Core" and the "DataCite metadata schema". The majority of repositories are open, more than half of the repositories are "disciplinary" in nature, and the most significant data sources include "scientific and statistical data" followed by "standard office documents".

Research limitations/implications

The main limitation of the study is that the findings are based on the data collected through a single registry of repositories, and only a few characteristic features were investigated.

Originality/value

The study will benefit countries with a small number of data repositories, or none at all, by highlighting the tools and techniques used by the top repositories to ensure long-term storage of and accessibility to research data. In addition, the study provides a global overview of RDR and their characteristic features.

Details

Information Discovery and Delivery, vol. 52 no. 1
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 1 June 2023

Laura Zapata, Gerardo Ibarra and Pierre-Henri Blancher

Abstract

Purpose

New ways of working have rapidly increased in organizations, promising employees better control over their work time and space, and more autonomy. The present study analyzes the relationship between new ways of working and employee engagement and productivity.

Design/methodology/approach

A survey was conducted to evaluate organizational practices developed based on flexible schemes and the relevance of employee engagement for better productivity based on digital tools. Data were analyzed using structural equation modeling.
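
The abstract reports structural equation modeling but does not name the software; purely as an illustration, the sketch below fits a comparable path model with the semopy package on synthetic data. The package choice, variable names and model specification are all assumptions, not the authors' analysis.

import numpy as np
import pandas as pd
from semopy import Model

# Synthetic survey-like data with the hypothesized structure.
rng = np.random.default_rng(1)
n = 300
flexibility = rng.normal(size=n)
digital_tools = rng.normal(size=n)
engagement = 0.5 * flexibility + 0.3 * digital_tools + rng.normal(scale=0.5, size=n)
productivity = 0.6 * engagement + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({
    "flexibility": flexibility,
    "digital_tools": digital_tools,
    "engagement": engagement,
    "productivity": productivity,
})

# Structural model: engagement depends on flexible work and digital tools,
# and productivity depends on engagement.
desc = """
engagement ~ flexibility + digital_tools
productivity ~ engagement
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values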

Findings

New ways of working require an integration of workspace design, social interaction and individual wellness. Organizations need to recognize that employees' trust, commitment and passion are fundamental to facing current and future changes. Flexibility in time and space and digital tools for work are critical.

Practical implications

Personalization of organizational practices to support individual well-being, together with flexible and hybrid schemes of work, is needed. It is also necessary to develop policies collaboratively so that people work together respectfully in a hybrid environment.

Social implications

The hybrid work format is allowing women to balance career and childcare, reducing the wage gap with men. The green imperative has also played a role by reducing the amount of carbon monoxide produced by commuting.

Originality/value

The present study exposes how organizational practices must ensure employee well-being and autonomy to perform their tasks. In this regard, employees need to be recognized as individuals, physically and mentally. Attempting to force a one-size-fits-all solution can have detrimental effects on the workforce, particularly on women, people of lower socioeconomic status and people in less advanced economies. Personalization requires empowerment and democratization at work.

Details

Journal of Organizational Effectiveness: People and Performance, vol. 11 no. 1
Type: Research Article
ISSN: 2051-6614

Article
Publication date: 28 February 2024

Mushahid Hussain Baig, Jin Xu, Faisal Shahzad and Rizwan Ali

Abstract

Purpose

This study aims to investigate the association between FinTech innovation (FinTechINN) and firm performance (FP) by considering the role of knowledge assets (KA) as a causal mechanism underlying the FinTechINN–FP association.

Design/methodology/approach

In this study, the authors consider panel data on 1,049 Chinese A-listed firms and construct a structural model linking corporate FinTech innovation, knowledge assets and firm performance, while addressing endogeneity issues in the analyses over the period 2014–2022. The modified value added intellectual coefficient (MVAIC) and research and development (R&D) expenses are used as proxy measures for knowledge assets, alongside governance and corporate performance measures.
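
For context, one common formulation of the modified VAIC measure can be written as a short function; the abstract does not specify which variant or inputs the authors use, so the components and figures below are illustrative assumptions only.

def mvaic(value_added, human_capital, structural_capital,
          capital_employed, relational_capital):
    hce = value_added / human_capital        # human capital efficiency
    sce = structural_capital / value_added   # structural capital efficiency
    cee = value_added / capital_employed     # capital employed efficiency
    rce = relational_capital / value_added   # relational capital efficiency (the "modified" addition)
    return hce + sce + cee + rce

# Hypothetical firm-year figures (in millions).
print(mvaic(value_added=120.0, human_capital=60.0, structural_capital=40.0,
            capital_employed=300.0, relational_capital=15.0))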

Findings

According to the findings of this study, FinTech innovation (FinTechINN) has a positive and significant effect on firm performance. In particular, the findings disclose that FinTech innovation is linked with knowledge assets, that it indirectly affects firm performance and that the association between FinTech innovation and firm performance is partially mediated by knowledge assets (MVAIC and R&D expenses).

Originality/value

Rooted in the dynamic capability and resource-based views, this study pioneers an empirical exploration of the association between FinTech innovation and firm performance. Moreover, it introduces the novel dimension of firm-level knowledge assets acting as a mediating factor within this relationship.

Details

International Journal of Innovation Science, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-2223
