Search results

1 – 10 of over 5000
Article
Publication date: 24 January 2023

Li Si, Li Liu and Yi He

This paper aims to understand the current development situation of scientific data management policy in China, analyze the content structure of the policy and provide a…

Abstract

Purpose

This paper aims to understand the current development situation of scientific data management policy in China, analyze the content structure of the policy and provide a theoretical basis for the improvement and optimization of the policy system.

Design/methodology/approach

China's scientific data management policies were obtained through various channels, such as searching government websites and policy and legal databases, and 209 policies were finally identified as the sample for analysis after screening and integration. A three-dimensional framework was constructed from the perspective of policy tools, combining stakeholder and lifecycle theories, and the content of the policy texts was coded and quantitatively analyzed according to this framework.

Findings

China's scientific data management policies can be divided into four stages in time sequence: infancy, preliminary exploration, comprehensive promotion and key implementation. The policies use a combination of three types of policy tools (supply-side, environmental-side and demand-side), involving multiple stakeholders and covering all stages of the lifecycle. However, the policy tools and their application to stakeholders and lifecycle stages are imbalanced. Future scientific data management policy should strengthen the balance of policy tools, promote the participation of multiple subjects and focus on supervision of the whole lifecycle.

Originality/value

This paper constructs a three-dimensional analytical framework and uses content analysis to quantitatively analyze scientific data management policy texts, extending the research perspective and research content in the field of scientific data management. The study identifies policy focuses and proposes several strategies that will help optimize the scientific data management policy.

Details

Aslib Journal of Information Management, vol. 76 no. 2
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 22 February 2024

Ranjeet Kumar Singh

Although the challenges associated with big data are increasing, the question of the most suitable big data analytics (BDA) platform in libraries is always significant. The…


Abstract

Purpose

Although the challenges associated with big data are increasing, the question of the most suitable big data analytics (BDA) platform in libraries is always significant. The purpose of this study is to propose a solution to this problem.

Design/methodology/approach

The current study identifies relevant literature and provides a review of big data adoption in libraries. It also presents a step-by-step guide for the development of a BDA platform using the Apache Hadoop Ecosystem. To test the system, an analysis of library big data using Apache Pig, which is a tool from the Apache Hadoop Ecosystem, was performed. It establishes the effectiveness of Apache Hadoop Ecosystem as a powerful BDA solution in libraries.

Findings

It can be inferred from the literature that libraries and librarians have not taken the possibility of big data services in libraries very seriously. Also, the literature suggests that there is no significant effort made to establish any BDA architecture in libraries. This study establishes the Apache Hadoop Ecosystem as a possible solution for delivering BDA services in libraries.

Research limitations/implications

The present work suggests adapting the idea of providing various big data services in a library by developing a BDA platform, for instance, providing assistance to the researchers in understanding the big data, cleaning and curation of big data by skilled and experienced data managers and providing the infrastructural support to store, process, manage, analyze and visualize the big data.

Practical implications

The study concludes that Apache Hadoop’s Hadoop Distributed File System (HDFS) and MapReduce components significantly reduce the complexities of big data storage and processing, respectively, and that Apache Pig, using the Pig Latin scripting language, is very efficient at processing big data and responding to queries quickly.
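The kind of aggregation the study runs through Apache Pig can be pictured as a plain MapReduce job over library records. The sketch below, written in Python rather than Pig Latin, is a minimal illustration under assumed data; the circulation records and field names are hypothetical, not from the paper.

```python
from collections import defaultdict

# Hypothetical circulation records: (member_id, book_category) pairs,
# standing in for the library big data the study loads into HDFS.
records = [
    ("m1", "science"), ("m2", "fiction"), ("m1", "science"),
    ("m3", "history"), ("m2", "science"), ("m1", "fiction"),
]

def map_phase(recs):
    # Emit a (category, 1) pair for every loan, as a MapReduce mapper would.
    for _, category in recs:
        yield (category, 1)

def reduce_phase(pairs):
    # Group by key and sum the counts, as a MapReduce reducer would.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

loans_per_category = reduce_phase(map_phase(records))
print(loans_per_category)  # {'science': 3, 'fiction': 2, 'history': 1}
```

The same GROUP BY / COUNT pattern is what a Pig Latin script would express declaratively, with Hadoop distributing the map and reduce phases across the cluster.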

Originality/value

According to the study, significantly fewer efforts have been made to analyze big data from libraries. Furthermore, acceptance of the Apache Hadoop Ecosystem as a solution to big data problems in libraries is not widely discussed in the literature, although Apache Hadoop is regarded as one of the best frameworks for big data handling.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Keywords

Article
Publication date: 6 February 2023

Eric Zanghi, Milton Brown Do Coutto Filho and Julio Cesar Stacchini de Souza

The current and modern electrical distribution networks, named smart grids (SGs), use advanced technologies to accomplish all the technical and nontechnical challenges naturally…

Abstract

Purpose

Modern electrical distribution networks, known as smart grids (SGs), use advanced technologies to meet the technical and nontechnical challenges naturally demanded by energy applications. Energy metering collection is one of these challenges, ranging from the most basic (i.e., visual assessment) to the expensive advanced metering infrastructure (AMI) using networks of intelligent meters. The AMIs’ data acquisition and system monitoring environment require enhancement of some routine tasks. This paper aims to propose a methodology that uses a distributed and sustainable approach to manage wide-range metering networks, focused on using current public or private telecommunication infrastructure, optimizing implementation and operation, increasing reliability and decreasing costs.

Design/methodology/approach

Inspired by blockchain technology, a collaborative metering system architecture is conceived, managing massive data sets collected from the grid. The use of cryptography handles data integrity and security issues.
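The blockchain-inspired integrity handling described here can be pictured as a hash chain over meter readings: each reading is bound to its predecessor's digest, so tampering anywhere invalidates the rest of the chain. The sketch below is an illustrative assumption of how such chaining might work, not the paper's implementation; the meter IDs and readings are hypothetical.

```python
import hashlib
import json

def link_reading(prev_hash: str, reading: dict) -> dict:
    # Chain each meter reading to its predecessor, blockchain-style:
    # altering any reading breaks every later hash.
    payload = json.dumps(reading, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"reading": reading, "prev_hash": prev_hash, "hash": digest}

def verify_chain(chain) -> bool:
    # Recompute each block's digest and check the linkage.
    for i, block in enumerate(chain):
        payload = json.dumps(block["reading"], sort_keys=True)
        expected = hashlib.sha256((block["prev_hash"] + payload).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
prev = "0" * 64  # genesis value
for kwh in [12.5, 13.1, 11.8]:  # hypothetical readings from one meter
    block = link_reading(prev, {"meter": "m-001", "kwh": kwh})
    chain.append(block)
    prev = block["hash"]

assert verify_chain(chain)
chain[1]["reading"]["kwh"] = 99.9  # tamper with one reading
assert not verify_chain(chain)     # the chain no longer verifies
```

A production AMI would add signatures and distributed consensus on top of this chaining, which the simple sketch omits.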

Findings

Robust proof-of-concept simulation results are presented concerning the resilience and performance of the proposed distributed remote metering system.

Originality/value

The methodology proposed in this work is an innovative AMI solution related to SGs. Regardless of the implementation, operation and maintenance of AMIs, the proposed solution is unique, using legacy and new technologies together in a reliable way.

Details

International Journal of Innovation Science, vol. 16 no. 2
Type: Research Article
ISSN: 1757-2223

Keywords

Article
Publication date: 20 November 2023

Laksmi Laksmi, Muhammad Fadly Suhendra, Shamila Mohamed Shuhidan and Umanto Umanto

This study aims to identify the readiness of institutional repositories in Indonesia to implement digital humanities (DH) data curation. Data curation is a method of managing…

Abstract

Purpose

This study aims to identify the readiness of institutional repositories in Indonesia to implement digital humanities (DH) data curation. Data curation is a method of managing research data that maintains the data’s accuracy and makes it available for reuse. It requires controlled data management.

Design/methodology/approach

The study uses a qualitative approach. Data collection was carried out through a focus group discussion in September–October 2022, interviews and document analysis. The informants came from four institutions in Indonesia.

Findings

The findings reveal that the national research repository has implemented data curation, albeit not optimally. Within the case study, one of the university repositories diligently curates its humanities data and has established networks extending to various ASEAN countries. Both the national archive repository and the other university repository have implemented rudimentary data curation practices but have not prioritized them. In conclusion, the readiness of the national research repository and the first university repository stands at the high-capacity stage, while the national archive repository and the other university repository are at the established and early stages of data curation, respectively.

Research limitations/implications

This study examined only four repositories due to time constraints. Nonetheless, the four institutions were able to provide a comprehensive picture of their readiness for DH data curation management.

Practical implications

This study provides insight into strategies for developing DH data curation activities in institutional repositories. It also highlights the need for professional development for curators so they can devise and implement stronger ownership policies and data privacy to support a data-driven research agenda.

Originality/value

This study describes the preparations that must be considered by institutional repositories in the development of DH data curation activities.

Article
Publication date: 30 January 2024

Li Si and Xianrui Liu

This research aims to explore the research data ethics governance framework and collaborative network to optimize research data ethics governance practices, to balance the…

Abstract

Purpose

This research aims to explore the research data ethics governance framework and collaborative network to optimize research data ethics governance practices, to balance the relationship between data development and utilization, open sharing, data security and to reduce the ethical risks that may arise from data sharing and utilization.

Design/methodology/approach

This study explores the framework and collaborative network of research data ethics policies, using the UK as an example. 78 policies from the UK government, universities, research institutions, funding agencies, publishers, databases, libraries and third-party organizations were obtained. Adopting grounded theory (GT) and social network analysis (SNA), NVivo 12 is used to analyze these samples and summarize the research data ethics governance framework, and Ucinet and NetDraw are used to reveal the collaborative networks in the policies.

Findings

Results indicate that the framework covers governance context, subjects and measures. The governance context comprises context description and analysis of data ethics issues. Governance subjects consist of defining the subjects and facilitating their collaboration. Governance measures include governance guidance and ethics governance initiatives across the data lifecycle. The collaborative network indicates that research institutions play a central role in ethics governance. The core of the governance content comprises ethics governance initiatives, governance guidance and governance context description.
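The centrality finding can be illustrated with a toy degree centrality computation of the kind Ucinet performs on a stakeholder network. The edge list below is hypothetical, assembled only from the stakeholder types the abstract names; it is not the paper's actual co-occurrence network of 78 policies.

```python
from collections import defaultdict

# Hypothetical co-occurrence edges between stakeholder types.
edges = [
    ("research institution", "university"),
    ("research institution", "funding agency"),
    ("research institution", "publisher"),
    ("research institution", "library"),
    ("university", "library"),
    ("funding agency", "publisher"),
]

# Count ties per node (degree).
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Normalized degree centrality: ties / (n - 1) possible ties.
n = len(degree)
centrality = {node: d / (n - 1) for node, d in degree.items()}
most_central = max(centrality, key=centrality.get)
print(most_central)  # research institution
```

In this toy network the research institution touches every other node, so its normalized centrality is 1.0, mirroring the central-actor pattern the study reports.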

Research limitations/implications

This research provides new insights for policy analysis by combining GT and SNA methods. Research data ethics and its governance are conceptualized to complete data governance and research ethics theory.

Practical implications

A research data ethics governance framework and collaborative network are revealed, and actionable guidance for addressing essential aspects of research data ethics and multiple subjects to confer their functions in collaborative governance is provided.

Originality/value

This study analyzes policy text using qualitative and quantitative methods, ensuring fine-grained content profiling and improving policy research. A typical research data ethics governance framework is revealed. Various stakeholders' roles and priorities in collaborative governance are explored. These contribute to improving governance policies and governance levels in both theory and practice.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Keywords

Expert briefing
Publication date: 6 February 2024

This creates a paradox, since, while AI-generated solutions are crucial to help solve the climate emergency, their very deployment is also adding to the problem. To tackle this…

Details

DOI: 10.1108/OXAN-DB285037

ISSN: 2633-304X

Keywords

Article
Publication date: 13 April 2023

Ahmet Coşkun Yıldırım and Erkan Erdil

This study aims to understand the impacts of Covid-19 on the progression of digitalization of banks in an emerging market. For this purpose, business model canvas (BMC) is used as…

Abstract

Purpose

This study aims to understand the impacts of Covid-19 on the progression of digitalization of banks in an emerging market. For this purpose, the business model canvas (BMC) is used as a theoretical framework to explore these effects on each business element of Turkish banks’ business strategies.

Design/methodology/approach

Data are collected through structured interviews with the top managers of seven diversified banks. Interview questions are designed based on BMC.

Findings

The results show that the onset of Covid-19 was a shock that made digitalization a strategic issue necessitating urgent change in many business elements of banks, such as customer relationships, communication channels, resource allocation, partnerships and financing. Further, it stimulated the redefinition of value propositions and collaboration/interaction among all financial institutions through digital platforms.

Practical implications

BMC can be used to explain decision-making and business processes of banks for exploring the effect of recent and/or unexpected developments in the business environment of an emerging economy. The results provide insights and recommendations to managers of financial institutions into the impacts of Covid-19 on banks’ operational and strategic processes. That allows financial institutions, including Fintechs, to use this information for taking precautions and proactive actions against shocks.

Originality/value

This study is an initial attempt to explore the impacts of the Covid-19 on banks in an emerging economy by using BMC. With that, this study contributes to the literature by explaining the effect of progression of digitalization in banking from a strategic business model perspective using a qualitative research method.

Details

Qualitative Research in Financial Markets, vol. 16 no. 1
Type: Research Article
ISSN: 1755-4179

Keywords

Article
Publication date: 7 December 2023

Leanne Bowler, Irene Lopatovska and Mark S. Rosin

The purpose of this study is to explore teen-adult dialogic interactions during the co-design of data literacy activities in order to determine the nature of teen thinking, their…

Abstract

Purpose

The purpose of this study is to explore teen–adult dialogic interactions during the co-design of data literacy activities in order to determine the nature of teen thinking, teens’ emotions and level of engagement, and the power relationships between teens and adults in the context of data literacy. This study conceives of co-design as a learning space for data literacy.

Design/methodology/approach

The study conceives of co-design as a learning space for teens. Linguistic Inquiry and Word Count (LIWC-22), a natural language processing (NLP) software tool, was used to examine the linguistic measures of Analytic Thinking, Clout, Authenticity and Emotional Tone using transcriptions of recorded Data Labs with teens and adults.

Findings

LIWC-22 scores on the linguistic measures Analytic Thinking, Clout, Authenticity and Emotional Tone indicate that teens had a high level of friendly engagement, a relatively low sense of power compared with the adult co-designers, medium levels of spontaneity and honesty and the prevalence of positive emotions during the co-design sessions.

Practical implications

This study provides a concrete example of how to apply NLP in the context of data literacy in the public library, mapping the LIWC-22 findings to STEM-focused informal learning. It adds to the understanding of assessment/measurement tools and methods for designing data literacy education, stimulating further research and discussion on the ways to empower youth to engage more actively in informal learning about data.

Originality/value

This study applies a novel approach for exploring teen engagement within a co-design project tasked with the creation of youth-oriented data literacy activities.

Details

Information and Learning Sciences, vol. 125 no. 3/4
Type: Research Article
ISSN: 2398-5348

Keywords

Article
Publication date: 12 February 2024

Lei Ma, Ben Zhang, Kaitong Liang, Yang Cheng and Chaonan Yi

The embedding of digital technology and the fuzzy organizational boundary have changed the operation of platform innovation ecosystem (PIE). Specifically, as an important energy…

Abstract

Purpose

The embedding of digital technology and fuzzy organizational boundaries have changed the operation of the platform innovation ecosystem (PIE). Specifically, as an important energy source of the PIE, the internal logic of knowledge flow needs to be reconsidered in the context of the digital age, which will help in selecting cultivation and governance strategies for the PIE.

Design/methodology/approach

A dual case analysis is applied to open the “black box” of knowledge flow in the PIE from the perspective of digital technology enablement, taking intellectual property (IP) operation platforms as cases.

Findings

The research findings are as follows: (1) The knowledge flow mechanism of the PIE is mainly demonstrated through the processes of knowledge acquisition, knowledge integration and knowledge spillover. During this process, connectivity empowerment and scenario empowerment realize the digital empowerment of the platform. (2) Connectivity empowerment provides a channel of knowledge acquisition for the digital connection between participants in the PIE. In the process of knowledge integration, scenario empowerment improves the opportunities for accurate matching and collaborative innovation between knowledge suppliers and demanders, and enhances the value of knowledge. The dual effect of connectivity empowerment and scenario empowerment has accelerated knowledge spillover in the PIE. In particular, connectivity empowerment expands the range of knowledge spillover, and scenario empowerment affects the generativity of the platform, enhancing the platform’s capability to embed and expand its value network. (3) Participants benefit from the PIE enabled by digital technology through three key modules (knowledge acquisition, knowledge integration and knowledge spillover) as the result of knowledge flow.

Originality/value

This study focuses on the knowledge flow mechanism of the PIE enabled by digital technology, which enriches PIE theory and offers insights for the cultivation of digital platform ecosystems.

Details

European Journal of Innovation Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1460-1060

Keywords

Article
Publication date: 24 October 2022

Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives…

Abstract

Purpose

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents, accurately predicting traffic information and increasing overall transportation quality. It also helps reduce overall carbon dioxide emissions in the environment.

Design/methodology/approach

This study offered a real-time traffic model based on the analysis of numerous sensor data. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporated data from road sensors as well as a variety of other sources. Capturing and processing large amounts of sensor data in real time is difficult, so sensor data are consumed by streaming analytics platforms that use big data technologies and are then processed using a range of deep learning and machine learning techniques.
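One simple stand-in for the kind of learned predictor such a pipeline could feed is a historical-average model over (hour, weather) buckets. The sketch below is illustrative only: the study's actual models are deep learning and machine learning techniques trained on PeMS-scale sensor data, and the records and default speed here are hypothetical.

```python
from collections import defaultdict

# Hypothetical training records: (hour, raining, speed_mph) tuples standing
# in for years of freeway sensor history.
history = [
    (8, False, 35.0), (8, False, 33.0), (8, True, 25.0),
    (14, False, 62.0), (14, True, 55.0), (14, False, 60.0),
]

def train(records):
    # Historical-average model: mean observed speed per (hour, weather) bucket.
    buckets = defaultdict(list)
    for hour, raining, speed in records:
        buckets[(hour, raining)].append(speed)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def predict(model, hour, raining, default=55.0):
    # Fall back to an assumed network-wide default for unseen conditions.
    return model.get((hour, raining), default)

model = train(history)
print(predict(model, 8, False))   # 34.0 (rush hour, dry)
print(predict(model, 14, True))   # 55.0 (afternoon, raining)
```

A deep learning model replaces the bucket lookup with a learned function, but the supervised framing (conditions in, expected speed out) is the same.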

Findings

The study presented in this paper fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things sensor data and other data sources. Organizations such as transit agencies and public safety departments can also incorporate this method into their platforms to assist in making strategic decisions.

Research limitations/implications

The model has a significant limitation in that its predictions for the period after January 2020 are not particularly accurate. This, however, is not a flaw in the model itself; rather, it reflects the Covid-19 global pandemic, which disrupted the traffic scenario and produced erratic data for the period after February 2020. Once circumstances return to normal, the authors are confident in their model’s ability to produce accurate forecasts.

Practical implications

To help users choose when to travel, this study aimed to pinpoint the causes of traffic congestion on the highways in the Bay Area and to forecast real-time traffic speeds. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and used multiple models. The authors developed a model that can forecast traffic speed while accounting for outside variables like weather and incident data, with decent accuracy and generalizability. To assist users in determining traffic congestion at a certain location on a specific day, the forecast method uses a graphical user interface, which has been designed to be readily expanded in the future as the project’s scope and usefulness increase. The authors’ Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. The authors obtained excellent results by using five years of data (2015–2019) to train the models and forecasting outcomes for 2020 data; the algorithm produced highly accurate predictions when tested using data from January 2020. The benefits of this model include accurate traffic speed forecasts for California’s four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a certain date. The scalable model performs better than the vast majority of earlier models created by other scholars in the field. This initiative could be expanded to include the full state of California, assisting the government in better planning and implementing new transportation projects.

Social implications

To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.

Originality/value

The proposed work allows users to choose the optimal time and mode of transportation. The underlying idea behind this model is that the more time cars spend on the road, the more traffic congestion they cause. The proposed system therefore encourages users to reach their destination in a short period of time. Congestion is an indicator that public transportation needs to be expanded. The optimal route is compared with other kinds of public transit using this methodology (Greenfield, 2014). If the commute time is comparable to that of private car transportation during peak hours, consumers should take public transportation.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Keywords
