Search results
1 – 10 of over 17,000

Chong Guan, Ding Ding, Jiancang Guo and Yun Teng
Abstract
Purpose
This paper reviews the extant research on Web3.0 published between 2003 and 2022.
Design/methodology/approach
This study uses a topic modeling procedure, latent Dirichlet allocation (LDA), to uncover the research themes and the key phrases associated with each theme.
Findings
This study uncovers seven research themes that have been featured in the existing research. In particular, the study highlights the interaction among the research themes that contribute to the understanding of a number of solutions, applications and use cases, such as metaverse and non-fungible tokens.
Research limitations/implications
Despite the relatively small data size of the study, the results remain significant as they contribute to a more profound comprehension of the relevant field and offer guidance for future research directions. The analysis revealed that current Web3.0 technology still faces several challenges. Building upon the pioneering research in the field of blockchain, decentralized networks, smart contracts and algorithms, the study proposes an exploratory agenda for future research from an ecosystem approach, aiming to improve the current state of affairs.
Originality/value
Although topics around Web3.0 have been discussed intensively among the crypto community and technological enthusiasts, there is limited research that provides a comprehensive description of all the related issues and an in-depth analysis of their real-world implications from an ecosystem perspective.
Xinghua Shan, Zhiqiang Zhang, Fei Ning, Shida Li and Linlin Dai
Abstract
Purpose
With the yearly increase of mileage and passenger volume in China's high-speed railway, the problems of traditional paper railway tickets have become increasingly prominent, including complexity of business handling process, low efficiency of ticket inspection and high cost of usage and management. This paper aims to make extensive references to successful experiences of electronic ticket applications both domestically and internationally. The research on key technologies and system implementation of railway electronic ticket with Chinese characteristics has been carried out.
Design/methodology/approach
Research in key technologies is conducted, including the synchronization technique in a distributed heterogeneous database system, the grid-oriented passenger service record (PSR) data storage model, efficient access to massive PSR data under high-concurrency conditions, the linkage between face recognition service platforms and various terminals in large scenarios, and two-factor authentication of the e-ticket identification code based on the key and the user identity information. Focusing on the key technologies and architecture of the existing ticketing system, multiple service resources are expanded and developed, such as electronic ticket clusters, PSR clusters, face recognition clusters and electronic ticket identification code clusters.
Findings
The proportion of paper tickets printed has dropped to 20%, saving more than 2 billion tickets annually since the nationwide launch of e-ticketing. The average time for passengers to pass through the automatic ticket gates has decreased from 3 seconds to 1.3 seconds, significantly improving the efficiency of passenger transport organization. Meanwhile, the problems of paper ticket counterfeiting, reselling and loss have been largely eliminated.
Originality/value
E-ticketing has laid a technical foundation for the further development of railway passenger transport services in the direction of digitalization and intelligence.
Abstract
Every user of the World Wide Web understands why the WWW is often ridiculed as the World Wide Wait. The WWW and other applications on the Internet have been developed with a client‐server orientation that, in its simplest form, involves a centralized information repository to which users (clients) send requests. This single‐server model suffers from performance problems when clients are too numerous, when clients are physically far away in the Network, when the materials being delivered become very large and hence stress the wide‐area bandwidth, and when the information has a real‐time delivery component as with streaming audio and video materials. Engineering information delivery solutions that break the single‐site model has become an important aspect of next‐generation WWW delivery systems. Intends to help the information professional understand what new directions the delivery infrastructure of the WWW is taking and why these technical changes will impact users around the globe, especially in bandwidth‐poor areas of the Internet.
Sergio Barile, Cristina Simone and Mario Calabrese
Abstract
Purpose
This paper aims to focus on distributed technologies with the aim of highlighting their economic-organizational dimensions. In particular, the contribution first presents a deeper understanding of the nature and the dynamics of the economies and diseconomies that arise from the adoption and diffusion of distributed technologies. Second, it aims to shed light on the increasing tension between the hierarchy-based model of production and peer-to-peer (p2p) production, which involves the pervasive diffusion of distributed technologies.
Design/methodology/approach
Adopting an economic-organizational perspective, which is deeply rooted in the related extant literature, an analytically consistent model is developed that simultaneously takes into account the following variables: adoption density (the independent variable) and, as dependent variables, the economies of knowledge integration and the organizational diseconomies (the costs of a loss of control and the costs of organizational decoupling and recoupling).
Findings
Distributed technologies allow access to a large quantity and a wide variety of cognitive slacks that have not been possible until now. In doing so, they are leading the transition towards p2p. This is an emerging production paradigm that is characterized – with respect to mass production – by a shift in the relative importance of cognitive slack in comparison with tangible slack. Nevertheless, the unrestrainable diffusion of distributed technologies is not neutral for organizations. On the one hand, these technologies allow for the integration of economies of knowledge, and on the other hand, they involve organizational diseconomies that should not be ignored by managers and researchers.
Originality/value
This paper fills a gap in the literature by developing a consistent analytical framework that simultaneously takes into account the economies of knowledge integration and potential organizational diseconomies (the costs of coordination and the loss of control) that arise from the adoption and diffusion of distributed technologies.
Abstract
Purpose
A hybrid storage assignment (combined class- and volume-based) framework considering quality proximity and customer and material categorization is the key distinguishing contribution of this paper. Compared with an individual storage allocation approach, the hybrid allocation policy performs better under certain environments. The paper aims to discuss these issues.
Design/methodology/approach
Although it has been proved that every storage assignment policy has its advantages and limitations, one or more storage assignment policies combined with zoning and layout design can be used together for further improvement. The authors conducted this study at the warehouse of a manufacturing firm that produces only a single product with a variety of materials and quality criteria. Picking optimization includes the elimination of non-value-added activities such as unwanted forklift and package movements and the time and distance traveled for retrieval as well as storage. Other allied operations with respect to customer acceptance level and resource utilization are also considered.
Findings
The time and distance from manufacturing point to storage location are accountable as it also contributes to picking performance.
Originality/value
Quality-based cluster analysis is carried out to find out closeness among customers, which is used to propose algorithm with new layout design, zoning and storage allocation policy.
Abstract
Purpose
This paper aims to propose a system dynamics model of blockchain online community knowledge sharing, with the following goals: to reveal the internal mechanism of blockchain technology on community knowledge sharing; to show the impact of blockchain technology on knowledge sharing; and to promote knowledge sharing and the self-development of blockchain online communities.
Design/methodology/approach
Based on the core characteristic factors of blockchain technology, including the incentive mechanism, trust mechanism and information protection mechanism, a knowledge sharing analysis framework is established. Using the Vensim PLE (Personal Learning Edition) software, and following the steps of "putting forward a dynamic hypothesis", "establishing the system dynamics equations", "model testing" and "simulation", the article analyzes in depth the process and extent of the impact of the above features on online community knowledge sharing.
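To illustrate the stock-and-flow idea behind such a model, the following is a generic toy simulation of a single stock (knowledge-sharing users) whose inflow is amplified by an incentive factor; the equation, parameters and values are hypothetical and are not the authors' Vensim model:

```python
# Toy system-dynamics sketch: one stock ("sharing users") grown by an
# inflow scaled by an incentive factor, integrated with Euler steps.
# Illustrative only; not the paper's actual model or parameters.

def simulate(steps=100, dt=0.1, incentive=1.0, capacity=1000.0):
    users = 10.0  # initial stock of knowledge-sharing users
    history = [users]
    for _ in range(steps):
        # Logistic-style inflow, amplified by the incentive mechanism
        inflow = incentive * 0.5 * users * (1 - users / capacity)
        users += inflow * dt
        history.append(users)
    return history

base = simulate(incentive=1.0)
boosted = simulate(incentive=1.5)  # stronger token reward, hypothetically
print(f"final users without boost: {base[-1]:.1f}")
print(f"final users with incentive boost: {boosted[-1]:.1f}")
```

In this toy setup a stronger incentive factor produces a larger final stock, mirroring the qualitative direction of the paper's finding about token rewards.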
Findings
The results show that the blockchain incentive mechanism, trust mechanism and information protection mechanism all contribute to promoting an increase in the number of community knowledge sharing users, as well as in the total amount of knowledge shared. The results also show that the token reward in the incentive mechanism has in fact a higher degree of influence than the trust and information protection mechanisms.
Originality/value
At present, no research on the internal mechanism of knowledge sharing in blockchain online communities has been carried out. This article plays a complementary role in research in this field, and offers significant guidance for promoting online community knowledge management and online community development.
Abstract
Purpose
Artificial intelligence (AI) reasoning is fuelled by high-quality, detailed behavioural data. These can usually be obtained by the biometrical sensors embedded in smart devices. The currently used data collecting approach, where data ownership and property rights are taken by the data scientists, designers of a device or a related application, delivers multiple ethical, sociological and governance concerns. In this paper, the author is opening a systemic examination of a data sharing concept in which data producers execute their data property rights.
Design/methodology/approach
Since the data sharing concept delivers a substantially different alternative, it needs to be thoroughly examined from multiple perspectives, among them the ethical, the social and feasibility. At this stage, theoretical examination modes in the form of literature analysis and mental model development are performed.
Findings
Data sharing concepts, framework, mechanisms and swift viability are examined. The author determined that data sharing could lead to virtuous data science by augmenting data producers' capacity to govern their data and regulators' capacity to interact in the process. Truly interdisciplinary research is proposed to follow up on this research.
Research limitations/implications
Since the research proposal is theoretical, the proposal may not provide direct applicative value but is largely focussed on fuelling the research directions.
Practical implications
For the researchers, data sharing concepts will provide an alternative approach and help resolve multiple ethical considerations related to the internet of things (IoT) data collecting approach. For the practitioners in data science, it will provide numerous new challenges, such as distributed data storing, distributed data analysis and intelligent data sharing protocols.
Social implications
Data sharing may pose significant implications for research and development. Since ethical, legislative, moral and trust-related issues are managed in the negotiation process, data can be shared freely, which in a practical sense expands the data pool for virtuous research in the social sciences.
Originality/value
The paper opens new research directions of data sharing concepts and space for a new field of research.
Abstract
Purpose
The purpose of this paper is to explore the driving forces moving Sakai to join the new era of social applications by adopting a content‐focused methodology.
Design/methodology/approach
The exploration is performed by looking at the way in which the role of content has developed through various phases of the internet, and how educational computing has leveraged those developments. The paper then goes on to relate these developments to the way in which initiatives like OpenSocial are changing the nature of application development and hosting, discussing the impact of the new world of cloud‐aware and cloud‐based applications on the development of Sakai.
Findings
Shifting to a modern web development paradigm that includes heavy use of client‐side programming methodologies such as AJAX, a content‐centric architecture, and an implementation of social networking capabilities will increase student satisfaction, while reducing development time and cost for systems like Sakai.
Originality/value
The paper will be of particular interest to those readers considering the tensions between institutionally provisioned applications and global free‐to‐use web services.
Marcela Mejia, Néstor Peña, José L. Muñoz and Oscar Esparza
Abstract
Purpose
Mobile ad hoc networks rely on cooperation to perform essential network mechanisms such as routing. Therefore, network performance depends to a great extent on giving participating nodes an incentive for cooperation. The level of trust among nodes is the most frequently used parameter for promoting cooperation in distributed systems. There are different models for representing trust, each of which is suited to a particular context and leads to different procedures for computing and propagating trust. The goal of this study is to analyze the most representative approaches for mobile ad hoc networks. It aims to obtain a qualitative comparison of the modeling approaches, according to the three basic components of a trust model: information gathering, information scoring and ranking, and action execution.
Design/methodology/approach
The paper identifies the different tasks required by a trust system and compares the way they are implemented when the system model itself is based on information theory, social networks, cluster concept, graph theory and game theory. It also provides a common nomenclature for the models. The study concentrates exclusively on the trust models themselves, without taking into account other aspects of the original articles that are beyond the scope of this analysis.
Findings
The study identifies the main components that a trust model must provide, and compares the way they are implemented. It finds that the lack of unity in the different proposed approaches makes it difficult to conduct an objective comparison. Finally, it also notices that most of the models do not properly manage node reintegration.
Originality/value
To the best of our knowledge, the study is the first that uses information scoring and ranking as the classification key. According to this key, approaches can be classified as based on information theory, clusters and social network theory, and cooperative and non-cooperative game theory. It also provides a common nomenclature for all of them. Finally, the main contribution of the paper is to provide an analysis of the most representative approaches and to present a novel qualitative comparison.
Jianpeng Zhang and Mingwei Lin
Abstract
Purpose
The purpose of this paper is to make an overview of 6,618 publications of Apache Hadoop from 2008 to 2020 in order to provide a conclusive and comprehensive analysis for researchers in this field, as well as a preliminary knowledge of Apache Hadoop for interested researchers.
Design/methodology/approach
This paper employs the bibliometric analysis and visual analysis approaches to systematically study and analyze publications about Apache Hadoop in the Web of Science database. This study aims to investigate the topic of Apache Hadoop by means of bibliometric analysis with the aid of visualization applications. Through the bibliometric analysis of the collected documents, this paper analyzes the main statistical characteristics and cooperation networks. Research themes, research hotspots and future development trends are also investigated through the keyword analysis.
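As a small illustration of the keyword analysis used in such bibliometric work, the sketch below counts keyword co-occurrences across a few hypothetical publication records (these records are invented for illustration, not the Web of Science data the paper analyzes); the most frequent pairs approximate the edges of a keyword co-occurrence network:

```python
# Toy keyword co-occurrence count for a bibliometric keyword analysis.
# Records are hypothetical, not the paper's Web of Science dataset.
from collections import Counter
from itertools import combinations

records = [
    {"hadoop", "mapreduce", "big data"},
    {"hadoop", "big data", "cloud computing"},
    {"hadoop", "hdfs", "mapreduce"},
]

pairs = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record
    for a, b in combinations(sorted(keywords), 2):
        pairs[(a, b)] += 1

# Frequent pairs indicate research hotspots and their connections
print(pairs.most_common(3))
```

Visual-analysis tools build on exactly this kind of pair count to draw the cooperation and keyword networks the paper describes.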
Findings
The research on Apache Hadoop is still the top priority in the future, and how to improve the performance of Apache Hadoop in the era of big data is one of the research hotspots.
Research limitations/implications
This paper makes a comprehensive analysis of Apache Hadoop with bibliometric methods, and it is valuable for researchers who want to quickly grasp the hot topics in this area.
Originality/value
This paper draws the structural characteristics of the publications in this field and summarizes the research hotspots and trends in this field in recent years, aiming to understand the development status and trends in this field and inspire new ideas for researchers.