Search results
1 – 10 of over 1000 results
Abstract
Purpose
This paper aims to understand the current state of development of scientific data management policy in China, analyze the content structure of the policy and provide a theoretical basis for improving and optimizing the policy system.
Design/methodology/approach
China's scientific data management policies were collected through various channels, including searches of government websites and policy and legal databases; after screening and integration, 209 policies were retained as the sample for analysis. A three-dimensional framework was constructed from the perspective of policy tools, combined with stakeholder and lifecycle theories, and the content of the policy texts was coded and quantitatively analyzed according to this framework.
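The coding-and-tallying step of such a quantitative content analysis can be sketched in a few lines. The category labels and coded triples below are illustrative assumptions, not the paper's actual coding scheme.

```python
from collections import Counter

def tally_dimensions(coded_units):
    """Tally coded policy-text units along the three analysis dimensions
    (policy tool, stakeholder, lifecycle stage).

    Each unit is a (tool, stakeholder, stage) triple produced by manual
    coding of a policy clause; all labels here are illustrative.
    """
    tools = Counter(unit[0] for unit in coded_units)
    stakeholders = Counter(unit[1] for unit in coded_units)
    stages = Counter(unit[2] for unit in coded_units)
    return tools, stakeholders, stages

units = [
    ("supply", "government", "collection"),
    ("environmental", "research institute", "storage"),
    ("supply", "government", "sharing"),
    ("demand", "data center", "sharing"),
]
tools, stakeholders, stages = tally_dimensions(units)
# tools -> Counter({'supply': 2, 'environmental': 1, 'demand': 1})
```

Comparing the three counters directly exposes the kind of imbalance across tools, stakeholders and lifecycle stages that the study reports.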
Findings
China's scientific data management policies can be divided into four chronological stages: infancy, preliminary exploration, comprehensive promotion and key implementation. The policies use a combination of three types of policy tools, supply-side, environmental-side and demand-side, involving multiple stakeholders and covering all stages of the lifecycle. However, the policy tools and their application to stakeholders and lifecycle stages are imbalanced. Future scientific data management policy should strengthen the balance of policy tools, promote the participation of multiple subjects and focus on supervision of the whole lifecycle.
Originality/value
This paper constructs a three-dimensional analytical framework and uses content analysis to quantitatively analyze scientific data management policy texts, extending the research perspective and research content in the field of scientific data management. The study identifies policy focuses and proposes several strategies that will help optimize the scientific data management policy.
Lin Kang, Junjie Chen, Jie Wang and Yaqi Wei
Abstract
Purpose
In order to meet the different quality of service (QoS) requirements of vehicle-to-infrastructure (V2I) and multiple vehicle-to-vehicle (V2V) links in vehicle networks, an efficient V2V spectrum access mechanism is proposed in this paper.
Design/methodology/approach
A long short-term memory based multi-agent hybrid proximal policy optimization (LSTM-H-PPO) algorithm is proposed, through which distributed spectrum access and continuous power control of the V2V links are realized.
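The clipped surrogate objective at the core of any PPO variant can be illustrated per sample. The LSTM state estimation, hybrid action space and parameter sharing that distinguish LSTM-H-PPO are not reproduced in this sketch.

```python
def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """PPO's clipped surrogate objective for a single sample.

    ratio is pi_new(a|s) / pi_old(a|s); clipping the ratio to
    [1 - epsilon, 1 + epsilon] keeps each policy update close to the
    behaviour policy, which is what stabilizes PPO-style training.
    """
    clipped = min(max(ratio, 1.0 - epsilon), 1.0 + epsilon)
    # Take the pessimistic (lower) of the unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)

# A large ratio with a positive advantage is capped at (1 + epsilon) * A:
gain = ppo_clipped_objective(1.5, 1.0)   # -> 1.2
# A shrunken ratio with a negative advantage is held at (1 - epsilon) * A:
loss = ppo_clipped_objective(0.5, -1.0)  # -> -0.8
```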
Findings
Simulation results show that compared with the baseline algorithm, the proposed algorithm has significant advantages in terms of total system capacity, payload delivery success rate of V2V link and convergence speed.
Originality/value
The LSTM layer uses time-sequence information to estimate an accurate system state, which makes the choice of V2V spectrum access based on local observation effective. The hybrid PPO framework shares training parameters among agents, which speeds up the entire training process. The proposed algorithm adopts centralized training with distributed execution, so that each agent can achieve optimal spectrum access from local observation information with less signaling overhead.
Tao Pang, Wenwen Xiao, Yilin Liu, Tao Wang, Jie Liu and Mingke Gao
Abstract
Purpose
This paper aims to study the agent learning from expert demonstration data while incorporating reinforcement learning (RL), which enables the agent to break through the limitations of expert demonstration data and reduces the dimensionality of the agent’s exploration space to speed up the training convergence rate.
Design/methodology/approach
First, a decay weight function is set in the objective function of the agent's training to combine the two types of methods, so that both RL and imitation learning (IL) guide the agent's behavior when the policy is updated. Second, this study designs a coupled utilization method for the demonstration trajectories and the training experience, so that samples from both sources can be combined during the agent's learning process, improving the data utilization rate and the agent's learning speed.
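A minimal sketch of such a decay-weighted objective, assuming an exponential decay schedule (the paper's exact weight function is not given in this abstract):

```python
import math

def combined_loss(rl_loss, il_loss, step, decay_rate=0.01):
    """Blend imitation-learning (IL) and reinforcement-learning (RL) losses.

    The IL term dominates early in training and decays over time, handing
    control to the RL objective so the agent can move beyond the expert
    demonstrations. The exponential form and decay_rate are illustrative
    assumptions.
    """
    w = math.exp(-decay_rate * step)  # decay weight in (0, 1]
    return w * il_loss + (1.0 - w) * rl_loss

# At step 0 the objective is pure IL; for large steps it is pure RL.
start = combined_loss(rl_loss=2.0, il_loss=1.0, step=0)       # -> 1.0
end = combined_loss(rl_loss=2.0, il_loss=1.0, step=100_000)   # -> 2.0
```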
Findings
The method is superior to other algorithms in terms of convergence speed and decision stability; it avoids training from scratch for reward values and breaks through the restrictions imposed by the demonstration data.
Originality/value
The agent can adapt to dynamic scenes through exploration and trial-and-error mechanisms built on the experience of the demonstration trajectories. The demonstration data set used in IL and the experience samples obtained during RL are coupled to improve data utilization efficiency and the generalization ability of the agent.
Prashant Jain, Dhanraj P. Tambuskar and Vaibhav Narwane
Abstract
Purpose
The advancements in internet technologies and the use of sophisticated digital devices in supply chain operations incessantly generate enormous amounts of data, termed big data (BD). BD technologies have brought about a paradigm shift in supply chain decision-making towards profitability and sustainability. The aim of this work is to address the implementation of big data analytics (BDA) in sustainable supply chain management (SSCM) by identifying the relevant factors and developing a structural model for this purpose.
Design/methodology/approach
Through a comprehensive literature review and expert opinion, the crucial factors are identified using the PESTEL framework, which covers political, economic, social, technological, environmental and legal factors. The structural model is developed from the results of the total interpretive structural modelling (TISM) procedure and MICMAC analysis.
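The MICMAC step can be illustrated compactly: given a binary direct-influence matrix over the factors, the driving and dependence powers are the row and column sums of its transitive closure. The three-factor matrix below is a made-up example, not data from the study.

```python
def micmac_powers(adjacency):
    """Driving and dependence power from a binary direct-influence matrix.

    Driving power of factor i    = row sum of the reachability matrix.
    Dependence power of factor j = column sum of the reachability matrix.
    The reachability matrix is the transitive closure of the adjacency
    matrix, with every factor taken to reach itself.
    """
    n = len(adjacency)
    reach = [[1 if i == j else adjacency[i][j] for j in range(n)]
             for i in range(n)]
    # Warshall's algorithm for the transitive closure.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    driving = [sum(row) for row in reach]
    dependence = [sum(reach[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence

# Factor 0 influences factor 1, which influences factor 2.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
driving, dependence = micmac_powers(adj)
# driving -> [3, 2, 1], dependence -> [1, 2, 3]
```

Plotting each factor at (dependence, driving) then yields the familiar MICMAC quadrants (autonomous, dependent, linkage and driving factors).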
Findings
The policy support regarding IT, culture of data-based decision-making, inappropriate selection of BDA technologies and the laws related to data security and privacy are found to affect most of the other factors. Also, the company’s vision towards environmental performance and willingness for material and energy optimization are found to be crucial for the environmental and social sustainability of the supply chain.
Research limitations/implications
The study is focused on the manufacturing supply chain in emerging economies. It may be extended to other industry sectors and geographical areas. Also, additional factors may be included to make the model more robust.
Practical implications
The proposed model imparts an understanding of the relative importance and interrelationship of factors. This may be useful to managers to assess their strengths and weaknesses and ascertain their priorities in the context of their organization for developing a suitable investment plan.
Social implications
The study establishes the importance of BDA for conservation and management of energy and material. This is crucial to develop strategies for enhancing eco-efficiency of the supply chain, which in turn enhances the economic returns for the society.
Originality/value
This study addresses the implementation of BDA in SSCM in the context of emerging economies. It uses the PESTEL framework for identifying the factors, which is a comprehensive framework for strategic planning and decision-making. This study makes use of the TISM methodology for model development and deliberates on the social and environmental implications too, apart from theoretical and managerial implications.
Ji Fang, Vincent C.S. Lee and Haiyan Wang
Abstract
Purpose
This paper explores an optimal service resource management strategy, a continuous challenge for health information services that must enhance service performance, optimise service resource utilisation and deliver interactive health information services.
Design/methodology/approach
An adaptive optimal service resource management strategy was developed based on a value co-creation model in health information service, with a focus on collaboration and interaction with users. A deep reinforcement learning algorithm was embedded in the Internet of Things (IoT)-based health information service system (I-HISS) to allocate service resources by controlling service provision and service adaptation according to user engagement behaviour. Simulation experiments were conducted to evaluate the significance of the proposed algorithm under different user reactions to the health information service.
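The resource-allocation idea can be reduced to a toy tabular sketch. The paper uses deep reinforcement learning with a richer state and action design, so everything below (states, actions, reward and hyperparameters) is an illustrative assumption.

```python
import random

def train_resource_policy(episodes=2000, seed=0):
    """Tabular Q-learning sketch of adaptive service provision.

    States are coarse user-engagement levels (0=low, 1=medium, 2=high)
    and actions are resource levels to allocate (0, 1, 2). The toy reward
    favours matching resources to engagement, standing in for the balance
    of business revenue against operating cost.
    """
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(3)]  # q[state][action]
    alpha, epsilon = 0.1, 0.2
    for _ in range(episodes):
        state = rng.randrange(3)
        if rng.random() < epsilon:          # explore
            action = rng.randrange(3)
        else:                               # exploit
            action = max(range(3), key=lambda a: q[state][a])
        # Revenue minus cost: matching allocation to engagement pays best.
        reward = 1.0 - abs(state - action)
        q[state][action] += alpha * (reward - q[state][action])
    # Greedy policy: resources allocated in line with engagement.
    return [max(range(3), key=lambda a: q[s][a]) for s in range(3)]

policy = train_resource_policy()
# Expected greedy policy after training: [0, 1, 2]
```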
Findings
The results indicate that the proposed service resource management strategy, which considers user co-creation in the service delivery process, improved both the service provider's business revenue and the users' individual benefits.
Practical implications
The findings may facilitate the design and implementation of health information services that can achieve a high user service experience with low service operation costs.
Originality/value
This study is amongst the first to propose a service resource management model in I-HISS, considering the value co-creation of the user in the service-dominant logic. The novel artificial intelligence algorithm is developed using the deep reinforcement learning method to learn the adaptive service resource management strategy. The results emphasise user engagement in the health information service process.
Xiaojie Xu and Yun Zhang
Abstract
Purpose
Understanding house prices and their interrelationships has undoubtedly drawn a great amount of attention from various market participants. This study investigates the monthly newly built residential house price indices of seventy Chinese cities over the 10-year period from January 2011 to December 2020, to understand issues related to their interdependence and synchronization.
Design/methodology/approach
Analysis here is facilitated through network analysis together with topological and hierarchical characterizations of price comovements.
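A minimal version of such a comovement network links two cities whenever their index series correlate strongly. The threshold, the toy indices and the use of a plain correlation filter (rather than the paper's topological and hierarchical characterizations) are all illustrative.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def comovement_network(series, threshold=0.9):
    """Link every pair of cities whose price-index series correlate
    above the threshold; returns the edges as (city_i, city_j) pairs."""
    cities = sorted(series)
    return [(a, b)
            for i, a in enumerate(cities)
            for b in cities[i + 1:]
            if pearson(series[a], series[b]) > threshold]

indices = {
    "A": [100, 102, 104, 107, 110],
    "B": [98, 100, 102, 105, 108],    # moves with A
    "C": [110, 108, 109, 104, 101],   # moves against A and B
}
edges = comovement_network(indices)
# edges -> [('A', 'B')]
```

Groups of synchronized cities then fall out as the connected components of the resulting graph.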
Findings
This study determines eight sectoral groups of cities whose house price indices are directly connected and the price synchronization within each group is higher than that at the national level, although each shows rather idiosyncratic patterns. Degrees of house price comovements are generally lower starting from 2018 at the national level and for the eight sectoral groups. Similarly, this study finds that the synchronization intensity associated with the house price index of each city generally switches to a lower level starting from early 2019.
Originality/value
The results should be of use in policy design and analysis aimed at housing market evaluation and monitoring.
Nishant Kulshrestha, Saurabh Agrawal and Deep Shree
Abstract
Purpose
Spare Parts Management (SPM) and Industry 4.0 have proven their importance. However, the employment of Industry 4.0 solutions for SPM is at an emerging stage. To address this, the article presents a systematic literature review on SPM in the Industry 4.0 era and identifies research gaps and prospects in the field.
Design/methodology/approach
Research articles were reviewed and analyzed through a content-based analysis using a four-step process model. The proposed framework consists of five categories: Inventory Management, Types of Spares, Circularity based on 6Rs, Performance Indicators, and Strategic and Operational. Based on these categories, a total of 118 research articles published between 1998 and 2022 were reviewed.
Findings
The technological solutions of Industry 4.0 concepts have provided numerous opportunities for SPM. Industry 4.0 hi-tech solutions can enhance agility, operational efficiency, quality of product and service, customer satisfaction, sustainability and profitability.
Research limitations/implications
The review of articles provides an integrated framework that recognizes the implementation issues and challenges in the field. The proposed framework will support academia and practitioners in implementing technological solutions of Industry 4.0 in SPM. Implementation of Industry 4.0 in SPM may help improve the triple-bottom-line aspect of sustainability, which can make a significant contribution to academia, practitioners and society.
Originality/value
The examination uncovered a scarcity of research in the intersection of SPM and Industry 4.0 concepts, suggesting a significant opportunity for additional investigative efforts.
Rong Jiang, Bin He, Zhipeng Wang, Xu Cheng, Hongrui Sang and Yanmin Zhou
Abstract
Purpose
Compared with traditional methods relying on manual teaching or system modeling, data-driven learning methods, such as deep reinforcement learning and imitation learning, show more promising potential to cope with the challenges brought by increasingly complex tasks and environments, and have become a hot research topic in the field of robot skill learning. However, the contradiction between the difficulty of collecting robot–environment interaction data and the low data efficiency of these methods creates a serious data dilemma, which has become one of the key issues restricting their development. Therefore, this paper aims to comprehensively sort out and analyze the causes of, and solutions for, the data dilemma in robot skill learning.
Design/methodology/approach
First, this review analyzes the causes of the data dilemma based on a classification and comparison of data-driven methods for robot skill learning. Then, the existing methods used to solve the data dilemma are introduced in detail. Finally, the review discusses the remaining open challenges and promising research topics for solving the data dilemma in the future.
Findings
This review shows that simulation–reality combination, state representation learning and knowledge sharing are crucial for overcoming the data dilemma of robot skill learning.
Originality/value
To the best of the authors’ knowledge, there are no surveys that systematically and comprehensively sort out and analyze the data dilemma in robot skill learning in the existing literature. It is hoped that this review can be helpful to better address the data dilemma in robot skill learning in the future.
S. P. Sreenivas Padala and Prabhanjan M. Skanda
Abstract
Purpose
The purpose of this paper is to develop a building information modelling (BIM)-based multi-objective optimization (MOO) framework for volumetric analysis of buildings during the early design stages. The objective is to optimize volumetric (3D) spaces instead of 2D spaces to enhance the space utilization, thermal comfort, constructability and rental value of buildings.
Design/methodology/approach
The integration of two fundamental concepts, BIM and MOO, forms the basis of the proposed framework. In the early design phases of a project, BIM is used to generate precise building volume data. The non-dominated sorting genetic algorithm II (NSGA-II), a MOO algorithm, is then used to optimize the volume data extracted from the 3D BIM models, considering four objectives: space utilization, thermal comfort, rental value and construction cost. The framework is implemented in the context of a school of architecture building project.
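At the heart of NSGA-II's sorting step is the non-dominated test, sketched below for a maximization problem. The candidate layouts and their objective scores are made up for illustration; the full algorithm adds crowding distance and genetic operators, which are omitted here.

```python
def pareto_front(solutions):
    """Return the non-dominated solutions among a set of objective
    vectors, where every objective is to be maximized.

    a dominates b when a is at least as good in every objective and
    strictly better in at least one.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# Candidate layouts scored on (space utilization, thermal comfort).
layouts = [(0.8, 0.6), (0.7, 0.7), (0.6, 0.5), (0.9, 0.4)]
front = pareto_front(layouts)
# front -> [(0.8, 0.6), (0.7, 0.7), (0.9, 0.4)]
```

The dominated layout (0.6, 0.5) is discarded; the survivors form the trade-off front from which a designer picks a preferred compromise.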
Findings
The findings of case study demonstrate significant improvements resulting from MOO of building volumes. Space utilization increased by 30%, while thermal comfort improved by 20%, and construction costs were reduced by 10%. Furthermore, rental value of the case study building increased by 33%.
Practical implications
The proposed framework offers practical benefits by enabling project teams to generate optimal building floor layouts during the early design stages, thereby avoiding costly late changes during the construction phase of the project.
Originality/value
The integration of BIM and MOO in this study provides a unique approach to optimizing building volumes against multiple factors during the early design stages of a project.
Armando Di Meglio, Nicola Massarotti and Perumal Nithiarasu
Abstract
Purpose
In this study, the authors propose a novel digital twinning approach specifically designed for controlling transient thermal systems. The purpose of this study is to harness the combined power of deep learning (DL) and physics-based methods (PBM) to create an active virtual replica of the physical system.
Design/methodology/approach
To achieve this goal, we introduce a deep neural network (DNN) as the digital twin and a Finite Element (FE) model as the physical system. This integrated approach is used to address the challenges of controlling an unsteady heat transfer problem with an integrated feedback loop.
Findings
The results of our study demonstrate the effectiveness of the proposed digital twinning approach in regulating the maximum temperature within the system under varying and unsteady heat flux conditions. The DNN, trained on stationary data, plays a crucial role in determining the heat transfer coefficients necessary to maintain temperatures below a defined threshold value, such as the material’s melting point. The system is successfully controlled in 1D, 2D and 3D case studies. However, careful evaluations should be conducted if such a training approach, based on steady-state data, is applied to completely different transient heat transfer problems.
Originality/value
The present work represents one of the first examples of a comprehensive digital twinning approach to transient thermal systems, driven by data. One of the noteworthy features of this approach is its robustness. Adopting a training based on dimensionless data, the approach can seamlessly accommodate changes in thermal capacity and thermal conductivity without the need for retraining.