Search results

1 – 10 of 86
Open Access
Article
Publication date: 3 August 2022

Wolfgang Kaltenbrunner, Stephen Pinfield, Ludo Waltman, Helen Buckley Woods and Johanna Brumberg

Abstract

Purpose

The study aims to provide an analytical overview of current innovations in peer review and their potential impacts on scholarly communication.

Design/methodology/approach

The authors created a survey that was disseminated among publishers, academic journal editors and other organizations in the scholarly communication ecosystem, resulting in a data set of 95 self-defined innovations. The authors ordered the material using a taxonomy that compares innovation projects according to five dimensions. For example, what is the object of review? How are reviewers recruited, and does the innovation entail specific review foci?

Findings

Peer review innovations partly pull in mutually opposed directions. Several initiatives aim to make peer review more efficient and less costly, while other initiatives aim to promote its rigor, which is likely to increase costs; innovations based on a singular notion of “good scientific practice” are at odds with more pluralistic understandings of scientific quality; and the idea of transparency in peer review is the antithesis to the notion that objectivity requires anonymization. These fault lines suggest a need for better coordination.

Originality/value

This paper presents original data that were analyzed using a novel, inductively developed, taxonomy. Contrary to earlier research, the authors do not attempt to gauge the extent to which peer review innovations increase the “reliability” or “quality” of reviews (as defined according to often implicit normative criteria), nor are they trying to measure the uptake of innovations in the routines of academic journals. Instead, they focus on peer review innovation activities as a distinct object of analysis.

Details

Journal of Documentation, vol. 78 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 7 December 2020

Jing Wang, Yinghan Wang, Yichuan Peng and Jian John Lu

Abstract

Purpose

The operational safety of high-speed railways has attracted wide concern. Owing to the joint influence of the environment, equipment, personnel and other factors, accidents are inevitable during operation. However, few studies have focused on identifying the factors contributing to the severity of high-speed railway accidents because of the difficulty of obtaining field data. This study aims to investigate the factors affecting the severity of general high-speed railway accidents.

Design/methodology/approach

A total of 14 potential factors were examined across 475 accident records. Severity is categorized into four levels by delay time and the number of subsequent trains affected by the accident. A partial proportional odds model was constructed to relax the parallel-lines assumption.
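The partial proportional odds model mentioned above can be sketched numerically. The following is a minimal illustration (not the authors' implementation), with all coefficients, covariates and dimensions hypothetical: shared effects `beta` apply at every severity threshold, while rows of `gamma` let selected covariates deviate per threshold, which is exactly the relaxation of the parallel-lines assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ppo_probs(x, alphas, beta, gamma):
    """Category probabilities under a partial proportional odds model.

    P(Y > j | x) = sigmoid(alpha_j + x @ beta + x @ gamma[j]);
    gamma rows relax the parallel-lines assumption for selected
    covariates (a row of zeros keeps a covariate proportional).
    """
    J = len(alphas) + 1                       # number of severity levels
    exceed = np.array([sigmoid(a + x @ beta + x @ g)
                       for a, g in zip(alphas, gamma)])
    probs = np.empty(J)
    probs[0] = 1.0 - exceed[0]
    probs[1:-1] = exceed[:-1] - exceed[1:]    # adjacent-category differences
    probs[-1] = exceed[-1]
    return probs

# Hypothetical example: 3 covariates, 4 severity levels.
x = np.array([1.0, 0.0, 2.0])
alphas = np.array([0.5, -0.4, -1.5])          # thresholds
beta = np.array([0.3, -0.2, 0.1])             # shared (proportional) effects
gamma = np.array([[0.0, 0.0, 0.05],           # threshold-specific deviations
                  [0.0, 0.0, 0.00],           # for the third covariate only
                  [0.0, 0.0, -0.05]])
p = ppo_probs(x, alphas, beta, gamma)
```

`p` contains one probability per severity level and sums to 1; in a real fit, the coefficients would be estimated from the accident records.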

Findings

The results show that 10 factors significantly affect accident severity. Factors including automatic train protection (ATP) system faults, platform screen door and train door faults, traction converter faults and railway clearance intrusion by objects tend to reduce the severity level. Conversely, accidents caused by objects hanging on the catenary, pantograph faults, passenger misconduct or sudden illness, personnel intrusion into the railway clearance, driving in heavy rain or snow, and train collisions with objects tend to be more severe.

Originality/value

The research results are very useful for mitigating the consequences of high-speed rail accidents.

Details

Smart and Resilient Transportation, vol. 3 no. 1
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 2 January 2024

Eylem Thron, Shamal Faily, Huseyin Dogan and Martin Freer

Abstract

Purpose

Railways are a well-known example of complex critical infrastructure, incorporating socio-technical systems with humans such as drivers, signallers, maintainers and passengers at the core. Technological evolution, including interconnectedness and new ways of interaction, leads to new security and safety risks that can be realised both through human error and through malicious and non-malicious behaviour. This study aims to identify the human factors (HF) and cyber-security risks relating to the role of signallers on the railways and explores strategies for improving “Digital Resilience” towards the concept of a resilient railway.

Design/methodology/approach

Overall, 26 interviews were conducted with 21 participants from industry and academia.

Findings

The results showed that due to increased automation, both cyber-related threats and human error can impact signallers’ day-to-day operations – directly or indirectly (e.g. workload and safety-critical communications) – which could disrupt railway services and potentially lead to catastrophic safety consequences. This study identifies cyber-related problems, including external threats; engineers not considering the human element when specifying security controls; lack of security awareness within the rail industry; training gaps; organisational issues; and many unknown “unknowns”.

Originality/value

The authors discuss socio-technical principles through a hexagonal socio-technical framework and a training needs analysis to mitigate cyber-security issues and identify the predictive training needs of signallers. This is supported by a systematic approach that considers both safety and security factors, rather than learning retrospectively from a cyber-attack.

Details

Information & Computer Security, vol. 32 no. 2
Type: Research Article
ISSN: 2056-4961

Open Access
Article
Publication date: 23 October 2023

Jan Svanberg, Tohid Ardeshiri, Isak Samsten, Peter Öhman, Presha E. Neidermeyer, Tarek Rana, Frank Maisano and Mats Danielson

Abstract

Purpose

The purpose of this study is to develop a method to assess social performance. Traditionally, environment, social and governance (ESG) rating providers use subjectively weighted arithmetic averages to combine a set of social performance (SP) indicators into a single rating. To overcome the subjectivity of this weighting, this study investigates the preconditions for a new methodology for rating the SP component of ESG by applying machine learning (ML) and artificial intelligence (AI) anchored to social controversies.

Design/methodology/approach

This study proposes the use of a data-driven rating methodology that derives the relative importance of SP features from their contribution to the prediction of social controversies. The authors use the proposed methodology to solve the weighting problem with overall ESG ratings and further investigate whether prediction is possible.

Findings

The authors find that ML models are able to predict controversies with high predictive performance and validity. The findings indicate that the weighting problem with the ESG ratings can be addressed with a data-driven approach. The decisive prerequisite, however, for the proposed rating methodology is that social controversies are predicted by a broad set of SP indicators. The results also suggest that predictively valid ratings can be developed with this ML-based AI method.

Practical implications

This study offers practical solutions to ESG rating problems that have implications for investors, ESG raters and socially responsible investments.

Social implications

The proposed ML-based AI method can help to achieve better ESG ratings, which will in turn help to improve SP, which has implications for organizations and societies through sustainable development.

Originality/value

To the best of the authors’ knowledge, this research is one of the first studies that offers a unique method to address the ESG rating problem and improve sustainability by focusing on SP indicators.

Details

Sustainability Accounting, Management and Policy Journal, vol. 14 no. 7
Type: Research Article
ISSN: 2040-8021

Open Access
Article
Publication date: 19 January 2024

Fuzhao Chen, Zhilei Chen, Qian Chen, Tianyang Gao, Mingyan Dai, Xiang Zhang and Lin Sun

Abstract

Purpose

The electromechanical brake system is leading the latest development trend in railway braking technology. Tolerance stack-up generated during assembly and production introduces slight geometric deviations between the motor stator and rotor inside the electromechanical cylinder. These deviations lead to imprecise brake control, so it is necessary to diagnose motor faults in the fully assembled electromechanical brake system. This paper aims to present an improved variational mode decomposition (VMD) algorithm, which endeavors to elucidate and push the boundaries of mechanical synchronicity problems within the electromechanical brake system.

Design/methodology/approach

The VMD algorithm plays a pivotal role in the preliminary phase, decomposing the motor speed signals into modes. Afterward, the error energy algorithm is used to extract abnormal features, retaining the effective intrinsic mode functions while eliminating extraneous noise and enhancing the signal’s fidelity. This refined signal then becomes the basis for fault analysis. In the analytical step, the cepstrum is employed to calculate the formant and envelope of the reconstructed signal. By scrutinizing the formant and envelope, the fault point within the electromechanical brake system is precisely identified, contributing to a sophisticated and accurate fault diagnosis.
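As a rough illustration of the cepstral step described above (not the paper's code), the real cepstrum is the inverse FFT of the log magnitude spectrum, and a smooth spectral envelope can be obtained by keeping only low-quefrency coefficients ("liftering"); the test signal and lifter length here are assumptions.

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # epsilon avoids log(0)
    return np.fft.ifft(log_mag).real

def spectral_envelope(signal, n_lifter=32):
    """Smooth spectral envelope by low-pass 'liftering' the cepstrum."""
    c = real_cepstrum(signal)
    lifter = np.zeros_like(c)
    lifter[:n_lifter] = 1.0
    lifter[-n_lifter + 1:] = 1.0                 # keep symmetric low quefrencies
    return np.fft.fft(c * lifter).real           # log-magnitude envelope

# Hypothetical motor-speed-like signal: fundamental plus harmonic plus noise.
rng = np.random.default_rng(0)
t = np.arange(2048) / 2048.0
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t) \
    + 0.05 * rng.standard_normal(t.size)
env = spectral_envelope(x)
```

Peaks of `env` indicate formant-like concentrations of energy; in the paper's pipeline this step would run on the reconstructed (denoised) speed signal rather than on raw data.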

Findings

This paper innovatively uses the VMD algorithm for the modal decomposition of electromechanical brake (EMB) motor speed signals and combines it with the error energy algorithm to extract abnormal features. The signal is reconstructed from the effective intrinsic mode function (IMF) components after noise removal, and the formant and envelope are calculated by cepstrum to locate the fault point. Experiments show that the proposed algorithm can effectively decompose the original speed signal: after feature extraction, signal enhancement and fault identification, the motor’s mechanical fault point can be accurately located. This method is an effective fault diagnosis algorithm suitable for EMB systems.

Originality/value

By using this improved VMD algorithm, the electromechanical brake system can precisely identify the rotational anomaly of the motor. This method can offer an online diagnosis analysis function during operation and contribute to an automated factory inspection strategy while parts are assembled. Compared with the conventional motor diagnosis method, this improved VMD algorithm can eliminate the need for additional acceleration sensors and save hardware costs. Moreover, the accumulation of online detection functions helps improve the reliability of train electromechanical braking systems.

Open Access
Article
Publication date: 7 February 2023

Roberto De Luca, Antonino Ferraro, Antonio Galli, Mosè Gallo, Vincenzo Moscato and Giancarlo Sperlì

Abstract

Purpose

The recent innovations of Industry 4.0 have made it possible to easily collect data related to a production environment. In this context, information about industrial equipment – gathered by proper sensors – can be profitably used for supporting predictive maintenance (PdM) through the application of data-driven analytics based on artificial intelligence (AI) techniques. Although deep learning (DL) approaches have proven to be quite effective solutions to the problem, an open research challenge remains: designing PdM methods that are computationally efficient and, most importantly, applicable in real-world internet of things (IoT) scenarios, where they must run directly on devices’ limited hardware.

Design/methodology/approach

In this paper, the authors propose a DL approach to the PdM task based on a particular and very efficient architecture. The major novelty of the proposed framework is to leverage a multi-head attention (MHA) mechanism to obtain both high accuracy in remaining useful life (RUL) estimation and low model storage requirements, providing the basis for a possible implementation directly on equipment hardware.
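A minimal sketch of the multi-head attention computation the framework is said to leverage (the abstract gives no architecture details, so the dimensions, weights and input below are all illustrative, not the authors' model):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product attention with n_heads heads (no masking).

    x: (seq_len, d_model); each W matrix is (d_model, d_model).
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def split(m):
        # Reshape projections into (n_heads, seq_len, d_head).
        return m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)               # (n_heads, seq, seq)
    heads = attn @ v                              # (n_heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

# Hypothetical sizes: a window of 30 sensor timesteps, d_model = 16, 4 heads.
rng = np.random.default_rng(1)
x = rng.standard_normal((30, 16))
Wq, Wk, Wv, Wo = (rng.standard_normal((16, 16)) * 0.1 for _ in range(4))
out = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads=4)
```

The parameter count of such a block is small relative to a stacked LSTM of comparable width, which is the kind of storage saving the abstract refers to.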

Findings

The experimental results on the NASA dataset show that the authors’ approach outperforms the majority of the most widely used state-of-the-art techniques in both effectiveness and efficiency.

Research limitations/implications

The spatial and temporal complexity was also compared with a typical long short-term memory (LSTM) model and with state-of-the-art approaches on the NASA dataset. While the authors’ approach achieves similar effectiveness to other approaches, it has significantly fewer parameters, a smaller storage volume and lower training time.

Practical implications

The proposed approach aims to find a compromise between effectiveness and efficiency, which is crucial in the industrial domain in which it is important to maximize the link between performance attained and resources allocated. The overall accuracy performances are also on par with the finest methods described in the literature.

Originality/value

The proposed approach allows satisfying the requirements of modern embedded AI applications (reliability, low power consumption, etc.), finding a compromise between efficiency and effectiveness.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 4
Type: Research Article
ISSN: 1741-038X

Open Access
Article
Publication date: 10 May 2022

Jindong Song, Jingbao Zhu and Shanyou Li

Abstract

Purpose

Using the strong motion data of K-net in Japan, the continuous magnitude prediction method based on support vector machine (SVM) was studied.

Design/methodology/approach

In the range of 0.5–10.0 s after the P-wave arrival, prediction time windows were established at intervals of 0.5 s. Twelve P-wave characteristic parameters were selected as model inputs to construct an earthquake early warning (EEW) magnitude prediction model (SVM-HRM) for high-speed railways based on SVM.
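The abstract does not list the 12 characteristic parameters, but one commonly used EEW input of this kind is the peak displacement Pd over expanding post-P windows; the sketch below illustrates that style of windowed feature extraction on a synthetic trace (the sampling rate and signal are assumptions, not the authors' data):

```python
import numpy as np

def pd_series(displacement, fs, t_max=10.0, step=0.5):
    """Peak displacement Pd in expanding windows after the P arrival.

    displacement: displacement trace starting at the P pick;
    fs: sampling rate (Hz). Returns one Pd value per window length
    0.5 s, 1.0 s, ..., t_max s (a common EEW input feature).
    """
    windows = np.arange(step, t_max + step, step)
    return np.array([np.max(np.abs(displacement[:int(w * fs)]))
                     for w in windows])

# Hypothetical 100 Hz trace: ramping P-wave onset plus noise.
rng = np.random.default_rng(2)
fs = 100
t = np.arange(int(10 * fs)) / fs
trace = 1e-4 * t * np.sin(2 * np.pi * 2 * t) \
    + 1e-6 * rng.standard_normal(t.size)
pd = pd_series(trace, fs)
```

Each window length yields one feature value, so a model like SVM-HRM can issue a new prediction every 0.5 s as the window grows; because each Pd is a maximum over a prefix, the series is non-decreasing.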

Findings

The magnitude predictions of the SVM-HRM model were compared with traditional magnitude prediction models and with the current high-speed railway EEW norm. Results show that at the 3.0 s time window, the magnitude prediction error of the SVM-HRM model is clearly smaller than that of the traditional τc and Pd methods. The overestimation of small earthquakes is markedly improved, and the model is not affected by epicenter distance, indicating good generalization. For earthquake events with magnitudes of 3–5, the single-station realization rate of the SVM-HRM model reaches 95% at 0.5 s after the P-wave arrival, which is better than the first-alarm realization rate required by “The Test Method of EEW and Monitoring System for High-Speed Railway.” For events with magnitudes of 3–5, 5–7 and 7–8, the single-station realization rate meets the multi-station realization rate norm at 0.5 s, 1.5 s and 0.5 s after the P-wave arrival, respectively.

Originality/value

At the latest, 1.5 s after the P-wave arrival, the SVM-HRM model can issue the first earthquake alarm that meets the norm of magnitude prediction realization rate, which meets the accuracy and continuity requirements of high-speed railway EEW magnitude prediction.

Details

Railway Sciences, vol. 1 no. 2
Type: Research Article
ISSN: 2755-0907

Open Access
Article
Publication date: 5 April 2023

Tomás Lopes and Sérgio Guerreiro

Abstract

Purpose

Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.

Design/methodology/approach

The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.

Findings

Results of this study indicate that three distinct business process model testing types can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and the auxiliary representations used. However, most solutions have notable limitations, such as restricted coverage of BPMN elements, which reduce their practicality.

Research limitations/implications

The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.

Originality/value

Three main originality aspects are identified in this study as follows: (1) the classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.

Details

Business Process Management Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1463-7154

Open Access
Article
Publication date: 17 November 2023

Peiman Tavakoli, Ibrahim Yitmen, Habib Sadri and Afshin Taheri

Abstract

Purpose

The purpose of this study is to focus on structured data provision and asset information model maintenance and to develop a data provenance model on a blockchain-based digital twin (DT) of a smart and sustainable built environment for predictive asset management (PAM) in building facilities.

Design/methodology/approach

Qualitative research data were collected through a comprehensive scoping review of secondary sources. Additionally, primary data were gathered through interviews with industry specialists. The analysis of the data served as the basis for developing blockchain-based DT data provenance models and scenarios. A case study involving a conference room in an office building in Stockholm was conducted to assess the proposed data provenance model. The implementation utilized the Remix Ethereum platform and Sepolia testnet.

Findings

Based on the analysis of the results, a data provenance model on a blockchain-based DT was developed that ensures the reliability and trustworthiness of the data used in PAM processes, by providing a transparent and immutable record of data origin, ownership and lineage.

Practical implications

The proposed model enables decentralized applications (DApps) to publish real-time data obtained from dynamic operations and maintenance processes, enhancing the reliability and effectiveness of data for PAM.

Originality/value

The research presents a data provenance model on a blockchain-based DT, specifically tailored to PAM in building facilities. The proposed model enhances decision-making processes related to PAM by ensuring data reliability and trustworthiness and providing valuable insights for specialists and stakeholders interested in the application of blockchain technology in asset management and data provenance.

Details

Smart and Sustainable Built Environment, vol. 13 no. 1
Type: Research Article
ISSN: 2046-6099

Open Access
Article
Publication date: 3 October 2017

Tristan Gerrish, Kirti Ruikar, Malcolm Cook, Mark Johnson and Mark Phillip

Abstract

Purpose

The aim of this paper is to demonstrate the use of historical building performance data to identify potential issues with the build quality and operation of a building, as a means of narrowing the scope of in-depth further review.

Design/methodology/approach

The response of a room to the difference between internal and external temperatures is used to demonstrate patterns in thermal response across monitored rooms in a single building, clearly showing where rooms under-perform in their ability to retain heat during unconditioned hours. This procedure is applied to three buildings of different types, identifying the scope and limitations of this method and indicating areas of building performance deficiency.
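The room-response idea above can be illustrated with a toy calculation (a sketch, not the paper's exact method): regress the hourly internal temperature change against the inside-outside difference during unconditioned hours, so the fitted slope approximates a heat-loss rate per degree of difference. All data below are synthetic.

```python
import numpy as np

def overnight_response(t_in, t_out, unconditioned):
    """Thermal response: rate of internal temperature change per degree
    of inside-outside difference, over unconditioned (e.g. night) hours.

    t_in, t_out: hourly temperature series; unconditioned: boolean mask.
    A steeper (more negative) slope suggests poorer heat retention.
    """
    dT_in = np.diff(t_in)                        # change per hour
    delta = (t_in - t_out)[:-1]                  # driving difference
    m = unconditioned[:-1]
    slope, _ = np.polyfit(delta[m], dT_in[m], 1)
    return slope

# Hypothetical 48 h of hourly data: room held at 21 °C by day,
# cooling toward the outside temperature at night.
hours = np.arange(48)
t_out = 5 + 3 * np.sin(2 * np.pi * (hours - 15) / 24)
night = (hours % 24 < 7) | (hours % 24 >= 19)
k = 0.08                                         # assumed loss coefficient
t_in = np.empty(48)
t_in[0] = 21.0
for h in range(1, 48):
    if night[h - 1]:
        t_in[h] = t_in[h - 1] - k * (t_in[h - 1] - t_out[h - 1])
    else:
        t_in[h] = 21.0                           # conditioned to setpoint
slope = overnight_response(t_in, t_out, night)
```

Here the fitted slope recovers the assumed loss coefficient (about −0.08 per hour per degree); comparing such slopes across monitored rooms is the kind of screening the paper describes for narrowing the scope of in-depth review.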

Findings

The response of a single space to changing internal and external temperatures can be used to determine whether it responds differently to other monitored buildings. Spaces where thermal bridging and changes in use from design were encountered exhibit noticeably different responses.

Research limitations/implications

Application of this methodology is limited to buildings where temperature monitoring is undertaken both internally for a variety of spaces, and externally, and where knowledge of the uses of monitored spaces is available. Naturally ventilated buildings would be more suitable for analysis using this method.

Originality/value

This paper contributes to the understanding of building energy performance from a data-driven perspective, to the knowledge on the disparity between building design intent and reality, and to the use of basic commonly recorded performance metrics for analysis of potentially detrimental building performance issues.
