Search results

1 – 10 of 443
Article
Publication date: 19 May 2023

Michail Katsigiannis, Minas Pantelidakis and Konstantinos Mykoniatis

Abstract

Purpose

With hybrid simulation techniques becoming popular for systems improvement across multiple fields, this study aims to provide insight into the use of hybrid simulation to assess the effect of lean manufacturing (LM) techniques on manufacturing facilities and the transition of a mass production (MP) facility to one incorporating LM techniques.

Design/methodology/approach

In this paper, the authors apply a hybrid simulation approach to improve an educational automotive assembly line and provide guidelines for implementing different LM techniques. Specifically, the authors describe the design, development, verification and validation of a hybrid discrete-event and agent-based simulation model of a LEGO® car assembly line to analyze, improve and assess the system’s performance. The simulation approach examines the base model (MP) and an alternative scenario (just-in-time [JIT] with Heijunka).

Findings

The hybrid simulation approach effectively models the facility. The alternative simulation scenario (implementing JIT and Heijunka LM techniques) improved all examined performance metrics. In more detail, the system’s lead time was reduced by 47.37%, the throughput increased by 5.99% and the work-in-progress for workstations decreased by up to 56.73%.

Originality/value

This novel hybrid simulation approach provides insight and can be potentially extrapolated to model other manufacturing facilities and evaluate transition scenarios from MP to LM.

Details

International Journal of Lean Six Sigma, vol. 15 no. 2
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 15 February 2024

Williams E. Nwagwu

Abstract

Purpose

This study was carried out to examine the volume and annual growth pattern of e-health literacy research, investigate the open-access types of e-health literacy research and analyse document production by country and by source. The study also mapped the keywords used by authors to represent e-health literacy research and analysed the clusters of the keywords to reveal the thematic focus of research in the area.

Design/methodology/approach

The research was guided by a bibliometric approach involving visualization using VOSviewer. Data were sourced from the Scopus database using a search syntax that was tested and verified to yield reliable data on the subject matter. The analysis in this study was based on bibliographic data and keywords.
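The co-occurrence counting at the heart of a VOSviewer-style keyword map can be sketched as follows; the documents, keywords and function name here are invented for illustration and are not the study's actual data or pipeline:

```python
# Illustrative sketch (not the paper's actual pipeline): VOSviewer-style keyword
# mapping starts from a co-occurrence count over each document's author keywords.
from itertools import combinations
from collections import Counter

def cooccurrence(docs):
    """Count how often each unordered keyword pair appears in the same document."""
    pairs = Counter()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [
    ["e-health literacy", "open access", "bibliometrics"],
    ["e-health literacy", "bibliometrics", "scopus"],
    ["open access", "scopus"],
]
links = cooccurrence(docs)
print(links[("bibliometrics", "e-health literacy")])  # 2
```

Clustering tools then group keywords whose co-occurrence links are densest, which is how thematic clusters like the five reported below emerge.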

Findings

A total of 1,176 documents were produced between 2006 and 2022. The largest share of documents (18.90%) was published through hybrid open-access processes, and the USA made the highest contribution. The Journal of Medical Internet Research is the venue for most of the documents on the subject. The 1,176 documents were described by 5,047 keywords (4.29 keywords per document), which were classified into five clusters that aptly capture the thematic structure of research in the area.

Research limitations/implications

e-Health literacy has experienced significant growth in research production from 2006 to 2022, with an average of 69 documents per year. Research on e-health literacy initially had low output but began to increase in 2018. The majority of e-health literacy documents are available through open access, with the USA being the leading contributor. The analysis of keywords reveals the multifaceted nature of e-health literacy, including access to information, attitudes, measurement tools, awareness, age factors and communication. Clusters of keywords highlight different aspects of e-health literacy research, such as accessibility, attitudes, awareness, measurement tools and the importance of age, cancer, caregivers and effective communication in healthcare.

Practical implications

This study has practical implications for health promotion. There is also an element of patient empowerment, whereby patients take an active role in their healthcare: by understanding their health information and having access to resources that help them manage their conditions, patients can make informed decisions about their care. Finally, improved health outcomes can be achieved by improving patients' e-health literacy. Visualisation of e-health literacy can help bridge the gap between patients and healthcare providers, promote patient-centered care and improve health outcomes.

Originality/value

Research production on e-Health literacy has experienced significant growth from 2006 to 2022, with an average of 69 documents per year. Many e-health literacy documents are available through open access, and the USA is the leading contributor. The analysis of keywords reveals the nature of e-health literacy, including access to information, attitudes, measurement tools, awareness and communication. The clusters of keywords highlight different aspects of e-health literacy research, such as accessibility, attitudes, awareness, measurement tools and the importance of age, cancer, caregivers, and effective communication in healthcare.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 November 2023

Felix Preshanth Santhiapillai and R.M. Chandima Ratnayake

Abstract

Purpose

The purpose of this study is to investigate the integrated application of business process modeling and notation (BPMN) and value stream mapping (VSM) to improve knowledge work performance and productivity in police services. In order to explore the application of the hybrid BPMN-VSM approach in police services, this study uses the department of digital crime investigation (DCI) in one Norwegian police district as a case study.

Design/methodology/approach

Service process identification was the next step after selecting an appropriate organizational unit for the case study. BPMN-VSM-based current state mapping, including time and waste analyses, was used to determine cycle and lead time and identify value-adding and nonvalue-adding activities. Subsequently, improvement opportunities were identified, and the current state process was re-designed and constructed through future state mapping.
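The time analysis in VSM reduces to simple sums over per-activity processing and waiting times; the activities and hours below are hypothetical, not the DCI department's actual figures:

```python
# Hypothetical illustration of the VSM time analysis step: given per-activity
# processing times and waiting times, lead time is the total elapsed sum, while
# the value-added ratio shows how little of it is actual work.
activities = [
    # (name, processing_hours, waiting_hours_before_it) -- invented numbers
    ("register case", 1.0, 0.0),
    ("acquire device", 2.0, 8.0),
    ("extract data", 4.0, 24.0),
    ("analyse evidence", 6.0, 16.0),
]
cycle_time = sum(p for _, p, _ in activities)              # value-adding work
lead_time = cycle_time + sum(w for _, _, w in activities)  # total elapsed time
value_added_ratio = cycle_time / lead_time
print(f"{lead_time} h lead time, {value_added_ratio:.1%} value-adding")
```

Future-state mapping then targets the waiting components, which is why lead time can fall far more steeply (83.0% here) than cycle time (44.4%).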

Findings

The study results indicate a 44.4% and 83.0% reduction in process cycle and lead time, respectively. This promising result suggests that the hybrid BPMN-VSM approach can support the visualization of bottlenecks and possible causes of increased lead times, followed by the systematic identification and proposals of avenues for future improvement and innovation to remedy the discovered inefficiencies in a complex knowledge-work environment.

Research limitations/implications

This study focused on one department in a Norwegian police district. However, the experience gained can support researchers and practitioners in understanding lean implementation through an integrated BPMN and VSM model, offering a unique insight into the ability to investigate complex systems.

Originality/value

Complex knowledge work processes generally characterize police services due to a high number of activities, resources and stakeholder involvement. Implementing lean thinking in this context is significantly challenging, and the literature on this topic is limited. This study addresses the applicability of the hybrid BPMN-VSM approach in police services with an original public sector case study in Norway.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome the time complexity and curse-of-dimensionality problems.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
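As a rough illustration of the wrapper idea (not the authors' implementation), the sketch below runs a binary gray-wolf-style search over feature masks, with a toy nearest-centroid scorer standing in for the MLP; the dataset, fitness function and parameters are all invented:

```python
# Illustrative sketch only: a binary Gray Wolf Optimization loop for feature
# selection. A nearest-centroid scorer stands in for the MLP the paper uses.
import random

random.seed(0)

# Toy data: 2 informative features (indices 0, 1) and 4 noise features.
def make_sample(label):
    x = [label + random.gauss(0, 0.3), label - random.gauss(0, 0.3)]
    return x + [random.gauss(0, 1) for _ in range(4)], label

data = [make_sample(random.choice([0, 1])) for _ in range(80)]

def accuracy(mask):
    """Fitness of a wolf: classification accuracy on the selected features."""
    feats = [i for i, m in enumerate(mask) if m > 0.5]
    if not feats:
        return 0.0
    centroids = {}
    for label in (0, 1):
        rows = [x for x, y in data if y == label]
        centroids[label] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    hits = 0
    for x, y in data:
        pred = min((0, 1), key=lambda c: sum((x[i] - centroids[c][j]) ** 2
                                             for j, i in enumerate(feats)))
        hits += pred == y
    return hits / len(data)

dim, wolves = 6, 8
pack = [[random.random() for _ in range(dim)] for _ in range(wolves)]
for t in range(30):
    a = 2 - 2 * t / 30                  # exploration factor shrinks over time
    pack.sort(key=accuracy, reverse=True)
    alpha, beta, delta = pack[0], pack[1], pack[2]  # pack hierarchy leaders
    for w in pack[3:]:
        for i in range(dim):
            # Move toward the mean of the three leaders, with shrinking noise.
            target = (alpha[i] + beta[i] + delta[i]) / 3
            w[i] = min(1.0, max(0.0, target + a * random.uniform(-1, 1)))

best = max(pack, key=accuracy)
selected = [i for i, m in enumerate(best) if m > 0.5]
print("selected features:", selected, "accuracy:", round(accuracy(best), 2))
```

In the paper's pipeline the binarized mask would instead gate which software metrics are fed to the MLP during training.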

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 November 2023

Salam Abdallah and Ashraf Khalil

Abstract

Purpose

To understand and lay a foundation for how analytics has been used in depression management, this study conducts a systematic literature review using two techniques: text mining and manual review. The proposed methodology would aid researchers in identifying key concepts and research gaps, which in turn, will help them to establish the theoretical background supporting their empirical research objective.

Design/methodology/approach

This paper explores a hybrid methodology for literature review (HMLR), using text mining prior to systematic manual review.
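The text-mining pass that precedes the manual review can be as simple as ranking terms across abstracts to surface candidate key concepts for the reviewer; a minimal sketch with invented abstracts and stop words:

```python
# Minimal sketch of a text-mining pass in a hybrid literature review: rank
# terms across abstracts to surface candidate key concepts. Data is invented.
from collections import Counter
import re

abstracts = [
    "Analytics and machine learning improve depression screening accuracy.",
    "Depression management benefits from predictive analytics on patient data.",
    "Machine learning models for depression relapse prediction.",
]
stopwords = {"and", "from", "for", "on", "the", "of", "a"}

terms = Counter(
    w for text in abstracts
    for w in re.findall(r"[a-z]+", text.lower())
    if w not in stopwords
)
print(terms.most_common(3))  # 'depression' appears in every abstract
```

The top-ranked terms then seed the manual review, which supplies the semantic depth the automated pass lacks.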

Findings

The proposed rapid methodology is an effective tool to automate and speed up the process required to identify key and emerging concepts and research gaps in any specific research domain while conducting a systematic literature review. It assists in populating a research knowledge graph that does not reach all semantic depths of the examined domain yet provides some science-specific structure.

Originality/value

This study presents a new methodology for conducting a literature review for empirical research articles. This study has explored an “HMLR” that combines text mining and manual systematic literature review. Depending on the purpose of the research, these two techniques can be used in tandem to undertake a comprehensive literature review, by combining pieces of complex textual data together and revealing areas where research might be lacking.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 16 April 2024

Jinwei Zhao, Shuolei Feng, Xiaodong Cao and Haopei Zheng

Abstract

Purpose

This paper aims to concentrate on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals within the contexts of wearable sensors and systems developed specifically for monitoring health and fitness metrics.

Design/methodology/approach

In recent decades, wearable sensors for monitoring vital signals in sports and health have advanced greatly. Vital signals include the electrocardiogram, electroencephalogram, electromyography, inertial data, body motion, cardiac rate and bodily fluids such as blood and sweat, making them good targets for sensing devices.

Findings

This report reviewed reputable journal articles on wearable sensors for vital signal monitoring, focusing on multimode and integrated multi-dimensional capabilities such as the structure, accuracy and nature of the devices, which may offer a more versatile and comprehensive solution.

Originality/value

The paper provides essential information on the present obstacles and challenges in this domain and offers a glimpse into the future directions of wearable sensors for detecting these crucial signals. Importantly, it is evident that the integration of modern fabrication techniques, stretchable electronic devices, the Internet of Things and artificial intelligence algorithms has significantly improved the capacity to efficiently monitor and leverage these signals for human health monitoring, including disease prediction.

Details

Sensor Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 14 August 2023

Usman Tariq, Ranjit Joy, Sung-Heng Wu, Muhammad Arif Mahmood, Asad Waqar Malik and Frank Liou

Abstract

Purpose

This study aims to discuss the state-of-the-art digital factory (DF) development combining digital twins (DTs), sensing devices, laser additive manufacturing (LAM) and subtractive manufacturing (SM) processes. The current shortcomings and outlook of the DF have also been highlighted. A DF is a state-of-the-art manufacturing facility that uses innovative technologies, including automation, artificial intelligence (AI), the Internet of Things, additive manufacturing (AM), SM, hybrid manufacturing (HM), sensors for real-time feedback and control, and a DT, to streamline and improve manufacturing operations.

Design/methodology/approach

This study presents a novel perspective on DF development using laser-based AM, SM, sensors and DTs. Recent developments in laser-based AM, SM, sensors and DTs have been compiled. This study has been developed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, discussing literature on DTs for laser-based AM, particularly laser powder bed fusion and directed energy deposition, in-situ monitoring and control equipment, SM and HM. The principal goal of this study is to highlight the aspects of the DF and its development using existing techniques.

Findings

A comprehensive literature review finds a substantial lack of complete techniques that incorporate cyber-physical systems, advanced data analytics, AI, standardized interoperability, human–machine cooperation and scalable adaptability. The suggested DF effectively fills this void by integrating cyber-physical system components, including DT, AM, SM and sensors into the manufacturing process. Using sophisticated data analytics and AI algorithms, the DF facilitates real-time data analysis, predictive maintenance, quality control and optimal resource allocation. In addition, the suggested DF ensures interoperability between diverse devices and systems by emphasizing standardized communication protocols and interfaces. The modular and adaptable architecture of the DF enables scalability and adaptation, allowing for rapid reaction to market conditions.

Originality/value

Based on the need of DF, this review presents a comprehensive approach to DF development using DTs, sensing devices, LAM and SM processes and provides current progress in this domain.

Article
Publication date: 4 October 2022

Dhruba Jyoti Borgohain, Raj Kumar Bhardwaj and Manoj Kumar Verma

Abstract

Purpose

Artificial Intelligence (AI) is an emerging technology that has turned into a field of knowledge consistently displacing older technologies and changing human life. It is applied in all spheres of life, as reflected in the literature review section here. As AI is applicable in the field of libraries too, this study scientifically mapped the papers on AI applications in libraries (AAIL) and analysed their growth, collaboration networks, trending topics and research hot spots to highlight the challenges and opportunities in adopting AI-based advancements in library systems and processes.

Design/methodology/approach

The study was developed with a bibliometric approach, considering a decade, 2012 to 2021, for data extraction from a premier database, Scopus. The steps followed are: (1) identifying and selecting keywords and forming the search strategy with the approval of a panel of computer scientists and librarians; (2) designing and developing an algorithm to verify the selected keywords in the title-abstract-keywords fields of Scopus; (3) processing the data in state-of-the-art bibliometric visualization tools, Biblioshiny (R) and VOSviewer; and (4) discussing the findings, practical implications and limitations of the study.

Findings

As evident from several papers, not much research has been conducted on AI applications in libraries in comparison to topics like AI applications in cancer, health, medicine, education and agriculture. As per Price's law, the growth pattern is exponential. The total number of papers relevant to the subject is 1,462 (single and multi-authored), contributed by 5,400 authors, with 0.271 documents per author and around 4 authors per document. Papers occurred mostly in open-access journals. The most productive journal is the Journal of Chemical Information and Modelling (NP = 63), while the most consistent and impactful is the Journal of Machine Learning Research (z-index = 63.58 and CPP = 56.17). Among authors, J Chen (z-index = 28.86 and CPP = 43.75) is the most consistent and impactful. At the country level, the USA recorded the highest number of papers and is positioned at the center of the co-authorship network, but at the institutional level, China takes first position. The trending topics of research are machine learning, large datasets, deep learning, high-level languages, etc. The present information system has high potential to improve if integrated with AI technologies.

Practical implications

The number of scientific papers has increased over time. The evolution of themes like machine learning implicates AI as a broad field of knowledge that converges with other disciplines. Themes like large datasets imply that AI may be applied to analyze and interpret these data and support decision-making in public sector enterprises. The theme of high-level languages emerged as a research hotspot, indicating extensive ongoing research to improve computer systems for processing data at high momentum. These implications are of high strategic worth for policymakers, library stakeholders, researchers and the government as a whole for decision-making.

Originality/value

The analysis of collaboration and of prolific authors/journals using the consistency factor and CPP, testing the relationship between consistency (z-index) and impact (h-index), and using state-of-the-art network visualization and cluster analysis techniques make this study novel and differentiate it from traditional bibliometric analysis. To the best of the authors' knowledge, this work is the first attempt to comprehend the research streams and provide a holistic view of research on the application of AI in libraries. The insights obtained from this analysis are instrumental for both academics and practitioners.

Details

Library Hi Tech, vol. 42 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 2 August 2022

Ahmet Tarık Usta and Mehmet Şahin Gök

Abstract

Purpose

The building and construction industry has significant potential to reduce adverse climate change effects. Natural resource use and greenhouse gas emissions caused by buildings can be improved by choosing energy-efficient technologies, renewable energy sources and sustainable architectural and construction elements. This study systematically reviews patent data for climate change mitigation technologies related to buildings, aiming to detect their relative importance and evaluate each technology in the Y02B network.

Design/methodology/approach

The applied approach covers the process of (1) selecting high-impact technologies, (2) collecting patent data from the USPTO database, (3) creating a citation frequency matrix using cooperative patent classification codes, (4) linking high-impact patents with the analytic network process method, (5) identifying core technologies from centrality indicators and (6) creating a technology network map with social network analysis.
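The centrality step can be illustrated with degree centrality over a citation frequency matrix; the CPC codes and counts below are invented, not the study's USPTO data:

```python
# Hypothetical illustration of scoring technologies in a citation frequency
# matrix by degree centrality to flag core nodes. Codes and counts are invented.
codes = ["Y02B20", "Y02B30", "Y02B70", "Y02B90"]
# citations[i][j]: citations from patents in codes[i] to patents in codes[j]
citations = [
    [0, 5, 1, 0],
    [2, 0, 5, 1],
    [3, 6, 0, 2],
    [0, 1, 2, 0],
]
n = len(codes)
centrality = {
    # Degree centrality here: outgoing citations plus incoming citations.
    codes[i]: sum(citations[i]) + sum(row[i] for row in citations)
    for i in range(n)
}
core = max(centrality, key=centrality.get)
print(centrality, "core:", core)
```

A social network analysis tool would then lay these nodes out as the technology network map, with the highest-centrality codes as core technologies.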

Findings

The study results show that energy-saving control techniques, energy-efficient lighting devices, end-user electricity consumption, management technologies and systems that convert solar energy into electrical energy are core solutions that reduce the effects of climate change. In addition, solutions that will support core technologies and whose effects are expected to increase in the coming years are energy-efficient heating, ventilating and air conditioning systems, smart grid integration, hybrid renewable energy systems, fuel cells, free cooling and heat recovery units and glazing technologies.

Originality/value

Most of the studies on patent analysis have failed to demonstrate any convincing evidence down to the lowest component groups of an entire technology network. The applied approach considers and evaluates each component included in a technology network from a holistic perspective.

Details

Kybernetes, vol. 52 no. 11
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 7 March 2024

Manpreet Kaur, Amit Kumar and Anil Kumar Mittal

Abstract

Purpose

In past decades, artificial neural network (ANN) models have revolutionised various stock market operations due to their superior ability to deal with nonlinear data and garnered considerable attention from researchers worldwide. The present study aims to synthesize the research field concerning ANN applications in the stock market to a) systematically map the research trends, key contributors, scientific collaborations, and knowledge structure, and b) uncover the challenges and future research areas in the field.

Design/methodology/approach

To provide a comprehensive appraisal of the extant literature, the study adopted the mixed approach of quantitative (bibliometric analysis) and qualitative (intensive review of influential articles) assessment to analyse 1,483 articles published in the Scopus and Web of Science indexed journals during 1992–2022. The bibliographic data was processed and analysed using VOSviewer and R software.

Findings

The results revealed the proliferation of articles since 2018, with China as the dominant country, Wang J as the most prolific author, “Expert Systems with Applications” as the leading journal, “computer science” as the dominant subject area, and “stock price forecasting” as the predominantly explored research theme in the field. Furthermore, “portfolio optimization”, “sentiment analysis”, “algorithmic trading”, and “crisis prediction” are found as recently emerged research areas.

Originality/value

To the best of the authors’ knowledge, the current study is a novel attempt that holistically assesses the existing literature on ANN applications across the entire domain of the stock market. The main contribution of the current study lies in discussing the challenges along with viable methodological solutions and providing application-area-wise knowledge gaps for future studies.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771
