Search results

1 – 10 of 412
Article
Publication date: 8 November 2023

Lea Iaia, Monica Fait, Alessia Munnia, Federica Cavallo and Elbano De Nuccio

This study aims to explore human–machine interactions in the process of adopting artificial intelligence (AI) based on the principles of Taylorism and digital Taylorism to…

Abstract

Purpose

This study aims to explore human–machine interactions in the process of adopting artificial intelligence (AI) based on the principles of Taylorism and digital Taylorism to validate these principles in postmodern management.

Design/methodology/approach

The topic has been investigated by means of a case study based on the current experience of Carrozzeria Basile, a body shop founded in Turin in 1970.

Findings

Carrozzeria Basile's approach is rooted in scientific management concepts, and its digital evolution is aimed at centring humans, investigating human–machine interactions and identifying how to take advantage of both.

Research limitations/implications

The research contributes to both Taylorism management and the literature on human–machine interactions. A unique case study represents a first step in comprehending the phenomenon but could also represent a limit for the study.

Practical implications

Practical implications concern the scientific path for facilitating the implementation and adoption of emerging technologies in organisational processes, including employee engagement and continuous employee training.

Originality/value

The research focuses on human–machine interactions in the process of adopting AI in the automation process. Its novelty also relies on the comprehension of the needed path to facilitate these interactions and stimulate a collaborative and positive approach. The study fills the literature gap investigating the interactions between humans and machines beginning with their historical roots, from Taylorism to digital Taylorism, in relation to an empirical scenario.

Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the…

Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) provide provenance information, (2) maintain backwards compatibility for existing models, and (3) reduce the complexity of a data model. The authors present two scenarios in which they use the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling. They showcase the implementation of the meta-level through two scenarios from their research project: (1) a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, for example to report errors or add supplementary information, and (2) a demonstration of how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
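To make the quoted-triple mechanism concrete, here is a minimal Python sketch that emits the kind of Turtle-star annotation such a commenting workflow might produce; the ex: namespace, property names and sample data are invented for illustration and are not taken from the paper.

# Sketch: emit an RDF-star annotation attaching a user comment to a
# single statement. Namespace and properties are illustrative only.

def annotate_statement(subject, predicate, obj, comment, user):
    """Return a Turtle-star snippet attaching a comment to one triple."""
    quoted = f"<< {subject} {predicate} {obj} >>"
    return (f'{quoted} ex:commentText "{comment}" ;\n'
            f'{" " * len(quoted)} ex:commentedBy {user} .')

print("@prefix ex: <http://example.org/> .\n")
print(annotate_statement("ex:book1", "ex:author", "ex:personA",
                         "Author attribution looks wrong here.",
                         "ex:user42"))

Because the comment attaches to the quoted triple itself rather than to either resource, the underlying data model stays untouched, which is the appeal of the meta-level approach described above.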

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

As RDF-star is a very new extension to RDF, the authors are, to the best of their knowledge, among the first to relate it to other meta-level approaches and to demonstrate its application in real-world scenarios.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 16 February 2024

Khameel B. Mustapha, Eng Hwa Yap and Yousif Abdalla Abakr

Following the recent rise in generative artificial intelligence (GenAI) tools, fundamental questions about their wider impacts have started to reverberate around various…

Abstract

Purpose

Following the recent rise in generative artificial intelligence (GenAI) tools, fundamental questions about their wider impacts have started to reverberate around various disciplines. This study aims to track the unfolding landscape of general issues surrounding GenAI tools and to elucidate the specific opportunities and limitations of these tools as part of the technology-assisted enhancement of mechanical engineering education and professional practices.

Design/methodology/approach

As part of the investigation, the authors conduct and present a brief scientometric analysis of recently published studies to unravel the emerging trend on the subject. Furthermore, the authors experiment with selected GenAI tools (Bard, ChatGPT, DALL-E and 3DGPT) on mechanical engineering-related tasks.

Findings

The study identifies several pedagogical and professional opportunities and guidelines for deploying GenAI tools in mechanical engineering. In addition, it highlights some pitfalls of GenAI tools for analytical reasoning tasks (e.g. subtle errors in computations involving unit conversions) and for sketching/image generation tasks (e.g. poor demonstration of symmetry).
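Given the unit-conversion pitfall noted above, one mitigation is to re-check a tool's arithmetic with a units-aware library. A minimal sketch using the Python pint package (the package choice and the numbers are illustrative assumptions, not from the study):

# Sketch: independently verify a GenAI-produced unit conversion with
# pint instead of trusting the generated arithmetic. Values are invented.
import pint

ureg = pint.UnitRegistry()

stress = 5000 * ureg.psi           # quantity stated in the prompt
converted = stress.to(ureg.MPa)    # independent, units-aware conversion

claimed_mpa = 34.5                 # hypothetical value returned by the tool
if abs(converted.magnitude - claimed_mpa) > 0.1:
    print(f"Mismatch: tool said {claimed_mpa} MPa, pint gives {converted:.2f}")
else:
    print(f"Conversion checks out: {converted:.2f}")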

Originality/value

To the best of the authors’ knowledge, this study presents the first thorough assessment of the potential of GenAI from the lens of the mechanical engineering field. Combining scientometric analysis, experimentation and pedagogical insights, the study provides a unique focus on the implications of GenAI tools for material selection/discovery in product design, manufacturing troubleshooting, technical documentation and product positioning, among others.

Details

Interactive Technology and Smart Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-5659


Open Access
Article
Publication date: 17 November 2023

Peiman Tavakoli, Ibrahim Yitmen, Habib Sadri and Afshin Taheri

The purpose of this study is to focus on structured data provision and asset information model maintenance and develop a data provenance model on a blockchain-based digital twin…

Abstract

Purpose

The purpose of this study is to focus on structured data provision and asset information model maintenance and to develop a data provenance model on a blockchain-based digital twin (DT) of a smart and sustainable built environment for predictive asset management (PAM) in building facilities.

Design/methodology/approach

Qualitative research data were collected through a comprehensive scoping review of secondary sources. Additionally, primary data were gathered through interviews with industry specialists. The analysis of the data served as the basis for developing blockchain-based DT data provenance models and scenarios. A case study involving a conference room in an office building in Stockholm was conducted to assess the proposed data provenance model. The implementation utilized the Remix Ethereum platform and Sepolia testnet.

Findings

Based on the analysis of the results, the authors developed a data provenance model on a blockchain-based DT that ensures the reliability and trustworthiness of the data used in PAM processes. This is achieved by providing a transparent and immutable record of data origin, ownership and lineage.
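As a conceptual illustration of that immutable lineage property, the following Python sketch models a hash-linked provenance log (it is not the authors' Solidity contract on Sepolia): each record commits to its predecessor's digest, so tampering with any entry invalidates everything after it.

# Conceptual sketch: hash-linked provenance records, mimicking the
# immutability a blockchain ledger provides on-chain. Field names and
# sample data are invented for illustration.
import hashlib
import json
import time

def make_record(origin, owner, payload, prev_hash):
    record = {
        "origin": origin,      # data source (e.g. a sensor feeding a DApp)
        "owner": owner,        # accountable party
        "payload": payload,    # O&M reading being recorded
        "prev": prev_hash,     # link to the preceding record
        "ts": time.time(),
    }
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

r1, h1 = make_record("room-sensor-01", "facility-mgr", {"temp_c": 21.4}, "0" * 64)
r2, h2 = make_record("room-sensor-01", "facility-mgr", {"temp_c": 21.9}, h1)
print(h1, h2, sep="\n")  # altering r1 would change h1 and so break r2's link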

Practical implications

The proposed model enables decentralized applications (DApps) to publish real-time data obtained from dynamic operations and maintenance processes, enhancing the reliability and effectiveness of data for PAM.

Originality/value

The research presents a data provenance model on a blockchain-based DT, specifically tailored to PAM in building facilities. The proposed model enhances decision-making processes related to PAM by ensuring data reliability and trustworthiness and providing valuable insights for specialists and stakeholders interested in the application of blockchain technology in asset management and data provenance.

Details

Smart and Sustainable Built Environment, vol. 13 no. 1
Type: Research Article
ISSN: 2046-6099


Article
Publication date: 6 February 2024

Abdul Moid, M. Masoom Raza, Mohammad Javed and Keshwar Jahan

Records are current documents containing crucial personal, legal, financial and medical information, while archives house non-current documents with the same details. This study…

Abstract

Purpose

Records are current documents containing crucial personal, legal, financial and medical information, while archives house non-current documents containing the same kinds of information. This study specifically aims to measure existing research in records and archives management with various scientific indicators.

Design/methodology/approach

Data extraction was conducted using the Web of Science, resulting in a data set of 2,003 records for further analysis. Biblioshiny and VOSviewer have been used for mapping and visualization of the extracted data.

Findings

Managing and organizing this essential information is as vital as maintaining the records and archives themselves. The findings encompass various aspects such as publications and citations, influential authors, source impact factors, relevant articles, affiliations, co-authorship trends across the top 10 countries and regions, references, publication year spectroscopy, keyword co-occurrence and historiography. The study concludes that medical records management prominently dominates the selected research area.
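For readers unfamiliar with the keyword co-occurrence analysis mentioned here, the underlying computation that tools such as VOSviewer visualize reduces to counting keyword pairs per record; a small Python sketch with invented sample records:

# Sketch: keyword co-occurrence counts of the kind VOSviewer maps.
# The sample records are invented for illustration.
from collections import Counter
from itertools import combinations

records = [
    ["records management", "archives", "digitization"],
    ["records management", "medical records", "archives"],
    ["medical records", "electronic health records"],
]

cooc = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooc[(a, b)] += 1

for pair, n in cooc.most_common(3):
    print(pair, n)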

Originality/value

The study reflects advancements in management systems and shows that research on the management of records and archives continues to emerge and gain significance.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342


Open Access
Article
Publication date: 22 December 2023

Tao Xu, Hanning Shi, Yongjiang Shi and Jianxin You

The purpose of this paper is to explore the concept of data assets and how companies can assetize their data. Using the literature review methodology, the paper first summarizes…


Abstract

Purpose

The purpose of this paper is to explore the concept of data assets and how companies can assetize their data. Using the literature review methodology, the paper first summarizes the conceptual controversies over data assets in the existing literature. Subsequently, the paper defines the concept of data assets. Finally, keywords from the existing research literature are presented visually and a foundational framework for achieving data assetization is proposed.

Design/methodology/approach

This paper uses a systematic literature review approach to discuss the conceptual evolution and strategic imperatives of data assets. To establish a robust research methodology, the paper takes two main aspects into account. First, it conducts a comprehensive review of the existing literature on digital technology and data assets, which enables the derivation of an evolutionary path for data assets and the development of a clear and concise definition of the concept. Second, the paper uses CiteSpace, a widely used literature-review software, to examine the research framework of enterprise data assetization.

Findings

The paper offers pivotal insights into the realm of data assets. It highlights the changing perceptions of data assets with digital progression and addresses debates on data asset categorization, value attributes and ownership. The study introduces a definitive concept of data assets as electronically recorded data resources with real or potential value under legal parameters. Moreover, it delineates strategic imperatives for harnessing data assets, presenting a practical framework that charts the stages of “resource readiness, capacity building, and data application”, guiding businesses in optimizing their data throughout its lifecycle.

Originality/value

This paper comprehensively explores the issue of data assets, clarifying controversial concepts and categorizations and filling gaps in the existing literature. It introduces a clear conceptualization of data assets, bridging the gap between academia and practice. In addition, the study proposes a strategic framework for data assetization. This not only helps to promote a unified understanding among academics and professionals but also helps businesses to understand the process of data assetization.

Details

Asia Pacific Journal of Innovation and Entrepreneurship, vol. 18 no. 1
Type: Research Article
ISSN: 2071-1395


Article
Publication date: 7 March 2023

Preeti Godabole and Girish Bhole

The main purpose of the paper is timing analysis of mixed critical applications on the multicore system to identify an efficient task scheduling mechanism to achieve three main…

Abstract

Purpose

The main purpose of the paper is the timing analysis of mixed critical applications on multicore systems to identify an efficient task scheduling mechanism that achieves three main objectives: improving schedulability, achieving reliability and minimizing the number of cores used. The rise in transient faults in embedded systems due to the use of low-cost processors has led to the use of fault-tolerant scheduling and mapping techniques.

Design/methodology/approach

The paper opted for a simulation-based study. The simulation of mixed critical applications, such as air traffic control systems and synthetic workloads, is carried out using the LITMUS^RT real-time testbed on an Ubuntu machine. Heuristic algorithms for task allocation based on utilization factors and task criticalities are proposed for partitioned approaches with multiple objectives.
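To illustrate what a utilization-based allocation heuristic of this kind can look like, here is a minimal Python sketch of first-fit-decreasing partitioning under the uniprocessor EDF bound (total utilization at most 1 per core); the task set is invented, and the paper's actual heuristic also accounts for task criticalities and active backups:

# Sketch: first-fit partitioning of tasks onto cores by utilization,
# using the uniprocessor EDF feasibility bound (sum of u_i <= 1 per core).
# Task set is illustrative, not the paper's workload.

def partition_edf(tasks, n_cores):
    """tasks: list of (name, wcet, period); returns (core lists, loads)."""
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    # First-fit decreasing: place high-utilization tasks first.
    for name, wcet, period in sorted(tasks, key=lambda t: -t[1] / t[2]):
        u = wcet / period
        for i in range(n_cores):
            if load[i] + u <= 1.0:      # EDF feasibility on core i
                cores[i].append(name)
                load[i] += u
                break
        else:
            raise ValueError(f"task {name} does not fit on {n_cores} cores")
    return cores, load

tasks = [("radar", 2, 5), ("display", 1, 10), ("logger", 3, 20), ("comms", 2, 8)]
print(partition_edf(tasks, 3))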

Findings

Both partitioned earliest deadline first (EDF) with the utilization-based heuristic and EDF-virtual deadline (VD) with a criticality-based heuristic for allocation work well: they schedule the air traffic system with a 98% success ratio (SR) using only three processor cores, with transient faults being handled by active backups of the tasks. With synthetic task loads, the proposed criticality-based heuristic works well with EDF-VD, as the SR is 94%. The proposed heuristic is validated against both global and partitioned approaches to scheduling, considering active backups to make the system reliable. There is an improvement in SR of 11% compared to the global approach and of 17% compared to the partitioned fixed-priority approach, with only three processor cores being used.

Research limitations/implications

The simulations of mixed critical tasks are carried out on a Linux-based real-time kernel, and the results are generalizable to Linux-based environments.

Practical implications

The rise in transient faults in embedded systems due to the use of low-cost processors has led to the use of fault-tolerant scheduling and mapping techniques.

Originality/value

This paper fulfills an identified need for multi-objective task scheduling in mixed critical systems. The timing analysis helps to identify performance risks and assess alternative architectures used to achieve reliability in terms of transient faults.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371


Book part
Publication date: 16 January 2024

Ayodeji E. Oke and Seyi S. Stephen

The interaction of systems through a designated control channel has improved communication, efficiency, management, storage, processing, etc. across several industries. The…

Abstract

The interaction of systems through a designated control channel has improved communication, efficiency, management, storage, processing, etc. across several industries. The construction industry thrives on a well-planned workflow rhythm; a change in environmental dynamism will have either a positive or a negative impact on the output of the project planned for execution. Moreover, by addressing the need for effective collaboration through workflow and project planning, grid applications in construction facilitate the relationship between the project reality and the end users, all with the aim of improving resource and value management. However, decentralisation of close-domain control can cause uncertainty and incompleteness of data, which can be a significant factor, especially when a complex project is being executed.

Details

A Digital Path to Sustainable Infrastructure Management
Type: Book
ISBN: 978-1-83797-703-1


Open Access
Article
Publication date: 31 July 2023

Sara Lafia, David A. Bleckley and J. Trent Alexander

Many libraries and archives maintain collections of research documents, such as administrative records, with paper-based formats that limit the documents' access to in-person use…

Abstract

Purpose

Many libraries and archives maintain collections of research documents, such as administrative records, with paper-based formats that limit the documents' access to in-person use. Digitization transforms paper-based collections into more accessible and analyzable formats. As collections are digitized, there is an opportunity to incorporate deep learning techniques, such as Document Image Analysis (DIA), into workflows to increase the usability of information extracted from archival documents. This paper describes the authors' approach using digital scanning, optical character recognition (OCR) and deep learning to create a digital archive of administrative records related to the mortgage guarantee program of the Servicemen's Readjustment Act of 1944, also known as the G.I. Bill.

Design/methodology/approach

The authors used a collection of 25,744 semi-structured paper-based records from the administration of G.I. Bill Mortgages from 1946 to 1954 to develop a digitization and processing workflow. These records include the name and city of the mortgagor, the amount of the mortgage, the location of the Reconstruction Finance Corporation agent, one or more identification numbers and the name and location of the bank handling the loan. The authors extracted structured information from these scanned historical records in order to create a tabular data file and link them to other authoritative individual-level data sources.
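As a hedged sketch of the regular-expression baseline for pulling such fields out of the OCR text (the pattern and the sample line are invented for illustration; the actual record layout differs):

# Sketch: regex-based extraction of structured fields from OCR output.
# Pattern and sample line are illustrative, not the real record layout.
import re

PATTERN = re.compile(
    r"(?P<name>[A-Z][a-zA-Z.\s]+?),\s*"     # mortgagor name
    r"(?P<city>[A-Z][a-zA-Z\s]+?)\s+"       # mortgagor city
    r"\$(?P<amount>[\d,]+)\s+"              # mortgage amount
    r"No\.\s*(?P<id>\d+)"                   # identification number
)

line = "John A. Smith, Toledo $6,500 No. 104233"
m = PATTERN.search(line)
if m:
    print(m.groupdict())
# {'name': 'John A. Smith', 'city': 'Toledo', 'amount': '6,500', 'id': '104233'}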

Findings

The authors compared the flexible character accuracy of five OCR methods. They then compared the character error rate (CER) of three text extraction approaches (regular expressions, DIA and named entity recognition (NER)). The highest quality structured text output was obtained using DIA with the Layout Parser toolkit, followed by post-processing with regular expressions. Through this project, the authors demonstrate how DIA can improve the digitization of administrative records to automatically produce a structured data resource for researchers and the public.
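The character error rate used in this comparison is conventionally defined as the edit distance between the extracted text and the ground truth, divided by the length of the ground truth; a self-contained Python sketch:

# Sketch: character error rate (CER) = Levenshtein distance between the
# extracted text and the reference, divided by the reference length.

def cer(reference: str, hypothesis: str) -> float:
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else 0.0

print(cer("Mortgage No. 104233", "Mortgage No. I04233"))  # one OCR substitution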

Originality/value

The authors' workflow is readily transferable to other archival digitization projects. Through the use of digital scanning, OCR and DIA processes, the authors created the first digital microdata file of administrative records related to the G.I. Bill mortgage guarantee program available to researchers and the general public. These records offer research insights into the lives of veterans who benefited from loans, the impacts on the communities built by the loans and the institutions that implemented them.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 20 February 2023

Zakaria Sakyoud, Abdessadek Aaroud and Khalid Akodadi

The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary optimization. The…

Abstract

Purpose

The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary optimization. The authors have worked on the public university as an implementation field.

Design/methodology/approach

The design of the research work followed the design science research (DSR) methodology for information systems. DSR is a research paradigm wherein a designer answers questions relevant to human problems through the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The authors adopted a techno-functional approach. The technical part consists of the development of an intelligent recommendation system that supports the choice of optimal information technology (IT) equipment for decision-makers. This intelligent recommendation system relies on a set of functional and business concepts, namely Moroccan normative laws and the Control Objectives for Information and Related Technologies (COBIT) guidelines on information system governance.

Findings

The modeling of business processes in public universities is established using business process model and notation (BPMN) in accordance with official regulations. The set of BPMN models constitute a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary wastes, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature. Implementation of artificial intelligence techniques can bring great value in terms of transparency and fluidity in purchasing business process execution.

Research limitations/implications

Business limitations: First, the proposed system was modeled to handle one type of product, namely computer-related equipment; the authors intend to extend the model to other product types in future work. Second, the system proposes an optimal purchasing order and assumes that decision-makers will rely on it to choose between offers; as a perspective, the authors plan to fully automate the workflow to also include vendor selection and offer validation.

Technical limitations: Natural language processing (NLP) is a widely used sentiment analysis (SA) technique that enabled the authors to validate the proposed system. Even when working on samples of the data sets, the authors noticed NLP's dependency on huge computing power. They intend to experiment with learning- and knowledge-based SA and to assess their computing power consumption and analysis accuracy compared with NLP. Another technical limitation relates to web scraping: users' reviews are crucial for the system, and to guarantee timely and reliable reviews, the system has to search websites automatically, which confronts the authors with the limitations of web scraping, such as constantly changing website structures and scraping restrictions.
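As a rough illustration of the NLP-based sentiment analysis step described here (the library is an assumed stand-in, since the abstract does not name the authors' NLP stack), scoring scraped user reviews with NLTK's VADER analyzer:

# Sketch: scoring scraped user reviews with a sentiment analyzer.
# NLTK/VADER is an assumed stand-in; the paper does not name its toolkit.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

reviews = [
    "Excellent laptop, battery easily lasts a full workday.",
    "Screen died after two weeks, support was unhelpful.",
]
for text in reviews:
    score = sia.polarity_scores(text)["compound"]  # -1 (neg) .. +1 (pos)
    print(f"{score:+.2f}  {text}")

Aggregating such scores per product offer is one way a recommendation system of this kind could fold user feedback into its ranking of purchase options.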

Practical implications

The modeling of business processes in public universities is established using BPMN in accordance with official regulations. The set of BPMN models constitute a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary wastes, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature.

Originality/value

The adopted techno-functional approach enabled the authors to bring information system governance from a highly abstract level to a practical implementation in which theoretical best practices and guidelines are transformed into a tangible application.

Details

Kybernetes, vol. 53 no. 5
Type: Research Article
ISSN: 0368-492X

