Search results

1 – 10 of 49
Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht

Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and reporting on performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked to govern these technologies must understand where the technology fits into the business process and consider its limitations to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose to ensure the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will benefit decision makers and those tasked to govern these technologies to understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit those users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations on an accounting task level.

Details

Journal of Financial Reporting and Accounting, vol. 22 no. 2
Type: Research Article
ISSN: 1985-2517

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and with the CNN assessing tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, and true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
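As an illustration of the tile-based validation against field data described in this abstract, the sketch below computes the same style of rates (false positives as a share of tagged tiles, false negatives as a share of confirmed mounds). The tile identifiers, probabilities and field labels are invented for illustration, not the study's data.

```python
THRESHOLD = 0.6  # identification threshold reported in the study

def validate_tiles(predictions, field_mounds):
    """predictions: {tile_id: mound probability};
    field_mounds: tile ids confirmed to contain mounds by field survey."""
    tagged = {t for t, p in predictions.items() if p >= THRESHOLD}
    false_pos = tagged - field_mounds
    false_neg = field_mounds - tagged
    return {
        # share of tagged tiles that are not real mounds
        "false_positive_rate": len(false_pos) / len(tagged) if tagged else 0.0,
        # share of real mound tiles the model missed
        "false_negative_rate": len(false_neg) / len(field_mounds) if field_mounds else 0.0,
        "true_positives": len(tagged & field_mounds),
    }

# illustrative numbers only
preds = {"t1": 0.9, "t2": 0.7, "t3": 0.2, "t4": 0.65, "t5": 0.1}
mounds = {"t1", "t3", "t5"}
rates = validate_tiles(preds, mounds)
print(rates)
```

External validation of this kind is what exposed the gap between the model's self-reported scores and its actual performance.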

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that the use of artificial intelligence (AI) and ML approaches in archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale for potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 19 May 2023

Amit Kumar, Bala Krishnamoorthy and Som Sekhar Bhattacharyya

Abstract

Purpose

This research study aims to inquire into the technostress phenomenon at an organizational level from machine learning (ML) and artificial intelligence (AI) deployment. The authors investigated the role of ML and AI automation-augmentation paradox and the socio-technical systems as coping mechanisms for technostress management amongst managers.

Design/methodology/approach

The authors applied an exploratory qualitative method and conducted in-depth interviews based on a semi-structured interview questionnaire. Data were collected from 26 subject matter experts. The data transcripts were analyzed using thematic content analysis.

Findings

The study results indicated that role ambiguity, job insecurity and the technology environment contributed to technostress arising from ML and AI technologies deployment. Complexity, uncertainty, reliability and usefulness were the primary technology environment-related stressors. The novel integration of ML and AI automation-augmentation interdependence, along with socio-technical systems, could be used effectively for technostress management at the organizational level.

Research limitations/implications

This research study contributed to the theoretical discourse on technostress in organizations arising from increased ML and AI technologies deployment. It identified the main technostressors and contributed critical, novel insights into the theorization of coping mechanisms for technostress management in organizations deploying ML and AI.

Practical implications

The phenomenon of technostress arising from ML and AI technologies could have restrictive effects on organizational performance. Executives could pursue the simultaneous deployment of an ML and AI technologies-based automation-augmentation strategy along with socio-technical measures to cope with technostress. Managers could support the technical up-skilling of employees, the realization of ML and AI value, the implementation of technology-driven change management and the strategic planning of ML and AI technologies deployment.

Originality/value

This research study was among the first few to provide critical insights into technostress at the organizational level arising from ML and AI deployment. It integrated the novel theoretical paradigm of the ML and AI automation-augmentation paradox and socio-technical systems as coping mechanisms for technostress management.

Details

International Journal of Organizational Analysis, vol. 32 no. 4
Type: Research Article
ISSN: 1934-8835

Book part
Publication date: 13 May 2024

Kshitiz Jangir, Vikas Sharma and Munish Gupta

Abstract

Purpose: The study aims to analyse and discuss the effect of COVID-19 on businesses. The chapter discusses the various machine learning (ML) tools and techniques, which can help in better decision making by businesses in the present world.

Need for the Study: COVID-19 has increased the role of VUCA elements in the business environment, and there is a need to address the challenges businesses face in such an environment. ML and artificial intelligence can help businesses meet these challenges.

Methodology: The chapter focuses on the use of artificial intelligence (AI) and ML techniques for decision making during the COVID-19 pandemic in a VUCA business environment.

Findings: The key findings and their implications emphasise the importance of understanding and implementing AI and ML techniques in business strategies during times of crisis.

Practical Implications: The chapter’s practical contribution lies in the application of AI and ML techniques during the COVID-19 pandemic and in a VUCA business environment.

Details

VUCA and Other Analytics in Business Resilience, Part B
Type: Book
ISBN: 978-1-83753-199-8

Article
Publication date: 1 March 2023

Hossein Shakibaei, Mohammad Reza Farhadi-Ramin, Mohammad Alipour-Vaezi, Amir Aghsami and Masoud Rabbani

Abstract

Purpose

Every day, incidents small and large happen all over the world, and given the human, financial and spiritual damage they cause, proper planning is needed so that they can be managed appropriately in times of crisis. This study aims to examine humanitarian supply chain models.

Design/methodology/approach

A new model is developed to capture the necessary relations in an optimal way that minimizes human, financial and moral losses. To optimize the problem, the developed model incorporates the following: the magnitude of the areas in which an accident may occur, obtained using multiple attribute decision-making methods; the distances between relief centers; the number of available rescuers; the number of rescuers required; and the risk level of each patient, which is determined from previous data using machine learning (ML) algorithms.
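The abstract does not name the specific multiple attribute decision-making (MADM) method used to score area magnitude, so the sketch below uses simple additive weighting (SAW), one common MADM technique, to rank candidate areas. The attributes, weights and area names are illustrative assumptions, not the study's inputs.

```python
# Normalized attribute scores per area (illustrative)
areas = {
    "district_A": {"population": 0.9, "hazard": 0.7, "access": 0.4},
    "district_B": {"population": 0.5, "hazard": 0.9, "access": 0.8},
}
weights = {"population": 0.5, "hazard": 0.3, "access": 0.2}  # sum to 1

def saw_score(attributes):
    # SAW: weighted sum of normalized attribute values
    return sum(weights[k] * v for k, v in attributes.items())

# Rank areas by incident magnitude, highest first
ranked = sorted(areas, key=lambda a: saw_score(areas[a]), reverse=True)
print(ranked)
```

In the study's pipeline, scores of this kind would feed the bi-objective optimization model alongside the ML-derived patient risk levels.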

Findings

For this purpose, a case study in the east of Tehran was conducted. According to the results obtained from the algorithms, the problem modeling and the case study, the proposed model performs very well in terms of accuracy.

Originality/value

Obtaining each injured person's priority using ML techniques and each area's importance or risk level, together with the development of a bi-objective mathematical model and the use of multiple attribute decision-making methods, makes this study unique among the very few that apply ML in the humanitarian supply chain. Moreover, the findings validate the model and its functionality very well.

Article
Publication date: 2 May 2024

Neveen Barakat, Liana Hajeir, Sarah Alattal, Zain Hussein and Mahmoud Awad

Abstract

Purpose

The objective of this paper is to develop a condition-based maintenance (CBM) scheme for pneumatic cylinders. The CBM scheme will detect two common types of air leaking failure modes and identify the leaky/faulty cylinder. The successful implementation of the proposed scheme will reduce energy consumption, scrap and rework, and time to repair.

Design/methodology/approach

Effective implementation of maintenance is important to reduce operating costs, improve productivity and enhance quality performance at the same time. Condition-based monitoring is an effective maintenance scheme in which maintenance is triggered by the condition of the equipment, monitored either in real time or at certain intervals. Pneumatic air systems are commonly used in many industries for packaging, sorting and powering air tools, among others. A common failure mode of pneumatic cylinders is air leaks, which are difficult to detect in complex systems with many connections. The proposed method monitors the stroke speed profile of the piston inside the pneumatic cylinder using Hall effect sensors. Statistical features are extracted from the speed profiles and used to develop a fault-detection machine learning model. The method is demonstrated using a real-life case of tea packaging machines.

Findings

Based on the limited data collected, the ensemble machine learning algorithm achieved 88.4% accuracy. The algorithm can detect failures as soon as they occur, based on a majority-vote rule over three machine learning models.
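The two building blocks described in this abstract, statistical features extracted from the piston's stroke speed profile and a majority-vote rule over three model outputs, can be sketched as follows. The feature set and sample values are illustrative assumptions, not the authors' actual implementation.

```python
import statistics

def extract_features(speed_profile):
    # Statistical features of the stroke speed profile (illustrative choice)
    return {
        "mean": statistics.mean(speed_profile),
        "stdev": statistics.stdev(speed_profile),
        "peak": max(speed_profile),
    }

def majority_vote(votes):
    # Declare a fault when at least two of the three models agree
    return sum(votes) >= 2

profile = [0.0, 0.8, 1.2, 1.1, 0.9, 0.1]  # sampled piston speeds (illustrative)
features = extract_features(profile)
print(features["peak"])
print(majority_vote([True, True, False]))
```

In practice each of the three models would classify the feature vector independently, and the vote would aggregate their fault/no-fault decisions.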

Practical implications

Early air leak detection will improve the quality of packaged tea bags and provide annual savings through reduced time to repair and energy waste. The average annual estimated savings from implementing the new CBM method is $229,200, with a payback period of less than two years.

Originality/value

To the best of the authors’ knowledge, this paper is the first to propose a CBM scheme for pneumatic system air leaks based on piston speed. Most, if not all, current detection methods rely on expensive equipment such as infrared or ultrasonic sensors. This paper also contributes to the research gap concerning the economic justification of CBM.

Details

Journal of Quality in Maintenance Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 26 May 2022

Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye

Abstract

Purpose

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effects traffic data set have on the performance of machine learning (ML) predictive models in AQ prediction.

Design/methodology/approach

To achieve this, the authors have set up an experiment with the control data set having only the AQ data set and meteorological (Met) data set, while the experimental data set is made up of the AQ data set, Met data set and traffic data set. Several ML models (such as extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
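As an illustration of the control-versus-experimental comparison described here, the sketch below trains a minimal 1-nearest-neighbour regressor (a stand-in for the study's KNeighborsRegressor-style models) on toy data, once with and once without a traffic feature. All values are invented; none of the study's data is reproduced.

```python
def nn_predict(train_X, train_y, x):
    # 1-nearest-neighbour regression on plain Python lists
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

# Toy rows: (temperature, traffic volume) -> PM2.5 reading (all invented)
rows = [((10, 100), 50.0), ((10, 500), 90.0), ((20, 100), 40.0), ((20, 500), 80.0)]
held_out, true_pm25 = (11, 480), 88.0

X_exp = [list(f) for f, _ in rows]   # experimental set: Met + traffic features
X_ctl = [[f[0]] for f, _ in rows]    # control set: traffic column dropped
y = [t for _, t in rows]

err_exp = abs(nn_predict(X_exp, y, list(held_out)) - true_pm25)
err_ctl = abs(nn_predict(X_ctl, y, [held_out[0]]) - true_pm25)
print(err_exp, err_ctl)
```

On this toy data the traffic feature lets the model distinguish high-traffic from low-traffic conditions that share the same meteorology, mirroring the error reduction the study reports.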

Findings

The results showed that the ML algorithms react differently to the traffic data set, although its inclusion improved the performance of all the ML algorithms considered in this study by at least 20% and reduced error by at least 18.97%.

Research limitations/implications

This research is limited in terms of the study area, and the results cannot be generalized outside of the UK, as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in the literature are considered, leaving out several other ML algorithms.

Practical implications

This study reinforces the belief that traffic data sets significantly improve the performance of air pollution ML prediction models. It also indicates that ML algorithms behave differently when trained with traffic data in the development of an AQ prediction model, which implies that developers and researchers in AQ prediction need to identify the ML algorithms best suited to their purposes before implementation.

Originality/value

The results of this study will enable researchers to focus on the algorithms that benefit most from traffic data sets in AQ prediction.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 11 December 2023

Chi-Un Lei, Wincy Chan and Yuyue Wang

Abstract

Purpose

Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curriculum. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, this study examined how common core courses can be classified according to SDGs using a machine-learning approach.

Design/methodology/approach

In this report, the authors used machine learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results using visualizations. The training data set comes from the community-verified OSDG public community data set, while key descriptions of the common core courses were used for the classification. The study used the multinomial logistic regression algorithm for classification. Descriptive analyses at the course, theme and curriculum levels illustrate the proposed approach’s functions.
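The classification step can be sketched as a multinomial logistic regression (softmax) classifier trained on bag-of-words counts of course descriptions. The course texts and SDG labels below are invented stand-ins; the study trained on the OSDG community data set and applied the model to real course descriptions.

```python
import math
from collections import Counter

docs = [
    ("clean water sanitation quality", "SDG6"),
    ("water pollution river quality", "SDG6"),
    ("renewable energy solar power", "SDG7"),
    ("energy efficiency wind power", "SDG7"),
]
vocab = sorted({w for text, _ in docs for w in text.split()})
labels = sorted({y for _, y in docs})

def vectorise(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]  # bag-of-words counts

W = {y: [0.0] * len(vocab) for y in labels}  # one weight vector per SDG

def predict_proba(x):
    raw = {y: sum(w * f for w, f in zip(W[y], x)) for y in labels}
    top = max(raw.values())  # subtract max for numerical stability
    exps = {y: math.exp(v - top) for y, v in raw.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}

# Plain gradient ascent on the multinomial log-likelihood
for _ in range(200):
    for text, y_true in docs:
        x = vectorise(text)
        p = predict_proba(x)
        for y in labels:
            grad = (1.0 if y == y_true else 0.0) - p[y]
            W[y] = [w + 0.5 * grad * f for w, f in zip(W[y], x)]

def classify(text):
    p = predict_proba(vectorise(text))
    return max(p, key=p.get)

print(classify("water quality"))
print(classify("solar energy"))
```

With richer training data and more SDG classes, the same model shape scales to the 17-way tagging task the study describes.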

Findings

The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.

Research limitations/implications

The study can achieve more accurate model training by adopting advanced machine learning algorithms (e.g. deep learning, multioutput multiclass machine learning algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) carrying SDG classifications. The performance of the algorithm should also be compared to other computer-based and human-based SDG classification approaches to cross-check the results, within a systematic evaluation framework. The study could be extended by circulating results to students and examining how they interpret and use them when choosing courses. Finally, the study mainly focused on classifying the topics taught in courses and cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in courses. Analysis based on assessment tasks and rubrics could show whether those tasks help students understand and take action on SDGs.

Originality/value

The proposed approach explores the possibility of using machine learning for SDG classification at scale.

Details

International Journal of Sustainability in Higher Education, vol. 25 no. 4
Type: Research Article
ISSN: 1467-6370

Open Access
Article
Publication date: 23 January 2024

Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista

Abstract

Purpose

This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though the results determine the company’s bid quality and are contractually binding.

Design/methodology/approach

This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.

Findings

This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.

Originality/value

Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 12 December 2023

Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua

Abstract

Purpose

The Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. Still, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.

Design/methodology/approach

Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large data sets. First, the dominant loss class is identified and traditional methodologies are combined with ARs for anomaly classification and prioritization. Once priority anomalies are selected, a detailed analysis is conducted to investigate their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
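The association-rule building block can be sketched with the two standard AR measures, support and confidence, computed over anomaly co-occurrence records. The anomaly names and transactions below are illustrative, not the company's data.

```python
# Each record lists the anomalies observed together in one production period
transactions = [
    {"speed_loss", "minor_stop"},
    {"speed_loss", "minor_stop", "defect"},
    {"minor_stop"},
    {"speed_loss", "defect"},
    {"speed_loss", "minor_stop"},
]

def support(itemset):
    # Fraction of records containing every item in the set
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # P(consequent | antecedent), the usual AR confidence measure
    return support(antecedent | consequent) / support(antecedent)

ante, cons = {"speed_loss"}, {"minor_stop"}
print(round(support(ante | cons), 2))
print(round(confidence(ante, cons), 2))
```

Rules with high support and confidence would then be candidates for the prioritization and Network Analysis steps the abstract describes.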

Findings

The proposed method was tested in an automotive company to validate the framework and measure its impact. In particular, the results highlighted that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.

Originality/value

The originality lies in the dual application of association rules for extracting knowledge from the overall OEE: the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 5
Type: Research Article
ISSN: 0265-671X
