Search results

1 – 10 of 859
Article
Publication date: 11 December 2023

Chi-Un Lei, Wincy Chan and Yuyue Wang

Abstract

Purpose

Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curricula. The purpose of this study is to investigate how the existing common core courses in a university connect to SDG education. In particular, this study examines how common core courses can be classified according to SDGs using a machine-learning approach.

Design/methodology/approach

In this report, the authors used machine-learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results through visualizations. The training data set comes from the OSDG public community data set, which the community has verified. Key descriptions of the common core courses were used for the classification, with a multinomial logistic regression algorithm as the classifier. Descriptive analyses at the course, theme and curriculum levels are included to illustrate the proposed approach's functions.
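As a minimal sketch of the classification step described above, the snippet below tags a course description with an SDG via TF-IDF features and multinomial logistic regression. The training texts and labels are invented stand-ins for the OSDG community data set, not the study's actual data.

```python
# Hypothetical sketch: tagging course descriptions with SDGs using
# multinomial logistic regression. Training texts/labels are invented
# stand-ins for the verified OSDG community data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "poverty reduction and social protection programmes",
    "renewable energy systems and clean power generation",
    "climate change mitigation and carbon emissions policy",
    "universal access to quality education and lifelong learning",
]
train_labels = ["SDG1", "SDG7", "SDG13", "SDG4"]

# TF-IDF features feed a logistic regression, which with the default
# lbfgs solver fits a multinomial (softmax) model over the SDG labels.
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

course_description = "This course examines solar and wind energy technologies."
predicted = model.predict([course_description])[0]
print(predicted)
```

In practice the training corpus would be far larger, and course-, theme- and curriculum-level summaries would be aggregated from the per-course predictions.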

Findings

The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.

Research limitations/implications

More accurate model training could be achieved by adopting advanced machine learning algorithms (e.g. deep learning, multioutput multiclass machine learning algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) that carry SDG classifications. The performance of the algorithm should also be compared with other computer-based and human-based SDG classification approaches, within a systematic evaluation framework, to cross-check the results. The study could further be extended by circulating results to students and examining how they would interpret and use them when choosing courses. Moreover, the study mainly focused on classifying the topics taught in courses and cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in them. Analysis could also be conducted on courses' assessment tasks and rubrics to see whether these tasks help students understand and take action on SDGs.

Originality/value

The proposed approach explores the possibility of using machine learning for SDG classification at scale.

Details

International Journal of Sustainability in Higher Education, vol. 25 no. 4
Type: Research Article
ISSN: 1467-6370

Article
Publication date: 25 March 2024

Raúl Katz, Juan Jung and Matan Goldman

Abstract

Purpose

This paper aims to study the economic effects of Cloud Computing for a sample of Israeli firms. The authors propose a framework that considers how this technology affects firm performance, also introducing the indirect economic effects that take place through cloud-complementary technologies such as Big Data and Machine Learning.

Design/methodology/approach

The model is estimated through structural equation modeling. The data set consists of the microdata of the survey of information and communication technology use and cyber protection in business, conducted in Israel by the Central Bureau of Statistics.

Findings

The results point to Cloud Computing as a crucial technology for increasing firm performance, presenting significant direct and indirect effects, as the use of complementary technologies maximizes its impact. The firms that enjoy the most direct economic gains from Cloud Computing appear to be the smaller ones, although larger enterprises seem more capable of assimilating complementary technologies such as Big Data and Machine Learning. The total effects of cloud on firm performance are quite similar between manufacturing and service firms, although the composition of the effects involved differs.

Originality/value

This paper is one of the very few analyses estimating the impact of Cloud Computing on firm performance based on country microdata and, to the best of the authors’ knowledge, the first one that contemplates the indirect economic effects that take place through cloud-complementary technologies such as Big Data and Machine Learning.

Details

Digital Policy, Regulation and Governance, vol. 26 no. 3
Type: Research Article
ISSN: 2398-5038

Open Access
Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht

Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and of reporting on performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked with governing these technologies must understand where the technology fits into the business process and consider its limitations, to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose to ensuring the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will benefit decision makers and those tasked with governing these technologies by helping them understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations at the accounting-task level.

Details

Journal of Financial Reporting and Accounting, vol. 22 no. 2
Type: Research Article
ISSN: 1985-2517

Article
Publication date: 2 May 2024

Xin Fan, Yongshou Liu, Zongyi Gu and Qin Yao

Abstract

Purpose

Ensuring the safety of structures is important. However, when a structure possesses both an implicit performance function and an extremely small failure probability, traditional methods struggle to conduct a reliability analysis. Therefore, this paper proposes a reliability analysis method aimed at enhancing the efficiency of rare-event analysis, using the widely recognized Relevance Vector Machine (RVM).

Design/methodology/approach

Drawing from the principles of importance sampling (IS), this paper employs Harris Hawks Optimization (HHO) to ascertain the optimal design point. This approach not only guarantees precision but also helps the RVM approximate the limit-state surface. When the U learning function, designed for Kriging, is applied to RVM, it results in sample clustering in the design of experiments (DoE). Therefore, this paper proposes an FU learning function, which is more suitable for RVM.
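To illustrate the learning-function step, the sketch below implements the standard U criterion from active-learning reliability analysis (originally proposed for Kriging): candidates where the surrogate's predicted sign of the performance function is least certain get the smallest U and are added to the DoE first. The authors' FU variant is not specified in the abstract, so only the standard U criterion is shown; `mu` and `sigma` are stand-in predictive means and standard deviations from a surrogate such as an RVM.

```python
# Illustrative sketch of the U learning function for active-learning
# reliability analysis. mu/sigma are stand-ins for a surrogate's
# (e.g. RVM's) predictive mean and standard deviation at candidates.
import numpy as np

def u_learning_function(mu, sigma):
    """U(x) = |mu(x)| / sigma(x): a small U means the sign of the
    performance function (safe vs failed) is most uncertain there."""
    return np.abs(mu) / np.maximum(sigma, 1e-12)

# Candidate pool: predictive mean/std at four candidate points.
mu = np.array([2.1, -0.3, 0.05, 1.7])
sigma = np.array([0.5, 0.4, 0.6, 0.1])

u = u_learning_function(mu, sigma)
best = int(np.argmin(u))  # next point to evaluate and add to the DoE
print(best, u.round(3))
```

The clustering problem the authors report arises because repeatedly minimizing U can keep selecting near-identical points close to the limit state; their FU function is designed to mitigate this for RVM.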

Findings

Three numerical examples and two engineering problems demonstrate the effectiveness of the proposed method.

Originality/value

By employing the HHO algorithm, this paper innovatively applies RVM in IS reliability analysis, proposing a novel method termed RVM-HIS. RVM-HIS demonstrates exceptional computational efficiency, making it eminently suitable for rare-event reliability analysis with implicit performance functions. Moreover, the computational efficiency of RVM-HIS has been significantly enhanced through the improvement of the U learning function.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 21 December 2023

Meena Subedi

Abstract

Purpose

The current study uses an advanced machine learning method and aims to investigate whether auditors perceive financial statements that are principles-based as less risky. More specifically, this study aims to explore the association between principles-based accounting standards and audit pricing and between principles-based accounting standards and the likelihood of receiving a going concern opinion.

Design/methodology/approach

The study uses an advanced machine-learning method to understand the role of principles-based accounting standards in predicting audit fees and going concern opinions. The study also uses multiple regression models of audit fees and of the probability of receiving a going concern opinion. The analyses are complemented by additional tests such as economic significance, firm fixed effects, propensity score matching, entropy balancing, change analysis, yearly regression results and controlling for managerial risk-taking incentives and governance variables.
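A minimal sketch of an audit-fee regression of the kind described above is given below. The data are simulated to reflect the paper's reported direction of the effect (fees fall as reliance on principles-based standards rises); the variable names and coefficients are illustrative, not the study's actual measures or estimates.

```python
# Hypothetical sketch: OLS regression of log audit fees on a
# principles-based-standards score plus a firm-size control.
# All data are simulated; nothing here is from the study itself.
import numpy as np

rng = np.random.default_rng(0)
n = 500
principles_score = rng.uniform(0, 1, n)  # reliance on principles-based standards
log_assets = rng.normal(8, 1, n)         # firm size control

# Simulate the reported finding: a negative score coefficient.
log_fees = 2.0 - 0.4 * principles_score + 0.5 * log_assets + rng.normal(0, 0.1, n)

# Design matrix: intercept, score, size; estimate by least squares.
X = np.column_stack([np.ones(n), principles_score, log_assets])
beta, *_ = np.linalg.lstsq(X, log_fees, rcond=None)
print(beta.round(2))  # [intercept, score coefficient, size coefficient]
```

In the study itself this baseline is augmented with the fixed-effects, matching and balancing tests listed above.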

Findings

The paper provides empirical evidence that auditors charge lower audit fees to clients whose financial statements are more principles-based. The finding suggests that auditors perceive financial statements that are principles-based as less risky. The study also provides evidence that the probability of receiving a going-concern opinion reduces as firms rely more on principles-based standards. The finding further suggests that auditors discount the financial numbers supplied by the managers using rules-based standards. The study also reveals that the degree of reliance by a US firm on principles-based accounting standards has a negative impact on accounting conservatism, the risk of financial statement misstatement, accruals and the difficulty in predicting future earnings. This suggests potential mechanisms through which principles-based accounting standards influence auditors' risk assessments.

Research limitations/implications

The authors recognize the limitation of this study regarding the sample period. Prior studies compare rules vs principles-based standards by focusing on the differences between US generally accepted accounting principles (GAAP) and international financial reporting standards (IFRS) or pre- and post-IFRS adoption, which raises questions about differences in cross-country settings and institutional environment and other confounding factors such as transition costs. This study addresses these issues by comparing rules vs principles-based standards within the US GAAP setting. However, this limits the sample period to the year 2006 because the measure of the relative extent to which a US firm is reliant upon principles-based standards is available until 2006.

Practical implications

The study has major public policy suggestions as it responds to the call by Jay Clayton and Mary Jo White, the former Chairs of the US Securities and Exchange Commission (SEC), to pursue high-quality, globally accepted accounting standards to ensure that investors continue to receive clear and reliable financial information globally. The study also recognizes the notable public policy implications, particularly in light of the current Chair of the International Accounting Standards Board (IASB) Andreas Barckow’s recent public statement, which emphasizes the importance of principles-based standards and their ability to address sustainability concerns, including emerging risks such as climate change.

Originality/value

The study has major public policy suggestions because it demonstrates the value of principles-based standards. The study responds to the call by Jay Clayton and Mary Jo White, the former Chairs of the US SEC, to pursue high-quality, globally accepted accounting standards to ensure that investors continue to receive clear and reliable financial information as business transactions and investor needs continue to evolve globally. The study also recognizes the notable public policy implications, particularly in light of the current Chair of the IASB Andreas Barckow’s recent public statement, which emphasizes the importance of principles-based standards and their ability to address sustainability concerns, including emerging risks like climate change. The study fills the gap in the literature that auditors perceive principles-based financial statements as less risky and further expands the literature by providing empirical evidence that the likelihood of receiving a going concern opinion is increasing in the degree of rules-based standards.

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 3 July 2023

Vishal Ashok Wankhede, Rohit Agrawal, Anil Kumar, Sunil Luthra, Dragan Pamucar and Željko Stević

Abstract

Purpose

Sustainable development goals (SDGs) are gaining significant importance in the current environment. Many businesses are keen to adopt SDGs to gain a competitive edge. There are certain challenges in realigning present working practices for sustainable development, which is a primary concern for society. Various firms are adopting sustainable engineering (SE) practices to tackle such issues. Artificial intelligence (AI) is an emerging technology that can help in the effective adoption of sustainable practices in an uncertain environment. In this regard, there is a need to review current research practices in the field of AI in SE. The purpose of the present study is to comprehensively review research trends in the field of AI in SE.

Design/methodology/approach

This work presents a review of AI applications in SE for decision-making in an uncertain environment. The SCOPUS database was used for shortlisting the articles. Specific keywords on AI, SE and decision-making were given, and a total of 127 articles were shortlisted after applying inclusion and exclusion criteria.

Findings

Bibliometric study and network analyses were performed to analyse current research trends and to examine research collaboration among researchers and countries. Emerging research themes were identified using structural topic modelling (STM) and are discussed further.

Research limitations/implications

Research propositions corresponding to each research theme were presented for future research directions. Finally, the implications of the study were discussed.

Originality/value

This work presents a systematic review of articles in the field of AI applications in SE with the help of bibliometric study, network analyses and STM.

Details

Journal of Global Operations and Strategic Sourcing, vol. 17 no. 2
Type: Research Article
ISSN: 2398-5364

Article
Publication date: 29 April 2024

Amin Mojoodi, Saeed Jalalian and Tafazal Kumail

Abstract

Purpose

This research aims to determine the ideal fare for various aircraft itineraries by modeling prices using a neural network method. Dynamic pricing has been studied from the airline’s point of view, with a focus on demand forecasting and price differentiation. Early demand forecasting on a specific route can assist an airline in strategically planning flights and determining optimal pricing strategies.

Design/methodology/approach

A feedforward neural network was employed in the current study. Two hidden layers, consisting of 18 and 12 neurons, were incorporated to enhance the network’s capabilities; the activation function for these layers was tanh, and the output layer was linear. The neural network inputs were flight path, month of flight, flight date (week/day), flight time, aircraft type (Boeing, Airbus, other) and flight class (economy, business); the output was the ticket price. The dataset comprises 16,585 records, specifically flight data for Iranian airlines for 2022.
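The architecture described above can be sketched directly: two tanh hidden layers of 18 and 12 neurons and a linear output producing the fare. The input width of 6 assumes one encoded value per listed feature (route, month, date, time, aircraft type, class); the authors' actual input encoding is not given in the abstract.

```python
# Sketch of the described feedforward fare-prediction network:
# 6 encoded inputs -> 18 (tanh) -> 12 (tanh) -> 1 (linear output).
# The 6-feature input encoding is an assumption, not the paper's.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(6, 18), nn.Tanh(),   # hidden layer 1: 18 neurons, tanh
    nn.Linear(18, 12), nn.Tanh(),  # hidden layer 2: 12 neurons, tanh
    nn.Linear(12, 1),              # linear output: predicted ticket price
)

x = torch.randn(5, 6)              # batch of five encoded flights
price = model(x)
print(price.shape)
```

Such a network would be trained on the 16,585 flight records with a regression loss (e.g. mean squared error) against the observed ticket prices.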

Findings

The findings indicate that the model achieved a high level of accuracy in approximating the actual data. Additionally, it demonstrated the ability to predict the optimal ticket price for various flight routes with minimal error.

Practical implications

Given the close alignment observed between the actual data and the model’s predictions, airlines can proactively anticipate ticket prices across all routes, optimizing the revenue generated by each flight. The neural network algorithm utilized in this study offers a valuable opportunity for companies to enhance their decision-making processes. By leveraging the algorithm’s features, companies can analyze past data effectively and predict future prices. This enables them to make informed and timely decisions based on reliable information.

Originality/value

The present study represents a pioneering research endeavor that investigates using a neural network algorithm to predict the most suitable pricing for various flight routes. This study aims to provide valuable insights into dynamic pricing for marketing researchers and practitioners.

Details

Journal of Hospitality and Tourism Insights, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9792

Open Access
Article
Publication date: 30 April 2024

Armando Di Meglio, Nicola Massarotti and Perumal Nithiarasu

Abstract

Purpose

In this study, the authors propose a novel digital twinning approach specifically designed for controlling transient thermal systems. The purpose of this study is to harness the combined power of deep learning (DL) and physics-based methods (PBM) to create an active virtual replica of the physical system.

Design/methodology/approach

To achieve this goal, we introduce a deep neural network (DNN) as the digital twin and a Finite Element (FE) model as the physical system. This integrated approach is used to address the challenges of controlling an unsteady heat transfer problem with an integrated feedback loop.

Findings

The results of our study demonstrate the effectiveness of the proposed digital twinning approach in regulating the maximum temperature within the system under varying and unsteady heat flux conditions. The DNN, trained on stationary data, plays a crucial role in determining the heat transfer coefficients necessary to maintain temperatures below a defined threshold value, such as the material’s melting point. The system is successfully controlled in 1D, 2D and 3D case studies. However, careful evaluations should be conducted if such a training approach, based on steady-state data, is applied to completely different transient heat transfer problems.
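The feedback structure described above can be illustrated with a deliberately simple stand-in: a lumped-capacitance thermal model plays the role of the FE "physical system", and a one-line surrogate rule plays the role of the trained DNN twin that selects a heat transfer coefficient to hold the temperature below a threshold. All numbers and the control rule are illustrative assumptions, not the study's actual DNN or FE model.

```python
# Toy stand-in for the digital-twin feedback loop: the twin picks a
# heat transfer coefficient h each step so the system stays below a
# threshold temperature under an unsteady heat flux. Illustrative only.
T, T_amb, T_max = 300.0, 300.0, 350.0   # current, ambient, threshold [K]
C, dt = 1000.0, 1.0                      # lumped heat capacity, time step

def twin_pick_h(q):
    """Surrogate stand-in for the DNN twin: choose h so the steady
    state q = h * (T_max - T_amb) sits exactly at the threshold."""
    return max(q / (T_max - T_amb), 1.0)

temps = []
for step in range(200):
    q = 400.0 if step < 100 else 150.0   # unsteady heat flux [W]
    h = twin_pick_h(q)                   # twin closes the feedback loop
    T += dt * (q - h * (T - T_amb)) / C  # "physical system" update
    temps.append(T)

print(round(max(temps), 1))  # stays below T_max by construction
```

In the study itself, the twin is a DNN trained on stationary FE data and the controlled quantity is the maximum temperature of a 1D, 2D or 3D transient heat transfer problem.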

Originality/value

The present work represents one of the first examples of a comprehensive, data-driven digital twinning approach to transient thermal systems. One of the noteworthy features of this approach is its robustness: by adopting training based on dimensionless data, the approach can seamlessly accommodate changes in thermal capacity and thermal conductivity without the need for retraining.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 8 March 2024

Feng Zhang, Youliang Wei and Tao Feng

Abstract

Purpose

GraphQL is an open API query specification that allows clients to send queries and obtain data flexibly according to their needs. However, a high-complexity GraphQL query may lead to an excessive data volume in the query result, which causes problems such as resource overload of the API server. Therefore, this paper aims to address this issue by predicting the response data volume of a GraphQL query statement.

Design/methodology/approach

This paper proposes a GraphQL response data volume prediction approach based on Code2Vec and AutoML. First, a GraphQL query statement is transformed into a path collection of its abstract syntax tree, following the idea of Code2Vec, and the query is then aggregated into a fixed-length vector. Next, the response data volume is predicted by a fully connected neural network. To further improve prediction accuracy, the prediction results of the embedded features are combined with the field features and summary features of the query statement to predict the final response data volume via the AutoML model.
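A simplified sketch of the first step is shown below: enumerate the root-to-leaf field paths of a query (modelled here as a nested dict rather than a real GraphQL AST) and hash them into a fixed-length vector. The feature-hashing step is a deliberate simplification of Code2Vec-style path embedding; the resulting vector would then feed the fully connected regressor.

```python
# Illustrative sketch: GraphQL query -> field paths -> fixed-length
# vector. A nested dict stands in for the parsed AST, and feature
# hashing stands in for Code2Vec-style path embedding.
import hashlib
import numpy as np

def field_paths(node, prefix=()):
    """Yield every root-to-leaf field path of the (toy) query tree."""
    if not node:
        yield prefix
        return
    for field, child in node.items():
        yield from field_paths(child, prefix + (field,))

def query_vector(query, dim=16):
    """Feature-hash the paths into a fixed-length count vector."""
    v = np.zeros(dim)
    for path in field_paths(query):
        h = int(hashlib.md5("/".join(path).encode()).hexdigest(), 16)
        v[h % dim] += 1
    return v

# Toy stand-in for the parsed query:
# { repository { issues { title author { login } } } }
query = {"repository": {"issues": {"title": {}, "author": {"login": {}}}}}
vec = query_vector(query)
print(int(vec.sum()))  # → 2: two leaf paths were hashed
```

In the paper's pipeline, this embedded representation is further combined with field-level and summary features of the query before the AutoML model produces the final volume prediction.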

Findings

Experiments on two public GraphQL API data sets, GitHub and Yelp, show that the accuracy of the proposed approach is 15.85% and 50.31% higher, respectively, than that of existing GraphQL response volume prediction approaches based on machine learning techniques.

Originality/value

This paper proposes an approach that combines Code2Vec and AutoML for GraphQL query response data volume prediction with higher accuracy.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084
