Search results

1 – 10 of 145
Article
Publication date: 28 November 2022

Prateek Kumar Tripathi, Chandra Kant Singh, Rakesh Singh and Arun Kumar Deshmukh

In a volatile agricultural postharvest market, producers require more personalized information about market dynamics for informed decisions on the marketed surplus. However, this…

Abstract

Purpose

In a volatile agricultural postharvest market, producers require more personalized information about market dynamics for informed decisions on the marketed surplus. However, this adaptive strategy fails to benefit them if the computational price-prediction model chosen to disseminate information on the market outlook is inefficient, or if the associated perishability risk and storage costs are not weighed against seemingly favourable market behaviour. Consequently, whether to store or to sell at the time of crop harvest is a perennial dilemma. With the intent of addressing this challenge for agricultural producers, the study focuses on designing an agricultural decision support system (ADSS) to suggest a favourable marketing strategy to crop producers.

Design/methodology/approach

The present study is guided by an eclectic theoretical perspective from the supply chain literature that includes agency theory, transaction cost theory, organizational information processing theory and opportunity cost theory in revenue risk management. The paper models a structured, iterative algorithmic framework that leverages the forecasting capacity of different time series and machine learning models, considering the effect of influencing factors on agricultural price movement for better predictability against market variability or dynamics. It also formulates an integrated risk management framework for effective sales planning decisions that factors in the associated costs of storage, rental and physical loss while the surplus is held for expected returns.
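
To make the store-or-sell calculus concrete, the sketch below computes the net expected gain per unit from holding the surplus, netting storage and rental costs and physical (perishability) loss against the forecast price. It is a minimal Python illustration under stated assumptions; the function name, parameters and figures are invented for exposition and are not the paper's model.

```python
# Minimal sketch of a store-vs-sell rule under a price forecast.
# All parameter names and values are illustrative assumptions.

def expected_storage_gain(p_now, p_forecast, weeks, storage_cost_per_week,
                          rental_cost_per_week, weekly_loss_rate):
    """Net gain per unit from storing for `weeks` weeks instead of selling now.

    Accounts for storage and rental costs and physical loss, mirroring the
    cost factors the framework describes.
    """
    surviving_fraction = (1 - weekly_loss_rate) ** weeks
    revenue_if_stored = p_forecast * surviving_fraction
    holding_costs = (storage_cost_per_week + rental_cost_per_week) * weeks
    return revenue_if_stored - holding_costs - p_now

# Example: sell now at 12.0, or store 4 weeks for a forecast price of 15.0?
gain = expected_storage_gain(p_now=12.0, p_forecast=15.0, weeks=4,
                             storage_cost_per_week=0.20,
                             rental_cost_per_week=0.15,
                             weekly_loss_rate=0.03)
print("store" if gain > 0 else "sell", round(gain, 2))
```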

Findings

The model was demonstrated empirically through simulation on the dynamic markets of tomatoes, onions and potatoes in a north Indian region. The study results endorse that farmer-centric post-harvest information intelligence assists crop producers in the strategic sales planning of their produce, and strongly support the view that the effectiveness of decision making is contingent on selecting the best predictive model for every future market event.

Practical implications

As a policy implication, the proposed ADSS addresses the pressing need for a robust marketing support system for the socio-economic welfare of farming communities grappling with distress sales and low remunerative returns.

Originality/value

Based on the extant literature reviewed, no existing study pays personalized attention to agricultural producers, enabling them to make profitable sales decisions in a volatile post-harvest market. The present research attempts to fill that gap by addressing crop producers' ubiquitous dilemma of whether to sell or store at the time of harvest. Moreover, an eclectic, iterative style of predictive modelling has seen limited application in the agricultural supply chain literature, yet it proves a more efficient practice in a dynamic market outlook.

Article
Publication date: 29 December 2023

Ying Hsun Lai

The study integrated understanding by design-Internet of Things (UbD-IoT) education with design thinking and computational thinking to plan and design an IoT course. Cross-domain…

Abstract

Purpose

The study integrated understanding by design-Internet of Things (UbD-IoT) education with design thinking and computational thinking to plan and design an IoT course. Cross-domain application examples were employed to train students in problem-understanding, deep thinking and logical design for IoT applications.

Design/methodology/approach

In this study, the UbD model was integrated with design thinking and computational thinking in the planning and design of an IoT course. Cross-domain application examples were used to train students to understand a problem through deep thinking and to help them think and design logically for an IoT application.

Findings

The UbD-IoT learning design greatly decreased students' overall cognitive load. UbD-IoT learning has a significant impact on the performance of computational thinking in problem-solving and problem-understanding. Its impact on students' logical thinking and program-learning cognition remains to be verified.

Originality/value

The results of this study have shown that the UbD model is effective in reducing the cognitive load of a learning course and also strengthens T-competencies in the lateral skills of computational thinking, critical problem-solving, logical thinking and creative thinking.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 10 October 2023

Visar Hoxha

The purpose of the study is to examine the efficiency of linear, nonlinear and artificial neural networks (ANNs) in predicting property prices.

Abstract

Purpose

The purpose of the study is to examine the efficiency of linear, nonlinear and artificial neural networks (ANNs) in predicting property prices.

Design/methodology/approach

The present study uses a dataset of 1,468 real estate transactions from 2020 to 2022, obtained from the Department of Property Taxes of the Republic of Kosovo. Beginning with a fundamental linear regression model, the study tackles the question of overlooked nonlinearity, employing a strategy similar to that of Peterson and Flanagan (2009) and McCluskey et al. (2012), whereby the ANN's predictions are incorporated as an additional regressor within the ordinary least squares (OLS) model.
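
The two-stage idea, an ANN whose predictions enter the OLS model as an extra regressor, can be sketched as follows. This is a hedged illustration with synthetic data and assumed hyperparameters, not the study's actual specification.

```python
# Sketch: fit an ANN, then include its predictions as an extra OLS regressor.
# Feature names, hyperparameters and the synthetic data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1468, 5))   # stand-ins for hedonic attributes (area, rooms, ...)
y = X @ np.array([3.0, 1.5, -2.0, 0.5, 1.0]) + rng.normal(size=1468)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

# Augment the design matrix with the ANN's prediction as an extra column.
X_tr_aug = np.column_stack([X_tr, ann.predict(X_tr)])
X_te_aug = np.column_stack([X_te, ann.predict(X_te)])

ols = LinearRegression().fit(X_tr_aug, y_tr)
print("R^2 with ANN regressor:", ols.score(X_te_aug, y_te))
```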

Findings

The research findings underscore the superior fit of semi-log and double-log models over the OLS model, while the ANN model shows only moderate performance. This diverges notably from the prevailing belief in ANN's superior predictive power, shedding light on the potential overestimation of ANN's efficacy.

Practical implications

The study accentuates the importance of embracing diverse models in property price prediction, debunking the notion of the ubiquitous applicability of ANN models. The research outcomes carry substantial ramifications for both scholars and professionals engaged in property valuation.

Originality/value

Distinctively, this research pioneers the comparative analysis of diverse models, including ANN, in the setting of a developing country's capital, hence providing a fresh perspective on their effectiveness in property price prediction.

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a…

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using long short-term memory (LSTM) networks based on a recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of the recursive-long short-term memory-encoder-decoder (R-LSTM-ED), recursive-convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and the coefficient of determination (R²) as performance metrics. The impact of window size, sampling period and noise within the training data on the performance of the model is investigated.
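
A minimal sketch of an LSTM encoder-decoder forecaster in the spirit of R-LSTM-ED is shown below, assuming a Keras implementation; the layer sizes, feature count and dummy data are placeholders rather than the paper's configuration, though the window and horizon mirror the 2 s sampling and 60 s window the findings report.

```python
# Minimal LSTM encoder-decoder sketch (R-LSTM-ED spirit). A 60 s window
# sampled at 2 s gives 30 input steps; a 10 s horizon gives 5 output steps.
import numpy as np
from tensorflow.keras import layers, models

window, horizon, n_features = 30, 5, 4

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(64),                          # encoder: summarise the window
    layers.RepeatVector(horizon),             # hand the context to each step
    layers.LSTM(64, return_sequences=True),   # decoder: unroll the horizon
    layers.TimeDistributed(layers.Dense(1)),  # one temperature per step
])
model.compile(optimizer="adam", loss="mae")

# Dummy arrays with the expected shapes, just to demonstrate the fit call.
X = np.random.rand(256, window, n_features).astype("float32")
y = np.random.rand(256, horizon, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```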

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and the R-CNN-LSTM model by 4%, 11% and 12.3%, in the forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period is found to be 2 s and the optimum window size 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate recursive prediction models are trained and tested using real data centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 7 November 2023

Christian Nnaemeka Egwim, Hafiz Alaka, Youlu Pan, Habeeb Balogun, Saheed Ajayi, Abdul Hye and Oluwapelumi Oluwaseun Egunjobi

The study aims to develop a multilayer, highly effective ensemble of ensembles predictive model (stacking ensemble) using several hyperparameter-optimized ensemble machine learning…

Abstract

Purpose

The study aims to develop a multilayer, highly effective ensemble of ensembles predictive model (stacking ensemble) using several hyperparameter-optimized ensemble machine learning (ML) methods (bagging and boosting ensembles) trained with high-volume data points retrieved from Internet of Things (IoT) emission sensors, time-corresponding meteorology and traffic data.

Design/methodology/approach

For a start, the study tested the big data hypothesis by developing sample ensemble predictive models on different data sample sizes and comparing their results. Second, it developed a standalone model and several bagging and boosting ensemble models and compared their results. Finally, it used the best-performing bagging and boosting predictive models as input estimators to develop a novel multilayer, highly effective stacking ensemble predictive model.
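
As a rough illustration of this layered design, the sketch below stacks a bagging and a boosting regressor under a linear meta-learner using scikit-learn. The estimators, hyperparameters and synthetic data are assumptions for exposition, not the study's optimized setup.

```python
# Sketch of the "ensemble of ensembles" idea: tuned bagging and boosting
# models feed a stacking regressor. Hyperparameters are placeholders.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Stand-in for sensor, meteorology and traffic features.
X, y = make_regression(n_samples=5000, n_features=12, noise=5.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("bagging", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("boosting", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),   # meta-learner on the base predictions
)
stack.fit(X_tr, y_tr)
print("Stacked R^2:", stack.score(X_te, y_te))
```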

Findings

Results showed data size to be one of the main determinants of ensemble ML predictive power. Second, they showed that, compared with using a single algorithm, the cumulative result from ensemble ML algorithms is consistently better in terms of prediction accuracy. Finally, they showed the stacking ensemble to be a better model for predicting PM2.5 concentration levels than the bagging and boosting ensemble models.

Research limitations/implications

A limitation of this study is the trade-off between performance of this novel model and the computational time required to train it. Whether this gap can be closed remains an open research question. As a result, future research should attempt to close this gap. Also, future studies can integrate this novel model to a personal air quality messaging system to inform public of pollution levels and improve public access to air quality forecast.

Practical implications

The outcome of this study will help the public proactively identify highly polluted areas, potentially reducing pollution-associated or pollution-triggered deaths, complications and transmission of COVID-19 and other lung diseases by encouraging avoidance behaviour, and will support informed lockdown decisions by government bodies when integrated into an air pollution monitoring system.

Originality/value

This study fills a gap in the literature by providing a justification for selecting appropriate ensemble ML algorithms for PM2.5 concentration level predictive modeling. Second, it contributes to the big data hypothesis, which suggests that data size is one of the most important factors in ML predictive capability. Third, it supports the premise that, when using ensemble ML algorithms, the cumulative output is consistently better in terms of prediction accuracy than that of a single algorithm. Finally, it develops a novel multilayer, high-performance, hyperparameter-optimized ensemble of ensembles predictive model that can accurately predict PM2.5 concentration levels with improved model interpretability and enhanced generalizability, and it provides a novel databank of historic pollution data from IoT emission sensors that can be purchased for research, consultancy and policymaking.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite…

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
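
A low-touch transfer-learning pipeline of the kind described can be sketched as follows, assuming a Keras workflow; the ImageNet backbone (ResNet50), 150 × 150 tile size and head layers are illustrative choices, not necessarily those used in the study.

```python
# Sketch of a low-touch transfer-learning tile classifier. Backbone,
# tile size and head layers are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(150, 150, 3))
base.trainable = False   # keep the pre-trained features frozen ("low-touch")

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(mound) for each tile
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# MOUND/NOT MOUND cutouts would be loaded from labelled folders, e.g. via
# tf.keras.utils.image_dataset_from_directory("cutouts/",
#                                             image_size=(150, 150)),
# and per-tile predictions thresholded (the study reports 60%) to flag
# candidate tiles for field validation.
```

Thresholded per-tile predictions of this kind are what the findings below evaluate against field data.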

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and with the CNN assessing fixed-size tiles, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles and true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 7 August 2023

Tiziano Volpentesta, Esli Spahiu and Pietro De Giovanni

Digital transformation (DT) is a major challenge for incumbent organisations, as research on this phenomenon has revealed a high failure rate. Given this consideration, this paper…

Abstract

Purpose

Digital transformation (DT) is a major challenge for incumbent organisations, as research on this phenomenon has revealed a high failure rate. Given this consideration, this paper reviews the literature on DT in incumbent organisations to identify the main themes and research directions to be undertaken.

Design/methodology/approach

The authors adopt a systematic literature review (SLR) and computational literature review (CLR) employing a machine learning algorithm for topic modelling (latent Dirichlet allocation, LDA) to surface the themes discussed in 103 peer-reviewed studies published between 2010 and 2022 in a multidisciplinary article sample.
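
The CLR step can be illustrated with a short LDA sketch in scikit-learn. The placeholder corpus and preprocessing choices are assumptions; the five-topic setting mirrors the five themes reported in the findings.

```python
# Sketch of LDA topic modelling over abstracts with scikit-learn.
# The two-document corpus is a placeholder for the study's 103 abstracts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "digital transformation in healthcare organisations",
    "organisational renewal change and project management",
]

vec = CountVectorizer(stop_words="english", min_df=1)
dtm = vec.fit_transform(docs)   # document-term matrix

lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(dtm)

# Print the top terms per topic to label the emerging themes.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"Topic {k}:", ", ".join(top))
```

Each document's dominant topic then supports the thematic grouping discussed in the findings.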

Findings

The authors identify and discuss the five main themes emerging from the studies, offering the state of the art of the literature on DT in established firms. The authors find that the most discussed topics revolve around the DT of healthcare, the process of renewal and change, project management, changes in value, performance and capabilities, and the consequences of DT for products. Accordingly, the authors identify topics overlooked by the literature that future studies could tackle, concerning sustainability and the contextualisation of the DT phenomenon.

Practical implications

The authors further propose managerial insights that equip managers with a revolutionary mindset that is not constraining but, rather, integration-seeking. DT is not only about technology (Tabrizi et al., 2019). Successful DT initiatives require managerial capabilities that foster a sustainable departure from the current organising logic (Markus, 2004). This study pinpoints and prioritises the role that paradox-informed thinking can have in sustaining an effective digital mindset (Eden et al., 2018) that allows for the building of momentum in DT initiatives and facilitates the renewal process. Indeed, managers lagging behind in DT could shift from an "either-or" solutions mindset, where one pole is preferred over the other (e.g. digital or physical), to embracing a "both-and-with" thinking that balances between poles (e.g. digital and physical) to successfully fuse the digital and the legacy (Lewis and Smith, 2022b; Smith, Lewis and Edmondson, 2022), enact the renewal, and build and maintain momentum for DTs. The outcomes of adopting a paradox mindset in managerial practice are enabling learning and creativity, fostering flexibility and resilience and, finally, unleashing human potential (Lewis and Smith, 2014).

Social implications

The authors propose insights that will equip managers with a mindset allowing DT to fail less often than currently reported rates suggest; such failures may imply organisational collapse, financial bankruptcy and social crisis.

Originality/value

The authors offer a multidisciplinary review of DT that complements existing reviews through its focus on the organisational context of established organisations. Moreover, the authors advance paradoxical thinking as a novel lens through which to study DT in incumbent organisations by proposing an array of potential research questions and new avenues for research. Finally, the authors offer insights to help managers thrive in DT by adopting a paradoxical mindset.

Details

European Journal of Innovation Management, vol. 26 no. 7
Type: Research Article
ISSN: 1460-1060

Article
Publication date: 26 September 2023

Deepak Kumar, Yongxin Liu, Houbing Song and Sirish Namilae

The purpose of this study is to develop a deep learning framework for additive manufacturing (AM) that can detect different defect types without being trained on specific defect…

Abstract

Purpose

The purpose of this study is to develop a deep learning framework for additive manufacturing (AM) that can detect different defect types without being trained on specific defect data sets and can be applied for real-time process control.

Design/methodology/approach

This study develops an explainable artificial intelligence (AI) framework: a zero-bias deep neural network (DNN) model for real-time defect detection during the AM process. In this method, the last dense layer of the DNN is replaced by two consecutive parts: a regular dense layer (L1) for dimensional reduction, and a similarity matching layer (L2) for equal-weight, non-biased cosine similarity matching. Grayscale images of 3D-printed samples acquired during printing were used as the input to the zero-bias DNN.
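
A minimal sketch of the zero-bias head, assuming a Keras implementation, is given below: a dense layer (L1) reduces dimensionality and a custom bias-free layer (L2) scores cosine similarity against equally weighted class templates. The convolutional backbone, layer sizes and class count are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a zero-bias head: dense reduction (L1) + bias-free cosine
# similarity matching (L2). Backbone and sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

class CosineMatching(layers.Layer):
    """Bias-free layer: cosine similarity between features and class weights."""
    def __init__(self, n_classes):
        super().__init__()
        self.n_classes = n_classes

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.n_classes),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        x = tf.math.l2_normalize(x, axis=-1)      # unit-length features
        w = tf.math.l2_normalize(self.w, axis=0)  # unit-length class templates
        return tf.matmul(x, w)                    # equal-weight cosine scores

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),            # grayscale frames
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
    layers.Dense(32, activation="relu"),          # L1: dimensional reduction
    CosineMatching(n_classes=4),                  # L2: similarity matching
])
```

Under such a head, one way to flag an anomalous frame is a low maximum similarity across the known classes, which is how a matching layer can support detection of defect types absent from training.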

Findings

This study demonstrates that the approach successfully detects multiple defect types, such as cracks, stringing and warping, with an accuracy of 99.5% and without any prior training on defective data sets.

Practical implications

Once the model is set up, the computational time for anomaly detection is shorter than the image acquisition time, indicating the potential for real-time process control. It can also be used to minimize manual processing in AI-enabled AM.

Originality/value

To the best of the authors’ knowledge, this is the first study to use zero-bias DNN, an explainable AI approach for defect detection in AM.

Details

Rapid Prototyping Journal, vol. 30 no. 1
Type: Research Article
ISSN: 1355-2546

Book part
Publication date: 11 December 2023

Hamid Doost Mohammadian

Based on the 5th wave/tomorrow age theory, we are living in a world that needs to change. Rapid urbanization causes global challenges such as economic problems and…

Abstract

Based on the 5th wave/tomorrow age theory, we are living in a world that needs to change. Rapid urbanization causes global challenges such as economic problems and recessions, environmental challenges, climate change, social instability, disease, biological attacks and crises caused by technological domination. These challenges threaten the world, humanity and human beings, so it is vital to tackle them in order to sustain the world and improve the quality of livability and of life. Generally, modern Blue-Green urban areas and smart cities with a high quality of livability and life are proposed to deal with urbanization challenges, maintain the world and improve the quality of human life. Drawing on Prof. Doost's 5th wave theory, on related theories, concepts and models such as the Doost Risk Mitigation Method (DRMM), and on his practical experience in sustainability (cooperating with the Danish company Sustainable Platforms; serving as an academic leader on the IoE/EQ EU Erasmus Plus project in Germany during 2017–2020; cooperating with a former mayor of Copenhagen; advising the German MV State Minister of Energy, Digitalization and Infrastructure on cooperation with Iran in 2016; more than 15 years of international lecturing and research on risk and risk management in mobility at universities such as the Technical University of Berlin (TU Berlin, EUREF Campus, Sustainable Mobility Management and Sustainability Building); and an honorary doctorate in sustainable development management), a practical model of risk management in mobility is given to provide a comprehensive global Blue-Green clean sustainable urban mobility risk mitigation strategic plan. In this chapter, therefore, the impact of risk management on mobility is explored as a means of providing a sustainable global urban mobility plan for creating modern Blue-Green sustainable urban areas and future smart cities through the 5th wave theory. Fundamentally, the main goal of the research is an applied study of mobility risk mitigation, used as a key to a comprehensive global urban mobility risk mitigation plan oriented toward Blue-Green sustainable clean mobility technologies and modern sustainable smart cities through the tomorrow age theory, in order to create livable urban areas with a high quality of livability and life. In addition, risks in mobility are measured through the DRMM to analyze them, mitigate them and improve mobility projects on the way to sustainable mobility and high sustainability in future smart cities.

Article
Publication date: 14 August 2023

Usman Tariq, Ranjit Joy, Sung-Heng Wu, Muhammad Arif Mahmood, Asad Waqar Malik and Frank Liou

This study aims to discuss the state-of-the-art digital factory (DF) development combining digital twins (DTs), sensing devices, laser additive manufacturing (LAM) and subtractive…

Abstract

Purpose

This study aims to discuss the state-of-the-art digital factory (DF) development combining digital twins (DTs), sensing devices, laser additive manufacturing (LAM) and subtractive manufacturing (SM) processes. The current shortcomings and outlook of the DF are also highlighted. A DF is a state-of-the-art manufacturing facility that uses innovative technologies, including automation, artificial intelligence (AI), the Internet of Things, additive manufacturing (AM), SM, hybrid manufacturing (HM), sensors for real-time feedback and control, and a DT, to streamline and improve manufacturing operations.

Design/methodology/approach

This study presents a novel perspective on DF development using laser-based AM, SM, sensors and DTs, compiling recent developments in each. The study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, discussing the literature on DTs for laser-based AM, particularly laser powder bed fusion and directed energy deposition, in-situ monitoring and control equipment, SM and HM. The principal goal of this study is to highlight the aspects of the DF and its development using existing techniques.

Findings

A comprehensive literature review finds a substantial lack of complete techniques that incorporate cyber-physical systems, advanced data analytics, AI, standardized interoperability, human–machine cooperation and scalable adaptability. The suggested DF effectively fills this void by integrating cyber-physical system components, including DT, AM, SM and sensors into the manufacturing process. Using sophisticated data analytics and AI algorithms, the DF facilitates real-time data analysis, predictive maintenance, quality control and optimal resource allocation. In addition, the suggested DF ensures interoperability between diverse devices and systems by emphasizing standardized communication protocols and interfaces. The modular and adaptable architecture of the DF enables scalability and adaptation, allowing for rapid reaction to market conditions.

Originality/value

Based on the need for a DF, this review presents a comprehensive approach to DF development using DTs, sensing devices, LAM and SM processes and summarizes current progress in this domain.
