Search results

1 – 10 of 323
Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
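The tile-based rates reported above reduce to simple confusion-matrix arithmetic once the 60% probability threshold is applied. A minimal sketch (the tile data below are illustrative placeholders, not the study's results):

```python
# Sketch: tile-based error rates at a fixed probability threshold.
# The tile records below are invented for illustration, not the study's data.

def tile_rates(probs_and_truth, threshold=0.60):
    """probs_and_truth: list of (model_probability, is_mound) per tile.
    Returns (false negative rate, false positive rate among tagged tiles)."""
    tp = fp = fn = tn = 0
    for prob, is_mound in probs_and_truth:
        predicted = prob >= threshold
        if predicted and is_mound:
            tp += 1
        elif predicted and not is_mound:
            fp += 1
        elif not predicted and is_mound:
            fn += 1
        else:
            tn += 1
    # False negative rate: share of real mound tiles the model missed.
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    # False positive rate among tagged tiles: share of flagged tiles that are wrong.
    fpr_tagged = fp / (fp + tp) if (fp + tp) else 0.0
    return fnr, fpr_tagged

tiles = [(0.9, True), (0.7, False), (0.8, False), (0.3, True), (0.2, True)]
fnr, fpr = tile_rates(tiles)
```

The key point the abstract makes is visible here: self-reported accuracy can look high when true negatives dominate, while the rates that matter for prospection (missed mounds, spurious flags) remain poor.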

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Over the traditional methods, deep learning approaches have gained popularity in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach that dynamically refines and amplifies model features to further elevate diagnostic capabilities. However, the specific impact of using channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.

Findings

ResNet50-CBAM outperformed existing deep learning classification methods such as the plain convolutional neural network (CNN), achieving 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods using the same dataset.

Practical implications

Since ResNet-CBAM fusion can capture the spatial context while enhancing feature representation, it can be integrated into the brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 17 July 2023

Cecília Lobo, Rui Augusto Costa and Adriana Fumi Chim-Miki

Abstract

Purpose

This paper aims to analyse the effects of events image from host communities’ perspective on the city’s overall image and the intention to recommend the events and the city as a tourism destination.

Design/methodology/approach

The research used a bivariate data analysis based on Spearman’s correlation and regression analysis to determine useful variables to predict the intention to recommend the city as a tourism destination. Data collection was face-to-face and online with a non-probabilistic sample of Viseu city residents, the second largest city in the central region of Portugal.
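The bivariate step described above rests on Spearman's rank correlation, which correlates the ranks of two ordinal variables rather than their raw values. A stdlib sketch with tie-aware ranking (the variable names and Likert responses are illustrative, not the study's data):

```python
# Sketch: Spearman's rank correlation for ordinal survey data.
# Variable names and responses below are invented for illustration.

def ranks(values):
    """Average (tie-aware) ranks, 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

event_image = [4, 5, 3, 2, 5]      # illustrative Likert-scale responses
recommend_city = [4, 5, 2, 2, 4]
rho = spearman(event_image, recommend_city)
```

Rank-based correlation is the usual choice here because Likert responses are ordinal, so only their ordering, not their spacing, can be trusted.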

Findings

The findings have implications for researchers, governments and stakeholders. From the residents’ point of view, there is a high correlation between the overall city image and the intention to recommend it as a tourism destination. Event image and the intention to recommend event participation affect the overall city image. Results point to residents as natural promoters of events and their city if the local events have an appeal that generates their participation. Conclusions indicate that cities need to rethink tourism from the citizen’s perspective, as staycations are a growing option.

Originality/value

Event image, as perceived by host-city residents, is an underdeveloped theme in the literature, although residents’ participation is essential to the success of most events. Local events can promote tourist citizenship and reinforce the positioning of tourism destinations, associating them with an image of desirable places to visit and live.

Details

International Journal of Tourism Cities, vol. 9 no. 4
Type: Research Article
ISSN: 2056-5607

Open Access
Article
Publication date: 1 December 2023

Francois Du Rand, André Francois van der Merwe and Malan van Tonder

Abstract

Purpose

This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without the need for specialised computational hardware. The idea is to develop this system by making use of more traditional machine learning (ML) models instead of using computationally intensive deep learning (DL) models.

Design/methodology/approach

The approach that is used by this study is to use traditional image processing and classification techniques that can be applied to captured layer images to detect and classify defects without the need for DL algorithms.

Findings

The study showed that a defect classification algorithm could be developed by making use of traditional ML models with a high degree of accuracy, and that the images could be processed at higher speeds than typically reported in the literature when making use of DL models.

Originality/value

This paper addresses a need that has been identified for a high-speed defect classification algorithm that can detect and classify defects without the need for specialised hardware that is typically used when making use of DL technologies. This is because when developing closed-loop feedback systems for these additive manufacturing machines, it is important to detect and classify defects without inducing additional delays to the control system.

Details

Rapid Prototyping Journal, vol. 29 no. 11
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 7 April 2023

Virginie Lavoye, Jenni Sipilä, Joel Mero and Anssi Tarkiainen

Abstract

Purpose

Virtual try-on (VTO) technology offers an opportunity for fashion and beauty brands to provide enriched self-explorative experiences. The increased popularity of VTOs makes it urgent to understand the drivers and consequences of the exploration of styles in VTO contexts (herein called self-explorative engagement). Notably, little is known about the antecedents and outcomes of the personalized self-explorative experience central to VTOs. This paper aims to fill this knowledge gap.

Design/methodology/approach

An online quasi-experiment (N = 500) was conducted in the context of fashion and beauty VTOs. Participants were asked to virtually try on sunglasses or lipsticks and subsequently answer a questionnaire measuring the key constructs: self-presence (i.e. physical similarity and identification), self-explorative engagement (i.e. exploration of styles in VTO context), brand cognitive processing and brand attitude. The authors analyze the data with structural equation modeling via maximum likelihood estimation in LISREL.

Findings

The experience of self-presence during consumers’ use of VTOs in augmented reality environments has a positive effect on self-explorative engagement. Furthermore, a mediation analysis reveals that self-explorative engagement improves brand attitude via brand cognitive processing. The results are confirmed for two popular fashion and beauty brands.

Originality/value

Grounded in extended self theory, to the best of the authors’ knowledge, this is the first study to show that a realistic VTO experience encourages self-extension via a process starting from the exploration of styles and results in increased brand cognitive processing and more positive brand attitudes. The exploration of styles is enabled by self-presence.

Details

Journal of Services Marketing, vol. 37 no. 10
Type: Research Article
ISSN: 0887-6045

Open Access
Article
Publication date: 22 December 2023

Khaled Hamad Almaiman, Lawrence Ang and Hume Winzar

Abstract

Purpose

The purpose of this paper is to study the effects of sports sponsorship on brand equity using two managerially related outcomes: price premium and market share.

Design/methodology/approach

This study uses a best–worst discrete choice experiment (BWDCE) and compares the outcome with that of the purchase intention scale, an established probabilistic measure of purchase intention. The total sample consists of 409 fans of three soccer teams sponsored by three different competing brands: Nike, Adidas and Puma.
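Best-worst data are commonly aggregated by counting, for each brand, how often it was chosen as best versus worst. The sketch below shows only that simple count-based scoring, not the discrete choice model (BWDCE) the study actually estimates; the brand names match the study, but the choice records are invented:

```python
# Sketch: simple count-based aggregation of best-worst choice tasks.
# This is the basic B-W score, not the study's discrete choice model;
# the choice records below are invented for illustration.
from collections import Counter

def best_worst_scores(choices, appearances):
    """choices: list of (best_brand, worst_brand) per choice task.
    appearances: number of times each brand appeared across tasks.
    Returns the standardised B-W score: (best - worst) / appearances."""
    best = Counter(b for b, _ in choices)
    worst = Counter(w for _, w in choices)
    return {brand: (best[brand] - worst[brand]) / n
            for brand, n in appearances.items()}

choices = [("Nike", "Puma"), ("Nike", "Adidas"), ("Adidas", "Puma")]
appearances = {"Nike": 3, "Adidas": 3, "Puma": 3}
scores = best_worst_scores(choices, appearances)
```

Scores near +1 indicate a brand almost always picked as best when shown; scores near -1, almost always picked as worst.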

Findings

With sports sponsorship, fans were willing to pay more for the sponsor’s product, with the sponsoring brand obtaining the highest market share. Prominent brands generally performed better than less prominent brands. The best–worst scaling method was also 35% more accurate in predicting brand choice than a purchase intention scale.

Research limitations/implications

Future research could use the same method to study other types of sponsors, such as title sponsors or other product categories.

Practical implications

Sponsorship managers can use this methodology to assess the return on investment in sponsorship engagement.

Originality/value

Prior sponsorship studies on brand equity tend to ignore market share or fans’ willingness to pay a price premium for a sponsor’s goods and services. However, these two measures are crucial in assessing the effectiveness of sponsorship. This study demonstrates how to conduct such an assessment using the BWDCE method. It provides a clearer picture of sponsorship in terms of its economic value, which is more managerially useful.

Details

European Journal of Marketing, vol. 58 no. 13
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 16 April 2024

Liezl Smith and Christiaan Lamprecht

Abstract

Purpose

In a virtual interconnected digital space, the metaverse encompasses various virtual environments where people can interact, including engaging in business activities. Machine learning (ML) is a strategic technology that enables digital transformation to the metaverse, and it is becoming a more prevalent driver of business performance and reporting on performance. However, ML has limitations, and using the technology in business processes, such as accounting, poses a technology governance failure risk. To address this risk, decision makers and those tasked to govern these technologies must understand where the technology fits into the business process and consider its limitations to enable a governed transition to the metaverse. Using selected accounting processes, this study aims to describe the limitations that ML techniques pose to ensure the quality of financial information.

Design/methodology/approach

A grounded theory literature review method, consisting of five iterative stages, was used to identify the accounting tasks that ML could perform in the respective accounting processes, describe the ML techniques that could be applied to each accounting task and identify the limitations associated with the individual techniques.

Findings

This study finds that limitations such as data availability and training time may impact the quality of the financial information and that ML techniques and their limitations must be clearly understood when developing and implementing technology governance measures.

Originality/value

The study contributes to the growing literature on enterprise information and technology management and governance. In this study, the authors integrated current ML knowledge into an accounting context. As accounting is a pervasive aspect of business, the insights from this study will benefit decision makers and those tasked to govern these technologies to understand how some processes are more likely to be affected by certain limitations and how this may impact the accounting objectives. It will also benefit those users hoping to exploit the advantages of ML in their accounting processes while understanding the specific technology limitations on an accounting task level.

Details

Journal of Financial Reporting and Accounting, vol. 22 no. 2
Type: Research Article
ISSN: 1985-2517

Open Access
Article
Publication date: 8 November 2023

Stephen Oduro, Alessandro De Nisco and Luca Petruzzellis

Abstract

Purpose

This study aims to draw on cue utilization and irradiation theories to: determine the extent to which country-of-origin image and its sub-dimensions exert an aggregate and relative influence on consumer brand evaluations; and identify the contextual and methodological factors that account for between-study variance in the focal relationship.

Design/methodology/approach

A random-effects model was used to examine 166 empirical articles encompassing 499,563 observations, and 282 effect sizes from 1984 to 2020 using Comprehensive Meta-Analysis software.
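A random-effects meta-analysis pools study-level effect sizes while allowing for between-study variance. A minimal stdlib sketch of one common estimator, DerSimonian-Laird (the abstract does not specify which estimator the software used, and the effect sizes below are invented):

```python
# Sketch: DerSimonian-Laird random-effects pooling of study effect sizes.
# Assumes this estimator for illustration; the data below are invented.
import math

def random_effects(effects, variances):
    w = [1 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect mean.
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_re = [1 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, tau2, se

effects = [0.30, 0.45, 0.25, 0.50]     # illustrative effect sizes
variances = [0.01, 0.02, 0.015, 0.01]  # illustrative sampling variances
pooled, tau2, se = random_effects(effects, variances)
```

When tau-squared is zero the model collapses to a fixed-effect analysis; a positive tau-squared widens the pooled estimate's standard error, which is why random-effects models suit heterogeneous literatures like this one.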

Findings

Results show that country-of-origin image has a positive, moderate effect on consumer brand evaluations. Moreover, findings reveal that each dimension of country-of-origin image – general country image, general product country image, specific product country image and partitioned country image – significantly influences consumer brand evaluation, but the effect of general product country image is the largest. What’s more, the aggregate impacts of country-of-origin image on consumer brand evaluation – brand commitment, brand-specific associations and general brand impressions – show that the effect on brand commitment is the largest. Finally, findings show that contextual factors (brand source, product sector, culture [individualism vs collectivism], brand origin continents and respondents’ continent) and methodological factors (cues, sampling unit, publication year and sample size) significantly account for between-study variance.

Originality/value

This study provides the first meta-analytic review of the relationship between country-of-origin image and consumer brand evaluation to help clarify mixed findings and balance out the literature, which has only seen quantitative reviews on product evaluation and purchase decisions.

Details

Journal of Product & Brand Management, vol. 33 no. 1
Type: Research Article
ISSN: 1061-0421

Open Access
Article
Publication date: 28 February 2023

Luca Rampini and Fulvio Re Cecconi

Abstract

Purpose

This study aims to introduce a new methodology for generating synthetic images for facility management purposes. The method starts by leveraging the existing 3D open-source BIM models and using them inside a graphic engine to produce a photorealistic representation of indoor spaces enriched with facility-related objects. The virtual environment creates several images by changing lighting conditions, camera poses or material. Moreover, the created images are labeled and ready to be trained in the model.

Design/methodology/approach

This paper focuses on the challenges characterizing object detection models to enrich digital twins with facility management-related information. The automatic detection of small objects, such as sockets, power plugs, etc., requires big, labeled data sets that are costly and time-consuming to create. This study proposes a solution based on existing 3D BIM models to produce quick and automatically labeled synthetic images.

Findings

The paper presents a conceptual model for creating synthetic images to increase the performance in training object detection models for facility management. The results show that virtually generated images, rather than an alternative to real images, are a powerful tool for integrating existing data sets. In other words, while a base of real images is still needed, introducing synthetic images helps augment the model’s performance and robustness in covering different types of objects.

Originality/value

This study introduced the first pipeline for creating synthetic images for facility management. Moreover, this paper validates this pipeline by proposing a case study in which the performance of object detection models trained on real data alone is compared with that of models trained on a combination of real and synthetic images.

Details

Construction Innovation, vol. 24 no. 1
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance in today’s world. Importantly, compression of large sized images is required for everyday tasks; including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use Frei-Chen bases technique and the Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied at the first stage in which the average subspace is applied to each 3 × 3 block. Those blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image. Additionally, the Frei-Chen bases technique enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor. The goal of using RLE is to enhance the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with Frei-Chen bases technique, as described in the proposed algorithm, ensures high quality decompressed images and high compression rate. The results of the proposed algorithms are shown to be comparable in quality and performance with other existing methods.
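The second stage described above, run-length encoding, is lossless by construction: encoding and decoding are exact inverses, which is why it can raise the compression factor without adding distortion. A stdlib sketch of that stage only (the Frei-Chen stage is omitted, and the pixel values are invented):

```python
# Sketch: the lossless run-length encoding (RLE) stage described above,
# applied to a flat pixel sequence. Pixel values are invented; the
# Frei-Chen averaging stage is not shown.

def rle_encode(pixels):
    """Collapse runs of identical values into (value, run_length) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode: RLE introduces no distortion."""
    return [value for value, count in runs for _ in range(count)]

pixels = [255, 255, 255, 0, 0, 128, 128, 128, 128]
encoded = rle_encode(pixels)
assert rle_decode(encoded) == pixels  # lossless round trip
```

RLE pays off here because the preceding Frei-Chen stage replaces high-energy 3 x 3 blocks with a single average value, producing the long runs of repeated values that RLE compresses well.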

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964
