Search results
1 – 10 of 47

Biyanka Ekanayake, Alireza Ahmadian Fard Fini, Johnny Kwok Wai Wong and Peter Smith
Abstract
Purpose
Recognising the as-built state of construction elements is crucial for construction progress monitoring. Construction scholars have used computer vision-based algorithms to automate this process. Robust object recognition from indoor site images has been inhibited by technical challenges related to indoor objects, lighting conditions and camera positioning. Compared with traditional machine learning algorithms, one-stage detector deep learning (DL) algorithms can prioritise inference speed and enable real-time, accurate object detection and classification. This study aims to present a DL-based approach to facilitate the as-built state recognition of indoor construction works.
Design/methodology/approach
The one-stage DL-based approach was built upon the YOLO version 4 (YOLOv4) algorithm using transfer learning, with a few hyperparameters customised, and trained in the Google Colab virtual machine. The process of framing, insulation and drywall installation of indoor partitions was selected as the as-built scenario. For training, images captured from two indoor sites were supplemented with publicly available online images.
Findings
The DL model's best-trained weights achieved a mean average precision of 92% and an average loss of 0.83. Compared to previous studies, the automation level of this study is high owing to the use of fixed time-lapse cameras for data collection and zero manual intervention in the pre-processing algorithms that enhance the visual quality of indoor images.
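The mean average precision reported above is the mean of per-class average precisions, each taken as the area under a precision-recall curve. A minimal sketch of that computation (the precision-recall points and the three partition-state classes are hypothetical, not the study's data):

```python
def average_precision(recalls, precisions):
    """Area under a precision-recall curve by rectangular
    integration over increasing recall."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_average_precision(per_class_pr):
    """mAP: the mean of per-class average precisions."""
    aps = [average_precision(r, p) for r, p in per_class_pr]
    return sum(aps) / len(aps)

# Hypothetical PR points for three partition states
# (framing, insulation, drywall)
per_class = [
    ([0.5, 1.0], [1.0, 0.8]),
    ([0.4, 0.9], [0.9, 0.7]),
    ([0.6, 1.0], [1.0, 0.9]),
]
print(round(mean_average_precision(per_class), 3))  # 0.857
```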
Originality/value
This study extends the application of DL models to recognising the as-built state of indoor construction works once training images are provided. It also presents a workflow for training DL models on a virtual machine platform, reducing the computational complexity associated with DL models.
Abstract
Purpose
This study aims to determine if automated coding with regular expression is a strong methodology to identify themes in virtual reference chat.
Design/methodology/approach
The authors used a combination of manual and automated coding of chat transcripts for a period of two years to identify the categories of questions related to the new library system. This methodology enabled them to determine if regular expression accurately identified the topics of chat transcripts.
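Automated coding of this kind can be sketched with a small set of theme patterns; the categories and regular expressions below are illustrative assumptions, not the authors' actual codebook:

```python
import re

# Hypothetical theme patterns for virtual reference chat questions
THEME_PATTERNS = {
    "login": re.compile(r"\b(log\s?in|sign\s?in|password)\b", re.I),
    "renewals": re.compile(r"\brenew(al)?s?\b", re.I),
    "interlibrary_loan": re.compile(r"\b(ill|interlibrary loan)\b", re.I),
}

def code_transcript(text):
    """Return every theme whose pattern matches the chat transcript."""
    return [theme for theme, pattern in THEME_PATTERNS.items()
            if pattern.search(text)]

print(code_transcript("I can't log in to renew my books"))
# ['login', 'renewals']
```

Manually coding a sample of transcripts and comparing against such automated output is one way to check whether the patterns capture the intended themes.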
Findings
The authors found that regular expressions are an appropriate method for identifying themes in virtual reference interactions. This method enabled them to establish that patrons asked questions related to system changes in the weeks following their implementation.
Originality/value
This study highlights a new methodology for transcript analysis.
Abstract
Purpose
Scientific impact is traditionally assessed with citation-based metrics. Recently, altmetric indices have been introduced to measure scientific impact both within academia and among the general public. However, little research has investigated the association between the linguistic features of research article titles and the online attention they receive. To address this issue, the present study examined the relationship between a series of title features and altmetric attention scores.
Design/methodology/approach
The data included 8,658 titles of Science articles. The authors extracted six features from the title corpus (i.e. mean word length, lexical sophistication, lexical density, title length, syntactic dependency length and sentiment score). The authors performed Spearman's rank correlation analyses to assess the associations between these features and online impact. The authors then conducted a stepwise backward multiple regression to identify predictors of the articles' online impact.
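Spearman's rho is the Pearson correlation of the ranked data. A self-contained sketch, using hypothetical title lengths and altmetric scores rather than the study's corpus:

```python
def ranks(xs):
    """1-based ranks, averaging ranks across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied 1-based positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical title lengths (in words) and altmetric attention scores
title_len = [7, 12, 9, 15, 5]
altmetric = [30, 55, 40, 80, 20]
print(round(spearman(title_len, altmetric), 3))  # 1.0
```

In practice a library routine such as `scipy.stats.spearmanr` would be used; the hand-rolled version just makes the rank-then-correlate logic explicit.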
Findings
The correlation analyses revealed weak but significant correlations between all six title features and the altmetric attention scores. The regression analysis showed that four linguistic features of titles (mean word length, lexical sophistication, title length and sentiment score) have modest predictive effects on the online impact of research articles.
Originality/value
In the internet era with the widespread use of social media and online platforms, it is becoming increasingly important for researchers to adapt to the changing context of research evaluation. This study identifies several linguistic features that deserve scholars’ attention in the writing of article titles. It also has practical implications for academic administrators and pedagogical implications for instructors of academic writing courses.
Diana Gavilan and Omar Adeeb A. Al-shboul
Abstract
Purpose
This paper aims to identify potential avenues for innovation in urban hotel management by analyzing self-reported data from visitors regarding their experience with interior design.
Design/methodology/approach
A qualitative exploratory computer-assisted content analysis was conducted to identify the impact of interior design on the guest experience. Leximancer 4.0 software analyzed 2,562 reviews from urban hotels collected through a reservation website.
Findings
The findings reveal that data reported by guests on interior design play a crucial role in shaping guest experiences, both positively and negatively. The esthetic appeal of interior design is shown to impact resting and comfort, affecting overall performance significantly. The study also highlights how different star categories of hotels and variations in visitors' purposes for their stay lead to distinct guest experiences and different opportunities to innovate.
Research limitations/implications
The study’s results provide evidence for researchers and practitioners of the potential of the guest-reported interior design experience as a valuable source for fostering innovation. In addition, in the hotel industry, innovation may eventually be attained through interior design renovation.
Practical implications
Self-reported data from guests on interior design is an effective tool for innovation. Making interior design a priority throughout the establishment and ongoing management of a hotel is crucial. By integrating interior design, not only can potential negative experiences be avoided, but greater guest satisfaction can also be achieved during their stay, promoting memorable experiences that align with the hotel category and customer expectations.
Social implications
This research emphasizes the importance of interior design as a catalyst for innovation and improved social experiences in the hospitality industry. Innovation in interior design can improve hotel performance in several dimensions, including attracting more visitors to the hotel and the area, increasing tourism revenue for local businesses and contributing to the broader societal goal of reducing environmental impact and promoting sustainability.
Originality/value
This article adopts a guest-centered methodology to provide valuable insights for hotel managers to leverage interior design as a tool for innovation in the hospitality industry after showing that interior design enhances guests' experiences, comfort and hotel differentiation.
Florian Rupp, Benjamin Schnabel and Kai Eckert
Abstract
Purpose
The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.
Design/methodology/approach
In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) providing provenance information, (2) maintaining backwards compatibility for existing models and (3) reducing the complexity of a data model. The authors present two scenarios in which they use the meta-level to extend a data model with meta-information.
Findings
The authors present three abstract patterns for actively using the meta-level in data modeling. The authors showcase the implementation of the meta-level through two scenarios from their research project: (1) they introduce a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information; (2) they demonstrate how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
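In RDF-star Turtle, a quoted triple such as `<< ex:painting1 ex:creator ex:personA >> ex:comment "..." .` attaches meta-information to an individual statement. A rough emulation of that triple-annotation workflow in plain Python, where the resources and predicates are hypothetical rather than taken from the authors' data model:

```python
# Emulate quoted triples: meta-information keyed by the statement itself
triple = ("ex:painting1", "ex:creator", "ex:personA")

annotations = {
    triple: [
        {"predicate": "ex:comment",
         "value": "Attribution disputed; see catalogue."},
    ]
}

def annotate(statement, predicate, value):
    """Attach one more meta-statement to an individual triple."""
    annotations.setdefault(statement, []).append(
        {"predicate": predicate, "value": value})

annotate(triple, "ex:reportedBy", "ex:user42")
print(len(annotations[triple]))  # 2
```

The point of the pattern is that annotations address a single statement, not the whole resource, which is what RDF-star (unlike plain RDF without reification) makes directly expressible.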
Practical implications
Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.
Originality/value
With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.
Terry Lease, Marni Goldenberg, Matt Haberland and Sam Wallan
Abstract
Purpose
The paper has a twofold purpose: (1) to test the application of means-end theory to providers of hospitality goods and services, and (2) to explore this question in the context of winery tasting rooms when they had a unique opportunity to restructure their hospitality experience due to government restrictions in response to COVID-19.
Design/methodology/approach
A qualitative approach was adopted, and a convenience sample was used to conduct semi-structured laddering interviews. Forty interview transcripts were coded as means-end ladders, which were analyzed using a custom computer program to develop the implication matrix and the hierarchical value map.
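An implication matrix in means-end analysis counts how often one ladder element leads directly to another across coded interviews. A minimal sketch with invented attribute-consequence-value ladders (not the study's forty transcripts):

```python
from collections import Counter

# Invented coded ladders (attribute -> consequence -> value chains)
ladders = [
    ["staff", "guest experience", "sales"],
    ["atmosphere", "guest experience", "sales"],
    ["staff", "guest experience", "sales"],
]

def implication_matrix(coded_ladders):
    """Count direct links between adjacent elements of each ladder."""
    links = Counter()
    for ladder in coded_ladders:
        for a, b in zip(ladder, ladder[1:]):
            links[(a, b)] += 1
    return links

matrix = implication_matrix(ladders)
print(matrix[("guest experience", "sales")])  # 3
```

The hierarchical value map is then typically drawn by keeping only links whose counts exceed a chosen cutoff.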
Findings
This paper demonstrates that means-end is a useful approach to investigate the values and behaviors of the producer, specifically hospitality hosts. It finds that the principal goal of tasting rooms is to generate sales, and offering a compelling guest experience is the characteristic that contributes the most to achieving that goal. The staff and the atmosphere created for the guests are the two factors with the greatest influence on the guest experience.
Originality/value
This is the first paper to use means-end theory to study the hospitality host, or the producer of goods and services in general, and the first to study winery hospitality primarily through the lens of means-end theory. The study also helps fill a gap in research on tasting room sales focused on the winery’s goals.
Amina Dinari, Tarek Benameur and Fuad Khoshnaw
Abstract
Purpose
The research aims to investigate the impact of thermo-mechanical aging on styrene-butadiene rubber (SBR) under cyclic loading. By conducting experimental analyses and developing a 3D finite element analysis (FEA) model, it seeks to understand the chemical and physical changes that occur during aging. This research provides insights into the nonlinear mechanical behavior, stress softening and microstructural alterations of SBR compounds, improving material performance and guiding future strategies.
Design/methodology/approach
This study combines experimental analyses, including cyclic tensile loading, attenuated total reflection (ATR) spectroscopy and energy-dispersive X-ray spectroscopy (EDS) line scans, to investigate the effects of thermo-mechanical aging (TMA) on carbon black (CB)-reinforced styrene-butadiene rubber (SBR). It employs a 3D FEA model using the Abaqus/Implicit code to comprehend the nonlinear behavior and stress-softening response, offering a holistic understanding of aging processes and mechanical behavior under cyclic loading.
Findings
This study reveals significant insights into SBR behavior during thermo-mechanical aging. Findings include surface roughness variations, chemical alterations and microstructural changes. Notably, a partial recovery of stiffness was observed as a function of CB volume fraction. The developed 3D FEA model accurately depicts nonlinear behavior, stress softening and strain fields around CB particles in unstressed states, predicting hysteresis and energy dissipation in aged SBRs.
Originality/value
This research offers novel insights by comprehensively investigating the impact of thermo-mechanical aging on CB-reinforced-SBR. The fusion of experimental techniques with FEA simulations reveals time-dependent mechanical behavior and microstructural changes in SBR materials. The model serves as a valuable tool for predicting material responses under various conditions, advancing the design and engineering of SBR-based products across industries.
Ranjit Roy Ghatak and Jose Arturo Garza-Reyes
Abstract
Purpose
The research explores the shift to Quality 4.0, examining the move towards a data-focussed transformation within organizational frameworks. This transition is characterized by incorporating Industry 4.0 technological innovations into existing quality management frameworks, marking a significant evolution in quality control systems. Despite the evident advantages, practical deployment in the Indian manufacturing sector encounters various obstacles. This research is dedicated to a thorough examination of these impediments. It is structured around a set of pivotal research questions: First, it seeks to identify the key barriers that impede the adoption of Quality 4.0. Second, it aims to elucidate these barriers' interrelations and mutual dependencies. Third, it prioritizes these barriers in terms of their significance to the adoption process. Finally, it contemplates the ramifications of these priorities for the strategic advancement of manufacturing practices and the development of informed policies. By answering these questions, the research provides a detailed understanding of the challenges faced and offers actionable insights for practitioners and policymakers implementing Quality 4.0 in the Indian manufacturing sector.
Design/methodology/approach
Employing Interpretive Structural Modelling (ISM) and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC), the authors probe the interdependencies amongst fourteen identified barriers inhibiting Quality 4.0 adoption. These barriers were categorized according to their driving power and dependence, providing a richer understanding of the dynamic obstacles within the Technology–Organization–Environment (TOE) framework.
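In MICMAC analysis, a barrier's driving power is its row sum in the final reachability matrix and its dependence is its column sum; quadrant thresholds then classify barriers as drivers, dependent, linkage or autonomous. A toy sketch (the 4x4 reachability matrix is hypothetical, and only the first two barrier names echo the findings; the other two are invented):

```python
# Hypothetical final reachability matrix (1 = barrier i reaches barrier j)
reachability = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
]
barriers = ["lack of standards", "lack of BDA tools",
            "skills shortage", "implementation cost"]

def micmac(matrix, names):
    """Classify each barrier by driving power (row sum) and
    dependence (column sum)."""
    n = len(matrix)
    mid = n / 2
    result = {}
    for i, name in enumerate(names):
        driving = sum(matrix[i])
        dependence = sum(matrix[j][i] for j in range(n))
        if driving > mid and dependence > mid:
            cls = "linkage"
        elif driving > mid:
            cls = "driver"
        elif dependence > mid:
            cls = "dependent"
        else:
            cls = "autonomous"
        result[name] = (driving, dependence, cls)
    return result

print(micmac(reachability, barriers)["lack of standards"])  # (4, 1, 'driver')
```

A barrier with high driving power and low dependence (a "driver") sits at the base of the ISM hierarchy and warrants priority attention.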
Findings
The study results highlight the lack of Quality 4.0 standards and Big Data Analytics (BDA) tools as fundamental obstacles to integrating Quality 4.0 within the Indian manufacturing sector. Additionally, the study results contravene dominant academic narratives, suggesting that the cumulative impact of organizational barriers is marginal, contrary to theoretical postulations emphasizing their central significance in Quality 4.0 assimilation.
Practical implications
This research provides concrete strategies, such as developing a collaborative platform for sharing best practices in Quality 4.0 standards, which fosters a synergistic relationship between organizations and policymakers, for instance, by creating a joint task force, comprised of industry leaders and regulatory bodies, dedicated to formulating and disseminating comprehensive guidelines for Quality 4.0 adoption. This initiative could lead to establishing industry-wide standards, benefiting from the pooled expertise of diverse stakeholders. Additionally, the study underscores the necessity for robust, standardized Big Data Analytics tools specifically designed to meet the Quality 4.0 criteria, which can be developed through public-private partnerships. These tools would facilitate the seamless integration of Quality 4.0 processes, demonstrating a direct route for overcoming the barriers of inadequate standards.
Originality/value
This research delineates specific obstacles to Quality 4.0 adoption by applying the TOE framework, detailing how these barriers interact with and influence each other, particularly highlighting the previously overlooked environmental factors. The analysis reveals a critical interdependence between “lack of standards for Quality 4.0” and “lack of standardized BDA tools and solutions,” providing nuanced insights into their conjoined effect on stalling progress in this field. Moreover, the study contributes to the theoretical body of knowledge by mapping out these novel impediments, offering a more comprehensive understanding of the challenges faced in adopting Quality 4.0.
Rufai Ahmad, Sotirios Terzis and Karen Renaud
Abstract
Purpose
This study aims to investigate how phishers apply persuasion principles and construct deceptive URLs in mobile instant messaging (MIM) phishing.
Design/methodology/approach
In total, 67 examples of real-world MIM phishing attacks were collected from various online sources. Each example was coded using established guidelines from the literature to identify the persuasion principles and URL construction techniques employed.
Findings
The principles of social proof, liking and authority were the most widely used in MIM phishing, followed by scarcity and reciprocity. Most phishing examples use three persuasion principles, often a combination of authority, liking and social proof. In contrast to email phishing but similar to vishing, the social proof principle was the most commonly used in MIM phishing. Phishers implement the social proof principle in different ways, most commonly by claiming that other users have already acted (e.g. crafting messages that indicate the sender has already benefited from the scam). In contrast to email phishing, retail and fintech companies are the most commonly targeted in MIM phishing. Furthermore, phishers created deceptive URLs using multiple obfuscation techniques, often spoofing domains, lengthening URLs with random characters and using homoglyphs.
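The obfuscation techniques described above can be screened for with simple heuristics. This sketch is illustrative only: the homoglyph table, subdomain threshold and example brand are assumptions, not the study's coding guidelines:

```python
from urllib.parse import urlparse

# Tiny, illustrative homoglyph table (fake character -> intended one)
HOMOGLYPHS = {"0": "o", "1": "l", "rn": "m"}

def suspicious_url(url, brand="paypal"):
    """Flag common URL obfuscation patterns for a given brand."""
    host = urlparse(url).netloc.lower()
    flags = []
    if host.count(".") >= 3:
        flags.append("many-subdomains")
    for fake, real in HOMOGLYPHS.items():
        if fake in host and brand in host.replace(fake, real):
            flags.append("homoglyph")
            break
    if brand in host and not host.endswith(brand + ".com"):
        flags.append("spoofed-domain")
    return flags

print(suspicious_url("http://paypa1.com.example.verify.io/login"))
# ['many-subdomains', 'homoglyph']
```

Production anti-phishing tools rely on far richer signals (Unicode confusables tables, domain age, reputation feeds); the sketch only shows how the coded techniques map onto checkable URL properties.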
Originality/value
The insights from this study provide a theoretical foundation for future research on the psychological aspects of phishing in MIM apps. The study provides recommendations that software developers should consider when developing automated anti-phishing solutions for MIM apps and proposes a set of MIM phishing awareness training tips.
Bikesh Manandhar, Thanh-Canh Huynh, Pawan Kumar Bhattarai, Suchita Shrestha and Ananta Man Singh Pradhan
Abstract
Purpose
This research is aimed at preparing landslide susceptibility maps using spatial analysis and soft computing machine learning techniques based on convolutional neural networks (CNNs), artificial neural networks (ANNs) and logistic regression (LR) models.
Design/methodology/approach
Using a Geographical Information System (GIS), a spatial database including topographic, hydrologic, geological and landuse data is created for the study area. The data are randomly divided between a training set (70%), a validation set (10%) and a test set (20%).
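The 70/10/20 split can be sketched as a seeded random partition; this is a generic illustration, not the authors' GIS pipeline:

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split into 70% training, 10% validation, 20% test."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(0.7 * len(shuffled))
    n_val = int(0.1 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 10 20
```

Fixing the seed keeps the partition reproducible, which matters when comparing success and prediction rates across the CNN, ANN and LR models.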
Findings
The validation findings demonstrate that the CNN model (with an 89% success rate and an 84% prediction rate) and the ANN model (with an 84% success rate and an 81% prediction rate) predict landslides better than the LR model (with a success rate of 82% and a prediction rate of 79%). As the most accurate of the three, the CNN is utilized for the final susceptibility map.
Research limitations/implications
Land cover and geological data are limited at large scales, making it challenging to develop accurate and comprehensive susceptibility maps.
Practical implications
It helps to identify areas with a higher likelihood of experiencing landslides. This information is crucial for assessing the risk posed to human lives, infrastructure and properties in these areas. It allows authorities and stakeholders to prioritize risk management efforts and allocate resources more effectively.
Social implications
The social implications of a landslide susceptibility map are profound, as it provides vital information for disaster preparedness, risk mitigation and landuse planning. Communities can utilize these maps to identify vulnerable areas, implement zoning regulations and develop evacuation plans, ultimately safeguarding lives and property. Additionally, access to such information promotes public awareness and education about landslide risks, fostering a proactive approach to disaster management. However, reliance solely on these maps may also create a false sense of security, necessitating continuous updates and integration with other risk assessment measures to ensure effective disaster resilience strategies are in place.
Originality/value
Landslide susceptibility mapping provides a proactive approach to identifying areas at higher risk of landslides before any significant events occur. Researchers continually explore new data sources, modeling techniques and validation approaches, leading to a better understanding of landslide dynamics and susceptibility factors.