Search results

1 – 10 of 333
Article
Publication date: 28 February 2023

Sandra Matarneh, Faris Elghaish, Amani Al-Ghraibah, Essam Abdellatef and David John Edwards

Abstract

Purpose

Incipient detection of pavement deterioration (such as crack identification) is critical to optimizing road maintenance because it enables preventative steps to be implemented to mitigate damage and possible failure. Traditional visual inspection has been largely superseded by semi-automatic/automatic procedures given significant advancements in image processing. Therefore, there is a need to develop automated tools to detect and classify cracks.

Design/methodology/approach

The literature review is employed to evaluate existing attempts to use the Hough transform algorithm and to highlight issues that should be improved. A simple, low-cost crack detection method based on the Hough transform algorithm is then developed for pavement crack detection and classification.
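
For illustration, a minimal sketch of such a Hough-based pipeline using OpenCV is given below; the Canny thresholds, Hough parameters, angle bands and file path are assumptions for demonstration, not values reported in the article.

```python
# Illustrative Hough-based crack orientation classifier (assumed
# parameters; not the authors' published configuration).
import cv2
import numpy as np

def classify_cracks(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)  # edge map feeds the Hough transform
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    labels = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        # Classify each detected line segment by its absolute angle.
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 20 or angle > 160:
            labels.append("horizontal")
        elif 70 < angle < 110:
            labels.append("vertical")
        else:
            labels.append("diagonal")
    return labels
```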

Findings

Analysis results reveal that model accuracy reaches 92.14% for vertical cracks, 93.03% for diagonal cracks and 95.61% for horizontal cracks. The time lapse for detecting the crack type for one image is circa 0.98 s for vertical cracks, 0.79 s for horizontal cracks and 0.83 s for diagonal cracks. Ensuing discourse serves to illustrate the inherent potential of a simple low-cost image processing method in automated pavement crack detection. Moreover, this method provides direct guidance for long-term pavement optimal maintenance decisions.

Research limitations/implications

The outcome of this research can help highway agencies detect and classify cracks accurately along very long highways without the need for manual inspection, which can significantly reduce costs.

Originality/value

The Hough transform algorithm was tested on detecting and classifying a large dataset of highway images, and its accuracy reaches 92.14%, which can be considered very accurate for the automated classification of cracks and distresses.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 21 March 2024

Thamaraiselvan Natarajan, P. Pragha, Krantiraditya Dhalmahapatra and Deepak Ramanan Veera Raghavan

Abstract

Purpose

The metaverse, which is now revolutionizing how brands strategize their business needs, necessitates understanding individual opinions. Sentiment analysis deciphers emotions and uncovers a deeper understanding of user opinions and trends within this digital realm. Further, sentiments signify the underlying factor that triggers one’s intent to use technology like the metaverse. Positive sentiments often correlate with positive user experiences, while negative sentiments may signify issues or frustrations. Brands may consider these sentiments and act on them in their metaverse platforms to deliver a seamless user experience.

Design/methodology/approach

The current study adopts machine learning sentiment analysis techniques using Support Vector Machine, Doc2Vec, RNN, and CNN to explore the sentiment of individuals toward the metaverse in a user-generated content context. The topics were discovered using the topic modeling method, and sentiment analysis was performed subsequently.
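
As an illustration of one of the compared classifiers, the sketch below trains a linear SVM on TF-IDF features with scikit-learn; the example posts and labels are hypothetical stand-ins for the study's user-generated dataset.

```python
# One of the compared models: a linear SVM over TF-IDF features.
# The inline posts and labels are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["the metaverse experience feels immersive and exciting",
         "worried about data theft on these virtual platforms",
         "virtual concerts and events are engaging and fun",
         "cyber security in the metaverse is a real concern"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["the metaverse economy worries me"]))
```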

Findings

The results revealed that users had a positive notion about the experience and orientation of the metaverse while holding a negative attitude towards the economy, data, and cyber security. The accuracy of each model was analyzed, and CNN was concluded to provide the best accuracy, averaging 89%, compared to the other models.

Research limitations/implications

Analyzing sentiment can reveal how the general public perceives the metaverse. Positive sentiment may suggest enthusiasm and readiness for adoption, while negative sentiment might indicate skepticism or concerns. Given the positive user notions about the metaverse’s experience and orientation, developers should continue to focus on creating innovative and immersive virtual environments. At the same time, users' concerns about data, cybersecurity and the economy are critical. The negative attitude toward the metaverse’s economy suggests a need for innovation in economic models within the metaverse. Also, developers and platform operators should prioritize robust data security measures. Implementing strong encryption and two-factor authentication and educating users about cybersecurity best practices can address these concerns and enhance user trust.

Social implications

In terms of societal dynamics, the metaverse could revolutionize communication and relationships by altering traditional notions of proximity and the presence of its users. Further, virtual economies might emerge, with virtual assets having real-world value, presenting both opportunities and challenges for industries and regulators.

Originality/value

The current study contributes to research as it is the first of its kind to explore the sentiments of individuals toward the metaverse using deep learning techniques and evaluate the accuracy of these models.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 19 April 2023

Shanaka Herath, Vince Mangioni, Song Shi and Xin Janet Ge

Abstract

Purpose

House price fluctuations send vital signals to many parts of the economy, and long-term predictions of house prices are of great interest to governments and property developers. Although predictive models based on economic fundamentals are widely used, the common requirement for such studies is that underlying data are stationary. This paper aims to demonstrate the usefulness of alternative filtering methods for forecasting house prices.

Design/methodology/approach

We specifically focus on exponential smoothing with trend adjustment and multiplicative decomposition using median house prices for Sydney from Q3 1994 to Q1 2017. The model performance is evaluated using out-of-sample forecasting techniques and a robustness check against secondary data sources.
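
A minimal sketch of the two filtering methods using statsmodels is given below; the synthetic quarterly series is a placeholder for the Sydney median-price data, and the parameter choices are illustrative.

```python
# Sketch of both filtering methods on a synthetic quarterly series
# (placeholder for the Sydney median house prices, Q3 1994 - Q1 2017).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.period_range("1994Q3", "2017Q1", freq="Q").to_timestamp()
trend = 3e5 * 1.015 ** np.arange(len(idx))            # slow exponential growth
season = 1 + 0.03 * np.sin(np.arange(len(idx)) * np.pi / 2)
prices = pd.Series(trend * season, index=idx)

# Holt's trend-adjusted exponential smoothing, then a 2-year forecast.
holt = ExponentialSmoothing(prices, trend="add").fit()
print(holt.forecast(8))

# Multiplicative decomposition into trend, seasonal and residual parts.
decomp = seasonal_decompose(prices, model="multiplicative", period=4)
print(decomp.seasonal.head(4))
```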

Findings

Multiplicative decomposition outperforms exponential smoothing in forecasting accuracy. The superior decomposition model suggests that seasonal and cyclical components provide important additional information for predicting house prices. The forecasts for 2017–2028 suggest that prices will slowly increase, going past 2016 levels by 2020 in the apartment market and by 2022/2023 in the detached housing market.

Research limitations/implications

We demonstrate that filtering models are simple (univariate models that only require historical house prices), easy to implement (with no condition of stationarity) and widely used in financial trading, sports betting and other fields where producing accurate forecasts is more important than explaining the drivers of change. The paper puts forward a case for the inclusion of filtering models within the forecasting toolkit as a useful reference point for comparing forecasts from alternative models.

Originality/value

To the best of the authors’ knowledge, this paper undertakes the first systematic comparison of two filtering models for the Sydney housing market.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Open Access
Article
Publication date: 6 May 2024

Alejandro Rodriguez-Vahos, Sebastian Aparicio and David Urbano

Abstract

Purpose

A debate on whether new ventures should be supported with public funding is taking place. Adopting a position on this discussion requires rigorous assessments of implemented programs. However, the few existing efforts have mostly focused on regional cases in developed countries. To fill this gap, this paper aims to measure the effects of a regional acceleration program in a developing country (Medellin, Colombia).

Design/methodology/approach

The economic notion of capabilities is used to frame the analysis of firm characteristics and productivity, which are hypothesized to be heterogeneous within the program. To test these relationships, propensity score matching is used in a sample of 60 treatment and 16,994 control firms.
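
A rough sketch of nearest-neighbour propensity score matching (with replacement) using scikit-learn follows; the data frame, covariate names and outcome column are hypothetical, not the study's variables.

```python
# Rough sketch of nearest-neighbour propensity score matching (with
# replacement); the column names and synthetic data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def att_psm(df, covariates, treat_col="treated", outcome_col="revenue"):
    # 1. Estimate propensity scores from firm characteristics.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    ps = pd.Series(model.predict_proba(df[covariates])[:, 1], index=df.index)
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0]
    # 2. Match each treated firm to the control with the closest score.
    matched = [(ps.loc[controls.index] - ps.loc[i]).abs().idxmin()
               for i in treated.index]
    # 3. Average treatment effect on the treated: mean outcome gap.
    return (treated[outcome_col].values - df.loc[matched, outcome_col].values).mean()

rng = np.random.default_rng(0)
df = pd.DataFrame({"size": rng.normal(size=500), "age": rng.normal(size=500)})
df["treated"] = (rng.random(500) < 0.05).astype(int)
df["revenue"] = 100 + 5 * df["treated"] + 2 * df["size"] + rng.normal(size=500)
print(att_psm(df, ["size", "age"]))
```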

Findings

This paper finds that treated firms had higher revenue than propensity score-matched controls on average, confirming a positive impact on growth measures. However, such financial growth is mostly observed in service firms rather than other economic sectors.

Research limitations/implications

Further evaluations, with a longer period and using more outcome variables, are suggested in the context of similar publicly funded programs in developing countries.

Originality/value

These findings tip the balance in favor of the literature suggesting supportive programs for high-growth firms as opposed to everyday entrepreneurship. This insight is especially relevant in the context of an emerging economy with scarce funding to support entrepreneurship.

Details

Journal of Entrepreneurship in Emerging Economies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2053-4604

Article
Publication date: 18 August 2023

Gaurav Sarin, Pradeep Kumar and M. Mukund

Abstract

Purpose

Text classification is a widely accepted and adopted technique in organizations to mine and analyze unstructured and semi-structured data. With advancements in computing, deep learning has become more popular among academicians and professionals for mining and analytical operations. In this work, the authors study the research carried out in the field of text classification using deep learning techniques to identify gaps and opportunities for future research.

Design/methodology/approach

The authors adopted a bibliometric approach in conjunction with visualization techniques to uncover new insights and findings. The authors collected two decades of data from the Scopus global database to perform this study and discuss business applications of deep learning techniques for text classification.
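
As a sketch of one of the visualization steps, the snippet below builds a keyword word cloud with the wordcloud package; the abstract strings are placeholders for the Scopus records the authors analyzed.

```python
# One visualization step: a keyword word cloud over collected records.
# The abstract strings are placeholders for the Scopus data.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

abstracts = ["deep learning for text classification",
             "convolutional networks classify short documents",
             "text mining with recurrent neural networks"]

cloud = WordCloud(width=800, height=400).generate(" ".join(abstracts))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```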

Findings

The study provides an overview of the various publication sources in the combined field of text classification and deep learning. It also presents a list of prominent authors and their countries working in this field, as well as a list of the most cited articles based on citations and country of research. Various visualization techniques, such as word clouds, network diagrams and thematic maps, were used to identify the collaboration network.

Originality/value

The study helps to identify research gaps, which is an original contribution to the body of literature. To the best of the authors' knowledge, an in-depth study of the field of text classification and deep learning has not previously been performed. The study provides high value to scholars and professionals by highlighting opportunities for research in this area.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 29 March 2024

Sihao Li, Jiali Wang and Zhao Xu

Abstract

Purpose

The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.

Design/methodology/approach

This study first analyzes the typical building standards in the fields of architecture and fire protection and develops an ontology of their elements. Based on this, a building standard corpus is built, and deep learning models are trained to automatically label the building standard texts. Neo4j is utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo is designed to obtain checking data files. After that, a matching algorithm is devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking for BIM models.
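
The sketch below illustrates how a single rule triple could be stored and queried with the official Neo4j Python driver; the connection details, node labels and property names are assumptions, not the schema used in the study.

```python
# Storing and querying one rule triple with the official Neo4j Python
# driver; URI, credentials, labels and properties are assumed examples.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    # (Element)-[:MUST_SATISFY]->(Rule) mirrors a standard-clause triple.
    session.run(
        "MERGE (e:Element {type: $etype}) "
        "MERGE (r:Rule {attr: $attr, op: $op, value: $value}) "
        "MERGE (e)-[:MUST_SATISFY]->(r)",
        etype="FireDoor", attr="fire_resistance_h", op=">=", value=1.5)
    # Fetch the rules that apply to an element extracted from the model.
    for record in session.run(
            "MATCH (e:Element {type: $etype})-[:MUST_SATISFY]->(r) RETURN r",
            etype="FireDoor"):
        rule = record["r"]
        print(rule["attr"], rule["op"], rule["value"])
driver.close()
```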

Findings

Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.

Originality/value

This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and completes the automated process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). This paper analyzes the vast FSF literature based on inclusion and exclusion criteria that filter articles in the accounting fraud domain published in peer-reviewed quality journals, based on the Australian Business Deans Council (ABDC) journal ranking. Lastly, a reverse search analyzing the articles' abstracts further narrows the selection to 88 peer-reviewed articles. Examining these 88 articles suggests that the current literature is shifting from traditional statistical approaches toward computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution is influenced by the impact of micro and macro variables on FSF and the inadequacy of audit procedures to detect red flags of fraud.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting FSF. The primary objective of this study is to objectively synthesize the volume of accounting literature on FSF. More specifically, this study conducts a systematic literature review (SLR) to examine the evolution of FSF research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also concluded that A* peer-reviewed journals accepted articles that showed a complete picture of performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it to top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Originality/value

This paper contributes to the literature by providing insights to researchers about how the accounting fraud literature has evolved from traditional statistical methods to machine learning algorithms for fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 22 September 2023

Hooman Soleymani, Hamid Reza Saeidnia, Marcel Ausloos and Mohammad Hassanzadeh

Abstract

Purpose

In this study, the authors seek to show how, in the age of artificial intelligence (AI), selective dissemination of information (SDI) performance can be greatly enhanced by leveraging AI technologies and algorithms.

Design/methodology/approach

AI holds significant potential for SDI. In the age of AI, SDI can be greatly enhanced by leveraging AI technologies and algorithms. The authors discuss the SDI techniques used to filter and distribute relevant information to stakeholders, based on the pertinent modern literature.

Findings

The following conceptual indicators of AI can be utilized to obtain a better performance measure of SDI: intelligent recommendation systems, natural language processing, automated content classification, contextual understanding, intelligent alert systems, real-time information updates, adaptive learning, and content summarization and synthesis.
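
As one concrete reading of the first indicator, the sketch below ranks incoming documents against a stored interest profile using TF-IDF and cosine similarity from scikit-learn; the profile and document strings are hypothetical.

```python
# Content-based filtering for SDI: score incoming documents against a
# stored interest profile. Profile and documents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profile = "machine learning applications in library information services"
incoming = ["new AI algorithms for recommendation systems",
            "annual report on library building maintenance",
            "natural language processing for document routing"]

vec = TfidfVectorizer()
matrix = vec.fit_transform([profile] + incoming)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for doc, score in sorted(zip(incoming, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")  # disseminate the highest-scoring items
```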

Originality/value

The authors propose the general framework in which AI can greatly enhance the performance of SDI but also emphasize that there are challenges to consider. These include ensuring data privacy, avoiding algorithmic biases, ensuring transparency and accountability of AI systems and addressing concerns related to information overload.

Article
Publication date: 3 November 2023

Salam Abdallah and Ashraf Khalil

Abstract

Purpose

To understand and lay a foundation for how analytics has been used in depression management, this study conducts a systematic literature review using two techniques: text mining and manual review. The proposed methodology would aid researchers in identifying key concepts and research gaps, which, in turn, will help them establish the theoretical background supporting their empirical research objective.

Design/methodology/approach

This paper explores a hybrid methodology for literature review (HMLR), using text mining prior to systematic manual review.
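
A minimal sketch of the text-mining stage, discovering topics across abstracts with LDA in scikit-learn, is given below; the abstracts and the topic count are illustrative assumptions, not the authors' corpus.

```python
# Text-mining pass before the manual review: discover topics in a set
# of abstracts with LDA. Abstracts and topic count are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = ["depression screening with mobile sensing data",
             "sentiment analysis of patient forum posts",
             "wearable analytics for mood disorder monitoring",
             "machine learning models for depression relapse prediction"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top = [terms[i] for i in component.argsort()[-4:]]
    print(f"topic {k}: {top}")  # candidate concepts for the manual review
```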

Findings

The proposed rapid methodology is an effective tool to automate and speed up the process required to identify key and emerging concepts and research gaps in any specific research domain while conducting a systematic literature review. It assists in populating a research knowledge graph that does not reach all semantic depths of the examined domain yet provides some science-specific structure.

Originality/value

This study presents a new methodology for conducting a literature review for empirical research articles. This study has explored an “HMLR” that combines text mining and manual systematic literature review. Depending on the purpose of the research, these two techniques can be used in tandem to undertake a comprehensive literature review, by combining pieces of complex textual data together and revealing areas where research might be lacking.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 13 September 2021

Naresh Kattekola, Amol Jawale, Pallab Kumar Nath and Shubhankar Majumdar

Abstract

Purpose

This paper aims to improve the performance of an approximate multiplier in terms of peak signal-to-noise ratio (PSNR) and image quality.

Design/methodology/approach

The paper proposes an approximate circuit for a 4:2 compressor, which shows significantly better performance metrics than existing designs. This paper also reports a hybrid architecture for the Dadda multiplier, which incorporates the proposed 4:2 compressor circuit as a basic building block.
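
The behavioural sketch below contrasts an exact 4:2 compressor with a generic approximate variant and evaluates error metrics exhaustively over all 32 input patterns; the simplified logic is illustrative only, not the circuit proposed in the paper.

```python
# Behavioural model: exact 4:2 compressor vs. a generic simplified
# variant that ignores cin (illustrative only, not the paper's circuit).
from itertools import product

def exact_compressor(x1, x2, x3, x4, cin):
    # A 4:2 compressor encodes x1+x2+x3+x4+cin as sum + 2*(carry + cout).
    return x1 + x2 + x3 + x4 + cin

def approx_compressor(x1, x2, x3, x4, cin):
    # Simplified logic: drop cin and generate carries from disjoint
    # pairs only, which misses cross-pair combinations such as 1010.
    s = x1 ^ x2 ^ x3 ^ x4
    carry, cout = x1 & x2, x3 & x4
    return s + 2 * (carry + cout)

# Exhaustive evaluation over all 32 input patterns.
errors = [abs(exact_compressor(*b) - approx_compressor(*b))
          for b in product((0, 1), repeat=5)]
print("error rate:", sum(e > 0 for e in errors) / len(errors))
print("mean error distance:", sum(errors) / len(errors))
```

Exhaustive simulation of this kind is how error rate and mean error distance are typically reported when comparing approximate compressor designs.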

Findings

The hybrid Dadda multiplier architecture is used in a median filter for image de-noising and achieves 20% higher PSNR than the best available designs.
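
For reference, PSNR for 8-bit images follows directly from the mean squared error; the sketch below is a standard implementation, with random arrays standing in for the reference and filtered images.

```python
# Standard PSNR computation for 8-bit images; random arrays stand in
# for the reference image and the median-filtered output.
import numpy as np

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.randint(-10, 11, ref.shape), 0, 255)
print(f"{psnr(ref, noisy):.1f} dB")
```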

Originality/value

The proposed 4:2 compressor improves the error metrics of the hybrid Dadda multiplier.

Details

Circuit World, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0305-6120
