Search results

1 – 10 of 240
Article
Publication date: 27 June 2024

Bo Wang, Xin Jin and Ning Ma

Existing research has predominantly concentrated on examining the factors that impact consumer decisions through the lens of potential consumer motivations, neglecting the…

Abstract

Purpose

Existing research has predominantly concentrated on examining the factors that impact consumer decisions through the lens of potential consumer motivations, neglecting the sentiment mechanisms that propel guest behavioral intentions. This study endeavors to systematically analyze the underlying mechanisms governing how negative reviews exert an influence on potential consumer decisions.

Design/methodology/approach

This paper constructs an “aspect-based sentiment accumulation” index, a negative or positive affect load that reflects the degree of consumer sentiment, based on the affect infusion model and aspect-based sentiment analysis. It first verifies the causal relationship between aspect-based negative load and consumer decisions using ordinary least squares regression. It then uses a panel threshold regression model to analyze the threshold effects of negative affect load on positive affect load, and vice versa.
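
The two-step procedure described above can be sketched in miniature. The following is an illustrative toy implementation of the threshold-search idea behind panel threshold regression (grid-search the threshold that minimizes the combined regime sum of squared residuals), using synthetic data and a single regressor; it is not the authors' model or code.

```python
# Toy single-threshold regression: for each candidate threshold g on the
# threshold variable q, fit a separate OLS line in each regime (q <= g vs.
# q > g) and keep the g that minimizes the total sum of squared residuals.

def ols_fit(xs, ys):
    """One-variable OLS with intercept; returns (intercept, slope, ssr)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    ssr = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, ssr

def threshold_search(x, y, q):
    """Grid-search the threshold on q that minimizes combined regime SSR."""
    best = None
    for g in sorted(set(q))[2:-2]:          # keep a few points in each regime
        lo = [(xi, yi) for xi, yi, qi in zip(x, y, q) if qi <= g]
        hi = [(xi, yi) for xi, yi, qi in zip(x, y, q) if qi > g]
        ssr = ols_fit(*zip(*lo))[2] + ols_fit(*zip(*hi))[2]
        if best is None or ssr < best[1]:
            best = (g, ssr)
    return best[0]

# Synthetic data: the slope of y on x changes once q crosses 5.
q = [i / 10 for i in range(100)]
x = list(range(100))
y = [0.2 * xi if qi <= 5 else 2.0 * xi - 50 for xi, qi in zip(x, q)]
print(threshold_search(x, y, q))  # → 5.0
```

In the paper's setting the threshold variable and the regressor are review-based affect loads rather than synthetic series, and the estimation is panel-based, but the search logic is the same.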

Findings

Aspect-based negative reviews significantly impact consumers’ decisions. Negative affect load and positive affect load exhibit threshold effects on each other, with threshold values varying according to the overall volume of reviews. As the total number of reviews increases, the impact of negative affect load diminishes. The threshold effects for positive affect load follow a predominantly U-shaped pattern. Prompt, enthusiastic host responses with detailed, lengthy text can help mitigate the impact of negative reviews.

Originality/value

The study extends the application of the affect infusion model and enriches the conditions for its theoretical scope. It addresses the research gap by focusing on the threshold effects of negative or positive review sentiment on decision-making in sharing accommodations.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 29 November 2023

Tarun Jaiswal, Manju Pandey and Priyanka Tripathi

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional…

Abstract

Purpose

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Typical convolutional neural networks (CNNs) are unable to capture both local and global contextual information effectively and apply a uniform operation to all pixels in an image. To address this, we propose an innovative approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance consistency. This novel combination allows for improved feature extraction, mimicking the expertise of radiologists by selectively focusing on important areas and producing coherent captions with valuable clinical information.

Design/methodology/approach

In this study, we present a new report generation approach that uses dynamic convolution applied to a ResNet-101 encoder (DyCNN) (Verelst and Tuytelaars, 2019) and a GRU decoder (Dey and Salem, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within the CXR images. This dynamic adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. Meanwhile, the attention mechanism enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to different regions and thereby mimicking human perception. In parallel, the GRU-based decoder ensures a smooth, sequential generation of captions.
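
The attention step described above can be sketched in a few lines. This is a generic illustration of dot-product attention over image-region features with toy dimensions; it is not the authors' implementation, in which the region features come from the DyCNN encoder and the query from the GRU decoder state.

```python
import numpy as np

# Given the decoder's current hidden state, score each image region, turn the
# scores into importance weights via softmax, and form the context vector as
# the weighted sum of region features.

rng = np.random.default_rng(0)
regions = rng.normal(size=(49, 256))   # 7x7 feature map flattened to 49 regions
hidden = rng.normal(size=(256,))       # decoder hidden state (toy value)

scores = regions @ hidden                  # one relevance score per region
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax -> importance weights
context = weights @ regions                # weighted sum of region features

# `context` would be fed to the GRU decoder at this caption time step.
print(context.shape)  # → (256,)
```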

Findings

The findings highlight the significant advancements achieved in chest X-ray image captioning through dynamic convolutional encoder–decoder networks. Experiments conducted on the IU Chest X-ray dataset showed that the proposed model outperformed other state-of-the-art approaches, achieving BLEU_1, BLEU_2, BLEU_3 and BLEU_4 scores of 0.591, 0.347, 0.277 and 0.155, respectively. These results underscore the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.

Originality/value

This work is the first of its kind to employ a DyCNN as an encoder to extract features from CXR images. In addition, a GRU was utilized as the decoder for language modeling, and attention mechanisms were incorporated into the model architecture.

Details

Data Technologies and Applications, vol. 58 no. 3
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 21 June 2024

Delin Yuan and Yang Li

When emergencies occur, the attention of the public towards emergency information on social media in a specific time period forms the emergency information popularity evolution…

Abstract

Purpose

When emergencies occur, public attention to emergency information on social media over a specific time period forms emergency information popularity evolution patterns. The purpose of this study is to discover the popularity evolution patterns of social media emergency information and make early predictions.

Design/methodology/approach

We collected data related to the COVID-19 epidemic on the Sina Weibo platform and applied the K-Shape clustering algorithm to identify five distinct patterns of emergency information popularity evolution: strong twin peaks, weak twin peaks, short-lived single peak, slow-to-warm-up single peak and slow-to-decay single peak. Oriented toward early monitoring and warning, we developed a comprehensive characteristic system that incorporates publisher features, information features and early features. For the early features, data measurements are taken within a 1-h time window after the release of emergency information. Considering real-time response and analysis speed, we employed classical machine learning methods to predict the relevant patterns; multiple classification models were trained and evaluated for this purpose.
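
The early-feature idea, measuring engagement only within a 1-h window after release, can be sketched as follows. The field names and interaction schema are illustrative assumptions, not the paper's actual Weibo data model.

```python
from datetime import datetime, timedelta

def early_features(release_time, interactions, window_hours=1):
    """Count interactions that occur within `window_hours` of release."""
    cutoff = release_time + timedelta(hours=window_hours)
    early = [i for i in interactions if release_time <= i["time"] < cutoff]
    return {
        "early_count": len(early),
        "early_reposts": sum(1 for i in early if i["kind"] == "repost"),
        "early_likes": sum(1 for i in early if i["kind"] == "like"),
    }

release = datetime(2020, 2, 1, 12, 0)
interactions = [
    {"time": release + timedelta(minutes=5), "kind": "repost"},
    {"time": release + timedelta(minutes=40), "kind": "like"},
    {"time": release + timedelta(hours=3), "kind": "repost"},  # outside window
]
print(early_features(release, interactions))
# → {'early_count': 2, 'early_reposts': 1, 'early_likes': 1}
```

Vectors of such early features, together with publisher and information features, would then feed the pattern classifiers.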

Findings

The best prediction model, random forest (RF), demonstrates impressive performance, with precision, recall and F1-score reaching 88%; the F1 value for each pattern prediction surpasses 87%. The feature importance analysis shows that the early features contribute the most to pattern prediction, followed by the information features and publisher features. Among them, the release time in the information features makes the most substantial contribution to the prediction outcome.

Originality/value

This study reveals the phenomena and special patterns of growth and decline, appearance and disappearance of social media emergency information popularity from the time dimension and identifies the patterns of social media emergency information popularity evolution. Meanwhile, early prediction of related patterns is made to explore the role factors behind them. These findings contribute to the formulation of social media emergency information release strategies, online public opinion guidance and risk monitoring.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 14 May 2024

Xuemei Tang, Jun Wang and Qi Su

Recent trends have shown the integration of Chinese word segmentation (CWS) and part-of-speech (POS) tagging to enhance syntactic and semantic parsing. However, the potential…

Abstract

Purpose

Recent trends have shown the integration of Chinese word segmentation (CWS) and part-of-speech (POS) tagging to enhance syntactic and semantic parsing. However, the potential utility of hierarchical and structural information in these tasks remains underexplored. This study aims to leverage multiple external knowledge sources (e.g. syntactic and semantic features, lexicons) through various modules for the joint task.

Design/methodology/approach

We introduce a novel learning framework for the joint CWS and POS tagging task, utilizing graph convolutional networks (GCNs) to encode syntactic structure and semantic features. The framework also incorporates a pre-defined lexicon through a lexicon attention module. We evaluate our model on a range of public corpora, including CTB5, PKU and UD, the novel ZX dataset and the comprehensive CTB9 dataset.
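
The GCN encoding step can be illustrated with a single layer. This is a generic sketch of the standard GCN propagation rule, H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W), applied to a toy dependency graph; the graph, dimensions and weights are not the authors' configuration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer with self-loops and symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)           # symmetric degree normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy dependency graph over 4 tokens (symmetric adjacency matrix).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # token embeddings
W = rng.normal(size=(8, 8))   # layer weights

out = gcn_layer(A, H, W)
print(out.shape)  # → (4, 8)
```

In the paper's framework, such syntax-aware token representations would be combined with lexicon attention before the joint CWS and POS tagging prediction.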

Findings

Experimental results on these benchmark corpora demonstrate the effectiveness of our model in improving the performance of the joint task. Notably, we find that syntax information significantly enhances performance, while lexicon information helps mitigate the issue of out-of-vocabulary (OOV) words.

Originality/value

This study introduces a comprehensive approach to the joint CWS and POS tagging task by combining multiple features. Moreover, the proposed framework offers potential adaptability to other sequence labeling tasks, such as named entity recognition (NER).

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 22 August 2024

Guanghui Ye, Songye Li, Lanqi Wu, Jinyu Wei, Chuan Wu, Yujie Wang, Jiarong Li, Bo Liang and Shuyan Liu

Community question answering (CQA) platforms play a significant role in knowledge dissemination and information retrieval. Expert recommendation can assist users by helping them…

Abstract

Purpose

Community question answering (CQA) platforms play a significant role in knowledge dissemination and information retrieval. Expert recommendation can assist users by helping them find valuable answers efficiently. Existing works mainly use content and user behavioural features for expert recommendation but fail to effectively leverage the correlations across multi-dimensional features.

Design/methodology/approach

To address this issue, this work proposes a multi-dimensional feature fusion-based method for expert recommendation that integrates features of question–answerer pairs from three dimensions: network features, content features and user behaviour features. Firstly, network features are extracted by learning user and tag representations with network representation learning methods and then calculating questioner–answerer and answerer–tag similarities. Secondly, content features are extracted from the textual content of questions and answerer-generated content using text representation models. Thirdly, user behaviour features are extracted from user actions observed on CQA platforms, such as follows and likes. Finally, given a question–answerer pair, the features from the three dimensions are fused and used to predict the probability of the candidate expert answering the given question.
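
The fusion step can be sketched as follows. The embedding values, feature names and dimensions below are toy assumptions for illustration; in the method itself the representations are learned from CQA data.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy learned embeddings for a questioner-answerer pair.
questioner_emb = [0.2, 0.8, 0.1]
answerer_emb = [0.3, 0.7, 0.0]

network = [cosine(questioner_emb, answerer_emb)]  # network-dimension feature
content = [0.62]        # e.g. question-answer text similarity (assumed value)
behaviour = [15, 230]   # e.g. answers given, likes received (assumed values)

# Concatenate the three dimensions into one vector for the final classifier,
# which predicts the probability that this candidate answers the question.
fused = network + content + behaviour
print(len(fused))  # → 4
```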

Findings

The proposed method is evaluated on a dataset collected from a publicly available CQA platform. Results show that the proposed method is effective compared with baseline methods, and an ablation study shows that network features are the most important of the three feature dimensions.

Practical implications

This work identifies features along three dimensions for expert recommendation in CQA platforms and conducts a comprehensive investigation into their importance for the performance of expert recommendation. The results suggest that network features are the most important of the three dimensions, which indicates that the performance of expert recommendation in CQA platforms is likely to improve by further mining network features with advanced techniques, such as graph neural networks. A broader implication is that it is always important to include multi-dimensional features for expert recommendation and to conduct systematic investigation to identify the most important features, thereby finding directions for improvement.

Originality/value

This work proposes features along three dimensions, given that existing works mostly focus on one or two dimensions, and demonstrates the effectiveness of the newly proposed features.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 26 April 2024

Chao Zhang, Zenghao Cao, Zhimin Li, Weidong Zhu and Yong Wu

Since the implementation of the regulatory inquiry system, research on its impact on information disclosure in the capital market has been increasing. This article focuses on a…

Abstract

Purpose

Since the implementation of the regulatory inquiry system, research on its impact on information disclosure in the capital market has been increasing. This article takes Chinese annual report inquiry letters as its basis: from a text mining perspective, we explore whether the textual information contained in these inquiry letters can help predict financial restatement behavior of the inquired companies.

Design/methodology/approach

Python was used to process the data, nonparametric tests were conducted for hypothesis testing and indicator selection, and six machine learning models were employed to predict financial restatements.
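
The kind of text feature extraction involved can be sketched minimally. The stopword list and sample sentence below are illustrative and in English, whereas the study analyzes Chinese inquiry letters with its own lexicons; readability and topic features would require further processing.

```python
# Compute two of the text features mentioned in the findings: total word
# count and the proportion of stopwords in a document.

STOPWORDS = {"the", "of", "and", "to", "in", "a"}

def text_features(text):
    words = text.lower().split()
    total = len(words)
    stop = sum(1 for w in words if w in STOPWORDS)
    return {"total_words": total, "stopword_ratio": stop / total}

sample = "Please explain the basis of the revenue recognition in the annual report"
print(text_features(sample))  # 12 words, 5 of them stopwords
```

Feature vectors of this kind, after nonparametric-test-based selection, would be fed to the six machine learning models.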

Findings

Some text feature indicators in the models that exhibit significant differences are useful for predicting financial restatements, particularly the proportion of formal positive words and stopwords, readability, total word count and certain textual topics. Securities regulatory authorities are increasingly focusing on the accounting and financial aspects of companies' annual reports.

Research limitations/implications

This study explores the textual information in annual report inquiry letters, which can offer other scholars insights into research methods and content. In addition, it can assist decision-making for participants in the capital market.

Originality/value

We use information technology to study the textual information in annual report inquiry letters and apply it to forecast financial restatements, which enriches the research in the field of regulatory inquiries.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 July 2024

Meiwen Li, Liye Xia, Qingtao Wu, Lin Wang, Junlong Zhu and Mingchuan Zhang

In traditional Chinese medicine (TCM), the mechanism of disease (MD) constitutes an essential element of syndrome differentiation and treatment, elucidating the mechanisms…

Abstract

Purpose

In traditional Chinese medicine (TCM), the mechanism of disease (MD) constitutes an essential element of syndrome differentiation and treatment, elucidating the mechanisms underlying the occurrence, progression, alterations and outcomes of diseases. However, there is a dearth of research in the field of intelligent diagnosis concerning the analysis of MD.

Design/methodology/approach

In this paper, we propose a supervised Latent Dirichlet Allocation (LDA) topic model, termed MD-LDA, which elucidates the process of MDs identification. We leverage the label information inherent in the data as prior knowledge and incorporate it into the model’s training. Additionally, we devise two parallel parameter estimation algorithms for efficient training. Furthermore, we introduce a benchmark MD identification dataset, named TMD, for training MD-LDA. Finally, we validate the performance of MD-LDA through comprehensive experiments.

Findings

The results show that MD-LDA is effective and efficient, outperforming state-of-the-art topic models on perplexity, Kullback–Leibler (KL) divergence and classification performance.

Originality/value

The proposed MD-LDA can be applied to MD discovery and analysis in TCM clinical diagnosis, so as to improve the interpretability and reliability of intelligent diagnosis and treatment.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite…

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use machine learning (ML) approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature on applying ML to archaeological prospection with a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
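
The tile-based rates reported above follow mechanically from confusion-matrix-style counts, under one plausible reading of the definitions (false positives as a share of tagged tiles, false negatives as a share of tiles that truly contain mounds). The counts below are invented round numbers for illustration, not the paper's data.

```python
def tile_rates(tp, fp, fn):
    """Rates from tile counts: tp/fp are tagged tiles, fn are missed mounds."""
    tagged = tp + fp                      # tiles the model flagged as mounds
    actual = tp + fn                      # tiles that truly contain mounds
    return {
        "false_positive_rate": fp / tagged,  # share of tagged tiles that are wrong
        "false_negative_rate": fn / actual,  # share of real mounds missed
        "true_positive_rate": tp / tagged,   # share of tagged tiles that are right
    }

rates = tile_rates(tp=10, fp=90, fn=190)
print(rates)
# → {'false_positive_rate': 0.9, 'false_negative_rate': 0.95, 'true_positive_rate': 0.1}
```

With counts in these proportions, high self-reported scores can coexist with a model that misses almost all real features, which is why the external validation against field data mattered.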

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that the use of artificial intelligence (AI) and ML approaches in archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale for potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. 80 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 13 April 2023

Dandan He, Zhong Yao, Futao Zhao and Yue Wang

Retail investors are prone to be affected by information dissemination in social media with the rapid development of Web 2.0. The purpose of this study is to recognize the factors…

Abstract

Purpose

Retail investors are prone to be affected by information dissemination on social media with the rapid development of Web 2.0. The purpose of this study is to recognize, through machine learning techniques, the factors that may impact users' retweet behavior, that is, information dissemination in the online financial community.

Design/methodology/approach

This paper crawled data from the Chinese online financial community (Xueqiu.com) and extracted author-related, content-related, situation-related, stock-related and stock market-related features from the dataset. The best information dissemination prediction model based on these features was determined by evaluating five classifiers with various performance metrics, and the predictability of different feature groups was tested.

Findings

Five prevalent classifiers were evaluated with various performance metrics, and the random forest classifier proved to be the best retweet prediction model in the authors’ experiments. Moreover, the predictability of author-related, content-related and market-related features was shown to be relatively better than that of the other two feature groups. Finally, several particularly important features, such as the author's number of followers and the rise and fall of the stock index, were identified.

Originality/value

This study contributes to in-depth research on information dissemination in the financial domain. The findings of this study have important practical implications for government regulators to supervise public opinion in the financial market.

Details

Aslib Journal of Information Management, vol. 76 no. 4
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 11 June 2024

Ehsanul Hassan, Muhammad Awais-E-Yazdan, Ramona Birau, Peter Wanke and Yong Aaron Tan

This study aims to develop a robust predictive model for anticipating financial distress within Pakistani companies, providing a crucial tool for proactive economic turbulence…

Abstract

Purpose

This study aims to develop a robust predictive model for anticipating financial distress within Pakistani companies, providing a crucial tool for the proactive management of economic turbulence.

Design/methodology/approach

To achieve this objective, the study examines a comprehensive data set comprising nonfinancial firms listed on the Pakistan Stock Exchange from 2005 to 2022. It investigates 23 financial ratios categorized under profitability, liquidity, leverage, asset efficiency, size and growth.
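
The idea of combining individual ratios into category-level indices can be sketched as follows. The normalization scheme (min-max) and the ratio values below are illustrative assumptions; the paper's exact index construction may differ.

```python
def minmax(values):
    """Min-max normalize a list of ratio values across firms to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def ratio_index(ratio_columns):
    """Average the normalized ratios within a category into one index per firm."""
    normalized = [minmax(col) for col in ratio_columns]
    n_firms = len(ratio_columns[0])
    return [sum(col[i] for col in normalized) / len(normalized)
            for i in range(n_firms)]

# Two profitability ratios (e.g. ROA and net margin) for three firms.
roa = [0.02, 0.10, 0.06]
net_margin = [0.01, 0.09, 0.05]
print(ratio_index([roa, net_margin]))  # ≈ [0.0, 1.0, 0.5]
```

A distress model would then take such category indices (profitability, liquidity, leverage, asset efficiency, size, growth) as inputs instead of the 23 raw ratios.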

Findings

The study reveals that financial ratio indices are more effective in forecasting financial distress than individual ratios. These indices achieve accuracy rates ranging from 93.90% in the first year leading up to bankruptcy to 73.71% in the fifth year. Furthermore, the research identifies profitability, liquidity, leverage, asset efficiency, size and growth as pivotal indicators for financial distress prediction.

Originality/value

This research underscores the utility and practicality of financial ratio indices, offering a comprehensive perspective on risk assessment and management. In conclusion, this pioneering study provides valuable insights into financial distress prediction, highlighting the enhanced information capture made possible by financial ratio indices. It equips stakeholders in the Pakistan Stock Exchange with an effective means to proactively address financial risks.

Details

International Journal of Islamic and Middle Eastern Finance and Management, vol. 17 no. 3
Type: Research Article
ISSN: 1753-8394
