Search results
1 – 10 of 459
Wei Shi, Jing Zhang and Shaoyi He
Abstract
Purpose
With the rapid development of short videos in China, the public has become accustomed to using short videos to express their opinions. This paper aims to solve problems such as how to represent the features of different modalities and achieve effective cross-modal feature fusion when analyzing the multi-modal sentiment of Chinese short videos (CSVs).
Design/methodology/approach
This paper proposes a sentiment analysis model, MSCNN-CPL-CAFF, which uses a multi-scale convolutional neural network and a cross-attention fusion mechanism to analyze CSVs. Audio-visual and textual data of CSVs themed on “COVID-19, catering industry” are first collected from the CSV platform Douyin, and a comparative analysis with advanced baseline models is then conducted.
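The cross-attention fusion idea described above can be sketched generically: one modality's features act as queries that attend over another modality's features. This is an illustrative NumPy sketch, not the authors' exact CAFF module, whose projections, heads and normalization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, kv_feats):
    """Attend from one modality (queries) to another (keys/values).

    A generic sketch of cross-modal attention; shapes are
    (n_q, d) and (n_kv, d) respectively.
    """
    d = query_feats.shape[-1]
    scores = query_feats @ kv_feats.T / np.sqrt(d)  # (n_q, n_kv)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ kv_feats                        # (n_q, d)

# toy example: 4 text tokens attend over 6 audio-visual frames, dim 8
rng = np.random.default_rng(0)
text = rng.standard_normal((4, 8))
av = rng.standard_normal((6, 8))
fused = cross_attention(text, av)
```

Each fused text token is then a convex combination of the audio-visual features, which is what allows gradients to tie the modalities together during training.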
Findings
The number of weak negative and neutral samples is the largest, while the number of positive and weak positive samples is relatively small, accounting for only about 11% of the total. The MSCNN-CPL-CAFF model achieves Acc-2, Acc-3 and F1 scores of 85.01%, 74.16% and 84.84%, respectively, outperforming the best baseline methods in accuracy while achieving competitive computation speed.
Practical implications
This research offers some implications regarding the impact of COVID-19 on the catering industry in China by focusing on the multi-modal sentiment of CSVs. The methodology can be used to analyze the opinions of the general public on social media platforms and to categorize them accordingly.
Originality/value
This paper presents a novel deep-learning multimodal sentiment analysis model, which provides a new perspective for public opinion research on the short video platform.
Song Wang, Ying Luo and Xinmin Liu
Abstract
Purpose
The overload of user-generated content in online mental health communities makes the focus and resonance tendencies of the participating groups less clear. Thus, the purpose of this paper is to build an early identification mechanism for users' high-attention content to promote early intervention and effective dissemination of professional medical guidance.
Design/methodology/approach
We decouple the identification mechanism into two processes: early feature combing and algorithmic model construction. First, based on the differentiated needs and concerns of the participant groups, multiple features of “information content + source users” are refined. Second, a multi-level fusion model is constructed for feature processing. Specifically, Bidirectional Encoder Representations from Transformers (BERT)-Bidirectional Long Short-Term Memory (BiLSTM)-Linear layers are used to refine the semantic features, while a Graph Attention Network (GAT) is used to capture the entity attributes and relation features. Finally, a Convolutional Neural Network (CNN) is used to optimize the multi-level fusion features.
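The final fusion-then-convolution step above can be sketched in miniature: concatenate the semantic and graph feature vectors, then slide a 1-D kernel over the result. The shapes and the hand-picked kernel are illustrative assumptions; the paper's actual CNN operates on learned, higher-dimensional features.

```python
import numpy as np

def fuse_and_convolve(semantic, graph_attr, kernel):
    """Concatenate feature vectors from two extractors, then apply a
    valid-mode 1-D convolution (cross-correlation), mirroring in spirit
    the CNN over multi-level fused features."""
    fused = np.concatenate([semantic, graph_attr])  # multi-level fusion
    k = len(kernel)
    return np.array([fused[i:i + k] @ kernel
                     for i in range(len(fused) - k + 1)])

sem = np.ones(6)        # stand-in for BERT-BiLSTM-Linear semantic features
gat = np.full(4, 2.0)   # stand-in for GAT entity/relation features
out = fuse_and_convolve(sem, gat, np.array([0.5, 0.5]))
```

The kernel here simply averages adjacent entries, so the output transitions smoothly across the boundary between the two feature groups, which is the mixing effect a convolution over concatenated features provides.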
Findings
The results show that the accuracy (ACC) of the multi-level fusion model is 84.42%, the F1 score is 79.43% and the recall (R) is 76.71%. Compared with other baseline models and single feature elements, the ACC and F1 values are improved to varying degrees.
Originality/value
The originality of this paper lies in analyzing multiple features based on early stages and constructing a new multi-level fusion model for processing. Further, the study is valuable for the orientation of psychological patients' needs and early guidance of professional medical care.
Mukesh Soni, Nihar Ranjan Nayak, Ashima Kalra, Sheshang Degadwala, Nikhil Kumar Singh and Shweta Singh
Abstract
Purpose
The purpose of this paper is to improve the existing paradigm of edge computing to maintain a balanced energy usage.
Design/methodology/approach
The new greedy algorithm is proposed to balance the energy consumption in edge computing.
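A minimal sketch of the greedy idea, assuming the common "assign each task to the currently least-loaded node" formulation; the paper's actual algorithm for edge computing likely models device and network specifics that are omitted here.

```python
def greedy_balance(task_costs, n_nodes):
    """Assign each task to the node with the lowest accumulated energy
    load, processing the largest tasks first (a standard greedy
    heuristic for load balancing)."""
    loads = [0.0] * n_nodes
    assignment = []
    for cost in sorted(task_costs, reverse=True):
        node = loads.index(min(loads))  # least-loaded node so far
        loads[node] += cost
        assignment.append(node)
    return loads, assignment

# six tasks spread over two edge nodes
loads, _ = greedy_balance([5, 3, 3, 2, 2, 1], n_nodes=2)
```

Sorting tasks in descending order before assignment is what keeps the final loads close together; a random assignment offers no such guarantee, which is the gap the paper's comparison measures.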
Findings
The new greedy algorithm can balance energy more efficiently than the random approach by an average of 66.59%.
Originality/value
The results presented in this paper are better than those of existing algorithms.
Zhanglin Peng, Tianci Yin, Xuhui Zhu, Xiaonong Lu and Xiaoyu Li
Abstract
Purpose
To predict the price of battery-grade lithium carbonate accurately and provide proper guidance to investors, a method called MFTBGAM is proposed in this study. This method integrates textual and numerical information using a TCN-BiGRU-Attention architecture.
Design/methodology/approach
The Word2Vec model is initially employed to process the gathered textual data concerning battery-grade lithium carbonate. Subsequently, a dual-channel text-numerical extraction model, integrating TCN and BiGRU, is constructed to extract textual and numerical features separately. Following this, the attention mechanism is applied to extract fusion features from the textual and numerical data. Finally, the market price prediction results for battery-grade lithium carbonate are calculated and outputted using the fully connected layer.
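The attention-based fusion of the two channels can be sketched as a simple additive attention over the channel outputs. The TCN/BiGRU encoders that would produce these features are omitted, and the score vector `w` is an illustrative learned parameter, not the paper's exact mechanism.

```python
import numpy as np

def attention_fuse(text_feats, num_feats, w):
    """Fuse text-channel and numeric-channel feature vectors with a
    softmax-weighted sum: each channel gets a scalar attention weight."""
    stacked = np.stack([text_feats, num_feats])  # (2, d)
    scores = stacked @ w                          # one score per channel
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                           # attention weights
    return alpha @ stacked                        # weighted fusion, (d,)

text = np.array([1.0, 0.0, 2.0])  # stand-in for TCN/BiGRU text features
nums = np.array([0.0, 1.0, 0.0])  # stand-in for numerical features
fused = attention_fuse(text, nums, w=np.zeros(3))  # zero scores: equal weights
```

With a zero score vector both channels receive weight 0.5; during training the weights would shift toward whichever channel is more informative for the price target before the fully connected output layer.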
Findings
Experiments in this study are carried out using datasets consisting of news and investor commentary. The findings reveal that the MFTBGAM model exhibits superior performance compared to alternative models, showing its efficacy in precisely forecasting the future market price of battery-grade lithium carbonate.
Research limitations/implications
The dataset analyzed in this study spans from 2020 to 2023, and thus, the forecast results are specifically relevant to this timeframe. Altering the sample data would necessitate repetition of the experimental process, resulting in different outcomes. Furthermore, recognizing that raw data might include noise and irrelevant information, future endeavors will explore efficient data preprocessing techniques to mitigate such issues, thereby enhancing the model’s predictive capabilities in long-term forecasting tasks.
Social implications
The price prediction model serves as a valuable tool for investors in the battery-grade lithium carbonate industry, facilitating informed investment decisions. By using the results of price prediction, investors can discern opportune moments for investment. Moreover, this study utilizes two distinct types of text information – news and investor comments – as independent sources of textual data input. This approach provides investors with a more precise and comprehensive understanding of market dynamics.
Originality/value
We propose a novel price prediction method based on TCN-BiGRU-Attention for “text-numerical” information fusion. We separately use two types of textual information, news and investor comments, for prediction to enhance the model's effectiveness and generalization ability. Additionally, we utilize news datasets including both titles and content to improve the accuracy of battery-grade lithium carbonate market price predictions.
Wenshen Xu, Yifan Zhang, Xinhang Jiang, Jun Lian and Ye Lin
Abstract
Purpose
In the field of steel defect detection, the existing detection algorithms struggle to achieve a satisfactory balance between detection accuracy, computational cost and inference speed due to the interference from complex background information, the variety of defect types and significant variations in defect morphology. To solve this problem, this paper aims to propose an efficient detector based on multi-scale information extraction (MSI-YOLO), which uses YOLOv8s as the baseline model.
Design/methodology/approach
First, the authors introduce an efficient multi-scale convolution with different-sized convolution kernels, which enables the feature extraction network to accommodate significant variations in defect morphology. Furthermore, the authors introduce the channel prior convolutional attention mechanism, which allows the network to focus on defect areas and ignore complex background interference. Considering the lightweight design and accuracy improvement, the authors introduce a more lightweight feature fusion network (Slim-neck) to improve the fusion effect of feature maps.
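The multi-scale convolution idea can be illustrated with parallel filters of different sizes applied to the same input. One-dimensional signals and hand-picked averaging kernels are simplifying assumptions here; MSI-YOLO uses learned 2-D kernels inside a CNN backbone.

```python
import numpy as np

def conv1d_same(x, k):
    # 'same'-padded 1-D cross-correlation with an odd-length kernel
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

def multi_scale(x, kernels):
    """Run parallel convolutions with different kernel sizes and stack
    the responses: small kernels preserve fine detail, large kernels
    capture wider context, accommodating varied defect morphology."""
    return np.stack([conv1d_same(x, k) for k in kernels])

signal = np.array([0.0, 0, 1, 0, 0])           # an isolated "defect" spike
feats = multi_scale(signal, [np.ones(1), np.ones(3) / 3])
```

The size-1 kernel reproduces the spike exactly while the size-3 kernel spreads it over its neighborhood, showing how differently sized receptive fields respond to the same feature.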
Findings
MSI-YOLO achieves 79.9% mean average precision on the public data set Northeastern University (NEU)-DET, with a model size of only 19.0 MB and an inference speed of 62.5 frames per second. Compared with other state-of-the-art detectors, MSI-YOLO greatly improves recognition accuracy and has significant advantages in computational cost and inference speed. Additionally, the strong generalization ability of MSI-YOLO is verified on a collected industrial-site steel data set.
Originality/value
This paper proposes an efficient steel defect detector with high accuracy, low computational cost, excellent detection speed and strong generalization ability, which is more valuable for practical applications in resource-limited industrial production.
Yangze Liang and Zhao Xu
Abstract
Purpose
Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced you only look once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.
Design/methodology/approach
The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.
Findings
The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.
Originality/value
The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.
Manju Priya Arthanarisamy Ramaswamy and Suja Palaniswamy
Abstract
Purpose
The aim of this study is to investigate subject-independent emotion recognition capabilities of EEG and peripheral physiological signals, namely electrooculogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. This study arranges the physiological signals in order based on the prediction accuracy obtained on test data using time- and frequency-domain features.
Design/methodology/approach
The DEAP dataset is used in this experiment. Time- and frequency-domain features of EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers, namely Naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, LogitBoost and stacking, are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.
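A simplified stand-in for the correlation-based selection step: keep features whose absolute Pearson correlation with the label exceeds a threshold. Full correlation-based feature selection (CFS) also penalizes redundancy among features, and the threshold used here is an illustrative choice, not a value from the study.

```python
import numpy as np

def select_by_correlation(X, y, threshold=0.5):
    """Return indices of columns of X whose absolute Pearson
    correlation with y exceeds `threshold`."""
    keep = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return keep

y = np.array([0, 0, 1, 1, 1, 0], dtype=float)
X = np.column_stack([
    y * 2 + 1,                         # perfectly correlated feature
    np.array([1.0, 2, 1, 2, 1, 2]),    # feature unrelated to the label
])
selected = select_by_correlation(X, y)
```

Only the informative column survives, which shrinks the feature space that the downstream classifiers (Naïve Bayes, LDA and so on) must handle.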
Findings
The experimental results with EEG as one modality and all physiological signals as another modality indicate that EEG signals are better at arousal prediction compared to physiological signals by 7.18%, while physiological signals are better at valence prediction compared to EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75%, at the cost of a higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG) while arousal can be measured from the changes in blood volume (plethysmograph). The sorted order of physiological signals based on arousal prediction accuracy is plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration, EDA, while based on valence prediction accuracy the sorted order is EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.
Originality/value
Many of the emotion recognition studies in the literature are subject dependent, and the limited subject-independent emotion recognition studies report the average leave-one-subject-out (LOSO) validation result as accuracy. The work reported in this paper sets the baseline for subject-independent emotion recognition using the DEAP dataset by clearly specifying the subjects used in the training and test sets. In addition, this work specifies the cut-off score used to classify the scale as low or high in the arousal and valence dimensions. Generally, statistical features are used for emotion recognition using physiological signals as a modality, whereas in this work, time- and frequency-domain features of physiological signals and EEG are used. This paper concludes that valence can be identified from EOG while arousal can be predicted from plethysmograph.
Worapan Kusakunniran, Sarattha Karnjanapreechakorn, Pitipol Choopong, Thanongchai Siriapisith, Nattaporn Tesavibul, Nopasak Phasukkijwatana, Supalert Prakhunhungsit and Sutasinee Boonsopon
Abstract
Purpose
This paper aims to propose a solution for detecting and grading diabetic retinopathy (DR) in retinal images using a convolutional neural network (CNN)-based approach. It could classify input retinal images into a normal class or an abnormal class, which would be further split into four stages of abnormalities automatically.
Design/methodology/approach
The proposed solution is developed based on a newly proposed CNN architecture, namely, DeepRoot. It consists of one main branch, which is connected by two side branches. The main branch is responsible for the primary feature extractor of both high-level and low-level features of retinal images. Then, the side branches further extract more complex and detailed features from the features outputted from the main branch. They are designed to capture details of small traces of DR in retinal images, using modified zoom-in/zoom-out and attention layers.
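The branch structure described above can be sketched as a shared main trunk whose output feeds two side heads at different depths. Single linear layers stand in for DeepRoot's real convolutional stages, and the head sizes (binary normal/abnormal versus the four DR stages plus normal) are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def multi_branch_forward(x, w_main, w_side1, w_side2):
    """Main branch computes shared features; two side branches refine
    them into outputs at different depths of the network."""
    main = relu(x @ w_main)        # shared primary features
    coarse = relu(main @ w_side1)  # side branch 1: coarse decision
    fine = relu(main @ w_side2)    # side branch 2: fine-grained stages
    return coarse, fine

rng = np.random.default_rng(1)
x = rng.standard_normal(8)                      # stand-in retinal features
coarse, fine = multi_branch_forward(
    x,
    rng.standard_normal((8, 4)),   # main-branch weights
    rng.standard_normal((4, 2)),   # 2-way head (normal vs. abnormal)
    rng.standard_normal((4, 5)))   # 5-way head (stages of severity)
```

Because both heads share the trunk, the abundant coarse labels help train features that the rarer fine-grained classes reuse, which is one way a multi-branch design mitigates class imbalance.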
Findings
The proposed method is trained, validated and tested on the Kaggle dataset. The generalization of the trained model is evaluated using unseen data samples, which were self-collected from a real hospital scenario. It achieves a promising performance with a sensitivity of 98.18% under the two-class scenario.
Originality/value
The new CNN-based architecture (i.e. DeepRoot) is introduced with the concept of a multi-branch network. It could assist in solving a problem of an unbalanced dataset, especially when there are common characteristics across different classes (i.e. four stages of DR). Different classes could be outputted at different depths of the network.
B. Maheswari and Rajganesh Nagarajan
Abstract
Purpose
A new Chatbot system is implemented to provide both voice-based and text-based communication to address student queries without delay. Initially, the input texts are gathered from the chat, and the gathered text is fed to pre-processing techniques such as tokenization, stemming of words and removal of stop words. The pre-processed data are then given to the Natural Language Processing (NLP) stage for extracting the features, where XLNet and Bidirectional Encoder Representations from Transformers (BERT) are utilized. From these extracted features, the target-based fused feature pools are obtained. Then, intent detection is carried out to extract the answers related to the user queries via an Enhanced 1D-Convolutional Neural Network with Long Short-Term Memory (E1DCNN-LSTM), where the parameters are optimized using Position Averaging of Binary Emperor Penguin Optimizer with Colony Predation Algorithm (PA-BEPOCPA). Finally, the answers are extracted based on the intent of a particular student's teaching materials, such as video, image or text. The implementation results are analyzed against different recently developed Chatbot detection models to validate the effectiveness of the newly developed model.
Design/methodology/approach
A smart NLP model is developed to help education-related institutions enable easy interaction between students and teachers, with highly accurate predictions for a given query. This research work aims to design a new educational Chatbot to assist the teaching-learning process with NLP. The input data are gathered from the user through chats and given to the pre-processing stage, where tokenization, stemming of words and removal of stop words are used. The output data from the pre-processing stage are given to the feature extraction phase, where XLNet and BERT are used. In this feature extraction, the optimal features are extracted using the hybrid PA-BEPOCPA to maximize the correlation coefficient. The features from XLNet and the features from BERT are given to a target-based fused feature pool to produce optimal features. Here, the best features are optimally selected using the developed PA-BEPOCPA to maximize the correlation among coefficients. The output of the selected features is given to the E1DCNN-LSTM for implementation of the educational Chatbot with high accuracy and precision.
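The pre-processing stage described above (tokenization, stop-word removal, stemming) can be sketched in a few lines. The stop-word list and the crude suffix-stripping rule are illustrative simplifications of tools such as NLTK's tokenizers and the Porter stemmer, which the paper does not name explicitly.

```python
def preprocess(text, stop_words=frozenset({"the", "is", "a", "of"})):
    """Tokenize on whitespace, drop stop words, then apply a minimal
    suffix-stripping stemmer."""
    tokens = text.lower().split()                        # tokenization
    tokens = [t for t in tokens if t not in stop_words]  # stop-word removal
    stems = [t[:-3] if t.endswith("ing") else t          # naive stemming
             for t in tokens]
    return stems

result = preprocess("The chatbot is answering a query")
```

The cleaned token stream is what a feature extractor such as BERT or XLNet would then consume (in practice via their own subword tokenizers).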
Findings
The investigation results show that the implemented model achieves accuracy gains of 57% over Bidirectional Long Short-Term Memory (BiLSTM), 58% over a One-Dimensional Convolutional Neural Network (1DCNN), 59% over LSTM and 62% over Ensemble for the given dataset.
Originality/value
The prediction accuracy was high in this proposed deep learning-based educational Chatbot system when compared with various baseline works.
Monojit Das, V.N.A. Naikan and Subhash Chandra Panja
Abstract
Purpose
The aim of this paper is to review the literature on the prediction of cutting tool life. Tool life is typically estimated by predicting the time to reach the threshold flank wear width. The cutting tool is a crucial component in any machining process, and its failure affects the manufacturing process adversely. The prediction of cutting tool life by considering several factors that affect tool life is crucial to managing quality, cost, availability and waste in machining processes.
Design/methodology/approach
This study has undertaken the critical analysis and summarisation of various techniques used in the literature for predicting the life or remaining useful life (RUL) of the cutting tool through monitoring the tool wear, primarily flank wear. The experimental setups that comprise diversified machining processes, including turning, milling, drilling, boring and slotting, are covered in this review.
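The basic estimation task underlying the reviewed methods, predicting the time at which flank wear reaches its threshold, can be sketched with a deliberately simple linear extrapolation. This is a baseline illustration only; the reviewed literature employs far richer stochastic and learning-based models.

```python
def time_to_threshold(times, wear, threshold):
    """Fit wear = slope * t + intercept by least squares, then solve for
    the time at which wear reaches the failure threshold."""
    n = len(times)
    t_mean = sum(times) / n
    w_mean = sum(wear) / n
    slope = (sum((t - t_mean) * (w - w_mean) for t, w in zip(times, wear))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = w_mean - slope * t_mean
    return (threshold - intercept) / slope  # predicted failure time

# flank wear measured at 0, 5 and 10 minutes; threshold 0.3 mm
t_fail = time_to_threshold([0, 5, 10], [0.0, 0.10, 0.20], 0.3)
```

Because tool life is stochastic, such a point estimate would in practice be wrapped in a distributional or data-driven model, which is precisely the design space this review surveys.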
Findings
Cutting tool life is a stochastic variable. Tool failure depends on various factors, including the type and material of the cutting tool, work material, cutting conditions and machine tool. Thus, the life of the cutting tool for a particular experimental setup must be modelled by considering the cutting parameters.
Originality/value
This submission discusses tool life prediction comprehensively, from monitoring tool wear, primarily flank wear, to modelling tool life; a comprehensive review of this type on cutting tool life prediction has not previously been reported in the literature. The future directions suggested in this review are expected to provide avenues for solving the unexplored challenges in this field.