Search results
1 – 10 of 69

Shruti Garg, Rahul Kumar Patro, Soumyajit Behera, Neha Prerna Tigga and Ranjita Pandey
Abstract
Purpose
The purpose of this study is to propose an alternative efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.
Design/methodology/approach
The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using the Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNNs) for both single and combined features.
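As an illustrative sketch (not the paper's code), the OSW segmentation and the Fourier-based NBP feature could look like the following; the window length, step size and band edges are assumptions chosen for the example:

```python
import numpy as np

def overlapping_windows(signal, win_len, step):
    """Split a 1-D signal into fixed-length overlapping windows.

    Windows that would run past the end are dropped, so recordings of
    different lengths all yield equal-length samples.
    """
    starts = range(0, len(signal) - win_len + 1, step)
    return np.array([signal[s:s + win_len] for s in starts])

def normalised_band_power(window, fs, bands):
    """Per-band spectral power, normalised to sum to 1 (Fourier-based)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    powers = np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in bands])
    return powers / powers.sum()

# toy usage: a 10 s, 128 Hz "EEG" trace, 2 s windows with 50% overlap
fs = 128
sig = np.sin(2 * np.pi * 10 * np.arange(10 * fs) / fs)  # 10 Hz alpha-like tone
wins = overlapping_windows(sig, win_len=2 * fs, step=fs)
bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta
nbp = normalised_band_power(wins[0], fs, bands)
```

With 50% overlap a single variable-length recording yields many equal-length training samples, which is the mechanism the abstract credits for handling the imbalanced, variable-length AMIGOS data.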
Findings
The two-dimensional convolutional neural network (2D CNN) outcomes on the EEG signals of the AMIGOS data set yield the highest accuracy: 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, at least 6% higher than other available competitive approaches.
Originality/value
The present work is focussed on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length, whereas EEG emotion recognition work is widely available on simpler data sets. The challenges of the AMIGOS data set addressed in the present work are: handling tensor-form data; proposing an efficient method for generating sufficient equal-length samples from imbalanced, variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
Abstract
Purpose
This paper proposes a multi-facet sentiment analysis system.
Design/methodology/approach
This paper uses multidomain resources to build a sentiment analysis system. Manual lexicon-based features extracted from these resources are fed into a machine learning classifier so that their performance can be compared. The manual lexicon is then replaced with a custom bag of words (BOW) to avoid its time-consuming construction. To make the system run faster and the model interpretable, features are reduced using existing and custom approaches such as term occurrence, information gain, principal component analysis, semantic clustering and POS-tagging filters.
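A minimal sketch of the BOW-plus-filtering idea, using scikit-learn with mutual information as a stand-in for information gain; the corpus, vocabulary size and classifier are illustrative assumptions, not the paper's resources:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy multidomain-style corpus (hypothetical examples)
texts = ["great phone battery", "terrible battery life", "great movie plot",
         "boring movie plot", "great service", "terrible service"]
labels = [1, 0, 1, 0, 1, 0]

# a custom BOW replaces the hand-built lexicon; an information-gain-style
# filter (mutual information here) prunes the vocabulary before the classifier
pipe = make_pipeline(
    CountVectorizer(),
    SelectKBest(mutual_info_classif, k=4),
    LinearSVC(),
)
pipe.fit(texts, labels)
preds = pipe.predict(["great battery", "terrible plot"])
```

Shrinking the feature space this way is what lets the system run faster while keeping the retained terms inspectable, which is the interpretability argument the abstract makes.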
Findings
The proposed system, featuring automated lexicon extraction and feature-set size optimisation, proved its efficiency when applied to multidomain and benchmark datasets, reaching 93.59% accuracy, which makes it competitive with state-of-the-art systems.
Originality/value
The construction of a custom BOW, and the optimisation of features based on existing and custom feature selection and clustering approaches.
Andreas Kiky, Apriani Dorkas Rambu Atahau, Linda Ariany Mahastanti and Supatmi Supatmi
Abstract
Purpose
This paper aims to explore the development of investment decision tools by understanding the rationality behind the disposition effect. We suspect that not all disposition decisions are irrational. The decisions should be evaluated based on the bounded rationality of the individuals’ target and tolerance level, which is not covered in previous literature. Adding the context of individual preference (target and tolerance) in their decision could improve the classic measurement of disposition effect.
Design/methodology/approach
A laboratory web experiment was prepared to collect responses on holding and selling stocks within 14 days. Two groups of Gen Z investors are observed. The control group makes decisions based on their own judgment without any system recommendation; in contrast, the second group gets help inputting their target and tolerance. Furthermore, a framing effect is applied as a reminder of their target and tolerance, to induce more holding decisions on gains but selling on losses.
Findings
The framing effect is adequate to mitigate the disposition effect, but only in the early days of observation. Bounded rationality explains the rationality of liquidating gains because the participants have reached their goal. The framing effect is not moderated by days to affect the disposition effect; over time, the disposition effect tends to be higher. A new measurement of the disposition effect in the context of bounded rationality is better than the original disposition effect coefficient.
Practical implications
Gen Z investors need a system aid to help their investment decisions set their target and tolerance to mitigate the disposition effect. Investment firms can make a premium feature based on real-time market data for investors to manage their assets rationally in the long run. Bounded rationality theory offers more flexibility in understanding the gap between profit maximization and irrational decisions in behavioral finance. The government can use this finding to develop a suitable policy and ecosystem to help beginner investors understand investment risk and manage their assets based on subjective risk tolerance.
Originality/value
The classic Proportion Gain Realized (PGR) and Proportion Loss Realized (PLR) measurements cannot accommodate several contexts of users’ targets and tolerance in their choices, which we argue need to be re-evaluated with bounded rationality. Therefore, this article proposed new measurements that account for the users’ target and tolerance level to evaluate the rationality of their decision.
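The classic coefficients that the article re-evaluates can be written down directly (Odean's PGR/PLR). The bounded-rationality adjustment itself is not specified in the abstract, so only the classic measures are sketched here, with hypothetical trade counts:

```python
def disposition_coefficients(realized_gains, paper_gains,
                             realized_losses, paper_losses):
    """Classic disposition-effect measures (Odean, 1998).

    PGR = realized gains  / (realized gains  + paper gains)
    PLR = realized losses / (realized losses + paper losses)
    A positive PGR - PLR indicates a disposition effect: investors
    realize gains more readily than losses.
    """
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr, plr, pgr - plr

# hypothetical counts from a trading log
pgr, plr, de = disposition_coefficients(30, 20, 10, 40)  # PGR 0.6, PLR 0.2
```

The article's critique is that these ratios ignore each investor's target and tolerance, so a sale of a gain that has hit a stated target is counted as "disposition" even when it is rational under bounded rationality.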
Nomanyano Primrose Mnyaka-Rulwa and Joseph Olorunfemi Akande
Abstract
Purpose
Agency theory motivated this study, positing that leverage mitigates the agency problem. The aim was to examine whether leverage influences the relationship between executive-employee pay gaps (EEPGs) and firm performance. The study was conducted in the mining and retail sectors between 2012 and 2021.
Design/methodology/approach
Two EEPGs were constructed, based on executive fixed pay and on accumulated variable incentives. Proxies of firm performance were headline earnings per share; return on assets; earnings before interest, tax, depreciation and amortisation; and return on stock price. Data were collected from 76 JSE-listed firms in the retail and mining sectors and analysed using the two-step generalised method of moments.
Findings
The results revealed the hybrid implication of the pay gap for firm performance in the retail and mining sectors of South Africa, depending on the performance measures emphasised. More importantly, the study shows that with the moderating effects of leverage, firms can improve their performance while shrinking the pay gap.
Practical implications
The results have implications for policy addressing income inequality, debt management, executive compensation and regulatory reforms in South Africa concerning productivity and remuneration decisions.
Originality/value
The article provides specific literature for retail and mining industries on pay gaps, shows that it is possible to reduce the pay gap without compromising performance and suggests a new measure of performance that is more attuned to pay gap effect measurement.
Manju Priya Arthanarisamy Ramaswamy and Suja Palaniswamy
Abstract
Purpose
The aim of this study is to investigate the subject-independent emotion recognition capabilities of EEG and peripheral physiological signals, namely electrooculogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. This study orders the physiological signals based on the prediction accuracy obtained on test data using time- and frequency-domain features.
Design/methodology/approach
The DEAP dataset is used in this experiment. Time- and frequency-domain features of EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers, namely naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, LogitBoost and stacking, are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.
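A dependency-light sketch of this pipeline on synthetic data. The simple absolute-correlation filter below is a stand-in for full correlation-based feature selection (which also penalises inter-feature correlation), and only two of the listed base classifiers are stacked:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for time/frequency-domain physiological features
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

# correlation-based filter: keep the features most correlated with the label
corr = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(X.shape[1])])
keep = np.argsort(corr)[-8:]
X_sel = X[:, keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
stack = StackingClassifier(
    estimators=[("nb", GaussianNB()), ("lda", LinearDiscriminantAnalysis())],
    final_estimator=LogisticRegression(),
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

Repeating this fit per modality and per emotion dimension, then ranking modalities by test accuracy, mirrors the ordering procedure the abstract describes.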
Findings
The experimental results with EEG as one modality and all physiological signals as another indicate that EEG signals are better at arousal prediction than physiological signals by 7.18%, while physiological signals are better at valence prediction than EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75%, at the cost of a higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG), while arousal can be measured from changes in blood volume (plethysmograph). The sorted order of physiological signals based on arousal prediction accuracy is plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration, EDA; based on valence prediction accuracy, the sorted order is EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.
Originality/value
Many of the emotion recognition studies in the literature are subject dependent, and the limited subject-independent studies report the average leave-one-subject-out (LOSO) validation result as accuracy. The work reported in this paper sets the baseline for subject-independent emotion recognition using the DEAP dataset by clearly specifying the subjects used in the training and test sets. In addition, this work specifies the cut-off score used to classify the scale as low or high in the arousal and valence dimensions. Generally, statistical features are used for emotion recognition with physiological signals as a modality, whereas in this work, time- and frequency-domain features of physiological signals and EEG are used. This paper concludes that valence can be identified from EOG, while arousal can be predicted from the plethysmograph.
Shiqing Wu, Jiahai Wang, Haibin Jiang and Weiye Xue
Abstract
Purpose
The purpose of this study is to explore a new assembly process planning and execution mode to realize rapid response, reduce the labor intensity of assembly workers and improve the assembly efficiency and quality.
Design/methodology/approach
Based on the related concepts of the digital twin, this paper studies product assembly planning in digital space, process execution in physical space and the interaction between the two spaces. The assembly process planning is simulated and verified in the digital space to generate three-dimensional visual assembly process specification documents; the implementation of these documents in the physical space is monitored and fed back to revise the assembly process and improve assembly quality.
Findings
Digital twin technology enhances the quality and efficiency of the assembly process planning and execution system.
Originality/value
It provides a new perspective on assembly process planning and execution: the architecture, connections and data acquisition approaches of the digital twin-driven framework are proposed in this paper, which is of important theoretical value. Moreover, a smart assembly workbench is developed and the specific image classification algorithms are presented in detail, which is of industrial application value.
Rizwan Ali, Jin Xu, Mushahid Hussain Baig, Hafiz Saif Ur Rehman, Muhammad Waqas Aslam and Kaleem Ullah Qasim
Abstract
Purpose
This study aims to decode artificial intelligence (AI)-based tokens' complex dynamics and predictability using a comprehensive multivariate framework that integrates technical and macroeconomic indicators.
Design/methodology/approach
Using advanced machine learning techniques, such as gradient boosting regression (GBR), random forest (RF) and, notably, long short-term memory (LSTM) networks, this research provides a nuanced understanding of the factors driving the performance of AI tokens. The study's comparative analysis highlights the superior predictive capabilities of LSTM models, as evidenced by their performance across various AI digital tokens such as AGIX (SingularityNET), Cortex and Numeraire (NMR).
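As an illustrative sketch of the supervised setup, the two tree ensembles named in the study are fitted to lagged prices of a synthetic series; the LSTM is omitted to keep the example dependency-light, and the series, lag count and split point are all assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# synthetic token price series (a stand-in for AI-token data such as AGIX)
price = np.cumsum(rng.normal(0, 1, 500)) + 100

def lagged(series, n_lags=5):
    """Build a supervised matrix mapping n_lags past prices to the next price."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lagged(price)
split = 400  # chronological split: train on the past, test on the future
models = {"GBR": GradientBoostingRegressor(random_state=0),
          "RF": RandomForestRegressor(random_state=0)}
mae = {name: mean_absolute_error(y[split:],
                                 m.fit(X[:split], y[:split]).predict(X[split:]))
       for name, m in models.items()}
```

In the study's multivariate framework, the lag columns would be augmented with the technical and macroeconomic indicators mentioned above, and the same chronological evaluation would compare GBR, RF and LSTM.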
Findings
Through an intricate exploration of feature importance and the impact of speculative behaviour, the research elucidates the long-term patterns and resilience of AI-based tokens against economic shifts. The SHapley Additive exPlanations (SHAP) analysis shows that technical and some macroeconomic factors play a dominant role in price prediction. The study also examines the potential of these models for strategic investment and hedging, underscoring their relevance in an increasingly digital economy.
Originality/value
To the authors' knowledge, no research framework exists for forecasting and modelling the current leading AI tokens. Because the relationship between the AI token market and other factors is little understood, forecasting is exceptionally demanding. This study provides a robust predictive framework to accurately identify the changing trends of AI tokens within a multivariate context and fills the gaps in existing research. Detailed predictive analytics, supported by modern AI algorithms and careful model interpretation, can elaborate the behaviour patterns of emerging decentralised AI-based token prices.
Xiaobo Tang, Heshen Zhou and Shixuan Li
Abstract
Purpose
Predicting highly cited papers can enable an evaluation of the potential of papers and the early detection and determination of academic achievement value. However, most highly cited paper prediction studies rely on early citation information, so predicting highly cited papers at the time of publication is challenging. Therefore, the authors propose a method for predicting early highly cited papers based on the papers' own features.
Design/methodology/approach
This research analyzed academic papers published in the Journal of the Association for Computing Machinery (ACM) from 2000 to 2013. Five types of features were extracted: paper features, journal features, author features, reference features and semantic features. Subsequently, the authors applied a deep neural network (DNN), support vector machine (SVM), decision tree (DT) and logistic regression (LGR) to predict highly cited papers 1–3 years after publication.
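As a sketch of the classification setup (the LGR branch only), on synthetic publication-time features; the feature distributions and the label rule, which echoes the reference-quality finding below, are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# hypothetical per-paper features available at publication time
X = np.column_stack([
    rng.poisson(10, n),        # author h-index
    rng.normal(2.0, 0.8, n),   # journal impact factor
    rng.poisson(30, n),        # number of references
    rng.uniform(0, 1, n),      # share of high-quality-journal references
])
# toy label: the top-journal-reference share drives "highly cited"
y = (X[:, 3] + 0.1 * rng.normal(size=n) > 0.6).astype(int)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

The real study fits the same kind of model to the five extracted feature groups and evaluates at 1-, 2- and 3-year horizons; the fitted coefficients are what let the authors attribute predictive power to reference and author features.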
Findings
Experimental results showed that early highly cited academic papers are predictable when they are first published. The authors’ prediction models showed considerable performance. This study further confirmed that the features of references and authors play an important role in predicting early highly cited papers. In addition, the proportion of high-quality journal references has a more significant impact on prediction.
Originality/value
Based on the available information at the time of publication, this study proposed an effective early highly cited paper prediction model. This study facilitates the early discovery and realization of the value of scientific and technological achievements.
Venkatesh Naramula and Kalaivania A.
Abstract
Purpose
This paper aims to focus on extracting aspect terms from mobile phone (iPhone and Samsung) tweets using NLTK techniques, since multi-aspect extraction is one of the key challenges. Machine learning techniques trained with supervised strategies are then used to predict and classify the sentiment present in mobile phone tweets. This paper also presents the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets.
Design/methodology/approach
In aspect-based sentiment analysis, aspect term extraction is one of the key challenges: different aspects must be extracted from online user-generated content. This study focuses on customer tweets/reviews on different mobile products, an important form of opinionated content, by looking at different aspects. Different deep learning techniques are used to extract all aspects from customer tweets, which are collected using the Twitter API.
Findings
A comparison of the results with traditional machine learning methods, such as the random forest algorithm, K-nearest neighbour and support vector machine, is presented on two data sets (iPhone tweets and Samsung tweets) to assess accuracy.
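The three traditional baselines compared here can be sketched on a handful of hypothetical labelled tweets (real data would come from the Twitter API, and the paper's deep aspect extraction step is omitted):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# hypothetical labelled tweets: 1 = positive, 0 = negative
tweets = ["battery life is great", "camera quality is awful",
          "love the display", "screen keeps freezing",
          "fast charging works great", "speaker sound is awful"]
labels = [1, 0, 1, 0, 1, 0]

models = {"RF": RandomForestClassifier(random_state=0),
          "KNN": KNeighborsClassifier(n_neighbors=3),
          "SVM": SVC(kernel="linear")}
# fit each baseline on TF-IDF features and record its training accuracy
scores = {name: make_pipeline(TfidfVectorizer(), m)
                .fit(tweets, labels).score(tweets, labels)
          for name, m in models.items()}
```

On real data the comparison would of course use a held-out test split per data set (iPhone and Samsung) rather than training accuracy.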
Originality/value
In this paper, the authors have focused on extracting aspect terms from mobile phone (iPhone and Samsung) tweets using NLTK techniques, since multi-aspect extraction is one of the key challenges. Machine learning techniques trained with supervised strategies are then used to predict and classify the sentiment present in mobile phone tweets. This paper also presents the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets.
Loris Nanni and Sheryl Brahnam
Abstract
Purpose
Automatic DNA-binding protein (DNA-BP) classification is now an essential proteomic technology. Unfortunately, many systems reported in the literature are tested on only one or two datasets/tasks. The purpose of this study is to create the most optimal and universal system for DNA-BP classification, one that performs competitively across several DNA-BP classification tasks.
Design/methodology/approach
Efficient DNA-BP classifier systems require the discovery of powerful protein representations and feature extraction methods. Experiments were performed that combined and compared descriptors extracted from state-of-the-art matrix/image protein representations. Separate support vector machines (SVMs) were trained on these descriptors and evaluated. Convolutional neural networks with different parameter settings were fine-tuned on two matrix representations of proteins. Decisions were fused with the SVMs using the weighted sum rule and evaluated, to experimentally derive the most powerful general-purpose DNA-BP classifier system.
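The weighted sum rule fusion step can be sketched as follows; the two synthetic "descriptor views" stand in for the paper's matrix/image representations, and the fixed weights are an assumption (in practice they would be tuned on validation data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# two synthetic descriptor views of the same "proteins"
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)
view_a, view_b = X[:, :20], X[:, 20:]

Xa_tr, Xa_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    view_a, view_b, y, random_state=0)

# one SVM per descriptor, each producing per-class probabilities
svm_a = SVC(probability=True, random_state=0).fit(Xa_tr, y_tr)
svm_b = SVC(probability=True, random_state=0).fit(Xb_tr, y_tr)

# weighted sum rule: fuse the probability outputs, then take the argmax class
w_a, w_b = 0.6, 0.4
fused = w_a * svm_a.predict_proba(Xa_te) + w_b * svm_b.predict_proba(Xb_te)
preds = fused.argmax(axis=1)
acc = (preds == y_te).mean()
```

In the paper's ensemble the fused pool also includes the fine-tuned CNNs, but the fusion mechanics are the same: each member votes with a probability vector and the weighted sum decides.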
Findings
The best ensemble proposed here produced comparable, if not superior, classification results on a broad and fair comparison with the literature across four different datasets representing a variety of DNA-BP classification tasks, thereby demonstrating both the power and generalizability of the proposed system.
Originality/value
Most DNA-BP methods proposed in the literature are validated on only one (rarely two) datasets/tasks. In this work, the authors report the performance of their general-purpose DNA-BP system on four datasets representing different DNA-BP classification tasks. The excellent results of the proposed best classifier system demonstrate the power of the approach, and these results can now be used as baseline comparisons by other researchers in the field.