Search results
1 – 10 of 481
Fung Yuen Chin, Kong Hoong Lem and Khye Mun Wong
Abstract
Purpose
The number of features in handwritten digit data is often very large, owing to the many aspects of personal handwriting, leading to high-dimensional data. The employment of a feature selection algorithm therefore becomes crucial for successful classification modeling, because the inclusion of irrelevant or redundant features can mislead the modeling algorithm, resulting in overfitting and a decrease in efficiency.
Design/methodology/approach
The minimum redundancy and maximum relevance (mRMR) and recursive feature elimination (RFE) algorithms are two frequently used feature selection methods. While mRMR can identify a subset of features that are highly relevant to the targeted classification variable, it carries the weakness of capturing redundant features along the way. RFE, on the other hand, can effectively eliminate the less important features and exclude redundant ones, but it is limited by the fact that the features it selects are not ranked by importance.
Findings
The hybrid method was exemplified in binary classification between digits “4” and “9” and between digits “6” and “8” from the Multiple Features dataset. The results showed that the hybrid mRMR + support vector machine recursive feature elimination (SVMRFE) performs better than both the sole support vector machine (SVM) and mRMR alone.
Originality/value
In view of the respective strengths and deficiencies of mRMR and RFE, this study combined the two methods, using an SVM as the underlying classifier and anticipating that mRMR would make an excellent complement to SVMRFE.
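The two-stage pipeline the abstract describes (filter by relevance, then recursively eliminate with an SVM) can be sketched in scikit-learn. This is an illustrative approximation, not the authors' implementation: scikit-learn has no built-in mRMR, so relevance is approximated here by mutual information alone (the redundancy term is omitted), and the feature counts are arbitrary choices. The digits "4" vs "9" task mirrors the one named in the abstract.

```python
# Hedged sketch: mRMR-style relevance filter followed by SVM-RFE.
# NOTE: true mRMR also penalizes inter-feature redundancy; this sketch
# uses mutual information only, which captures just the relevance half.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
mask = np.isin(y, [4, 9])                 # binary task "4" vs "9"
X, y = X[mask], (y[mask] == 9).astype(int)

# Stage 1 (relevance filter): keep the 30 highest-scoring features.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[-30:]

# Stage 2 (SVM-RFE): recursively eliminate down to 10 features.
rfe = RFE(LinearSVC(dual=False), n_features_to_select=10).fit(X[:, top], y)
selected = top[rfe.support_]

score = cross_val_score(LinearSVC(dual=False), X[:, selected], y, cv=5).mean()
print(len(selected), round(score, 3))
```

The filter stage cheaply discards clearly irrelevant pixels so that the more expensive wrapper stage (RFE, which refits the SVM repeatedly) runs on a smaller candidate set.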
Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye
Abstract
Purpose
Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Over the traditional methods, deep learning approaches have gained popularity in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach that dynamically refines and amplifies model features to further elevate diagnostic capability. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.
Design/methodology/approach
To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
Findings
The ResNet50-CBAM model outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared with the existing classification methods on the same dataset.
Practical implications
Since ResNet-CBAM fusion can capture the spatial context while enhancing feature representation, it can be integrated into the brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.
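The CBAM mechanism the abstract builds on applies channel attention and then spatial attention to a feature map. The following toy NumPy sketch shows only the data flow: the random matrices stand in for trained MLP weights, the spatial branch's learned 7×7 convolution is replaced by a simple average of the pooled maps, and the tensor sizes are made up.

```python
# Toy sketch of CBAM-style attention (channel first, then spatial).
# Random weights stand in for learned parameters; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

C, H, W = 8, 4, 4                       # channels, height, width
x = rng.standard_normal((C, H, W))      # a feature map from a CNN backbone

# Channel attention: pool over space, pass through a shared bottleneck MLP.
w1 = rng.standard_normal((C // 2, C))   # stand-in for trained MLP weights
w2 = rng.standard_normal((C, C // 2))
avg_c, max_c = x.mean(axis=(1, 2)), x.max(axis=(1, 2))
ch_att = sigmoid(w2 @ np.maximum(w1 @ avg_c, 0)
                 + w2 @ np.maximum(w1 @ max_c, 0))
x = x * ch_att[:, None, None]           # reweight channels

# Spatial attention: pool over channels, combine avg/max maps.
avg_s, max_s = x.mean(axis=0), x.max(axis=0)
sp_att = sigmoid(0.5 * (avg_s + max_s))  # stand-in for the learned 7x7 conv
x = x * sp_att[None, :, :]              # reweight spatial locations

print(x.shape, ch_att.shape, sp_att.shape)
```

Each attention map lies in (0, 1), so the module can only rescale, never flip, the backbone's activations, which is what lets it emphasize informative channels and locations while suppressing noise.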
Originality/value
This research has not been published anywhere else.
Abdelhadi Ifleh and Mounime El Kabbouri
Abstract
Purpose
The prediction of stock market (SM) indices is a fascinating task. An in-depth analysis in this field can provide valuable information to investors, traders and policy makers in attractive SMs. This article aims to apply a correlation feature selection model to identify important technical indicators (TIs), which are combined with multiple deep learning (DL) algorithms for forecasting SM indices.
Design/methodology/approach
The methodology involves using a correlation feature selection model to select the most relevant features. These features are then used to predict the fluctuations of six markets using various DL algorithms, and the results are compared with predictions made using all features by using a range of performance measures.
Findings
The experimental results show that the combination of TIs selected through correlation and Artificial Neural Network (ANN) provides good results in the MADEX market. The combination of selected indicators and Convolutional Neural Network (CNN) in the NASDAQ 100 market outperforms all other combinations of variables and models. In other markets, the combination of all variables with ANN provides the best results.
Originality/value
This article makes several significant contributions, including the use of a correlation feature selection model to select pertinent variables, comparison between multiple DL algorithms (ANN, CNN and Long-Short-Term Memory (LSTM)), combining selected variables with algorithms to improve predictions, evaluation of the suggested model on six datasets (MASI, MADEX, FTSE 100, SP500, NASDAQ 100 and EGX 30) and application of various performance measures (Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error(RMSE), Mean Squared Logarithmic Error (MSLE) and Root Mean Squared Logarithmic Error (RMSLE)).
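The correlation feature-selection step described above can be sketched as ranking candidate technical indicators by the absolute Pearson correlation with the target series and keeping the strongest ones. The data, the indicator columns and the 0.5 cutoff below are all fabricated for illustration; the article does not specify its threshold.

```python
# Hedged sketch of correlation-based selection of technical indicators.
# All data and the 0.5 cutoff are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200
target = np.cumsum(rng.standard_normal(n))       # stand-in index series

# Fabricated indicator matrix: two informative columns, three noise columns.
indicators = np.column_stack([
    target + 0.1 * rng.standard_normal(n),       # informative
    -target + 0.1 * rng.standard_normal(n),      # informative (inverse)
    rng.standard_normal(n),                       # noise
    rng.standard_normal(n),
    rng.standard_normal(n),
])

corr = np.array([abs(np.corrcoef(indicators[:, j], target)[0, 1])
                 for j in range(indicators.shape[1])])
selected = np.where(corr > 0.5)[0]               # assumed cutoff
print(selected, np.round(corr, 2))
```

Only the surviving columns would then be fed to the ANN, CNN or LSTM models, which is how the study reduces the input dimension before forecasting.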
Khalid Iqbal and Muhammad Shehrayar Khan
Abstract
Purpose
In this digital era, email is the most pervasive form of communication between people. Many users become a victim of spam emails and their data have been exposed.
Design/methodology/approach
Researchers have contributed to solving this problem by focusing on advanced machine learning algorithms and improved models for detecting spam emails, but a gap remains on the feature side, since features also play an important role in achieving good results. To evaluate the performance of the applied classifiers, 10-fold cross-validation is used.
Findings
The results confirm that spam emails are correctly classified, with an accuracy of 98.00% for the support vector machine and 98.06% for the artificial neural network, compared with the other applied machine learning classifiers.
Originality/value
In this paper, Point-Biserial correlation is applied to each feature concerning the class label of the University of California Irvine (UCI) spambase email dataset to select the best features. Extensive experiments are conducted on selected features by training the different classifiers.
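The ranking step above pairs each continuous feature with the binary spam/ham label via the point-biserial correlation (available as `scipy.stats.pointbiserialr`). The sketch below uses fabricated data standing in for the spambase features; only the mechanism is illustrated.

```python
# Hedged sketch: rank features by |point-biserial correlation| with a
# binary class label. Data are fabricated stand-ins for spambase features.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=300)                 # 0 = ham, 1 = spam
features = np.column_stack([
    y + 0.3 * rng.standard_normal(300),          # strongly class-linked
    rng.standard_normal(300),                     # uninformative
])

scores = [abs(pointbiserialr(features[:, j], y)[0])
          for j in range(features.shape[1])]
best = int(np.argmax(scores))
print(best, [round(s, 2) for s in scores])
```

Because the point-biserial coefficient is mathematically a Pearson correlation with one variable binary, features whose means differ sharply between the two classes score highest and survive the selection.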
Ema Utami, Irwan Oyong, Suwanto Raharjo, Anggit Dwi Hartanto and Sumarni Adi
Abstract
Purpose
Gathering knowledge regarding personality traits has long been the interest of academics and researchers in the fields of psychology and in computer science. Analyzing profile data from personal social media accounts reduces data collection time, as this method does not require users to fill any questionnaires. A pure natural language processing (NLP) approach can give decent results, and its reliability can be improved by combining it with machine learning (as shown by previous studies).
Design/methodology/approach
In this approach, cleaning the dataset and extracting relevant potential features (as assessed by psychological experts) are essential, as Indonesians tend to mix formal words, non-formal words, slang and abbreviations when writing social media posts. For this article, raw data were derived from a predefined dominance, influence, stability and conscientious (DISC) quiz website, returning 316,967 tweets from 1,244 Twitter accounts (filtered to include only personal, Indonesian-language accounts). Using a combination of NLP techniques and machine learning, the authors aim to develop a better approach and a more robust model, especially for the Indonesian language.
Findings
The authors find that employing a SMOTETomek re-sampling technique and hyperparameter tuning boosts the model’s performance on formalized datasets by 57% (as measured through the F1-score).
Originality/value
Cleaning the dataset and extracting relevant potential features assessed by psychological experts are essential because Indonesian people tend to mix formal words, non-formal words, slang and abbreviations when writing tweets. Organic data derived from a predefined DISC quiz website yielded 1,244 Twitter accounts and 316,967 tweets.
Abstract
Purpose
This paper proposes a multi-facet sentiment analysis system.
Design/methodology/approach
This paper uses multidomain resources to build a sentiment analysis system. Manual lexicon-based features extracted from the resources are fed into a machine learning classifier, and their performance is compared afterward. The manual lexicon is then replaced with a custom bag-of-words (BOW) to avoid its time-consuming construction. To help the system run faster and to make the model interpretable, feature reduction is performed using different existing and custom approaches such as term occurrence, information gain, principal component analysis, semantic clustering and POS-tagging filters.
Findings
The proposed system, featuring automated lexicon extraction and feature-set size optimization, proved its efficiency when applied to multidomain and benchmark datasets, reaching 93.59% accuracy, which makes it competitive with state-of-the-art systems.
Originality/value
The construction of a custom BOW, and the optimization of features based on existing and custom feature selection and clustering approaches.
Abstract
Purpose
Competency frameworks can support public procurement capacity development and performance. However, literature on connecting professionalisation with national procurement contexts is limited. This paper aims to explain and conceptualise recent Romanian experience with developing bespoke competency frameworks at national level for public procurement that reflect the features of the Romanian public procurement system. The approach used could guide in broad-brush, mutatis mutandis, other (national) public procurement systems with comparable features, mainly those seeking a shift from a rather administrative function of public procurement towards a strategic function.
Design/methodology/approach
This case study reflects on the methodology used for analysing the Romanian public procurement environment in EU context to develop bespoke professionalisation instruments, and on ways to integrate competency management approaches in Romanian public procurement culture. That methodological mix has been mainly qualitative and constructionist, within an applied research approach. It combined desk research with empirical research and included legal research in this context.
Findings
A principled, methodological and pragmatic approach tailored to the procurement environment in question is essential for developing competency frameworks capable of resonating with and addressing the specific practical needs of that procurement system.
Social implications
Competency frameworks can uphold societal objectives through public procurement.
Originality/value
Using valuable insights into the development of the Romanian public procurement competency frameworks, the paper provides a conceptual framework for instilling competency management approaches to public procurement professional development where the latter is governed by a rather distinct, public administration, paradigm. This conceptual framework can guide other public procurement systems and stimulate further research.
Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin
Abstract
Purpose
Surface roughness has a serious impact on the fatigue strength, wear resistance and life of mechanical products. Realizing the evolution of surface quality through theoretical modeling takes a lot of effort. To predict the surface roughness of milling processing, this paper aims to construct a neural network based on deep learning and data augmentation.
Design/methodology/approach
This study proposes a method consisting of three steps. Firstly, the machine tool multisource data acquisition platform is established, which combines sensor monitoring with machine tool communication to collect processing signals. Secondly, the feature parameters are extracted to reduce the interference and improve the model generalization ability. Thirdly, for different expectations, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.
Findings
The adaptive synthetic sampling (ADASYN) algorithm can improve the classification prediction accuracy of DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy was significantly improved. For the classification model, the prediction accuracy is improved by 5.77% based on ADASYN optimization. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or MaxAE, and the error is reduced by more than 40% compared to the original model.
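The two regression objectives named above differ in what they penalize: RMSE averages the squared errors, while MaxAE targets the single worst prediction. A minimal NumPy comparison on fabricated roughness values (not the study's data) makes the contrast concrete.

```python
# The two candidate objective functions, on fabricated roughness values.
import numpy as np

actual = np.array([0.8, 1.2, 1.6, 2.1])     # made-up Ra measurements
pred   = np.array([0.9, 1.1, 1.5, 2.6])     # made-up model outputs

rmse  = float(np.sqrt(np.mean((actual - pred) ** 2)))
maxae = float(np.max(np.abs(actual - pred)))
print(round(rmse, 3), round(maxae, 3))      # 0.265 0.5
```

Here three small errors and one large one give a modest RMSE but a MaxAE five times larger, so choosing MaxAE as the Tent-SSA objective would push the optimizer toward bounding the worst case rather than the average.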
Originality/value
A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, with the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve the optimization performance.
Michael Rothgang and Bernhard Lageman
Abstract
Purpose
This study, a conceptual paper, aims to answer the question of how significant cluster ambidexterity is for the resilience of individual clusters.
Design/methodology/approach
The authors draw up an abductive synopsis of empirical information and relevant theoretical sources. A case study is used to illustrate some of the findings.
Findings
The results of the analysis show that the ambidexterity of a cluster can contribute to its resilience when adverse external developments arise. Ambidexterity proves to be simultaneously a common strategy of key cluster actors and a mechanism for coping with critical situations and developments that can be activated by the cluster actors and may – eventually – lead to cluster resilience. While ambidexterity does not guarantee cluster survival, it can contribute significantly to a cluster's economic resilience under adverse conditions.
Research limitations/implications
The concept is developed on a limited empirical basis and would need to be tested and deepened by comparing a wide range of case studies from different clusters.
Practical implications
A better understanding of the importance of ambidexterity for the development of industrial clusters contributes to a better fine-tuning of cluster support policies.
Originality/value
Ambidexterity, as a concept originating in business administration, has so far been only rudimentarily tapped for empirical and theoretical cluster research. The paper identifies and develops a path by which this could be accomplished to a greater extent in the future.
Bahareh Farhoudinia, Selcen Ozturkcan and Nihat Kasap
Abstract
Purpose
This paper aims to conduct an interdisciplinary systematic literature review (SLR) of fake news research and to advance the socio-technical understanding of digital information practices and platforms in business and management studies.
Design/methodology/approach
The paper applies a focused, SLR method to analyze articles on fake news in business and management journals from 2010 to 2020.
Findings
The paper analyzes the definition, theoretical frameworks, methods and research gaps of fake news in the business and management domains. It also identifies some promising research opportunities for future scholars.
Practical implications
The paper offers practical implications for various stakeholders who are affected by or involved in fake news dissemination, such as brands, consumers and policymakers. It provides recommendations to cope with the challenges and risks of fake news.
Social implications
The paper discusses the social consequences and future threats of fake news, especially in relation to social networking and social media. It calls for more awareness and responsibility from online communities to prevent and combat fake news.
Originality/value
The paper contributes to the literature on information management by showing the importance and consequences of fake news sharing for societies. It is among the frontier systematic reviews in the field that covers studies from different disciplines and focuses on business and management studies.