Search results

1 – 10 of 115
Open Access
Article
Publication date: 18 March 2022

Loris Nanni, Alessandra Lumini and Sheryl Brahnam

Abstract

Purpose

Automatic anatomical therapeutic chemical (ATC) classification is progressing at a rapid pace because of its potential in drug development. Predicting an unknown compound's therapeutic and chemical characteristics in terms of how it affects multiple organs and physiological systems makes automatic ATC classification a vital yet challenging multilabel problem. The aim of this paper is to experimentally derive an ensemble of different feature descriptors and classifiers for ATC classification that outperforms the state-of-the-art.

Design/methodology/approach

The proposed method is an ensemble generated by the fusion of neural networks (i.e. a tabular model and long short-term memory networks (LSTM)) and multilabel classifiers based on multiple linear regression (hMuLab). All classifiers are trained on three sets of descriptors. Features extracted from the trained LSTMs are also fed into hMuLab. Evaluations of ensembles are compared on a benchmark data set of 3883 ATC-coded pharmaceuticals taken from KEGG, a publicly available drug databank.
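
As an illustration of the kind of fusion described above (the authors' released code is in MATLAB; the sketch below is not it), the following minimal Python sketch trains an LSTM on sequence-style descriptors, reuses its hidden-state activations as features for a linear multilabel regressor standing in for hMuLab, and averages the two score vectors. The array shapes, the ridge-regression stand-in and the averaging fusion rule are assumptions.

    # Illustrative sketch only: fuse an LSTM classifier with a linear multilabel
    # regressor (a stand-in for hMuLab) by averaging their per-label score vectors.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.multioutput import MultiOutputRegressor
    from tensorflow.keras import layers, models

    n_samples, seq_len, n_feat, n_labels = 512, 50, 8, 14              # assumed sizes
    X_seq = np.random.rand(n_samples, seq_len, n_feat).astype("float32")
    Y = (np.random.rand(n_samples, n_labels) > 0.8).astype("float32")  # multilabel targets

    # LSTM branch: sigmoid outputs give per-label scores.
    inp = layers.Input(shape=(seq_len, n_feat))
    hidden = layers.LSTM(64, name="lstm_features")(inp)
    out = layers.Dense(n_labels, activation="sigmoid")(hidden)
    lstm = models.Model(inp, out)
    lstm.compile(optimizer="adam", loss="binary_crossentropy")
    lstm.fit(X_seq, Y, epochs=3, batch_size=32, verbose=0)

    # Reuse the trained LSTM's hidden state as features for a linear multilabel model.
    feat_extractor = models.Model(inp, hidden)
    X_lstm = feat_extractor.predict(X_seq, verbose=0)
    linear_multilabel = MultiOutputRegressor(Ridge()).fit(X_lstm, Y)

    # Fusion by averaging the two score vectors, then thresholding per label.
    scores = (lstm.predict(X_seq, verbose=0) + linear_multilabel.predict(X_lstm)) / 2
    pred_labels = (scores >= 0.5).astype(int)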

Findings

Experiments demonstrate the power of the authors’ best ensemble, EnsATC, which is shown to outperform the best methods reported in the literature, including the state-of-the-art developed by the fast.ai research group. The MATLAB source code of the authors’ system is freely available to the public at https://github.com/LorisNanni/Neural-networks-for-anatomical-therapeutic-chemical-ATC-classification.

Originality/value

This study demonstrates the power of extracting LSTM features and combining them with ATC descriptors in ensembles for ATC classification.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 21 February 2024

Aysu Coşkun and Sándor Bilicz

Abstract

Purpose

This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a robust classification method by considering an incident angle with minor random fluctuations and using a physical optics simulation to generate data sets.

Design/methodology/approach

The approach involves several supervised machine learning and classification methods, including traditional algorithms and a deep neural network classifier. It uses histogram-based definitions of the RCS for feature extraction, with an emphasis on resilience against noise in the RCS data. Data enrichment techniques are incorporated, including the use of noise-impacted histogram data sets.
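
To make the feature-extraction step concrete, the following minimal Python sketch builds normalised histograms from simulated RCS samples and compares a k-nearest-neighbour classifier with a small neural network; the bin count, dB range, noise level and scikit-learn models are assumptions rather than the authors' exact setup.

    # Illustrative sketch: histogram features from simulated RCS values,
    # classified with k-NN and a small neural network.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_targets, n_obs, n_bins = 3, 200, 32                     # assumed sizes

    def rcs_histogram(rcs_samples, bins=n_bins):
        """Normalised histogram of RCS values over a fixed range (the feature vector)."""
        hist, _ = np.histogram(rcs_samples, bins=bins, range=(-30.0, 10.0), density=True)
        return hist

    # Simulate RCS (dBsm) per target class with small random fluctuations and added noise.
    X, y = [], []
    for cls in range(n_targets):
        for _ in range(n_obs):
            samples = rng.normal(loc=-10.0 + 5.0 * cls, scale=3.0, size=500)
            samples += rng.normal(scale=0.5, size=samples.shape)   # noise enrichment
            X.append(rcs_histogram(samples))
            y.append(cls)
    X, y = np.array(X), np.array(y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, y_tr)
    print("k-NN accuracy:", knn.score(X_te, y_te))
    print("neural network accuracy:", net.score(X_te, y_te))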

Findings

The classification algorithms are extensively evaluated, highlighting their efficacy in feature extraction from RCS histograms. Among the studied algorithms, the K-nearest neighbour is found to be the most accurate of the traditional methods, but it is surpassed in accuracy by a deep learning network classifier. The results demonstrate the robustness of the feature extraction from the RCS histograms, motivated by mm-wave radar applications.

Originality/value

This study presents a novel approach to target classification that extends beyond traditional methods by integrating deep neural networks and focusing on histogram-based methodologies. It also incorporates data enrichment techniques to enhance the analysis, providing a comprehensive perspective for target detection using RCS.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 17 October 2023

Abdelhadi Ifleh and Mounime El Kabbouri

Abstract

Purpose

The prediction of stock market (SM) indices is a fascinating task. An in-depth analysis in this field can provide valuable information to investors, traders and policy makers in attractive SMs. This article aims to apply a correlation feature selection model to identify important technical indicators (TIs), which are combined with multiple deep learning (DL) algorithms for forecasting SM indices.

Design/methodology/approach

The methodology involves using a correlation feature selection model to select the most relevant features. These features are then used to predict the fluctuations of six markets using various DL algorithms, and the results are compared with predictions made using all features by using a range of performance measures.
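
A minimal Python sketch of the correlation-based selection step followed by a neural forecaster is shown below; the synthetic indicators, the 0.2 correlation threshold and the multilayer-perceptron stand-in for the DL models are assumptions.

    # Illustrative sketch: keep only the technical indicators most correlated with the
    # target, then fit a small neural network on the selected features.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({f"TI_{i}": rng.normal(size=n) for i in range(20)})   # synthetic indicators
    df["target"] = 0.6 * df["TI_0"] - 0.4 * df["TI_3"] + rng.normal(scale=0.1, size=n)

    # Correlation feature selection: keep indicators whose |corr| with the target exceeds a threshold.
    corr = df.corr()["target"].drop("target").abs()
    selected = corr[corr > 0.2].index.tolist()
    print("selected indicators:", selected)

    X_tr, X_te, y_tr, y_te = train_test_split(df[selected], df["target"],
                                              test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("MAE:", np.mean(np.abs(pred - y_te)))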

Findings

The experimental results show that the combination of TIs selected through correlation and Artificial Neural Network (ANN) provides good results in the MADEX market. The combination of selected indicators and Convolutional Neural Network (CNN) in the NASDAQ 100 market outperforms all other combinations of variables and models. In other markets, the combination of all variables with ANN provides the best results.

Originality/value

This article makes several significant contributions, including the use of a correlation feature selection model to select pertinent variables, comparison between multiple DL algorithms (ANN, CNN and Long Short-Term Memory (LSTM)), combining selected variables with algorithms to improve predictions, evaluation of the suggested model on six datasets (MASI, MADEX, FTSE 100, SP500, NASDAQ 100 and EGX 30) and application of various performance measures (Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Squared Logarithmic Error (MSLE) and Root Mean Squared Logarithmic Error (RMSLE)).

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899

Open Access
Article
Publication date: 21 June 2022

Abhishek Das and Mihir Narayan Mohanty

Abstract

Purpose

Timely and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer has the highest incidence among all cancers and ranks fifth in mortality. Among the many image processing techniques, several works have focused on convolutional neural networks (CNNs) for processing these images; however, deep learning models remain to be explored more thoroughly.

Design/methodology/approach

In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract essential features; KPCA is simultaneously helpful for denoising the data. These features are processed by a heterogeneous ensemble model consisting of three base models: a recurrent neural network (RNN), long short-term memory (LSTM) and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision making; nodes are added to its F_2^a layer only when the winning criteria are fulfilled, which makes the ARTMAP model more robust.
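
The following minimal Python sketch illustrates the shape of this pipeline: kernel PCA reduces flattened image features, then RNN, LSTM and GRU base learners are trained on the reduced features and their outputs are combined. A fuzzy ARTMAP layer is not available in common Python libraries, so plain score averaging stands in for the decision stage here, and all sizes are assumptions.

    # Illustrative sketch: KPCA feature reduction feeding an RNN/LSTM/GRU ensemble.
    # The fuzzy ARTMAP decision stage is replaced here by simple score averaging.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from tensorflow.keras import layers, models

    n_samples, n_pixels, n_components, n_classes = 600, 256, 32, 2   # assumed sizes
    X = np.random.rand(n_samples, n_pixels).astype("float32")        # flattened image patches
    y = np.random.randint(0, n_classes, size=n_samples)

    # Kernel PCA for feature extraction (also helps denoise the inputs).
    X_kpca = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)
    X_seq = X_kpca.reshape(n_samples, n_components, 1)               # sequence form for the RNNs

    def base_model(cell):
        m = models.Sequential([
            layers.Input(shape=(n_components, 1)),
            cell(32),
            layers.Dense(n_classes, activation="softmax"),
        ])
        m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        m.fit(X_seq, y, epochs=3, batch_size=32, verbose=0)
        return m

    ensemble = [base_model(c) for c in (layers.SimpleRNN, layers.LSTM, layers.GRU)]
    avg_scores = np.mean([m.predict(X_seq, verbose=0) for m in ensemble], axis=0)
    y_pred = avg_scores.argmax(axis=1)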

Findings

The proposed model is verified using a breast histopathology image dataset publicly available on Kaggle. The model achieves 99.36% training accuracy and 98.72% validation accuracy. It addresses data processing at every stage: image denoising to reduce data redundancy and ensemble training to obtain higher accuracy than single models. The final classification by a fuzzy ARTMAP model, which controls the number of nodes depending on performance, yields robust and accurate classification.

Research limitations/implications

Research in the field of medical applications is an ongoing effort, and more advanced algorithms are continually being developed for better classification. There is still scope to design models with better performance, practicability and cost efficiency, and the ensemble models may be chosen with different combinations and characteristics. Signals, instead of images, may also be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model, but the method still needs to be verified in practical settings, and a practical implementation will be carried out to assess its real-time performance and cost efficiency.

Originality/value

In the proposed model, KPCA is utilized for denoising and for reducing data redundancy in feature selection. Training and classification are performed with a heterogeneous ensemble model that uses RNN, LSTM and GRU as base classifiers to provide better results than single models, and an adaptive fuzzy mapping model makes the final classification accurate. The effectiveness of combining these methods into a single model is analyzed in this work.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 25 September 2023

Gayatri Panda, Manoj Kumar Dash, Ashutosh Samadhiya, Anil Kumar and Eyob Mulat-weldemeskel

Abstract

Purpose

Artificial intelligence (AI) can enhance human resource resiliency (HRR) by providing the insights and resources needed to adapt to unexpected changes and disruptions. Therefore, the present research attempts to develop a framework for future researchers to gain insights into the actions of AI to enable HRR.

Design/methodology/approach

The present study used a systematic literature review, bibliometric analysis and network analysis, followed by content analysis. In doing so, the authors reviewed the literature to explore the present state of research on AI and HRR. A total of 98 articles in the selected field of research, extracted from the Scopus database, were included.

Findings

The authors found that AI or AI-associated techniques help deliver various HRR-oriented outcomes, such as enhancing employee competency, performance management and risk management; enhancing leadership competencies and employee well-being measures; and developing effective compensation and reward management.

Research limitations/implications

The present research has certain implications, such as increasing the HR team's proficiency, addressing the problem of job loss and how to mitigate it, improving working conditions and improving decision-making in HR.

Originality/value

The present research explores the role of AI in HRR following the COVID-19 pandemic, which has not been explored extensively.

Details

International Journal of Industrial Engineering and Operations Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2690-6090

Open Access
Article
Publication date: 23 March 2023

María Belén Prados-Peña, George Pavlidis and Ana García-López

Abstract

Purpose

This study aims to analyze the impact of Artificial Intelligence (AI) and Machine Learning (ML) on heritage conservation and preservation, and to identify relevant future research trends, by applying scientometrics.

Design/methodology/approach

A total of 1,646 articles, published between 1985 and 2021, concerning research on the application of ML and AI in cultural heritage were collected from the Scopus database and analyzed using bibliometric methodologies.

Findings

The findings of this study show that, although there has been a substantial increase in the academic literature on AI and ML, publications that specifically address these techniques in relation to cultural heritage and its conservation and preservation remain significantly limited.

Originality/value

This study enriches the academic outline by highlighting the limited literature in this context and therefore the need to advance the study of AI and ML as key elements that support heritage researchers and practitioners in conservation and preservation work.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-1266

Open Access
Article
Publication date: 26 August 2021

Shruti Garg, Rahul Kumar Patro, Soumyajit Behera, Neha Prerna Tigga and Ranjita Pandey

Abstract

Purpose

The purpose of this study is to propose an alternative efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.

Design/methodology/approach

The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using the Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNNs) for both single and combined features.
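
A minimal Python sketch of the windowing and NBP feature path feeding a small 2D CNN is given below; the window length, band edges, network depth and synthetic EEG records are assumptions and do not reproduce the reported configuration.

    # Illustrative sketch: overlapping sliding windows over variable-length EEG,
    # normalised band power (NBP) features, and a small 2D CNN classifier.
    import numpy as np
    from tensorflow.keras import layers, models

    fs, win, step, n_ch = 128, 256, 128, 14                     # assumed sampling rate / sizes
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]               # assumed EEG bands (Hz)

    def sliding_windows(signal, win=win, step=step):
        """Split a (channels, time) EEG record into overlapping, equal-length windows."""
        return np.stack([signal[:, s:s + win]
                         for s in range(0, signal.shape[1] - win + 1, step)])

    def nbp(window, fs=fs):
        """Normalised band power per channel and band -> (channels, bands) feature map."""
        freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
        psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
        total = psd.sum(axis=-1, keepdims=True)
        return np.stack([psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1) / total[:, 0]
                         for lo, hi in bands], axis=-1)

    # Fake variable-length records -> equal-length windows -> NBP feature maps and labels.
    records = [np.random.randn(n_ch, np.random.randint(2000, 4000)) for _ in range(20)]
    windows = np.concatenate([sliding_windows(r) for r in records])
    X = np.stack([nbp(w) for w in windows])[..., np.newaxis]     # (N, n_ch, n_bands, 1)
    y = np.random.randint(0, 2, size=len(X))                     # e.g. low/high arousal

    cnn = models.Sequential([
        layers.Input(shape=(n_ch, len(bands), 1)),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),
    ])
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    cnn.fit(X, y, epochs=2, batch_size=32, verbose=0)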

Findings

The two-dimensional convolutional neural network (2D CNN) is observed to yield the highest accuracy on the EEG signals of the AMIGOS data set, namely 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, which is at least 6% higher than the other available competitive approaches.

Originality/value

The present work focuses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length, whereas EEG emotion recognition work is widely available on simpler data sets. The challenges of the AMIGOS data set addressed in the present work are: handling tensor-form data; proposing an efficient method for generating sufficient equal-length samples from imbalanced and variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 20 February 2024

Li Chen, Dirk Ifenthaler, Jane Yin-Kim Yau and Wenting Sun

Abstract

Purpose

The study aims to identify the status quo of artificial intelligence in entrepreneurship education with a view to identifying potential research gaps, especially in the adoption of certain intelligent technologies and pedagogical designs applied in this domain.

Design/methodology/approach

A scoping review was conducted using six inclusion and exclusion criteria agreed upon by the author team. The collected studies, which focused on the adoption of AI in entrepreneurship education, were analysed by the team with regard to various aspects, including the definition of intelligent technology, research question, educational purpose, research method, sample size, research quality and publication. The results of this analysis are presented in tables and figures.

Findings

Educators have introduced big data and machine learning algorithms into entrepreneurship education. Big data analytics uses multimodal data to improve the effectiveness of entrepreneurship education and to spot entrepreneurial opportunities, and entrepreneurial analytics analyses entrepreneurial projects at low cost and with high effectiveness. Machine learning relieves educators' burdens and improves the accuracy of assessment. However, AI in entrepreneurship education needs more sophisticated pedagogical designs for diagnosis, prediction, intervention, prevention and recommendation, combined with specific entrepreneurial learning content and procedures and guided by entrepreneurial pedagogy.

Originality/value

This study holds significant implications as it can shift the focus of entrepreneurs and educators towards the educational potential of artificial intelligence, prompting them to consider the ways in which it can be used effectively. By providing valuable insights, the study can stimulate further research and exploration, potentially opening up new avenues for the application of artificial intelligence in entrepreneurship education.

Details

Education + Training, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0040-0912

Open Access
Article
Publication date: 8 December 2023

Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi

Abstract

Purpose

The aim of the proposed model is to predict stock market signals accurately. To this end, the stock market is analysed through the technical analysis of Japanese candlesticks, combined with a support vector machine (SVM) and the following meta-heuristic algorithms: particle swarm optimization (PSO), the imperialist competition algorithm (ICA) and the genetic algorithm (GA).

Design/methodology/approach

Among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors present comparative results to validate the designed model against the basic models of three earlier articles. In the first model, PSO is used as a classification method to search the solution space thoroughly and at high running speed. In the second model, SVM and ICA are examined, where ICA serves to improve the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as the optimizer and feature-selection agent.
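
To illustrate one metaheuristic-plus-SVM coupling of the kind compared above, the following minimal Python sketch uses a hand-rolled PSO loop to tune an SVM's (C, gamma) for toy buy/sell labels; the features, swarm settings and search ranges are assumptions, and the authors' three hybrids differ in how each algorithm is used.

    # Illustrative sketch: a small particle swarm optimisation (PSO) loop tuning an
    # SVM's (C, gamma) for toy buy/sell classification; all settings are assumptions.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 4))                  # e.g. candlestick-derived features
    y = (X[:, 3] - X[:, 0] > 0).astype(int)        # toy "buy" (1) / "sell" (0) labels

    def fitness(params):
        C, gamma = 10.0 ** params                  # search on a log scale
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    # Search space: log10(C) in [-2, 3], log10(gamma) in [-4, 1].
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
    pos = rng.uniform(lo, hi, size=(10, 2))        # 10 particles, 2 dimensions
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(15):                            # a few PSO iterations
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    print("best (C, gamma):", 10.0 ** gbest, "cross-validated accuracy:", pbest_fit.max())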

Findings

The results indicate that the prediction accuracy of all new models is high for only six days; however, from the confusion matrix results it is understood that the SVM-GA and SVM-ICA models correctly predict more sell signals, while the SVM-PSO model correctly predicts more buy signals. Considering the execution of the implemented models, SVM-ICA shows better performance than the other models.

Research limitations/implications

In this study, the long time span of the analysed data (2013–2021) makes the input data analysis challenging, as the data must be adjusted with respect to changing conditions.

Originality/value

In this study, two approaches have been developed within a candlestick model: a raw-based and a signal-based approach, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

Journal of Capital Markets Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-4774

Open Access
Article
Publication date: 7 October 2021

Enas M.F. El Houby

Abstract

Purpose

Diabetic retinopathy (DR) is one of the dangerous complications of diabetes. Its grade level must be tracked to manage its progression and to make the appropriate treatment decision in time. Effective automated methods for the detection of DR and the classification of its severity stage are necessary to reduce the burden on ophthalmologists and diagnostic contradictions among manual readers.

Design/methodology/approach

In this research, a convolutional neural network (CNN) was applied to colored retinal fundus images for the detection of DR and the classification of its stages. A CNN can recognize sophisticated features of the retina and provides an automatic diagnosis. The pre-trained VGG-16 CNN model was applied using a transfer learning (TL) approach to exploit the already learned parameters in the detection.
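
A minimal Python sketch of the VGG-16 transfer-learning setup described above follows; the input size, classification head and five-class grading are assumptions rather than the paper's exact configuration.

    # Illustrative sketch: VGG-16 transfer learning for fundus-image DR grading.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    n_classes = 5                                   # e.g. DR severity stages 0-4 (assumed)
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                          # reuse the pre-trained convolutional filters

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds are placeholders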

Findings

Different experiments were set up with different severity groupings, and the achieved results are promising. The best accuracies achieved for the 2-class, 3-class, 4-class and 5-class classifications are 86.5%, 80.5%, 63.5% and 73.7%, respectively.

Originality/value

In this research, VGG-16 was used to detect and classify DR stages using the TL approach. Different combinations of classes were used in the classification of DR severity stages to illustrate the ability of the model to differentiate between the classes and verify the effect of these changes on the performance of the model.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964
