Search results

1 – 10 of 849
Article
Publication date: 16 August 2019

Shuangshuang Liu and Xiaoling Li

Abstract

Purpose

Image super-resolution reconstruction with conventional deep learning architectures suffers from difficult training and vanishing gradients. In order to solve such problems, the purpose of this paper is to propose a novel image super-resolution algorithm based on improved generative adversarial networks (GANs) with Wasserstein distance and gradient penalty.

Design/methodology/approach

The proposed algorithm first combines the conventional GANs architecture with the Wasserstein distance and the gradient penalty for the task of image super-resolution reconstruction (SRWGANs-GP). In addition, a novel perceptual loss function is designed for SRWGANs-GP to suit the image super-resolution task. The content loss is computed from the deep model’s feature maps, and these features are used to calculate the mean square error (MSE) term of the generator’s loss.
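As an illustrative sketch of the two loss ingredients named above (not the paper’s code: the array shapes, the external feature extractor and the penalty weight λ = 10 are assumptions), the feature-space MSE and the WGAN-GP gradient penalty can be written as:

```python
import numpy as np

def feature_mse(feat_sr, feat_hr):
    """Content loss: mean square error between the feature maps of the
    super-resolved and the ground-truth high-resolution image (the
    feature extractor that produces these maps is assumed external)."""
    return float(np.mean((feat_sr - feat_hr) ** 2))

def gradient_penalty(grad_at_interpolate, lam=10.0):
    """WGAN-GP term: lam * (||grad D(x_hat)||_2 - 1)^2, evaluated on the
    critic's gradient at a sample interpolated between real and fake."""
    norm = np.linalg.norm(grad_at_interpolate)
    return float(lam * (norm - 1.0) ** 2)
```

Identical feature maps give zero content loss, and a unit-norm critic gradient gives zero penalty, which is what keeps the critic approximately 1-Lipschitz during training.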

Findings

To validate the effectiveness and feasibility of the proposed algorithm, extensive comparative experiments are conducted on three common data sets, i.e. Set5, Set14 and BSD100. Experimental results show that the proposed SRWGANs-GP architecture has a stable error gradient and converges iteratively. Compared with the baseline deep models, the proposed GANs model shows a significant improvement in performance and efficiency for image super-resolution reconstruction. The MSE calculated on the deep model’s feature maps is better suited to reconstructing contour and texture.

Originality/value

Compared with the state-of-the-art algorithms, the proposed algorithm achieves better performance on image super-resolution and better reconstruction of contour and texture.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 30 August 2021

Jinchao Huang

Abstract

Purpose

The multi-domain convolutional neural network (MDCNN) model has been widely used for object recognition and tracking in the field of computer vision. However, if the objects to be tracked move rapidly or their appearance varies dramatically, the conventional MDCNN model suffers from model drift. To solve this problem, this paper proposes an auto-attentional mechanism-based MDCNN (AA-MDCNN) model for tracking rapidly moving and changing objects in constrained environments.

Design/methodology/approach

First, to distinguish the foreground object from the background and other similar objects, the auto-attentional mechanism selectively aggregates a weighted summation of all feature maps so that similar features are related to each other. Then, the bidirectional gated recurrent unit (Bi-GRU) architecture integrates all the feature maps to selectively emphasize the importance of the correlated ones. Finally, the final feature map is obtained by fusing the two feature maps above for object tracking. In addition, a composite loss function is constructed to handle the tracking of similar sequences with different attributes, which challenges the conventional MDCNN model.
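The attention step above can be sketched roughly as follows (an assumption-laden toy: the paper’s exact scoring function is not given here, so dot-product similarity with a softmax is used as a stand-in):

```python
import numpy as np

def auto_attention(feature_maps):
    """Weighted summation of all feature maps: each map is scored against
    every other by dot-product similarity, the scores are softmax-normalized
    per map, and the output aggregates all maps with those weights."""
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)           # (C, H*W)
    scores = flat @ flat.T                       # pairwise similarities (C, C)
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ flat).reshape(feature_maps.shape)
```

Each output map is a convex combination of all input maps, so maps with similar content end up reinforcing each other.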

Findings

To validate the effectiveness and feasibility of the proposed AA-MDCNN model, the ImageNet-Vid dataset was used to train the object tracking model and the OTB-50 dataset to validate it. Experimental results show that adding the auto-attentional mechanism improves the accuracy rate by 2.75% and the success rate by 2.41%, respectively. In addition, the authors selected six complex tracking scenarios from the OTB-50 dataset spanning eleven attributes; the proposed AA-MDCNN model outperformed the comparative models on nine of them. Moreover, except for the scenario of multiple objects moving together, the proposed AA-MDCNN model handled the majority of rapid-object tracking scenarios and outperformed the comparative models on such complex scenarios.

Originality/value

This paper introduces the auto-attentional mechanism into the MDCNN model and adopts the Bi-GRU architecture to extract key features. The proposed AA-MDCNN model performs better on rapid object tracking under complex backgrounds, motion blur and occlusion, and is expected to be further applied to rapid object tracking in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Book part
Publication date: 27 October 2015

Santi Furnari

Abstract

Research has highlighted the cognitive nature of the business model intended as a cognitive representation describing a business’ value creation and value capture activities. Although the content of the business model has been extensively investigated from this perspective, less attention has been paid to the business model’s causal structure – that is, the pattern of cause-effect relations that, in top managers’ or entrepreneurs’ understandings, link value creation and value capture activities. Building on the strategic cognition literature, this paper argues that conceptualizing and analysing business models as cognitive maps can shed light on four important properties of a business model’s causal structure: the levels of complexity, focus and clustering that characterize the causal structure, and the mechanisms underlying the causal links featured in that structure. I use examples of business models drawn from the literature as illustrations to describe these four properties. Finally, I discuss the value of a cognitive mapping approach for augmenting extant theories and practices of business model design.

Details

Business Models and Modelling
Type: Book
ISBN: 978-1-78560-462-1

Keywords

Article
Publication date: 25 June 2020

Minghua Wei and Feng Lin

Abstract

Purpose

To address the shortcomings of existing methods for classifying EEG signals generated by tasks that activate the brain's sensorimotor region, such as poor performance, low efficiency and weak robustness, this paper proposes an EEG signal classification method based on multi-dimensional fusion features.

Design/methodology/approach

First, the improved Morlet wavelet is used to extract spectrum feature maps from the EEG signals. Then, spatial-frequency features are extracted from the power spectral density (PSD) maps using a three-dimensional convolutional neural network (3DCNN) model. Finally, the spatial-frequency features are fed into bidirectional gated recurrent unit (Bi-GRU) models to extract spatial-frequency-sequential multi-dimensional fusion features for recognizing tasks that activate the brain's sensorimotor region.
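A toy version of the first stage might look like this (hedged: the standard real-valued Morlet wavelet is used here, whereas the paper employs an improved variant, and the sampling rate and scales are made-up values):

```python
import numpy as np

def morlet(t, w=5.0):
    """Real-valued Morlet wavelet: exp(-t^2/2) * cos(w*t)."""
    return np.exp(-t ** 2 / 2) * np.cos(w * t)

def spectrum_map(signal, scales, fs=250.0):
    """Slide scaled Morlet wavelets over the signal to build a
    scale-by-time spectrum feature map (rows: scales, cols: samples)."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs
    return np.array([np.convolve(signal, morlet(t / s), mode="same")
                     for s in scales])
```

Stacking the responses over scales yields the scale-by-time spectrum map that a 3DCNN stage would then consume.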

Findings

In the comparative experiments, data sets of motor imagery (MI)/action observation (AO)/action execution (AE) tasks are selected to test the classification performance and robustness of the proposed algorithm. In addition, the impact of the extracted features on the sensorimotor region and on the classification process is analyzed by visualization during the experiments.

Originality/value

The experimental results show that the proposed algorithm extracts the corresponding brain activation features for different action-related tasks, achieving more stable classification performance on AO/MI/AE tasks and the best robustness across EEG signals from different subjects.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 7 October 2021

Juan Yang, Xu Du, Jui-Long Hung and Chih-hsiung Tu

Abstract

Purpose

Critical thinking is considered important in psychological science because it enables students to make effective decisions and optimizes their performance. Aiming at the challenges of understanding students' critical thinking, the objective of this study is to analyze online discussion data through an advanced multi-feature fusion modeling (MFFM) approach for automatically and accurately understanding students' critical thinking levels.

Design/methodology/approach

An advanced MFFM approach is proposed in this study. Specifically, considering the time-series characteristics of discussion content and the high correlations between adjacent words, a long short-term memory–convolutional neural network (LSTM-CNN) architecture is proposed to extract deep semantic features, and these semantic features are then combined with the linguistic and psychological knowledge generated by the LIWC2015 tool as inputs to fully connected layers to automatically and accurately predict the students' critical thinking levels hidden in online discussion data.
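The fusion step can be sketched as follows (illustrative only: the vector sizes, the single dense layer and the softmax head are assumptions standing in for the paper's fully connected layers):

```python
import numpy as np

def fuse_features(semantic_vec, liwc_vec):
    """Place the LSTM-CNN semantic features and the LIWC2015
    linguistic/psychological features side by side."""
    return np.concatenate([semantic_vec, liwc_vec])

def dense_softmax(x, W, b):
    """One fully connected layer with a softmax over the
    critical-thinking level classes."""
    z = W @ x + b
    e = np.exp(z - z.max())
    return e / e.sum()
```

The concatenated vector is what the classification layers see, so both feature families contribute to every predicted level.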

Findings

A series of experiments with 94 students' 7,691 posts were conducted to verify the effectiveness of the proposed approach. The experimental results show that the proposed MFFM approach that combines two types of textual features outperforms baseline methods, and the semantic-based padding can further improve the prediction performance of MFFM. It can achieve 0.8205 overall accuracy and 0.6172 F1 score for the “high” category on the validation dataset. Furthermore, it is found that the semantic features extracted by LSTM-CNN are more powerful for identifying self-introduction or off-topic discussions, while the linguistic, as well as psychological features, can better distinguish the discussion posts with the highest critical thinking level.

Originality/value

With the support of the proposed MFFM approach, online teachers can conveniently and effectively understand the interaction quality of online discussions, which can support instructional decision-making to better promote the student's knowledge construction process and improve learning performance.

Details

Data Technologies and Applications, vol. 56 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 18 January 2022

Gomathi V., Kalaiselvi S. and Thamarai Selvi D

Abstract

Purpose

This work aims to develop a novel fuzzy associator rule-based fuzzified deep convolutional neural network (FDCNN) architecture for the classification of smartphone sensor-based human activity recognition. This work mainly focuses on fusing the λmax method for weight initialization, as a data normalization technique, to achieve high accuracy of classification.

Design/methodology/approach

The major contribution of this work is the FDCNN architecture, which is initially fused with a fuzzy logic-based data aggregator. This work focuses on normalizing the statistical parameters of the University of California, Irvine data set before feeding them to the convolutional neural network layers. The FDCNN model with the λmax method ensures faster convergence and improved accuracy in sensor-based human activity recognition. An impact analysis with hyper-parameter tuning is carried out to validate the appropriateness of the results for the proposed FDCNN model with the λmax method.
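Since the abstract does not spell out the λmax formula, the following is only one plausible reading (an explicit assumption, not the paper's method): scale the statistical feature matrix by the magnitude of the largest eigenvalue of its covariance, so the values fed to the convolutional layers share a bounded range.

```python
import numpy as np

def lambda_max_normalize(X):
    """Hypothetical lambda_max-style normalization: divide the feature
    matrix (rows: samples, cols: statistical features) by the largest
    eigenvalue magnitude of its feature covariance."""
    cov = np.cov(X, rowvar=False)                  # symmetric covariance
    lam_max = np.max(np.abs(np.linalg.eigvalsh(cov)))
    return X / lam_max if lam_max > 0 else X
```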

Findings

The proposed FDCNN model with the λmax method outperformed state-of-the-art models, attaining an overall accuracy of 97.89% and an overall F1 score of 0.9795.

Practical implications

The proposed fuzzy associate rule layer (FAL) is responsible for feature association based on fuzzy rules and regulates the uncertainty in the sensor data caused by signal interference and noise. Also, the normalized data are subjectively grouped based on the FAL kernel structure weights assigned with the λmax method.

Social implications

This work contributes a novel FDCNN architecture that can support those who are keen on advancing human activity recognition (HAR).

Originality/value

A novel FDCNN architecture is implemented with appropriate FAL kernel structures.

Article
Publication date: 31 January 2022

Yejun Wu, Xiaxian Wang, Peilin Yu and YongKai Huang

Abstract

Purpose

The purpose of this research is to achieve automatic and accurate book purchase forecasts for university libraries and to improve the efficiency of manual book purchasing.

Design/methodology/approach

The authors present a Book Purchase Forecast model with A Lite BERT (ALBERT-BPF) to achieve their goals. First, the authors process all the book data to unify the format of book features, such as ISBN, title, authors, brief introduction and so on. Second, they exploit the book order data to label each book supplied by booksellers as “purchased” or “non-purchased”; the labelled data are used for model training. Last, the authors regard the book purchase task as a text classification problem and present the ALBERT-BPF model, which applies ALBERT to extract text features of books and a BPF classification layer to forecast purchased books.
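The labelling step in the second stage can be sketched as follows (the field names such as `isbn` and `title` are assumptions for illustration, not the paper's schema):

```python
def label_books(supplied_books, order_records):
    """Mark each bookseller-supplied book "purchased" if its ISBN appears
    in the library's historical order data, else "non-purchased"."""
    ordered_isbns = {order["isbn"] for order in order_records}
    return [
        {**book, "label": "purchased" if book["isbn"] in ordered_isbns
                 else "non-purchased"}
        for book in supplied_books
    ]
```

The resulting labelled records are exactly what a binary text classifier such as ALBERT-BPF would be trained on.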

Findings

The application of deep learning to the book purchase task is effective. The data the authors exploited are the historical book purchase data from their university library. The authors’ experiments on these data show that ALBERT-BPF can identify the books that need to be purchased with an accuracy of over 82%, the highest accuracy reached being 88.06%. This indicates that the deep learning model can effectively assist the traditional manual book purchasing process.

Originality/value

This research applies ALBERT, which is based on the latest natural language processing (NLP) architecture, the Transformer, to the library book purchase task.

Details

Aslib Journal of Information Management, vol. 74 no. 4
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 2 December 2021

Jiawei Lian, Junhong He, Yun Niu and Tianze Wang

Abstract

Purpose

Current popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny defect detection, which is at odds with the high real-time performance and accuracy demanded by industrial applications under limited computing and storage resources. Therefore, an improved YOLOv4 named YOLOv4-Defect is proposed to solve the above problems.

Design/methodology/approach

On the one hand, this study performs multi-dimensional compression on the feature extraction network of YOLOv4 to simplify the model and improves its feature extraction ability through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves the detection performance for tiny defects.
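The knowledge distillation mentioned above is commonly implemented with a temperature-softened teacher-student loss; the sketch below uses the standard Hinton-style form as a stand-in, since the paper's exact loss is not given in the abstract:

```python
import numpy as np

def softened(logits, T):
    """Temperature-softened class distribution: softmax(logits / T)."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so the gradient magnitude stays comparable across temperatures."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return float(T * T * np.sum(p * np.log(p / q)))
```

The loss is zero when the compact student exactly matches the teacher's softened distribution, and grows as the two diverge.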

Findings

The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, as well as a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy while reducing the size and computation cost of the model.

Originality/value

This paper proposes an improved YOLOv4 named YOLOv4-Defect for surface defect detection, which is conducive to application in industrial scenarios with limited storage and computing resources and meets the requirements of high real-time performance and precision.

Details

Assembly Automation, vol. 42 no. 1
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 2 February 2022

Wenzhong Gao, Xingzong Huang, Mengya Lin, Jing Jia and Zhen Tian

Abstract

Purpose

The purpose of this paper is to design a short-term load prediction framework that can accurately predict the cooling load of office buildings.

Design/methodology/approach

A feature selection scheme and a stacking ensemble model were proposed to fulfill the cooling load prediction task. First, abnormal data were identified by a data density estimation algorithm. Second, the crucial input features were clarified from three aspects (i.e. historical load information, time information and meteorological information). Third, a stacking ensemble model combining a long short-term memory network and a light gradient boosting machine was used to predict the cooling load. Finally, the performance of the proposed framework in predicting the cooling load of office buildings was verified with evaluation indicators.
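The stacking step can be illustrated with a deliberately simplified stand-in (the real base learners are an LSTM network and LightGBM; here the base predictions are plain arrays and the meta-learner is a least-squares linear combiner):

```python
import numpy as np

def fit_stacker(base_preds, y):
    """Meta-learner: least-squares weights over base model predictions,
    a linear stand-in for the stacking layer."""
    X = np.column_stack(base_preds)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def stack_predict(base_preds, w):
    """Combine the base predictions with the fitted weights."""
    return np.column_stack(base_preds) @ w
```

Fitting on held-out base-model predictions (rather than training-set ones) is what keeps the meta-learner from simply memorizing the strongest base model.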

Findings

The identified input features can improve the prediction performance. The prediction accuracy of the proposed model is preferable to the existing ones. The stacking ensemble model is robust to weather forecasting errors.

Originality/value

The stacking ensemble model was used to fulfill the cooling load prediction task, overcoming the shortcomings of deep learning models. The input features of the model, which receive little attention in most studies, are treated as an important step in this paper.

Details

Engineering Computations, vol. 39 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 16 August 2021

Rajshree Varma, Yugandhara Verma, Priya Vijayvargiya and Prathamesh P. Churi

Abstract

Purpose

The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global audience at a low cost by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are confronted with these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need for serious consideration for developing automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews the existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which has never been done before to the best of the authors’ knowledge.

Design/methodology/approach

The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers on deep learning and machine learning approaches to fake news detection published no earlier than 2017. The papers were initially searched through the Google Scholar platform and scrutinized for quality, with “Scopus” and “Web of Science” kept as quality indexing parameters. All research gaps and available databases, data pre-processing, feature extraction techniques and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts and trees.

Findings

The paper is divided into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. Next, the authors present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. This paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches.

Originality/value

The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven to be successful, although currently reported accuracy has not yet reached consistent levels in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords
