Search results

1 – 10 of over 7000
Article
Publication date: 13 July 2018

M. Arif Wani and Saduf Afzal

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients…

Abstract

Purpose

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine-tuning. A number of fine-tuning algorithms are explored in this work for optimizing deep learning models, including a new algorithm in which the Backpropagation with adaptive gain algorithm is integrated with the Dropout technique; the authors evaluate its performance in fine-tuning the pretrained deep network.

Design/methodology/approach

The parameters of deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine-tuning of the deep neural network model. An extensive experimental study is performed to evaluate the performance of the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors tested the approach on data sets of varying size, using randomly chosen training samples comprising 20, 50, 70 and 100 percent of the original data set.
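To make the two-step strategy concrete, the following is a minimal sketch (not the authors' code): greedy layer-wise unsupervised pretraining of stacked autoencoder layers, followed by supervised fine-tuning of the stacked encoder with Dropout. The layer sizes, learning rates and the use of PyTorch with plain SGD (in place of the adaptive-gain Backpropagation update) are illustrative assumptions.

```python
import torch
import torch.nn as nn

def pretrain_layer(encoder, decoder, data, epochs=5, lr=1e-3):
    """Train one autoencoder layer to reconstruct its input (unsupervised step)."""
    opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon = decoder(torch.sigmoid(encoder(data)))
        loss = loss_fn(recon, data)
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder

sizes = [784, 500, 250]                          # e.g. MNIST input and two hidden layers
x = torch.rand(256, sizes[0])                    # placeholder unlabeled training batch
encoders, h = [], x
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    encoders.append(pretrain_layer(enc, dec, h))
    h = torch.sigmoid(encoders[-1](h)).detach()  # activations feed the next layer

# Step 2: stack the pretrained encoders, add Dropout and a classifier head,
# and fine-tune the whole network with supervised backpropagation.
layers = []
for enc in encoders:
    layers += [enc, nn.Sigmoid(), nn.Dropout(p=0.5)]
model = nn.Sequential(*layers, nn.Linear(sizes[-1], 10))

y = torch.randint(0, 10, (256,))                 # placeholder class labels
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss = nn.CrossEntropyLoss()(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```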

Findings

Through an extensive experimental study, it is concluded that the two-step strategy and the proposed fine-tuning technique yield promising results in the optimization of deep network models.

Originality/value

This paper proposes several algorithms for fine-tuning deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine-tuning deep networks. An evaluation and comparison of the various fine-tuning algorithms on three benchmark data sets is presented in the paper.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Open Access
Article
Publication date: 4 August 2020

Alessandra Lumini, Loris Nanni and Gianluca Maguolo

In this paper, we present a study of an automated system for monitoring underwater ecosystems. The system proposed here is based on the fusion of different deep learning…


Abstract

In this paper, we present a study of an automated system for monitoring underwater ecosystems. The system proposed here is based on the fusion of different deep learning methods. We study how to create an ensemble based on different Convolutional Neural Network (CNN) models, fine-tuned on several datasets, with the aim of exploiting their diversity. The aim of our study is to explore the feasibility of fine-tuning CNNs for underwater imagery analysis, the opportunity of using different datasets for pre-training models, and the possibility of designing an ensemble using the same architecture with small variations in the training procedure.

Our experiments, performed on 5 well-known datasets (3 plankton and 2 coral datasets), show that the combination of such different CNN models in a heterogeneous ensemble yields a substantial performance improvement over other state-of-the-art approaches in all the tested problems. One of the main contributions of this work is a wide experimental evaluation of well-known CNN architectures, reporting the performance of both single CNNs and ensembles of CNNs on different problems. Moreover, we show how to create an ensemble that improves on the performance of the best single model. The MATLAB source code is freely available; a link is provided on the title page.
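As a rough illustration of the score-fusion ensemble described above, here is a minimal sketch (an assumption on our part: PyTorch/torchvision rather than the authors' MATLAB code) that fine-tunes two ImageNet-pretrained backbones by replacing their classification heads and averages their softmax scores at prediction time.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_finetuned(backbone_fn, num_classes):
    """Swap the classification head so an ImageNet-pretrained backbone can be fine-tuned."""
    net = backbone_fn(weights="DEFAULT")           # downloads ImageNet weights
    if hasattr(net, "fc"):                         # ResNet-style head
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    else:                                          # e.g. DenseNet exposes `classifier`
        net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    return net

ensemble = [make_finetuned(models.resnet50, 5),    # hypothetical 5-class plankton task
            make_finetuned(models.densenet121, 5)]

def ensemble_predict(nets, images):
    """Sum-rule fusion: average the per-model class probabilities, then take the argmax."""
    with torch.no_grad():
        probs = [torch.softmax(net.eval()(images), dim=1) for net in nets]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

batch = torch.rand(4, 3, 224, 224)                 # placeholder plankton/coral images
print(ensemble_predict(ensemble, batch))
```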

Details

Applied Computing and Informatics, vol. 19 no. 3/4
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 13 February 2024

Wenzhen Yang, Shuo Shan, Mengting Jin, Yu Liu, Yang Zhang and Dongya Li

This paper aims to rapidly realize an in-situ quality inspection system for new injection molding (IM) tasks via a transfer learning (TL) approach and automation technology.

Abstract

Purpose

This paper aims to rapidly realize an in-situ quality inspection system for new injection molding (IM) tasks via a transfer learning (TL) approach and automation technology.

Design/methodology/approach

The proposed in-situ quality inspection system consists of an injection machine, USB camera, programmable logic controller and personal computer, interconnected via OPC or USB communication interfaces. This configuration enables seamless automation of the IM process, real-time quality inspection and automated decision-making. In addition, a MobileNet-based deep learning (DL) model is proposed for quality inspection of injection parts, fine-tuned using the TL approach.
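As a rough, hypothetical sketch of the TL step described above (framework, class labels and hyperparameters are assumptions; the paper's own implementation may differ), an ImageNet-pretrained MobileNetV2 backbone can be frozen and given a new classification head that is fine-tuned on a small set of injection-part images:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                    # e.g. acceptable vs. defective parts
net = models.mobilenet_v2(weights="DEFAULT")       # reuse ImageNet features (transfer learning)
for p in net.features.parameters():
    p.requires_grad = False                        # freeze the convolutional backbone
net.classifier[1] = nn.Linear(net.last_channel, num_classes)   # new task-specific head

opt = torch.optim.Adam(net.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)                # placeholder frames from the USB camera
labels = torch.randint(0, num_classes, (8,))       # placeholder inspection labels
loss = loss_fn(net(images), labels)
opt.zero_grad(); loss.backward(); opt.step()
```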

Findings

Using the TL approach, the MobileNet-based DL model demonstrates exceptional performance, achieving a validation accuracy of 99.1% using merely 50 images per category. Its detection speed and accuracy surpass those of DenseNet121-based, VGG16-based, ResNet50-based and Xception-based convolutional neural networks. Further evaluation on a random data set of 120 images, assessed through a confusion matrix, shows an accuracy of 96.67%.

Originality/value

The proposed MobileNet-based DL model achieves higher accuracy with less resource consumption using the TL approach. It is integrated with automation technologies to build the in-situ quality inspection system for injection parts, which improves cost-efficiency by facilitating the acquisition and labeling of task-specific images and enabling automatic defect detection and decision-making online. This holds significant value for the IM industry in its pursuit of enhanced quality inspection.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 1 October 2005

John G. Vlachogiannis and Ranjit K. Roy

The aim of the paper is the fine‐tuning of proportional integral derivative (PID) controllers under model parameter uncertainties (noise).


Abstract

Purpose

The aim of the paper is the fine‐tuning of proportional integral derivative (PID) controllers under model parameter uncertainties (noise).

Design/methodology/approach

The fine-tuning of PID controllers is achieved using the Taguchi method, following these steps: selection of the control factors of the PID with their levels; identification of the noise factors that cause undesirable variation in the quality characteristic of the PID; design of the matrix experiment and definition of the data analysis procedure; analysis of the data; decision regarding optimum settings of the control parameters and prediction of the performance at the optimum levels of the control factors; calculation of the expected cost savings under the optimum condition; and confirmation of the experimental results.
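A minimal, illustrative sketch of these steps (not the authors' experiment; the first-order plant, gain levels and noise levels are assumptions) runs an L9 orthogonal array over three levels per PID gain, evaluates each trial under perturbed plant parameters, and selects the optimum level per factor from a smaller-the-better signal-to-noise ratio:

```python
import math

L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]
KP, KI, KD = [0.5, 1.0, 2.0], [0.05, 0.1, 0.2], [0.0, 0.05, 0.1]   # factor levels

def run_pid(kp, ki, kd, plant_gain, steps=200, dt=0.05):
    """Return the mean squared tracking error of a discrete PID on a first-order plant."""
    y, integ, prev_err, sse = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                              # unit step setpoint
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + plant_gain * u)            # first-order plant dynamics
        sse += err * err
    return sse / steps

def sn_smaller_better(trial):
    """Average the MSE across noise conditions and express it as an S/N ratio in dB."""
    kp, ki, kd = KP[trial[0]], KI[trial[1]], KD[trial[2]]
    mses = [run_pid(kp, ki, kd, g) for g in (0.8, 1.0, 1.2)]   # plant-gain noise levels
    return -10.0 * math.log10(sum(mses) / len(mses))

sn = [sn_smaller_better(t) for t in L9]
best = []
for factor in range(3):                            # pick the level with the best mean S/N
    means = [sum(s for t, s in zip(L9, sn) if t[factor] == lvl) / 3 for lvl in range(3)]
    best.append(max(range(3), key=lambda lvl: means[lvl]))
print("optimum levels (Kp, Ki, Kd):", KP[best[0]], KI[best[1]], KD[best[2]])
```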

Findings

An example of the proposed method is presented and demonstrates that, given certain performance criteria, the Taguchi method can indeed provide sub-optimal values for fine-tuning the PID in the presence of model parameter uncertainties (noise). The contribution of each factor to the variation of the mean and the variability of the error is also calculated. The expected cost savings for the PID under the optimum condition are calculated. The confirmation experiments are conducted on a real PID controller.

Research limitations/implications

As further research, the contiguous fine-tuning of PID controllers under a number of variant controllable models (noise) is proposed.

Practical implications

The enhancement of PID controllers by the Taguchi method is proposed in the form of a hardware mechanism. This mechanism will be incorporated into the PID controller and will automatically regulate the PID parameters, reducing the influence of noise.

Originality/value

Application of the Taguchi method to the scientific field of automation control.

Details

The TQM Magazine, vol. 17 no. 5
Type: Research Article
ISSN: 0954-478X

Keywords

Article
Publication date: 4 March 2021

Abhishek Gupta, Dwijendra Nath Dwivedi and Ashish Jain

Transaction monitoring systems set up by financial institutions are one of the most widely used ways to track money laundering and terrorist financing activities. While being effective to…

Abstract

Purpose

Transaction monitoring systems set up by financial institutions are one of the most widely used ways to track money laundering and terrorist financing activities. While being effective to a large extent, such systems generate very high false positives. With evolving patterns of financial transactions, they also need an effective mechanism for scenario fine-tuning. The purpose of this paper is to highlight a quantitative method for optimizing scenarios in a money laundering context. While anomaly detection and unsupervised learning can identify large numbers of false negatives and reveal new patterns, for existing scenarios businesses generally rely on judgment- or data analysis-based threshold fine-tuning. The objective of such exercises is to enhance the productivity rate.

Design/methodology/approach

In this paper, the authors propose an approach called linear/non-linear optimization for threshold fine-tuning. This traditional operations research technique has often been used for many optimization problems. The current problem of threshold fine-tuning for scenarios has two key features that warrant linear optimization. First, scenario-based suspicious transaction reporting (STR) cases and the overall customer-level catch rate have a very high overlap, i.e. more than one scenario captures the same customer with different degrees of abnormal behavior. This implies that scenarios can be better coordinated to catch more non-overlapping customers. Second, different customer segments have differing degrees of transaction behavior; hence, segmenting and then reducing slack (the redundant catching of suspects) can result in a better productivity rate (defined as productive alerts divided by total alerts) in a money laundering context.
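To make the formulation concrete, here is a minimal, hypothetical sketch (the scenario names, alert/STR counts and use of SciPy are invented for illustration; the paper's actual model, with its treatment of customer overlap and segments, is richer) that casts threshold selection as a small linear program: pick one threshold level per scenario to maximize STR captures while keeping total alerts within a budget.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate threshold levels per scenario: (expected alerts, expected STR captures).
candidates = {
    "large_cash":     [(900, 40), (600, 36), (400, 30)],
    "rapid_movement": [(700, 25), (500, 22), (300, 15)],
    "structuring":    [(800, 30), (550, 27), (350, 20)],
}
alerts = np.array([a for opts in candidates.values() for a, _ in opts])
strs = np.array([s for opts in candidates.values() for _, s in opts])
n = len(alerts)

c = -strs                                          # linprog minimizes, so negate STR captures
A_ub = alerts.reshape(1, -1)                       # total alerts must stay within budget
b_ub = [1500]
A_eq = np.zeros((len(candidates), n))              # exactly one level chosen per scenario
for i, opts in enumerate(candidates.values()):
    A_eq[i, 3 * i: 3 * i + len(opts)] = 1.0
b_eq = np.ones(len(candidates))

# LP relaxation of the choice variables; fractional weights indicate where a
# stricter integer formulation (or rounding heuristic) would be needed.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("selected level weights per scenario:", np.round(res.x, 2))
print("expected STR captures:", -res.fun)
```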

Findings

The results show that, by implementing the optimization technique, the productivity rate can be improved. This is done through two drivers. First, the team gets to know the best possible combination of thresholds across scenarios for maximizing STR coverage: fine-tuned thresholds are able to cover the suspected transactions better than traditional approaches. Second, redundancy/slack margins on thresholds are reduced, thereby improving the overall productivity rate. The experiments, focused on six scenario combinations, resulted in a reduction of 5.4% in alerts and 1.6% in unique customers for the same number of STRs captured.

Originality/value

The authors propose an approach called linear/non-linear optimization for threshold fine-tuning, as very little work has been done on optimizing the scenarios themselves, even though scenario-based monitoring is the most widely used practice in enterprise-wide anti-money laundering solutions. This shows that, by adding a layer of mathematical optimization, financial institutions can save a few million dollars more without compromising their STR capture capability. This will hopefully go a long way in leveraging artificial intelligence to make financial institutions more efficient in controlling financial crime and to save some hard-earned dollars.

Details

Journal of Money Laundering Control, vol. 25 no. 1
Type: Research Article
ISSN: 1368-5201

Keywords

Article
Publication date: 30 November 2021

Minh Thanh Vo, Anh H. Vo and Tuong Le

Medical images are increasingly common; therefore, analyzing these images with deep learning to help diagnose diseases is becoming more and more essential…

Abstract

Purpose

Medical images are increasingly common; therefore, analyzing these images with deep learning to help diagnose diseases is becoming more and more essential. Recently, the shoulder implant X-ray image classification (SIXIC) dataset, which includes X-ray images of implanted shoulder prostheses produced by four manufacturers, was released. Detecting the implant's model helps select the correct equipment and procedures for the upcoming surgery.

Design/methodology/approach

This study proposes a robust model named X-Net to improve predictive performance for shoulder implant X-ray image classification on the SIXIC dataset. The X-Net model utilizes a Squeeze-and-Excitation (SE) block integrated into a Residual Network (ResNet) module. The SE module weights each feature map extracted by ResNet, which helps improve performance. The feature extraction process of the X-Net model is performed by both the ResNet and SE modules. The final feature is obtained by combining the features extracted in the above steps, which captures more of the important characteristics of the X-ray images in the input dataset. Next, X-Net uses this fine-grained feature to classify the input images into four classes (Cofield, Depuy, Zimmer and Tornier) in the SIXIC dataset.
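For illustration, a minimal sketch of the kind of SE block attached to ResNet feature maps is shown below (channel sizes, the reduction ratio and the PyTorch framing are assumptions rather than the paper's exact architecture):

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn a per-channel weight and rescale the feature maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # "squeeze": global spatial average
        self.fc = nn.Sequential(                         # "excitation": per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight each feature map

# Reweight ResNet-50 backbone features, then classify into the four manufacturer classes.
backbone = nn.Sequential(*list(models.resnet50(weights=None).children())[:-2])
se, head = SEBlock(2048), nn.Linear(2048, 4)
x = torch.rand(2, 3, 224, 224)                           # placeholder X-ray images
feats = se(backbone(x))                                  # (2, 2048, 7, 7) reweighted maps
logits = head(feats.mean(dim=(2, 3)))                    # global average pool + classifier
print(logits.shape)                                      # torch.Size([2, 4])
```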

Findings

Experiments are conducted to show the proposed approach's effectiveness compared with other state-of-the-art methods for SIXIC. The experimental results indicate that the approach outperforms the other experimental methods in terms of several performance metrics. In addition, the proposed approach provides new state-of-the-art results in all performance metrics, such as accuracy, precision, recall, F1-score and area under the curve (AUC), for the experimental dataset.

Originality/value

The proposed method with high predictive performance can be used to assist in the treatment of injured shoulder joints.

Details

Data Technologies and Applications, vol. 56 no. 3
Type: Research Article
ISSN: 2514-9288

Keywords

Content available
Article
Publication date: 13 August 2020

Shuyi Wang, Chengzhi Zhang and Alexis Palmer

Abstract

Details

Information Discovery and Delivery, vol. 48 no. 3
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 21 April 2022

Warot Moungsouy, Thanawat Tawanbunjerd, Nutcha Liamsomboon and Worapan Kusakunniran

This paper proposes a solution for recognizing human faces under mask-wearing. The lower part of the human face is occluded and cannot be used in the learning process of face…


Abstract

Purpose

This paper proposes a solution for recognizing human faces under mask-wearing. The lower part of the human face is occluded and cannot be used in the learning process of face recognition. So, the proposed solution is developed to recognize human faces using any available facial components, which can vary depending on whether or not a mask is worn.

Design/methodology/approach

The proposed solution is developed based on the FaceNet framework, modifying the existing facial recognition model to improve performance in both the mask-wearing and non-mask-wearing scenarios. Simulated masked-face images are then computed on top of the original face images, to be used in the learning process of face recognition. In addition, feature heatmaps are drawn to visualize the parts of facial images that are most significant in recognizing faces under mask-wearing.
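A minimal, hypothetical sketch of the mask-simulation step is given below (the mask shape, landmark-free placement and matching threshold are simplifying assumptions; the paper builds on FaceNet with more careful mask simulation): a synthetic mask is drawn over the lower half of each aligned face crop so that both masked and unmasked views are available for training and enrollment.

```python
import numpy as np
from PIL import Image, ImageDraw

def add_simulated_mask(face: Image.Image) -> Image.Image:
    """Cover roughly the lower half of an aligned face crop with a mask-like polygon."""
    masked = face.copy()
    w, h = masked.size
    draw = ImageDraw.Draw(masked)
    draw.polygon([(int(0.15 * w), int(0.55 * h)), (int(0.85 * w), int(0.55 * h)),
                  (int(0.80 * w), int(0.95 * h)), (int(0.20 * w), int(0.95 * h))],
                 fill=(70, 110, 180))                     # plain surgical-style mask colour
    return masked

def cosine_match(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when two face embeddings (e.g. from FaceNet) are close enough."""
    sim = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return sim >= threshold

# Usage: pair every enrolled face with a masked copy before training or template
# enrollment, so recognition learns to rely on the eye/nose region.
face = Image.new("RGB", (160, 160), (200, 170, 150))      # placeholder aligned face crop
training_views = [face, add_simulated_mask(face)]
```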

Findings

The proposed method is validated using several scenarios of experiments. The results show an outstanding accuracy of 99.2% in the mask-wearing scenario. The feature heatmaps also show that non-occluded components, including the eyes and nose, become more significant for recognizing human faces, compared with the lower part of the face, which can be occluded by a mask.

Originality/value

The convolutional neural network-based solution is tuned for recognizing human faces under mask-wearing. Original face images augmented with simulated masks are used for training the face recognition model. Heatmaps are then computed to confirm that features generated from the top half of the face images are correctly chosen for face recognition.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 15 May 2017

André de Waal and Ivo Heijtel

The purpose of this study is to help managers in their constant quest to create and implement new sources of competitive advantage and ways to achieve sustainable high performance…


Abstract

Purpose

The purpose of this study is to help managers in their constant quest to create and implement new sources of competitive advantage and ways to achieve sustainable high performance, and thus to become a high performance organization (HPO) – defined as an organization that achieves financial and non-financial results that are exceedingly better than those of its peer group over a period of five years or more, by focusing in a disciplined way on issues of genuine importance to the organization. One way to become an HPO is by applying the HPO Framework, which has been validated in multiple countries and shown to indeed help organizations improve their performance. However, a change approach for implementing the HPO Framework that is valid in different contexts has not been developed to date. Such an approach is important, as change initiatives suffer from a high failure rate.

Design/methodology/approach

The goal of this research was to identify an appropriate change approach for implementing the HPO Framework. A theoretical framework for an HPO change initiative was constructed, which subsequently was tested at an organization undergoing a transformation to become an HPO.

Findings

The results show that the theoretical approach was indeed useful in practice at the case company. A continuous rate of change is needed to implement a corporate-wide change strategy that will enable the organization to constantly adapt to the demands of its business environment. The scale of the transformation differs for each HPO change initiative, depending on the results of the HPO diagnosis. Directly after the HPO diagnosis and at the beginning of the HPO transformation, a planned approach predominates; conversely, while maintaining the HPO, the emergent approach predominates.

Research limitations/implications

This study is relevant in that it enables managers to learn the essentials of a change approach for creating an HPO in the present-day business environment. Based on these essentials, managers can start to develop a change approach that is appropriate for creating their own HPO.

Originality/value

The theoretical relevance of this paper is that, although much literature exists concerning approaches for organizational change initiatives, no change approaches specifically designed for creating an HPO can be found in the literature. This paper provides such an approach.

Details

Measuring Business Excellence, vol. 21 no. 2
Type: Research Article
ISSN: 1368-3047

Keywords

Article
Publication date: 1 April 1987

John Pheby

In preparing this article, I have approached Shackle's “fundamentalist” interpretation of Keynes from a slightly different angle. Much of the discussion as to whether Shackle is…

Abstract

In preparing this article, I have approached Shackle's "fundamentalist" interpretation of Keynes from a slightly different angle. Much of the discussion as to whether Shackle is right tends to get bogged down in rather narrow textual exegesis, that is, debates over whether the 1937 Quarterly Journal of Economics article means more, or less, than Keynes's apparent "endorsement" of IS/LM Keynesianism contained in his famous letter to Hicks. This type of discussion too easily overlooks the more fundamental methodological considerations that motivate Shackle. Such issues as the way in which economic actors acquire knowledge and the nature of economics as a social science are important to him. Therefore, a more meaningful way of assessing Shackle's views would be to consider whether they are in sympathy with Keynes's views on such matters.

Details

Journal of Economic Studies, vol. 14 no. 4
Type: Research Article
ISSN: 0144-3585
