Search results

1 – 10 of 93
Open Access
Article
Publication date: 25 July 2022

Fung Yuen Chin, Kong Hoong Lem and Khye Mun Wong



Abstract

Purpose

The number of features in handwritten digit data is often very large owing to the varied characteristics of personal handwriting, leading to high-dimensional data. The employment of a feature selection algorithm therefore becomes crucial for successful classification modeling, because the inclusion of irrelevant or redundant features can mislead the modeling algorithms, resulting in overfitting and a decrease in efficiency.

Design/methodology/approach

The minimum redundancy maximum relevance (mRMR) and recursive feature elimination (RFE) algorithms are two frequently used feature selection methods. While mRMR is capable of identifying a subset of features that are highly relevant to the targeted classification variable, it still tends to capture redundant features along the way. RFE, on the other hand, can effectively eliminate less important features and exclude redundant ones, but it does not rank the selected features by importance.

Findings

The hybrid method was exemplified in binary classifications between digits “4” and “9” and between digits “6” and “8” from a multiple features dataset. The results showed that the hybrid mRMR + support vector machine recursive feature elimination (SVMRFE) method performs better than either the support vector machine (SVM) or mRMR alone.

Originality/value

In view of the respective strengths and deficiencies of mRMR and RFE, this study combined the two methods, using an SVM as the underlying classifier, anticipating that mRMR would be an excellent complement to SVMRFE.
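As an illustration of the kind of pipeline described above, the following Python sketch chains a greedy mRMR-style pre-filter into scikit-learn's RFE with a linear SVM. It is not the authors' code: mRMR is approximated with mutual information for relevance and mean absolute correlation for redundancy, and the dataset (scikit-learn's 8x8 digits, restricted to “4” vs “9”), subset sizes and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Binary task "4" vs "9" on the 8x8 digits data (illustrative stand-in dataset).
X, y = load_digits(return_X_y=True)
keep = np.isin(y, (4, 9))
X, y = X[keep], y[keep]
X = X[:, X.std(axis=0) > 0]              # drop constant pixels so correlations are defined

def greedy_mrmr(X, y, k):
    """Greedily pick k features maximizing relevance minus mean redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

pre = greedy_mrmr(X, y, k=30)                              # step 1: mRMR pre-selection
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10)   # step 2: SVM-RFE on survivors
rfe.fit(X[:, pre], y)
final = [pre[i] for i in np.where(rfe.support_)[0]]
acc = cross_val_score(SVC(kernel="linear"), X[:, final], y, cv=5).mean()
print(f"selected {len(final)} features, CV accuracy {acc:.3f}")
```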

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 29 April 2014

Mohammad Amin Shayegan and Saeed Aghabozorgi


Abstract

Purpose

Pattern recognition systems often have to handle large training data sets containing duplicate and similar training samples. This leads to large memory requirements for storing and processing the data and to high time complexity for the training algorithms. The purpose of the paper is to reduce the volume of the training part of a data set, in order to increase system speed without any significant decrease in system accuracy.

Design/methodology/approach

A new technique for data set size reduction, using a version of a modified frequency diagram approach, is presented. In order to reduce processing time, the proposed method compares the samples of a class to other samples in the same class, instead of comparing samples from different classes. It only removes patterns that are similar to the generated class template of each class. To this end, no feature extraction operation was carried out, so as to produce a more precise assessment of the proposed data size reduction technique.
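The abstract does not give the modified frequency diagram procedure in detail, so the sketch below is only a simplified stand-in: each class template is a majority-vote pixel map built from binarized samples, and samples that agree with their own class template beyond a threshold are treated as near-duplicates and dropped. The dataset, binarization level and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X_bin = (X > 7).astype(float)                    # crude binarization of the pixel values

def prune_class(samples, keep_threshold=0.92):
    """Drop samples whose agreement with the class frequency template is high."""
    template = samples.mean(axis=0) > 0.5        # majority-vote pixel template
    agreement = (samples == template).mean(axis=1)
    return agreement < keep_threshold            # True = keep (dissimilar enough)

keep = np.zeros(len(y), dtype=bool)
for c in np.unique(y):
    idx = np.where(y == c)[0]
    keep[idx] = prune_class(X_bin[idx])

print(f"kept {keep.sum()} of {len(y)} samples "
      f"({100 * (1 - keep.sum() / len(y)):.2f}% removed)")
X_reduced, y_reduced = X[keep], y[keep]
```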

Findings

Experiments on Hoda, one of the largest standard handwritten numeral optical character recognition (OCR) data sets, show a 14.88 percent decrease in data set volume without a significant decrease in performance.

Practical implications

The proposed technique is effective for size reduction of pictorial databases such as OCR data sets.

Originality/value

State-of-the-art algorithms currently used for data set size reduction usually remove samples near class centers, or support vector (SV) samples between different classes. However, the samples near a class center carry valuable information about class characteristics and are necessary for building a system model. SVs are likewise important samples for evaluating system efficiency. The proposed technique, unlike other available methods, keeps both the outlier samples and the samples close to the class centers.

Book part
Publication date: 13 June 2013

Li Xiao, Hye-jin Kim and Min Ding


Abstract

Purpose

The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing scholars become more aware of the value of audio and visual data and the technologies required to reveal insights into marketing problems. This chapter aims to introduce marketing scholars to this field of research.

Design/methodology/approach

This chapter reviews the current technology in audio and visual data analysis and discusses rewarding research opportunities in marketing using these data.

Findings

Compared with traditional data such as survey and scanner data, audio and visual data provide richer information and are easier to collect. Given these advantages, together with data availability, feasibility of storage and increasing computational power, we believe that these data will contribute to better marketing practices with the help of marketing scholars in the near future.

Practical implications

The adoption of audio and visual data in marketing practice will help practitioners gain better insights into marketing problems and thus make better decisions.

Value/originality

This chapter makes a first attempt in the marketing literature to review the current technology in audio and visual data analysis and proposes promising applications of such technology. We hope it will inspire scholars to utilize audio and visual data in marketing research.

Details

Review of Marketing Research
Type: Book
ISBN: 978-1-78190-761-0

Keywords

Article
Publication date: 13 July 2018

M. Arif Wani and Saduf Afzal


Abstract

Purpose

Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine tuning. A number of fine-tuning algorithms are explored in this work for optimizing deep learning models. This includes a new algorithm in which Backpropagation with adaptive gain is integrated with the Dropout technique, and the authors evaluate its performance in the fine tuning of the pretrained deep network.

Design/methodology/approach

The parameters of the deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine tuning of the deep neural network model. An extensive experimental study is performed to evaluate the performance of the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The authors tested the approach on data sets of varying size, using randomly chosen training samples comprising 20, 50, 70 and 100 percent of the original data set.
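As a rough illustration of the fine-tuning step, the toy NumPy sketch below performs one supervised update of a single hidden layer whose units use a gain-scaled sigmoid f(c·net), combined with inverted dropout on the hidden activations. It is not the authors' implementation: the pretrained weights are stand-ins, and the dimensions, learning rates and squared-error loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 256, 64, 10
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # would come from layer-wise pretraining
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
gain = np.ones(n_hidden)                      # per-unit adaptive gain c_j
lr, gain_lr, keep_prob = 0.1, 0.01, 0.8

def fine_tune_step(x, y):
    """One supervised step: backprop with adaptive gain plus dropout."""
    global W1, W2, gain
    net_h = x @ W1
    h = sigmoid(gain * net_h)                 # hidden activation f(c_j * net_j)
    mask = (rng.random(n_hidden) < keep_prob) / keep_prob
    h_drop = h * mask                         # inverted dropout
    out = sigmoid(h_drop @ W2)

    err_out = (out - y) * out * (1.0 - out)              # dL/d(net_out), squared error
    err_h = (err_out @ W2.T) * mask * h * (1.0 - h)      # dL/d(c_j * net_j)

    W2 -= lr * np.outer(h_drop, err_out)
    W1 -= lr * np.outer(x, err_h * gain)      # weight gradient carries the gain
    gain -= gain_lr * err_h * net_h           # gain gradient uses the raw net input
    return float(np.mean((out - y) ** 2))

# Example usage on random data standing in for a pretrained network's inputs.
x, y = rng.random(n_in), np.eye(n_out)[3]
print(fine_tune_step(x, y))
```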

Findings

Through an extensive experimental study, it is concluded that the two-step strategy and the proposed fine-tuning technique yield promising results in the optimization of deep network models.

Originality/value

This paper proposes employing several algorithms for the fine tuning of deep network models. A new approach that integrates the adaptive gain Backpropagation (BP) algorithm with the Dropout technique is proposed for fine tuning deep networks. An evaluation and comparison of the various fine-tuning algorithms on three benchmark data sets is presented in the paper.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Book part
Publication date: 18 January 2023

Steven J. Hyde, Eric Bachura and Joseph S. Harrison


Abstract

Machine learning (ML) has recently gained momentum as a method for measurement in strategy research. Yet, little guidance exists regarding how to appropriately apply the method for this purpose in our discipline. We address this by offering a guide to the application of ML in strategy research, with a particular emphasis on data handling practices that should improve our ability to accurately measure our constructs of interest using ML techniques. We offer a brief overview of ML methodologies that can be used for measurement before describing key challenges that exist when applying those methods for this purpose in strategy research (i.e., sample sizes, data noise, and construct complexity). We then outline a theory-driven approach to help scholars overcome these challenges and improve data handling and the subsequent application of ML techniques in strategy research. We demonstrate the efficacy of our approach by applying it to create a linguistic measure of CEOs' motivational needs in a sample of S&P 500 firms. We conclude by describing steps scholars can take after creating ML-based measures to continue to improve the application of ML in strategy research.

Article
Publication date: 16 October 2017

Jiajun Li, Jianguo Tao, Liang Ding, Haibo Gao, Zongquan Deng, Yang Luo and Zhandong Li


Abstract

Purpose

The purpose of this paper is to extend the usage of stroke gestures in manipulation tasks to make the interaction between human and robot more efficient.

Design/methodology/approach

In this paper, a set of stroke gestures is designed for typical manipulation tasks. A gesture recognition and parameter extraction system is proposed to exploit the information in stroke gestures drawn by the users.
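The paper's network architecture is not given in the abstract, so the PyTorch sketch below only illustrates the general idea of a CNN recognizer operating on rasterized stroke gestures; the image size, layer sizes and number of gesture classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_GESTURES = 8  # hypothetical number of manipulation-task gestures

class StrokeGestureCNN(nn.Module):
    """Small CNN that classifies a stroke gesture rasterized as a 64x64 image."""
    def __init__(self, num_classes=NUM_GESTURES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                  # x: (batch, 1, 64, 64) rasterized strokes
        return self.classifier(self.features(x))

model = StrokeGestureCNN()
logits = model(torch.randn(4, 1, 64, 64))  # dummy batch standing in for drawn strokes
print(logits.argmax(dim=1))                # predicted gesture class per stroke
```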

Findings

The results show that the designed gesture recognition subsystem can reach a recognition accuracy of 99.00 per cent. The parameter extraction subsystem can successfully extract the parameters needed for typical manipulation tasks with a success rate of about 86.30 per cent. The system shows acceptable performance in the experiments.

Practical implications

Using stroke gestures in manipulation tasks can make the transmission of human intentions to robots more efficient. The proposed gesture recognition subsystem is based on a convolutional neural network, which is robust to varied input. The parameter extraction subsystem can extract the spatial information encoded in stroke gestures.

Originality/value

The authors design stroke gestures for manipulation tasks, which extends the usage of stroke gestures. The proposed gesture recognition and parameter extraction system can use stroke gestures to obtain the type of task and the important task parameters simultaneously.

Details

Industrial Robot: An International Journal, vol. 44 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Content available
Article
Publication date: 1 June 2005

Alex M. Andrew


Abstract

Details

Kybernetes, vol. 34 no. 5
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 1 July 2021

Gang Li, Yongqiang Chen, Jian Zhou, Xuan Zheng and Xue Li


Abstract

Purpose

Periodic inspection and maintenance are essential for effective pavement preservation. Cracks not only affect the appearance of the road and reduce its levelness, but also shorten the life of the road. However, traditional road crack detection methods based on manual investigation and image processing are costly, inefficient and unreliable. This research aims to replace traditional road crack detection methods and further improve detection performance.

Design/methodology/approach

In this paper, a crack detection method based on a matrix network, fusing corner-based detection with a segmentation network, is proposed to effectively identify cracks. The method combines ResNet 152 with the matrix network as the backbone network to achieve feature reuse for the crack. The crack region is identified by corners, and a segmentation network is constructed to extract the crack. Finally, parameters such as crack length and width were calculated from the geometric characteristics of the cracks; the relative errors with respect to the actual values were 4.23 and 6.98 percent, respectively.
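The geometric step at the end of the pipeline can be illustrated with a short sketch, assuming a binary crack mask from the segmentation stage: crack length is estimated from the skeleton of the mask, and mean width as area divided by length. The skeletonization route and the millimetres-per-pixel scale are assumptions, not the authors' exact procedure.

```python
import numpy as np
from skimage.morphology import skeletonize

def crack_geometry(mask, mm_per_pixel=1.0):
    """Return (length_mm, mean_width_mm) for a binary crack mask."""
    mask = mask.astype(bool)
    skeleton = skeletonize(mask)            # 1-pixel-wide centre line of the crack
    length_px = skeleton.sum()              # rough length = number of skeleton pixels
    area_px = mask.sum()
    width_px = area_px / max(length_px, 1)  # mean width = area / length
    return length_px * mm_per_pixel, width_px * mm_per_pixel

# Example with a toy 5-pixel-wide horizontal "crack".
toy = np.zeros((50, 200), dtype=np.uint8)
toy[20:25, 10:190] = 1
print(crack_geometry(toy, mm_per_pixel=0.5))
```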

Findings

To improve the accuracy of crack detection, the model was optimized with the Adam algorithm, trained and tested on a mixture of two publicly available datasets, and compared with various methods. The results show that the detection performance of the proposed method is better than that of many strong algorithms and that it is highly robust to interference.

Originality/value

This paper proposes a new type of road crack detection method. Its detection performance is better than that of a variety of detection algorithms and it is strongly robust to interference, so it can fully replace traditional crack detection methods and meet engineering needs.

Details

Engineering Computations, vol. 39 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 14 March 2016

Mahdi Salehi, Mahmoud Mousavi Shiri and Mohammad Bolandraftar Pasikhani



Abstract

Purpose

Financial distress is the most notable form of distress for companies. During the past four decades, predicting corporate bankruptcy and financial distress has become a significant concern for the various stakeholders in firms. This paper aims to predict the financial distress of Iranian firms with four techniques: support vector machines, artificial neural networks (ANN), k-nearest neighbor and the naïve Bayesian classifier, using accounting information of the firms for the two years prior to financial distress.

Design/methodology/approach

The distressed companies in this study are chosen based on Article 141 of the Iranian Commercial Code, i.e. accumulated losses exceeding half of equity, on which basis 117 companies qualified for the current study. The research population includes all the companies listed on the Tehran Stock Exchange during the financial periods from 2011-2012 to 2013-2014, that is, three consecutive periods.
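A hedged sketch of the model comparison, using scikit-learn stand-ins for the four techniques rather than the authors' implementations, is given below. The feature matrix would hold the accounting ratios for the two years prior to distress; here a random placeholder of illustrative shape and an assumed balanced pairing of distressed and healthy firms are used.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 12))   # placeholder accounting ratios (shape is illustrative)
y = np.repeat([0, 1], 117)       # 117 distressed firms vs. an assumed matched healthy sample

models = {
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```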

Findings

A comparison of the models' performance shows that the ANN outperforms the other techniques.

Originality/value

The current study is among the first in Iran to use such methods to analyze this kind of data, so the results may be helpful for the Iranian setting as well as for other developing nations.

Details

International Journal of Law and Management, vol. 58 no. 2
Type: Research Article
ISSN: 1754-243X

Keywords

Article
Publication date: 8 June 2010

Pablo A.D. Castro and Fernando J. Von Zuben


Abstract

Purpose

The purpose of this paper is to apply a multi‐objective Bayesian artificial immune system (MOBAIS) to feature selection in classification problems, aiming at minimizing both the classification error and the cardinality of the subset of features. The algorithm is able to perform a multimodal search, maintaining population diversity and automatically controlling the population size according to the problem. In addition, it is capable of identifying and preserving building blocks (partial components of the whole solution) effectively.

Design/methodology/approach

The algorithm evolves candidate subsets of features by replacing the traditional mutation operator in immune‐inspired algorithms with a probabilistic model which represents the probability distribution of the promising solutions found so far. Then, the probabilistic model is used to generate new individuals. A Bayesian network is adopted as the probabilistic model due to its capability of capturing expressive interactions among the variables of the problem. In order to evaluate the proposal, it was applied to ten datasets and the results compared with those generated by state‐of‐the‐art algorithms.
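The estimation-of-distribution idea behind this approach can be sketched as follows, with two loud simplifications: independent Bernoulli marginals stand in for the Bayesian network, and the two objectives (classification error and subset cardinality) are scalarized rather than handled with a true multi-objective ranking. The dataset, population sizes and weights are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]
rng = np.random.default_rng(0)

def objectives(mask):
    """Return (classification error, number of selected features) for a feature mask."""
    if not mask.any():
        return 1.0, n_features
    error = 1.0 - cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()
    return error, int(mask.sum())

pop_size, n_gen, n_best, cardinality_weight = 30, 15, 10, 0.005
probs = np.full(n_features, 0.5)                        # marginal selection probabilities
for _ in range(n_gen):
    population = rng.random((pop_size, n_features)) < probs   # sample candidate subsets
    scores = []
    for mask in population:
        error, card = objectives(mask)
        scores.append(error + cardinality_weight * card)       # scalarized bi-objective
    best = population[np.argsort(scores)[:n_best]]
    probs = 0.5 * probs + 0.5 * best.mean(axis=0)       # re-estimate the model from elites
    probs = probs.clip(0.05, 0.95)                      # keep some exploration

final = rng.random(n_features) < probs
error, card = objectives(final)
print(f"selected {card} features, CV error {error:.3f}")
```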

Findings

The experiments demonstrate the effectiveness of the multi‐objective approach to feature selection. The algorithm found parsimonious subsets of features, and the classifiers produced a significant improvement in accuracy. In addition, the maintenance of building blocks avoids the disruption of partial solutions, leading to quick convergence.

Originality/value

The originality of this paper lies in the proposal of a novel algorithm for multi‐objective feature selection.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords
