Search results

1 – 10 of over 3000
Article
Publication date: 21 November 2018

Mahmoud Elish

Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to…

Abstract

Purpose

Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared with popular and recent machine learning models.

Design/methodology/approach

An empirical study was conducted in which the SGBT and 16 other prediction models have been trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure.
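
The following is a minimal illustrative sketch (not the paper's code or data) of this kind of cross-validated comparison, using scikit-learn's GradientBoostingClassifier with subsampling as a stand-in for SGBT; the feature matrix, labels and the logistic-regression baseline are placeholders.

```python
# Illustrative sketch only: cross-validating a stochastic gradient boosting
# classifier against a baseline on a binary "vulnerable?" label.
# X and y are hypothetical placeholders for component metrics and labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # placeholder component metrics
y = (rng.random(500) < 0.2).astype(int)  # placeholder vulnerability labels

models = {
    # subsample < 1.0 makes the boosting stochastic (SGBT-style)
    "SGBT": GradientBoostingClassifier(n_estimators=200, subsample=0.7, max_depth=3),
    "LogReg": LogisticRegression(max_iter=1000),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scoring = ["accuracy", "precision", "recall", "f1"]  # metrics named in the abstract

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})
```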

Findings

The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components.

Originality/value

This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.

Details

International Journal of Web Information Systems, vol. 15 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 23 August 2011

Ch. Aswani Kumar

The purpose of this paper is to introduce a new hybrid method for reducing dimensionality of high dimensional data.

Abstract

Purpose

The purpose of this paper is to introduce a new hybrid method for reducing dimensionality of high dimensional data.

Design/methodology/approach

The literature on dimensionality reduction (DR) includes research efforts that combine random projections (RP) and singular value decomposition (SVD) so as to derive the benefits of both methods. However, SVD is well known for its computational complexity. Clustering under the notion of concept decomposition has been shown to be less computationally complex than SVD and useful for DR. The method proposed in this paper combines RP and fuzzy k‐means clustering (FKM) to reduce the dimensionality of the data.
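
As a rough illustration of the general idea (not the authors' implementation), the sketch below applies a Gaussian random projection and then a compact fuzzy k-means, using the resulting fuzzy memberships as the reduced representation; the data, dimensions and number of concepts are arbitrary placeholders.

```python
# Illustrative sketch: random projection followed by fuzzy k-means,
# with the fuzzy memberships used as the reduced representation.
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, iters=50, seed=0):
    """Compact fuzzy k-means; returns (centroids, membership matrix U of shape n x k)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]                  # centroid update
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                        # membership update
    return C, U

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2000))                 # high-dimensional data (placeholder)
R = rng.normal(size=(2000, 200)) / np.sqrt(200)  # Gaussian random projection matrix
X_rp = X @ R                                     # step 1: RP to 200 dimensions
_, U = fuzzy_kmeans(X_rp, k=40)                  # step 2: FKM concept memberships
X_reduced = U                                    # final 40-dimensional representation
print(X_reduced.shape)                           # (500, 40)
```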

Findings

The proposed RP‐FKM is computationally less complex than SVD and RP‐SVD. On image data, RP‐FKM produces less distortion than RP. It also provides better text retrieval results than conventional RP and performs similarly to RP‐SVD. For the text retrieval task, the superiority of SVD over the other DR methods noted here is in good agreement with the analysis reported by Moravec.

Originality/value

The hybrid method proposed in this paper, combining RP and FKM, is new. Experimental results indicate that the proposed method is useful for reducing dimensionality of high‐dimensional data such as images, text, etc.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 4 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 June 2018

Wang Jian Hong and Daobo Wang

The purpose of this paper is to probe the recursive identification of piecewise affine Hammerstein models directly by using input-output data. To explain the identification…

Abstract

Purpose

The purpose of this paper is to probe the recursive identification of piecewise affine Hammerstein models directly by using input-output data. To explain the identification process of a parametric piecewise affine nonlinear function, the authors prove that the inverse of the given piecewise affine nonlinear function is also of piecewise affine form. Based on this equivalence property, during the detailed identification of the piecewise affine function and the linear dynamical system, three recursive least squares methods are proposed to identify the unknown parameters under a probabilistic description or a boundedness assumption on the noise.

Design/methodology/approach

First, the basic recursive least squares method is used to identify the unknown parameters under the probabilistic description of noise. Second, a multi-innovation recursive least squares method is proposed to improve the efficiency lacking in the basic recursive least squares method. Third, to relax the strict probabilistic description of the noise, the authors provide a projection algorithm with a dead zone in the presence of bounded noise and analyze two of its properties.
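
For orientation, here is a minimal sketch of the first scheme mentioned, a basic recursive least squares update, on a generic linear-in-parameters regression; the regressor and parameter vector are placeholders, not the paper's piecewise affine Hammerstein parameterisation.

```python
# Minimal sketch of one basic recursive least squares (RLS) update step.
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One RLS update; returns updated (theta, P) given regressor phi and output y."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + float(phi.T @ P @ phi))   # gain vector
    err = y - float(phi.T @ theta)                 # prediction error
    theta = theta + K * err                        # parameter update
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Toy usage: recover theta_true from noisy linear-in-parameters data.
rng = np.random.default_rng(0)
theta_true = np.array([[1.5], [-0.7], [0.3]])
theta, P = np.zeros((3, 1)), 1e3 * np.eye(3)
for _ in range(500):
    phi = rng.normal(size=3)
    y = float(phi @ theta_true) + 0.01 * rng.normal()  # zero-mean noise assumption
    theta, P = rls_step(theta, P, phi, y)
print(theta.ravel())   # close to [1.5, -0.7, 0.3]
```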

Findings

Based on a detailed mathematical derivation, the inverse of a given piecewise affine nonlinear function is shown to be of piecewise affine form as well. Because the least squares method is suited only to the case where the noise is a zero-mean random signal, a projection algorithm with a dead zone enhances the robustness of the parameter update equation in the presence of bounded noise.

Originality/value

To the best of the authors' knowledge, this is the first attempt at identifying piecewise affine Hammerstein models, which combine a piecewise affine function and a linear dynamical system. In the presence of bounded noise, the modified recursive least squares methods are efficient in identifying the two kinds of unknown parameters, so that the common set membership method can be replaced by the proposed methods.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in…

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature, UQ is not an easy subject for novice researchers/practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing state of affairs significantly hampers research progress and the practical application of UQ methods in engineering. In the context of engineering analysis, UQ research efforts are concentrated in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
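
To make the flavour of such structural reliability calculations concrete, the sketch below estimates a failure probability for a made-up limit-state function with crude Monte Carlo and simple importance sampling, two of the SRA techniques named above; the limit state and assumed design point are placeholders, not examples from the paper.

```python
# Illustrative sketch only: estimating P[g(X) < 0] for a toy limit-state function.
import numpy as np
from scipy.stats import norm

def g(x):                        # toy limit state: failure when g < 0
    return 3.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(0)
n = 100_000

# Crude Monte Carlo with standard normal inputs.
x = rng.standard_normal((n, 2))
pf_mc = np.mean(g(x) < 0)

# Importance sampling: shift the sampling density towards an assumed design point.
mu = np.array([1.5, 1.5])        # assumed most-probable failure point (placeholder)
z = rng.standard_normal((n, 2)) + mu
w = np.exp(-0.5 * np.sum(z**2, axis=1)) / np.exp(-0.5 * np.sum((z - mu)**2, axis=1))
pf_is = np.mean((g(z) < 0) * w)

# Both estimates should approach the exact value Phi(-3/sqrt(2)).
print(pf_mc, pf_is, norm.cdf(-3 / np.sqrt(2)))
```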

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 March 1987

Natasha Owen

It is estimated that managers spend more time in meetings than in any other single activity — these will vary from informal discussion or briefing sessions around the workplace to…

Abstract

It is estimated that managers spend more time in meetings than in any other single activity — these will vary from informal discussion or briefing sessions around the workplace to carefully planned formal presentations involving sophisticated audio‐visual support. Even in the smallest of organisations, the provision of comfortable, well‐designed and adequately equipped meeting rooms is essential. Not only is the meeting room a prime tool for management, training and selling the organisation's service, it is also the most potent indicator of corporate image and an important element in the overall design brief.

Details

Facilities, vol. 5 no. 3
Type: Research Article
ISSN: 0263-2772

Article
Publication date: 14 August 2017

Fei Cheng, Kai Liu, Mao-Guo Gong, Kaiyuan Fu and Jiangbo Xi

The purpose of this paper is to design a robust tracking algorithm which meets real-time requirements and solves the mislabeling issue in the appearance model of…

Abstract

Purpose

The purpose of this paper is to design a robust tracking algorithm which meets real-time requirements and solves the mislabeling issue in the appearance model of trackers with sparse features.

Design/methodology/approach

This paper proposes a tracker that selects the most discriminative randomly projected ferns and integrates a coarse-to-fine search strategy into this framework. First, the authors exploit multiple instance boosting to maximize the bag likelihood and select randomly projected ferns from the feature pool to reduce the effect of mislabeling. Second, a coarse-to-fine search approach is integrated into the multiple instance learning (MIL) framework for the first time to reduce the number of detections.
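
As a rough sketch of the coarse-to-fine search idea only (not the authors' tracker), the snippet below scores a sparse grid of candidate positions around the previous target location and then refines densely around the best coarse candidate; the scoring function stands in for the boosted fern ensemble and is a made-up placeholder.

```python
# Illustrative sketch of a coarse-to-fine candidate search around the previous position.
import numpy as np

def coarse_to_fine_search(prev_xy, score, radius=30, coarse_step=8, fine_step=1):
    """Return the candidate (x, y) with the highest score, searched coarse then fine."""
    px, py = prev_xy
    # Coarse pass: sparse grid over the full search radius.
    coarse = [(px + dx, py + dy)
              for dx in range(-radius, radius + 1, coarse_step)
              for dy in range(-radius, radius + 1, coarse_step)]
    bx, by = max(coarse, key=score)
    # Fine pass: dense grid only around the best coarse candidate.
    fine = [(bx + dx, by + dy)
            for dx in range(-coarse_step, coarse_step + 1, fine_step)
            for dy in range(-coarse_step, coarse_step + 1, fine_step)]
    return max(fine, key=score)

# Toy usage with a synthetic "appearance model" peaked at (105, 98).
target = np.array([105.0, 98.0])
score = lambda c: -np.sum((np.array(c, dtype=float) - target) ** 2)
print(coarse_to_fine_search((100, 100), score))   # close to (105, 98)
```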

Findings

Quantitative and qualitative experiments demonstrate that the tracker shows favorable efficiency and effectiveness compared with competing tracking algorithms.

Originality/value

The proposed method selects features from the compressive domain by MIL AnyBoost and is the first to integrate the coarse-to-fine search strategy to reduce the detection burden. The resulting tracker offers high speed and favorable results, making it well suited to real-time scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial cluster centroids determine computational effectiveness…

Abstract

Purpose

The K-means (KM) clustering algorithm is extremely sensitive to the selection of initial centroids, since the initial cluster centroids determine computational effectiveness, efficiency and local optima issues. Numerous initialization strategies have been proposed to overcome these problems through random or deterministic selection of initial centroids. The random initialization strategy suffers from local optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs high computational cost. Big data clustering aims to reduce computation costs and improve cluster efficiency. The objective of this study is to achieve better initial centroids for big data clustering on business management data without using random or deterministic initialization, avoiding local optima and improving clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density based KM (NDPDKM) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm resolves the KM clustering problem by using the probability density of each data point. It first identifies the most probable data points by using the mean and standard deviation of the dataset through the normal probability density. Thereafter, it determines the K initial centroids by using sorting and linear systematic sampling heuristics.
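
The sketch below illustrates that style of initialization as described in the abstract (it is not the authors' code, and the exact ordering and sampling details are assumptions): rank points by their normal probability density computed from the dataset mean and standard deviation, pick K seeds by systematic sampling over the sorted points, and pass them to K-means.

```python
# Illustrative sketch of a density-ranked, systematically sampled K-means initialization.
import numpy as np
from sklearn.cluster import KMeans

def ndpd_style_seeds(X, k):
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    # Per-point density under an independent-normal model, summed over features.
    dens = np.sum(np.exp(-0.5 * ((X - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi)), axis=1)
    order = np.argsort(dens)        # sort points by density
    step = len(X) // k              # linear systematic sampling over the sorted points
    idx = order[step // 2::step][:k]
    return X[idx]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(300, 4)) for c in (-4, 0, 4)])  # toy data
seeds = ndpd_style_seeds(X, k=3)
km = KMeans(n_clusters=3, init=seeds, n_init=1).fit(X)
print(km.inertia_, km.n_iter_)      # convergence cost and iteration count
```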

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms using the Davies-Bouldin score, Silhouette coefficient, SD validity, S_Dbw validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing costs, and improves cluster performance, effectiveness and efficiency with stable convergence compared with the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average number of iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.
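
For reference, two of the validation indices named above can be computed for any clustering result with scikit-learn, as in the small sketch below (synthetic data, not the paper's evaluation pipeline).

```python
# Small sketch: computing Davies-Bouldin and Silhouette indices for a clustering.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

X, _ = make_blobs(n_samples=600, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin:", davies_bouldin_score(X, labels))   # lower is better
print("Silhouette:    ", silhouette_score(X, labels))       # higher is better
```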

Originality/value

The KM algorithm is the most widely used partitional clustering approach among data mining techniques that extract hidden knowledge, patterns and trends for decision-making strategies from business data. Business analytics is one application of big data clustering in which KM clustering is useful for various subcategories such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching, and other decision-making strategies in business.

Article
Publication date: 7 November 2017

Naveed Riaz, Ayesha Riaz and Sajid Ali Khan

The security of the stored biometric template is itself a challenge. Feature transformation techniques and biometric cryptosystems are used to address the concerns and improve the…

Abstract

Purpose

The security of the stored biometric template is itself a challenge. Feature transformation techniques and biometric cryptosystems are used to address the concerns and improve the general acceptance of biometrics. The purpose of this paper is to provide an overview of different techniques and processes for securing the biometric templates. Furthermore, the paper explores current research trends in this area.

Design/methodology/approach

In this paper, the authors provide an overview and survey of different feature transformation techniques and biometric cryptosystems.
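
As a toy illustration of one feature-transformation family covered by such surveys (a random-projection-based cancelable template in the spirit of BioHashing, not a scheme from this paper), the sketch below derives a revocable binary template from a key-seeded projection; the feature vector and key are placeholders and this is not a production design.

```python
# Toy illustration of a random-projection-based cancelable biometric template.
import numpy as np

def cancelable_template(features, user_key, out_dim=64):
    """Project features with a key-seeded random matrix and binarise the result."""
    rng = np.random.default_rng(user_key)        # user-specific key seeds the projection
    R = rng.normal(size=(out_dim, features.size))
    return (R @ features > 0).astype(np.uint8)   # only the binary template is stored

rng = np.random.default_rng(0)
probe = rng.normal(size=256)                     # placeholder biometric feature vector
enrolled = cancelable_template(probe + 0.1 * rng.normal(size=256), user_key=1234)
query = cancelable_template(probe, user_key=1234)
print("Hamming distance:", int(np.sum(enrolled != query)))  # small for same user and key
# Revoking the key (e.g. user_key=5678) yields an entirely new, unlinkable template.
```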

Findings

Feature transformation techniques and biometric cryptosystems provide reliable biometric security at a high level. There are many techniques that provide provable security with practical viable recognition rates. However, there remain several issues and challenges that are being faced during the deployment of these technologies.

Originality/value

This paper provides an overview of currently used techniques for securing biometric templates and also outlines the related issues and challenges.

Details

Sensor Review, vol. 38 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 21 June 2011

Yi‐ling Lin, Peter Brusilovsky and Daqing He

The goal of the research is to explore whether the use of higher‐level semantic features can help to build a better self‐organising map (SOM) representation as measured from a…

Abstract

Purpose

The goal of the research is to explore whether the use of higher‐level semantic features can help to build a better self‐organising map (SOM) representation as measured from a human‐centred perspective. The authors also explore an automatic evaluation method that utilises human expert knowledge, encapsulated in the structure of traditional textbooks, to determine map representation quality.

Design/methodology/approach

Two types of document representation involving semantic features were explored: using an individual semantic feature alone, and mixing a semantic feature with keywords. Experiments were conducted to investigate the impact of semantic representation quality on the map. The experiments were performed on data collections from a single book corpus and a multiple book corpus.
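
The sketch below is only a schematic of the mixing idea (not the authors' system): a tiny hand-rolled SOM is trained on a weighted concatenation of keyword and semantic feature vectors, with the weight alpha playing the role of the mixing ratio discussed in the findings; all vectors here are random placeholders.

```python
# Illustrative sketch: train a small SOM on mixed keyword + semantic document vectors.
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)                  # neighbourhood update
    return W

rng = np.random.default_rng(1)
keywords = rng.random((200, 300))   # placeholder keyword (e.g. TF-IDF-like) vectors
semantic = rng.random((200, 50))    # placeholder higher-level semantic feature vectors
alpha = 0.7                         # mixing ratio between the two representations
docs = np.hstack([alpha * keywords, (1 - alpha) * semantic])
som = train_som(docs)
print(som.shape)                    # (8, 8, 350) grid of document prototypes
```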

Findings

Combining keywords with certain semantic features achieves a significant improvement in representation quality over the keywords‐only approach in a relatively homogeneous single book corpus. The ratios in which the different features are combined also affect performance. While semantic mixtures can work well in a single book corpus, they lose their advantage over keywords in the multiple book corpus. This raises the concern of whether the semantic representations in the multiple book corpus are homogeneous and coherent enough for applying semantic features. Terminology differences among textbooks affect the ability of the SOM to generate a high-quality map for heterogeneous collections.

Originality/value

The authors explored the use of higher‐level document representation features for the development of a better-quality SOM. In addition, the authors piloted a specific method for evaluating SOM quality based on the organisation of information content in the map.

Details

Online Information Review, vol. 35 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 January 1988

Joachim Lauer and Terrence O'Brien

A forecasting method involving construction and interpretation of the business cycle is presented. Definition and development of lead indicators are discussed. These tools provide…

Abstract

A forecasting method involving construction and interpretation of the business cycle is presented. Definition and development of lead indicators are discussed. These tools provide management with short‐ to medium‐term forecasts of sales activity. Insights into the reasonableness of the forecasts and guidance for appropriate management actions are discussed. Data from an actual company are used to illustrate computation and interpretation procedures.
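
A toy sketch of the kind of calculation such lead-indicator analysis involves is shown below (the series are synthetic placeholders, not data from the article): the lead time of a candidate indicator over sales is estimated as the lag that maximises their correlation.

```python
# Toy sketch: estimate how many periods a candidate lead indicator runs ahead of sales.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
cycle = np.sin(2 * np.pi * t / 48)                          # a stylised business cycle
indicator = cycle + 0.2 * rng.normal(size=t.size)           # candidate lead indicator
sales = np.roll(cycle, 6) + 0.2 * rng.normal(size=t.size)   # sales trail the cycle by ~6 periods

def best_lead(indicator, sales, max_lag=12):
    """Return (lag, correlation) for the lag at which the indicator best predicts sales."""
    lags = range(max_lag + 1)
    corr = [np.corrcoef(indicator[:-k or None], sales[k:])[0, 1] for k in lags]
    return int(np.argmax(corr)), max(corr)

lead, r = best_lead(indicator, sales)
print(f"indicator leads sales by ~{lead} periods (r = {r:.2f})")
```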

Details

Journal of Business & Industrial Marketing, vol. 3 no. 1
Type: Research Article
ISSN: 0885-8624
