Search results

1 – 10 of over 78,000
Article
Publication date: 1 January 1996

Danny Shiem‐Shin Then

Aims to raise awareness of the cost premiums involved in data collection. Examines benchmarking, its goals and appropriate support, and explains the process of benchmarking…


Abstract

Aims to raise awareness of the cost premiums involved in data collection. Examines benchmarking, its goals and appropriate support, and explains the process of benchmarking. Highlights the importance of distinguishing between data and information.

Details

Facilities, vol. 14 no. 1/2
Type: Research Article
ISSN: 0263-2772


Article
Publication date: 21 November 2018

Mahmoud Elish

Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to…

Abstract

Purpose

Effective and efficient software security inspection is crucial as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared to common, popular and recent machine learning models.

Design/methodology/approach

An empirical study was conducted in which the SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models was then evaluated and compared based on accuracy, precision, recall and F-measure.
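
A minimal sketch of the kind of evaluation described above, assuming the vulnerability data are already available as a numeric feature matrix X with binary labels y (1 = vulnerable component). scikit-learn's GradientBoostingClassifier with subsample below 1.0 stands in for the SGBT model; the 16 comparison models and the paper's PHP data sets are not reproduced, and the data below are synthetic placeholders.

```python
# Hedged sketch: stochastic gradient boosting (subsample < 1.0) cross-validated
# on placeholder data, reporting the four metrics named in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))               # placeholder component metrics
y = (X[:, 0] + X[:, 3] > 0.5).astype(int)    # placeholder vulnerability labels

sgbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                  subsample=0.8)   # subsample < 1.0 => stochastic
scores = cross_validate(sgbt, X, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1"])
for metric in ["accuracy", "precision", "recall", "f1"]:
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```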

Findings

The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components.

Originality/value

This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.

Details

International Journal of Web Information Systems, vol. 15 no. 2
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 16 August 2023

Taraprasad Mohapatra, Sudhansu Sekhar Mishra, Mukesh Bathre and Sudhansu Sekhar Sahoo

The study aims to determine the optimal values of the output parameters of a variable compression ratio (CR) diesel engine, investigated experimentally at different loads, CRs and fuel modes of…

Abstract

Purpose

The study aims to determine the optimal values of the output parameters of a variable compression ratio (CR) diesel engine, investigated experimentally at different loads, CRs and fuel modes of operation. The performance parameters measured during the tests are brake thermal efficiency (BTE) and brake specific energy consumption (BSEC), while CO emission, HC emission, CO2 emission, NOx emission, exhaust gas temperature (EGT) and opacity are the emission parameters. Tests are conducted at loads of 2, 6 and 10 kg and at CRs of 16.5 and 17.5.

Design/methodology/approach

In this investigation, the engine was first fueled with 100% diesel and with 100% Calophyllum inophyllum oil in single-fuel mode. Then Calophyllum inophyllum oil with producer gas was fed to the engine in dual-fuel mode. Calophyllum inophyllum oil is observed to offer lower BTE, CO and HC emissions and opacity, and higher EGT, BSEC, CO2 emission and NOx emissions, compared to diesel fuel in both fuel modes of operation. Performance optimization using the Taguchi approach is carried out to determine the optimal input parameters for maximum performance and minimum emissions for the test engine. The optimized values of the input parameters are then fed into prediction techniques, such as the artificial neural network (ANN).
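
The Taguchi step above typically works with signal-to-noise (S/N) ratios: larger-the-better for responses to maximize (e.g. BTE) and smaller-the-better for responses to minimize (e.g. NOx). The sketch below only illustrates these standard ratios on made-up replicate values; it is not the authors' experimental data or their ANN pipeline.

```python
# Illustrative Taguchi S/N ratios on made-up replicate measurements for one
# input setting (CR, load, fuel mode). Not the paper's data.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10*log10(mean(1/y^2)); for responses to maximize, e.g. BTE."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """S/N = -10*log10(mean(y^2)); for responses to minimize, e.g. NOx."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

bte_replicates = [21.1, 21.6, 21.4]      # % BTE, illustrative values
nox_replicates = [195.0, 189.0, 192.0]   # ppm NOx, illustrative values

print("BTE S/N (dB):", round(sn_larger_is_better(bte_replicates), 2))
print("NOx S/N (dB):", round(sn_smaller_is_better(nox_replicates), 2))
```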

Findings

From the multiple-response optimization, minimum emissions of 0.58% CO, 42% HC and 191 ppm NOx, and a maximum BTE of 21.56%, are determined for a CR of 16.5, a 10 kg load and dual-fuel operation. Based on the generated errors, the ANN is also ranked for precision. The proposed ANN model provides better prediction with a minimum of experimental data sets. The values of the R2 correlation coefficient are 1, 0.95552, 0.94367 and 0.97789 for training, validation, testing and all data, respectively. The said biodiesel may be used as a substitute for conventional diesel fuel.

Originality/value

The blend of Calophyllum inophyllum oil-producer gas is used to run the diesel engine. Performance and emission analysis has been carried out, compared, optimized and validated.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284


Article
Publication date: 1 February 2003

Patrick Xavier

There is growing concern that some groups without access to high‐speed broadband networks, e.g. those residing in rural and remote areas, will be unable to benefit from online…


Abstract

There is growing concern that some groups without access to high‐speed broadband networks, e.g. those residing in rural and remote areas, will be unable to benefit from online education, health and government services, etc. Such concerns have led to arguments that universal service obligations (USOs) should be upgraded to include access to broadband. This paper reviews the arguments and concludes that, at this stage of broadband development and diffusion, there is no convincing case for USO‐type mandates. Since the case for broadband USOs should be intermittently revisited, the paper proceeds, nevertheless, to explore what would be involved in a systematic review of this issue.

Details

info, vol. 5 no. 1
Type: Research Article
ISSN: 1463-6697


Article
Publication date: 29 April 2014

Mohammad Amin Shayegan and Saeed Aghabozorgi

Pattern recognition systems often have to handle the problem of large volumes of training data, including duplicate and similar training samples. This problem leads to large memory…

Abstract

Purpose

Pattern recognition systems often have to handle the problem of large volumes of training data, including duplicate and similar training samples. This problem leads to large memory requirements for saving and processing data, and high time complexity for training algorithms. The purpose of the paper is to reduce the volume of the training part of a data set in order to increase the system speed, without any significant decrease in system accuracy.

Design/methodology/approach

A new technique for data set size reduction, using a version of a modified frequency-diagram approach, is presented. In order to reduce processing time, the proposed method compares the samples of a class to other samples in the same class, instead of comparing samples from different classes. It only removes patterns that are similar to the generated class template in each class. To achieve this aim, no feature extraction operation was carried out, in order to produce a more precise assessment of the proposed data size reduction technique.
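
As a rough illustration of the idea (not the authors' modified frequency-diagram formulation), the sketch below builds a per-class template and drops samples that lie too close to it; the data, distance measure and threshold are placeholders.

```python
# Simplified template-based pruning within one class: samples very similar to
# the class template are removed. Placeholder data and threshold; the paper's
# modified frequency-diagram method is more involved.
import numpy as np

def prune_class(samples, keep_distance):
    """samples: (n, d) array of binarized images from a single class."""
    template = samples.mean(axis=0)                     # class template
    dists = np.linalg.norm(samples - template, axis=1)  # similarity to template
    return samples[dists >= keep_distance]              # drop near-duplicates

rng = np.random.default_rng(1)
prototype = (rng.random(64) > 0.5).astype(float)        # idealized digit shape
flips = rng.random((1000, 64)) < 0.10                   # 10% pixel noise
digits = np.abs(prototype - flips.astype(float))        # noisy class samples
reduced = prune_class(digits, keep_distance=2.3)
print(f"kept {len(reduced)} of {len(digits)} samples")
```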

Findings

The results from experiments on Hoda, one of the largest standard handwritten-numeral optical character recognition (OCR) data sets, show a 14.88 percent decrease in data set volume without a significant decrease in performance.

Practical implications

The proposed technique is effective for size reduction of all pictorial databases, such as OCR data sets.

Originality/value

State-of-the-art algorithms currently used for data set size reduction usually remove samples near to class centers, or support vector (SV) samples between different classes. However, the samples near to a class center carry valuable information about class characteristics, and they are necessary to build a system model. Also, SVs are important samples for evaluating system efficiency. The proposed technique, unlike other available methods, keeps both the outlier samples and the samples close to the class centers.

Article
Publication date: 9 May 2016

Melinda Hodkiewicz and Mark Tien-Wei Ho

The purpose of this paper is to identify quality issues with using historical work order (WO) data from computerised maintenance management systems for reliability analysis; and…


Abstract

Purpose

The purpose of this paper is to identify quality issues with using historical work order (WO) data from computerised maintenance management systems for reliability analysis; and develop an efficient and transparent process to correct these data quality issues to ensure data is fit for purpose in a timely manner.

Design/methodology/approach

This paper develops a rule-based approach to data cleansing and demonstrates the process on data for heavy mobile equipment from a number of organisations.

Findings

Although historical WO records frequently contain missing or incorrect functional location, failure mode, maintenance action and WO status fields, the authors demonstrate that it is possible to make these records fit for purpose by using data in the freeform text fields; an understanding of the maintenance tactics and practices at the operation; and knowledge of where the asset is in its life cycle. The authors demonstrate that it is possible to have a repeatable and transparent process to deal with the data cleaning activities.
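
To make the flavour of such a rule concrete, the hypothetical sketch below fills a missing failure-mode field from keywords in the freeform WO text. The keyword-to-code mapping and field names are illustrative assumptions, not the authors' actual rule set.

```python
# Hypothetical single cleansing rule: infer a missing failure-mode code from
# keywords found in the freeform work-order description. Codes and keywords
# are illustrative, not the authors' rules.
import re

FAILURE_MODE_RULES = [
    (re.compile(r"\bleak(ing|age)?\b", re.I), "LEAK"),
    (re.compile(r"\bcrack(ed|ing)?\b", re.I), "CRACK"),
    (re.compile(r"\b(won'?t|does not|no) start\b", re.I), "FAIL_TO_START"),
]

def fill_failure_mode(record: dict) -> dict:
    """If 'failure_mode' is empty, try to infer it from the 'description' text."""
    if not record.get("failure_mode"):
        text = record.get("description", "")
        for pattern, code in FAILURE_MODE_RULES:
            if pattern.search(text):
                record["failure_mode"] = code
                break
    return record

wo = {"description": "Hydraulic hose leaking at boom cylinder", "failure_mode": None}
print(fill_failure_mode(wo))   # failure_mode inferred as "LEAK"
```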

Originality/value

How engineers deal with raw maintenance data and the decisions they make in order to produce a data set for reliability analysis is seldom discussed in detail. Assumptions and actions are often left undocumented. This paper describes typical data cleaning decisions we all have to make as a routine part of the analysis and presents a process to support the data cleaning decisions in a repeatable and transparent fashion.

Details

Journal of Quality in Maintenance Engineering, vol. 22 no. 2
Type: Research Article
ISSN: 1355-2511


Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, reliability and accuracy of resulting deformation…


Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which is not fully addressed. This study aims to give insight into the errors of these steps, and the results of the study are intended as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. Thus, the main contributions of the paper are: investigating the point cloud registration error affecting the resulting deformation estimation; identifying an appropriate segmentation method for extracting the data points of a deformed surface; investigating a methodology to determine an un-deformed or reference surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimation based on laser scanning point clouds, which can cause an incorrect decision on the state of the structure if uncertainty is present. In an effort to gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings

The study shows that both the random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG prove to be suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. The use of a fitting plane based on a point cloud of the current surface would cause unrealistic and inaccurate deformation estimates, because outlier data points and data points of damaged areas affect the accuracy of the fitting plane. This study therefore recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
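
For readers unfamiliar with the RANSAC step mentioned above, the numpy-only sketch below fits a plane to a synthetic "wall" scan and reports point-to-plane deviations as a crude deformation indicator. It is an assumption-laden illustration, not the authors' CRG/VRG segmentation or their full workflow.

```python
# Minimal RANSAC plane fit on a synthetic point cloud, then signed
# point-to-plane distances as a crude deformation indicator. Illustrative only.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.005, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.zeros(len(points), bool), None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-12:
            continue                                  # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        inliers = np.abs((points - p1) @ normal) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, p1)
    return best_model, best_inliers

rng = np.random.default_rng(2)
wall = np.column_stack([rng.uniform(0, 5, 2000),
                        rng.uniform(0, 3, 2000),
                        rng.normal(0.0, 0.002, 2000)])  # nearly planar surface
(normal, point), inliers = ransac_plane(wall)
deviation = (wall - point) @ normal                     # signed distance to plane
print("inliers:", int(inliers.sum()),
      "max |deviation| (m):", round(float(np.abs(deviation).max()), 4))
```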

Research limitations/implications

Due to logistical difficulties, an independent measurement could not be established to assess the accuracy of the deformation estimated from TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with an accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors of these steps, and the results of the study serve as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications

The results of this study would provide guidelines for a practical community to either develop a new workflow or refine an existing one of deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of the structure.

Originality/value

Although a large number of studies have used laser scanning to measure structural deformation in the last two decades, the methods mainly applied measured change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708


Article
Publication date: 2 May 2017

Kannan S. and Somasundaram K.

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed…

Abstract

Purpose

Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumption for handling large-sized, non-uniform transactions.

Design/methodology/approach

The AR-based outlier design produces consistent, asymptotically distributed results that enhance the demand-forecasting abilities. Besides, the inter-quartile range (IQR) formulations proposed in this paper support the detailed analysis of time-series data pairs.
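
A minimal sketch of how an autoregressive fit can be combined with IQR fences is shown below, assuming a one-dimensional series of transaction amounts; the AR order, fence factor and synthetic data are placeholders, not the paper's AROMLD formulation.

```python
# Fit an AR(p) model by ordinary least squares and flag transactions whose
# residuals fall outside Tukey-style IQR fences. Placeholder data and settings.
import numpy as np

def ar_iqr_outliers(series, order=3, k=1.5):
    x = np.asarray(series, dtype=float)
    # Lag matrix: row t holds x[t..t+order-1], used to predict x[t+order].
    lags = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    design = np.column_stack([np.ones(len(lags)), lags])
    target = x[order:]
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ coef
    q1, q3 = np.percentile(resid, [25, 75])
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    flagged = (resid < lo) | (resid > hi)
    return np.flatnonzero(flagged) + order   # indices in the original series

rng = np.random.default_rng(3)
amounts = rng.normal(1000.0, 50.0, 365)      # synthetic daily transaction totals
amounts[200] = 25000.0                       # injected suspicious transaction
# The spike at index 200 is flagged; a few borderline points may appear as well.
print(ar_iqr_outliers(amounts))
```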

Findings

The prediction of high dimensionality and the difficulties in the relationship/difference between the data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates the regressive formulation for outlier detection. The deep analysis of the time-varying process and the demands of forecasting combine the AR and the IQR formulations for effective outlier detection.

Research limitations/implications

The present research focuses on the detection of outliers in previous financial transactions using the AR model. Predicting the possibility of an outlier in future transactions remains a major issue.

Originality/value

The lack of prior segmentation in ML detection suffers from high dimensionality. Besides, the absence of a boundary to isolate normal from suspicious transactions induces limitations. The lack of deep analysis and the high time consumption are overcome by using the regression formulation.

Details

Journal of Money Laundering Control, vol. 20 no. 2
Type: Research Article
ISSN: 1368-5201


Article
Publication date: 5 October 2022

Sophiya Shiekh, Mohammad Shahid, Manas Sambare, Raza Abbas Haidri and Dileep Kumar Yadav

Cloud computing provides several on-demand infrastructural services by dynamically pooling heterogeneous resources to cater to users’ applications. Task scheduling needs to be…


Abstract

Purpose

Cloud computing provides several on-demand infrastructural services by dynamically pooling heterogeneous resources to cater to users’ applications. Task scheduling needs to be done optimally to achieve proficient results in a cloud computing environment. While satisfying the user’s requirements in a cloud environment, scheduling has been proven to be an NP-hard problem. Therefore, it leaves scope to develop new allocation models for the problem. The aim of the study is to develop a load-balancing method to maximize resource utilization in a cloud environment.

Design/methodology/approach

In this paper, the parallelized task allocation with load balancing (PTAL) hybrid heuristic is proposed for jobs coming from various users. These jobs are allocated to the resources one by one in a parallelized manner as they arrive in the cloud system. The novel algorithm works in three phases: parallelization, task allocation and task reallocation. The proposed model is designed for efficient task allocation, reallocation of resources and adequate load balancing to achieve better quality of service (QoS) results.
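
PTAL itself is not reproduced here; as a rough illustration of least-loaded allocation followed by a reallocation pass, the toy sketch below assigns incoming tasks greedily and then moves a task from the busiest to the least busy resource when that reduces the imbalance. All task lengths and resource counts are placeholders.

```python
# Toy two-phase illustration: greedy least-loaded allocation, then one simple
# reallocation pass. Not the authors' PTAL heuristic.
import heapq

def allocate(task_lengths, n_resources):
    heap = [(0.0, r) for r in range(n_resources)]       # (load, resource id)
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_resources)}
    for t, length in enumerate(task_lengths):
        load, r = heapq.heappop(heap)                    # least-loaded resource
        assignment[r].append(t)
        heapq.heappush(heap, (load + length, r))
    return assignment

def reallocate(assignment, task_lengths):
    loads = {r: sum(task_lengths[t] for t in ts) for r, ts in assignment.items()}
    busiest, idlest = max(loads, key=loads.get), min(loads, key=loads.get)
    for t in sorted(assignment[busiest], key=lambda t: task_lengths[t]):
        # Move the smallest task that still leaves the busiest at least as loaded.
        if loads[busiest] - task_lengths[t] >= loads[idlest] + task_lengths[t]:
            assignment[busiest].remove(t)
            assignment[idlest].append(t)
            break
    return assignment

tasks = [5, 3, 8, 2, 7, 4, 6, 1]                         # placeholder task lengths
print(reallocate(allocate(tasks, 3), tasks))
```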

Findings

The acquired empirical results show that PTAL performs better than other scheduling strategies under various cases for different QoS parameters under study.

Originality/value

The outcome has been examined on a real data set to evaluate it against different state-of-the-art heuristics with comparable objective parameters.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 5
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 1 March 2006

Jane Parkinson

The growing interest in the mental health and well‐being of populations raises questions about traditional measures of public mental health, which have largely focused on levels…

Abstract

The growing interest in the mental health and well‐being of populations raises questions about traditional measures of public mental health, which have largely focused on levels of psychiatric morbidity. This paper describes work in progress to identify a set of national mental health and well‐being indicators for Scotland that could be used to establish a summary mental health profile, as a starting point for monitoring future trends. The process in taking this work forward involves identifying a desirable set of indicators, scoping the data that are currently collected nationally in Scotland, identifying additional data needs, and ensuring existing data collection systems include mental health and well‐being. It is expected that an indicator set for adults will have been identified by 2007. The paper presents some of the conceptual and practical challenges involved in defining and measuring positive mental health and is presented here as a contribution to ongoing debates in this field.

Details

Journal of Public Mental Health, vol. 5 no. 1
Type: Research Article
ISSN: 1746-5729


1 – 10 of over 78,000