Search results

1–10 of over 144,000
Article
Publication date: 18 April 2008

J. Rodrigues Dias and Paulo Infante

Downloads: 1118

Abstract

Purpose

The purpose of this paper is to investigate a new sampling methodology previously proposed for systems with a known lifetime distribution: the Predetermined Sampling Intervals (PSI) method.

Design/methodology/approach

The methodology is defined on the basis of the system's cumulative hazard rate and is compared with other approaches, particularly those whose parameters may change in real time to take current sample information into account.

Findings

For different lifetime distributions, the results obtained for the adjusted average time to signal (AATS) using a control chart for the sample mean are presented and analysed. They demonstrate the strong statistical performance of this sampling procedure, particularly when it is used in systems with an increasing failure rate distribution.

Practical implications

This PSI method is important from a quality and reliability management point of view.

Originality/value

With this methodology, all sampling instants are obtained at the beginning of the process to be controlled. The new approach also allows for statistical comparison with other sampling schemes, which is a novel feature.

Details

International Journal of Quality & Reliability Management, vol. 25 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 5 October 2015

Xiaoke Li, Haobo Qiu, Zhenzhong Chen, Liang Gao and Xinyu Shao

Downloads: 419

Abstract

Purpose

The Kriging model has been widely adopted to reduce the high computational cost of simulations in reliability-based design optimization (RBDO). To construct the Kriging model accurately and efficiently in the region of significance, a local sampling method with variable radius (LSVR) is proposed.

Design/methodology/approach

In LSVR, the sequential sampling points are mainly selected within the local region around the current design point. The size of the local region is adaptively defined according to the target reliability and the nonlinearity of the probabilistic constraint. Every probabilistic constraint has its own local region instead of all constraints sharing one local region. In the local sampling region, the points located on the constraint boundary and the points with high uncertainty are considered simultaneously.

Findings

The computational capability of the proposed method is demonstrated using two mathematical problems, a reducer design and a box girder design of a super heavy machine tool. The comparison results show that the proposed method is very efficient and accurate.

Originality/value

The main contribution of this paper is a new computational criterion for the local sampling region in Kriging. Its originality lies in using the expected feasibility function (EFF) criterion and the shortest distance to the existing sample points, instead of other types of sequential sampling criteria, to deal with the low-efficiency problem.

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 2 October 2017

Mengni Zhang, Can Wang, Jiajun Bu, Liangcheng Li and Zhi Yu

Abstract

Purpose

As existing studies show that the accuracy of sampling methods in web accessibility evaluation depends heavily on the evaluation metric, the purpose of this paper is to propose a sampling method, OPS-WAQM, optimized for the Web Accessibility Quantitative Metric (WAQM). Furthermore, to support quick accessibility evaluation and real-time website accessibility monitoring, the authors also provide an online extension of the sampling method.

Design/methodology/approach

In the OPS-WAQM method, the authors propose a minimal sampling error model for WAQM and use a greedy algorithm to approximately solve the optimization problem that determines the sample numbers in different layers. To make OPS-WAQM online, the authors apply a sampling-in-crawling strategy.
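The greedy allocation step can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the per-layer error model (sampling error assumed to shrink like s_i/√n_i) and the function name are assumptions made for the sketch.

```python
import math

def greedy_allocate(layer_errors, total_samples):
    """Greedily assign a sample budget across layers.

    layer_errors: hypothetical per-layer error scales s_i (a stand-in
    for the paper's WAQM error model); each layer's sampling error is
    assumed to shrink like s_i / sqrt(n_i).
    """
    n = [1] * len(layer_errors)  # start with one sample per layer
    for _ in range(total_samples - len(layer_errors)):
        # marginal error reduction from adding one sample to layer i
        gains = [s / math.sqrt(k) - s / math.sqrt(k + 1)
                 for s, k in zip(layer_errors, n)]
        n[gains.index(max(gains))] += 1
    return n

allocation = greedy_allocate([3.0, 1.0, 0.5], 12)  # noisiest layer gets the most samples
```

Each iteration spends one sample where it reduces the modelled error most, which approximates the optimal allocation without solving the integer program exactly.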

Findings

The sampling method OPS-WAQM and its online extension can both achieve good sampling quality by choosing the optimal sample numbers in different layers. Moreover, the online extension can also support quick accessibility evaluation by sampling and evaluating the pages in crawling.

Originality/value

To the best of the authors’ knowledge, the sampling method OPS-WAQM in this paper is the first attempt to optimize for a specific evaluation metric. Meanwhile, the online extension not only greatly reduces the serious I/O issues in existing web accessibility evaluation, but also supports quick web accessibility evaluation by sampling in crawling.

Details

Internet Research, vol. 27 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 18 November 2019

Guanying Huo, Xin Jiang, Zhiming Zheng and Deyi Xue

Abstract

Purpose

Metamodeling is an effective method to approximate the relations between input and output parameters when significant experimental and simulation effort is required to collect the data that build those relations. This paper aims to develop a new sequential sampling method for adaptive metamodeling of data with highly nonlinear relations between input and output parameters.

Design/methodology/approach

In this method, the Latin hypercube sampling method is used to sample the initial data, and the kriging method is used to construct the metamodel. The input parameter values at which the next output data are collected to update the current metamodel are determined based on the quality of the data in both the input and output parameter spaces. Uniformity is used to evaluate data in the input parameter space; leave-one-out errors and sensitivities are considered to evaluate data in the output parameter space.
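The initial-design step, Latin hypercube sampling, can be sketched with the standard library alone (a plain illustration on the unit hypercube; the paper's actual implementation is not given in the abstract):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n points in [0, 1)^dims with one point per equal-width
    stratum in every dimension (a minimal LHS sketch)."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))      # one bin index per sample
        rng.shuffle(perm)          # shuffled independently per dimension
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

points = latin_hypercube(5, 2)  # 5 points in 2-D; each axis bin is used exactly once
```

The stratification guarantees coverage of every one-dimensional projection, which is why LHS is a common choice for the initial design before sequential refinement.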

Findings

This new method has been compared with existing methods to demonstrate its effectiveness both in approximation and in solving global optimization problems. Finally, an engineering case is used to verify the method further.

Originality/value

This paper provides an effective sequential sampling method for adaptive metamodeling to approximate highly nonlinear relations between input and output parameters.

Details

Engineering Computations, vol. 37 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 April 2014

Manuel do Carmo, Paulo Infante and Jorge M Mendes

Downloads: 1037

Abstract

Purpose

The purpose of this paper is to measure the performance of a sampling method through the average number of samples drawn in control.

Design/methodology/approach

By matching the adjusted average time to signal (AATS) of the sampling methods, using the AATS of one of them as a reference, the paper obtains the design parameters of the others. It is then possible to obtain, in control, the average number of samples required so that the AATS of each sampling method equals the AATS of the reference method.

Findings

The result is a more robust performance measure for comparing sampling methods, because in many cases the period of time in which the process is in control is greater than the out-of-control period. With this performance measure, the paper compares different sampling methods through the average total cost per cycle in systems with Weibull lifetime distributions: three systems with an increasing hazard rate (shape parameter β = 2, 4 and 7) and one system with a decreasing failure rate (β = 0.8).

Practical implications

In a usual production cycle, where the in-control period is much larger than the out-of-control period, and particularly when sampling and false-alarm costs are high relative to malfunction costs, this methodology allows a more careful choice of the appropriate sampling method.

Originality/value

The paper compares the statistical performance of different sampling methods using the average number of samples that need to be inspected while the process is in control. In particular, it compares the statistical and economic performance of different sampling methods in contexts not previously considered in the literature. The paper also presents an approximation for the average time between the instant a failure occurs and the first sample taken with the process out of control.

Details

International Journal of Quality & Reliability Management, vol. 31 no. 5
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 16 April 2018

Jinglai Wu, Zhen Luo, Nong Zhang and Wei Gao

Abstract

Purpose

This paper aims to study sampling methods (or design of experiments), which have a large influence on the performance of the surrogate model. To improve the adaptability of modelling, a new sequential sampling method, termed the sequential Chebyshev sampling method (SCSM), is proposed in this study.

Design/methodology/approach

High-order polynomials are used to construct the global surrogate model, which retains the advantages of traditional low-order polynomial models while overcoming their disadvantage in accuracy. First, the zeros of the Chebyshev polynomial with the highest allowable order are used as sampling candidates to improve the stability and accuracy of the high-order polynomial model. Second, some initial sampling points are selected from the candidates using a coordinate alternation algorithm, which keeps the initial sampling set uniformly distributed. Third, a fast sequential sampling scheme based on the space-filling principle is developed to collect more samples from the candidates, with the order of the polynomial model updated in this procedure. The final surrogate model is the polynomial with the largest adjusted R-square after the sequential sampling is terminated.
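The candidate set in the first step, the zeros of a Chebyshev polynomial, follows a closed-form expression; a short sketch (the standard formula for the first-kind polynomial T_n, mapped affinely to an interval [a, b]; the SCSM selection logic itself is not reproduced here):

```python
import math

def chebyshev_zeros(order, a=-1.0, b=1.0):
    """Zeros of the Chebyshev polynomial T_order, mapped to [a, b]."""
    nodes = [math.cos((2 * k + 1) * math.pi / (2 * order))
             for k in range(order)]
    # affine map from [-1, 1] onto [a, b]
    return [0.5 * (a + b) + 0.5 * (b - a) * x for x in nodes]

candidates = chebyshev_zeros(8, 0.0, 10.0)  # 8 nodes, clustered toward the interval ends
```

The clustering of these nodes near the interval ends is what counteracts the Runge oscillation of high-order polynomial fits, which motivates using them as sampling candidates.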

Findings

The SCSM shows better efficiency, accuracy and stability than several popular sequential sampling methods, e.g. the LOLA-Voronoi algorithm, the global Monte Carlo method from the SED toolbox and the Halton sequence.

Originality/value

The SCSM performs well in building high-order surrogate models, with high stability and accuracy, and may save considerable cost in solving complicated engineering design or optimisation problems.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 21 December 2021

Laouni Djafri

Abstract

Purpose

This work can be used as a building block in other settings such as GPU, MapReduce or Spark. DDPML can also be deployed on other distributed systems such as P2P networks, clusters and cloud computing.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge becomes a great asset in companies' hands, and producing it is precisely the objective of data mining. With data and knowledge now produced at a much faster pace, we speak of Big Data mining. The authors' proposed work therefore aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is how machine learning algorithms can run in a distributed and parallel way at the same time without losing classification accuracy.

To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a MapReduce algorithm, which in turn depends on a random sampling technique. This architecture is specially designed to handle big data processing and operates coherently and efficiently with the sampling strategy proposed in this work. It also allows the authors to verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first and second levels (PLBL1 and PLBL2).

The experimental results show the efficiency of the solution without significant loss in the classification results. In practical terms, the DDPML system is dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
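The stratified random sampling used to build the learning bases can be sketched in its simplest, single-level form (proportional allocation; the paper's two-level scheme and the RLB/SLB/PLBL constructions are not reproduced, and the function name is illustrative):

```python
import random
from collections import defaultdict

def stratified_sample(records, key, total, seed=0):
    """Allocate `total` draws across the strata induced by `key` in
    proportion to stratum size, then sample without replacement
    inside each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    out = []
    for members in strata.values():
        k = max(1, round(total * len(members) / len(records)))
        out.extend(rng.sample(members, min(k, len(members))))
    return out

# an 80/20 class split is preserved in a sample of 10
sample = stratified_sample([("a", i) for i in range(80)] +
                           [("b", i) for i in range(20)],
                           key=lambda r: r[0], total=10)
```

Preserving class proportions in the sampled learning base is what lets a classifier trained on the sample approximate one trained on the full data.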

Findings

The authors obtained very satisfactory classification results.

Originality/value

The DDPML system is specially designed to handle big data mining classification smoothly.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Book part
Publication date: 14 September 2007

Peter R. Stopher

Details

Handbook of Transport Modelling
Type: Book
ISBN: 978-0-08-045376-7

Article
Publication date: 1 June 2001

Sven Berg

Downloads: 1165

Abstract

Aims to use some of the sampling techniques and sampling routines, mentioned in Part 1 of the article, to perform practical tests to determine their differences in withdrawing samples. Uses two different types of systems, a hydraulic system and a gear system, together with some of the investigated sampling techniques. In order to find out the optimum sampling method for each of the two systems, uses a specification of requirements and a systematic approach, together with practical sample withdrawal from the two systems. For the hydraulic system, uses an on‐line particle counter and bottle samples from valves, and for the gear system, applies drain‐plug and vacuum pump sampling. It was found that for hydraulic systems on‐line sampling is the most appropriate, if information on the elements is not required. If information on the elements is required, bottle sampling from a valve together with flushing of the valve should be performed. For the gear system no difference was seen between the samples taken with a vacuum pump and those taken from the drain‐plug, and therefore an alternative method is suggested to improve the reliability of the sampling.

Details

Industrial Lubrication and Tribology, vol. 53 no. 3
Type: Research Article
ISSN: 0036-8792

Article
Publication date: 1 May 1992

David H. Baillie

Abstract

Selects one of Hamaker's procedures for deriving a “σ” method (i.e. known process standard deviation) double sampling plan and exploits some of its properties to develop a system of “s” method (i.e. unknown process standard deviation) double sampling plans by variables that match the system of single specification limit “s” method single sampling plans of the current edition of the international standard on sampling by variables, ISO 3951:1989. The new system is presented in two forms, the second of which may also be used for combined double specification limits and multivariate acceptance sampling.
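The decision logic of a “σ” method double sampling plan by variables can be sketched generically. The acceptance constants below are illustrative placeholders, not the values tabulated in ISO 3951 or derived by Hamaker's procedure:

```python
def double_sampling_decision(sample1, sample2, sigma, upper_limit,
                             accept1, reject1, accept2):
    """Two-stage accept/reject decision against an upper specification
    limit, with known process standard deviation sigma."""
    m1 = sum(sample1) / len(sample1)
    q1 = (upper_limit - m1) / sigma  # first-stage quality statistic
    if q1 >= accept1:
        return "accept"
    if q1 <= reject1:
        return "reject"
    # inconclusive first stage: decide on the combined sample
    combined = list(sample1) + list(sample2)
    m = sum(combined) / len(combined)
    return "accept" if (upper_limit - m) / sigma >= accept2 else "reject"
```

The second sample is drawn only when the first-stage statistic falls between the two constants, which is what makes double plans cheaper on average than single plans of comparable discrimination.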

Details

International Journal of Quality & Reliability Management, vol. 9 no. 5
Type: Research Article
ISSN: 0265-671X
