Search results

1 – 10 of over 20000
Article
Publication date: 18 November 2019

Guanying Huo, Xin Jiang, Zhiming Zheng and Deyi Xue

Metamodeling is an effective method to approximate the relations between input and output parameters when significant efforts of experiments and simulations are required to…

Abstract

Purpose

Metamodeling is an effective method to approximate the relations between input and output parameters when significant efforts of experiments and simulations are required to collect the data to build the relations. This paper aims to develop a new sequential sampling method for adaptive metamodeling of data with highly nonlinear relations between input and output parameters.

Design/methodology/approach

In this method, the Latin hypercube sampling method is used to generate the initial data, and the kriging method is used to construct the metamodel. The input parameter values at which the next output data are collected to update the current metamodel are determined based on the quality of the data in both the input and output parameter spaces. Uniformity is used to evaluate the data in the input parameter space; leave-one-out errors and sensitivities are used to evaluate the data in the output parameter space.
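
As an illustration of the kind of loop described above, here is a minimal sketch of sequential sampling for a kriging metamodel: a Latin hypercube initial design, leave-one-out errors on the existing points, and a distance term that keeps the design uniform. The test function `expensive_model`, the candidate pool size and the way the two criteria are combined are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import cdist
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):          # placeholder for the costly simulation
    return np.sin(8 * x[:, 0]) * np.cos(5 * x[:, 1])

dim, n_init, n_add = 2, 10, 15
X = qmc.LatinHypercube(d=dim, seed=0).random(n_init)   # initial uniform design
y = expensive_model(X)

for _ in range(n_add):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)

    # Leave-one-out errors: refit without each point and predict it back.
    loo = np.array([
        abs(GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
            .fit(np.delete(X, i, 0), np.delete(y, i))
            .predict(X[i:i + 1])[0] - y[i])
        for i in range(len(X))
    ])

    # Candidate pool; score = distance to the nearest sample (uniformity in the
    # input space) weighted by the LOO error of that nearest sample (quality of
    # the data in the output space).
    cand = qmc.LatinHypercube(d=dim, seed=1).random(500)
    d = cdist(cand, X)
    nearest = d.argmin(axis=1)
    score = d.min(axis=1) * (1.0 + loo[nearest])
    x_new = cand[score.argmax()][None, :]

    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))
```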

Findings

The new method has been compared with existing methods to demonstrate its effectiveness in approximation, and also in solving global optimization problems. Finally, an engineering case is used to further verify the method.

Originality/value

This paper provides an effective sequential sampling method for adaptive metamodeling to approximate highly nonlinear relations between input and output parameters.

Details

Engineering Computations, vol. 37 no. 3
Type: Research Article
ISSN: 0264-4401

Keywords

Book part
Publication date: 1 January 2008

Paolo Giordani and Robert Kohn

Our paper discusses simulation-based Bayesian inference using information from previous draws to build the proposals. The aim is to produce samplers that are easy to implement…

Abstract

Our paper discusses simulation-based Bayesian inference using information from previous draws to build the proposals. The aim is to produce samplers that are easy to implement, that explore the target distribution effectively, and that are computationally efficient and mix well.
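
A minimal sketch of the idea, assuming a Haario-style adaptive random-walk Metropolis sampler in which the proposal covariance is periodically rebuilt from the previous draws; the target density, scaling factor and adaptation schedule are illustrative assumptions, not the authors' sampler.

```python
import numpy as np

def log_target(x):                       # assumed target: a correlated Gaussian
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

rng = np.random.default_rng(0)
d, n_iter = 2, 20_000
x = np.zeros(d)
draws = np.empty((n_iter, d))
prop_cov = 0.1 * np.eye(d)

for t in range(n_iter):
    prop = rng.multivariate_normal(x, prop_cov)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                         # accept the proposal
    draws[t] = x
    if t >= 500 and t % 100 == 0:        # rebuild the proposal from past draws
        prop_cov = (2.38**2 / d) * np.cov(draws[:t].T) + 1e-6 * np.eye(d)
```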

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Article
Publication date: 24 May 2013

Marc Guénot, Ingrid Lepot, Caroline Sainvitu, Jordan Goblet and Rajan Filomeno Coelho

The purpose of this paper is to propose a novel contribution to adaptive sampling strategies for non‐intrusive reduced order models based on Proper Orthogonal Decomposition (POD)…

Abstract

Purpose

The purpose of this paper is to propose a novel contribution to adaptive sampling strategies for non‐intrusive reduced order models based on Proper Orthogonal Decomposition (POD). These strategies aim at reducing the cost of optimization by improving the efficiency and accuracy of POD data‐fitting surrogate models to be used in an online surrogate‐assisted optimization framework for industrial design.

Design/methodology/approach

The effect of the strategies on model accuracy is investigated with respect to the snapshot scaling, the design-of-experiment size and the truncation level of the POD basis, and the resulting models are compared on objectives and constraints to a state-of-the-art radial basis function network surrogate model. The selected test case is a Mach number and angle-of-attack domain exploration of the well-known RAE2822 airfoil. Preliminary airfoil shape optimization results are also shown.
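
To make the truncation-level ingredient concrete, here is a minimal sketch of building and truncating a POD basis from a snapshot matrix, as used by non-intrusive POD surrogates. The random snapshot data, the centring step and the 99.9% energy threshold are illustrative assumptions.

```python
import numpy as np

snapshots = np.random.default_rng(0).normal(size=(5000, 40))  # (dofs, snapshots)

mean = snapshots.mean(axis=1, keepdims=True)
scaled = snapshots - mean                       # snapshot scaling / centring

U, s, _ = np.linalg.svd(scaled, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1     # truncation level of the POD basis
basis = U[:, :r]                                # reduced POD basis

# A new field is approximated by its coefficients in the truncated basis;
# a data-fitting surrogate (e.g. an RBF network) then maps (Mach, alpha)
# to these coefficients.
coeffs = basis.T @ scaled[:, 0:1]
reconstruction = mean + basis @ coeffs
```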

Findings

The numerical results demonstrate the potential of the capture/recapture schemes proposed for adequately filling the parametric space and maximizing the surrogates' relevance at minimum computational cost.

Originality/value

The proposed approaches help in building POD‐based surrogate models more efficiently.

Article
Publication date: 29 April 2014

Manuel do Carmo, Paulo Infante and Jorge M Mendes

The purpose of this paper is to measure the performance of a sampling method through the average number of samples drawn in control.


Abstract

Purpose

The purpose of this paper is to measure the performance of a sampling method through the average number of samples drawn in control.

Design/methodology/approach

By matching the adjusted average time to signal (AATS) of different sampling methods, using the AATS of one of them as a reference, the paper obtains the design parameters of the others. It is thus possible to obtain the average number of samples required, in control, so that the AATS of the sampling methods considered equals that of the reference method.
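
A minimal Monte Carlo sketch of the quantities involved: the adjusted average time to signal of a periodic sampling scheme under a Weibull lifetime, and the average number of samples drawn in control. The sampling interval, chart power and Weibull parameters are illustrative assumptions, not the paper's design values.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, eta = 2.0, 100.0        # Weibull shape / scale of the time to failure
interval = 5.0                # fixed time between samples
power = 0.3                   # probability that a post-failure sample signals

def time_to_signal(failure_time):
    k = int(np.ceil(failure_time / interval))   # first sample after failure
    while True:
        if rng.random() < power:
            return k * interval - failure_time
        k += 1

failures = eta * rng.weibull(beta, size=100_000)
aats = np.mean([time_to_signal(t) for t in failures])

# In-control samples drawn before failure, the comparison measure in the paper:
avg_samples_in_control = np.mean(np.floor(failures / interval))
print(f"estimated AATS: {aats:.2f}, in-control samples: {avg_samples_in_control:.1f}")
```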

Findings

A more robust performance measure for comparing sampling methods is obtained, because in many cases the period in which the process is in control is longer than the out-of-control period. With this performance measure the paper compares different sampling methods through the average total cost per cycle in systems with Weibull lifetime distributions: three systems with an increasing hazard rate (shape parameter β = 2, 4 and 7) and one system with a decreasing failure rate (β = 0.8).

Practical implications

In a usual production cycle, where the in-control period is much longer than the out-of-control period, and particularly if the sampling and false-alarm costs are high in relation to malfunction costs, the authors believe this methodology allows a more careful choice of the appropriate sampling method.

Originality/value

The statistical performance of different sampling methods is compared using the average number of samples that need to be inspected while the process is in control. In particular, the paper compares the statistical and economic performance of different sampling methods in contexts not previously considered in the literature. The paper also presents an approximation for the average time between the instant a failure occurs and the first sample taken with the process out of control.

Details

International Journal of Quality & Reliability Management, vol. 31 no. 5
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 6 February 2023

Hong Zhang, Lu-Kai Song, Guang-Chen Bai and Xue-Qin Li

The purpose of this study is to improve the computational efficiency and accuracy of fatigue reliability analysis.

Abstract

Purpose

The purpose of this study is to improve the computational efficiency and accuracy of fatigue reliability analysis.

Design/methodology/approach

By absorbing the advantages of Markov chain and active Kriging model into the hierarchical collaborative strategy, an enhanced active Kriging-based hierarchical collaborative model (DCEAK) is proposed.
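
As a hedged sketch of the active-Kriging ingredient (not the DCEAK model itself), the following shows a generic active-learning Kriging loop for reliability analysis: a Kriging surrogate of the limit state is refined with a U-type learning function and the failure probability is then estimated on a Monte Carlo population. The limit-state function, stopping threshold and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):                         # assumed performance function g(x)
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

rng = np.random.default_rng(0)
pop = rng.normal(size=(50_000, 2))          # Monte Carlo population
idx = rng.choice(len(pop), 12, replace=False)
X, y = pop[idx], limit_state(pop[idx])      # small initial design of experiments

for _ in range(40):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(pop, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)  # U learning function
    if U.min() > 2.0:                       # stop when sign predictions are reliable
        break
    best = U.argmin()                       # add the most ambiguous sample
    X = np.vstack([X, pop[best:best + 1]])
    y = np.append(y, limit_state(pop[best:best + 1]))

pf = np.mean(mu < 0)                        # failure probability estimate
```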

Findings

The analysis results show that the proposed DCEAK method holds high accuracy and efficiency in dealing with fatigue reliability analysis with high nonlinearity and small failure probability.

Research limitations/implications

The effectiveness of the presented method in more complex reliability analysis problems (e.g. noisy problems, high-dimensional issues, etc.) should be further validated.

Practical implications

The current efforts can provide a feasible way to analyze the reliability performance and identify the sensitive variables in aeroengine mechanisms.

Originality/value

To improve the computational efficiency and accuracy of fatigue reliability analysis, the enhanced active Kriging-based hierarchical collaborative model (DCEAK) is proposed and the corresponding fatigue reliability framework is established for the first time.

Details

International Journal of Structural Integrity, vol. 14 no. 2
Type: Research Article
ISSN: 1757-9864

Keywords

Open Access
Article
Publication date: 9 December 2022

Rui Wang, Shunjie Zhang, Shengqiang Liu, Weidong Liu and Ao Ding

The purpose is using generative adversarial network (GAN) to solve the problem of sample augmentation in the case of imbalanced bearing fault data sets and improving residual…

Abstract

Purpose

The purpose is to use a generative adversarial network (GAN) to solve the problem of sample augmentation for imbalanced bearing fault data sets, and to use an improved residual network to increase the diagnostic accuracy of the intelligent bearing fault diagnosis model in high-noise environments.

Design/methodology/approach

A bearing vibration data generation model based on the conditional GAN (CGAN) framework is proposed. The method generates data through the adversarial mechanism of GANs, using a small number of real samples, thereby effectively expanding imbalanced data sets. Combined with this CGAN-based data augmentation method, a fault diagnosis model for rolling bearings under data imbalance, based on CGAN and an improved residual network with an attention mechanism, is proposed.
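
A minimal sketch of the CGAN idea described above: the generator and discriminator are conditioned on the fault-class label so that vibration segments of the minority classes can be synthesised. The network sizes, segment length and training schedule are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

seg_len, n_classes, z_dim = 1024, 4, 100

G = nn.Sequential(nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, seg_len), nn.Tanh())
D = nn.Sequential(nn.Linear(seg_len + n_classes, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_x, labels):
    # real_x: (batch, seg_len) vibration segments scaled to [-1, 1]
    # labels: (batch,) integer fault-class labels
    onehot = nn.functional.one_hot(labels, n_classes).float()

    # Discriminator: real segments vs. generated segments of the same class.
    z = torch.randn(len(real_x), z_dim)
    fake_x = G(torch.cat([z, onehot], dim=1)).detach()
    d_loss = bce(D(torch.cat([real_x, onehot], dim=1)), torch.ones(len(real_x), 1)) + \
             bce(D(torch.cat([fake_x, onehot], dim=1)), torch.zeros(len(real_x), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator for the requested fault class.
    z = torch.randn(len(real_x), z_dim)
    fake_x = G(torch.cat([z, onehot], dim=1))
    g_loss = bce(D(torch.cat([fake_x, onehot], dim=1)), torch.ones(len(real_x), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```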

Findings

The method proposed in this paper is verified on the Western Reserve data set and a truck bearing test bench data set, showing that the CGAN-based data generation method can form a high-quality augmented data set and that the diagnosis model based on CGAN and the improved residual network with attention mechanism achieves better diagnostic accuracy on low signal-to-noise ratio samples.

Originality/value

A bearing vibration data generation model based on the CGAN framework is proposed. The method generates data through the adversarial mechanism of GANs, using a small number of real samples, thereby effectively expanding imbalanced data sets. Combined with this CGAN-based data augmentation method, a fault diagnosis model for rolling bearings under data imbalance, based on CGAN and an improved residual network with an attention mechanism, is proposed.

Details

Smart and Resilient Transportation, vol. 5 no. 1
Type: Research Article
ISSN: 2632-0487

Keywords

Article
Publication date: 19 May 2023

Hasan Baş, Fatih Yapıcı and İbrahim İnanç

Binder jetting is one of the essential additive manufacturing methods because it is cost-effective, has no thermal stress problems and has a wide range of different materials…

Abstract

Purpose

Binder jetting is one of the essential additive manufacturing methods because it is cost-effective, has no thermal stress problems and works with a wide range of materials. The use of binder jetting technology in industry has become more common recently. However, it has disadvantages compared to traditional manufacturing methods in terms of speed. This study aims to increase the manufacturing speed of binder jetting.

Design/methodology/approach

This study used adaptive slicing to increase the manufacturing speed of binder jetting. In addition, a variable binder amount algorithm was developed so that adaptive slicing could be used efficiently. Quarter-spherical samples were manufactured using the variable binder amount algorithm and the adaptive slicing method.
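
A minimal sketch of adaptive slicing for a quarter-sphere test part, assuming a cusp-height criterion: layers are thin where the surface is nearly horizontal and thick where it is nearly vertical. The radius, cusp tolerance and thickness limits are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

R = 20.0                  # sphere radius (mm)
cusp = 0.05               # allowed cusp height (mm)
t_min, t_max = 0.05, 0.4  # printable layer-thickness range (mm)

layers = [0.0]
while layers[-1] < R:
    z = layers[-1]
    cos_theta = z / R if z > 0 else 0.0          # |normal . build direction| on a sphere
    t = t_max if cos_theta == 0 else cusp / cos_theta
    t = min(max(t, t_min), t_max)                # clamp to the printable range
    layers.append(min(z + t, R))

print(f"{len(layers) - 1} adaptive layers; uniform slicing at {t_min} mm would need "
      f"{int(np.ceil(R / t_min))} layers for the same worst-case cusp height")
```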

Findings

Samples were sintered at 1250°C for 2 h with a 10°C/min heating and cooling ramp. Scanning electron microscope analysis, surface roughness tests and density calculations were performed. According to the results of these analyses, a similar surface quality is achieved using 38% fewer layers than with uniform slicing.

Research limitations/implications

More work is needed to implement adaptive slicing in binder jetting. Because the software of commercial printers is very difficult to modify, an open-source printer was used; for this reason, it can be challenging to produce perfect samples. However, a good start has been made in this area.

Originality/value

To the best of the authors’ knowledge, the actual use of adaptive slicing in binder jetting was applied for the first time in this study. A variable binder amount algorithm has been developed to implement adaptive slicing in binder jetting.

Details

Rapid Prototyping Journal, vol. 29 no. 8
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 1 December 2020

Rui Lin, Haibo Huang and Maohai Li

This study aims to present an automated guided logistics robot mainly designed for pallet transportation. Logistics robot is compactly designed. It could pick up the pallet…

Abstract

Purpose

This study aims to present an automated guided logistics robot designed mainly for pallet transportation. The logistics robot is compactly designed: it can pick up a pallet precisely and transport pallets of up to 1,000 kg automatically in the warehouse, and it can move freely in all directions without turning the chassis. It works without any additional infrastructure, based on the laser navigation system proposed in this work.

Design/methodology/approach

The logistics robot should be able to move underneath the pallet and lift it accurately. It mainly consists of two sub-robots, like the two forks of a forklift, and each sub-robot has front and rear driving units. A new compact driving unit is designed as a key component to ensure access through the narrow free entry of the pallet. Besides synchronous motion in all directions, the two sub-robots must also lift up and lay down the pallet synchronously. The robot uses a front laser to detect obstacles and to locate itself with the on-board navigation system; a rear laser is used to recognize the pallet and guide the sub-robots to pick it up precisely, within ±5 mm/1° in the x/yaw directions. A path planning algorithm under different constraints is proposed so that the logistics robot obeys the traffic rules of pallet logistics.

Findings

Compared with traditional forklift vehicles, the logistics robot has a more compact structure and higher expandability. It can move omnidirectionally without turning the chassis and perform zero-radius turns by controlling the compact driving units synchronously. It can move collision-free into any pallet that has not been precisely placed, and it can plan paths for returning to the charging station and charge automatically, so it can work uninterruptedly around the clock (7 × 24 h). The proposed path planning algorithm can avoid traffic congestion and improve the passability of narrow roads, improving logistics efficiency. The logistics robot is well suited to standardized logistics factories with small working spaces.

Originality/value

This is a new innovation for pallet transportation vehicle to improve logistics automation.

Details

Assembly Automation, vol. 41 no. 1
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 18 April 2008

J. Rodrigues Dias and Paulo Infante

The purpose of this paper is to investigate a new sampling methodology previously proposed for systems with a known lifetime distribution: the Predetermined Sampling Intervals…


Abstract

Purpose

The purpose of this paper is to investigate a new sampling methodology previously proposed for systems with a known lifetime distribution: the Predetermined Sampling Intervals (PSI) method.

Design/methodology/approach

The methodology is defined on the basis of the system's cumulative hazard rate and is compared with other approaches, particularly those whose parameters may change in real time, taking into account current sample information.
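
A hedged sketch of how predetermined sampling instants can be derived from a cumulative hazard rate, in the spirit described above: here the instants are chosen so that the cumulative hazard grows by a constant amount between consecutive samples. This equal-increment rule and the Weibull parameters are assumptions for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

beta, eta = 2.0, 100.0                 # Weibull shape / scale
step = 0.05                            # cumulative-hazard increment between samples

def inverse_cumulative_hazard(h):      # H(t) = (t/eta)**beta  =>  t = eta * h**(1/beta)
    return eta * h ** (1.0 / beta)

instants = [inverse_cumulative_hazard(k * step) for k in range(1, 31)]
# With an increasing failure rate (beta > 1) the instants get closer together as
# the system ages, which is where such a scheme gains over periodic sampling.
print(np.round(instants, 1))
```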

Findings

For different lifetime distributions, the results obtained for the adjusted average time to signal (AATS) using a control chart for the sample mean are presented and analysed. They demonstrate the strong statistical performance of this sampling procedure, particularly when it is used in systems with an increasing failure rate distribution.

Practical implications

This PSI method is important from a quality and reliability management point of view.

Originality/value

This methodology involves a process by which sampling instants are obtained at the beginning of the process to be controlled. Also this new approach allows for statistical comparison with other sampling schemes, which is a novel feature.

Details

International Journal of Quality & Reliability Management, vol. 25 no. 4
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 17 March 2021

Luosong Jin, Weidong Liu, Cheng Chen, Wei Wang and Houyin Long

With the advent of the information age, this paper aims to apply risk analysis theories to study the risk prevention mechanism of information disclosure, thus supporting the green…

Abstract

Purpose

With the advent of the information age, this paper aims to apply risk analysis theories to study the risk prevention mechanism of information disclosure, thus supporting the green electricity supply.

Design/methodology/approach

This paper conducts a comprehensive evaluation and analysis of the impact of power market transactions, power market operations and effective government supervision, so as to identify the core risks of power market information disclosure. Moreover, the AHP-entropy method is adopted to weight the different information disclosure risk indicators for the participants in the electricity market.
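
A minimal sketch of the entropy half of an AHP-entropy weighting scheme: objective weights are computed from an indicator matrix and then combined with subjective AHP judgement weights. The indicator data and the AHP weights below are illustrative placeholders, not the paper's values.

```python
import numpy as np

# rows = market participants being assessed, columns = disclosure-risk indicators
data = np.array([[0.8, 0.6, 0.9, 0.4],
                 [0.5, 0.7, 0.3, 0.6],
                 [0.9, 0.2, 0.6, 0.8]])

p = data / data.sum(axis=0)                          # normalise each indicator column
k = 1.0 / np.log(len(data))
entropy = -k * np.sum(p * np.log(p + 1e-12), axis=0)
entropy_w = (1 - entropy) / np.sum(1 - entropy)      # objective (entropy) weights

ahp_w = np.array([0.4, 0.3, 0.2, 0.1])               # assumed subjective AHP weights
combined = entropy_w * ahp_w / np.sum(entropy_w * ahp_w)
print(np.round(combined, 3))
```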

Findings

The potential reasons for information disclosure risk in the electricity market include insufficient information disclosure, high cost of obtaining information, inaccurate information disclosure, untimely information disclosure and unfairness of information disclosure.

Originality/value

Some suggestions and implications on risk prevention mechanism of information disclosure in the electricity market are provided, so as to ensure the green electricity supply and promote the electricity market reform in China.

Details

Journal of Enterprise Information Management, vol. 35 no. 2
Type: Research Article
ISSN: 1741-0398

Keywords
