Search results
1 – 10 of over 65,000
Abstract
Purpose
The purpose of this study is to present a newly proposed and developed sorting algorithm-based merging weighted fraction Monte Carlo (SAMWFMC) method for solving the population balance equation for the weighted fraction coagulation process in aerosol dynamics with high computational accuracy and efficiency.
Design/methodology/approach
In the new SAMWFMC method, the jump Markov process is constructed as in the weighted fraction Monte Carlo (WFMC) method (Jiang and Chan, 2021), with a fraction function. Both adjustable and constant fraction functions are used to validate the computational accuracy and efficiency. A new merging scheme is also proposed to ensure a constant-number and constant-volume simulation.
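The abstract does not give the merging rule itself, so the following is only a minimal sketch of what a constant-number, constant-volume merge of two weighted numerical particles can look like; the tuple representation and function name are illustrative assumptions, not the paper's notation.

```python
def merge_particles(p1, p2):
    """Merge two weighted numerical particles (weight, volume) into one.

    The merged particle carries the combined statistical weight, and its
    volume is the weight-averaged volume, so both the represented number
    concentration (sum of weights) and the represented total volume
    (sum of weight * volume) are conserved by the merge.
    """
    w1, v1 = p1
    w2, v2 = p2
    w = w1 + w2
    v = (w1 * v1 + w2 * v2) / w
    return (w, v)

# Example: merging (w=2, v=1) with (w=1, v=4) conserves weight and volume.
merged = merge_particles((2.0, 1.0), (1.0, 4.0))
```

Merging pairs in this way lets the simulation absorb the extra numerical particles created by weighted coagulation events without drifting in total number or volume.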
Findings
The new SAMWFMC method is fully validated by comparing with existing analytical solutions for six benchmark test cases. The numerical results obtained from the SAMWFMC method with both adjustable and constant fraction functions show excellent agreement with the analytical solutions and low stochastic errors. Compared with the WFMC method (Jiang and Chan, 2021), the SAMWFMC method can significantly reduce the stochastic error in the total particle number concentration without increasing the stochastic errors in high-order moments of the particle size distribution at only slightly higher computational cost.
Originality/value
The WFMC method (Jiang and Chan, 2021) places a stringent restriction on the fraction functions, so that few fraction functions are applicable to it beyond several specifically selected adjustable ones, while the stochastic error in the total particle number concentration remains considerably large. The newly developed SAMWFMC method shows significant improvement and advantage in dealing with the weighted fraction coagulation process in aerosol dynamics and shows excellent potential for handling various fraction functions with higher computational accuracy and efficiency.
Morteza Naghipour, Ali Akbar Gholampour and Mehdi Nematzadeh
Abstract
Purpose
The purpose of this paper is to present weighted residual method (WRM) for evaluating damping ratio of unreinforced glued‐laminated (glulam) wood beams and also reinforced glulam beams with E‐glass reinforced epoxy polymer (GRP) plates.
Design/methodology/approach
In this method, the error between the regression curve and the peak points of the experimental displacement values is minimized. Several weight functions, such as the Galerkin, Petrov‐Galerkin and least-squares weight functions, are used to minimize this error, and the results are compared with existing methods: logarithmic decrement analysis (LDA), Hilbert transform analysis (HTA), moving block analysis (MBA) and half-power bandwidth (HPB).
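As a simplified, hypothetical illustration of the least-squares flavour of this idea (not the paper's full weighted residual formulation), the damping ratio can be estimated by a least-squares fit of the logarithmic decay envelope to measured peak displacements; the function name, `omega_n` and the synthetic data are assumptions for the example.

```python
import math

def damping_ratio_lsq(times, peaks, omega_n):
    """Least-squares estimate of the damping ratio from displacement peaks.

    For light viscous damping the peak amplitudes decay as
    A * exp(-zeta * omega_n * t), so a linear regression of log(peak)
    against time has slope -zeta * omega_n.
    """
    n = len(times)
    ys = [math.log(p) for p in peaks]
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope / omega_n

# Synthetic free-decay peaks generated with zeta = 0.05.
omega_n = 2.0 * math.pi
ts = [float(k) for k in range(8)]
ps = [3.0 * math.exp(-0.05 * omega_n * t) for t in ts]
zeta_hat = damping_ratio_lsq(ts, ps, omega_n)
```

Because the fit uses every peak rather than one pair of successive peaks, it is less sensitive to measurement noise than a plain logarithmic decrement.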
Findings
Because the WRM minimizes the error function formed from the differences between the theoretical and experimental fitted curves, comparison among these methods indicates that the proposed procedure is useful for any range of damping ratios and gives better values than the other methods. Due to the initial conditions and the weight function used in the Galerkin weighted residual method, the damping ratio values obtained from this method differ from those of the other weighted residual methods. Among the existing methods, the HPB method could not predict the damping ratio of the glulam beams accurately.
Originality/value
This paper presents a weighted residual method (WRM) for evaluating the damping ratio of unreinforced glued‐laminated (glulam) wood beams and of glulam beams reinforced with E‐glass reinforced epoxy polymer (GRP) plates. The LDA, HTA, MBA and HPB methods are used for comparison, and an analytical investigation of the damping ratios of unreinforced and GRP-reinforced glulam beams is carried out using the WRM. While some of the existing methods rely on simplifying assumptions, the proposed method shows that the damping ratio can be calculated without any such assumption.
Iraj Rahmani and Jeffrey M. Wooldridge
Abstract
We extend Vuong’s (1989) model-selection statistic to allow for complex survey samples. As a further extension, we use an M-estimation setting so that the tests apply to general estimation problems – such as linear and nonlinear least squares, Poisson regression and fractional response models, to name just a few – and not only to maximum likelihood settings. With stratified sampling, we show how the difference in objective functions should be weighted in order to obtain a suitable test statistic. Interestingly, the weights are needed in computing the model-selection statistic even in cases where stratification is appropriately exogenous, in which case the usual unweighted estimators for the parameters are consistent. With cluster samples and panel data, we show how to combine the weighted objective function with a cluster-robust variance estimator in order to expand the scope of the model-selection tests. A small simulation study shows that the weighted test is promising.
Abstract
Purpose
The purpose of this study is to investigate the aerosol dynamics of the particle coagulation process using a newly developed weighted fraction Monte Carlo (WFMC) method.
Design/methodology/approach
The weighted numerical particles are adopted in a similar manner to the multi-Monte Carlo (MMC) method, with the addition of a new fraction function (α). Probabilistic removal is also introduced to maintain a constant number scheme.
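The abstract does not reproduce the exact probabilistic-removal rule, so the sketch below uses a generic constant-number device in which the slot freed by a coagulation event is refilled with a copy of a randomly chosen survivor; the function name and refill rule are illustrative assumptions, not necessarily the WFMC scheme.

```python
import random

def coagulate_constant_number(volumes, i, j, rng):
    """One coagulation event under a generic constant-number scheme.

    Particles i and j coagulate into a single particle of volume
    v_i + v_j; to keep the particle array size constant, the freed
    slot j is refilled with a copy of a uniformly chosen survivor.
    """
    volumes = list(volumes)
    volumes[i] = volumes[i] + volumes[j]
    k = rng.randrange(len(volumes))
    while k == j:                      # the removed particle cannot be copied
        k = rng.randrange(len(volumes))
    volumes[j] = volumes[k]
    return volumes

rng = random.Random(0)
out = coagulate_constant_number([1.0, 2.0, 3.0, 4.0], 0, 1, rng)
```

Keeping the array size fixed keeps the statistical resolution of the simulation constant even as the physical particle number decays through coagulation.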
Findings
Three typical cases with constant kernel, free-molecular coagulation kernel and different initial distributions for particle coagulation are simulated and validated. The results show an excellent agreement between the Monte Carlo (MC) method and the corresponding analytical solutions or sectional method results. Further numerical results show that the critical stochastic error in the newly proposed WFMC method is significantly reduced when compared with the traditional MMC method for higher-order moments with only a slight increase in computational cost. The particle size distribution is also found to extend for the larger size regime with the WFMC method, which is traditionally insufficient in the classical direct simulation MC and MMC methods. The effects of different fraction functions on the weight function are also investigated.
Originality/value
Stochastic error is inevitable in MC simulations of aerosol dynamics. To minimize this critical stochastic error, many algorithms, such as the MMC method, have been proposed; however, in these the weight of the numerical particles is not adjustable. The newly developed algorithm, with an adjustable weight for the numerical particles, can provide improved stochastic error reduction.
Glenn W. Harrison and J. Todd Swarthout
Abstract
We take Cumulative Prospect Theory (CPT) seriously by rigorously estimating structural models using the full set of CPT parameters. Much of the literature only estimates a subset of CPT parameters, or more simply assumes CPT parameter values from prior studies. Our data are from laboratory experiments with undergraduate students and MBA students facing substantial real incentives and losses. We also estimate structural models from Expected Utility Theory (EUT), Dual Theory (DT), Rank-Dependent Utility (RDU), and Disappointment Aversion (DA) for comparison. Our major finding is that a majority of individuals in our sample locally asset integrate. That is, they see a loss frame for what it is, a frame, and behave as if they evaluate the net payment rather than the gross loss when one is presented to them. This finding is devastating to the direct application of CPT to these data for those subjects. Support for CPT is greater when losses are covered out of an earned endowment rather than house money, but RDU is still the best single characterization of individual and pooled choices. Defenders of the CPT model claim, correctly, that the CPT model exists “because the data says it should.” In other words, the CPT model was born from a wide range of stylized facts culled from parts of the cognitive psychology literature. If one is to take the CPT model seriously and rigorously, then it needs to do a much better job of explaining the data than we see here.
Ashwani Dhingra and Pankaj Chandna
Abstract
Purpose
In order to achieve excellence in manufacturing, goals such as lean, economic and quality production with enhanced productivity play a crucial role in this competitive environment. Achieving them requires major improvements in three primary technical areas: variation reduction, equipment reliability and production scheduling. The complexity of real-world scheduling problems also increases with interactive multiple decision-making criteria. This paper deals with multi-objective flow shop scheduling problems including sequence-dependent setup times (SDST), with the objective of simultaneously minimizing the weighted sum of total weighted tardiness, total weighted earliness and makespan. It proposes a new heuristic-based hybrid simulated annealing (HSA) algorithm that finds near-optimal solutions in reasonable time.
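The combined objective can be sketched for a plain permutation flow shop as follows; sequence-dependent setup times are omitted from this illustration, and all names and data are assumptions for the example rather than the paper's instances.

```python
def flow_shop_objective(seq, proc, due, job_w, obj_w):
    """Weighted sum of total weighted tardiness, total weighted earliness
    and makespan for a permutation flow shop (SDST omitted for brevity).

    proc[j][m] is the processing time of job j on machine m; job_w[j]
    is job j's weight; obj_w holds the three objective weights.
    """
    n_machines = len(proc[0])
    prev = [0.0] * n_machines           # completion times of the previous job
    finish = {}
    for j in seq:
        c = 0.0
        for m in range(n_machines):
            c = max(c, prev[m]) + proc[j][m]
            prev[m] = c
        finish[j] = c                   # completion time on the last machine
    makespan = max(finish.values())
    twt = sum(job_w[j] * max(0.0, finish[j] - due[j]) for j in seq)
    twe = sum(job_w[j] * max(0.0, due[j] - finish[j]) for j in seq)
    return obj_w[0] * twt + obj_w[1] * twe + obj_w[2] * makespan

# Three jobs on two machines: makespan 10, weighted earliness 3, no tardiness.
val = flow_shop_objective(
    seq=[0, 1, 2],
    proc=[[3, 2], [2, 4], [4, 1]],
    due=[6, 9, 12],
    job_w=[1, 2, 1],
    obj_w=(0.4, 0.3, 0.3),
)
```

A metaheuristic such as HSA would evaluate this fitness function for each candidate job sequence it visits.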
Design/methodology/approach
Six modified NEH-based HSA algorithms are proposed for efficient scheduling of jobs in a multi-objective SDST flow shop. Problems of up to 200 jobs and 20 machines are tested with the proposed HSA, and a defined relative percentage improvement index is used to analyse and compare the different modified NEH-based hybrid simulated annealing algorithms.
Findings
The results show that SA_EWDD (NEH) performed best on problems with up to ten machines and SA_EPWDD (NEH) on problems with up to 20 machines, especially for large SDST flow shop scheduling problems under the considered multi-objective fitness function.
Originality/value
The HSA and multi-objective decision-making approach proposed in the present work is a modified approach in the area of SDST flow shop scheduling.
Ruirui Shao, Zhigeng Fang, Liangyan Tao, Su Gao and Weiqing You
Abstract
Purpose
During the service period of communication satellite systems, their performance is often degraded owing to depletion mechanisms. In this paper, grey system theory is applied to multi-state system effectiveness evaluation, and the grey Lz-transformation ADC (availability, dependability and capability) effectiveness evaluation model is constructed to address characteristics of the communication satellite system such as heterogeneous constituent subsystems, numerous states, and inaccurate and insufficient data.
Design/methodology/approach
The model is based on the ADC effectiveness evaluation method, combined with the Lz-transform, and uses the definite weighted function of the three-parameter interval grey number as a bridge to incorporate the possibility of system performance exceeding the task demand into the effectiveness solution algorithm. MATLAB (matrix laboratory) is used to solve each state probability, and terms with the same performance level in the Lz-transform are combined; the system effectiveness is then obtained with Python.
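Combining terms with the same performance level in an Lz-transform (a sum of terms p_i * z**g_i) amounts to adding the probabilities of terms that share a level. A minimal sketch, with illustrative probabilities and performance levels rather than the paper's satellite data:

```python
def merge_lz_terms(terms):
    """Combine Lz-transform terms that share a performance level.

    An Lz-transform is a sum of terms p_i * z**g_i; terms with equal
    performance level g_i are merged by adding their probabilities,
    shrinking the state space before the effectiveness calculation.
    """
    merged = {}
    for prob, level in terms:
        merged[level] = merged.get(level, 0.0) + prob
    return sorted(merged.items())

# Two of the four states share the performance level 50.
levels = merge_lz_terms([(0.25, 100), (0.375, 50), (0.25, 50), (0.125, 0)])
```

Each entry of the result is one distinct performance level with its total probability, which is all the downstream ADC calculation needs.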
Findings
The results show that the G-Lz-ADC model constructed in this paper can accurately evaluate the effectiveness of static/dynamic and certain/uncertain systems, and also has good applicability in evaluating the effectiveness of multi-state complex systems.
Practical implications
The G-Lz-ADC effectiveness evaluation model constructed in this paper can effectively reduce the complexity of traditional effectiveness evaluation models by combining the same performance levels in the Lz-transform and solving the effectiveness of the system with the help of computer programming, providing a new method for the effectiveness evaluation of the complex MSS. At the same time, the weaknesses of the system can be identified, providing a theoretical basis for improving the system’s effectiveness.
Originality/value
A possibility solution method based on the definite weighted function for comparing two three-parameter interval grey numbers is constructed, which improves on the traditional calculation of the probability based on numerical values and the subjective preferences of decision-makers. Meanwhile, the effectiveness evaluation model integrates the basic theories of the three-parameter interval grey number and its definite weighted function, Grey-Markov, the grey universal generating function (GUGF), the grey multi-state system (GMSS), etc., providing an innovative method for solving the effectiveness of a multi-state instantaneous communication satellite system.
Masatoshi Muramatsu and Takeo Kato
Abstract
Purpose
The purpose of this paper is to propose a selection guide for multi-objective optimization methods in ergonomic design. The proposed guide enables designers to select an appropriate method for optimizing human characteristics composed of engineering characteristics (e.g. users’ height, weight and muscular strength) and physiological characteristics (e.g. brain wave, pulse-beat and myoelectric signal) that are in trade-off relationships.
Design/methodology/approach
This paper focuses on the types of relationships between engineering or physiological characteristics and the corresponding psychological characteristics (e.g. comfort and usability). Using these relationships and the characteristics of the multi-objective optimization methods, this paper classifies the methods and constructs a flow chart for selecting among them.
Findings
This paper applied the proposed selection guide to the geometric design of a comfortable seat and confirmed its applicability. The selected multi-objective optimization method optimized the contact area of the seat back (an engineering characteristic associated with the comfortable fit of the seat backrest) and the blood flow volume (a physiological characteristic associated with numbness in the lower limbs) on the basis of design intents such as avoiding deep-vein thrombosis after a long flight.
Originality/value
Because of the lack of a selection guide for multi-objective optimization methods, an inappropriate method is often applied in industry. This paper proposes a selection guide for ergonomic design, a domain with many multi-objective optimization problems.
Moaaz Elkabalawy and Osama Moselhi
Abstract
Purpose
This paper aims to present an integrated method for optimized project duration and costs, considering the size and cost of crews assigned to project activities' execution modes.
Design/methodology/approach
The proposed method utilizes fuzzy set theory (FSs) for modeling uncertainties associated with activities' duration and cost, and a genetic algorithm (GA) for optimizing the project schedule. The method has four main modules that support two optimization methods: a modeling uncertainty and defuzzification module; a scheduling module; a cost calculations module; and a decision-support module. The first optimization method uses the elitist non-dominated sorting genetic algorithm (NSGA-II), while the second uses a dynamic weighted optimization genetic algorithm. The developed scheduling and optimization methods are coded in Python as a stand-alone automated computerized tool to facilitate the developed method's application.
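At the heart of NSGA-II's elitist sorting is Pareto dominance. A minimal sketch of extracting the first non-dominated front for a minimization problem (illustrative objective vectors, not the paper's schedule data):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the first non-dominated front, the set NSGA-II ranks best."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 4) is dominated by (2, 3); the other three form the first front.
front = first_front([(1, 5), (2, 3), (4, 1), (3, 4)])
```

A weighted-sum GA collapses the objectives to one scalar before searching, whereas NSGA-II ranks candidates by fronts like this one, which is why it can escape compromises that trap the weighted formulation in local optima.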
Findings
The developed method is applied to a numerical example to demonstrate its use and illustrate its capabilities. The method was validated using a multi-layered comparative analysis involving performance evaluation, statistical comparisons and stability evaluation. Results indicated that NSGA-II outperformed the weighted optimization method, yielding a better global optimum solution and avoiding entrapment in local minima. Moreover, the developed method was also run under a deterministic scenario to benchmark its ability to find optimal solutions against previously developed methods from the literature. Results showed the developed method's superiority in finding a better optimal set of solutions in reasonable processing time.
Originality/value
The novelty of the proposed method lies in its capacity to consider resource planning and project scheduling under uncertainty simultaneously while accounting for activity splitting.