Search results
1 – 10 of 43
Ying Cai, Peijiang Yuan and Dongdong Chen
Abstract
Purpose
To improve the absolute positioning accuracy of industrial robots, a Kriging-based calibration method is proposed.
Design/methodology/approach
The method designs a dedicated semivariogram that connects the joint space and the working space. Kriging equations are then determined and solved to predict the position errors of target points. Finally, a simple and convenient error-compensation scheme, which can be implemented directly in the control commands, is proposed.
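The abstract gives no formulas, but the Kriging prediction step it describes can be sketched generically. Below is a minimal ordinary-Kriging predictor; a standard exponential semivariogram stands in for the paper's specially designed one, and all sample values are invented for illustration:

```python
import numpy as np

def exp_semivariogram(h, sill=1.0, length=2.0, nugget=0.0):
    """Exponential semivariogram (assumed form; the paper designs its own)."""
    return nugget + sill * (1.0 - np.exp(-h / length))

def ordinary_kriging(X, y, x0):
    """Predict the value at x0 from samples (X, y) by ordinary Kriging."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = exp_semivariogram(d)
    # Augment with the Lagrange multiplier row/column enforcing sum(w) = 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = G
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_semivariogram(np.linalg.norm(X - x0, axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ y)

# Toy example: predict a position error at a new joint-space point.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 0.3, 0.2, 0.4])
pred = ordinary_kriging(X, y, np.array([1.5]))
```

At a sampled location the predictor reproduces the measured error exactly (with zero nugget), which is the property that makes Kriging attractive for error compensation.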
Findings
The verification experiment on position-error multiplicity and the Kriging calibration experiment were conducted on a KUKA R210 R2700 industrial robot. The position-error multiplicity experiment reveals that the position error of the industrial robot varies with the joint-angle set. In addition, the Kriging calibration experiment shows that the maximum spatial position error is reduced from 1.2906 to 0.2484 mm, which demonstrates the validity of the Kriging calibration.
Originality/value
The specially designed semivariogram makes this method flexible and practical. It can be used in various fields where the joint-angle solutions of industrial robots must be adapted to the task requirements and the environment, such as optimal trajectory planning and obstacle avoidance. In addition, the method provides accurate positioning results.
Guozhi Li, Fuhai Zhang, Yili Fu and Shuguo Wang
Abstract
Purpose
The purpose of this paper is to propose an error model for serial robot kinematic calibration based on dual quaternions.
Design/methodology/approach
Dual quaternions combine dual-number theory with quaternion algebra, which means they can represent spatial transformations. Because they encode screw displacements in a compact and efficient way, they are used here for the kinematic analysis of serial robots. The error model proposed in this paper is derived from the forward kinematic equations using dual quaternion algebra. Full pose measurements are used to apply the error model to the serial robot, taken with a Leica Geosystems Absolute Tracker (AT960) and a tracker machine control (T-MAC) probe.
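As a rough illustration of why dual quaternions represent rigid (screw) displacements compactly, here is a minimal sketch of constructing and composing them. This is generic textbook algebra (Hamilton convention, [w, x, y, z] ordering assumed), not the paper's error model:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    """Dual quaternion (real, dual) from rotation quaternion q and translation t."""
    tq = np.array([0.0, *t])
    return q, 0.5 * qmul(tq, q)

def dq_mul(A, B):
    """Compose dual quaternions: (a + e b)(c + e d) = ac + e(ad + bc)."""
    ar, ad = A
    br, bd = B
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_translation(D):
    """Recover the translation encoded in a unit dual quaternion."""
    r, d = D
    rc = r * np.array([1.0, -1.0, -1.0, -1.0])  # quaternion conjugate
    return 2.0 * qmul(d, rc)[1:]

# Example: composing two pure translations adds them.
ident = np.array([1.0, 0.0, 0.0, 0.0])
D = dq_mul(dq_from_rt(ident, [1.0, 0.0, 0.0]),
           dq_from_rt(ident, [0.0, 2.0, 0.0]))
print(dq_translation(D))   # → [1. 2. 0.]
```

One dual quaternion carries rotation and translation in eight numbers, which is why a single dual-quaternion error vector can track both position and orientation error of the end-effector at once.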
Findings
Two kinematic-parameter identification algorithms are derived from the proposed dual-quaternion error model, and they can be used for serial robot calibration. The error model uses Denavit–Hartenberg (DH) notation in the kinematic analysis, so the kinematic parameters retain an intuitive geometrical meaning. The absolute tracker system can measure the position and orientation of the end-effector (EE) simultaneously using the T-MAC probe.
Originality/value
The error model formulated with dual quaternion algebra contains all the basic geometrical parameters of the serial robot during the kinematic calibration process. The dual-quaternion error vector can be used as an indicator of the trend of the error between the nominal and actual pose of the robot's EE. The accuracy of the EE is improved after nearly 20 measurements in the experiment conducted on the SDA5F robot. The simulation and experiment verify the effectiveness of the error model and the calibration algorithms.
Abstract
Purpose
Compared with a low-fidelity model, a high-fidelity model has the advantage of high accuracy and the disadvantages of low efficiency and high cost. A series of multi-fidelity surrogate modelling methods have been developed to exploit the respective advantages of low-fidelity and high-fidelity models. However, most multi-fidelity surrogate modelling methods are sensitive to the amount of high-fidelity data. The purpose of this paper is to propose a multi-fidelity surrogate modelling method whose accuracy is less dependent on the amount of high-fidelity data.
Design/methodology/approach
A multi-fidelity surrogate modelling method based on neural networks is proposed, which uses transfer-learning ideas to explore the correlation between datasets of different fidelity. A low-fidelity neural network is built from a sufficient amount of low-fidelity data and then fine-tuned with a very small amount of high-fidelity (HF) data, yielding a multi-fidelity neural network based on this correlation.
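The fine-tuning idea can be caricatured without any deep-learning framework. In the sketch below, a linear model on fixed polynomial features stands in for a trained network, and "fine-tuning" keeps the multi-fidelity weights close to the low-fidelity ones; the response functions and sample counts are invented for illustration:

```python
import numpy as np

def features(x):
    """Fixed polynomial features, standing in for a trained hidden layer."""
    x = np.asarray(x, dtype=float)
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

f_lo = lambda x: x**2                     # cheap low-fidelity response
f_hi = lambda x: x**2 + 0.2*x + 0.05      # expensive high-fidelity response

# Step 1: fit output weights on plentiful low-fidelity data.
x_lo = np.linspace(0.0, 1.0, 200)
w_lo = np.linalg.lstsq(features(x_lo), f_lo(x_lo), rcond=None)[0]

# Step 2: fine-tune on a handful of high-fidelity samples, penalizing
# departure from the low-fidelity weights (the transfer-learning idea).
x_hi = np.linspace(0.0, 1.0, 8)
Phi, lam = features(x_hi), 1e-8
w_mf = np.linalg.solve(Phi.T @ Phi + lam * np.eye(4),
                       Phi.T @ f_hi(x_hi) + lam * w_lo)

pred = features(0.5) @ w_mf   # ≈ f_hi(0.5) = 0.4
```

The low-fidelity fit supplies a strong prior, so only eight high-fidelity points are needed to correct the bias, which mirrors the paper's claim of reduced dependence on HF data.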
Findings
Numerical examples are used to demonstrate the validity of the proposed method, and the influence of neural-network hyperparameters on the prediction accuracy of the multi-fidelity model is discussed.
Originality/value
Comparison with existing methods in a case study shows that when the number of high-fidelity sample points is very small, the R-squared of the proposed model exceeds that of the existing model by more than 0.3, which indicates that the proposed method can reduce the cost of complex engineering design problems.
William J. McCluskey and Richard A. Borst
Abstract
Purpose
The purpose of this research is to explore, from a mass appraisal perspective, how the effects of location are reflected within valuation models. The paper sets out to detail the various techniques and the efficacy of their application.
Design/methodology/approach
The approach adopted is analytical and based upon the development of locational attributes. An extensive literature base is synthesized, with methods evaluated in their application to mass appraisal.
Findings
This research has identified that the three main groups interested in residential property valuation, namely academia, industry and commerce, have to a certain extent been unfamiliar with the research developments occurring in the other groups. The impact of this is important, given the need for integration and collaboration in future model development.
Research limitations/implications
The research underpinning this paper will provide a solid basis for further research into this area. The importance of measuring the effect that location has on value is of major significance in the determination of objective estimates of property value.
Practical implications
Those within the assessment community could be described as pragmatists working in a situation that requires feasible and suitable solutions to the problem of measuring location value. It is our contention that the third generation techniques of spatially varying parameter models and spatial autocorrelation models will require greater industry verification before their use becomes more widely accepted.
Originality/value
This paper provides a detailed analysis of methodologies used to reflect the value of location over the last 50 years. The debate is taken forward by describing what will be the contribution to the development of the next generation of location‐specific modeling techniques.
Qiangqiang Zhai, Zhao Liu, Zhouzhou Song and Ping Zhu
Abstract
Purpose
The Kriging surrogate model has demonstrated a powerful ability to address a variety of engineering challenges by emulating time-consuming simulations. However, for problems with high-dimensional input variables, it may be difficult to obtain a model with high accuracy and efficiency because of the curse of dimensionality. To meet this challenge, an improved high-dimensional Kriging modeling method based on the maximal information coefficient (MIC) is developed in this work.
Design/methodology/approach
The hyperparameter domain is first derived and the dataset of hyperparameter and likelihood function is collected by Latin Hypercube Sampling. MIC values are innovatively calculated from the dataset and used as prior knowledge for optimizing hyperparameters. Then, an auxiliary parameter is introduced to establish the relationship between MIC values and hyperparameters. Next, the hyperparameters are obtained by transforming the optimized auxiliary parameter. Finally, to further improve the modeling accuracy, a novel local optimization step is performed to discover more suitable hyperparameters.
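Of the steps above, the Latin Hypercube Sampling stage used to collect the hyperparameter–likelihood dataset is standard and easy to sketch. A generic implementation on the unit hypercube (not the authors' code) looks like this:

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """n samples in [0, 1]^d with exactly one sample per axis-aligned stratum."""
    rng = np.random.default_rng(seed)
    # Jitter one point inside each of n equal bins, independently per dimension.
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):            # shuffle bins per dimension to decouple axes
        u[:, j] = rng.permutation(u[:, j])
    return u

S = latin_hypercube(10, 3, seed=0)   # 10 hyperparameter candidates in 3-D
```

Each column is stratified: projecting the 10 samples onto any single axis hits every one of the 10 bins exactly once, which is what gives LHS better space-filling than plain random sampling for the same budget.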
Findings
The proposed method is then applied to five representative mathematical functions with dimensions ranging from 20 to 100 and an engineering case with 30 design variables.
Originality/value
The results show that the proposed high-dimensional Kriging modeling method can obtain more accurate results than the other three methods, and it has an acceptable modeling efficiency. Moreover, the proposed method is also suitable for high-dimensional problems with limited sample points.
Slawomir Koziel, Yonatan Tesfahunegn and Leifur Leifsson
Abstract
Purpose
Strategies for accelerated multi-objective optimization of aerodynamic surfaces are investigated, including the possibility of exploiting surrogate modeling techniques for computational fluid dynamic (CFD)-driven design speedup of such surfaces. The purpose of this paper is to reduce the overall optimization time.
Design/methodology/approach
An algorithmic framework is described that is composed of: a search space reduction, fast surrogate models constructed using variable-fidelity CFD models and co-Kriging, and Pareto front refinement. Numerical case studies are provided demonstrating the feasibility of solving real-world problems involving multi-objective optimization of transonic airfoil shapes and accurate CFD simulation models of such surfaces.
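A common way to combine variable-fidelity models into a fast surrogate, loosely in the spirit of the co-Kriging step mentioned above, is to model the high-fidelity response as a scaled low-fidelity model plus a cheap discrepancy term. The toy functions below are invented for illustration and are not the paper's CFD models:

```python
import numpy as np

f_lo = lambda x: np.sin(x)                 # fast, approximate model
f_hi = lambda x: 1.2 * np.sin(x) + 0.1*x   # slow, accurate model

x_h = np.linspace(0.0, 3.0, 6)             # only a few expensive evaluations
# Fit the scale rho and a linear discrepancy a*x + b by least squares.
A = np.stack([f_lo(x_h), x_h, np.ones_like(x_h)], axis=-1)
rho, a, b = np.linalg.lstsq(A, f_hi(x_h), rcond=None)[0]

surrogate = lambda x: rho * f_lo(x) + a*x + b
```

Because the surrogate reuses the cheap model everywhere and spends the expensive evaluations only on the bridge, design studies such as Pareto-front refinement can then query `surrogate` thousands of times at negligible cost.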
Findings
It is possible, through an appropriate combination of surrogate modeling techniques and variable-fidelity models, to identify a set of alternative designs representing the best possible trade-offs between conflicting design objectives in a realistic time frame, corresponding to a few dozen high-fidelity CFD simulations of the respective surfaces.
Originality/value
The proposed aerodynamic design optimization algorithmic framework is novel and holistic. It proved useful for fast design of aerodynamic surfaces using high-fidelity simulation data in a moderately sized search space, which is extremely challenging for conventional methods because of the excessive computational cost.
Bence Tipary and Ferenc Gábor Erdős
Abstract
Purpose
The purpose of this paper is to propose a novel measurement technique and a modelless calibration method for improving the positioning accuracy of a three-axis parallel kinematic machine (PKM). The aim is to present a low-cost calibration alternative, for small and medium-sized enterprises, as well as educational and research teams, with no expensive measuring devices at their disposal.
Design/methodology/approach
Using a chessboard pattern on a ground-truth plane, a digital indicator, a two-dimensional eye-in-hand camera and a laser pointer, positioning errors are explored in the machine workspace. From these measurements, interpolation functions are set up per direction, resulting in an interpolation vector function that compensates for the volumetric errors in the workspace.
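The per-direction interpolate-and-compensate idea can be sketched for a single axis. The measured error values below are hypothetical; the paper builds one such interpolation function per direction and stacks them into a vector function:

```python
import numpy as np

# Hypothetical measured positioning errors along one axis at a few
# workspace coordinates (mm).
x_meas  = np.array([0.0, 100.0, 200.0, 300.0])   # commanded x position
ex_meas = np.array([0.00, 0.12, 0.05, -0.08])    # measured x error

def compensate_x(x_cmd):
    """Subtract the linearly interpolated error from the commanded coordinate."""
    return x_cmd - np.interp(x_cmd, x_meas, ex_meas)

print(compensate_x(150.0))   # → 149.915
```

Because the correction is a lookup rather than a kinematic model, the method stays "modelless": no parametric machine model or hand–eye calibration is required, only the measured error map.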
Findings
Based on the proof-of-concept system for the linear-delta PKM, it is shown that using the proposed measurement technique and modelless calibration method, positioning accuracy is significantly improved using simple setups.
Originality/value
In the proposed method, a combination of low-cost devices is applied to improve the three-dimensional positioning accuracy of a PKM. By using the presented tools, the parametric kinematic model is not required; furthermore, the calibration setup is simple, there is no need for hand–eye calibration and special fixturing in the machine workspace.
Abstract
Purpose
Variable-fidelity optimization (VFO) frameworks generally aim at taking full advantage of high-fidelity (HF) and low-fidelity (LF) models to solve computationally expensive problems. The purpose of this paper is to develop a novel modified trust-region assisted variable-fidelity optimization (MTR-VFO) framework that can improve the optimization efficiency for computationally expensive engineering design problems.
Design/methodology/approach
Though the LF model is rough and inaccurate, it probably contains the gradient information and trend of the computationally expensive HF model. In the proposed framework, the extreme locations of the LF Kriging model are first used to enhance the HF Kriging model, and a modified trust-region (MTR) method is then presented for efficient local search. The proposed MTR-VFO framework is verified through comparison with three typical methods on benchmark problems and is also applied to optimize the configuration of underwater tandem wings.
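The paper's modified trust-region method is not specified in the abstract, but the classic trust-region radius update that such methods build on can be sketched as follows. The parameter names and thresholds are the conventional ones, not values taken from the paper:

```python
def update_trust_region(agreement, radius, step_norm,
                        eta1=0.25, eta2=0.75,
                        shrink=0.5, grow=2.0, r_max=1.0):
    """Classic trust-region radius update.

    agreement = (actual HF improvement) / (improvement predicted by the
    surrogate) measures how trustworthy the surrogate was on the last step.
    """
    if agreement < eta1:          # surrogate was misleading: shrink
        return shrink * step_norm
    if agreement > eta2:          # surrogate was reliable: allow a larger region
        return min(grow * radius, r_max)
    return radius                 # otherwise keep the current radius
```

Each iteration thus spends one HF evaluation to score the surrogate, then adapts how far the next surrogate-based step is allowed to go, which is what keeps the HF budget low.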
Findings
The results indicate that the proposed MTR-VFO framework is more effective than some existing typical methods and it has the potential of solving computationally expensive problems more efficiently.
Originality/value
The extreme locations of LF models are used to improve the accuracy of HF models, and an MTR method is proposed for local search that does not require HF gradients. In addition, a novel MTR-VFO framework is presented and verified to be more effective than several existing typical methods, showing great potential for solving computationally expensive problems efficiently.
Edmund Baffoe-Twum, Eric Asa and Bright Awuku
Abstract
Background: Annual average daily traffic (AADT) data for road segments are critical for roadway projects, especially for decision-making about operations, travel demand, safety-performance evaluation, and maintenance. Regular updates help determine traffic patterns for decision-making. Unfortunately, installing permanent recorders on all road segments, especially low-volume roads, is virtually impossible. Consequently, insufficient AADT information is acquired for planning and new developments. A growing number of statistical, mathematical, and machine-learning algorithms have helped estimate AADT values accurately, to some extent, at both sampled and unsampled locations on low-volume roadways. In some cases, roads with no representative AADT data are resolved with information from roadways with similar traffic patterns.
Methods: This study adopted an integrative approach with a combined systematic literature review (SLR) and meta-analysis (MA) to identify and to evaluate the performance, the sources of error, and possible advantages and disadvantages of the techniques utilized most for estimating AADT data. As a result, an SLR of various peer-reviewed articles and reports was completed to answer four research questions.
Results: The study showed that the techniques most frequently used to estimate AADT data on low-volume roadways were regression, artificial neural-network techniques, travel-demand models, the traditional factor approach, and spatial interpolation techniques. The performance of these AADT estimation methods was subjected to meta-analysis. Three meta-analyses were completed, based on R-squared, root mean square error, and mean absolute percentage error. The meta-analysis results indicated a mixed summary effect: (1) all studies were equal; (2) all studies were not comparable. However, the integrated qualitative and quantitative approach indicated that spatial interpolation (Kriging) methods outperformed the others.
Conclusions: Practitioners at all levels may select spatial interpolation methods over the others to generate accurate AADT data for decision-making. In addition, the resulting cross-validation statistics provide performance measures comparable to those reported for the other methods.
Emad Samadiani and Yogendra Joshi
Abstract
Purpose
The purpose of this paper is to review the reduced order modeling approaches available in the literature for predicting the flow and especially the temperature fields inside data centers in terms of the design parameters involved.
Design/methodology/approach
This paper begins with the motivation for flow/thermal modeling in designing an energy-efficient thermal management system for data centers. Recent studies on air velocity and temperature field simulations in data centers through computational fluid dynamics/heat transfer (CFD/HT) are reviewed. Meta-modeling and reduced order modeling are tools for generating accurate and rapid surrogate models of a complex system; these tools, with a focus on low-dimensional models of turbulent flows, are reviewed. Reduced order modeling techniques based on the identification of turbulent coherent structures, in particular the proper orthogonal decomposition (POD), are explained and reviewed in more detail. The available approaches for rapid thermal modeling of data centers are then reviewed. Finally, recent studies on generating POD-based reduced order thermal models of data centers are reviewed, and representative results are presented and compared for a case study.
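The snapshot-POD procedure underlying these reduced order models can be sketched in a few lines of linear algebra. The data below are synthetic (a random low-rank matrix standing in for temperature-field snapshots), not tied to any data-center geometry:

```python
import numpy as np

# Snapshot POD: collect field snapshots as columns, subtract the mean,
# take the SVD, and keep the leading left singular vectors as the basis.
rng = np.random.default_rng(1)
n_points, n_snap = 400, 30
modes_true = rng.standard_normal((n_points, 3))
coeffs = rng.standard_normal((3, n_snap))
snapshots = modes_true @ coeffs                 # synthetic rank-3 "field" data

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1      # modes capturing 99% energy
basis = U[:, :r]                                # the reduced POD basis

# A field is approximated by its projection onto the reduced basis.
recon = mean + basis @ (basis.T @ (snapshots - mean))
```

Instead of solving the full CFD/HT problem, a reduced order model evolves only the `r` modal coefficients, which is what makes multi-parameter design and control studies of data centers tractable.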
Findings
It is concluded that low-dimensional models are needed to predict the multi-parameter-dependent thermal behavior of data centers accurately and rapidly for design and control purposes. POD-based techniques have shown good approximation capability for multi-parameter thermal modeling of data centers. It is believed that wavelet-based techniques, due to their ability to separate coherent from incoherent structures – something that POD cannot do – can be considered promising new tools for reduced order thermal modeling of complex electronic systems such as data centers.
Originality/value
The paper reviews different numerical methods and provides the reader with some insight for reduced order thermal modeling of complex convective systems such as data centers.