Search results (1–10 of 59)
Vasileios Stamatis, Michail Salampasis and Konstantinos Diamantaras
Abstract
Purpose
In federated search, a query is sent simultaneously to multiple resources, and each of them returns a list of results. These lists are merged into a single list through the results merging process. In this work, the authors apply machine learning methods to results merging in federated patent search. Although several methods for results merging have been developed, none has been tested on patent data or evaluated across several machine learning models. Thus, the authors experiment with state-of-the-art methods on patent data and propose two new results merging methods that use machine learning models.
Design/methodology/approach
The methods are based on a centralized index containing samples of documents from all the remote resources, and they implement machine learning models to estimate comparable scores for the documents retrieved by different resources. The authors examine the new methods in cooperative and uncooperative settings where document scores from the remote search engines are available and not, respectively. In uncooperative environments, they propose two methods for assigning document scores.
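The merging idea can be sketched as follows (an illustrative sketch, not the authors' implementation): documents appearing both in a remote result list and in the centralized sample index yield (remote score, central score) training pairs, and a per-resource model learned from those pairs maps every remote score onto the comparable central scale. A simple least-squares line stands in here for the paper's learned models.

```python
# Hypothetical sketch of score-based results merging; the linear mapping
# below is a stand-in for the paper's machine learning models.

def fit_linear(pairs):
    """Fit central ~ a * remote + b by least squares from overlap pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def merge_results(result_lists, overlap_pairs):
    """result_lists: {resource: [(doc_id, remote_score), ...]}
    overlap_pairs: {resource: [(remote_score, central_score), ...]}
    Returns one list of (doc_id, comparable_score), best first."""
    merged = []
    for resource, docs in result_lists.items():
        a, b = fit_linear(overlap_pairs[resource])
        merged.extend((doc, a * score + b) for doc, score in docs)
    return sorted(merged, key=lambda item: item[1], reverse=True)
```

Resource B below returns scores on a 0–100 scale, yet its top document is correctly interleaved between A's results once both are mapped to the central scale.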
Findings
The effectiveness of the new results merging methods was measured against state-of-the-art models and found to be superior to them in many cases with significant improvements. The random forest model achieves the best results in comparison to all other models and presents new insights for the results merging problem.
Originality/value
In this article, the authors show that machine learning models can substitute for the standard methods and models that have been used for results merging for many years. Their methods outperformed state-of-the-art estimation methods for results merging and proved more effective for federated patent search.
Assad Mehmood, Kashif Zia, Arshad Muhammad and Dinesh Kumar Saini
Abstract
Purpose
Participatory wireless sensor networks (PWSN) are an emerging paradigm that leverages existing sensing and communication infrastructures for the sensing task. PWSN can enable applications that monitor an environmental phenomenon P (e.g. noise pollution or road traffic) and require spatio-temporal data samples of P in the region of interest to capture its variations and construct its profile. Because of the irregular distribution and uncontrollable mobility of people (with mobile phones), and their varying willingness to participate, complete spatio-temporal (CST) coverage of P may not be ensured. Therefore, unobserved data values must be estimated for CST profile construction of P, which is the subject of this paper.
Design/methodology/approach
In this paper, the estimation of these missing data samples in both the spatial and temporal dimensions is discussed, and the paper shows that a non-parametric technique – kernel regression – provides better estimates than parametric regression techniques in the PWSN context for spatial estimation. Furthermore, preliminary results for estimation in the temporal dimension are provided. Deterministic and stochastic approaches to estimation in the context of PWSN are also discussed.
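The spatial estimator can be sketched with Nadaraya–Watson kernel regression (an illustrative one-dimensional sketch, not the authors' exact implementation): the value at an unobserved location is a distance-weighted average of observed samples.

```python
import math

# Illustrative Nadaraya-Watson kernel regression with a Gaussian kernel;
# the bandwidth value is an assumption, not taken from the paper.

def kernel_regression(x_query, samples, bandwidth=1.0):
    """samples: list of (location, value); returns the estimate at x_query."""
    weights = [math.exp(-((x_query - x) ** 2) / (2 * bandwidth ** 2))
               for x, _ in samples]
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, samples)) / total
```

Midway between two samples with values 0 and 2, the symmetric weights give an estimate of 1.0, without assuming any parametric form for the underlying phenomenon.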
Findings
For the task of spatial profile reconstruction, it is shown that the non-parametric estimation technique (kernel regression) gives a better estimate of the unobserved data points. For temporal estimation, a few preliminary techniques have been studied, and further investigation is required to find the best estimation technique(s) that approximate the missing observations with considerably less error.
Originality/value
This study addresses the environmental informatics issues related to deterministic and stochastic approaches using PWSN.
Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen
Abstract
Purpose
Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and its results can serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimation; identifying an appropriate segmentation method for extracting the data points of a deformed surface; investigating a methodology to determine an un-deformed (reference) surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Design/methodology/approach
In practice, the quality of the point clouds and of the surface extraction strongly impacts deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref for measuring deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.
Findings
The study shows that both random sample consensus (RANSAC) and region growing–based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly unavailable. Using a plane fitted to the point cloud of the current surface would give unrealistic and inaccurate deformation, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
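The RANSAC step for planar surfaces can be sketched as follows (a minimal illustrative version, not the study's implementation): fit a plane through three random points repeatedly and keep the plane supported by the most inliers within a distance threshold; the inliers form the extracted surface patch.

```python
import random

# Minimal RANSAC plane-fitting sketch; threshold and iteration count are
# illustrative assumptions, not values from the study.

def plane_from_points(p, q, r):
    """Unit normal n and offset d of the plane through p, q, r (or None)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:                      # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, threshold=0.05, iterations=200, seed=0):
    """Return the largest inlier set found over the given iterations."""
    rng = random.Random(seed)
    best = []
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Note the limitation the study observes: a point lying within the threshold of the plane is accepted regardless of origin, so mixed pixels near the surface are not eliminated.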
Research limitations/implications
Due to logistical difficulties, an independent measurement could not be established to assess the deformation accuracy based on the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy on the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.
Practical implications
This study gives insight into the errors of these steps, and its results can serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.
Social implications
The results of this study provide guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds, and offer a low-cost method for deformation analysis of structures.
Originality/value
Although a large number of studies have used laser scanning to measure structural deformation in the last two decades, most applied methods measure change between two states (or epochs) of the structure surface and focus on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, challenges remain in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret them to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, the methods selected for extracting point clouds of surfaces, the identification of reference information, and the presence of outliers, noisy data and/or mixed pixels.
Slawomir Koziel and Adrian Bekasiewicz
Abstract
Purpose
The purpose of this paper is to investigate how a database of pre-existing designs can be exploited to accelerate parametric optimization of antenna structures.
Design/methodology/approach
The usefulness of pre-existing designs for rapid design of antennas is investigated. The proposed approach exploits a database of existing antenna designs (base designs) to determine a good starting point for structure optimization and its response sensitivities. The considered method is suitable for handling computationally expensive models evaluated using full-wave electromagnetic (EM) simulations. Numerical case studies are provided, demonstrating the feasibility of the framework for the design of real-world structures.
Findings
The use of pre-existing designs enables rapid identification of a good starting point for antenna optimization and speeds up estimation of the structure response sensitivities. The base designs can be arranged into subsets (simplexes) in the objective space and used to represent the target vector, i.e. the starting point for structure design. The base point closest to the initial design can be used to initialize the Jacobian for local optimization. Moreover, local optimization costs can be reduced by using the Broyden formula for Jacobian updates in consecutive iterations.
Research limitations/implications
The study investigates the possibility of reusing pre-existing designs for the acceleration of antenna optimization. The proposed technique enables the identification of a good starting point and reduces the number of expensive EM simulations required to obtain the final design.
Originality/value
The proposed design framework proved useful for the identification of a good initial design and rapid optimization of modern antennas. Identification of the starting point for the design of such structures is extremely challenging with conventional methods involving parametric studies or repetitive local optimizations. The presented methodology also proved to be a useful design and geometry scaling tool when previously obtained designs are available for the same antenna structure.
Jinbao Zhang, Yongqiang Zhao, Ming Liu and Lingxian Kong
Abstract
Purpose
A generalized distribution with a wide range of skewness and elongation is well suited to data mining and robust to misspecification of the distribution. Hence, the purpose of this paper is to present a distribution-based approach for estimating degradation reliability under these conditions.
Design/methodology/approach
Tukey’s g-and-h distribution, with its quantile expression, is introduced to fit the degradation paths of the population over time. The Newton–Raphson algorithm is used to approximately evaluate the reliability. Simulation verification of parameter estimation with particle swarm optimization (PSO) is carried out. The effectiveness and validity of the proposed approach for degradation reliability are verified by a two-stage verification and by comparison with others’ work.
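The quantile expression in question can be sketched as follows (parameter names a, b, g, h are generic, not necessarily the authors' notation): location a, scale b, skewness g and elongation h act on the standard normal quantile z, and a PSO-style optimizer would minimize a root-mean-square error over these parameters.

```python
import math
import statistics

# Sketch of the Tukey g-and-h quantile function and the RMSE objective a
# PSO-style optimizer would minimize; illustrative, not the authors' code.

def gh_quantile(p, a, b, g, h):
    z = statistics.NormalDist().inv_cdf(p)   # standard normal quantile
    skew = (math.exp(g * z) - 1) / g if g != 0 else z
    return a + b * skew * math.exp(h * z * z / 2)

def rmse(params, probs, observed):
    """Root-mean-square error between fitted quantiles and observations."""
    n = len(probs)
    return (sum((gh_quantile(p, *params) - y) ** 2
                for p, y in zip(probs, observed)) / n) ** 0.5
```

At p = 0.5 the standard normal quantile z is zero, so the skew and elongation factors vanish and the fitted median equals the location parameter a, regardless of g and h.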
Findings
Simulation studies have proved the effectiveness of PSO for the parameter estimation. Two degradation datasets, for GaAs laser devices and crack growth, are analyzed with the proposed approach. The results show that it matches the initial failure time well and is more compatible than the normal and Weibull distributions.
Originality/value
Tukey’s g-and-h distribution is proposed for the first time to investigate the influence of the tail and the skewness on degradation reliability. In addition, the parameters of the Tukey’s g-and-h distribution are estimated by PSO with root-mean-square error as the objective function.
Sohail R. Reddy, Matthias K. Scharrer, Franz Pichler, Daniel Watzenig and George S. Dulikravich
Abstract
Purpose
This paper aims to solve the parameter identification problem to estimate the parameters in electrochemical models of the lithium-ion battery.
Design/methodology/approach
The parameter estimation framework is applied to the Doyle-Fuller-Newman (DFN) model containing a total of 44 parameters. The DFN model is fit to experimental data obtained through the cycling of Li-ion cells. The parameter estimation is performed by minimizing the least-squares difference between the experimentally measured and numerically computed voltage curves. The minimization is performed using a state-of-the-art hybrid minimization algorithm.
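The identification loop can be sketched in miniature as follows (a toy stand-in: the DFN model is replaced by a hypothetical two-parameter exponential discharge curve, and the paper's hybrid optimizer by an exhaustive grid search minimizing the least-squares difference to the measured voltage curve).

```python
import math

# Toy parameter-identification sketch; model, parameter ranges and the
# grid-search optimizer are illustrative assumptions, not from the paper.

def model_voltage(t, v0, tau):
    """Hypothetical discharge model standing in for the DFN simulation."""
    return v0 * math.exp(-t / tau)

def sse(params, times, measured):
    """Least-squares difference between simulated and measured voltages."""
    v0, tau = params
    return sum((model_voltage(t, v0, tau) - v) ** 2
               for t, v in zip(times, measured))

def fit(times, measured):
    """Search v0 and tau on a 0.1-spaced grid over (0, 10]."""
    grid = [k / 10 for k in range(1, 101)]
    return min(((v0, tau) for v0 in grid for tau in grid),
               key=lambda p: sse(p, times, measured))
```

The same structure scales up conceptually: replace `model_voltage` with the expensive electrochemical simulation and `fit` with a hybrid optimizer, and the 44-parameter problem becomes exactly why a faster minimization strategy matters.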
Findings
The DFN model parameter estimation is performed within 14 h, which is a significant improvement over previous works. The mean absolute error for the converged parameters is less than 7 mV.
Originality/value
To the best of the authors’ knowledge, application of a hybrid optimization framework is new in the field of electrical modelling of lithium-ion cells. This approach saves much time in parameterization of models with a high number of parameters while achieving a high-quality fit.
Bolin Gao, Kaiyuan Zheng, Fan Zhang, Ruiqi Su, Junying Zhang and Yimin Wu
Abstract
Purpose
Intelligent and connected vehicle technology is in the ascendant. High-level autonomous driving places more stringent requirements on the accuracy and reliability of environmental perception. Existing research on multitarget tracking based on multisensor fusion mostly focuses on the vehicle perspective but, limited by the principal defects of the vehicle sensor platform, finds it difficult to comprehensively and accurately describe the surrounding environment.
Design/methodology/approach
In this paper, a multitarget tracking method based on roadside multisensor fusion is proposed, including a multisensor fusion method based on measurement-noise-adaptive Kalman filtering, a global nearest neighbor data association method based on an adaptive tracking gate, and a track life-cycle management method based on M/N logic rules.
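The gating idea can be sketched in one dimension as follows (a simplified per-track greedy sketch, not the paper's global nearest neighbor algorithm, and with a fixed gate where the paper adapts it online): a measurement is associated with a track only if its normalized (Mahalanobis) distance falls inside the gate.

```python
# Illustrative 1-D gated nearest-neighbour association; the gate size is a
# fixed assumption here, whereas the paper adjusts it adaptively.

def associate(tracks, measurements, gate=3.0):
    """tracks: {id: (predicted_pos, variance)}; returns {id: measurement}."""
    assignments = {}
    for track_id, (pred, var) in tracks.items():
        best, best_dist = None, gate
        for z in measurements:
            dist = abs(z - pred) / var ** 0.5   # 1-D Mahalanobis distance
            if dist < best_dist:
                best, best_dist = z, dist
        if best is not None:
            assignments[track_id] = best
    return assignments
```

A clutter measurement far from every predicted track position falls outside all gates and is simply left unassociated, which is the behavior the adaptive gate is tuned to preserve as track uncertainty varies.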
Findings
Compared with fixed-size tracking gates, the adaptive tracking gates proposed in this paper can comprehensively improve the data association performance in the multitarget tracking process. Compared with single sensor measurement, the proposed method improves the position estimation accuracy by 13.5% and the velocity estimation accuracy by 22.2%. Compared with the control method, the proposed method improves the position estimation accuracy by 23.8% and the velocity estimation accuracy by 8.9%.
Originality/value
A multisensor fusion method with adaptive Kalman filtering of measurement noise is proposed to realize the adaptive adjustment of measurement noise. A global nearest neighbor data association method based on adaptive tracking gate is proposed to realize the adaptive adjustment of the tracking gate.
Fangli Mou and Dan Wu
Abstract
Purpose
In recent years, owing to rapidly increasing labor costs, the demand for robots in daily services and industrial operations has increased significantly. For further applications and human–robot interaction in an unstructured open environment, fast and accurate tracking and strong disturbance rejection are required. However, a conventional controller can make it difficult for the robot to meet these demands, and when a robot is required to perform at high speed over a large range of motion, conventional controllers may not perform effectively or may even lead to instability.
Design/methodology/approach
The main idea is to develop the control law by combining sliding mode control (SMC) feedback with the active disturbance rejection control (ADRC) architecture, improving the robustness and control quality of a conventional SMC controller. The problem is formulated and solved in the ADRC framework. For better estimation and control performance, a generalized proportional integral observer (GPIO) is employed to estimate and compensate for unmodeled dynamics and other unknown time-varying disturbances. Benefiting from the use of the GPIO, a new SMC law can be designed by synthesizing the estimate and its history.
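The disturbance-estimation idea can be sketched with a basic linear extended state observer, of which the GPIO is a generalization with further extended states for time-varying disturbances. The plant (a double integrator with input gain b0), gains and step size below are illustrative assumptions, not from the paper.

```python
# Minimal discrete-time extended state observer sketch for a double
# integrator: x1 estimates position, x2 velocity, and x3 the lumped
# (total) disturbance acting on the plant. Gains are illustrative.

def eso_step(est, y, u, dt, b0=1.0, gains=(3.0, 3.0, 1.0)):
    x1, x2, x3 = est
    e = y - x1                    # output estimation error
    l1, l2, l3 = gains
    x1 += dt * (x2 + l1 * e)      # position estimate, corrected by e
    x2 += dt * (x3 + b0 * u + l2 * e)  # velocity: input + disturbance est.
    x3 += dt * (l3 * e)           # extended state tracks the disturbance
    return (x1, x2, x3)
```

Running this observer against a plant driven by a constant unknown disturbance, the extended state x3 converges to that disturbance, which the control law can then cancel, the core mechanism ADRC-style designs build on.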
Findings
The employed methodology introduces a significant improvement in handling the uncertainties of the system parameters without compromising the nominal control quality and intuitiveness of the conventional ADRC design. First, the proposed method combines the advantages of the ADRC and SMC methods and achieved the best tracking performance among the compared controllers. Second, the proposed controller is sufficiently robust to various disturbances and results in smaller tracking errors. Third, the proposed control method is insensitive to its control parameters, which indicates good application potential.
Originality/value
High-performance robot tracking control is the basis for further robot applications in open environments and human–robot interfaces, which require high tracking accuracy and strong disturbance rejection. However, both the varied dynamics of the system and rapidly changing nonlinear coupling characteristic significantly increase the control difficulty. The proposed method gives a new replacement of PID controller in robot systems, which does not require an accurate dynamic system model, is insensitive to control parameters and can perform promisingly for response rapidity and steady-state accuracy, as well as in the presence of strong unknown disturbances.
Sezer Kahyaoglu Bozkus, Hakan Kahyaoglu and Atahirou Mahamane Mahamane Lawali
Abstract
Purpose
This study aims to analyze the dynamic behavior of the relationship between atmospheric carbon emissions and the Organisation for Economic Co-operation and Development (OECD) industrial production index (IPI) in the short and long term by applying multifractal techniques.
Design/methodology/approach
The multifractal de-trended cross-correlation technique is used for this analysis, based on the relevant literature. It is the most widely used approach for estimating multifractality because it generates empirical results that are robust against non-stationarities in the time series.
Findings
It is revealed that industrial production incurs long- and short-term environmental costs. The OECD IPI and atmospheric carbon emissions were found to be strongly correlated in the time domain. However, this relationship mostly does not take into account frequency-based correlations with the tail effects caused by shocks affecting the economy. In this study, the long-term dependence of the relationship between the OECD IPI and atmospheric carbon emissions differs from the correlation obtained by linear methods, as the analysis is frequency-based. The major finding is that the Hurst coefficient is in the range 0.40–0.75.
Research limitations/implications
In this study, the local singular behavior of the time series is analyzed to test for the multifractality of the series. In this context, the scaling exponents and the singularity spectrum are obtained to determine the origins of this multifractality. A multifractal time series is characterized by sets of points sharing a singularity exponent α, each set forming a fractal with fractal dimension f(α). Therefore, the multifractality term indicates the existence of fluctuations that are non-uniform and, more importantly, whose relative frequencies are scale-dependent.
Practical implications
The results provide information based on the fluctuation in IPI, which determines the main conjuncture of the economy. An optimal strategy for shaping the consequences of climate change resulting from industrial production activities will not only need to be quite comprehensive and global in scale but also policies will need to be applicable to the national and local conditions of the given nation and adaptable to the needs of the country.
Social implications
The results provide information for the analysis of the environmental cost of climate change depending on the magnitude of the impact on the total supply. In addition to environmental problems, climate change leads to economic problems, and hence, policy instruments are introduced to fight against the adverse effects of it.
Originality/value
This study may be of practical and technical importance in regional climate change forecasting, extreme carbon emission regulations and industrial production resource management in the world economy. Hence, the major contribution of this study is to introduce an approach to sustainability for the analysis of the environmental cost of growth in the supply side economy.
Tao Peng, Xingliang Liu, Rui Fang, Ronghui Zhang, Yanwei Pang, Tao Wang and Yike Tong
Abstract
Purpose
This study aims to develop an automatic lane-change mechanism on highways for self-driving articulated trucks to improve traffic safety.
Design/methodology/approach
The authors propose a novel safe lane-change path planning and tracking control method for articulated vehicles. A double-Gaussian distribution is introduced to deduce the lane-change trajectories of the tractor and trailer, coupling the characteristics of intelligent vehicles and roads. For different steering and braking maneuvers, minimum safe distances are modeled and calculated. Considering safety and ergonomics, the authors investigated multilevel self-driving modes that serve as the basis of lane-change decision-making. Furthermore, a combined controller is designed by feedback linearization and single-point preview optimization to ensure path tracking and robust stability. A specialized hardware-in-the-loop simulation platform is built to verify the effectiveness of the designed method.
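The trajectory idea can be sketched for a single rigid vehicle as follows (an illustrative single-Gaussian sketch; the paper's model couples two such distributions for the tractor and trailer): take the lateral velocity during a lane change as a Gaussian pulse centred at time t0, so that integrating it gives a smooth S-shaped lateral path reaching the lane offset W.

```python
import math

# Illustrative lane-change lateral profile: the integral of a Gaussian
# lateral-velocity pulse is an error-function S-curve. W, t0 and sigma
# are hypothetical parameters, not values from the paper.

def lateral_position(t, W, t0, sigma):
    """0 well before t0, W/2 at t0, W well after t0."""
    return W / 2 * (1 + math.erf((t - t0) / (sigma * math.sqrt(2))))
```

The pulse width sigma controls how aggressive the maneuver is, which is the kind of knob a multilevel self-driving mode or a minimum-safe-distance constraint would tune.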
Findings
The numerical simulation results demonstrated the feasibility of the path-planning model and the effectiveness of the combined controller and decision mechanism for self-driving trucks. The proposed trajectory model can provide safe lane-change path planning, and the designed controller ensures good tracking and robust stability for the closed-loop nonlinear system.
Originality/value
This is fundamental research on intelligent local path planning and automatic control for articulated vehicles. There are two main contributions: the first is a more quantifiable trajectory model for self-driving articulated vehicles, which provides the opportunity to adapt to vehicle and scene changes. The second involves designing a feedback linearization controller, combined with a multi-objective decision-making mode, to improve the comprehensive performance of intelligent vehicles. This study provides a valuable reference for developing advanced driver assistance systems and intelligent control systems for self-driving articulated vehicles.