Search results

1 – 10 of over 7000
Article
Publication date: 20 April 2015

Luciano Andrea Catalano, Domenico Quagliarella and Pier Luigi Vitagliano

The purpose of this paper is to propose an accurate and efficient technique for computing flow sensitivities by finite differences of perturbed flow fields. It relies on…

Abstract

Purpose

The purpose of this paper is to propose an accurate and efficient technique for computing flow sensitivities by finite differences of perturbed flow fields. It relies on computing the perturbed flows on coarser grid levels only: to achieve the same fine-grid accuracy, the approximate value of the relative local truncation error between coarser and finest grids unperturbed flow fields, provided by a standard multigrid method, is added to the coarse grid equations. The gradient computation is introduced in a hybrid genetic algorithm (HGA) that takes advantage of the presented method to accelerate the gradient-based search. An application to a classical transonic airfoil design is reported.
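The finite-difference sensitivity evaluation the method builds on can be illustrated in its plainest form; this sketch omits the paper's actual contribution (the multigrid coarse-grid evaluation with truncation-error correction) and uses a toy quadratic objective as a hypothetical stand-in for the CFD solver:

```python
import numpy as np

def objective(x):
    # Toy stand-in for a CFD-evaluated objective (e.g. airfoil drag).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def fd_gradient(f, x, h=1e-5):
    """Central finite-difference sensitivities: one pair of perturbed
    evaluations per design variable."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

x = np.array([0.0, 0.0])
print(fd_gradient(objective, x))  # ≈ [-2., 2.]
```

The point of the MAFD technique is that each perturbed evaluation here would be a full flow solve; computing the perturbed flows on coarser grids, with the correction term added, is what makes the cost acceptable.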

Design/methodology/approach

A genetic optimization algorithm hybridized with classical gradient-based search techniques, using a fast and accurate gradient computation technique.

Findings

The new variant of the prolongation operator, with weighting terms based on the volume of grid cells, improves the accuracy of the multigrid-aided finite-difference (MAFD) method for turbulent viscous flows. The hybrid GA is capable of efficiently handling and compensating for the error that, although very limited, is present in the MAFD gradient evaluation method.

Research limitations/implications

The proposed new variants of HGA, while outperforming the simple genetic algorithm, still require tuning and validation to further improve performance.

Practical implications

Significant speedup of CFD-based optimization loops.

Originality/value

Introduction of a new multigrid prolongation operator that improves the accuracy of the MAFD method for turbulent viscous flows. First application of MAFD evaluation of flow sensitivities within a hybrid optimization framework.

Article
Publication date: 20 February 2020

Ravinder Singh, Archana Khurana and Sunil Kumar

This study aims to develop an optimized 3D laser point reconstruction using a gradient descent algorithm. Precise and accurate reconstruction of the 3D laser point cloud of the…

Abstract

Purpose

This study aims to develop an optimized 3D laser point reconstruction using a gradient descent algorithm. Precise and accurate reconstruction of the 3D laser point cloud of a complex environment/object is a key solution for many industries such as construction, gaming, automobiles, aerial navigation, architecture and automation. A 2D laser scanner along with a servo motor/pan-tilt/inertial measurement unit is used to generate a 3D point cloud (of the environment, an object or both) by acquiring real-time data from the sensors. However, while generating the 3D laser point cloud, problems arise from time synchronization between the laser and the servomotor and from torque variation in the servomotor, which cause misalignment in stacking the 2D laser scans to generate the 3D point cloud of the environment. Because of this misalignment, the 2D laser scans carry erroneous angular and position information from the servomotor, and the 3D laser point cloud becomes distorted and inconsistent for measuring the dimensions of objects.

Design/methodology/approach

This paper presents a modified 3D laser system, assembled from a 2D laser scanner coupled with a servomotor (Dynamixel motor), for developing an efficient 3D laser point cloud with the implementation of an optimization technique: a descent gradient technique (DGT) filter. The proposed approach reduces the cost function (error) in the angular and position coordinates of the servomotor caused by torque variation and time synchronization, which enhances the accuracy of the 3D point cloud map for accurate measurement of object dimensions.
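The core idea of driving a servo-coordinate error down a cost function by gradient descent can be sketched minimally; the constant angular offset and the synthetic scan data below are hypothetical stand-ins for the paper's torque-variation and synchronization errors:

```python
import numpy as np

# Synthetic reference scan angles and measurements corrupted by a
# constant angular offset (hypothetical stand-in for torque/sync error).
true_offset = 0.03  # radians
theta_ref = np.linspace(0.0, np.pi, 50)
theta_meas = theta_ref + true_offset

def cost(offset):
    """Sum-of-squares discrepancy between corrected and reference angles."""
    r = (theta_meas - offset) - theta_ref
    return 0.5 * np.sum(r ** 2)

# Plain gradient descent on the scalar offset.
offset, lr = 0.0, 0.01
for _ in range(200):
    grad = -np.sum((theta_meas - offset) - theta_ref)  # d(cost)/d(offset)
    offset -= lr * grad

print(round(offset, 4))  # converges toward 0.03
```

In the real system the cost couples angular and position coordinates of the servomotor and the correction is applied before the 2D scans are stacked into the 3D cloud.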

Findings

Various real-world experiments are performed with the proposed DGT filter linked with the laser scanner and servomotor, and an improvement of 6.5 per cent in measuring object dimensions is obtained compared with conventional approaches for generating a 3D laser point cloud.

Originality/value

This proposed technique may be applicable to various industrial applications based on robotic arms (such as painting, welding and cutting) in the automobile industry, optimized measurement of objects, efficient mobile robot navigation, and precise 3D reconstruction of environments/objects in construction, architecture, airborne and aerial navigation applications.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 30 November 2021

Julián Martínez-Vargas, Pedro Carmona and Pol Torrelles

The purpose of this paper is to study the influence of different quantitative (traditionally used) and qualitative variables, such as the possible negative effect in…

Abstract

Purpose

The purpose of this paper is to study the influence on share price formation of different quantitative (traditionally used) and qualitative variables, such as the possible negative effect of certain socio-political factors in particular periods.

Design/methodology/approach

We first descriptively analyse the evolution of the Ibex-35 in recent years and compare it with other international benchmark indices. Next, two techniques are compared: a classic linear regression statistical model (GLM) and a method based on machine learning techniques called Extreme Gradient Boosting (XGBoost).

Findings

XGBoost yields a very accurate market value prediction model that clearly outperforms the other, with a coefficient of determination close to 90%, calculated on validation sets.
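The logic of the comparison can be sketched in a self-contained way; here a hand-rolled gradient-boosting-on-stumps model stands in for XGBoost, ordinary least squares stands in for the GLM, and the data are synthetic, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic nonlinear "price" data: a linear model underfits it.
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(400)
Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

def fit_stump(x, r):
    """Best single-split regression stump on the residuals r."""
    best = (np.inf, None)
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = r[x <= t].mean(), r[x > t].mean()
        sse = np.sum((r - np.where(x <= t, left, right)) ** 2)
        if sse < best[0]:
            best = (sse, (t, left, right))
    return best[1]

def boost(x, y, n_rounds=100, lr=0.1):
    """Gradient boosting: each stump fits the current residuals."""
    f0, stumps = y.mean(), []
    pred = np.full_like(y, f0)
    for _ in range(n_rounds):
        t, l, r = fit_stump(x, y - pred)
        stumps.append((t, l, r))
        pred += lr * np.where(x <= t, l, r)
    return f0, stumps

def boost_predict(x, f0, stumps, lr=0.1):
    pred = np.full_like(x, f0, dtype=float)
    for t, l, r in stumps:
        pred += lr * np.where(x <= t, l, r)
    return pred

def r2(y_true, y_pred):
    ss = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - np.sum((y_true - y_pred) ** 2) / ss

# Linear baseline (ordinary least squares), evaluated on a held-out set.
A = np.c_[Xtr, np.ones(len(Xtr))]
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
lin_pred = np.c_[Xte, np.ones(len(Xte))] @ coef

f0, stumps = boost(Xtr[:, 0], ytr)
gb_pred = boost_predict(Xte[:, 0], f0, stumps)
print(r2(yte, lin_pred) < r2(yte, gb_pred))  # boosting wins on this data
```

As in the paper, the coefficient of determination is computed on a validation set the models never saw during fitting.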

Practical implications

According to our analysis, individual accounts are equally or more important than consolidated information in predicting the behaviour of share prices. This would justify Spain maintaining the obligation to present individual interim financial statements, which does not happen in other European Union countries because IAS 34 only stipulates consolidated interim financial statements.

Social implications

The descriptive analysis allows us to see how the Ibex-35 has moved away from international trends, especially in periods in which some relevant socio-political events occurred, such as the independence referendum in Catalonia, the double elections of 2019 or the early handling of the Covid-19 pandemic in 2020.

Originality/value

Compared to other variables, the XGBoost model assigns little importance to socio-political factors when it comes to share price formation; however, this model explains 89.33% of its variance.

Details

Academia Revista Latinoamericana de Administración, vol. 35 no. 1
Type: Research Article
ISSN: 1012-8255

Article
Publication date: 1 April 2002

S. Stoyanov, C. Bailey and M. Cross

This paper details and demonstrates integrated optimisation‐reliability modelling for predicting the performance of solder joints in electronic packaging. This integrated…

Abstract

This paper details and demonstrates integrated optimisation‐reliability modelling for predicting the performance of solder joints in electronic packaging. This integrated modelling approach is used to quickly and efficiently identify the most suitable design parameters for solder joint performance during thermal cycling, and is demonstrated on flip‐chip components using "no‐flow" underfills. To implement the "optimisation in reliability" approach, the finite element simulation tool PHYSICA is coupled with optimisation and statistical tools. The resulting framework is capable of performing design optimisation procedures in an entirely automated and systematic manner.
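The coupling of a simulator with optimisation and statistical tools can be illustrated by a small response-surface sketch; the one-parameter function below is a hypothetical stand-in for a PHYSICA thermal-cycling run, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_life(x):
    """Stand-in for a thermal-cycling simulation: solder joint fatigue
    life as a function of one design parameter (hypothetical)."""
    return -(x - 0.3) ** 2 + 1.0 + 0.01 * rng.standard_normal()

# Sample the design space, fit a quadratic response surface, maximize it.
xs = np.linspace(0.0, 1.0, 9)
ys = np.array([simulate_life(x) for x in xs])
a, b, c = np.polyfit(xs, ys, 2)
x_opt = -b / (2 * a)          # vertex of the fitted parabola
print(round(x_opt, 2))        # close to the true optimum 0.3
```

The automated framework in the paper repeats this kind of sample / fit / optimise loop systematically over the full set of design parameters.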

Details

Soldering & Surface Mount Technology, vol. 14 no. 1
Type: Research Article
ISSN: 0954-0911

Article
Publication date: 25 January 2022

Anil Kumar Maddali and Habibulla Khan

Currently, the design and technological features of voices, and their analysis in various applications, are being simulated to meet the requirement of communicating at a greater…

Abstract

Purpose

Currently, the design and technological features of voices, and their analysis in various applications, are being simulated to meet the requirement of communicating at a greater distance or more discreetly. The purpose of this study is to explore how voices and their analyses are used in the modern literature to generate a variety of solutions, of which only a few successful models exist.

Design/methodology/approach

The mel-frequency cepstral coefficient (MFCC), average magnitude difference function, cepstrum analysis and other voice characteristics are effectively modeled and implemented using mathematical modeling, with variable parametric weights for each algorithm, which can be used with or without noise. The design characteristics and their weights are improved with different supervised algorithms that regulate the design model simulation.
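Of the features named above, the average magnitude difference function is simple enough to sketch directly; the synthetic 200 Hz tone here is a stand-in for a voiced speech frame:

```python
import numpy as np

def amdf(frame, max_lag):
    """Average magnitude difference function: dips at lags that match
    the pitch period of the frame."""
    n = len(frame)
    return np.array([np.mean(np.abs(frame[: n - k] - frame[k:]))
                     for k in range(1, max_lag)])

fs = 8000
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 200 * t)      # 200 Hz synthetic "voice"
d = amdf(frame, max_lag=200)
period = np.argmin(d[20:]) + 21          # skip trivially small lags
print(round(fs / period))                # 200 (Hz)
```

For a 200 Hz tone sampled at 8 kHz the period is 40 samples, so the first deep AMDF minimum beyond the skipped lags sits at lag 40.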

Findings

Different data models have been influenced by the parametric range and solution analysis in different parameter spaces, such as frequency- or time-domain models, with features computed without noise, with noise and after noise reduction. The frequency response of the current design can be analyzed through windowing techniques.

Originality/value

A new model and its implementation scenario with pervasive computational algorithms (PCA), such as hybrid PCA with AdaBoost (HPCA), PCA with bag of features and improved PCA with bag of features, relate different features such as MFCC, power spectrum, pitch and windowing techniques, which are calculated using the HPCA. The features are accumulated in matrix formulations and govern the design feature comparison and its feature classification for improved performance parameters, as mentioned in the results.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 22 May 2008

Alexander D. Klose and Andreas H. Hielscher

This paper sets out to give an overview about state‐of‐the‐art optical tomographic image reconstruction algorithms that are based on the equation of radiative transfer (ERT).

Abstract

Purpose

This paper sets out to give an overview about state‐of‐the‐art optical tomographic image reconstruction algorithms that are based on the equation of radiative transfer (ERT).

Design/methodology/approach

An objective function, which describes the discrepancy between measured and numerically predicted light intensity data on the tissue surface, is iteratively minimized to find the unknown spatial distribution of the optical parameters or sources. At each iteration step, the predicted partial current is calculated by a forward model for light propagation based on the ERT. The equation of radiative transfer is solved with either finite difference or finite volume methods.
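The iterative minimization loop can be sketched with a toy linear forward model standing in for the ERT solver (the real forward model is nonlinear and far more expensive; the matrix, dimensions and step size here are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear forward model A: maps optical parameters mu to boundary
# intensities y, standing in for an ERT light-propagation solve.
A = rng.standard_normal((30, 10))
mu_true = rng.uniform(0.5, 1.5, 10)
y_meas = A @ mu_true

def objective(mu):
    """Discrepancy between predicted and measured boundary intensities."""
    r = A @ mu - y_meas
    return 0.5 * r @ r

mu, lr = np.ones(10), 0.01
for _ in range(2000):
    grad = A.T @ (A @ mu - y_meas)   # adjoint-style gradient of the misfit
    mu -= lr * grad                  # one minimization step per "iteration"

print(np.allclose(mu, mu_true, atol=1e-3))  # parameters recovered
```

Each loop iteration mirrors the paper's scheme: run the forward model, measure the discrepancy, and update the unknown optical parameters against its gradient.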

Findings

Tomographic reconstruction algorithms based on the ERT accurately recover the spatial distribution of optical tissue properties and light sources in biological tissue. These tissues either can have small geometries/large absorption coefficients, or can contain void‐like inclusions.

Originality/value

These image reconstruction methods can be employed in small animal imaging for monitoring blood oxygenation, in imaging of tumor growth, in molecular imaging of fluorescent and bioluminescent probes, in imaging of human finger joints for early diagnosis of rheumatoid arthritis, and in functional brain imaging.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 18 no. 3/4
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 28 July 2020

Noura AlNuaimi, Mohammad Mehedy Masud, Mohamed Adel Serhani and Nazar Zaki

Organizations in many domains generate a considerable amount of heterogeneous data every day. Such data can be processed to enhance these organizations’ decisions in real…

Abstract

Organizations in many domains generate a considerable amount of heterogeneous data every day. Such data can be processed to enhance these organizations' decisions in real time. However, storing and processing large and varied datasets (known as big data) in real time is challenging. In machine learning, streaming feature selection has always been considered a superior technique for selecting the relevant subset of features from highly dimensional data and thus reducing learning complexity. In the relevant literature, streaming feature selection refers to features that arrive consecutively over time: the number of features is not known in advance, while the number of instances is well established. Many scholars in the field have proposed streaming-feature-selection algorithms in attempts to find a proper solution to this problem. This paper presents an exhaustive and methodological introduction to these techniques. This study provides a review of the traditional feature-selection algorithms and then scrutinizes the current algorithms that use streaming feature selection, to determine their strengths and weaknesses. The survey also sheds light on the ongoing challenges in big-data research.
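The streaming setting can be illustrated with a deliberately simplified relevance check: a single correlation threshold applied to each feature as it arrives. Real streaming-feature-selection algorithms surveyed in this literature (e.g. alpha-investing or OSFS) also test redundancy against already-selected features, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
y = rng.standard_normal(n)

def feature_stream():
    """Features arrive consecutively; indices 3 and 7 are informative
    (a hypothetical online setting), the rest are pure noise."""
    for i in range(12):
        if i in (3, 7):
            yield i, y + 0.3 * rng.standard_normal(n)
        else:
            yield i, rng.standard_normal(n)

selected = []
threshold = 0.5  # absolute-correlation relevance cut-off
for i, f in feature_stream():
    corr = np.corrcoef(f, y)[0, 1]
    if abs(corr) > threshold:   # relevance check at arrival time
        selected.append(i)      # (redundancy analysis omitted)

print(selected)  # → [3, 7]
```

The key property of the setting is visible in the loop: each feature is accepted or discarded the moment it arrives, without ever holding the full feature matrix in memory.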

Details

Applied Computing and Informatics, vol. 18 no. 1/2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 9 August 2019

Anand Amrit and Leifur Leifsson

The purpose of this work is to apply and compare surrogate-assisted and multi-fidelity, multi-objective optimization (MOO) algorithms to simulation-based aerodynamic…

Abstract

Purpose

The purpose of this work is to apply and compare surrogate-assisted and multi-fidelity, multi-objective optimization (MOO) algorithms to simulation-based aerodynamic design exploration.

Design/methodology/approach

The three algorithms for multi-objective aerodynamic optimization compared in this work are the combination of evolutionary algorithms, design space reduction and surrogate models, the multi-fidelity point-by-point Pareto set identification and the multi-fidelity sequential domain patching (SDP) Pareto set identification. The algorithms are applied to three cases, namely, an analytical test case, the design of transonic airfoil shapes and the design of subsonic wing shapes, and are evaluated based on the resulting best possible trade-offs and the computational overhead.
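Common to all three algorithms is the notion of a Pareto set of best possible trade-offs. A minimal non-dominated filter (an illustration of the concept, not any of the paper's algorithms) looks like this:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F, minimizing all columns."""
    n = len(F)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
            for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Toy two-objective trade-off (e.g. drag vs. structural weight).
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(F))  # [0, 1, 3]; point 2 is dominated by point 1
```

The multi-fidelity algorithms in the paper differ in how they spend expensive high-fidelity evaluations to locate this front, not in the definition of the front itself.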

Findings

The results show that all three algorithms yield comparable best possible trade-offs for all the test cases. For the aerodynamic test cases, the multi-fidelity Pareto set identification algorithms outperform the surrogate-assisted evolutionary algorithm by up to 50 per cent in terms of cost. Furthermore, the point-by-point algorithm is around 27 per cent more efficient than the SDP algorithm.

Originality/value

The novelty of this work includes the first applications of the SDP algorithm to multi-fidelity aerodynamic design exploration, the first comparison of these multi-fidelity MOO algorithms and new results of a complex simulation-based multi-objective aerodynamic design of subsonic wing shapes involving two conflicting criteria, several nonlinear constraints and over ten design variables.

Article
Publication date: 20 February 2020

Yassine Selami, Na Lv, Wei Tao, Hongwei Yang and Hui Zhao

The purpose of this paper is to propose cuckoo optimization algorithm (COA)-based back propagation neural network (BPNN) to reduce the effect of the nonlinearities…

Abstract

Purpose

The purpose of this paper is to propose a cuckoo optimization algorithm (COA)-based back propagation neural network (BPNN) to reduce the effect of the nonlinearities present in laser triangulation displacement sensors. The 3D positioning and posture sensor allows access to the third dimension through depth measurement; the performance of the sensor varies according to the level of nonlinearity present in the system, which leads to measurement inaccuracies.

Design/methodology/approach

In applying the optimization approach, the mathematical model and the relationship between the key parameters of laser triangulation ranging and the indexes of the measuring system were analyzed.
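The BPNN half of the approach can be sketched as a small network trained to linearize a synthetic triangulation curve; plain backpropagation is used here in place of the paper's cuckoo optimization of the weights, and the sensor curve is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic triangulation curve: the raw reading is a nonlinear function
# of the true displacement (hypothetical stand-in for the real sensor).
d_true = np.linspace(0.0, 1.0, 200)[:, None]
raw = np.tanh(2.5 * d_true) + 0.01 * rng.standard_normal((200, 1))

# One-hidden-layer BPNN mapping raw reading -> corrected displacement.
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(raw @ W1 + b1)
    err = (h @ W2 + b2) - d_true            # prediction error
    # Plain backpropagation; the paper optimizes these weights with the
    # cuckoo optimization algorithm instead.
    gW2 = h.T @ err / len(raw)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = raw.T @ dh / len(raw)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(raw @ W1 + b1) @ W2 + b2 - d_true) ** 2)))
print(round(rmse, 4))  # small residual: the response is linearized
```

After training, the network output tracks the true displacement nearly linearly, which is the sense in which the nonlinearity of the sensor is reduced.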

Findings

Based on the performance of the parametric optimization method, the measurement repeatability reached 0.5 µm with an STD value within 0.17 µm, the expanded uncertainty of measurement was within 5 µm, the angle error variation of the object's rotational plane was within 0.031 degrees and the nonlinearity was within 0.006 per cent of full scale. The proposed approach reduced the effect of the nonlinearity present in the sensor, so the accuracy and speed of the sensor were greatly increased. The specifications of the optimized sensor meet the requirements for high-accuracy devices and allow a wide range of industrial applications.

Originality/value

In this paper, a COA-based BPNN is proposed for laser triangulation displacement sensor optimization on the basis of the mathematical model, clarifying the working space and working angle of the measurement system.

Details

Sensor Review, vol. 40 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 10 June 2022

Aman Arora, Debadrata Sarkar, Arunabha Majumder, Soumen Sen and Shibendu Shekhar Roy

This paper aims to devise a first-of-its-kind methodology to determine the design, operating conditions and actuation strategy of pneumatic artificial muscles (PAMs) for…

Abstract

Purpose

This paper aims to devise a first-of-its-kind methodology to determine the design, operating conditions and actuation strategy of pneumatic artificial muscles (PAMs) for assistive robotic applications. This requires extensive characterization, data set generation and meaningful modelling between PAM characteristics and design variables. Such a characterization should cover a wide range of design and operation parameters. This is a stepping stone towards generating a design guide for this highly popular compliant actuator, just like any conventional element of a mechanism.

Design/methodology/approach

Characterization of a large pool of custom fabricated PAMs of varying designs is performed to determine their static and dynamic behaviours. Metaheuristic optimizer-based artificial neural network (ANN) structures are used to determine eight different models representing PAM behaviour. The assistance of knee flexion during level walking is targeted for evaluating the applicability of the developed actuator by attaching a PAM across the joint. Accordingly, the PAM design and the actuation strategy are optimized through a tabletop emulator.
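A metaheuristic-optimizer-based ANN can be sketched with a (1+1) evolution strategy standing in for the paper's optimizers, fitting a hypothetical PAM contraction-vs-pressure curve (the curve, network size and step-size rule are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical PAM-like static curve: contraction vs. normalized pressure.
p = np.linspace(0.0, 1.0, 100)[:, None]
contraction = 0.25 * (1 - np.exp(-3 * p))

def net(params, x):
    """Tiny 1-8-1 network with all weights packed into one flat vector."""
    W1 = params[:8].reshape(1, 8)
    b1 = params[8:16]
    W2 = params[16:24].reshape(8, 1)
    b2 = params[24]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def mse(params):
    return float(np.mean((net(params, p) - contraction) ** 2))

# (1+1) evolution strategy with step-size adaptation: a simple stand-in
# for the metaheuristic weight optimizers used in the paper.
best = rng.standard_normal(25) * 0.1
best_err, sigma = mse(best), 0.1
for _ in range(5000):
    cand = best + sigma * rng.standard_normal(25)
    err = mse(cand)
    if err < best_err:
        best, best_err, sigma = cand, err, sigma * 1.1  # accept, widen
    else:
        sigma *= 0.995                                  # reject, shrink

print(round(best_err, 5))  # small fitting error
```

Optimizing the weights by a population/mutation search rather than backpropagation is what lets such models cope with the highly nonlinear PAM data the article describes.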

Findings

The dependence of the passive length, static contraction and dynamic step response for inflation and deflation of the PAMs on their design dimensions and operating parameters is successfully modelled by the ANNs. The efficacy of these models is investigated to successfully optimize the PAM design, operating parameters and actuation strategy for using a PAM to assist knee flexion in human gait.

Originality/value

Characterization of static and the dynamic behaviour of a large pool of PAMs with varying designs over a wide range of operating conditions is the novel feature in this article. A lucid customizable fabrication technique is discussed to obtain a wide variety of PAM designs. Metaheuristic-based ANNs are used for tackling high non-linearity in data while modelling the PAM behaviour. An innovative tabletop emulator is used for investigating the utility of the models in the possible application of PAMs in assistive robotics.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X
