Search results

1 – 10 of over 4000
Article
Publication date: 25 February 2014

Dragan Ribarić and Gordan Jelenić

Abstract

Purpose

In this work, the authors aim to employ the so-called linked-interpolation concept already tested on beam and quadrilateral plate finite elements in the design of displacement-based higher-order triangular plate finite elements and test their performance.

Design/methodology/approach

Starting from the analogy between the Timoshenko beam theory and the Mindlin plate theory, a family of triangular linked-interpolation plate finite elements of arbitrary order is designed. The elements are tested on the standard set of examples.
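
As context for the design principle, here is a minimal sketch of linked interpolation on the 2-node Timoshenko beam element, the one-dimensional case this element family generalizes: the transverse displacement is enriched by a quadratic bubble term linked to the nodal rotations. The (L/8)(1 - ξ²) linking term is the standard choice for this element; the sign convention is an assumption.

```python
import numpy as np

def linked_interp_timoshenko(xi, w1, w2, th1, th2, L):
    """Linked interpolation on a 2-node Timoshenko beam element.

    The transverse displacement w is enriched by a quadratic term
    linked to the difference of the nodal rotations, which removes
    shear locking for the linear element (sign convention assumed).
    """
    N1, N2 = 0.5 * (1.0 - xi), 0.5 * (1.0 + xi)
    w = N1 * w1 + N2 * w2 + (L / 8.0) * (1.0 - xi**2) * (th1 - th2)
    th = N1 * th1 + N2 * th2      # rotation keeps the standard linear form
    return w, th

# sample both fields along the element
xi = np.linspace(-1.0, 1.0, 11)
w, th = linked_interp_timoshenko(xi, 0.0, 1.0, 0.1, 0.3, L=2.0)
```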

Findings

The derived elements pass the standard patch tests and also higher-order patch tests of an order directly related to the order of the element. The lowest-order member of the family still suffers from shear locking for very coarse meshes, but the higher-order elements turn out to be successful when compared with elements from the literature on problems with the same total number of degrees of freedom.

Research limitations/implications

The elements designed perform well in a number of standard benchmark tests, but the well-known Morley skewed-plate example turns out to be rather demanding, i.e. the proposed design principle cannot compete with the mixed-type approach for this test. Work is under way to improve the proposed displacement-based elements by adding a number of internal bubble functions in the displacement and rotation fields, specifically chosen to satisfy the basic patch test and enable a softer response in the benchmark test examples.

Originality/value

A new family of displacement-based higher-order triangular Mindlin plate finite elements has been derived. The higher-order elements perform very well, whereas the lowest-order element requires improvement.

Details

Engineering Computations, vol. 31 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 14 August 2017

Ming-min Liu, L.Z. Li and Jun Zhang

Abstract

Purpose

The purpose of this paper is to discuss a data interpolation method for curved surfaces from the perspective of dimension reduction and manifold learning.

Design/methodology/approach

Instead of transmitting data of curved surfaces in 3D space directly, the method transmits data by unfolding the 3D curved surfaces into 2D planes with manifold learning algorithms. The similarity between surface unfolding and manifold learning is discussed, and the ability of several manifold learning algorithms to unfold a curved surface is investigated. The algorithms' efficiency and their influence on the accuracy of data transmission are examined through three examples.
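
A minimal sketch of the unfold-then-interpolate idea, using scikit-learn's manifold learners and SciPy's scattered-data interpolation; the toy half-cylinder surface, the scalar field and the query points are placeholders, not the paper's test cases.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from scipy.interpolate import griddata

# toy curved surface: points on a half-cylinder carrying a scalar field
theta = np.random.uniform(0, np.pi, 2000)
z = np.random.uniform(0, 2, 2000)
pts3d = np.c_[np.cos(theta), np.sin(theta), z]
field = np.sin(3 * theta) * z              # data attached to the surface

# unfold the 3D surface into a 2D plane (method='hessian' gives HLLE,
# method='ltsa' the local tangent space alignment used in the paper)
unfold = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                method='ltsa')
pts2d = unfold.fit_transform(pts3d)

# interpolate the field at new surface points in the unfolded plane
query2d = pts2d[:100] + 1e-3               # stand-in for target mesh nodes
interp = griddata(pts2d, field, query2d, method='linear')
```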

Findings

It is found that the data interpolations using manifold learning algorithms LLE, HLLE and LTSA are efficient and accurate.

Originality/value

The method can improve the accuracy of coupling-data interpolation and of fluid-structure interaction simulations involving curved surfaces.

Details

Multidiscipline Modeling in Materials and Structures, vol. 13 no. 2
Type: Research Article
ISSN: 1573-6105

Article
Publication date: 1 January 1993

W. Joppich and R.A. Lorentz

Abstract

We develop new high-order positive, monotone and convex interpolations, which are to be used in the multigrid context. This means that the value of the interpolant is calculated only at the midpoints lying between the locations of the given values. As a consequence, these interpolants can be calculated very efficiently. They are then tested in a time-dependent very-large-scale integration (VLSI) process simulation application.
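
A minimal sketch of midpoint-only interpolation as used in multigrid prolongation: a fourth-order cubic stencil evaluated only between coarse-grid values, with a simple clip to the neighbouring values standing in for the paper's positivity/monotonicity/convexity-preserving construction (the actual schemes are not given in the abstract).

```python
import numpy as np

def midpoint_prolongation(f):
    """Fourth-order midpoint interpolation with a monotone safeguard.

    Values are computed only at midpoints between the given coarse-grid
    values; the cubic stencil (-1, 9, 9, -1)/16 is clipped to the two
    neighbouring values so the result cannot overshoot (a crude stand-in
    for the interpolants developed in the paper).
    """
    mid = (-f[:-3] + 9 * f[1:-2] + 9 * f[2:-1] - f[3:]) / 16.0
    lo = np.minimum(f[1:-2], f[2:-1])
    hi = np.maximum(f[1:-2], f[2:-1])
    return np.clip(mid, lo, hi)        # enforce monotone behaviour

coarse = np.array([0.0, 0.0, 0.1, 0.9, 1.0, 1.0])
print(midpoint_prolongation(coarse))   # midpoints of interior intervals
```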

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 12 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 April 1995

B.P. Leonard, A.P. Lock and M.K. Macvean

Abstract

The NIRVANA project is concerned with the development of a nonoscillatory, integrally reconstructed, volume-averaged numerical advection scheme. The conservative, flux-based finite-volume algorithm is built on an explicit, single-step, forward-in-time update of the cell-average variable, without restrictions on the size of the time-step. There are similarities with semi-Lagrangian schemes; a major difference is the introduction of a discrete integral variable, guaranteeing conservation. The crucial step is the interpolation of this variable, which is used in the calculation of the fluxes; the (analytic) derivative of the interpolant then gives sub-cell behaviour of the advected variable. In this paper, basic principles are described, using the simplest possible conditions: pure one-dimensional advection at constant velocity on a uniform grid. Piecewise Nth-degree polynomial interpolation of the discrete integral variable leads to an Nth-order advection scheme, in both space and time. Nonoscillatory results correspond to convexity preservation in the integrated variable, leading naturally to a large-Δt generalisation of the universal limiter. More restrictive TVD constraints are also extended to large Δt. Automatic compressive enhancement of step-like profiles can be achieved without exciting "stair-casing". One-dimensional simulations are shown for a number of different interpolations. In particular, convexity-limited cubic-spline and higher-order polynomial schemes give very sharp, nonoscillatory results at any Courant number, without clipping of extrema. Some practical generalisations are briefly discussed.
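
A minimal sketch of the integral-variable idea under the abstract's simplest conditions (1D, constant velocity, uniform periodic grid): accumulate the cell averages into a discrete integral, interpolate it, shift the cell faces upstream by cΔt, and difference to recover new cell averages. A plain cubic spline stands in for the paper's convexity-limited interpolants, so it will not be nonoscillatory at discontinuities; the periodic wrap is an implementation assumption.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def advect_integral_form(u_bar, c, dt, dx):
    """One forward-in-time step of 1D constant-velocity advection,
    built on interpolation of the discrete integral variable.
    Conservative by construction; no restriction on c*dt/dx."""
    n = u_bar.size
    L = n * dx
    x_face = np.arange(n + 1) * dx                       # cell faces
    I = np.concatenate(([0.0], np.cumsum(u_bar) * dx))   # integral variable
    spline = CubicSpline(x_face, I)
    x_dep = x_face - c * dt                              # departure points
    wraps = np.floor(x_dep / L)
    # periodicity of u implies I(x + L) = I(x) + I(L)
    I_dep = spline(x_dep - wraps * L) + wraps * I[-1]
    return np.diff(I_dep) / dx                           # new cell averages

u = np.where(np.abs(np.arange(100) - 30) < 10, 1.0, 0.0)
u = advect_integral_form(u, c=1.0, dt=0.5, dx=1.0)       # Courant number 0.5
```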

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 5 no. 4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 January 1986

Sergio Pissanetzky

Abstract

A magnetization table describing the magnetic properties of the material of interest is the primary input for any computer program expected to calculate magnetic fields or other magnetic parameters in a nonlinear case. Magnetization tables, however, consist of discrete points, and the program assumes some interpolation rule to calculate values between them. There exists a variety of interpolation schemes, and some of them can produce very large errors and even unphysical results when the intervals are not narrow enough. Unfortunately, it was found that intervals used in practice are seldom narrow enough. The accurate interpolation of magnetization tables thus becomes a central issue in the numerical solution of nonlinear magnetic problems. We discuss several interpolation schemes used in practice. We propose a new one that is guaranteed to give physical results, and we address the question as to how wide the table intervals can be if a desired accuracy is specified. The discussion is illustrated with many examples.
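
To make the failure mode concrete: a cubic spline through a B-H table can overshoot between points, producing a non-monotone B(H) and hence unphysical permeability. A shape-preserving interpolant avoids this. The sketch below uses SciPy's PCHIP as a stand-in for the scheme proposed in the paper (whose details the abstract does not give); the table values are illustrative only.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# illustrative magnetization table: field strength H (A/m) vs flux density B (T)
H = np.array([0.0, 100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
B = np.array([0.0, 0.45, 0.80, 1.20, 1.45, 1.58, 1.66])

# PCHIP is monotone and never overshoots the data, so the interpolated
# B(H) curve, and the secant permeability B/H derived from it, stay physical
bh = PchipInterpolator(H, B)
H_fine = np.linspace(0.0, 3200.0, 200)
B_fine = bh(H_fine)
mu_secant = B_fine[1:] / H_fine[1:]     # skip H = 0 to avoid division by zero
```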

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 5 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 19 August 2024

Walaa Metwally Kandil, Fawzi H. Zarzoura, Mahmoud Salah Goma and Mahmoud El-Mewafi El-Mewafi Shetiwi

Abstract

Purpose

This study aims to present a new rapid-enhancement digital elevation model (DEM) framework using Google Earth Engine (GEE), machine learning, weighted interpolation and spatial interpolation techniques with ground control points (GCPs). High-resolution DEMs are crucial spatial data that find extensive use in many analyses and applications.

Design/methodology/approach

First, rapid-DEM imports Shuttle Radar Topography Mission (SRTM) data and Sentinel-2 multispectral imagery from a user-defined time and area of interest into GEE. Second, the SRTM data, together with feature attributes from the Sentinel-2 multispectral imagery, are used as input data to a support vector machine classification algorithm. Third, the inverse probability weighted interpolation (IPWI) approach uses 12 fixed GCPs as additional input data to assign a probability to each pixel of the image and generate corrected SRTM elevations. Fourth, the enhanced DEM is gridded into regular points (E, N and H) with a contour interval of 5 m. Finally, the enhanced DEM data are densified with GCPs obtained by the global positioning system (GPS) technique through spatial interpolations such as Kriging, inverse distance weighted, modified Shepard's method and triangulation with linear interpolation.
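
Of the spatial interpolators listed above, inverse distance weighting is the simplest to sketch. Below is a minimal illustration of correcting a DEM grid with GCP height residuals; the function name, grid indexing and toy data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def idw_correction(dem, gcp_rc, gcp_h, power=2.0):
    """Correct a DEM with ground control points via inverse distance
    weighting: the residual (GCP height minus DEM height at the GCP
    cell) is spread over the whole grid with IDW weights."""
    rows, cols = np.indices(dem.shape)
    resid = gcp_h - dem[gcp_rc[:, 0], gcp_rc[:, 1]]   # GCP residuals
    corr = np.zeros(dem.shape)
    wsum = np.zeros(dem.shape)
    for (r, c), dh in zip(gcp_rc, resid):
        d = np.hypot(rows - r, cols - c) + 1e-9       # avoid divide by zero
        w = d ** -power
        corr += w * dh
        wsum += w
    return dem + corr / wsum

dem = np.random.rand(50, 50) * 5 + 100                # stand-in SRTM tile
gcps = np.array([[5, 5], [25, 40], [45, 10]])         # (row, col) of GCPs
heights = np.array([103.0, 101.5, 104.2])             # surveyed heights
enhanced = idw_correction(dem, gcps, heights)
```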

Findings

The results were compared to a 1-m vertically accurate reference DEM (RD) obtained by image matching with Worldview-1 stereo satellite images. The results of this study demonstrated that the root mean square error (RMSE) of the original SRTM DEM was 5.95 m. On the other hand, the RMSE of the elevations estimated by the IPWI approach improved to 2.01 m, and that of the DEM generated by the Kriging technique to 1.85 m, a reduction of 68.91% relative to the original SRTM DEM.

Originality/value

A comparison with the RD demonstrates significant SRTM improvements. The suggested method clearly reduces the elevation error of the original SRTM DEM.

Book part
Publication date: 21 November 2014

Eric Ghysels and J. Isaac Miller

Abstract

We analyze the sizes of standard cointegration tests applied to data subject to linear interpolation, discovering evidence of substantial size distortions induced by the interpolation. We propose modifications to these tests to effectively eliminate size distortions from such tests conducted on data interpolated from end-of-period sampled low-frequency series. Our results generally do not support linear interpolation when alternatives such as aggregation or mixed-frequency-modified tests are possible.
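
A minimal sketch of the experimental setup the abstract describes, using the standard Engle-Granger test from statsmodels: two independent random walks are sampled at low frequency, linearly interpolated to a higher frequency, and tested both ways. The random-walk data and the 4:1 sampling ratio are placeholders, and the modified tests the chapter proposes are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# two independent random walks observed end-of-period at low (annual) frequency
T = 60
x = np.cumsum(rng.standard_normal(T))
y = np.cumsum(rng.standard_normal(T))

# linear interpolation from annual to quarterly points
t_low = np.arange(T)
t_high = np.linspace(0, T - 1, 4 * (T - 1) + 1)
x_q = np.interp(t_high, t_low, x)
y_q = np.interp(t_high, t_low, y)

# unmodified cointegration test on original vs interpolated series; the
# chapter's point is that interpolation distorts the size of such tests
_, p_low, _ = coint(x, y)
_, p_high, _ = coint(x_q, y_q)
print(f"p-value, low-frequency: {p_low:.3f}; interpolated: {p_high:.3f}")
```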

Details

Essays in Honor of Peter C. B. Phillips
Type: Book
ISBN: 978-1-78441-183-1

Article
Publication date: 27 February 2023

Wenfeng Zhang, Ming K. Lim, Mei Yang, Xingzhi Li and Du Ni

Abstract

Purpose

As the supply chain is a highly integrated infrastructure in modern business, risks in the supply chain are also becoming highly contagious for the target company. This motivates researchers to continuously add new features to the datasets for credit risk prediction (CRP). However, adding new features can easily lead to missing data.

Design/methodology/approach

Based on the gaps summarized from the CRP literature, this study first introduces the approaches to building the datasets and framing the algorithmic models. It then tests the interpolation effects of the algorithmic model on three artificial datasets with different missing rates and compares its predictability before and after interpolation on a real dataset with missing data in irregular time series.

Findings

The time-decayed long short-term memory (TD-LSTM) algorithmic model proposed in this study can handle missing data in irregular time series by capturing more and better time-series information and interpolating the missing data efficiently. Moreover, a deep neural network model can be used for CRP on datasets with missing data in irregular time series after interpolation by the TD-LSTM.
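
The abstract does not give TD-LSTM's equations; the sketch below shows one common way to build elapsed-time decay into an LSTM cell (discounting the short-term part of the cell memory by the gap between observations), purely to illustrate the idea, not the authors' model. The memory decomposition and the 1/log(e + Δt) decay are assumptions.

```python
import math
import torch
import torch.nn as nn

class TimeDecayLSTMCell(nn.Module):
    """LSTM cell whose short-term memory decays with the elapsed gap dt,
    so irregularly spaced samples progressively lose stale information."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.decomp = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, h, c, dt):
        # split the memory, discount only its short-term component by dt,
        # then run a standard LSTM step on the adjusted state
        c_short = torch.tanh(self.decomp(c))
        decay = 1.0 / torch.log(dt + math.e)      # monotone decay in dt
        c_adj = (c - c_short) + c_short * decay.unsqueeze(-1)
        return self.cell(x, (h, c_adj))

cell = TimeDecayLSTMCell(input_size=8, hidden_size=16)
x = torch.randn(4, 8)                             # batch of feature vectors
h = torch.zeros(4, 16); c = torch.zeros(4, 16)
dt = torch.tensor([1.0, 3.0, 0.5, 7.0])           # irregular time gaps
h, c = cell(x, h, c, dt)
```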

Originality/value

This study fully validates the TD-LSTM interpolation effects and demonstrates that the predictability of the dataset after interpolation is improved. Accurate and timely CRP can undoubtedly assist a target company in avoiding losses. Identifying credit risks and taking preventive measures ahead of time, especially in the case of public emergencies, can help the company minimize losses.

Details

Industrial Management & Data Systems, vol. 123 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 25 May 2021

Miaomiao Yang, Xinkun Du and Yongbin Ge

Abstract

Purpose

This meshless collocation method is applicable not only to the Helmholtz equation with Dirichlet boundary conditions but also to mixed boundary conditions, and it can handle not only high-wavenumber problems but also variable-wavenumber problems.

Design/methodology/approach

In this paper, the authors develop a meshless collocation method that uses barycentric Lagrange interpolation basis functions on Chebyshev nodes to derive a scheme for solving the three-dimensional Helmholtz equation. First, the spatial variables and their partial derivatives are expressed through the interpolation basis functions, and the collocation method is established for solving second-order differential equations. Then the differentiation matrix is employed to simplify the differential equations at the given test nodes. Finally, numerical experiments show the accuracy and effectiveness of the proposed method.
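
A one-dimensional reduction makes the construction concrete: barycentric weights at Chebyshev points give a differentiation matrix, and collocation turns the Helmholtz equation into a linear system. The paper treats the 3D equation; this 1D sketch with a manufactured solution is an illustration only.

```python
import numpy as np

def cheb_diff(n):
    """Chebyshev points on [-1, 1] and the barycentric differentiation
    matrix D, with D @ u approximating u' at the nodes; D @ D gives u''."""
    x = np.cos(np.pi * np.arange(n + 1) / n)         # Chebyshev points
    w = np.ones(n + 1); w[0] = w[-1] = 0.5
    w *= (-1.0) ** np.arange(n + 1)                  # barycentric weights
    D = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -D[i].sum()                        # negative sum trick
    return x, D

# collocation for u'' + k^2 u = 0 with Dirichlet data from u = sin(kx)
n, k = 32, 10.0
x, D = cheb_diff(n)
u_exact = np.sin(k * x)                              # manufactured solution
A = D @ D + k**2 * np.eye(n + 1)
f = np.zeros(n + 1)
A[0] = 0.0;  A[0, 0] = 1.0;   f[0] = u_exact[0]      # boundary rows
A[-1] = 0.0; A[-1, -1] = 1.0; f[-1] = u_exact[-1]
u = np.linalg.solve(A, f)
print(np.max(np.abs(u - u_exact)))                   # spectral accuracy
```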

Findings

The numerical experiments show the advantages of the present method, such as the smaller number of collocation nodes needed, shorter calculation time, higher precision, smaller error and higher efficiency. What is more, the numerical solutions agree well with the exact solutions.

Research limitations/implications

Compared with the finite element method, the finite difference method and other traditional grid-based numerical methods, the meshless method can reduce or eliminate the dependence on a grid and make the numerical implementation more flexible.

Practical implications

The Helmholtz equation has wide applications in many fields, such as physics, mechanics and engineering.

Originality/value

This is the first time the meshless method has been applied to solving the 3D Helmholtz equation. What is more, the present work gives the relationship not only of the interpolation nodes but also of the test nodes.

Article
Publication date: 2 February 2015

Songhao Shang

Abstract

Purpose

The purpose of this paper is to propose a new temporal disaggregation method for time series based on the accumulated and inverse accumulated generating operations in grey modeling and the interpolation method.

Design/methodology/approach

This disaggregation method comprises three main steps: accumulation, interpolation and differentiation (AID). First, a low-frequency flow series is transformed into the corresponding stock series through the accumulated generating operation. Then, values of the stock series at unobserved times are estimated through an appropriate interpolation method. Finally, the disaggregated stock series is transformed back into a high-frequency flow series through the inverse accumulated generating operation.
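
A minimal sketch of the three AID steps on a toy quarterly series; the cubic-spline interpolant and the example numbers are assumptions (the abstract leaves the choice of interpolation method open).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def aid_disaggregate(flow_low, ratio=3):
    """Accumulation-Interpolation-Differentiation (AID) sketch:
    disaggregate a low-frequency flow series (e.g. quarterly sales)
    into a higher-frequency one (e.g. monthly) by interpolating its
    cumulative (stock) series and differencing back."""
    # accumulation: accumulated generating operation (running total)
    stock = np.concatenate(([0.0], np.cumsum(flow_low)))
    t_low = np.arange(stock.size)                   # period boundaries
    t_high = np.linspace(0, stock.size - 1, (stock.size - 1) * ratio + 1)
    # interpolation of the stock series at unobserved times
    stock_high = CubicSpline(t_low, stock)(t_high)
    # differentiation: inverse accumulated generating operation
    return np.diff(stock_high)

quarterly = np.array([120.0, 135.0, 150.0, 142.0])
monthly = aid_disaggregate(quarterly, ratio=3)
print(monthly.sum(), quarterly.sum())               # totals are preserved
```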

Findings

The AID method is tested on a sales series. Results show that the disaggregated sales data are satisfactory and reliable compared with the original data and with data disaggregated using a time series model. The AID method is applicable both to long time series and to grey series with insufficient information.

Practical implications

The AID method can be easily used to disaggregate low frequency flow series.

Originality/value

The AID method is a combination of the grey modeling technique and the interpolation method. Compared with other disaggregation methods, the AID method is simple and does not require the auxiliary information or plausible minimizing criterion that other disaggregation methods require.

Details

Grey Systems: Theory and Application, vol. 5 no. 1
Type: Research Article
ISSN: 2043-9377
