Search results

1 – 10 of 306
Article
Publication date: 20 December 2018

Mi Zhao, Huifang Li, Shengtao Cao and Xiuli Du

The purpose of this paper is to propose a new explicit time integration algorithm for solving the linear and non-linear finite element equations of structural dynamics and wave…

Abstract

Purpose

The purpose of this paper is to propose a new explicit time integration algorithm for solving the linear and non-linear finite element equations of structural dynamics and wave propagation problems.

Design/methodology/approach

The algorithm is completely explicit: no linear equation system needs to be solved, provided the mass matrix of the finite element equation is diagonal, regardless of whether the damping matrix is. The algorithm is a single-step method with a simple start-up and is applicable to analyses with variable time step sizes. The algorithm is second-order accurate and conditionally stable. Its numerical stability, dissipation and dispersion are analyzed for the dynamic single-degree-of-freedom equation. The stability of a multi-degree-of-freedom system with non-proportional damping can be evaluated directly by the stability theory of ordinary differential equations.
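
The abstract does not reproduce the paper's update formulas, so as an illustrative baseline here is the classic explicit central difference method with a lumped (diagonal) mass matrix, one of the methods the paper compares itself against. This is a minimal sketch under those assumptions, not the authors' algorithm, and the function names are hypothetical.

```python
import numpy as np

def central_difference_step(u, u_prev, m_diag, f_ext, f_int, dt):
    # Semi-discrete equation of motion (undamped): M a = f_ext - f_int(u).
    # With a diagonal (lumped) M the acceleration is an element-wise division,
    # so no linear system has to be solved -- the hallmark of an explicit method.
    a = (f_ext - f_int(u)) / m_diag
    return 2.0 * u - u_prev + dt**2 * a   # central difference displacement update

# Usage: one undamped linear oscillator, m = k = 1 (natural frequency omega = 1).
m, k, dt = np.array([1.0]), 1.0, 0.01     # dt well below the stability limit 2/omega
u = np.array([1.0])                        # u_0 = 1
u_prev = u + 0.5 * dt**2 * (-k * u) / m    # second-order accurate start from rest
for _ in range(1000):
    u, u_prev = central_difference_step(u, u_prev, m, np.zeros(1), lambda x: k * x, dt), u
# After 1000 steps (t = 10), u should track the exact solution cos(10) ~ -0.839.
```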

Findings

The performance of the proposed algorithm is demonstrated by several numerical examples, including a linear single-degree-of-freedom problem, a non-linear two-degree-of-freedom problem, a wave propagation problem in a two-dimensional layer and the seismic elastoplastic analysis of a high-rise structure.

Originality/value

A new single-step second-order accurate explicit time integration algorithm is proposed to solve the linear and non-linear dynamic finite element equations. Theoretical and numerical analyses show that the algorithm has advantages in numerical stability and accuracy over the existing modified central difference method and the Chung-Lee method.

Article
Publication date: 1 May 1999

Kumar K. Tamma, Xiangmin Zhou and Desong Sha

The time‐discretization process of transient equation systems is an important concern in computational heat transfer applications. As such, the present paper describes a formal…

Abstract

The time-discretization process of transient equation systems is an important concern in computational heat transfer applications. As such, the present paper describes a formal basis for the theoretical concepts, evolution and development, and characterization of a wide class of time-discretized operators for transient heat transfer computations. Emanating from a common family tree and explained via a generalized time-weighted philosophy, the paper addresses the development and evolution of time integral operators [IO], leading to integration operators [InO] in time that encompass single-step integration operators [SSInO], multi-step integration operators [MSInO] and a class of finite-element-in-time integration operators [FETInO], including their relationships and the resulting consequences. Also depicted are the so-called discrete numerically assigned [DNA] algorithmic markers, essentially comprising both the weighted time fields and the corresponding conditions imposed upon the dependent variable approximation, which together uniquely characterize a wide class of transient algorithms. The result is a plausible standardized formal ideology for referring to and relating time-discretized operators applicable to transient heat transfer computations.
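
As a concrete member of that family tree, the single-step time-weighted theta-family for the semi-discrete heat equation M dT/dt + K T = f recovers forward Euler (theta = 0), Crank-Nicolson (theta = 1/2) and backward Euler (theta = 1). The sketch below is a standard illustration of the generalized time-weighted idea, not the paper's [DNA] marker formalism.

```python
import numpy as np

def theta_step(T, M, K, f_n, f_np1, dt, theta):
    # One step of the time-weighted theta-family for  M dT/dt + K T = f:
    # theta = 0 forward Euler, 1/2 Crank-Nicolson, 1 backward Euler.
    A = M + theta * dt * K
    b = (M - (1.0 - theta) * dt * K) @ T + dt * ((1.0 - theta) * f_n + theta * f_np1)
    return np.linalg.solve(A, b)

# Usage: a two-node lumped conduction model cooling toward zero.
M = np.eye(2)
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
T = np.array([1.0, 0.0])
for _ in range(50):
    T = theta_step(T, M, K, np.zeros(2), np.zeros(2), dt=0.1, theta=0.5)
```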

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 9 no. 3
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 29 June 2012

Shuiqing Huang, Lin He, Bo Yang and Ming Zhang

The algorithm of disjoint literature‐based knowledge discovery provides a convenient, efficient and effective auxiliary method for scientific research. Based on an analysis of…

Abstract

Purpose

The algorithm of disjoint literature‐based knowledge discovery provides a convenient, efficient and effective auxiliary method for scientific research. Based on an analysis of Swanson's A‐B‐C model of disjoint literature‐based knowledge discovery and Gordon's intermediate literature theory, this paper seeks to propose a more comprehensive compound correlation model for disjoint literature‐based knowledge discovery.

Design/methodology/approach

A new algorithm of vector space model (VSM) based disjoint literature‐based knowledge discovery is designed to implement the compound correlation model.
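
The abstract does not spell out the algorithm, but the closed form of Swanson-style A-B-C discovery (source literature A and target literature C known, intermediate B-terms sought) can be sketched with standard VSM machinery: represent both literatures as TF-IDF vectors and rank terms that carry weight in both. This toy version, with hypothetical corpora, only illustrates the vector space idea, not the compound correlation model itself.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def candidate_b_terms(a_docs, c_docs, top_k=5):
    # Rank terms that matter to BOTH the A-literature and the C-literature
    # as candidate intermediates B (closed disjoint-literature discovery).
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(a_docs + c_docs).toarray()
    a_profile = X[:len(a_docs)].mean(axis=0)   # mean term weights in A
    c_profile = X[len(a_docs):].mean(axis=0)   # mean term weights in C
    score = a_profile * c_profile              # high only if present in both
    terms = np.array(vec.get_feature_names_out())
    return list(terms[np.argsort(score)[::-1][:top_k]])

# Toy usage echoing Swanson's Raynaud's disease / fish oil discovery:
a_docs = ["raynaud disease involves blood viscosity and vascular reactivity"]
c_docs = ["fish oil reduces blood viscosity and platelet aggregation"]
print(candidate_b_terms(a_docs, c_docs))       # 'blood' and 'viscosity' rank high
```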

Findings

The validity tests showed that this new model not only successfully simulated both of Swanson's early and well-known discoveries, the Raynaud's disease-fish oil and migraine-magnesium connections, but also applied to knowledge discovery in the Chinese-language agricultural economics literature.

Research limitations/implications

Although the compound correlation model reduced the workload to a minimum compared with other algorithms and models, part of the process of disjoint literature-based knowledge discovery with the VSM-based compound correlation model still required manual intervention.

Practical implications

The algorithm was capable of knowledge discovery with a large‐scale dataset and had an advantage in identifying a series of hidden connections among a set of literatures. Therefore, application of the model might be extended to more fields.

Originality/value

Traditional two‐step knowledge discovery procedures were integrated into the model, which contained open and closed disjoint literature‐based knowledge discovery.

Details

Aslib Proceedings, vol. 64 no. 4
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 5 June 2009

Boris Mitavskiy, Jonathan Rowe and Chris Cannings

A variety of phenomena, such as the world wide web and social or business interactions, are modelled by various kinds of networks (such as scale-free or preferential…

Abstract

Purpose

A variety of phenomena, such as the world wide web and social or business interactions, are modelled by various kinds of networks (such as scale-free or preferential attachment networks). However, due to model-specific requirements, one may want to rewire the network to optimize communication among the various nodes while not overloading the number of channels (i.e. preserving the number of edges). The purpose of this paper is to present a formal framework for this problem and to examine a family of local search strategies to cope with it.

Design/methodology/approach

This is mostly theoretical work. The authors use a rigorous mathematical framework to set up the model and then prove theorems about it which pertain to various local search algorithms that work by rewiring the network.

Findings

This paper proves that when every pair of nodes is sampled with non-zero probability, the algorithm is ergodic in the sense that it samples every possible network on the specified set of nodes having the specified number of edges with non-zero probability. Incidentally, the ergodicity result led to the construction of a class of algorithms for sampling graphs with a specified number of edges over a specified set of nodes uniformly at random, and opened other challenging and important questions for future consideration.
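
In the spirit of that result, a minimal edge-count-preserving rewiring move deletes a uniformly random present edge and inserts a uniformly random absent one; because every pair is proposed with non-zero probability, repeated moves can reach any graph on the given nodes with the same number of edges. This sketch illustrates only the ergodicity mechanism; the paper's local search strategies additionally bias such moves to optimize communication.

```python
import random
from itertools import combinations

def rewire_step(nodes, edges):
    # Delete a uniformly random present edge, insert a uniformly random
    # absent pair: the number of edges is invariant under the move.
    all_pairs = [frozenset(p) for p in combinations(nodes, 2)]
    non_edges = [p for p in all_pairs if p not in edges]
    if not edges or not non_edges:
        return edges                       # empty or complete graph: nothing to do
    new_edges = set(edges)
    new_edges.remove(random.choice(list(edges)))
    new_edges.add(random.choice(non_edges))
    return new_edges

# Usage: a random walk over all graphs on 5 nodes with exactly 4 edges.
nodes = list(range(5))
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}
for _ in range(1000):
    edges = rewire_step(nodes, edges)
assert len(edges) == 4                     # edge count preserved throughout
```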

Originality/value

The measure‐theoretic framework presented in the current paper is original and rather general. It allows one to obtain new points of view on the problem.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 March 1989

Eddy Pramono and Kaspar Willam

Numerical solutions in computational plasticity are severely challenged when concrete and geomaterials are considered with non‐regular yield surfaces, strain‐softening and…

Abstract

Numerical solutions in computational plasticity are severely challenged when concrete and geomaterials are considered with non-regular yield surfaces, strain-softening and non-associated flow. There are two aspects of immediate concern within load steps which are truly finite: first, the iterative corrector must ensure that the equilibrium stress state and the plastic process variables satisfy multiple yield conditions with corners, F_i(σ, q) = 0, at discrete stages of the solution process. To this end, a reliable return mapping algorithm is required which minimizes the error of the plastic return step. Second, the solution of the non-linear equations of motion on the global structural level must account for limit points and premature bifurcation of the equilibrium path. The current paper is mainly concerned with the implicit integration of elasto-plastic hardening/softening relations considering non-associated flow and the presence of composite yield conditions with corners.
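
For orientation, the elastic predictor/plastic corrector structure of a return mapping algorithm is easiest to see in the classic single-surface case; the sketch below shows the one-dimensional radial return with linear isotropic hardening. The paper's actual concern, multiple yield surfaces F_i meeting in corners, requires solving for several plastic multipliers at once and is not reproduced here.

```python
def radial_return_1d(eps_new, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    # Elastic predictor: freeze plastic variables and compute a trial stress.
    sigma_trial = E * (eps_new - eps_p)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)   # yield function F(sigma, q)
    if f_trial <= 0.0:
        return sigma_trial, eps_p, alpha                 # elastic step, no return
    # Plastic corrector: enforce consistency F = 0 on the updated surface.
    dgamma = f_trial / (E + H)
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign              # return to the surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma

# Usage: a strain ramp past first yield (yield strain sigma_y / E = 0.00125).
eps_p, alpha = 0.0, 0.0
for eps in [0.0005, 0.0010, 0.0015, 0.0020]:
    sigma, eps_p, alpha = radial_return_1d(eps, eps_p, alpha)
```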

Details

Engineering Computations, vol. 6 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 March 2008

Daniel Lockery and James F. Peters

The purpose of this paper is to report upon research into developing a biologically inspired target‐tracking system (TTS) capable of acquiring quality images of a known target…

Abstract

Purpose

The purpose of this paper is to report upon research into developing a biologically inspired target‐tracking system (TTS) capable of acquiring quality images of a known target type for a robotic inspection application.

Design/methodology/approach

The approach used in the design of the TTS hearkens back to the work on adaptive learning by Oliver Selfridge and Chris J.C.H. Watkins and to the work on the classification of objects by Zdzislaw Pawlak during the 1980s, combined in an approximation space-based form of feedback during learning. Also during the 1980s, it was Ewa Orlowska who called attention to the importance of approximation spaces as a formal counterpart of perception. This insight by Orlowska has been important in working toward a new form of adaptive learning useful in controlling the behaviour of machines to accomplish system goals. The adaptive learning algorithms presented in this paper are strictly temporal difference methods, including Q-learning, sarsa and the actor-critic method. Learning itself is considered episodic. During each episode, the equivalent of a Tinbergen-like ethogram is constructed. Such an ethogram provides a basis for the construction of an approximation space at the end of each episode. The combination of episodic ethograms and approximation spaces provides an extremely effective means of feedback useful in guiding learning during the lifetime of a robotic system such as the TTS reported in this paper.
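
Of the temporal difference methods named above, tabular Q-learning is the most compact to write down; the sketch below shows the bare update on a toy episodic task. The approximation space/ethogram feedback layer that is the paper's contribution sits on top of such a learner and is not reproduced here.

```python
import random
from collections import defaultdict

def q_learning(env_step, env_reset, actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    # After each transition, move Q(s, a) toward the bootstrapped target
    # r + gamma * max_a' Q(s', a')  (off-policy temporal difference learning).
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            a = (random.choice(actions) if random.random() < epsilon
                 else max(actions, key=lambda x: Q[(s, x)]))   # epsilon-greedy
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[(s2, x)] for x in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])          # TD update
            s = s2
    return Q

# Toy corridor: states 0..4, reward 1 for reaching state 4.
def reset(): return 0
def step(s, a):
    s2 = max(0, min(4, s + a))
    return s2, float(s2 == 4), s2 == 4
Q = q_learning(step, reset, actions=[-1, +1])
```

Sarsa differs only in the target: it bootstraps from the action actually taken next rather than the greedy maximum.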

Findings

It was discovered that even though the adaptive learning methods were computationally more expensive than the classical algorithm implementations, they proved to be more effective in a number of cases, especially in noisy environments.

Originality/value

The novelty associated with this work is the introduction of an approach to adaptive learning carried out within the framework of ethology-based approximation spaces to provide performance feedback during the learning process.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 1 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 31 October 2023

Wenchao Zhang, Peixin Shi, Zhansheng Wang, Huajing Zhao, Xiaoqi Zhou and Pengjiao Jia

An accurate prediction of the deformation of retaining structures is critical for ensuring the stability and safety of braced deep excavations, while the highly nonlinear and…

Abstract

Purpose

An accurate prediction of the deformation of retaining structures is critical for ensuring the stability and safety of braced deep excavations, while the highly nonlinear and complex nature of the deformation makes the prediction challenging. This paper proposes an explainable boosted combining global and local feature multivariate regression (EB-GLFMR) model with high accuracy, robustness and interpretability to predict the deformation of retaining structures during braced deep excavations.

Design/methodology/approach

During the model development, the time series of deformation data is decomposed using a locally weighted scatterplot smoothing technique into trend and residual terms. The trend terms are analyzed through multiple adaptive spline regressions. The residual terms are reconstructed in phase space to extract both global and local features, which are then fed into a gradient-boosting model for prediction.
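
The abstract fixes the shape of the pipeline (LOWESS decomposition, phase-space reconstruction of the residual, gradient boosting) without its exact settings, so the following is a hedged sketch on synthetic data; the smoothing fraction and embedding lag are assumed hyperparameters, not values from the paper.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for a retaining-wall deflection monitoring series.
t = np.arange(200, dtype=float)
y = 0.05 * t + np.sin(0.3 * t) + 0.1 * np.random.randn(200)

# 1) Trend/residual split via locally weighted scatterplot smoothing.
trend = lowess(y, t, frac=0.3, return_sorted=False)
resid = y - trend

# 2) Phase-space (time-delay) embedding: predict resid[i] from the
#    `lag` preceding residual values.
lag = 5
X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
target = resid[lag:]

# 3) Gradient boosting on the embedded residual features.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:-20], target[:-20])               # hold out the last 20 points
pred = trend[-20:] + model.predict(X[-20:])    # recombine trend + residual
```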

Findings

The proposed model outperforms other established approaches in terms of accuracy and robustness, as demonstrated through analyzing two cases of braced deep excavations.

Research limitations/implications

The model is designed for the prediction of the deformation of deep excavations with stepped, chaotic and fluctuating features. Further research needs to be conducted to extend the model's applicability to other time series deformation data.

Practical implications

The model provides an efficient, robust and transparent approach to predict deformation during braced deep excavations. It serves as an effective decision support tool for engineers to ensure the stability and safety of deep excavations.

Originality/value

The model captures the global and local features of time series deformation of retaining structures and provides explicit expressions and feature importance for deformation trends and residuals, making it an efficient and transparent approach for deformation prediction.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 May 1994

I. Antoniadis and A. Kanarachos

Although the existence of a close relationship between the areas of digital signal processing and time integration methodology is known, a systematic application of the concepts and…

Abstract

Although the existence of a close relationship between the areas of digital signal processing and time integration methodology is known, a systematic application of the concepts and methods of the first area to the second has been missing. Such an approach is followed in this paper, arising from the fact that any time integration formula can be viewed as a digital filter of the applied excitation force, approximating as closely as possible the behaviour of a 'prototype analogue filter', which is in fact the semi-discrete equations of motion of the system. This approach provides a universal framework for handling and analysing all the various aspects of time integration formulae, such as analysis in the frequency domain, algebraic operations, accuracy and stability, aliasing, generation of spurious oscillations, introduction of digital filters within the time integration formula, handling of initial conditions and overshooting. Additionally, it is shown that digital signal processing methods, such as pre- or post-processing, time delays, etc., can in certain cases be a quite effective complement to the time integration scheme.
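
To make the premise concrete: for a unit-mass single-degree-of-freedom oscillator, the central difference recurrence (u[n+1] - 2u[n] + u[n-1])/dt² + w0²u[n] = f[n] is literally a second-order IIR filter from force to displacement, and its frequency response can be compared with the prototype analogue filter 1/(w0² - W²) using standard DSP tools. A minimal sketch; the central difference scheme is chosen only as a familiar example, not taken from the paper.

```python
import numpy as np
from scipy.signal import freqz

w0, dt = 2.0 * np.pi, 0.01                   # 1 Hz oscillator, 100 steps per period
# Central difference recurrence rearranged as a digital filter H(z) from f to u:
# u[n+1] + (w0^2 dt^2 - 2) u[n] + u[n-1] = dt^2 f[n]
b = [0.0, dt**2]                              # numerator (one-step-delayed gain)
a = [1.0, w0**2 * dt**2 - 2.0, 1.0]           # denominator (the recurrence itself)
freqs, H_digital = freqz(b, a, worN=512, fs=1.0 / dt)

# Prototype analogue filter: the exact receptance of the continuous system.
H_analog = 1.0 / (w0**2 - (2.0 * np.pi * freqs) ** 2)
# Comparing |H_digital| against |H_analog| exposes the scheme's dispersion
# (period elongation) and its behaviour toward the Nyquist frequency.
```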

Details

Engineering Computations, vol. 11 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 22 August 2008

M. Rezaiee‐Pajand and J. Alamatian

This paper aims to provide a simple and accurate higher order predictor‐corrector integration which can be used in dynamic analysis and to compare it with the previous works.

Abstract

Purpose

This paper aims to provide a simple and accurate higher order predictor‐corrector integration which can be used in dynamic analysis and to compare it with the previous works.

Design/methodology/approach

The predictor-corrector integration is defined by combining higher order explicit and implicit integrations in which displacement and velocity are assumed to be functions of the accelerations of several previous time steps. By studying the accuracy and stability conditions, the weighting factors and the acceptable time step are determined.
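
The specific weighting factors are the paper's contribution; the general predict-evaluate-correct pattern it builds on can be sketched with textbook Adams coefficients, where the explicit predictor weights values from several previous steps and the corrector then closes the step. A generic illustration, not the authors' scheme:

```python
import math

def pec_step(u, v, v_prev, a_prev, f, dt):
    # Predict-evaluate-correct for u' = v, v' = f(u, v):
    # Adams-Bashforth-2 predictor (explicit weights 3/2, -1/2 on history),
    # then a trapezoidal (Adams-Moulton-2) corrector.
    a = f(u, v)
    u_p = u + dt * (1.5 * v - 0.5 * v_prev)
    v_p = v + dt * (1.5 * a - 0.5 * a_prev)
    a_p = f(u_p, v_p)                       # evaluate at the predicted state
    u_new = u + 0.5 * dt * (v + v_p)        # correct
    v_new = v + 0.5 * dt * (a + a_p)
    return u_new, v_new, v, a               # history handed to the next step

# Usage: undamped oscillator u'' = -u, started with one exact history point.
dt, f = 0.01, lambda u, v: -u
u, v = 1.0, 0.0                             # u(0) = 1, v(0) = 0
v_prev, a_prev = math.sin(dt), -math.cos(dt)  # exact values at t = -dt
for _ in range(1000):
    u, v, v_prev, a_prev = pec_step(u, v, v_prev, a_prev, f, dt)
# After t = 10, u should track the exact solution cos(10).
```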

Findings

Simplicity and vector operations, together with accuracy and stability, are the main features of the new predictor-corrector method. This procedure can be used in linear and nonlinear dynamic analysis.

Research limitations/implications

In the proposed integration, time step is assumed to be constant.

Practical implications

Numerical integration is the heart of a dynamic analysis, and the accuracy of the results is strongly influenced by the accuracy and stability of the numerical integration.

Originality/value

This paper presents a simple and accurate predictor-corrector integration based on the accelerations of several previous time steps. It may be used as a routine in any dynamic analysis software to enhance accuracy and reduce computational time.

Details

Engineering Computations, vol. 25 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 23 August 2011

D.K. Sharma, R.K. Sharma, B.K. Kaushik and Pankaj Kumar

This paper aims to address the various issues of board-level (off-chip) interconnect testing. A new algorithm based on the boundary scan architecture is developed to test…

Abstract

Purpose

This paper aims to address the various issues of board-level (off-chip) interconnect testing. A new algorithm based on the boundary scan architecture is developed to test off-chip interconnect faults. The proposed algorithm can easily diagnose which two interconnects are shorted.

Design/methodology/approach

The problems in board-level interconnect testing are not simple. A new algorithm is developed to rectify some of the problems in existing algorithms. The proposed algorithm for testing board-level interconnect faults is implemented in Verilog using Modelsim software. The output response for each short between wires of different nodes is distinct, which is the basis of fault detection by the proposed algorithm. The test vectors are generated by the test pattern generator and are different for different nodes. This work implements built-in self-test using the boundary scan technique.
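
The idea that the output response for each short is distinct can be illustrated with the plain counting-sequence scheme that the modified counting algorithms build on: each net drives the bits of its own index over successive test cycles, and a wired-AND (dominant-0) short makes the affected nets read back an identical ANDed sequence. A toy simulation under those assumptions, not the paper's Verilog implementation:

```python
def counting_vectors(n_nets, width):
    # Net i drives the binary digits of i, one bit per test cycle.
    return [[(i >> b) & 1 for b in range(width)] for i in range(n_nets)]

def apply_wired_and_short(sent, shorted):
    # Dominant-0 short: every net in `shorted` observes the AND of all
    # driven values, cycle by cycle.
    received = [row[:] for row in sent]
    for cycle in range(len(sent[0])):
        v = 1
        for net in shorted:
            v &= sent[net][cycle]
        for net in shorted:
            received[net][cycle] = v
    return received

def diagnose(sent, received):
    # A net is faulty if it does not read back what was driven; shorted
    # nets are recognizable because they read back identical sequences.
    return [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]

sent = counting_vectors(n_nets=6, width=3)     # 3 cycles cover 6 < 2**3 nets
received = apply_wired_and_short(sent, shorted={3, 5})
print(diagnose(sent, received))                 # -> [3, 5]
```

Note that the ANDed sequence in this example coincides with net 1's code word, which is exactly the kind of aliasing risk the paper's seven-node grouping is designed to avoid.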

Findings

The dominant-1 (wired-OR, denoted as WOR), dominant-0 (wired-AND, denoted as WAND) and stuck-at faults are tested using the proposed algorithm. The proposed algorithm is also compared with several algorithms in the literature, i.e. the modified counting algorithm, the walking ones algorithm and others, and its results are found to be better than those of the existing algorithms.

Research limitations/implications

The limitation of the proposed algorithm is that, at any time, faults on only seven nodes can be tested, to avoid aliasing. The nodes are therefore grouped in multiples of seven to carry out the testing of faults.

Practical implications

The proposed algorithm is free from syndrome problems and utilizes a smaller number of test vectors.

Originality/value

Various existing algorithms, namely the modified counting algorithm, the walking ones algorithm and others, are discussed. A new algorithm is developed which can easily detect board-level dominant-1 (WOR), dominant-0 (WAND) and stuck-at faults. The proposed algorithm is completely free from aliasing and confounding syndromes.

Details

Circuit World, vol. 37 no. 3
Type: Research Article
ISSN: 0305-6120
