Search results

1 – 10 of over 10000
Article
Publication date: 19 September 2018

Dmytro Svyetlichnyy

Abstract

Purpose

The well-known discrete methods of computational fluid dynamics (CFD), the lattice Boltzmann method (LBM), cellular automata (CA), volume-of-fluid (VoF) and others rely on several parameters describing the boundary or the surface. Among them are the vector normal to the surface, the coordinates of a point on the surface and the curvature. These are necessary for the reconstruction of the real surface (boundary) from the volume fractions of several cells. However, the simple methods commonly used to calculate the vector normal to the surface are of unsatisfactory accuracy. In light of this, the purpose of this paper is to demonstrate a more accurate method for determining the vector normal to the surface.

Design/methodology/approach

Based on the thesis that information about the volume fractions of a 3 × 3 cell block should be enough to determine the normal vector, a neural network (NN) is proposed in the paper. The normal vector and the volume fractions of the cells can both be defined in terms of variables such as the location of the center and the radius of the circumference. Therefore, the NN is proposed to solve the inverse problem: to determine the normal vector from known values of the volume fractions, with the volume fractions as the inputs of the NNs and the normal vector as their output. Over a thousand variants of the surface location, orientation of the normal vector and curvature were prepared for the volume fraction calculations; the results were used for training, validating and testing the NNs.
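
A minimal sketch of this setup (not the author's trained network or data set): volume fractions of a 3 × 3 block are generated from circles of known center and radius, and a small feed-forward network is fitted to recover the normal direction from the nine fractions. The sampling scheme, network size and library choice are illustrative assumptions.

```python
# Sketch of the inverse problem described above: recover the unit normal of a
# circular interface from the area fractions of a 3 x 3 cell block.
# Sampling scheme, network size and library choice are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def volume_fractions(cx, cy, r, n_sub=20):
    """Approximate the area fraction of each unit cell of a 3x3 block
    (cell centers at -1, 0, 1) covered by the circle (cx, cy, r) by subsampling."""
    fracs = np.empty(9)
    k = 0
    for j in (-1.0, 0.0, 1.0):        # cell row centers
        for i in (-1.0, 0.0, 1.0):    # cell column centers
            xs = np.linspace(i - 0.5, i + 0.5, n_sub)
            ys = np.linspace(j - 0.5, j + 0.5, n_sub)
            X, Y = np.meshgrid(xs, ys)
            fracs[k] = np.mean((X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2)
            k += 1
    return fracs

# Synthetic data set: random interface orientations, offsets and curvatures.
X_train, Y_train = [], []
for _ in range(2000):
    r = rng.uniform(2.0, 20.0)                  # radius -> curvature 1/r
    phi = rng.uniform(0.0, 2.0 * np.pi)         # orientation of the normal
    d = rng.uniform(-0.4, 0.4)                  # interface offset from block center
    cx, cy = -(r + d) * np.cos(phi), -(r + d) * np.sin(phi)
    X_train.append(volume_fractions(cx, cy, r))
    Y_train.append([np.cos(phi), np.sin(phi)])  # unit normal components

net = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(np.array(X_train), np.array(Y_train))

# Predict the normal orientation for a fresh interface (true angle ~17.2 deg).
fr = volume_fractions(-6.0 * np.cos(0.3), -6.0 * np.sin(0.3), 6.0)
nx, ny = net.predict(fr.reshape(1, -1))[0]
print("predicted normal angle [deg]:", np.degrees(np.arctan2(ny, nx)) % 360.0)
```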

Findings

The simplest NN, with one neuron in the hidden layer, shows better results than other commonly used methods, and an NN with four neurons produces results with errors below 1° in the orientation of the normal vector; in several cases, it proved to be more accurate by an order of magnitude.

Practical implications

The method can be used in CFD, LBM, CA, VoF and other discrete computational methods. A more precise normal vector allows for a more accurate determination of the points on the surface and of the curvature in further surface- or interface-tracking calculations. The paper contains the data needed for practical application of the developed NNs. The method is limited to regular square or cuboid lattices.

Originality/value

The paper presents an original implementation of NNs for normal vector calculation connected with CFD, LBM and other applications for fluid flow with a free surface or phase transformation.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 28 no. 8
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 May 1998

J.M. Khodadadi

Abstract

A computational methodology, based on the coupling of the finite element and boundary element methods, is developed for the solution of magnetothermal problems. The finite element and boundary element formulations are discussed, along with their coupling. The coupling procedure entails the application of LU decomposition to eliminate the need for direct inversion of the matrices resulting from the FE‐BE formulation, thereby saving computation time and storage space. Corners at both the FE‐BE interface and in BE regions, where discontinuous fluxes exist, are treated using the double-flux concept. Numerical results are presented for three different systems and compared with analytical solutions when available. The numerical experiments suggest that, for magnetothermal problems involving small skin depths, a careful mesh distribution is critical for accurate prediction of the field variables of interest. It is found that the accuracy of the temperature distribution is strongly dependent on that of the magnetic vector potential: a small error in the magnetic vector potential can produce significant errors in the subsequent temperature calculations. Thus, particular attention must be paid to the design of a suitable mesh for the accurate prediction of vector potentials. Of all the cases examined, 4‐node linear elements with adequate progressive coarsening of the mesh away from the surface gave the most accurate results.
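
The computational saving mentioned in the abstract comes from factorizing the coupled system once and reusing the factors instead of forming explicit inverses. A generic sketch of that idea (random placeholder blocks, not the paper's FE‐BE matrices):

```python
# Generic illustration of solving a coupled block system K x = f with an LU
# factorization rather than an explicit inverse.  Block sizes and entries are
# placeholders, not the paper's FE-BE matrices.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n_fe, n_be = 200, 50

# Assembled (dense here, for brevity) coupled FE-BE system matrix and RHS.
K = np.block([
    [rng.standard_normal((n_fe, n_fe)) + n_fe * np.eye(n_fe),   # FE block (made well conditioned)
     rng.standard_normal((n_fe, n_be))],                        # FE-BE coupling block
    [rng.standard_normal((n_be, n_fe)),
     rng.standard_normal((n_be, n_be)) + n_be * np.eye(n_be)],  # BE block
])
f = rng.standard_normal(n_fe + n_be)

# Factorize once; reuse the factors for every right-hand side (e.g. every
# time step or nonlinear iteration) instead of forming K**-1 explicitly.
lu, piv = lu_factor(K)
x = lu_solve((lu, piv), f)

print("residual norm:", np.linalg.norm(K @ x - f))
```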

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 8 no. 3
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 January 1993

MACIEJ KOWALCZYK

Abstract

This paper is concerned with the rank analysis of the rectangular matrix of a homogeneous set of incremental equations, regarded as an element of a continuation method. The rank analysis is based on the known fact that every rectangular matrix can be transformed into echelon form. By inspection of the rank, correct control parameters are chosen, and this allows not only for rounding limit and turning points but also for branch‐switching near bifurcation points.
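
A small numerical sketch of the underlying idea: the rank of the rectangular incremental matrix is inspected to decide which unknown may serve as the control parameter. The example matrix, the tolerance and the use of a rank routine in place of an explicit echelon reduction are illustrative assumptions.

```python
# Sketch of choosing a continuation control parameter by rank inspection.
# For an n x (n+1) homogeneous incremental system J * dz = 0, column j can
# serve as the control parameter only if the square matrix left after
# removing column j still has full rank n.  The matrix is illustrative only.
import numpy as np

def admissible_control_parameters(J, tol=1e-10):
    n, m = J.shape                      # expect m = n + 1
    candidates = []
    for j in range(m):
        sub = np.delete(J, j, axis=1)   # remove the candidate control column
        if np.linalg.matrix_rank(sub, tol=tol) == n:
            candidates.append(j)
    return candidates

# Near a limit (turning) point the usual load parameter (last column here)
# drops out of the admissible set, so a displacement component is used instead.
J = np.array([[2.0, 1.0, 0.0],
              [4.0, 2.0, 1.0]])
print("rank of J:", np.linalg.matrix_rank(J))
print("admissible control columns:", admissible_control_parameters(J))
```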

Details

Engineering Computations, vol. 10 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 23 July 2019

Mark J. Nigrini and William Karstens

Abstract

Purpose

This paper develops a vector variation score that quantifies the change in an array of data points from period to period. The array could be the amounts reported on an income tax return, the closing stock prices for a set of listed companies, the monthly sales amounts for retail locations or the monthly balances in general ledger accounts.

Design/methodology/approach

The score is grounded in analytic geometry. The angle θ measures whether the changes were uniformly spread across the line items. The item(s) with the largest contribution(s) to the score can be identified. Line items can be weighted such that they contribute less than fully to the score.
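
A minimal sketch of the geometry described above, assuming a simple arccosine-based angle scaled to [0, 1] and squared-difference contributions; the paper's exact normalization and weighting scheme may differ.

```python
# Sketch of a vector variation score between two periods of line-item amounts.
# The [0, 1] normalization and the weighting scheme are assumptions made for
# illustration; the paper's exact definitions may differ.
import numpy as np

def vector_variation_score(prev, curr, weights=None):
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))
        prev, curr = prev * w, curr * w      # down-weight selected line items
    cos_theta = np.dot(prev, curr) / (np.linalg.norm(prev) * np.linalg.norm(curr))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return theta / np.pi                     # scale the angle to [0, 1]

def item_contributions(prev, curr):
    """Per-line-item share of the squared change vector (largest = main driver)."""
    diff = np.asarray(curr, dtype=float) - np.asarray(prev, dtype=float)
    return diff ** 2 / np.sum(diff ** 2)

year1 = [120_000, 45_000, 8_000, 2_500]      # e.g. tax-return line items
year2 = [125_000, 46_000, 30_000, 2_400]
print("variation score:", round(vector_variation_score(year1, year2), 4))
print("contributions:  ", np.round(item_contributions(year1, year2), 3))
```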

Findings

The method can identify tax returns with large year-on-year changes. It can show that price movements during earnings season are less dependent than is usually the case and can identify anomalies in reported sales amounts. The method should also be able to identify large, abnormal changes in ledger accounts.

Research limitations/implications

Auditors will need to be trained to interpret the results and to reduce the number of false positives.

Practical implications

The score could be used in both external and internal audit applications where auditors want to quantify and rank period-on-period changes in a search for outliers.

Originality/value

The change score is normalized to the [0, 1] range. The results can be plotted as a polar plot for display on an auditing dashboard. The contribution of a single line item can be calculated and line items can be weighted to prevent them from having an undue influence on the results.

Details

Managerial Auditing Journal, vol. 36 no. 1
Type: Research Article
ISSN: 0268-6902

Article
Publication date: 21 May 2018

Dongmei Han, Wen Wang, Suyuan Luo, Weiguo Fan and Songxin Wang

Abstract

Purpose

This paper aims to apply the vector space model (VSM)-PCR model to compute the semantic similarity of fault-zone ontologies, thereby verifying the feasibility and effectiveness of the VSM-PCR method in the uncertainty mapping of ontologies.

Design/methodology/approach

The authors first define the concept of an uncertainty ontology and then propose the method of ontology mapping. The proposed method fully considers the properties of the ontology in measuring the similarity of concepts. It expands the single VSM of concept meaning or instance set to the “meaning, properties, instance” three-dimensional VSM and uses membership degree or correlation to express the level of uncertainty.
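
A rough sketch of a three-dimensional weighted similarity in this spirit; the toy term vectors, the use of cosine similarity in each dimension and the weight values are illustrative assumptions, not the paper's PCR formulation.

```python
# Sketch: combine similarities computed on three separate vector space models
# -- concept meaning, properties and instances -- into one weighted score that
# can be read as a membership degree for the mapping.  Vectors, weights and
# the cosine measure are illustrative assumptions.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def concept_similarity(c1, c2, weights=(0.4, 0.3, 0.3)):
    """Weighted combination over the 'meaning, properties, instance' VSMs."""
    dims = ("meaning", "properties", "instance")
    sims = [cosine(c1[d], c2[d]) for d in dims]
    return float(np.dot(weights, sims))

# Two fault-zone concepts represented in the three term spaces (toy vectors).
fault_a = {"meaning": [1, 0, 1, 1], "properties": [1, 1, 0], "instance": [0, 1, 1]}
fault_b = {"meaning": [1, 1, 1, 0], "properties": [1, 0, 0], "instance": [0, 1, 0]}
print("mapping membership degree:", round(concept_similarity(fault_a, fault_b), 3))
```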

Findings

The proposed method provides better accuracy, which verifies the feasibility and effectiveness of the VSM-PCR method in treating the uncertainty mapping of ontologies.

Research limitations/implications

Future work will focus on exploring the similarity measures and combination methods in every dimension.

Originality/value

This paper presents an uncertain mapping method for ontology concepts based on a three-dimensional combination-weighted VSM, namely, VSM-PCR. It expands the single VSM of concept meaning or instance set to the “meaning, properties, instance” three-dimensional VSM and uses membership degree or correlation to express the degree of uncertainty. The authors finally provide an example to verify the feasibility and effectiveness of the VSM-PCR method in treating the uncertainty mapping of ontologies.

Details

Information Discovery and Delivery, vol. 46 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 2 August 2013

Wan‐Shiou Yang and Yi‐Rong Lin

Abstract

Purpose

The scientific literature has played an important role in the dissemination of new knowledge throughout the past century. However, the increasing number of scientific articles published in recent years has intensified the perception of information overload for users attempting to find relevant scientific information. The purpose of this paper is to describe a task‐focused strategy that employs the task profiles of users to make recommendations in a digital library.

Design/methodology/approach

This paper combines information retrieval, common citation analysis, and coauthor relationship analysis techniques with a citation network analysis technique – the CiteRank algorithm – to find relevant and high‐quality articles. In total, nine variations of the proposed approach were tested using articles downloaded from the CiteSeerX system and usage logs collected from the authors' experimental server.
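
A simplified sketch of blending a query-relevance score with a CiteRank-style random walk over the citation network; the age-decay constant, damping factor and blend weight below are illustrative assumptions, not the parameterization used in the paper.

```python
# Sketch of blending query relevance with a CiteRank-style citation score.
# CiteRank here is a random walk whose restart distribution favours recent
# papers (exp(-age/tau)); tau, alpha and the blend weight are assumptions.
import numpy as np

def citerank(citations, ages, tau=2.5, alpha=0.5, iters=100):
    """citations[i] = list of papers cited by paper i; ages in years."""
    n = len(citations)
    restart = np.exp(-np.asarray(ages, float) / tau)
    restart /= restart.sum()
    score = restart.copy()
    for _ in range(iters):
        new = (1.0 - alpha) * restart
        for i, cited in enumerate(citations):
            if cited:
                new[cited] += alpha * score[i] / len(cited)
            else:
                new += alpha * score[i] * restart      # dangling paper
        score = new
    return score

def recommend(relevance, citations, ages, blend=0.7, top_k=3):
    quality = citerank(citations, ages)
    combined = blend * np.asarray(relevance, float) + (1.0 - blend) * quality / quality.max()
    return np.argsort(-combined)[:top_k]

# Toy corpus: five papers with their citations, ages and query-relevance scores.
cites = [[1, 2], [2], [], [2, 0], [0, 3]]
ages = [1.0, 3.0, 8.0, 2.0, 0.5]
relevance = [0.9, 0.4, 0.7, 0.2, 0.8]
print("recommended paper indices:", recommend(relevance, cites, ages))
```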

Findings

The results from the authors' experimental evaluations demonstrate that the proposed Content‐citation approach outperforms the Relevance‐CiteRank, Relevance‐citation count, and Relevance‐only approaches.

Originality/value

This paper describes an original study that has produced a novel way to combine information retrieval, common citation analysis, and coauthor relationship analysis techniques to find relevant and high‐quality articles for recommendation in a digital library.

Article
Publication date: 18 May 2015

Srikanta Routroy, Pavan Kumar Potdar and Arjun Shankar

Downloads: 1245

Abstract

Purpose

The purpose of this paper is to determine the agility level of a manufacturing system along different timelines.

Design/methodology/approach

The fuzzy synthetic extents of the agile manufacturing enablers (AMEs) are determined on the basis of their importance. These are then integrated with the corresponding performance ratings along different timelines to calculate the Fuzzy Agile Manufacturing Index (FAMI). Euclidean distances of the FAMI from predetermined agility levels are then used to determine the agility level of the manufacturing system along different timelines.
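
A compact numerical sketch of this workflow using triangular fuzzy numbers; the enabler weights, ratings and predefined agility levels are illustrative placeholders, not the case-company data or the paper's exact extent-analysis formulas.

```python
# Sketch of the FAMI workflow: weights of agile manufacturing enablers (AMEs)
# are combined with triangular fuzzy performance ratings, and the result is
# matched to the nearest predefined agility level by Euclidean distance.
# All numbers below are illustrative placeholders.
import numpy as np

def fuzzy_weighted_mean(weights, ratings):
    """Triangular fuzzy numbers as (l, m, u); component-wise weighted mean."""
    w = np.asarray(weights, float)
    r = np.asarray(ratings, float)
    return (w[:, None] * r).sum(axis=0) / w.sum()

def nearest_level(fami, levels):
    dist = {name: np.linalg.norm(fami - np.asarray(tfn, float))
            for name, tfn in levels.items()}
    return min(dist, key=dist.get)

# Importance weights of three AMEs and their fuzzy ratings (0-10 scale).
weights = [0.5, 0.3, 0.2]
ratings_year1 = [(4, 5, 6), (3, 4, 5), (5, 6, 7)]
ratings_year2 = [(6, 7, 8), (5, 6, 7), (6, 7, 8)]
levels = {"slowly agile": (3, 4, 5), "agile": (5, 6, 7), "very agile": (7, 8, 9)}

for label, ratings in (("year 1", ratings_year1), ("year 2", ratings_year2)):
    fami = fuzzy_weighted_mean(weights, ratings)
    print(f"{label}: FAMI = {np.round(fami, 2)} -> {nearest_level(fami, levels)}")
```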

Findings

The proposed methodology was implemented in an Indian manufacturing organization to determine its agility level. It was concluded from the obtained results that there was significant improvement in the agility level along the timeline.

Research limitations/implications

The weights of the AMEs are assumed to be constant along the timeline.

Practical implications

Supply chain managers can easily apply this methodology in their respective manufacturing organizations to assess and determine the agility level. The proposed approach shows a direction for checking the performance of agility and for evaluating the evolution of agility in their manufacturing organizations.

Originality/value

The combination of fuzzy synthetic extent weights and average fuzzy performance ratings of the AMEs to calculate the FAMI along the timeline, considering the judgments of multiple experts, is a unique contribution.

Details

Measuring Business Excellence, vol. 19 no. 2
Type: Research Article
ISSN: 1368-3047

Article
Publication date: 1 June 2000

A. Savini

Abstract

Gives introductory remarks about chapter 1 of this group of 31 papers, from the ISEF 1999 Proceedings, on methodologies for field analysis in the electromagnetic community. Observes that the implementation of theory in computer packages contributes to clarification. Discusses the areas covered by some of the papers, such as artificial intelligence using fuzzy logic. Includes applications such as permanent magnets and looks at eddy-current problems. States that the finite element method is currently the most popular method used for field computation. Closes by pointing out the amalgam of topics covered.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 January 2013

Antonios E. Tzinevrakis, Dimitrios K. Tsanakas and Evangelos I. Mimos

Abstract

Purpose

The paper aims to highlight the efficiency of double complex numbers for the complete analysis of the intensity of the electric field produced by power lines.

Design/methodology/approach

One set of complex numbers is used to represent all the plane vectors (vector distances), and another set of complex numbers is used to represent all the sinusoidal time-varying quantities (electric charges and voltages). The simultaneous representation of vector distances and sinusoidal time-varying quantities with complex numbers gives elegant expressions for the electric field vector and greatly simplifies the mathematical relations.
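
A small sketch of this double representation, with Python's complex type reused for both roles (position vectors in the plane and charge phasors); the three-phase line geometry, the neglect of ground images and the ellipse formulas chosen here are textbook assumptions for illustration, not the paper's derivation.

```python
# Sketch: complex numbers as 2-D position vectors and complex numbers as
# phasors of sinusoidal line charges, combined to obtain the elliptically
# rotating field at a point.  Geometry and charges are toy values.
import numpy as np

EPS0 = 8.8541878128e-12

def field_phasor(point, line_positions, charge_phasors):
    """(Ex, Ey) phasors at `point` (complex-plane position) due to infinite
    line charges at `line_positions` with sinusoidal charge phasors [C/m]."""
    Ex = Ey = 0.0 + 0.0j
    for z, q in zip(line_positions, charge_phasors):
        d = point - z                              # position-vector complex number
        r2 = abs(d) ** 2
        Ex += q * d.real / (2.0 * np.pi * EPS0 * r2)
        Ey += q * d.imag / (2.0 * np.pi * EPS0 * r2)
    return Ex, Ey

def ellipse_parameters(Ex, Ey):
    """Semi-axes and rms value of the rotating field E(t) = Re((Ex, Ey) e^{jwt})."""
    s = abs(Ex) ** 2 + abs(Ey) ** 2
    p = abs(Ex ** 2 + Ey ** 2)
    return np.sqrt(0.5 * (s + p)), np.sqrt(0.5 * (s - p)), np.sqrt(0.5 * s)

# Three-phase line: conductor positions (metres, complex plane) and charge phasors.
positions = [-3.0 + 10.0j, 0.0 + 10.0j, 3.0 + 10.0j]
q = 1.0e-8
charges = [q, q * np.exp(-2j * np.pi / 3), q * np.exp(2j * np.pi / 3)]

Ex, Ey = field_phasor(5.0 + 1.0j, positions, charges)   # point 1 m above ground
major, minor, rms = ellipse_parameters(Ex, Ey)
print(f"semi-axes: {major:.1f}, {minor:.1f} V/m; rms: {rms:.1f} V/m")
```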

Findings

General analytical formulas are developed for the direct calculation of all the parameters of the elliptically rotating electric field (rms value, major and minor semi-axes of the ellipse, angles of the semi-axes, tracing direction, polarization). The analytical formulas depend on the components of the double complex number.

Research limitations/implications

The proposed method can be applied only to 2D problems, in particular power lines, where the electric field vector can be expressed as a double complex number.

Originality/value

Double complex numbers are shown in this paper to be a very effective mathematical tool for the complete analysis of the electric field produced by power lines. Expressing the electric field vector as a double complex number allows the direct calculation of all the parameters of the electric field with analytical relations.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 April 1995

Wojciech Demski and Grzegorz Szymański

Abstract

The normal components of B and the tangential components of H are both needed for the force calculation. If the field is calculated numerically, only one of these components is known exactly (in the numerical sense); the other is known only approximately.
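
For context, the surface force density on a boundary in air is a textbook expression in exactly these two components (Maxwell stress); the sketch below states that generic formula only, not the paper's treatment of the approximately known component.

```python
# Textbook Maxwell-stress expression for the surface force density in air,
# written in terms of the normal component of B and the tangential component
# of H -- the two quantities discussed above.  Generic formula sketch only.
import numpy as np

MU0 = 4.0e-7 * np.pi

def surface_force_density(Bn, Ht):
    """Force per unit area (normal, tangential) on a boundary, from the
    normal flux density Bn [T] and tangential field strength Ht [A/m]."""
    f_normal = Bn ** 2 / (2.0 * MU0) - MU0 * Ht ** 2 / 2.0
    f_tangential = Bn * Ht
    return f_normal, f_tangential

# Example: 1 T normal flux density, 2 kA/m tangential field strength.
fn, ft = surface_force_density(1.0, 2.0e3)
print(f"normal: {fn / 1e3:.1f} kPa, tangential: {ft / 1e3:.1f} kPa")
```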

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 14 no. 4
Type: Research Article
ISSN: 0332-1649
