Search results

1 – 10 of over 28000
Article
Publication date: 12 August 2020

Ngoc Le Chau, Ngoc Thoai Tran and Thanh-Phong Dao

Abstract

Purpose

Compliant mechanisms have been receiving great interest in precision engineering. However, analytical methods for analyzing their behavior remain a challenge because of unclear kinematic behaviors. In particular, design optimization of compliant mechanisms becomes an important task as problems grow more and more complex. Therefore, the purpose of this study is to design a new hybrid computational method. The hybridized method is an integration of statistics, numerical methods, computational intelligence and optimization.

Design/methodology/approach

A tensural bistable compliant mechanism is used to demonstrate the efficiency of the developed method. A pseudo model of the mechanism is designed and simulations are planned to retrieve the data sets. The main contributions of the design variables are analyzed by analysis of variance to initialize several new populations. Next, the objective functions are transformed into desirability values, which are inputs to the fuzzy inference system (FIS). The FIS modeling is aimed at initializing a single-combined objective function (SCOF). Subsequently, an adaptive neuro-fuzzy inference system is developed to model the relation between the main geometrical parameters and the SCOF. Finally, the SCOF is maximized by the lightning attachment procedure optimization algorithm to yield a global optimum.
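The desirability step described above follows the standard pattern of mapping each raw objective onto [0, 1] before combination. The paper combines the desirabilities through an FIS; purely for illustration, the sketch below substitutes a geometric mean for that FIS step, and the objective names, bounds and values are hypothetical:

```python
import math

def desirability_larger_is_better(y, lo, hi, r=1.0):
    """Map a larger-is-better objective value y onto [0, 1] (Derringer-Suich form)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** r

def combined_desirability(ds):
    """Geometric mean of the individual desirabilities (a stand-in for the FIS combination)."""
    return math.prod(ds) ** (1.0 / len(ds))

# Two hypothetical larger-is-better objectives with hypothetical bounds:
d1 = desirability_larger_is_better(1.8, lo=1.0, hi=2.0)      # displacement-like objective
d2 = desirability_larger_is_better(65.0, lo=50.0, hi=100.0)  # frequency-like objective
scof = combined_desirability([d1, d2])                       # single-combined objective
```

The geometric mean is a common default because any single objective with zero desirability drives the combined value to zero.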

Findings

The results show that the present method outperforms a combination of fuzzy logic and the Taguchi method. The present method is also superior to other algorithms, as confirmed by non-parametric tests. The proposed computational method is a useful, systematic method that can be applied to compliant mechanisms with complex structures and multiple-constrained optimization problems.

Originality/value

The novelty of this work is a new approach that combines statistical techniques, numerical methods, computational intelligence and a metaheuristic algorithm. The method is capable of solving multi-objective optimization problems for compliant mechanisms with nonlinear complexity.

Details

Engineering Computations, vol. 38 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Book part
Publication date: 17 May 2018

Richard Marciano, Victoria Lemieux, Mark Hedges, Maria Esteva, William Underwood, Michael Kurtz and Mark Conrad

Abstract

Purpose – For decades, archivists have been appraising, preserving, and providing access to digital records by using archival theories and methods developed for paper records. However, production and consumption of digital records are informed by social and industrial trends and by computer and data methods that show little or no connection to archival methods. The purpose of this chapter is to reexamine the theories and methods that dominate records practices. The authors believe that this situation calls for a formal articulation of a new transdiscipline, which they call computational archival science (CAS).

Design/Methodology/Approach – After making a case for CAS, the authors present motivating case studies: (1) evolutionary prototyping and computational linguistics; (2) graph analytics, digital humanities, and archival representation; (3) computational finding aids; (4) digital curation; (5) public engagement with (archival) content; (6) authenticity; (7) confluences between archival theory and computational methods: cyberinfrastructure and the records continuum; and (8) spatial and temporal analytics.

Findings – Each case study includes suggestions for incorporating CAS into Master of Library Science (MLS) education in order to better address the needs of today’s MLS graduates looking to employ “traditional” archival principles in conjunction with computational methods. A CAS agenda will require transdisciplinary iSchools and extensive hands-on experience working with cyberinfrastructure to implement archival functions.

Originality/Value – We expect that archival practice will benefit from the development of new tools and techniques that support records and archives professionals in managing and preserving records at scale and that, conversely, computational science will benefit from the consideration and application of archival principles.

Details

Re-envisioning the MLS: Perspectives on the Future of Library and Information Science Education
Type: Book
ISBN: 978-1-78754-884-8

Keywords

Book part
Publication date: 5 October 2018

Nima Gerami Seresht and Aminah Robinson Fayek

Abstract

Fuzzy numbers are often used to represent non-probabilistic uncertainty in engineering, decision-making and control system applications. In these applications, fuzzy arithmetic operations are frequently used for solving mathematical equations that contain fuzzy numbers. There are two approaches proposed in the literature for implementing fuzzy arithmetic operations: the α-cut approach and the extension principle approach using different t-norms. Computational methods for the implementation of fuzzy arithmetic operations in different applications are also proposed in the literature; these methods are usually developed for specific types of fuzzy numbers. This chapter discusses existing methods for implementing fuzzy arithmetic on triangular fuzzy numbers using both the α-cut approach and the extension principle approach using the min and drastic product t-norms. This chapter also presents novel computational methods for the implementation of fuzzy arithmetic on triangular fuzzy numbers using algebraic product and bounded difference t-norms. The applicability of the α-cut approach is limited because it tends to overestimate uncertainty, and the extension principle approach using the drastic product t-norm produces fuzzy numbers that are highly sensitive to changes in the input fuzzy numbers. The novel computational methods proposed in this chapter for implementing fuzzy arithmetic using algebraic product and bounded difference t-norms contribute to a more effective use of fuzzy arithmetic in construction applications. This chapter also presents an example of the application of fuzzy arithmetic operations to a construction problem. In addition, it discusses the effects of using different approaches for implementing fuzzy arithmetic operations in solving practical construction problems.
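The α-cut approach discussed in the chapter reduces fuzzy arithmetic to interval arithmetic at each membership level; for the operations shown here, the extension principle with the min t-norm gives the same result. A minimal sketch for triangular fuzzy numbers, with arbitrary example values:

```python
import numpy as np

def alpha_cut(tfn, alpha):
    """Closed-interval α-cut of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + alpha * (b - a), c - alpha * (c - b))

def add(x, y, levels=11):
    """Fuzzy addition via interval arithmetic at discrete α-levels."""
    out = []
    for alpha in np.linspace(0.0, 1.0, levels):
        (xl, xu), (yl, yu) = alpha_cut(x, alpha), alpha_cut(y, alpha)
        out.append((alpha, xl + yl, xu + yu))
    return out

def mul(x, y, levels=11):
    """Fuzzy multiplication: min/max over endpoint products at each α-level."""
    out = []
    for alpha in np.linspace(0.0, 1.0, levels):
        (xl, xu), (yl, yu) = alpha_cut(x, alpha), alpha_cut(y, alpha)
        prods = [xl * yl, xl * yu, xu * yl, xu * yu]
        out.append((alpha, min(prods), max(prods)))
    return out

A, B = (1.0, 2.0, 3.0), (2.0, 3.0, 4.0)
# At α = 1 both cuts collapse to the peaks, so A + B peaks at 5 and A * B at 6.
```

Taking min/max over the four endpoint products is valid here because both operands are positive intervals; the chapter's other t-norms (algebraic product, bounded difference, drastic product) require different constructions.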

Details

Fuzzy Hybrid Computing in Construction Engineering and Management
Type: Book
ISBN: 978-1-78743-868-2

Keywords

Article
Publication date: 7 January 2021

Saba Gharehdash, Bre-Anne Louise Sainsbury, Milad Barzegar, Igor B. Palymskiy and Pavel A. Fomin

Abstract

Purpose

This research study aims to develop regular cylindrical pore network models (RCPNMs) to calculate the topology and geometry properties of explosively created fractures, along with their resulting hydraulic permeability. The focus of the investigation is to define a method that generates a valid geometric and topologic representation, from a computational modelling point of view, of explosion-generated fractures in rocks. In particular, geometries are extracted from an experimentally validated Eulerian smoothed particle hydrodynamics (ESPH) approach to avoid the restrictions of image-based computational methods.

Design/methodology/approach

A three-dimensional stabilized ESPH solution is used to model the explosively created fracture networks, and the accuracy of the developed ESPH is qualitatively and quantitatively examined against experimental observations for both peak detonation pressures and crack density estimations. The SPH simulation domain is segmented into void and solid spaces using a graphical user interface, and the void space of the blasted rocks is represented by a regular lattice of spherical pores connected by cylindrical throats. Results produced by the RCPNMs are compared to three pore network extraction algorithms; once the accuracy of the RCPNMs is confirmed, the absolute permeability of the fracture networks is calculated.
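The permeability step in pore network models of this kind is typically a Hagen-Poiseuille calculation over the cylindrical throats combined with Darcy's law. The sketch below is a generic, heavily simplified stand-in (a serial chain of throats rather than a full lattice), with all radii, lengths and fluid properties chosen arbitrarily; it is not the authors' RCPNM code:

```python
import numpy as np

def throat_conductance(r, length, mu=1.0e-3):
    """Hagen-Poiseuille hydraulic conductance of a cylindrical throat of radius r."""
    return np.pi * r**4 / (8.0 * mu * length)

def chain_permeability(radii, throat_len, domain_len, area, mu=1.0e-3, dP=1.0):
    """Absolute permeability of a serial chain of cylindrical throats via Darcy's law."""
    g = throat_conductance(np.asarray(radii), throat_len, mu)
    g_total = 1.0 / np.sum(1.0 / g)   # conductances combine like resistors in series
    Q = g_total * dP                  # volumetric flow rate for pressure drop dP
    return Q * mu * domain_len / (area * dP)

# Hypothetical chain of three throats across a 3 mm sample with 1 cm^2 ... no, 1e-4 m^2 area:
K = chain_permeability(radii=[1e-4, 8e-5, 1.2e-4], throat_len=1e-3,
                       domain_len=3e-3, area=1e-4)
```

A full network solver would instead assemble a sparse mass-balance system over all pores and solve for the pressure field, but the conductance-then-Darcy structure is the same.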

Findings

The results obtained with the RCPNMs were compared with three pore network extraction algorithms and a computational fluid dynamics method, achieving greater computational efficiency in terms of CPU cost and better identification of the geometry and topology relationship in all the cases studied. The method also yields reliable topology data free of image-based pore network limitations, and the effect of topological disorder on the computed absolute permeability is minor. However, further research is necessary to improve the interpretation of real pore systems for explosively created fracture networks.

Practical implications

Although only laboratory cylindrical rock specimens were tested in the computational examples, the developed approaches are applicable for field scale and complex pore network grids with arbitrary shapes.

Originality/value

It is often desirable to have an integrated computational method for the hydraulic conductivity of explosively created fracture networks in which the segmentation of the fracture networks is not restricted to X-ray images, particularly when topologic and geometric modelling are the crucial parts. This research study provides insight into reliable computational methods and the pore network extraction algorithm selection process, as well as defining a practical framework for generating reliable topological and geometrical data in a Eulerian SPH setting.

Details

Engineering Computations, vol. 38 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing status significantly hampers research progress and the practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts on UQ are mostly concentrated in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
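Two of the sampling methods listed are easy to illustrate on the textbook linear limit state g(u) = β − u with standard normal u, for which the failure probability is exactly Φ(−β). The sketch below (β, sample sizes and seeds are arbitrary, and this is not code from the paper) contrasts crude Monte Carlo with importance sampling centred on the design point:

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def crude_mc_pf(g, n=200_000, seed=0):
    """Crude Monte Carlo estimate of P[g(U) <= 0] for standard normal U."""
    rng = np.random.default_rng(seed)
    return float(np.mean(g(rng.standard_normal(n)) <= 0.0))

def is_pf(g, shift, n=20_000, seed=1):
    """Importance sampling with the sampling density shifted toward the failure region."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n) + shift
    w = np.exp(-shift * v + 0.5 * shift**2)   # likelihood ratio N(0,1) / N(shift,1)
    return float(np.mean((g(v) <= 0.0) * w))

beta = 2.0                  # hypothetical reliability index
g = lambda u: beta - u      # failure when u >= beta
pf_exact = phi_cdf(-beta)   # about 0.0228
pf_mc = crude_mc_pf(g)
pf_is = is_pf(g, shift=beta)
```

Both estimators are unbiased, but the importance-sampled one reaches comparable accuracy with a tenth of the samples here, which is exactly the efficiency trade-off such comparative studies quantify.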

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 2 May 2017

Amirmahdi Ghasemi, R. Nikbakhti, Amirreza Ghasemi, Faraz Hedayati and Amir Malvandi

Abstract

Purpose

A numerical method is developed to capture the interaction of a solid object with a two-phase flow with high density ratios. The current computational tool would be the first step toward accurate modeling of wave energy converters, by which the immense energy of the ocean can be extracted at low cost.

Design/methodology/approach

The full two-dimensional Navier–Stokes equations are discretized on a regular structured grid, and the two-step projection method, together with multi-processing (OpenMP), is used to efficiently solve the flow equations. The level set and immersed boundary methods are used to capture the free surface of the fluid and the solid object, respectively. A proper contact angle between the solid object and the fluid is used to enhance the accuracy of the advection of mass and momentum of the fluids in three-phase cells.
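The level set idea used here can be seen in one dimension: the free surface is the zero contour of a signed distance field that is advected with the flow. The toy sketch below (first-order upwind, constant speed, arbitrary grid) is purely illustrative and unrelated to the paper's 2D solver:

```python
import numpy as np

def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a 1D level set field with constant speed u."""
    phi = phi.copy()
    for _ in range(steps):
        if u > 0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
        phi -= dt * u * dphi
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                  # signed distance; interface (phi = 0) at x = 0.3
u, dx = 1.0, x[1] - x[0]
dt = 0.5 * dx / abs(u)          # time step satisfying the CFL condition
phi = advect_level_set(phi0, u, dx, dt, steps=80)
# interface has moved to roughly x = 0.3 + u * 80 * dt = 0.5
```

Production solvers add higher-order schemes and periodic reinitialization to keep phi a signed distance function; none of that is needed for this linear toy profile.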

Findings

The computational tool is verified against numerical and experimental data in two scenarios: a cylinder falling into a rectangular domain due to gravity and a dam breaking in the presence of a fixed obstacle. The former validation simulation verifies the accuracy of the immersed boundary method, while the dam-breaking simulation confirms the accuracy of the level set method and the tool's ability to model high density ratios. The results obtained from the current method are in good agreement with experimental data and other numerical studies.

Practical implications

The computational tool can be parallelized to reduce the computational cost; therefore, OpenMP is used to solve the flow equations. Its applications include wind energy conversion and the interaction of solid objects, such as wind turbines, with water waves.

Originality/value

A highly efficient CFD approach is introduced to capture the interaction of a solid object with a two-phase flow with a high density ratio. The current method can be efficiently parallelized.

Details

Engineering Computations, vol. 34 no. 3
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 6 March 2020

Katherine Celia Greder, Jie Pei and Jooyoung Shin

Abstract

Purpose

The purpose of this study was to create a corset—understructure as well as fabric covering—using only computational, 3D approaches to fashion design. The process incorporated 3D body scan data, parametric methods for the 3D-printed design, and algorithmic methods for the automated, custom-fit fabric pattern.

Design/methodology/approach

The protocol-based methodological framework that nucleated this design project (see Figure 1) enabled more concentrated research into the iterative, step-by-step procedure and the computational techniques used herein.

Findings

The 3D computational methods in this study demonstrated a new way of rendering the body-to-pattern relationship through the use of multiple software platforms. Using body scan data and computer coding, the computational construction methods in this study showed a pliant and sustainable method of clothing design where designers were able to manipulate the X, Y, and Z coordinates of the points on the scan surface.

Research limitations/implications

A study of algorithmic methods is inherently a study of limitation. The iterative process of design was defined and refined through the particularity of an algorithm, which required thoughtful manipulation to inform the outcome of this research.

Practical implications

This study sought to illustrate the use and limitations of algorithm-driven computer programming to advance creative design practices.

Social implications

As body scan data and biometric information become increasingly common components of computational fashion design practices, the need for more research on the use of these techniques is pressing. Moreover, computational techniques serve as a catalyst for discussions about the use of biometric information in design and data modeling.

Originality/value

The process of designing in 3D allowed for the dynamic capability to manipulate proportion and form using parametric design techniques.

Details

International Journal of Clothing Science and Technology, vol. 32 no. 4
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 4 February 2020

Jin Wang, Yi Wang and Jing Shi

Abstract

Purpose

Selective laser melting (SLM) is a major additive manufacturing (AM) process in which laser beams are used as the heat source to melt and deposit metals in a layerwise fashion to enable the construction of components of arbitrary complexity. The purpose of this paper is to develop a framework for accurate and fast prediction of the temperature distribution during the SLM process.

Design/methodology/approach

A fast computation tool is proposed for thermal analysis of the SLM process. It is based on the finite volume method (FVM) and the quiet element method to allow the development of customized functionalities at the source level. The results obtained from the proposed FVM approach are compared against those obtained from the finite element method (FEM) using a well-established commercial software, in terms of accuracy and efficiency.

Findings

The results show that for simulating the SLM deposition of a cubic block with 81,000, 189,000 and 297,000 cells, the computation takes about 767, 3,041 and 7,054 min, respectively, with the FEM approach, compared with 174, 679 and 1,630 min with the FVM code. This represents a speedup of around 4.4x. Meanwhile, the average temperature difference between the two is below 6%, indicating good agreement.
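The reported speedup can be checked directly from the quoted wall-clock times:

```python
fem_minutes = [767, 3041, 7054]   # FEM times for 81,000 / 189,000 / 297,000 cells
fvm_minutes = [174, 679, 1630]    # corresponding FVM times
speedups = [fem / fvm for fem, fvm in zip(fem_minutes, fvm_minutes)]
# roughly 4.41, 4.48 and 4.33, consistent with the reported ~4.4x
```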

Originality/value

The thermal field for the multi-track and multi-layer SLM process is computed by the FVM approach for the first time. This pioneering work on comparing FVM and FEM for SLM applications implies that a fast and simple computing tool for thermal analysis of the SLM process is within reach, delivering comparable accuracy with significantly higher computational efficiency. The research results lay the foundation for a potentially cost-effective tool for investigating fundamental microstructure evolution and for optimizing the process parameters of the SLM process.

Details

Engineering Computations, vol. 37 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 4 July 2023

Jiayu Qin, Nengxiong Xu and Gang Mei

Abstract

Purpose

In this paper, the smoothed point interpolation method (SPIM) is used to model slope deformation. However, the computational efficiency of SPIM is unsatisfactory when modeling large-scale nonlinear deformation problems of geological bodies.

Design/methodology/approach

In this paper, the SPIM is used to model the slope deformation. However, the computational efficiency of SPIM is not satisfying when modeling the large-scale nonlinear deformation problems of geological bodies.

Findings

A simple slope model with different mesh sizes is used to verify the performance of the efficient face-based SPIM. The first accelerating strategy greatly enhances the computational efficiency of solving large-scale slope deformation, and the second effectively improves the convergence of the nonlinear behavior that occurs in the slope deformation.

Originality/value

The designed efficient face-based SPIM can enhance the computational efficiency when analyzing large-scale nonlinear slope deformation problems, which can help to predict and prevent potential geological hazards.

Details

Engineering Computations, vol. 40 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 5 February 2018

Gregory Nicholas de Boer, Adam Johns, Nicolas Delbosc, Daniel Burdett, Morgan Tatchell-Evans, Jonathan Summers and Remi Baudot

Abstract

Purpose

The aim of this work is to investigate different modelling approaches for air-cooled data centres. The study employs three computational methods, based on the finite element, finite volume and lattice Boltzmann methods, which are respectively implemented via commercial Multiphysics software, an open-source computational fluid dynamics code and a graphics processing unit-based code developed by the authors. The results focus on a comparison of the three methods, all of which include models for turbulence, when applied to two rows of datacom racks with cool air supplied via an underfloor plenum.

Design/methodology/approach

This paper studies thermal airflows in a data centre by applying different numerical simulation techniques that are able to analyse the thermal airflow distribution for a simplified layout of datacom racks in the presence of a computer room air conditioner.

Findings

Good quantitative agreement between the three methods is seen in terms of the inlet temperatures to the datacom equipment. The computational methods are contrasted in terms of application to thermal management of data centres.

Originality/value

The work demonstrates how the different simulation techniques applied to thermal management of airflow in a data centre can provide valuable design and operational understanding. Basing the analysis on three very different computational approaches is new and offers an informed understanding of their potential for this class of problems.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 28 no. 2
Type: Research Article
ISSN: 0961-5539

Keywords
