Search results

1 – 10 of over 1000
Article
Publication date: 9 January 2024

Juelin Leng, Quan Xu, Tiantian Liu, Yang Yang and Peng Zheng

Abstract

Purpose

The purpose of this paper is to present an automatic approach for mesh sizing field generation of complicated computer-aided design (CAD) models.

Design/methodology/approach

In this paper, the authors present an automatic approach for mesh sizing field generation. First, a source point extraction algorithm is applied to capture curvature and proximity features of CAD models. Second, according to the distribution of feature source points, an octree background mesh is constructed for storing element size values. Third, the mesh size value at each node of the background mesh is calculated by interpolating the local feature size of the nearby source points, yielding an initial mesh sizing field. Finally, a theoretically guaranteed smoothing algorithm is developed to restrict the gradient of the mesh sizing field.
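The gradient-restriction step can be illustrated with a minimal sketch. The clamp rule h_j <= h_i + beta * dist(i, j) and the parameter beta are illustrative assumptions, not the paper's exact smoothing algorithm:

```python
import math

def limit_size_gradient(nodes, sizes, neighbors, beta=0.2, max_iters=100):
    """Clamp a mesh sizing field so that between neighboring background-mesh
    nodes i, j the size obeys h_j <= h_i + beta * dist(i, j).
    `nodes` maps node id -> (x, y, z); `sizes` maps node id -> size value;
    `neighbors` maps node id -> iterable of adjacent node ids.
    Iterates until no size is reduced (fixed point)."""
    sizes = dict(sizes)
    for _ in range(max_iters):
        changed = False
        for i, nbrs in neighbors.items():
            for j in nbrs:
                d = math.dist(nodes[i], nodes[j])
                bound = sizes[i] + beta * d
                if sizes[j] > bound + 1e-12:
                    sizes[j] = bound  # only ever shrink sizes
                    changed = True
        if not changed:
            break
    return sizes
```

Because sizes are only reduced, the loop reaches a fixed point where every edge satisfies the gradient bound, which is the essential property such a smoothing pass must guarantee.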

Findings

To achieve high performance, the proposed approach has been implemented in multithreaded parallel form using OpenMP. Numerical results demonstrate that the proposed approach is remarkably efficient at constructing reasonable mesh sizing fields for complicated CAD models and is applicable to generating geometrically adaptive triangle/tetrahedral meshes. Moreover, since the mesh sizing field is defined on an octree background mesh, local size values can be queried with high efficiency in the subsequent mesh generation procedure.

Originality/value

How to determine a reasonable mesh size for complicated CAD models is often a bottleneck of mesh generation. For complicated models with thousands or even tens of thousands of geometric entities, it is time-consuming to construct an appropriate mesh sizing field for generating a high-quality mesh. A parallel algorithm for mesh sizing field generation with low computational complexity is presented in this paper, and its usability and efficiency have been verified.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 5 April 2024

Fangqi Hong, Pengfei Wei and Michael Beer

Abstract

Purpose

Bayesian cubature (BC) has emerged as one of the most competitive approaches for estimating multi-dimensional integrals, especially when the integrand is expensive to evaluate, and alternative acquisition functions, such as the Posterior Variance Contribution (PVC) function, have been developed for adaptive experiment design of the integration points. However, those sequential design strategies also prevent BC from being implemented in a parallel scheme. Therefore, this paper aims to develop a parallelized adaptive BC method to further improve the computational efficiency.

Design/methodology/approach

By theoretically examining the multimodal behavior of the PVC function, it is concluded that the multiple local maxima all make important contributions to the integration accuracy and can be selected as design points, providing a practical way to parallelize the adaptive BC. Inspired by this finding, four multimodal optimization algorithms, including one newly developed in this work, are then introduced for finding multiple local maxima of the PVC function in one run, and further for parallel implementation of the adaptive BC.
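The idea of treating every mode of the acquisition function as a design point can be sketched as follows. A dense-grid mode search stands in for the paper's multimodal optimization algorithms, and the function names are hypothetical:

```python
import numpy as np

def batch_design_points(acq, bounds, n_grid=2001, tol=1e-9):
    """Find all interior local maxima of a 1-D acquisition function on a
    dense grid and return them as one batch of design points, mimicking
    the idea that every mode of the PVC function is worth evaluating in
    parallel. `acq` must be vectorized; `bounds` = (lo, hi)."""
    lo, hi = bounds
    x = np.linspace(lo, hi, n_grid)
    y = acq(x)
    # a grid point is a local maximum if it strictly beats both neighbors
    interior = (y[1:-1] > y[:-2] + tol) & (y[1:-1] > y[2:] + tol)
    return x[1:-1][interior]
```

All returned points can then be evaluated concurrently, which is exactly what a sequential one-point-at-a-time acquisition strategy prevents.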

Findings

The superiority of the parallel schemes and the performance of the four multimodal optimization algorithms are then demonstrated and compared with the k-means clustering method by using two numerical benchmarks and two engineering examples.

Originality/value

Multimodal behavior of acquisition function for BC is comprehensively investigated. All the local maxima of the acquisition function contribute to adaptive BC accuracy. Parallelization of adaptive BC is realized with four multimodal optimization methods.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 31 May 2023

Basma Al-Mutawa and Muneer Mohammed Saeed Al Mubarak

Abstract

Purpose

The purpose of this study is to investigate the adoption of cloud computing as a digital technology by small and medium enterprises (SMEs) and assess its impact on sustainability of such enterprises.

Design/methodology/approach

A model was developed that featured factors influencing SMEs' sustainability. Primary quantitative data were gathered using a survey instrument. A total of n = 387 responses were collected using a convenience sampling method.

Findings

Findings reveal that the cost reduction, ease of use, reliability and sharing and collaboration factors have statistically significant impacts on SMEs' sustainability, whereas the privacy and security factor has no statistically significant impact on SMEs' sustainability.

Practical implications

The study has significant implications for managers and SME development authorities in creating a conducive environment of technological support for SMEs' sustainability.

Originality/value

The study enhances SMEs' performance and sustainability by upgrading their existing information and communications technology as a digital infrastructure and benefiting from the novel IT-based cloud revolution. Several studies have provided an understanding of the use of cloud computing services in SMEs but lack sufficient information about the challenges and the impact on SMEs' sustainability.

Details

Competitiveness Review: An International Business Journal, vol. 34 no. 1
Type: Research Article
ISSN: 1059-5422

Keywords

Article
Publication date: 8 September 2021

Senthil Kumar Angappan, Tezera Robe, Sisay Muleta and Bekele Worku M

Abstract

Purpose

Cloud computing services have gained huge attention in recent years, and many organizations have started moving their business data from traditional servers to cloud storage providers. However, increased data storage introduces challenges such as inefficient usage of resources in cloud storage. To meet the demands of users and maintain the service level agreement with clients, the cloud server has to allocate physical machines to the virtual machines as requested, but random resource allocation procedures lead to inefficient utilization of resources.

Design/methodology/approach

This thesis focuses on resource allocation for reasonable utilization of resources. The overall framework comprises cloudlets, a broker, a cloud information system, virtual machines, a virtual machine manager and a data center. Existing first fit and best fit algorithms consider the minimization of the number of bins but do not consider leftover bins.
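As a reference point, the classic single-resource best-fit heuristic that the study compares against can be sketched as follows. This is a minimal sketch with a scalar demand per VM; the paper's multi-objective algorithm additionally weighs CPU, RAM, bandwidth and power:

```python
def best_fit(vm_demands, host_capacity):
    """Place each VM (a scalar resource demand) on the host with the
    tightest leftover capacity that still fits it; open a new host when
    none fits. Returns the remaining capacity of each opened host."""
    hosts = []  # remaining capacity per physical machine
    for demand in vm_demands:
        # hosts that can still accommodate this VM
        fits = [i for i, free in enumerate(hosts) if free >= demand]
        if fits:
            # best fit: choose the host with the least leftover capacity
            i = min(fits, key=lambda i: hosts[i])
            hosts[i] -= demand
        else:
            hosts.append(host_capacity - demand)
    return hosts
```

The leftover capacities returned here are exactly the "leftover bins" the abstract says first fit and best fit ignore; a multi-objective variant would score them across several resource dimensions instead of one.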

Findings

The proposed algorithm utilizes resources more effectively than the first, best and worst fit algorithms. The effect of this utilization efficiency can be seen in metrics for central processing unit (CPU), bandwidth (BW), random access memory (RAM) and power consumption, where the proposed algorithm outperformed the other algorithms, saving 15 kHz of CPU, 92.6 kbps of BW, 6 GB of RAM and 3 kW of power compared with the first and best fit algorithms.

Originality/value

The proposed multi-objective bin packing algorithm is better for packing VMs on physical servers in order to better utilize different parameters such as memory availability, CPU speed, power and bandwidth availability in the physical machine.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 5 April 2024

Abhishek Kumar Singh and Krishna Mohan Singh

Abstract

Purpose

In the present work, we focus on developing an in-house parallel meshless local Petrov-Galerkin (MLPG) code for the analysis of heat conduction in two-dimensional and three-dimensional regular as well as complex geometries.

Design/methodology/approach

The parallel MLPG code has been implemented using open multi-processing (OpenMP) application programming interface (API) on the shared memory multicore CPU architecture. Numerical simulations have been performed to find the critical regions of the serial code, and an OpenMP-based parallel MLPG code is developed, considering the critical regions of the sequential code.
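The parallelization pattern, an OpenMP-style parallel-for over independent per-node work, can be mimicked in a Python sketch. The kernel below is a hypothetical stand-in for the local weak-form integral over a node's support domain; the paper's code uses OpenMP on shared-memory CPUs, not Python threads:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def local_contribution(node):
    """Hypothetical stand-in for the per-node MLPG work: evaluating the
    local weak-form integral over the node's support domain."""
    x, y = node
    return math.exp(-(x * x + y * y))

def assemble_parallel(nodes, n_workers=4):
    """Distribute the node loop -- the critical region identified in the
    serial code -- across workers, analogous to an OpenMP parallel-for.
    Each node's local integral is independent, so no synchronization is
    needed beyond collecting the results in order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(local_contribution, nodes))
```

Independence of the per-node work is what makes the abstract's reported near-linear speed-up plausible: the loop body has no shared mutable state, so adding workers mainly adds throughput.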

Findings

Based on performance parameters such as speed-up and parallel efficiency, the credibility of the parallelization procedure has been established. The maximum speed-up and parallel efficiency are 10.94 and 0.92 for a regular three-dimensional geometry (343,000 nodes). Results demonstrate the suitability of parallelization for larger problems, as parallel efficiency and speed-up increase with the number of nodes.

Originality/value

Few attempts have been made at parallel implementation of the MLPG method for solving large-scale industrial problems. Although the literature suggests that message-passing interface (MPI) based parallel MLPG codes have been developed, the OpenMP model has rarely been touched. This work is an attempt at developing an OpenMP-based parallel MLPG code for the very first time.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 22 December 2023

Vaclav Snasel, Tran Khanh Dang, Josef Kueng and Lingping Kong

Abstract

Purpose

This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations.

Design/methodology/approach

The authors collected over 40 IMC papers related to hardware design and optimization techniques of recent years, then classified them into three optimization option categories: optimization through the graphic processing unit (GPU), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize these techniques in aspects such as what kind of data set each applied, how it is designed and what the contribution of the design is.

Findings

ML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, their energy efficiency is limited because of their excessive flexibility support. On the other hand, hardware accelerators (field programmable gate arrays and application-specific integrated circuits) win on the energy efficiency aspect, but an individual accelerator often adapts exclusively to a single ML approach (family). From a long hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers.

Originality/value

IMC’s optimization enables high-speed processing, increases performance and analyzes massive volumes of data in real time. This work reviews IMC and its evolution. Then, the authors categorize three optimization paths for the IMC architecture to improve performance metrics.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 27 January 2023

Davit Marikyan, Savvas Papagiannidis, Omer F. Rana and Rajiv Ranjan

Abstract

Purpose

The coronavirus disease 2019 (COVID-19) pandemic has had a big impact on organisations globally, leaving organisations with no choice but to adapt to the new reality of remote work to ensure business continuity. Such an unexpected reality created the conditions for testing new applications of smart home technology whilst working from home. Given the potential implications of such applications to improve the working environment, and a lack of research on that front, this paper pursued two objectives. First, the paper explored the impact of smart home applications by examining the factors that could contribute to perceived productivity and well-being whilst working from home. Second, the study investigated the role of productivity and well-being in motivating the intention of remote workers to use smart home technologies in a home-work environment in the future.

Design/methodology/approach

The study adopted a cross-sectional research design. For data collection, 528 smart home users working from home during the pandemic were recruited. Collected data were analysed using a structural equation modelling approach.

Findings

The results of the research confirmed that perceived productivity is dependent on service relevance, perceived usefulness, innovativeness, hedonic beliefs and control over environmental conditions. Perceived well-being correlates with task-technology fit, service relevance, perceived usefulness, perceived ease of use, attitude to smart homes, innovativeness, hedonic beliefs and control over environmental conditions. Intention to work from a smart home-office in the future is dependent on perceived well-being.

Originality/value

The findings of the research contribute to the organisational and smart home literature by providing missing evidence about the implications of smart home technologies for employees' perceived productivity and well-being. The paper considers the conditions that facilitate better outcomes during remote work and could potentially be used to improve the work environment in offices after the pandemic. Also, the findings inform smart home developers about the features of technology which could improve their applications in contexts beyond home settings.

Details

Internet Research, vol. 34 no. 2
Type: Research Article
ISSN: 1066-2243

Keywords

Article
Publication date: 14 December 2023

Marjan Sharifi, Majid Siavashi and Milad Hosseini

Abstract

Purpose

The present study aims to extend the lattice Boltzmann method (LBM) to simulate radiation in geometries with curved boundaries, as a first step toward simulating radiation in complex porous media. In recent years, researchers have increasingly explored the use of porous media to improve heat transfer processes. The LBM is one of the most effective techniques for simulating heat transfer in such media. However, the application of the LBM to study radiation in complex geometries that contain curved boundaries, as found in many porous media, has been limited.

Design/methodology/approach

The numerical evaluation of the effect of the radiation-conduction parameter and extinction coefficient on temperature and incident radiation distributions demonstrates that the proposed LBM algorithm provides highly accurate results across all cases, compared to those found in the literature or those obtained using the finite volume method (FVM) with the discrete ordinates method (DOM) for radiative information.

Findings

For the case with a conduction-radiation parameter equal to 0.01, the maximum relative error is 1.9% in predicting the temperature along the vertical central line. The accuracy improves with an increase in the conduction-radiation parameter. Furthermore, a comparison of the computational performance of the two approaches reveals that the LBM-LBM approach is significantly faster than the FVM-DOM solver.

Originality/value

The difficulty of radiative modeling in combined problems involving irregular boundaries has led to alternative approaches that generally increase the computational expense to obtain necessary radiative details. To address the limitations of existing methods, this study presents a new approach involving a coupled lattice Boltzmann and first-order blocked-off technique to efficiently model conductive-radiative heat transfer in complex geometries with participating media. This algorithm has been developed using the parallel lattice Boltzmann solver.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 18 August 2022

Britto Pari J., Mariammal K. and Vaithiyanathan D.

Abstract

Purpose

Filter design plays an essential role in most communication standards. The essential element of the software-defined radio is a channelizer that comprises several channel filters. Designing filters with lower complexity, minimized area and enhanced speed is a demanding task in currently prevailing communication standards. This study aims to propose an efficient reconfigurable residue number system (RNS)-based multiply-accumulate (MAC) channel filter for software radio receivers.

Design/methodology/approach

RNS-based pipelined MAC module for the realization of channel finite impulse response (FIR) filter architecture is considered in this work. Further, the use of a single adder and single multiplier for realizing the filter architecture regardless of the number of taps offers effective resource sharing. This design provides significant improvement in speed of operation as well as a reduction in area complexity.

Findings

In this paper, two major tasks have been considered. First, the RNS number conversion is performed, in which an integer is converted into several residues; these residues are processed in parallel and applied to the MAC-FIR filter architecture. Second, the MAC filter architecture involves pipelining, which enhances the speed of operation to a significant extent. Also, the time-sharing-based design incorporates a single partial product-based shift-and-add multiplier and a single adder, which provide a design of low complexity. The results show that the proposed 16-tap RNS-based pipelined MAC sub-filter achieves a significant improvement in speed as well as 89.87% area optimization compared with the conventional RNS-based FIR filter structure.
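The forward/reverse RNS conversions and the per-channel MAC that make this parallelism possible can be sketched as follows; the moduli set below is an illustrative choice, not the one used in the paper:

```python
from math import prod

def to_residues(x, moduli):
    """Forward RNS conversion: represent integer x by its residues."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Reverse conversion via the Chinese remainder theorem; the moduli
    must be pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

def rns_mac(coeffs, samples, moduli):
    """MAC (sum of products) computed independently in each residue
    channel, then recombined -- the parallelism an RNS filter exploits.
    Valid as long as the true result is below the product of the moduli."""
    acc = [0] * len(moduli)
    for c, s in zip(coeffs, samples):
        for k, m in enumerate(moduli):
            acc[k] = (acc[k] + (c % m) * (s % m)) % m
    return from_residues(acc, moduli)
```

Because each residue channel works with small, carry-free values, hardware per channel can be short and fast, which is where the speed advantage of an RNS filter comes from.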

Originality/value

The proposed MAC-FIR filter architecture provides good performance in terms of complexity and speed of operation because of the use of the RNS scheme with pipelining, a partial product-based shift-and-add multiplier and a single adder, compared with conventional designs. The reported architecture can be used in software radios.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 28 February 2023

Tulsi Pawan Fowdur, M.A.N. Shaikh Abdoolla and Lokeshwar Doobur

Abstract

Purpose

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality assessment (VQA) and a phishing detection application by using the edge, fog and cloud computing paradigms.

Design/methodology/approach

The VQA algorithm was developed using Android Studio and run on a mobile phone for the edge paradigm. For the fog paradigm, it was hosted on a Java server and for the cloud paradigm on the IBM and Firebase clouds. The phishing detection algorithm was embedded into a browser extension for the edge paradigm. For the fog paradigm, it was hosted on a Node.js server and for the cloud paradigm on Firebase.

Findings

For the VQA algorithm, the edge paradigm had the highest response time while the cloud paradigm had the lowest, as the algorithm was computationally intensive. For the phishing detection algorithm, the edge paradigm had the lowest response time and the cloud paradigm the highest, as the algorithm had low computational complexity. Since the determining factor for the response time was the latency, the edge paradigm provided the smallest delay as all processing was local.
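The trade-off described here can be captured with a toy response-time model; all parameter values below are illustrative assumptions, not measurements from the paper:

```python
def response_time(compute_flops, device_flops_per_s, rtt_s,
                  payload_bits, bandwidth_bps):
    """Toy end-to-end response time for one paradigm: compute time on the
    chosen tier, plus round-trip latency and transfer time to reach it
    (both zero for the edge, where processing is local)."""
    compute = compute_flops / device_flops_per_s
    network = rtt_s + payload_bits / bandwidth_bps
    return compute + network

# illustrative comparison: a heavy job favors the cloud, a light job the edge
edge_heavy = response_time(1e12, 1e9, 0.0, 0, 1.0)       # slow device, no hop
cloud_heavy = response_time(1e12, 1e12, 0.1, 1e6, 1e7)   # fast device + network
edge_light = response_time(1e6, 1e9, 0.0, 0, 1.0)
cloud_light = response_time(1e6, 1e12, 0.1, 1e6, 1e7)
```

In this model the crossover depends on whether the compute-time saving on the faster remote tier exceeds the fixed network cost, which matches the abstract's finding that the computationally intensive VQA favored the cloud while lightweight phishing detection favored the edge.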

Research limitations/implications

The main limitation of this work is that the experiments were performed on a small scale due to time and budget constraints.

Originality/value

A detailed analysis with real applications has been provided to show how the complexity of an application can determine the best computing paradigm on which it can be deployed.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Keywords
