Search results

1 – 10 of 548
Article
Publication date: 5 April 2024

Fangqi Hong, Pengfei Wei and Michael Beer

Bayesian cubature (BC) has emerged as one of the most competitive approaches for estimating multi-dimensional integrals, especially when the integrand is expensive to evaluate, and…

Abstract

Purpose

Bayesian cubature (BC) has emerged as one of the most competitive approaches for estimating multi-dimensional integrals, especially when the integrand is expensive to evaluate. Alternative acquisition functions, such as the Posterior Variance Contribution (PVC) function, have been developed for adaptive experimental design of the integration points. However, these sequential design strategies prevent BC from being implemented in a parallel scheme. Therefore, this paper aims to develop a parallelized adaptive BC method to further improve computational efficiency.

Design/methodology/approach

By theoretically examining the multimodal behavior of the PVC function, it is concluded that the multiple local maxima all make important contributions to the integration accuracy and can be selected as design points, providing a practical way to parallelize the adaptive BC. Inspired by this finding, four multimodal optimization algorithms, including one newly developed in this work, are then introduced for finding multiple local maxima of the PVC function in one run, and hence for parallel implementation of the adaptive BC.
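
For readers unfamiliar with how several maxima of an acquisition function can be harvested in one run, the following minimal Python sketch uses a plain multi-start local search on a toy multimodal function; it only illustrates the general idea and is neither the paper's PVC function nor the authors' newly developed multimodal optimizer.

```python
# Minimal sketch: multi-start local search to collect several local maxima of an
# acquisition function, as a generic stand-in for the multimodal optimization step
# described above. The two-bump acquisition below is a toy surrogate, not the PVC.
import numpy as np
from scipy.optimize import minimize

def acquisition(x):
    """Toy multimodal surrogate with two local maxima (hypothetical)."""
    return np.exp(-np.sum((x - 1.0) ** 2)) + 0.8 * np.exp(-np.sum((x + 1.5) ** 2))

def local_maxima(bounds, n_starts=32, tol=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_starts):
        x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        res = minimize(lambda x: -acquisition(x), x0, bounds=bounds)
        # Keep a maximum only if it is not a duplicate of one already found.
        if res.success and all(np.linalg.norm(res.x - m) > tol for m in found):
            found.append(res.x)
    return found  # candidate design points for parallel evaluation of the integrand

if __name__ == "__main__":
    points = local_maxima(bounds=[(-3.0, 3.0), (-3.0, 3.0)])
    print(f"{len(points)} distinct local maxima found:", points)
```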

Findings

The superiority of the parallel schemes and the performance of the four multimodal optimization algorithms are then demonstrated and compared with the k-means clustering method by using two numerical benchmarks and two engineering examples.

Originality/value

Multimodal behavior of acquisition function for BC is comprehensively investigated. All the local maxima of the acquisition function contribute to adaptive BC accuracy. Parallelization of adaptive BC is realized with four multimodal optimization methods.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 8 September 2021

Senthil Kumar Angappan, Tezera Robe, Sisay Muleta and Bekele Worku M

Cloud computing services have gained huge attention in recent years, and many organizations have started moving their business data from traditional servers to cloud storage providers…

Abstract

Purpose

Cloud computing services have gained huge attention in recent years, and many organizations have started moving their business data from traditional servers to cloud storage providers. However, increased data storage introduces challenges such as inefficient usage of resources in cloud storage. To meet the demands of users and maintain the service level agreement with clients, the cloud server has to allocate physical machines to virtual machines as requested, but random resource allocation procedures lead to inefficient utilization of resources.

Design/methodology/approach

This study focuses on resource allocation for reasonable utilization of resources. The overall framework comprises cloudlets, a broker, a cloud information system, virtual machines, a virtual machine manager and a data center. Existing first-fit and best-fit algorithms consider minimizing the number of bins but do not consider the leftover bins.
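
As background on the baseline heuristics mentioned above, here is a minimal Python sketch of first-fit and best-fit placement of VM resource requests onto fixed-capacity hosts; it is a textbook illustration with made-up capacities and request sizes, not the proposed multi-objective algorithm.

```python
# Minimal sketch of the classical first-fit and best-fit placement heuristics for
# virtual machines, using a single resource dimension for clarity. Capacities and
# request sizes are hypothetical; the paper's method considers CPU, RAM, bandwidth
# and power jointly.

def first_fit(requests, capacity):
    """Place each request in the first host with enough leftover capacity."""
    hosts = []  # remaining capacity per host
    for r in requests:
        for i, free in enumerate(hosts):
            if r <= free:
                hosts[i] -= r
                break
        else:
            hosts.append(capacity - r)  # open a new host
    return hosts

def best_fit(requests, capacity):
    """Place each request in the host that would be left with the least free space."""
    hosts = []
    for r in requests:
        candidates = [i for i, free in enumerate(hosts) if r <= free]
        if candidates:
            i = min(candidates, key=lambda i: hosts[i] - r)
            hosts[i] -= r
        else:
            hosts.append(capacity - r)
    return hosts

if __name__ == "__main__":
    vm_requests = [4, 8, 1, 4, 2, 1]   # hypothetical VM sizes
    print("first-fit hosts used:", len(first_fit(vm_requests, capacity=10)))
    print("best-fit hosts used:", len(best_fit(vm_requests, capacity=10)))
```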

Findings

The proposed algorithm utilizes resources more effectively than the first-fit, best-fit and worst-fit algorithms. This efficiency is visible in the measured metrics: in terms of central processing unit (CPU), bandwidth (BW), random access memory (RAM) and power consumption, the proposed algorithm outperformed the others, saving 15 kHz of CPU, 92.6 kbps of BW, 6 GB of RAM and 3 kW of power compared to the first-fit and best-fit algorithms.

Originality/value

The proposed multi-objective bin-packing algorithm is better suited to packing VMs onto physical servers, making better use of parameters such as memory availability, CPU speed, power and bandwidth availability in the physical machine.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 5 April 2024

Abhishek Kumar Singh and Krishna Mohan Singh

In the present work, we focus on developing an in-house parallel meshless local Petrov-Galerkin (MLPG) code for the analysis of heat conduction in two-dimensional and…

Abstract

Purpose

In the present work, we focus on developing an in-house parallel meshless local Petrov-Galerkin (MLPG) code for the analysis of heat conduction in two-dimensional and three-dimensional regular as well as complex geometries.

Design/methodology/approach

The parallel MLPG code has been implemented using the open multi-processing (OpenMP) application programming interface (API) on a shared-memory multicore CPU architecture. Numerical simulations have been performed to identify the computationally critical regions of the serial code, and the OpenMP-based parallel MLPG code has been developed around those critical regions.

Findings

Based on performance parameters such as speed-up and parallel efficiency, the credibility of the parallelization procedure has been established. The maximum speed-up and parallel efficiency are 10.94 and 0.92 for a regular three-dimensional geometry (343,000 nodes). The results demonstrate the suitability of parallelization for larger node counts, as parallel efficiency and speed-up both increase with the number of nodes.
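
For context, parallel efficiency is conventionally defined as speed-up divided by the number of threads; under that assumption, the reported figures (speed-up 10.94, efficiency 0.92) are consistent with a run on roughly 12 threads. A one-line check:

```python
# Conventional definition (assumption): efficiency = speedup / threads.
speedup, efficiency = 10.94, 0.92      # values reported in the abstract
threads = speedup / efficiency         # implied thread count
print(round(threads))                  # -> 12 (approximately)
```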

Originality/value

Few attempts have been made at parallel implementation of the MLPG method for solving large-scale industrial problems. Although the literature shows that message-passing interface (MPI)-based parallel MLPG codes have been developed, the OpenMP model has rarely been explored. This work is, to the authors' knowledge, the first attempt at developing an OpenMP-based parallel MLPG code.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 22 December 2023

Vaclav Snasel, Tran Khanh Dang, Josef Kueng and Lingping Kong

This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the aspects of history, architectures and optimization options. In this review, the authors investigate…

Abstract

Purpose

This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the aspects of history, architectures and optimization options. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations.

Design/methodology/approach

The authors collected over 40 recent IMC papers related to hardware design and optimization techniques and classified them into three optimization categories: optimization through the graphics processing unit (GPU), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data sets to which it was applied, how it is designed and what the design contributes.
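
To make the "reduced precision" category concrete, the following minimal Python sketch shows symmetric int8 quantization and dequantization of a weight matrix; it is a generic illustration of the idea, not a technique taken from any specific paper covered by the review.

```python
# Minimal sketch of symmetric int8 quantization, illustrating the generic idea
# behind "optimization through reduced precision" (not from any reviewed paper).
import numpy as np

def quantize_int8(w):
    scale = max(np.abs(w).max(), 1e-8) / 127.0   # guard against an all-zero tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)  # hypothetical weight matrix
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())
```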

Findings

ML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the excessive flexibility it must support. On the other hand, hardware accelerators (field-programmable gate arrays and application-specific integrated circuits) win on energy efficiency, but an individual accelerator often adapts exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers.

Originality/value

IMC optimization enables high-speed processing, increases performance and supports analyzing massive volumes of data in real time. This work reviews IMC and its evolution, and the authors then categorize three optimization paths for the IMC architecture to improve performance metrics.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 27 January 2023

Davit Marikyan, Savvas Papagiannidis, Omer F. Rana and Rajiv Ranjan

The coronavirus disease 2019 (COVID-19) pandemic has had a big impact on organisations globally, leaving organisations with no choice but to adapt to the new reality of remote…

Abstract

Purpose

The coronavirus disease 2019 (COVID-19) pandemic has had a significant impact on organisations globally, leaving them with no choice but to adapt to the new reality of remote work to ensure business continuity. Such an unexpected reality created the conditions for testing new applications of smart home technology whilst working from home. Given the potential implications of such applications to improve the working environment, and a lack of research on that front, this paper pursued two objectives. First, the paper explored the impact of smart home applications by examining the factors that could contribute to perceived productivity and well-being whilst working from home. Second, the study investigated the role of productivity and well-being in motivating the intention of remote workers to use smart home technologies in a home-work environment in the future.

Design/methodology/approach

The study adopted a cross-sectional research design. For data collection, 528 smart home users working from home during the pandemic were recruited. Collected data were analysed using a structural equation modelling approach.

Findings

The results of the research confirmed that perceived productivity is dependent on service relevance, perceived usefulness, innovativeness, hedonic beliefs and control over environmental conditions. Perceived well-being correlates with task-technology fit, service relevance, perceived usefulness, perceived ease of use, attitude to smart homes, innovativeness, hedonic beliefs and control over environmental conditions. Intention to work from a smart home-office in the future is dependent on perceived well-being.

Originality/value

The findings of the research contribute to the organisational and smart home literature, by providing missing evidence about the implications of the application of smart home technologies for employees' perceived productivity and well-being. The paper considers the conditions that facilitate better outcomes during remote work and could potentially be used to improve the work environment in offices after the pandemic. Also, the findings inform smart home developers about the features of the technology that could improve its application in contexts beyond home settings.

Details

Internet Research, vol. 34 no. 2
Type: Research Article
ISSN: 1066-2243

Keywords

Article
Publication date: 14 December 2023

Marjan Sharifi, Majid Siavashi and Milad Hosseini

The present study aims to extend the lattice Boltzmann method (LBM) to simulate radiation in geometries with curved boundaries, as a first step toward simulating radiation in complex…

Abstract

Purpose

The present study aims to extend the lattice Boltzmann method (LBM) to simulate radiation in geometries with curved boundaries, as a first step toward simulating radiation in complex porous media. In recent years, researchers have increasingly explored the use of porous media to improve heat transfer processes. The LBM is one of the most effective techniques for simulating heat transfer in such media. However, the application of the LBM to study radiation in complex geometries containing curved boundaries, as found in many porous media, has been limited.
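
As background on how a lattice Boltzmann thermal solver is typically structured, the following minimal Python sketch implements a standard D2Q5 BGK scheme for pure heat conduction on a square grid; it is a textbook illustration only, not the authors' coupled conduction-radiation solver or their blocked-off treatment of curved boundaries.

```python
# Minimal D2Q5 lattice Boltzmann sketch for pure heat conduction (BGK collision).
# Textbook scheme for illustration; the paper couples conduction with radiation
# and handles curved boundaries, which this sketch does not.
import numpy as np

nx, ny, steps = 64, 64, 2000
tau = 0.8                                  # relaxation time; diffusivity = (tau - 0.5) / 3
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])    # D2Q5 weights
cx = np.array([0, 1, -1, 0, 0])            # lattice velocities (x components)
cy = np.array([0, 0, 0, 1, -1])            # lattice velocities (y components)

T = np.zeros((nx, ny))
f = w[:, None, None] * T                   # initialize distributions at equilibrium

for _ in range(steps):
    T = f.sum(axis=0)                      # temperature is the zeroth moment
    feq = w[:, None, None] * T
    f += (feq - f) / tau                   # BGK collision
    for i in range(5):                     # streaming along each lattice direction
        f[i] = np.roll(f[i], shift=(cx[i], cy[i]), axis=(0, 1))
    f[:, 0, :] = w[:, None] * 1.0          # hot left wall: equilibrium at T = 1
    f[:, -1, :] = w[:, None] * 0.0         # cold right wall: equilibrium at T = 0

# Sample of the centreline temperature profile (approaches a linear drop from 1 to 0).
print(f.sum(axis=0)[::8, ny // 2].round(2))
```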

Design/methodology/approach

The numerical evaluation of the effect of the radiation-conduction parameter and extinction coefficient on temperature and incident radiation distributions demonstrates that the proposed LBM algorithm provides highly accurate results across all cases, compared to those found in the literature or those obtained using the finite volume method (FVM) with the discrete ordinates method (DOM) for radiative information.

Findings

For the case with a conduction-radiation parameter equal to 0.01, the maximum relative error in the predicted temperature along the vertical central line is 1.9%. The accuracy improves as the conduction-radiation parameter increases. Furthermore, a comparison of the computational performance of the two approaches reveals that the LBM-LBM approach is significantly faster than the FVM-DOM solver.

Originality/value

The difficulty of radiative modeling in combined problems involving irregular boundaries has led to alternative approaches that generally increase the computational expense to obtain necessary radiative details. To address the limitations of existing methods, this study presents a new approach involving a coupled lattice Boltzmann and first-order blocked-off technique to efficiently model conductive-radiative heat transfer in complex geometries with participating media. This algorithm has been developed using the parallel lattice Boltzmann solver.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 18 August 2022

Britto Pari J., Mariammal K. and Vaithiyanathan D.

Filter design plays an essential role in most communication standards. The essential element of the software-defined radio is a channelizer that comprises several channel filters…

Abstract

Purpose

Filter design plays an essential role in most communication standards. The essential element of the software-defined radio is a channelizer that comprises several channel filters. Designing filters with lower complexity, minimized area and enhanced speed is a demanding task in currently prevailing communication standards. This study aims to propose an efficient reconfigurable residue number system (RNS)-based multiply-accumulate (MAC) channel filter for software radio receivers.

Design/methodology/approach

An RNS-based pipelined MAC module for the realization of the channel finite impulse response (FIR) filter architecture is considered in this work. Further, the use of a single adder and a single multiplier for realizing the filter architecture, regardless of the number of taps, offers effective resource sharing. This design provides a significant improvement in speed of operation as well as a reduction in area complexity.

Findings

In this paper, two major tasks are considered. First, RNS number conversion is performed, in which an integer is converted into several residues; these residues are processed in parallel and applied to the MAC-FIR filter architecture. Second, the MAC filter architecture involves pipelining, which significantly enhances the speed of operation. Also, the time-sharing-based design incorporates a single partial-product-based shift-and-add multiplier and a single adder, which yields a low-complexity design. The results show that the proposed 16-tap RNS-based pipelined MAC sub-filter achieves a significant improvement in speed as well as 89.87% area optimization when compared with the conventional RNS-based FIR filter structure.
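
To illustrate the residue-domain arithmetic described above, the Python sketch below converts integer samples to residues, runs a multiply-accumulate FIR filter independently modulo each modulus and reconstructs the output with the Chinese remainder theorem; the moduli and coefficients are hypothetical, and the sketch says nothing about the paper's pipelined hardware realization.

```python
# Minimal sketch of residue number system (RNS) arithmetic for a MAC-based FIR
# filter: convert to residues, MAC independently per modulus, then reconstruct via
# the Chinese remainder theorem. Moduli and coefficients are hypothetical.
from math import prod

MODULI = (255, 256, 257)   # pairwise coprime, of the classic {2^n - 1, 2^n, 2^n + 1} form

def to_residues(x):
    return tuple(x % m for m in MODULI)

def crt(residues):
    """Reconstruct the integer in [0, prod(MODULI)) from its residues."""
    M = prod(MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse
    return total % M

def fir_rns(samples, coeffs):
    """MAC FIR filter computed channel-wise in the residue domain."""
    coeff_res = [to_residues(c) for c in coeffs]
    taps = [to_residues(0)] * len(coeffs)      # residue-domain shift register
    out = []
    for x in samples:
        taps = [to_residues(x)] + taps[:-1]
        acc = tuple(
            sum(cr[k] * tr[k] for cr, tr in zip(coeff_res, taps)) % m
            for k, m in enumerate(MODULI)
        )                                       # one independent MAC channel per modulus
        out.append(crt(acc))
    return out

if __name__ == "__main__":
    samples = [3, 1, 4, 1, 5, 9]               # hypothetical input samples
    coeffs = [2, 7, 1]                         # hypothetical FIR coefficients
    print("RNS FIR output:", fir_rns(samples, coeffs))
```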

Originality/value

The proposed MAC-FIR filter architecture provides good performance in terms of complexity and speed of operation, owing to the use of the RNS scheme with pipelining, a partial-product-based shift-and-add multiplier and a single adder, when compared with conventional designs. The reported architecture can be used in software radios.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 28 February 2023

Tulsi Pawan Fowdur, M.A.N. Shaikh Abdoolla and Lokeshwar Doobur

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality…

Abstract

Purpose

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality assessment (VQA) and a phishing detection application by using the edge, fog and cloud computing paradigms.

Design/methodology/approach

The VQA algorithm was developed using Android Studio and run on a mobile phone for the edge paradigm. For the fog paradigm, it was hosted on a Java server and for the cloud paradigm on the IBM and Firebase clouds. The phishing detection algorithm was embedded into a browser extension for the edge paradigm. For the fog paradigm, it was hosted on a Node.js server and for the cloud paradigm on Firebase.
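
As a rough illustration of how such response-time comparisons are made, the sketch below times the same request against three hypothetical endpoints standing in for the edge, fog and cloud deployments; the URLs and payload are placeholders, not the authors' measurement harness.

```python
# Rough sketch of a response-time comparison across deployment targets.
# The endpoint URLs and payload are placeholders (hypothetical), not those used in the paper.
import time
import requests

ENDPOINTS = {
    "edge (local device)": "http://192.168.0.10:8080/assess",
    "fog (LAN server)":    "http://192.168.0.2:3000/assess",
    "cloud":               "https://example-cloud-host.example.com/assess",
}

def measure(url, payload, repeats=5):
    """Average round-trip time of POSTing the same payload to one endpoint."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        requests.post(url, json=payload, timeout=30)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

if __name__ == "__main__":
    payload = {"frame_id": 1, "features": [0.1, 0.2, 0.3]}   # hypothetical request body
    for name, url in ENDPOINTS.items():
        print(f"{name}: {measure(url, payload):.3f} s average")
```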

Findings

For the VQA algorithm, the edge paradigm had the highest response time while the cloud paradigm had the lowest, as the algorithm was computationally intensive. For the phishing detection algorithm, the edge paradigm had the lowest response time and the cloud paradigm the highest, as the algorithm had low computational complexity. Since the determining factor for its response time was latency, the edge paradigm provided the smallest delay, as all processing was local.

Research limitations/implications

The main limitation of this work is that the experiments were performed on a small scale due to time and budget constraints.

Originality/value

A detailed analysis with real applications has been provided to show how the complexity of an application can determine the best computing paradigm on which it can be deployed.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 20 March 2024

Ziming Zhou, Fengnian Zhao and David Hung

Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of the unsteady in-cylinder flow fields inside a direct-injection (DI) engine…

Abstract

Purpose

Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of the unsteady in-cylinder flow fields inside a direct-injection (DI) engine. However, it remains a daunting task to predict the nonlinear and transient in-cylinder flow motion, because it is highly complex and changes in both space and time. Recently, machine learning methods have demonstrated great promise in inferring relatively simple temporal flow field development. This paper aims to feature a physics-guided machine learning approach to realize high-accuracy, generalizable prediction of complex swirl-induced flow field motions.

Design/methodology/approach

To achieve high-fidelity time-series prediction of unsteady engine flow fields, this work features an automated machine learning framework with the following objectives: (1) the spatiotemporal physical constraint of the flow field structure is transferred to the machine learning structure; (2) the ML inputs and targets are designed efficiently to ensure high model convergence with limited sets of experiments; and (3) the prediction results are optimized by an ensemble learning mechanism within the automated machine learning framework.
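
For readers who want a concrete, if greatly simplified, picture of time-series flow-field prediction, the sketch below fits a linear next-step map between flattened flow snapshots using least squares; it is a generic baseline on synthetic data, not the authors' physics-guided automated ML framework.

```python
# Greatly simplified baseline for next-step flow-field prediction: learn a linear
# map from one flattened snapshot to the next by least squares. Synthetic data
# stand in for measured flow fields.
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, nx, ny = 40, 16, 16
fields = rng.standard_normal((n_snapshots, nx, ny))   # hypothetical flow-field snapshots

X = fields[:-1].reshape(n_snapshots - 1, -1)          # snapshots at time t
Y = fields[1:].reshape(n_snapshots - 1, -1)           # snapshots at time t + 1

A, *_ = np.linalg.lstsq(X, Y, rcond=None)             # linear next-step operator
prediction = (X[-1] @ A).reshape(nx, ny)              # predict the field after the last input

print("prediction shape:", prediction.shape)
print("training residual:", np.linalg.norm(X @ A - Y) / np.linalg.norm(Y))
```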

Findings

The proposed data-driven framework proves effective across different time periods and different extents of unsteadiness of the flow dynamics, and the predicted flow fields are highly similar to the target fields under various complex flow patterns. Among the described framework designs, the utilization of the spatial flow field structure is the featured improvement to the time-series flow field prediction process.

Originality/value

The proposed flow field prediction framework could be generalized to different crank angle periods, cycles and swirl ratio conditions, which could greatly promote real-time flow control and reduce experiments on in-cylinder flow field measurement and diagnostics.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 18 April 2024

Vaishali Rajput, Preeti Mulay and Chandrashekhar Madhavrao Mahajan

Nature’s evolution has shaped intelligent behaviors in creatures like insects and birds, inspiring the field of Swarm Intelligence. Researchers have developed bio-inspired…

Abstract

Purpose

Nature’s evolution has shaped intelligent behaviors in creatures like insects and birds, inspiring the field of Swarm Intelligence. Researchers have developed bio-inspired algorithms to address complex optimization problems efficiently. These algorithms strike a balance between computational efficiency and solution optimality, attracting significant attention across domains.

Design/methodology/approach

Bio-inspired optimization techniques for feature engineering and their applications are systematically reviewed, with the chief objective of assessing the statistical influence and significance of "bio-inspired optimization"-based computational models by referring to the vast research literature published between 2015 and 2022.

Findings

The Scopus and Web of Science databases were explored for the review, with a focus on parameters such as country-wise publications, keyword occurrences and citations per year. Springer and IEEE emerge as the most active publishers, with the most prominent journals being PLoS ONE, Neural Computing and Applications, Lecture Notes in Computer Science and IEEE Transactions. The “National Natural Science Foundation” of China and the “Ministry of Electronics and Information Technology” of India lead in funding projects in this area. China, India and Germany stand out as leaders in publications related to bio-inspired algorithms for feature engineering research.

Originality/value

The review findings integrate various bio-inspired algorithm selection techniques over a diverse spectrum of optimization techniques. Ant colony optimization contributes decentralized and cooperative search strategies, bee colony optimization (BCO) improves collaborative decision-making, particle swarm optimization offers an exploration-exploitation balance and bio-inspired algorithms in general offer a range of nature-inspired heuristics.
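
As a concrete example of one algorithm family named above, here is a minimal particle swarm optimization (PSO) loop in Python minimizing a toy quadratic; it is the standard textbook formulation, not an implementation drawn from the reviewed literature.

```python
# Minimal textbook particle swarm optimization (PSO) on a toy objective,
# illustrating the exploration-exploitation balance mentioned above.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)           # toy objective: global minimum at the origin

def pso(obj, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), obj(pos)   # personal bests
    gbest = pbest[np.argmin(pbest_val)]       # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = obj(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, obj(gbest)

if __name__ == "__main__":
    best_x, best_f = pso(sphere)
    print("best position:", best_x.round(4), "objective:", float(best_f))
```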
