Search results

1 – 10 of 653
Open Access
Article
Publication date: 3 August 2020

Maryam AlJame and Imtiaz Ahmad

Abstract

The evolution of technologies has unleashed a wealth of challenges by generating massive amounts of data. Recently, biological data have increased exponentially, which has introduced several computational challenges. DNA short read alignment is an important problem in bioinformatics. The exponential growth in the number of short reads has increased the need for an ideal platform to accelerate the alignment process. Apache Spark is a cluster-computing framework that provides data parallelism and fault tolerance. In this article, we propose a Spark-based algorithm, called Spark-DNAligning, to accelerate the DNA short read alignment problem. Spark-DNAligning exploits Apache Spark's performance optimizations such as broadcast variables, join after partitioning, caching and in-memory computations. Spark-DNAligning is evaluated in terms of performance by comparing it with the SparkBWA tool and a MapReduce-based algorithm called CloudBurst. All experiments are conducted on Amazon Web Services (AWS). Results demonstrate that Spark-DNAligning outperforms both tools, providing speedups in the range of 101–702 when aligning gigabytes of short reads to the human genome. The empirical evaluation reveals that Apache Spark offers promising solutions to the DNA short read alignment problem.
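
The abstract names broadcast variables, join after partitioning and caching as the key Spark optimizations but gives no code; the snippet below is a minimal, hypothetical PySpark sketch of that pattern (k-mer keying of reads against a pre-partitioned reference index), not the authors' Spark-DNAligning implementation. The toy data, seed length and function layout are assumptions.

```python
# Hypothetical PySpark sketch of the optimizations named in the abstract
# (broadcast variables, join after partitioning, caching); it is NOT the
# authors' Spark-DNAligning implementation, only an illustration of the pattern.
from pyspark import SparkContext

sc = SparkContext(appName="toy-read-alignment")
K = 5  # toy seed length (assumption)

# Broadcast a small lookup table once instead of shipping it with every task.
complement = sc.broadcast({"A": "T", "T": "A", "C": "G", "G": "C"})

# Reference index as (k-mer, position) pairs, hash-partitioned and cached in memory.
reference = "ACGTACGTGGTACCAGT"
ref_index = (
    sc.parallelize([(reference[i:i + K], i) for i in range(len(reference) - K + 1)])
      .partitionBy(8)   # partition before joining so the join avoids a full shuffle
      .cache()          # reuse the in-memory index across batches of reads
)

# Reads keyed by their leading k-mer, co-partitioned with the index.
reads = (
    sc.parallelize([("r1", "ACGTA"), ("r2", "GGTAC"), ("r3", "CCAGT")])
      .map(lambda kv: (kv[1][:K], kv[0]))
      .partitionBy(8)
)

# Reverse-strand keys computed inside tasks via the broadcast table
# (a full pipeline would re-key, re-partition and join these as well).
rc_reads = reads.map(
    lambda kv: ("".join(complement.value[b] for b in reversed(kv[0])), kv[1] + "/rc"))

# Join after partitioning: candidate alignment positions per read.
candidates = ref_index.join(reads).map(lambda kv: (kv[1][1], kv[1][0]))
print(sorted(candidates.collect()))  # e.g. [('r1', 0), ('r2', 8), ('r3', 12)]
```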

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 30 August 2021

Kailun Feng, Shiwei Chen, Weizhuo Lu, Shuo Wang, Bin Yang, Chengshuang Sun and Yaowu Wang

Abstract

Purpose

Simulation-based optimisation (SO) is a popular optimisation approach for building and civil engineering construction planning. However, in the SO framework, the simulation is invoked repeatedly along the optimisation trajectory, which raises the computational load to levels that are unrealistic for timely construction decisions. Modifying the optimisation settings, for example by reducing the search ability, is a popular way to address this challenge, but it also degrades the quality of the obtained optimal decisions, also termed the optimisation quality. Therefore, this study aims to develop an optimisation approach for construction planning that simultaneously reduces the high computational load of SO and provides reliable optimisation quality.

Design/methodology/approach

This study proposes an optimisation approach that modifies the SO framework by establishing an embedded connection between the simulation and optimisation technologies. The approach reduces the computational load while preserving the optimisation quality of the conventional SO approach by accurately learning knowledge from construction simulations with embedded ensemble learning algorithms, which automatically provide efficient and reliable fitness evaluations for the optimisation iterations.
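
The paper does not publish its algorithm code; below is a minimal, hypothetical sketch (using scikit-learn's RandomForestRegressor and a toy random-search loop) of the general pattern the abstract describes: an ensemble learner is trained on a limited set of simulation runs and then serves as the fitness evaluator inside the optimisation loop. The simulator, dimensions and budgets here are invented placeholders.

```python
# Hypothetical sketch of embedding an ensemble surrogate inside a
# simulation-based optimisation loop, as described in the abstract.
# The construction simulator is replaced by a toy placeholder function.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def construction_simulation(x):
    """Placeholder for an expensive construction simulation (assumption)."""
    return np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

dim, n_sim_budget, n_search = 4, 60, 5000  # toy sizes (assumptions)

# 1) Spend the limited simulation budget on a space-filling sample.
X_train = rng.random((n_sim_budget, dim))
y_train = np.array([construction_simulation(x) for x in X_train])

# 2) Learn the simulation response with an ensemble model.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# 3) Optimise against the cheap surrogate instead of the simulator.
X_cand = rng.random((n_search, dim))
scores = surrogate.predict(X_cand)          # fitness evaluations without new simulations
best = X_cand[np.argmin(scores)]

# 4) Verify only the chosen plan with one extra simulation run.
print("surrogate optimum:", best, "verified cost:", construction_simulation(best))
```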

Findings

A large-scale project application shows that the proposed approach reduced the computational load of SO by approximately 90%. Meanwhile, the proposed approach outperformed SO in terms of optimisation quality when the optimisation had limited search ability.

Originality/value

The core contribution of this research is an innovative method that simultaneously improves the efficiency and ensures the effectiveness of the well-known SO approach in construction applications. The proposed method is an alternative to SO that can run on standard computing platforms and support nearly real-time on-site construction decision-making.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 1
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 10 August 2022

Rama K. Malladi

Abstract

Purpose

Critics say cryptocurrencies are hard to predict and lack both economic value and accounting standards, while supporters argue they are revolutionary financial technology and a new asset class. This study aims to help accounting and financial modelers compare cryptocurrencies with other asset classes (such as gold, stocks and bond markets) and develop cryptocurrency forecast models.

Design/methodology/approach

Daily data from 12/31/2013 to 08/01/2020 (including the COVID-19 pandemic period) for the top six cryptocurrencies that constitute 80% of the market are used. Cryptocurrency price, return and volatility are forecasted using five traditional econometric techniques: pooled ordinary least squares (OLS) regression, fixed-effect model (FEM), random-effect model (REM), panel vector error correction model (VECM) and generalized autoregressive conditional heteroskedasticity (GARCH). Fama and French's five-factor analysis, a frequently used method to study stock returns, is conducted on cryptocurrency returns in a panel-data setting. Finally, an efficient frontier is produced with and without cryptocurrencies to see how adding cryptocurrencies to a portfolio makes a difference.
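
The frontier comparison in the design (an efficient frontier produced with and without cryptocurrencies) is straightforward to reproduce in outline; the sketch below is a generic, hypothetical mean-variance comparison computed from simulated daily returns, not the paper's data or its econometric models (OLS/FEM/REM/VECM/GARCH), which would require the original panel data set.

```python
# Hypothetical sketch of the "efficient frontier with and without cryptocurrencies"
# comparison described in the design; returns are simulated, not the paper's data.
import numpy as np

rng = np.random.default_rng(1)
n_days = 1500

# Simulated daily returns: 4 traditional assets plus 2 crypto-like assets
# with much higher volatility (assumption used only for illustration).
trad = rng.normal(0.0003, 0.01, size=(n_days, 4))
crypto = rng.normal(0.002, 0.05, size=(n_days, 2))

def random_portfolios(returns, n=20000):
    """Monte-Carlo cloud of long-only portfolios (a crude stand-in for the frontier)."""
    mu, cov = returns.mean(axis=0), np.cov(returns.T)
    w = rng.random((n, returns.shape[1]))
    w /= w.sum(axis=1, keepdims=True)              # weights sum to 1
    port_mu = w @ mu
    port_sd = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))
    return port_mu, port_sd

mu_t, sd_t = random_portfolios(trad)
mu_c, sd_c = random_portfolios(np.hstack([trad, crypto]))

# Adding crypto widens the attainable set: higher expected returns become reachable.
print("without crypto: highest mean %.4f, lowest vol %.4f" % (mu_t.max(), sd_t.min()))
print("with crypto:    highest mean %.4f, lowest vol %.4f" % (mu_c.max(), sd_c.min()))
```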

Findings

The seven findings of this analysis are summarized as follows: (1) VECM produces the best out-of-sample forecast of cryptocurrency prices; (2) cryptocurrencies are unlike cash for accounting purposes, as they are very volatile: the standard deviations of their daily returns are several times larger than those of the other financial assets; (3) cryptocurrencies are not a substitute for gold as a safe-haven asset; (4) the five most significant determinants of cryptocurrency daily returns are the emerging markets stock index, the S&P 500 stock index, the return on gold, the volatility of daily returns and the volatility index (VIX); (5) their return volatility is persistent and can be forecasted using the GARCH model; (6) in a portfolio setting, cryptocurrencies exhibit negative alpha and high beta, similar to small and growth stocks; and (7) a cryptocurrency portfolio offers more portfolio choices for investors and resembles a levered portfolio.

Practical implications

One of the tasks of the financial econometrics profession is building pro forma models that meet accounting standards and satisfy auditors. This paper undertakes such an activity by deploying traditional financial econometric methods and applying them to the emerging cryptocurrency asset class.

Originality/value

This paper attempts to contribute to the existing academic literature in three ways: (1) pro forma models for price forecasting: five established traditional econometric techniques (as opposed to novel methods) are deployed to forecast prices; (2) cryptocurrencies as a group: instead of analyzing one currency at a time and running the risk of missing cross-sectional effects (as most other researchers do), the top six cryptocurrencies, which constitute 80% of the market, are analyzed together as a group using panel-data methods; and (3) cryptocurrencies as financial assets in a portfolio: to understand the linkages between cryptocurrencies and traditional portfolio characteristics, an efficient frontier is produced with and without cryptocurrencies to see how adding cryptocurrencies to an investment portfolio makes a difference.

Details

China Accounting and Finance Review, vol. 25 no. 2
Type: Research Article
ISSN: 1029-807X

Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in Context (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across benchmarking data sets of different types and fields.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
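
The paper's distribution-specific threshold estimators are not reproduced here; the sketch below is a generic minimum cross-entropy thresholding (MCET) criterion on a grayscale histogram, with candidate thresholds scored in parallel to echo the parallel-processing idea. It should be read as an assumed baseline, not the PPSM itself.

```python
# Hypothetical baseline sketch: minimum cross-entropy thresholding (MCET) on a
# grayscale histogram, with candidate thresholds scored in parallel. This is a
# generic MCET, not the distribution-specific (Gaussian/lognormal/gamma) PPSM.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mcet_objective(hist, t):
    """Cross-entropy criterion for threshold t (lower is better)."""
    levels = np.arange(1, len(hist) + 1, dtype=float)   # shift by 1 to avoid log(0)
    lo, hi = hist[:t] * levels[:t], hist[t:] * levels[t:]
    if hist[:t].sum() == 0 or hist[t:].sum() == 0:
        return np.inf
    mu1, mu2 = lo.sum() / hist[:t].sum(), hi.sum() / hist[t:].sum()
    return -(lo.sum() * np.log(mu1) + hi.sum() * np.log(mu2))

def mcet_threshold(image, n_workers=4):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    hist = hist.astype(float)
    candidates = range(1, 256)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        scores = list(pool.map(lambda t: mcet_objective(hist, t), candidates))
    return candidates[int(np.argmin(scores))]

# Toy bimodal image: dark background plus a brighter square region.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (128, 128))
img[32:96, 32:96] = rng.normal(170, 15, (64, 64))
img = np.clip(img, 0, 255)

t = mcet_threshold(img)
mask = img >= t
print("estimated threshold:", t, "foreground fraction: %.2f" % mask.mean())
```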

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM with minimum cross-entropy thresholding (PPSM-MCET) is a robust, accurate and highly consistent segmentation method with high performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of the MCET computation. The proposed model might serve as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 16 August 2019

Morteza Moradi, Mohammad Moradi, Farhad Bayat and Adel Nadjaran Toosi

Abstract

Purpose

Human or machine: which one is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in intellectual tasks is not yet a reality. Instead, to compensate for the weaknesses of machines in some (mostly cognitive) tasks, the idea of putting the human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence is introduced as a new and comprehensive computing framework.

Design/methodology/approach

Given the extensive acceptance and efficiency of crowdsourcing, hybrid intelligence and distributed computing concepts, the authors have come up with the (complementary) idea of collective hybrid intelligence. In this regard, besides providing a brief review of efforts made in related contexts, the conceptual foundations and building blocks of the proposed framework are delineated. Moreover, some discussion of architectural and realization issues is presented.

Findings

The paper describes the conceptual architecture, workflow and schematic representation of a new hybrid computing concept. Moreover, by introducing three sample scenarios, its benefits, requirements, practical roadmap and architectural notes are explained.

Originality/value

The major contribution of this work is introducing the conceptual foundations for combining and integrating the collective intelligence of humans and machines to achieve higher efficiency and (computing) performance. To the best of the authors’ knowledge, this is the first study in which such an integration is considered. Therefore, it is believed that the proposed computing concept could inspire researchers toward realizing such unprecedented possibilities in practical and theoretical contexts.

Details

International Journal of Crowd Science, vol. 3 no. 2
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 18 January 2022

Srinimalan Balakrishnan Selvakumaran and Daniel Mark Hall

Abstract

Purpose

The purpose of this paper is to investigate the feasibility of an end-to-end simplified and automated reconstruction pipeline for digital building assets using the design science research approach. Current methods to create digital assets by capturing the state of existing buildings can provide high accuracy but are time-consuming, expensive and difficult.

Design/methodology/approach

Using design science research, this research identifies the need for a crowdsourced and cloud-based approach to reconstruct digital building assets. The research then develops and tests a fully functional smartphone application prototype. The proposed end-to-end smartphone workflow begins with data capture and ends with user applications.
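
The abstract only outlines the workflow (capture on a smartphone, cloud reconstruction, user-facing applications); the sketch below is a purely hypothetical orchestration of such a pipeline with invented helper functions, intended to make the end-to-end flow concrete rather than to describe the authors' prototype or back end.

```python
# Purely hypothetical sketch of a crowdsourced capture -> cloud reconstruction ->
# user application pipeline. Every function below is an invented placeholder;
# the authors' smartphone prototype and cloud services are not described in code.
from dataclasses import dataclass
from typing import List

@dataclass
class CaptureBatch:
    building_id: str
    image_paths: List[str]          # photos contributed by crowd users

def upload_images(batch: CaptureBatch) -> str:
    """Placeholder: push images to cloud storage, return an upload id."""
    return f"upload-{batch.building_id}"

def submit_reconstruction_job(upload_id: str) -> str:
    """Placeholder: start a cloud photogrammetry job, return a job id."""
    return f"job-{upload_id}"

def fetch_model(job_id: str) -> dict:
    """Placeholder: retrieve the finished 3D asset (mesh + metadata)."""
    return {"job": job_id, "mesh": "building.obj", "note": "good-enough accuracy"}

def pipeline(batch: CaptureBatch) -> dict:
    upload_id = upload_images(batch)
    job_id = submit_reconstruction_job(upload_id)
    model = fetch_model(job_id)      # in practice this step would poll or await the job
    return model                     # handed to issue-tracking / 3D mark-up applications

print(pipeline(CaptureBatch("hall-A", ["img001.jpg", "img002.jpg"])))
```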

Findings

The resulting implementation can achieve a realistic three-dimensional (3D) model characterized by different typologies, minimal trade-off in accuracy and low processing costs. By crowdsourcing the images, the proposed approach can reduce costs for asset reconstruction by an estimated 93% compared to manual modeling and 80% compared to locally processed reconstruction algorithms.

Practical implications

The resulting implementation achieves “good enough” reconstruction of as-is 3D models with minimal trade-offs in accuracy compared to automated approaches and 15× cost savings compared to a manual approach. Potential facility management use cases include issue and information tracking, 3D mark-up and multi-model configurators.

Originality/value

Through user engagement, development, testing and validation, this work demonstrates the feasibility and impact of a novel crowdsourced and cloud-based approach for the reconstruction of digital building assets.

Details

Journal of Facilities Management, vol. 20 no. 3
Type: Research Article
ISSN: 1472-5967

Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although graph deduplication is a well-known and pressing problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local, topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, its implementation as a service used within the OpenAIRE infrastructure system and reports figures from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
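
GDup's actual algorithms are not reproduced in the abstract; the snippet below is a generic, hypothetical sketch of the usual deduplication workflow it alludes to (blocking, pairwise similarity, then merging connected duplicates with union-find), offered only to make the "identify and merge duplicates" step concrete.

```python
# Generic, hypothetical entity-deduplication sketch (not GDup's implementation):
# 1) block candidate records by a cheap key, 2) score pairs inside each block,
# 3) merge records connected by high-similarity edges using union-find.
from collections import defaultdict
from difflib import SequenceMatcher

records = {
    "r1": "Data Technologies and Applications",
    "r2": "Data Technologies & Applications",
    "r3": "Journal of Facilities Management",
    "r4": "Journal of Facilities Mgmt",
}

def block_key(title):            # cheap blocking key: first 4 letters, lowercased
    return "".join(c for c in title.lower() if c.isalpha())[:4]

def similar(a, b, threshold=0.8):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

parent = {r: r for r in records}          # union-find forest
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]     # path compression
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

blocks = defaultdict(list)
for rid, title in records.items():
    blocks[block_key(title)].append(rid)

for ids in blocks.values():               # pairwise comparison only within blocks
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if similar(records[ids[i]], records[ids[j]]):
                union(ids[i], ids[j])

clusters = defaultdict(list)
for rid in records:
    clusters[find(rid)].append(rid)
print(list(clusters.values()))            # e.g. [['r1', 'r2'], ['r3', 'r4']]
```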

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 28 March 2022

Yunfei Li, Shengbo Eben Li, Xingheng Jia, Shulin Zeng and Yu Wang

Abstract

Purpose

The purpose of this paper is to reduce the difficulty of model predictive control (MPC) deployment on FPGA so that researchers can make better use of FPGA technology for academic research.

Design/methodology/approach

In this paper, the MPC algorithm is implemented on the FPGA by combining hardware and software design, and experiments verify the method.
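
The paper targets an FPGA implementation on ZYNQ, which cannot be reproduced here; as a point of reference only, the sketch below solves one step of a small unconstrained linear MPC problem in NumPy via the condensed (batch) formulation, i.e. the kind of computation the hardware/software co-design would accelerate. The model, horizon and weights are invented.

```python
# Reference-only sketch: one step of unconstrained linear MPC in NumPy using the
# condensed (batch) formulation. This is the computation an FPGA/ZYNQ design
# would accelerate; it is not the paper's hardware implementation.
import numpy as np

# Toy double-integrator model x_{k+1} = A x_k + B u_k (assumption).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                       # prediction horizon
Q = np.diag([10.0, 1.0])     # state weight
R = np.array([[0.1]])        # input weight

nx, nu = A.shape[0], B.shape[1]

# Build prediction matrices: X = Sx x0 + Su U over the horizon.
Sx = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Su = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Su[i * nx:(i + 1) * nx, j * nu:(j + 1) * nu] = (
            np.linalg.matrix_power(A, i - j) @ B)

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)

# Unconstrained QP: minimize (Sx x0 + Su U)' Qbar (Sx x0 + Su U) + U' Rbar U.
x0 = np.array([1.0, 0.0])    # initial state: 1 m offset, zero velocity
H = Su.T @ Qbar @ Su + Rbar
f = Su.T @ Qbar @ (Sx @ x0)
U = np.linalg.solve(H, -f)   # optimal input sequence over the horizon

u0 = U[:nu]                  # only the first input is applied (receding horizon)
print("first MPC input:", u0)
```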

Findings

This paper presents a ZYNQ-based design method that significantly reduces the difficulty of development. A comparison with CPU solution results shows that, with this method, the FPGA significantly accelerates the solution of the MPC problem.

Research limitations/implications

Due to practical limitations, a hardware-in-the-loop experiment could not be carried out for the time being; an open-loop experiment is conducted instead.

Originality/value

This paper proposes a new design method for deploying the MPC algorithm on an FPGA, reducing the difficulty of implementing the algorithm on FPGA. It greatly facilitates researchers in the field of autonomous driving in carrying out research on FPGA hardware acceleration.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 2
Type: Research Article
ISSN: 2399-9802

Open Access
Article
Publication date: 7 November 2019

Seyed Mahmoud Zanjirchi, Negar Jalilian and Marzieh Shahmohamadi Mehrjardi

Abstract

Purpose

Nowadays, to develop innovative activities in research and development units, it is desirable to rely on the concept of open innovation and to take action towards identifying an organization's external capabilities and acquiring external knowledge. Therefore, this study aims to evaluate the impact of external technology acquisition (ETA), external technology exploitation (ETE) and innovation culture (IC) on open innovation (OI) using a structural equation modelling (SEM) approach, and then to examine the impact of open innovation on organizational performance (OP) and value creation (VC).

Design/methodology/approach

This study was an applied survey in terms of its research purpose and data collection method. The statistical population included all companies in Yazd Science and Technology Park (STP). To collect the data, 109 questionnaires were distributed. The content validity of the questionnaire was confirmed by experts' comments, and a Cronbach's alpha coefficient of 0.873 indicated its reliability.
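
As a reminder of how the reported reliability figure (Cronbach's alpha of 0.873) is computed, the snippet below applies the standard formula to a small invented response matrix; it does not use the study's questionnaire data.

```python
# Cronbach's alpha on an invented item-response matrix (rows = respondents,
# columns = questionnaire items); the study's own data are not available here.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy 5-point Likert responses from 6 respondents to 4 items (assumption).
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
]
print("Cronbach's alpha: %.3f" % cronbach_alpha(responses))
```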

Findings

The results indicated that ETA, ETE and IC had significant and positive effects on OI, and that OI in turn had a significant and positive impact on OP and VC. However, the hypothesis of a significant and positive effect of VC on OP was rejected.

Originality/value

Considering the importance of innovative activities of companies in STPs and the role of OI in achieving the goals of idea-driven companies, the present study evaluated the effects of factors affecting the fulfillment of OI in companies based in STPs in the Yazd province of Iran.

Details

Asia Pacific Journal of Innovation and Entrepreneurship, vol. 13 no. 3
Type: Research Article
ISSN: 2071-1395
