Search results

1–10 of over 30,000
Article
Publication date: 1 March 2002

Alfred Loo and Y.K. Choi

Abstract

Heretofore, it has been extremely expensive to install and use distributed databases. With the advent of Java, JDBC and other Internet technologies, it has become easy and inexpensive to connect multiple databases and form distributed databases, even where the various host computers run on different platforms. These types of databases can be used in many peer‐to‐peer applications, which are now receiving much attention from researchers. Although it is easy to form a distributed database via the Internet or an intranet, effective sharing of information continues to be problematic. We need to pay more attention to the enabling algorithms, as dedicated links between computers are usually not available in peer‐to‐peer systems. The lack of dedicated links can cause poor performance, especially if the databases are connected via the Internet. Discusses the problems of distributed database operation with reference to an example. Presents two statistical selection algorithms designed to select the jth smallest key from a very large file distributed over many computers. The objective of these algorithms is to minimise the number of communication messages necessary for the selection operation. One algorithm is for intranets with broadcast/multicast facilities, while the other is for the Internet, where such facilities are unavailable.
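
The abstract does not spell out the two statistical algorithms themselves, but the goal of minimising messages in distributed selection can be illustrated with a generic pivot-and-count sketch: a coordinator repeatedly broadcasts a pivot key, each site replies with a single count of its keys below the pivot, and the candidate range shrinks until the jth smallest key is isolated. The simulation below is an illustrative assumption (distinct keys, simulated message counting), not the authors' method.

```python
import random

def distributed_select(sites, j):
    """Return the j-th smallest key (1-based) held across `sites`, a list
    of per-site key lists. Messages are simulated and counted rather than
    raw keys being shipped: each round, the coordinator broadcasts one
    pivot and every site replies with one count. Assumes distinct keys."""
    active = [list(s) for s in sites]   # keys still inside the candidate range
    rank = j                            # target's rank within the active keys
    messages = 0
    while True:
        pivot = random.choice([k for site in active for k in site])
        messages += len(active)         # broadcast the pivot to every site
        below = sum(sum(k < pivot for k in site) for site in active)
        messages += len(active)         # one count reply per site
        if below == rank - 1:           # exactly rank-1 keys precede the pivot
            return pivot, messages
        if below >= rank:               # target lies strictly below the pivot
            active = [[k for k in site if k < pivot] for site in active]
        else:                           # drop the pivot and everything below it
            active = [[k for k in site if k > pivot] for site in active]
            rank -= below + 1

# Example: the 4th smallest key across three sites is 14.
sites = [[17, 3, 99], [42, 8], [25, 6, 61, 14]]
key, msgs = distributed_select(sites, j=4)
```

Each round costs a fixed number of messages regardless of file size; with broadcast/multicast support, the pivot dissemination collapses to a single message per round, which is one reason the intranet and Internet cases call for different algorithms.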

Details

Internet Research, vol. 12 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 July 1998

Ralf Östermark

Abstract

In the present study we introduce a new recursive matrix inversion (RMI) algorithm for a distributed memory computer. The RMI algorithm was designed to meet the requirements of high-performance, flexible software for implementing different parallel optimization algorithms. Special consideration has been taken to ensure the usability and portability of the algorithm. The results we present show that a significant improvement in performance is attainable over the LU‐factorization algorithm included in the LAPACK library.
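
The abstract does not reproduce the RMI scheme itself; as background, the sketch below shows the textbook recursive block inversion that such algorithms build on, splitting the matrix into 2×2 blocks and assembling the inverse from the inverse of the leading block and of its Schur complement. NumPy stands in for the distributed-memory machinery, and nonsingular leading blocks are assumed.

```python
import numpy as np

def recursive_inverse(M):
    """Invert a square matrix by recursive 2x2 block partitioning using
    the Schur-complement identity. Illustrative only: the paper's RMI
    algorithm targets distributed-memory machines and its exact scheme
    is not given in the abstract. Assumes the leading block and the
    Schur complement are nonsingular at every level."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    Ai = recursive_inverse(A)            # invert the leading block
    S = D - C @ Ai @ B                   # Schur complement of A in M
    Si = recursive_inverse(S)
    return np.block([
        [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
        [-Si @ C @ Ai,              Si],
    ])

# Sanity check against LAPACK-backed numpy.linalg.inv:
M = np.random.rand(8, 8) + 8 * np.eye(8)   # diagonally dominant, safely invertible
assert np.allclose(recursive_inverse(M), np.linalg.inv(M))
```

The dominant cost sits in the large block multiplications, which parallelise well; that is broadly why block-recursive schemes are natural candidates for distributed-memory machines.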

Details

Kybernetes, vol. 27 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 9 May 2019

Andrew Kwok-Fai Lui, Maria Hiu Man Poon and Raymond Man Hong Wong

Abstract

Purpose

The purpose of this study is to investigate students’ decisions in example-based instruction within a novel self-regulated learning context. The novelty was the use of automated generators of worked examples and problem-solving exercises instead of a few handcrafted ones. According to cognitive load theory, when students are in control of their learning, they demonstrate different preferences in selecting worked examples or problem-solving exercises to maximize their learning. An unlimited supply of examples and exercises, however, offers an unprecedented degree of flexibility that should alter students’ decisions in scheduling their instruction.

Design/methodology/approach

ASolver, an online learning environment augmented with such generators for studying computer algorithms in an operating systems course, was developed as the experimental platform. Students’ decisions related to choosing worked examples or problem-solving exercises were logged and analyzed.

Findings

Results show that students had a tendency to attempt many exercises and examples, especially when performance measurement events were impending. Strong students had a greater appetite for both exercises and examples than weak students, and they were found to be more adventurous and less bothered by scaffolding. Weak students, on the other hand, were found to be more timid or unmotivated; they needed support in the form of procedural scaffolding to guide their learning.

Originality/value

This study was one of the first to introduce automated example generators into an operating systems course and to investigate students’ behaviors in such learning environments.

Details

Interactive Technology and Smart Education, vol. 16 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 29 January 2020

Dianchen Zhu, Huiying Wen and Yichuan Deng

Abstract

Purpose

To remedy the shortcomings of manual management, especially with regard to traffic accidents that occur at crossroads, the purpose of this paper is to develop a pro-active warning system for crossroads at construction sites. Although prior studies have made efforts to develop warning systems for construction sites, most of them paid attention to the construction process, while accidents occurring at crossroads were largely overlooked.

Design/methodology/approach

By summarizing the main causes of accidents occurring at crossroads, a pro-active warning system providing six countermeasure functions was designed. Several computer vision approaches and a prediction algorithm were applied and proposed to realize the designed functions.

Findings

A 12-hour video filmed at a crossroad at a construction site was selected as the original data. The test results show that all designed functions operated normally: predicted dangerous situations could be detected and the corresponding warnings given. To validate the applicability of the system, a further 36 hours of video data were used for a performance test, and the findings indicate that all applied algorithms fit the data well.

Originality/value

Computer vision algorithms have been widely used in previous studies to process video data or monitoring information; however, few of them have demonstrated high applicability in identifying and classifying the different participants at construction sites. In addition, none of these studies attempted to use a dynamic prediction algorithm to predict risky events, which could provide significant information for relevant active warnings.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 5
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 21 May 2021

Chang Liu, Samad M.E. Sepasgozar, Sara Shirowzhan and Gelareh Mohammadi

Abstract

Purpose

The practice of artificial intelligence (AI) is increasingly being promoted by technology developers. However, its adoption rate is still reported as low in the construction industry, due to a lack of expertise and the limited number of reliable applications of AI technology. Hence, this paper aims to present the detailed outcome of experiments evaluating the applicability and performance of AI object detection algorithms for construction modular object detection.

Design/methodology/approach

This paper provides a thorough evaluation of two deep learning algorithms for object detection: the faster region-based convolutional neural network (faster RCNN) and the single shot multi-box detector (SSD). Two types of metrics are presented: first, average recall and mean average precision by image pixels; second, recall and precision by counting. To conduct the experiments with the selected algorithms, four infrastructure and building construction sites were chosen for data collection, yielding a total of 990 images of three common modular objects: modular panels, safety barricades and site fences.
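
As a rough illustration of the second, count-based metric type, the sketch below matches each detection to at most one unmatched ground-truth box at an intersection-over-union (IoU) threshold and counts true positives. The matching rule and the 0.5 threshold are generic evaluation conventions assumed here, not details taken from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall_by_counting(detections, ground_truth, thresh=0.5):
    """Count-based precision and recall: a detection is a true positive
    if it overlaps a not-yet-matched ground-truth box with IoU >= thresh."""
    matched, tp = set(), 0
    for det in detections:
        best, best_iou = None, thresh
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            score = iou(det, gt)
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Pixel-based variants replace the box IoU with overlap computed over masks or pixel regions; averaging precision over classes and thresholds then yields mean average precision.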

Findings

The results of the comprehensive evaluation of the algorithms show that the performance of faster RCNN and SSD depends on the context in which detection occurs. Indeed, surrounding objects and the backgrounds of the objects affect the level of accuracy obtained from the AI analysis and may particularly affect precision and recall. The analysis of loss lines shows that the loss lines for the selected objects depend on both their geometry and the image background. The results on the selected objects show that faster RCNN offers higher accuracy than SSD for detection of the selected objects.

Research limitations/implications

The results show that modular object detection is crucial in construction for obtaining the information required for project quality and safety objectives. The detection process can significantly improve the monitoring of object installation progress in an accurate, machine-based manner, avoiding human errors. The results of this paper are limited to three construction sites, but future investigations can cover more tasks or objects from different construction sites in a fully automated manner.

Originality/value

This paper’s originality lies in offering new AI applications in modular construction, using a large first-hand data set collected from three construction sites. Furthermore, the paper presents the scientific evaluation results of implementing recent object detection algorithms across a set of extended metrics using the original training and validation data sets to improve the generalisability of the experimentation. This paper also provides the practitioners and scholars with a workflow on AI applications in the modular context and the first-hand referencing data.

Article
Publication date: 1 October 2005

Juraj Hanuliak and Ivan Hanuliak

Abstract

Purpose

To address the problems of high performance computing by using the networks of workstations (NOW) and to discuss the complex performance evaluation of centralised and distributed parallel algorithms.

Design/methodology/approach

Defines the role of performance and performance evaluation methods using a theoretical approach. Presents concrete parallel algorithms and tabulates the results of their performance.
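
The abstract does not state which measures the theoretical performance evaluation rests on; the customary starting points for parallel algorithms on a NOW are speedup, efficiency and the Amdahl bound, sketched below under that assumption.

```python
def speedup(t_serial, t_parallel):
    """Classical speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E(p) = S(p) / p; 1.0 means perfect scaling."""
    return speedup(t_serial, t_parallel) / p

def amdahl_bound(p, serial_fraction):
    """Amdahl's-law ceiling on speedup with p workers when a fixed
    fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# e.g. a NOW of 8 workstations with 5% inherently serial work:
# amdahl_bound(8, 0.05) -> about 5.9x, i.e. efficiency already below 0.75
```

On a NOW, message latency over the shared network typically inflates the effective serial fraction, which is why a complex evaluation covering both computation and communication matters more than on tightly coupled multiprocessors.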

Findings

Sees networks of workstations based on powerful personal computers as the way of the future: very cheap, flexible and promising asynchronous parallel systems. Argues that this trend will produce dynamic growth in parallel architectures based on networks of workstations.

Research limitations/implications

We would like to continue these experiments in order to derive more precise and general formulae for typically used parallel algorithms from linear algebra and for other application-oriented parallel algorithms.

Practical implications

Describes how the use of NOW can provide a cheaper alternative to traditionally used massively parallel multiprocessors or supercomputers and shows the advantages of unifying the two disciplines that are involved.

Originality/value

Produces a new approach that exploits the parallel processing capability of NOW. Gives concrete practical examples of the method, which has been developed using experimental measurement.

Details

Kybernetes, vol. 34 no. 9/10
Type: Research Article
ISSN: 0368-492X

Book part
Publication date: 4 October 2018

Korbkul Jantarakolica and Tatre Jantarakolica

Abstract

The rapid change of technology has significantly affected the financial markets in Thailand. In order to enhance market efficiency and liquidity, the Stock Exchange of Thailand (SET) has granted Thai stock brokers permission to develop and offer their customers algorithm and automatic stock trading. However, algorithm trading on the SET was not widely adopted. This chapter intends to design and empirically estimate a model explaining Thai investors’ acceptance of algorithm trading. The theoretical framework is based on the theory of reasoned action and the technology acceptance model (TAM). A sample of 400 investors who have used online stock trading and 300 investors who have used algorithm stock trading was observed and analyzed using a structural equations model (SEM) and a generalized linear regression model (GLM) with a Logit specification. The results confirm that attitudes, subjective norm, perceived risks and trust toward algorithm stock trading are factors determining investors’ behavior and acceptance of algorithm stock trading. Investors’ perception of, and trust in, algorithm stock trading as a trading strategy is a major factor in determining their perceived behavior and control, which affects their decision on whether to invest using algorithm trading. Accordingly, it can be concluded that Thai investors are willing to accept algorithm trading as a new financial technology, but still have concerns about the reliability and profitability of this new stock trading strategy. Therefore, algorithm trading can be promoted by building investors’ trust in algorithm trading as a reliable and profitable trading strategy.
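
The GLM-with-Logit part of that analysis can be sketched on synthetic data as below. The predictor names, coefficients and data are invented for illustration (and the SEM stage is not reproduced), so this shows only the shape of the estimation, not the chapter’s actual results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 700                                    # mirrors the 400 + 300 respondents
# Hypothetical TAM-style predictors, standardised survey scores:
X = rng.normal(size=(n, 4))                # attitude, subjective norm, perceived risk, trust
true_logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 2] + 0.9 * X[:, 3]
accept = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logits)))  # 1 = accepts algorithm trading

model = sm.Logit(accept, sm.add_constant(X)).fit(disp=0)
print(model.summary(xname=["const", "attitude", "subj_norm", "perceived_risk", "trust"]))
```

A negative fitted coefficient on perceived risk alongside positive coefficients on attitude and trust would correspond to the pattern the chapter reports.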

Details

Banking and Finance Issues in Emerging Markets
Type: Book
ISBN: 978-1-78756-453-4

Article
Publication date: 1 October 2006

Ibrahim Rawabdeh and Khaldoun Tahboub

Abstract

Purpose

This paper seeks to apply a heuristic approach to solving the facility layout problem and to describe a new computer‐aided layout design system.

Design/methodology/approach

The system utilizes a new approach for computing adjacency scores, stacking departments, and preserving or changing departments’ shapes and dimensions. The system’s algorithms are based on calculating the minimal distance between departments and a modified departmental closeness rating.

Findings

The research addressed in this paper has resulted in the development of FLASP (Facility LAyout Support Program) software. FLASP can reduce the number of iterations needed to reach the optimal solution of layout problems by restricting each department’s location according to the relationships between departments.

Practical implications

The system is built on a set of algorithms concerned with stacking, calculating the shortest rectilinear distances between departments, an adjacency matrix system, modification capabilities, and planning main aisles surrounding each department.

Originality/value

The program combines the importance of both the adjacency relationships and the distances between departments, based on the concept that the adjacency score should not be nullified just because two departments are no longer strictly adjacent. Rather, the adjacency score fades away gradually as the distance between the two departments increases, which is the main difference in how distance is considered.
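
One simple way to realise a score that fades gradually with distance is an exponential decay applied to the closeness rating over the rectilinear distance, as sketched below. The functional form and the decay constant are assumptions for illustration; the abstract does not give FLASP’s actual formula.

```python
import math

def rectilinear_distance(p, q):
    """Shortest rectilinear (Manhattan) distance between two department
    reference points (x, y)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def faded_adjacency_score(closeness_rating, distance, decay=0.1):
    """Distance-faded adjacency score: full credit at zero distance,
    decaying smoothly as departments move apart instead of dropping to
    zero the moment they stop touching. Exponential form assumed."""
    return closeness_rating * math.exp(-decay * distance)

# Two 'A'-rated departments (rating 4) ten metres apart still score
# 4 * e**(-1), about 1.47, rather than zero:
score = faded_adjacency_score(4, rectilinear_distance((0, 0), (4, 6)))
```

A layout optimiser can then maximise the sum of faded scores over all department pairs, so near-adjacency is still rewarded rather than ignored.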

Details

Journal of Manufacturing Technology Management, vol. 17 no. 7
Type: Research Article
ISSN: 1741-038X

Book part
Publication date: 11 June 2021

Madhav Sharma and David Biros

Abstract

The nature of technologies that are recognised as Artificial Intelligence (AI) has continually changed over time to be something more advanced than other technologies. Despite the fluidity of the understanding of AI, the most common theme that has stuck with AI is ‘human-like decision making’. Advancements in processing power, coupled with big data technologies, gave rise to highly accurate prediction algorithms. Analytical techniques which use multi-layered neural networks, such as machine learning and deep learning, have emerged as the drivers of these AI-based applications. Due to easy access and a growing information workforce, these algorithms are extensively used in a plethora of industries, ranging from healthcare, transportation, finance and legal systems to even the military. AI-tools have the potential to transform industries and societies through automation. Conversely, the undesirable or negative consequences of AI-tools have harmed their respective organisations in social, financial and legal spheres. As the use of these algorithms propagates in industry, AI-based decisions have the potential to affect large portions of the population, sometimes involving vulnerable groups in society. This chapter presents an overview of AI’s use in organisations by discussing the following: first, it discusses the core components of AI. Second, it discusses common goals organisations can achieve with AI. Third, it examines different types of AI. Fourth, it discusses unintended consequences that may take place in organisations due to the use of AI. Fifth, it discusses vulnerabilities that may arise from AI systems. Lastly, it offers some recommendations for industries to consider regarding the development and implementation of AI systems.

Details

Information Technology in Organisations and Societies: Multidisciplinary Perspectives from AI to Technostress
Type: Book
ISBN: 978-1-83909-812-3

Article
Publication date: 5 January 2010

Olusegun Folorunso and Adio Taofeek Akinwale

Abstract

Purpose

In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool that gives students an interactive, hands-on experience of the database normalization process.

Design/methodology/approach

The model‐view‐controller architecture is used to alleviate the black-box syndrome associated with studying algorithm behavior in the database normalization process. The authors propose a visualization “exploratory” tool that assists learners in understanding the actual behavior of their chosen database normalization algorithms and in evaluating the validity/quality of the algorithm. This paper describes the visualization tool and its effectiveness in teaching and learning normal forms and their functional dependencies.
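
The algorithmic core that such a tool typically visualizes is the attribute-closure computation under a set of functional dependencies, the basic step behind candidate-key and normal-form checks. The sketch below is a generic textbook routine offered for orientation, not the tool’s own code.

```python
def attribute_closure(attrs, fds):
    """Closure of an attribute set under functional dependencies `fds`,
    given as (lhs, rhs) pairs of attribute sets: repeatedly fire any FD
    whose left-hand side is already contained in the closure."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

# R(A, B, C, D) with A -> B and B -> C: the closure of {A} is
# {A, B, C}, so A alone is not a key of R (D is never reached).
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
assert attribute_closure({"A"}, fds) == {"A", "B", "C"}
```

Visualizing each iteration of this loop is one way a tool can open up the “black box” of a normalization algorithm’s behavior.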

Findings

The effectiveness of the tool was evaluated in surveys, which show that students generally viewed the tool more positively than the textbook technique; this difference is significant at p<0.05 (t=1.645). Mean interaction precision, calculated using expert-judge relevance ratings, also shows a significant difference between the visualization tool and the textbook (3.74 against 2.61 for precision, with a calculated t=6.69).

Originality/value

The visualization tool helped students validate and check their learning of the normalization process. Consequently, the paper shows that the tool has a positive impact on students’ perception.

Details

Campus-Wide Information Systems, vol. 27 no. 1
Type: Research Article
ISSN: 1065-0741
