Search results

1 – 10 of over 67,000
Article
Publication date: 26 January 2021

Adli Hamdam, Ruzita Jusoh, Yazkhiruni Yahya, Azlina Abdul Jalil and Nor Hafizah Zainal Abidin

Abstract

Purpose

The role of big data and data analytics in the audit engagement process is evident. Notwithstanding, understanding of how big data influences cognitive processes and, consequently, auditors' judgment decision-making is limited. The purpose of this paper is to present a conceptual framework on the cognitive processes that may influence auditors' judgment decision-making in the big data environment. The proposed framework predicts the relationships among data visualization integration, data processing modes, task complexity and auditors' judgment decision-making.

Design/methodology/approach

The conceptual framework is developed through a thorough literature review consisting of theoretical discussion and comparative study of other authors' works and thinking. It also involves summarizing and interpreting previous contributions subjectively and narratively, and extending the work in some fashion. Based on this approach, this paper formulates four propositions about data visualization integration, data processing modes, task complexity and auditors' judgment decision-making. The proposed framework was built from cognitive theory, addressing how auditors process data into useful information for judgment decision-making.

Findings

The proposed framework expects that the cognitive process of data visualization integration and intuitive data processing mode will improve auditors’ judgment decision-making. This paper also contends that task complexity may influence the cognitive process of data visualization integration and processing modes because of the voluminous nature of data and the complexity of business processes. Hence, it is also expected that the relationships between data visualization integration and audit judgment decision-making and between processing mode and audit judgment decision-making will be moderated by task complexity.
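
As an illustration only (the paper states its propositions verbally; this regression-style notation and the variable abbreviations are ours, not the authors'), the predicted relationships, including the moderating role of task complexity, might be summarized as:

    JDM = \beta_0 + \beta_1\,DVI + \beta_2\,DPM + \beta_3\,TC + \beta_4\,(DVI \times TC) + \beta_5\,(DPM \times TC) + \varepsilon

where JDM denotes judgment decision-making, DVI data visualization integration, DPM the (intuitive) data processing mode and TC task complexity; the interaction coefficients \beta_4 and \beta_5 carry the proposed moderation effects.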

Research limitations/implications

There is a dearth of studies examining how big data and big data analytics affect auditors' cognitive processes in making decisions. This paper will help researchers and auditors understand the behavioral consequences of data visualization integration and data processing mode in judgment decision-making, given a certain level of task complexity.

Originality/value

With the advent of big data and the evolution of innovative audit procedures, the constructed framework can be used as a theoretical foundation for future empirical studies concerning auditors’ judgment decision-making. It highlights the potential of big data to transform the nature and practice of accounting and auditing.

Details

Accounting Research Journal, vol. 35 no. 1
Type: Research Article
ISSN: 1030-9616

Keywords

Details

Transport Survey Methods
Type: Book
ISBN: 978-1-78-190288-2

Article
Publication date: 1 June 1998

Uri Fidelman

Abstract

Applies the analytic‐synthetic dichotomy of hemispheric functioning suggested by Levy‐Agresti and Sperry to explain the chunking theory of Miller. Constructs a theory of cognition, based on cerebral functions which were discovered through hemispheric differences. Shows that all the arguments of Efron against the hemispheric paradigm are merely “puzzles” that can be solved within this paradigm. New findings of Efron and Yund were, in fact, predicted by a component of this theory.

Details

Kybernetes, vol. 27 no. 4
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 21 December 2021

Laouni Djafri

Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can be used for prediction later. Thus, this knowledge becomes a great asset in companies' hands. This is precisely the objective of data mining. But with the production of a large amount of data and knowledge at a faster pace, the authors now speak of Big Data mining. For this reason, the proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. So, the problem raised in this work is how machine learning algorithms can be made to work in a distributed and parallel way at the same time without losing classification accuracy. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). To build it, the authors divided their work into two parts. In the first, they propose a distributed architecture controlled by the Map-Reduce algorithm, which in turn depends on a random sampling technique; this architecture is specially designed to handle big data processing and operates coherently and efficiently with the sampling strategy proposed in this work. The architecture also helps verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. The same sampling method is applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2). The experimental results show the efficiency of the proposed solution without significant loss in classification results. Thus, in practical terms, the DDPML system is generally dedicated to big data mining and works effectively in distributed systems with a simple structure, such as client-server networks.
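
As a rough sketch of the two-level stratified sampling the abstract describes, the Python fragment below draws class-proportional samples to build the partial bases (PLBL1, PLBL2) and a shared base (SLB), then merges them into a representative learning base (RLB). The split fractions, the helper name and the combination rule are our assumptions; the abstract specifies the strategy only at a high level, and the Map-Reduce orchestration is omitted entirely.

    import numpy as np
    from sklearn.model_selection import train_test_split

    def stratified_sample(X, y, fraction, seed=0):
        # Draw a class-proportional (stratified) random sample of the base.
        X_s, _, y_s, _ = train_test_split(
            X, y, train_size=fraction, stratify=y, random_state=seed)
        return X_s, y_s

    # Toy data standing in for a large original learning base.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 20))
    y = rng.integers(0, 2, size=10_000)

    # Level 1: partial learning base drawn from the full base.
    X_plbl1, y_plbl1 = stratified_sample(X, y, fraction=0.2, seed=1)
    # Level 2: a further stratified sample from the level-1 base.
    X_plbl2, y_plbl2 = stratified_sample(X_plbl1, y_plbl1, fraction=0.5, seed=2)
    # Shared learning base, sampled independently from the full base.
    X_slb, y_slb = stratified_sample(X, y, fraction=0.1, seed=3)

    # Representative learning base: union of the level-2 partial base and
    # the shared base (this combination rule is an assumption on our part).
    X_rlb = np.vstack([X_plbl2, X_slb])
    y_rlb = np.concatenate([y_plbl2, y_slb])

Because every sample is stratified on the class label, each base preserves the class proportions of the original data, which is what makes the merged RLB a plausible stand-in for the full data set.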

Findings

The authors obtained very satisfactory classification results.

Originality/value

The DDPML system is specially designed to handle big data mining classification smoothly.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 31 July 2009

Zhen Xijin, Wu Dianliang, Fan Xiumin and Hu Yong

Abstract

Purpose

Automobile development requires increasingly collaborative work involving geographically dispersed designers, which complicates model verification, concept review and assembly process evaluation, so a network-based collaborative virtual environment for automobile design is required. In this kind of environment, designers can perform interactive assembly operations collaboratively, such as grasp, move, release, collision detection (CD) and assembly evaluation report generation. Furthermore, automobile structures are becoming more complicated, and how to process such large data sets effectively in a real-time interactive virtual environment is a great challenge. The purpose of this paper is to focus on these problems.

Design/methodology/approach

A distributed parallel virtual assembly environment (DPVAE) is developed. In this environment, an event synchronization mechanism based on high‐level architecture/run‐time infrastructure (HLA/RTI) is applied to realize collaborative interactive operation among multiple users. To meet the demand for real‐time processing of large data sets, a novel parallel processing approach is developed, supported either by a single supercomputer or by a parallel processing environment composed of common personal computers in a high‐speed local area network. Technologies such as real‐time CD and multiple interactive operation modes are applied in DPVAE, and several auxiliary tools are provided to help achieve whole‐scheme review, component model verification and assembly evaluation.
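
The paper's implementation is not published, but the core parallel-processing idea, partitioning a large model's collision checks across ordinary PCs or cores, can be sketched as below. The axis-aligned bounding-box test and the pool-based partitioning are our simplifications of the real-time CD the paper describes, not its actual method.

    from itertools import combinations
    from multiprocessing import Pool

    def boxes_overlap(a, b):
        # Axis-aligned bounding-box test; each box is (min_xyz, max_xyz).
        return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

    def check_chunk(pairs):
        # Each worker tests its share of the candidate pairs.
        return [p for p in pairs if boxes_overlap(*p)]

    def parallel_collision_detect(boxes, workers=4):
        pairs = list(combinations(boxes, 2))
        # Round-robin split of the pairwise tests across workers.
        chunks = [pairs[i::workers] for i in range(workers)]
        with Pool(workers) as pool:
            results = pool.map(check_chunk, chunks)
        return [hit for chunk in results for hit in chunk]

    if __name__ == "__main__":
        boxes = [((0, 0, 0), (1, 1, 1)),
                 ((0.5, 0.5, 0.5), (2, 2, 2)),
                 ((5, 5, 5), (6, 6, 6))]
        print(parallel_collision_detect(boxes, workers=2))

A real system would add broad-phase culling and finer mesh-level tests, but the sketch shows why the workload parallelizes well: the pairwise checks are independent, so they can be farmed out to a supercomputer's cores or to networked PCs alike.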

Findings

This paper finds that the DPVAE system is an effective and efficient tool for supporting collaborative automobile assembly design.

Practical implications

Designers can discuss and verify the assembly scheme in DPVAE, previewing the design scenario in advance, so it is useful for reducing costs, improving quality and shortening the time to market, especially for new automobile development.

Originality/value

A combination of distributed technology and parallel computing technology is applied to product virtual assembly, successfully solving the problems of multi‐user collaborative work and real‐time processing of large data, and thus providing a useful tool for automobile development.

Details

Assembly Automation, vol. 29 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 20 August 2018

Laouni Djafri, Djamel Amar Bensaber and Reda Adjoudj

Abstract

Purpose

This paper aims to solve the problems of big data analytics for prediction, including volume, veracity and velocity, by improving the prediction result to an acceptable level in the shortest possible time.

Design/methodology/approach

This paper is divided into two parts. The first aims to improve the prediction result. Two ideas are proposed here: the double pruning enhanced random forest algorithm, and extracting a shared learning base via the stratified random sampling method to obtain a learning base representative of all the original data. The second part proposes a distributed architecture supported by new technology solutions, which in turn works coherently and efficiently with the sampling strategy under the supervision of the Map-Reduce algorithm.
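
The double-pruning enhancement itself is not specified in the abstract, so a plain random forest stands in for it below; what the sketch does show is the second idea, training on a small class-proportional (stratified) shared learning base and checking that accuracy holds up on the remaining data. Data set, split fraction and forest size are all our assumptions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy imbalanced data standing in for the original data set.
    X, y = make_classification(n_samples=50_000, n_features=25,
                               weights=[0.9, 0.1], random_state=0)

    # Shared learning base: a small stratified sample of the full data.
    X_slb, X_rest, y_slb, y_rest = train_test_split(
        X, y, train_size=0.05, stratify=y, random_state=0)

    # Plain random forest as a stand-in for the double pruning
    # enhanced variant, which the abstract does not detail.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_slb, y_slb)
    print("held-out accuracy:", clf.score(X_rest, y_rest))

If the stratified base is truly representative, the model trained on 5 per cent of the data should score close to one trained on all of it, which is the property the paper's sampling strategy is designed to deliver.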

Findings

The representative learning base, obtained by integrating two learning bases, the partial base and the shared base, provides an excellent representation of the original data set and gives very good Big Data predictive analytics results. Furthermore, these results were supported by the improved random forest supervised learning method, which played a key role in this context.

Originality/value

All companies are concerned, especially those holding large amounts of information that want to screen it to improve their knowledge of the customer and optimize their campaigns.

Details

Information Discovery and Delivery, vol. 46 no. 3
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 1 March 1985

Howard Falk

Abstract

We have followed recent developments in computer hardware for library and information uses in these pages. Readers have likely noticed that the emphasis has been on equipment for micro/personal computers. That will continue to be the focus here.

Details

The Electronic Library, vol. 3 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 13 February 2017

Elan Sasson, Gilad Ravid and Nava Pliskin

Abstract

Purpose

Although acknowledged as a principal dimension in the context of text mining, time has yet to be formally incorporated into the process of visually representing the relationships between keywords in a knowledge domain. This paper aims to develop and validate the feasibility of adding temporal knowledge to a concept map via pair-wise temporal analysis (PTA).

Design/methodology/approach

The paper presents a temporal trend detection algorithm – vector space model – designed to use objective quantitative pair-wise temporal operators to automatically detect co-occurring hot concepts. This PTA approach is demonstrated and validated without loss of generality for a spectrum of information technologies.
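
The abstract describes the detector only at a high level. One way to read "pair-wise temporal operators" over a vector space model is to bucket documents by period, measure keyword-pair co-occurrence per bucket, and flag pairs whose co-occurrence rises over time; the sketch below implements that reading, with the slope test and threshold as our assumptions rather than the paper's actual operators.

    import numpy as np

    def cooccurrence_series(docs_by_period, pair):
        # Fraction of documents per period containing both keywords.
        a, b = pair
        return np.array([
            sum(1 for d in docs if a in d and b in d) / max(len(docs), 1)
            for docs in docs_by_period
        ])

    def is_hot_pair(docs_by_period, pair, min_slope=0.05):
        # "Hot" if the least-squares slope of the co-occurrence
        # series exceeds min_slope per period (our trend criterion).
        series = cooccurrence_series(docs_by_period, pair)
        t = np.arange(len(series))
        slope = np.polyfit(t, series, 1)[0]
        return slope > min_slope

    # Usage: three time periods of keyword sets, one set per document.
    periods = [
        [{"cloud", "storage"}, {"database"}],
        [{"cloud", "analytics"}, {"cloud", "storage"}],
        [{"cloud", "analytics"}, {"cloud", "analytics", "big"}],
    ]
    print(is_hot_pair(periods, ("cloud", "analytics")))  # True

An edge flagged this way could then be emphasized in the concept map, which is the kind of temporal enrichment of the keyword graph the paper validates against expert judgment.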

Findings

The rigorous validation study shows that the resulting temporal assessments are highly correlated with subjective assessments of experts (n = 136), exhibiting substantial reliability-of-agreement measures and average predictive validity above 85 per cent.

Practical implications

By using massive amounts of textual documents available on the Web to first generate a concept map and then add temporal knowledge, the contribution of this work is emphasized and magnified by the current growing attention to big data analytics.

Originality/value

This paper proposes a novel knowledge discovery method to improve a text-based concept map (i.e. semantic graph) via detection and representation of temporal relationships. The originality and value of the proposed method is highlighted in comparison to other knowledge discovery methods.

Details

Journal of Knowledge Management, vol. 21 no. 1
Type: Research Article
ISSN: 1367-3270

Keywords

Details

Games in Everyday Life: For Play
Type: Book
ISBN: 978-1-83867-937-8

Article
Publication date: 1 July 1992

R. Dreyer Berg

Abstract

It has been assumed that electronic computers and telematics are part of an electronic revolution which renders obsolete many of the perceptual, psychic and sociocultural effects of phonetic literacy and typography, particularly as they relate to the lifestyles and production techniques we associate with the seventeenth‐century new science and the mechanical Industrial Revolution. Shows that the digital computer, "the ultimate assembly line", and its various effects represent a vast extension and amplification of centuries‐old trends and, indeed, seem to present us with habits and attitudes at odds with those induced by older electronic media such as radio and television. Among the results of digital technologies are business‐as‐usual 24 hours a day and the societal breakdown that occurs as a result of continuing acceleration and the splitting apart of human functions and the human psyche.

Details

Kybernetes, vol. 21 no. 7
Type: Research Article
ISSN: 0368-492X

Keywords
