Search results
1 – 10 of over 54,000

Adli Hamdam, Ruzita Jusoh, Yazkhiruni Yahya, Azlina Abdul Jalil and Nor Hafizah Zainal Abidin
Abstract
Purpose
The role of big data and data analytics in the audit engagement process is evident. Notwithstanding, understanding of how big data influences cognitive processes and, consequently, the auditors’ judgment decision-making process is limited. The purpose of this paper is to present a conceptual framework on the cognitive processes that may influence auditors’ judgment decision-making in the big data environment. The proposed framework predicts the relationships among data visualization integration, data processing modes, task complexity and auditors’ judgment decision-making.
Design/methodology/approach
The methodology to accomplish the conceptual framework is based on a thorough literature review that consists of theoretical discussions and comparative studies of other authors’ works and thinking. It also involves summarizing and interpreting previous contributions subjectively and narratively, and extending that work. Based on this approach, this paper formulates four propositions about data visualization integration, data processing modes, task complexity and auditors’ judgment decision-making. The proposed framework is built from cognitive theory, addressing how auditors process data into useful information to make judgment decisions.
Findings
The proposed framework expects that the cognitive process of data visualization integration and intuitive data processing mode will improve auditors’ judgment decision-making. This paper also contends that task complexity may influence the cognitive process of data visualization integration and processing modes because of the voluminous nature of data and the complexity of business processes. Hence, it is also expected that the relationships between data visualization integration and audit judgment decision-making and between processing mode and audit judgment decision-making will be moderated by task complexity.
Research limitations/implications
There is a dearth of studies examining how big data and big data analytics affect auditors’ cognitive processes in making decisions. This paper will help researchers and auditors understand the behavioral consequences of data visualization integration and data processing mode on judgment decision-making, given a certain level of task complexity.
Originality/value
With the advent of big data and the evolution of innovative audit procedures, the constructed framework can be used as a theoretical foundation for future empirical studies concerning auditors’ judgment decision-making. It highlights the potential of big data to transform the nature and practice of accounting and auditing.
Abstract
Applies the analytic‐synthetic dichotomy of hemispheric functioning suggested by Levy‐Agresti and Sperry to explain Miller’s chunking theory. Constructs a theory of cognition based on cerebral functions discovered through hemispheric differences. Shows that all of Efron’s arguments against the hemispheric paradigm are merely “puzzles” that can be solved within this paradigm. New findings of Efron and Yund were, in fact, predicted by a component of this theory.
Abstract
Purpose
This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.
Design/methodology/approach
In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge thus becomes a great asset in companies’ hands, and exploiting it is precisely the objective of data mining. But with data and knowledge now produced in large amounts at a faster pace, one speaks of Big Data mining. For this reason, the proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is how machine learning algorithms can work in a distributed and parallel way at the same time without losing the accuracy of the classification results. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML).

The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm, which in turn depends on a random sampling technique. This architecture is specially directed at big data processing and operates coherently and efficiently with the sampling strategy proposed in this work; it also allows the authors to verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2). The experimental results show the efficiency of the proposed solution, with no significant loss in the classification results. In practical terms, the DDPML system is dedicated to big data mining and works effectively in distributed systems with a simple structure, such as client-server networks.
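As a rough illustration of the two-level stratified sampling described above, here is a minimal Python sketch assuming scikit-learn; the function names, split fractions and the way the bases are merged into the RLB are assumptions for illustration, not the authors’ code.

```python
# Sketch: build a representative learning base (RLB) from a shared learning
# base (SLB) plus a second-level partial base (PLBL2), each drawn by
# class-stratified random sampling. Fractions are invented; overlap between
# the bases is possible in this naive version.
import numpy as np
from sklearn.model_selection import train_test_split

def stratified_subset(X, y, fraction, seed):
    """Draw a class-stratified random subset of (X, y)."""
    X_sub, _, y_sub, _ = train_test_split(
        X, y, train_size=fraction, stratify=y, random_state=seed)
    return X_sub, y_sub

def build_rlb(X, y):
    X_slb, y_slb = stratified_subset(X, y, 0.10, seed=0)          # shared base (SLB)
    X_pl1, y_pl1 = stratified_subset(X, y, 0.30, seed=1)          # level-1 partial base (PLBL1)
    X_pl2, y_pl2 = stratified_subset(X_pl1, y_pl1, 0.50, seed=2)  # level-2 partial base (PLBL2)
    return np.vstack([X_slb, X_pl2]), np.concatenate([y_slb, y_pl2])
```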
Findings
The authors obtained very satisfactory classification results.
Originality/value
The DDPML system is specially designed to handle big data mining classification smoothly.
Zhen Xijin, Wu Dianliang, Fan Xiumin and Hu Yong
Abstract
Purpose
Automobile development needs more and more collaborative work involving geographically dispersed designers, which complicates model verification, concept review and assembly process evaluation, so a network-based collaborative virtual environment for automobile design is required. In this kind of environment, designers can perform interactive assembly operations collaboratively, such as grasping, moving and releasing parts, collision detection (CD) and assembly evaluation report generation. Furthermore, as automobile structures become more complicated, processing such large data sets effectively in a real-time interactive virtual environment is a great challenge. The purpose of this paper is to address this challenge.
Design/methodology/approach
A distributed parallel virtual assembly environment (DPVAE) is developed. In this environment, a mechanism of event synchronization based on high-level architecture/run-time infrastructure (HLA/RTI) is applied to realize multi-user collaborative interactive operation. To meet the demand for real-time processing of large data sets, a parallel processing approach is developed, supported either by a single supercomputer or by a parallel processing environment composed of common personal computers in a high-speed local area network. Technologies such as real-time CD and multiple interactive operation modes are applied in DPVAE, and several auxiliary tools are provided to help achieve whole-scheme review, component model verification and assembly evaluation.
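For a concrete, if generic, picture of the parallel processing idea, the sketch below farms pairwise collision checks out to worker processes in Python; the axis-aligned bounding-box test and the process-pool task split are illustrative assumptions, not the DPVAE implementation.

```python
# Sketch: distribute pairwise axis-aligned bounding-box (AABB) overlap tests
# across a pool of worker processes, in the spirit of parallel collision
# detection on common PCs in a LAN.
from itertools import combinations
from multiprocessing import Pool

def aabb_overlap(pair):
    """True if two boxes, each given as (min_xyz, max_xyz), overlap."""
    (amin, amax), (bmin, bmax) = pair
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def parallel_collisions(boxes, workers=4):
    pairs = list(combinations(boxes, 2))
    with Pool(workers) as pool:
        flags = pool.map(aabb_overlap, pairs)
    return [p for p, hit in zip(pairs, flags) if hit]

if __name__ == "__main__":
    boxes = [((0, 0, 0), (1, 1, 1)),
             ((0.5, 0.5, 0.5), (2, 2, 2)),
             ((5, 5, 5), (6, 6, 6))]
    print(parallel_collisions(boxes, workers=2))  # first two boxes collide
```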
Findings
This paper finds that the DPVAE system is a practical and efficient tool for supporting collaborative automobile assembly design.
Practical implications
Designers can discuss and verify the assembly scheme to realize the preliminary design scenario in DPVAE, so it is useful for reducing costs, improving quality and shortening time to market, especially for developing new automobile models.
Originality/value
A combination of distributed technology and parallel computing technology is applied to product virtual assembly, successfully solving the problems of multi-user collaborative work and real-time processing of large data sets, and thereby providing a useful tool for automobile development.
Laouni Djafri, Djamel Amar Bensaber and Reda Adjoudj
Abstract
Purpose
This paper aims to solve the problems of big data analytics for prediction, including volume, veracity and velocity, by improving the prediction result to an acceptable level and in the shortest possible time.
Design/methodology/approach
This paper is divided into two parts. The first aims to improve the prediction result. Here, two ideas are proposed: a double-pruning enhanced random forest algorithm, and extraction of a shared learning base using the stratified random sampling method to obtain a learning base representative of all the original data. The second part proposes a distributed architecture supported by new technology solutions, which in turn works coherently and efficiently with the sampling strategy under the supervision of the Map-Reduce algorithm.
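The abstract does not spell out the double-pruning algorithm, so the sketch below shows only one plausible, simpler form of forest pruning in scikit-learn: train a forest, score each tree on a held-out split and keep the strongest trees. Treat every detail here as an assumption rather than the authors’ method.

```python
# Sketch: single-level ensemble pruning of a random forest by held-out
# accuracy. Replacing estimators_ in place is a pragmatic hack that the
# scikit-learn predict path tolerates; used here only for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def pruned_forest(X, y, n_trees=200, keep=100):
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    forest.fit(X_tr, y_tr)
    # Rank individual trees by validation accuracy, keep the best `keep`.
    ranked = sorted(forest.estimators_,
                    key=lambda t: t.score(X_val, y_val), reverse=True)
    forest.estimators_ = ranked[:keep]
    return forest
```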
Findings
The representative learning base, obtained by integrating two learning bases, the partial base and the shared base, presents an excellent representation of the original data set and gives very good Big Data predictive analytics results. These results were further supported by the improved random forest supervised learning method, which played a key role in this context.
Originality/value
All companies are concerned, especially those that hold large amounts of information and want to screen it to improve their knowledge of the customer and optimize their campaigns.
Abstract
We have followed recent developments in computer hardware for library and information uses in these pages. Readers have likely noticed that the emphasis has been on equipment for micro/personal computers. That will continue to be the focus here.
Elan Sasson, Gilad Ravid and Nava Pliskin
Abstract
Purpose
Although acknowledged as a principal dimension in the context of text mining, time has yet to be formally incorporated into the process of visually representing the relationships between keywords in a knowledge domain. This paper aims to develop and validate the feasibility of adding temporal knowledge to a concept map via pair-wise temporal analysis (PTA).
Design/methodology/approach
The paper presents a temporal trend detection algorithm, based on a vector space model, designed to use objective, quantitative pair-wise temporal operators to automatically detect co-occurring hot concepts. This PTA approach is demonstrated and validated, without loss of generality, for a spectrum of information technologies.
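To make the idea tangible, here is a minimal sketch of pair-wise temporal analysis over keyword time series; the Pearson-correlation operator and the 0.8 threshold are assumptions for illustration and may differ from the paper’s PTA operators.

```python
# Sketch: flag keyword pairs whose yearly occurrence series move together
# as candidate co-occurring "hot" concepts.
from itertools import combinations
import numpy as np

def hot_pairs(counts, threshold=0.8):
    """counts maps keyword -> equal-length list of yearly occurrence counts."""
    pairs = []
    for a, b in combinations(counts, 2):
        r = np.corrcoef(counts[a], counts[b])[0, 1]
        if r >= threshold:
            pairs.append((a, b, round(float(r), 2)))
    return pairs

series = {"cloud": [2, 5, 11, 19],
          "virtualization": [1, 4, 9, 16],
          "edi": [9, 7, 4, 2]}
print(hot_pairs(series))  # [('cloud', 'virtualization', 1.0)]
```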
Findings
The rigorous validation study shows that the resulting temporal assessments are highly correlated with subjective assessments of experts (n = 136), exhibiting substantial reliability-of-agreement measures and average predictive validity above 85 per cent.
Practical implications
This work uses massive amounts of textual documents available on the Web to first generate a concept map and then add temporal knowledge; its contribution is emphasized and magnified by the current growing attention to big data analytics.
Originality/value
This paper proposes a novel knowledge discovery method to improve a text-based concept map (i.e. semantic graph) via detection and representation of temporal relationships. The originality and value of the proposed method is highlighted in comparison to other knowledge discovery methods.
Abstract
It has been assumed that electronic computers and telematics are part of an electronic revolution which obsolesces many of the perceptual, psychic and social cultural effects of phonetic literacy and typography, particularly as they relate to the lifestyles and production techniques we associate with the seventeenth‐century new science and the mechanical Industrial Revolution. Shows that the digital computer, “the ultimate assembly line”, and its various effects represent a vast extension and amplification of centuries‐old trends and, indeed, seem to present us with habits and attitudes at odds with those induced by older electronic media such as radio and television. Among the results of digital technologies are business‐as‐usual 24 hours a day and the societal breakdown that occurs as a result of continuing acceleration and the splitting apart of human functions and the human psyche.
Yingying Ding, Xi Xi and Yao He
Abstract
Purpose
Time analysis, institution analysis and journal analysis allow the study to show the literature distribution in this research area. Research hotspots and trends across different periods are revealed by network-structural and network-temporal properties. The study aims to shed light on international cluster research progress on strategic niche management (SNM).
Design/methodology/approach
Using literature data on SNM from 1991 to 2018 retrieved from the Web of Science (WOS) database, the article maps the citation network and conducts a bibliometric citation analysis.
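As a small illustration of this kind of citation mapping, the sketch below builds a directed citation network from invented WOS-style records with networkx; the record format is an assumption.

```python
# Sketch: citing paper -> cited paper edges; in-degree then equals citation
# count, and PageRank serves as a rough influence proxy.
import networkx as nx

records = [{"id": "A", "cites": []},
           {"id": "B", "cites": ["A"]},
           {"id": "C", "cites": ["A", "B"]}]

G = nx.DiGraph()
for rec in records:
    G.add_node(rec["id"])
    G.add_edges_from((rec["id"], ref) for ref in rec["cites"])

print(sorted(G.in_degree(), key=lambda kv: -kv[1]))  # [('A', 2), ('B', 1), ('C', 0)]
print(nx.pagerank(G))
```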
Findings
The eight identified research streams reveal the development of SNM from theoretical description to target-oriented study and, finally, diversification analysis.
Originality/value
The paper identifies eight continuous research streams in SNM: sustainable transition, dynamical diversity, complexity, social-technical system, social innovation, social-cognitive evolution, emerging market and policy mix.
Burcu Adivar, Tarik Atan, Bengü Sevil Oflaç and Tuğba Örten
Abstract
Purpose
The purpose of this study is to introduce the concept of social welfare chain and address the challenges in decision making through the development of an optimal planning model for a nongovernmental organization (NGO). The distinctive properties of the social welfare chain and its relationship with the humanitarian relief chain in the context of supply chain management are also discussed. The paper presents a real decision problem and analyzes the managerial impacts of the proposed solution.
Design/methodology/approach
The study of social welfare policy and the review of the humanitarian literature have necessitated the introduction of the social welfare chain. Based on its definition, an optimal facility location and distribution model is developed that consolidates otherwise non‐integrated logistics functions under a cost-minimizing approach. The General Algebraic Modeling System (GAMS) is used to optimize the coal distribution model of an NGO. Data are obtained from an NGO that aims to help vulnerable people by distributing coal and basic food such as rice and sugar.
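For readers who want a concrete picture of such a model, here is a toy cost-minimizing facility location and distribution program written with PuLP; the paper’s GAMS model is richer (periodic aid, temporary centers, intermodal legs), and every site, cost and demand below is invented.

```python
# Sketch: open distribution centers and route shipments at minimum total
# cost, subject to demand satisfaction and capacity at opened centers.
import pulp

sites = ["DC1", "DC2"]                      # candidate distribution centers
towns = ["T1", "T2", "T3"]                  # demand points
open_cost = {"DC1": 100, "DC2": 80}
ship_cost = {("DC1", "T1"): 4, ("DC1", "T2"): 6, ("DC1", "T3"): 9,
             ("DC2", "T1"): 7, ("DC2", "T2"): 3, ("DC2", "T3"): 4}
demand = {"T1": 30, "T2": 20, "T3": 25}
capacity = {"DC1": 60, "DC2": 60}

m = pulp.LpProblem("welfare_chain", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("ship", ship_cost, lowBound=0)

m += (pulp.lpSum(open_cost[s] * y[s] for s in sites)
      + pulp.lpSum(ship_cost[a] * x[a] for a in ship_cost))
for t in towns:                              # meet every town's demand
    m += pulp.lpSum(x[(s, t)] for s in sites) == demand[t]
for s in sites:                              # ship only from opened centers
    m += pulp.lpSum(x[(s, t)] for t in towns) <= capacity[s] * y[s]

m.solve()
print(pulp.value(m.objective), {s: y[s].value() for s in sites})
```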
Findings
Besides laying the foundations of the social welfare chain, the major finding of the research is an analytical tool for the decision support systems of NGOs. Despite the increased number of stages in the proposed network configuration, the optimal solution yielded significant cost reduction and distribution efficiency because temporary distribution center locations were available at no extra cost. Furthermore, this study brings out the advantages of using intermodal transportation in the distribution process of cost‐sensitive networks.
Practical implications
This paper provides a detailed analysis that contributes to the efficiency and the effectiveness of social welfare chains. Moreover, it represents a cooperation established between a university and NGOs.
Originality/value
The planning efforts of nongovernmental organizations targeting periodic aid to improve social welfare have received little attention in the literature. This paper is the first to propose the concept of the “social welfare chain” while also addressing distribution planning for the NGO.