Search results

1–10 of over 89,000
Article
Publication date: 30 July 2018

Mauro Romanelli

Abstract

Purpose

The aim of this study is to provide a conceptual framework that explains how museums sustain intellectual capital and promote value co-creation as they move from designing virtual environments to introducing and managing Big Data.

Design/methodology/approach

This study is based on archival and qualitative data considering the literature related to the introduction of virtual environments and Big Data within museums.

Findings

Museums contribute to sustaining intellectual capital and promoting value co-creation by developing a Big Data-driven strategy and innovation.

Practical implications

By introducing and managing Big Data, museums contribute to creating a community and improving knowledge within cultural ecosystems, while strengthening the role of users as active participants and of museum professionals as user-centred mediators.

Originality/value

As audience-driven and knowledge-oriented organisations moving from designing virtual environments to following a Big Data-driven strategy, museums should make organisational and strategic choices that drive change.

Details

Meditari Accountancy Research, vol. 26 no. 3
Type: Research Article
ISSN: 2049-372X

Article
Publication date: 19 October 2015

Kun Chen, Xin Li and Huaiqing Wang

Abstract

Purpose

Although big data analytics has reaped great business rewards, big data system design and integration still face challenges resulting from the demanding environment, including variety, uncertainty and complexity. These characteristics of big data systems demand flexible and agile integration architectures. Furthermore, a formal model is needed to support design and verification. The purpose of this paper is to resolve these two problems with a collective intelligence (CI) model.

Design/methodology/approach

In the conceptual CI framework proposed by Schut (2010), a CI design should comprise a general model, which has a formal form for verification and validation, and a specific model, which is an implementable system architecture. After analyzing the requirements of system integration in big data environments, the authors apply the CI framework to the integration problem. In the model instantiation, the authors use the multi-agent paradigm as the specific model and a hierarchical colored Petri Net (PN) as the general model.
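
To make the two-model idea concrete, the following minimal Python sketch shows the flavour of a Petri-net "general model": places hold data-carrying (coloured) tokens, and a transition fires only when all of its input places are marked. The toy ingest/analyze workflow, the class names and the token contents are illustrative assumptions, not the paper's actual model.

```python
# Minimal Petri-net sketch: places hold data-carrying (coloured) tokens,
# and a transition fires only when every input place offers a token.
# The ingest -> analyze workflow below is illustrative, not the paper's model.
from collections import defaultdict

class PetriNet:
    def __init__(self):
        self.marking = defaultdict(list)   # place name -> list of tokens
        self.transitions = {}              # name -> (inputs, outputs, action)

    def add_tokens(self, place, *tokens):
        self.marking[place].extend(tokens)

    def add_transition(self, name, inputs, outputs, action=lambda toks: toks):
        self.transitions[name] = (inputs, outputs, action)

    def enabled(self, name):
        inputs, _, _ = self.transitions[name]
        return all(self.marking[p] for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs, action = self.transitions[name]
        consumed = [self.marking[p].pop(0) for p in inputs]  # one token per input
        for p in outputs:
            self.marking[p].append(action(consumed))         # produce output token

net = PetriNet()
net.add_tokens("raw_data", {"source": "sensor", "rows": 10_000})
net.add_transition("ingest", ["raw_data"], ["staged"])
net.add_transition("analyze", ["staged"], ["results"],
                   action=lambda toks: {"summary": f"{toks[0][0]['rows']} rows"})
net.fire("ingest")
net.fire("analyze")
print(dict(net.marking))   # the 'results' place now holds the summary token
```

A formal net like this can be exhaustively simulated before any agent code is written, which is the verification role the general model plays in the CI framework.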

Findings

First, the multi-agent paradigm supports reuse and integration of big data analytics modules in an agile and loosely coupled way. Second, the PN models provide effective simulation results during system design, informing business process design and workload balance control. Third, the CI framework offers an incremental build-and-deploy method for system integration, which is especially suitable for dynamic data analytics environments. These findings have both theoretical and managerial implications.

Originality/value

In this paper, the authors propose a CI framework, which includes both practical architectures and theoretical foundations, to solve the system integration problem in a big data environment. It provides a new point of view for dynamically integrating large-scale modules in an organization. This paper also offers practical suggestions for Chief Technical Officers who want to employ big data technologies in their companies.

Details

Industrial Management & Data Systems, vol. 115 no. 9
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 13 July 2022

Trevor Cadden, Ronan McIvor, Guangming Cao, Raymond Treacy, Ying Yang, Manjul Gupta and George Onofrei

Abstract

Purpose

Increasingly, studies are reporting supply chain analytical capabilities as a key enabler of supply chain agility (SCAG) and supply chain performance (SCP). This study investigates the impact of environmental dynamism and competitive pressures in a supply chain analytics setting, and how intangible supply chain analytical capabilities (ISCAC) moderate the relationship between big data characteristics (BDCs) and SCAG in support of enhanced SCP.

Design/methodology/approach

The study draws on the literature on big data, supply chain analytical capabilities and dynamic capability theory to empirically develop and test a supply chain analytical capabilities model in support of SCAG and SCP. ISCAC was the moderating construct and was tested using two sub-dimensions: supply chain organisational learning and supply chain data-driven culture.

Findings

The results show that whilst environmental dynamism has a significant relationship with all three key BDCs, only the volume and velocity dimensions are significant in relation to competitive pressures. Furthermore, only the velocity element of BDCs has a significant positive impact on SCAG. In terms of moderation, the supply chain organisational learning dimension of ISCAC was shown to moderate only the effect of the velocity aspect of BDCs on SCAG, whereas the supply chain data-driven culture dimension of ISCAC was shown to moderate only the effect of the variety aspect of BDCs on SCAG. SCAG had a significant impact on SCP.

Originality/value

This study adds to the existing knowledge in the supply chain analytical capabilities domain by presenting a nuanced moderation model that includes external factors (environmental dynamism and competitive pressures), their relationships with BDCs, and how ISCAC (namely, supply chain organisational learning and supply chain data-driven culture) moderates and strengthens aspects of BDCs in support of SCAG and enhanced SCP.

Details

International Journal of Operations & Production Management, vol. 42 no. 9
Type: Research Article
ISSN: 0144-3577

Article
Publication date: 26 January 2021

Adli Hamdam, Ruzita Jusoh, Yazkhiruni Yahya, Azlina Abdul Jalil and Nor Hafizah Zainal Abidin

Abstract

Purpose

The role of big data and data analytics in the audit engagement process is evident. Notwithstanding, understanding of how big data influences cognitive processes and, consequently, auditors' judgment and decision-making is limited. The purpose of this paper is to present a conceptual framework on the cognitive processes that may influence auditors' judgment decision-making in the big data environment. The proposed framework predicts the relationships among data visualization integration, data processing modes, task complexity and auditors' judgment decision-making.

Design/methodology/approach

The methodology used to develop the conceptual framework is a thorough literature review consisting of theoretical discussion and comparative study of other authors' works and thinking. It also involves summarizing and interpreting previous contributions subjectively and narratively, and extending that work. Based on this approach, this paper formulates four propositions about data visualization integration, data processing modes, task complexity and auditors' judgment decision-making. The proposed framework was built from cognitive theory, addressing how auditors process data into useful information when making judgment decisions.

Findings

The proposed framework expects that the cognitive process of data visualization integration and intuitive data processing mode will improve auditors’ judgment decision-making. This paper also contends that task complexity may influence the cognitive process of data visualization integration and processing modes because of the voluminous nature of data and the complexity of business processes. Hence, it is also expected that the relationships between data visualization integration and audit judgment decision-making and between processing mode and audit judgment decision-making will be moderated by task complexity.

Research limitations/implications

There is a dearth of studies examining how big data and big data analytics affect auditors' cognitive processes in making decisions. This paper will help researchers and auditors understand the behavioral consequences of data visualization integration and data processing mode in making judgment decisions, given a certain level of task complexity.

Originality/value

With the advent of big data and the evolution of innovative audit procedures, the constructed framework can be used as a theoretical foundation for future empirical studies concerning auditors’ judgment decision-making. It highlights the potential of big data to transform the nature and practice of accounting and auditing.

Details

Accounting Research Journal, vol. 35 no. 1
Type: Research Article
ISSN: 1030-9616

Article
Publication date: 2 November 2015

De-gan Zhang, Xiao-dong Song, Xiang Wang, Ke Li, Wen-bin Li and Zhen Ma

Abstract

Purpose

Mobile services for Big Data can be supported by the fused technologies of computing, communication and digital multimedia. The purpose of this paper is to propose a new agent-based proactive migration method and system for the Big Data Environment (BDE).

Design/methodology/approach

First, the authors designed a new relative fusion method for decision-making based on a fuzzy-neural network, which improves the fusion belief degree. Then the authors proposed an agent-based proactive migration method with a service discovery and key-frame selection strategy. Finally, the authors designed an application system that supports proactive seamless migration for big data. The method's innovation is that a mobile big data service task can dynamically follow its user from one device to another.
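
As a rough illustration of the migration idea described above, the Python sketch below lets a service task's state follow its user from one device to another via a snapshot-and-restore step. The device names, the JSON snapshot format and the playback-offset example are illustrative assumptions; the paper's fuzzy-neural decision logic and service discovery are not modelled here.

```python
# Sketch of proactive task migration: a service task's state follows its user
# from one device to another. The snapshot format and trigger are assumptions.
import json

class ServiceTask:
    def __init__(self, task_id, position=0):
        self.task_id = task_id
        self.position = position            # e.g. playback offset in seconds

    def snapshot(self):
        return json.dumps({"task_id": self.task_id, "position": self.position})

    @classmethod
    def restore(cls, blob):
        state = json.loads(blob)
        return cls(state["task_id"], state["position"])

class Device:
    def __init__(self, name):
        self.name, self.task = name, None

def migrate(source, target):
    """Freeze the task on the source device and resume it on the target."""
    blob = source.task.snapshot()           # serialize state on the source
    source.task = None
    target.task = ServiceTask.restore(blob) # resume on the target device

phone, tablet = Device("phone"), Device("tablet")
phone.task = ServiceTask("video-42", position=135)
migrate(phone, tablet)                      # user walks from phone to tablet
print(tablet.task.position)                 # playback resumes at 135
```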

Findings

The proposed agent-based proactive migration method, with its service discovery and key-frame selection strategy, allows a mobile big data service task to dynamically follow its user from one device to another. The designed system is convenient to use during mobility and is helpful for mobile users in the BDE.

Originality/value

The authors have clarified and realized how to transfer service tasks across different distances in the Big Data Environment (BDE). The authors have given a formal description and classification of the mobile service task, which is independent of the realization mechanism. In the designed and developed application system, the new idea adopts fuzzy-neural control theory to make decisions for the task-oriented proactive seamless migration application.

Details

Engineering Computations, vol. 32 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 2016

Bao-Rong Chang, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo and Chi-Chung Chen

Abstract

Purpose

The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in a big data environment.

Design/methodology/approach

First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users access a single interface, and the proposed optimizer automatically selects the best-performing big data warehouse platform. Finally, the distributed memory storage system Memcached, incorporated with the distributed file system Apache HDFS, is employed to cache query results. Therefore, if users issue the same SQL command, the result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking longer to retrieve.
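
The caching step can be pictured with the short Python sketch below: the SQL text is hashed into a cache key, a hit returns immediately, and only a miss reaches the warehouse. A plain dict stands in for Memcached here, and run_on_warehouse() is a hypothetical placeholder for the Hive/Impala/Shark back end, not the authors' code.

```python
# Sketch of result caching for repeated SQL queries. A dict stands in for
# Memcached; run_on_warehouse() is a hypothetical warehouse back end.
import hashlib

cache = {}

def run_on_warehouse(sql):
    # Placeholder for dispatching the query to Hive, Impala or Shark.
    return [("fake_row", sql)]

def query(sql):
    key = hashlib.sha256(sql.encode()).hexdigest()  # cache key = hash of SQL
    if key in cache:
        return cache[key]           # hit: skip the warehouse entirely
    result = run_on_warehouse(sql)  # miss: run the query once...
    cache[key] = result             # ...and keep the result for next time
    return result

query("SELECT region, SUM(amount) FROM sales GROUP BY region")  # warehouse
query("SELECT region, SUM(amount) FROM sales GROUP BY region")  # cache hit
```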

Findings

As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeated SQL commands under multi-user mode.

Research limitations/implications

Currently, Shark’s latest stable version 0.9.1 does not support the latest versions of Spark and Hive. In addition, this series of software only supports Oracle JDK7. Using Oracle JDK8 or Open JDK will cause serious errors, and some software will be unable to run.

Practical implications

The problem with this system is that some blocks are missing when too many blocks are stored in one result (about 100,000 records). Another problem is that sequential writing into the in-memory cache wastes time.

Originality/value

When the remaining memory capacity is 2 GB or less on each server, Impala and Shark will have a lot of page swapping, causing extremely low performance. When the data scale is larger, it may cause the JVM I/O exception and make the program crash. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala. Impala’s consumption of memory resources is between those of Shark and Hive. This amount of remaining memory is sufficient for Impala’s maximum performance. In this study, each server allocates 20 GB of memory for cluster computing and sets the amount of remaining memory as Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB) as the critical points. The program automatically selects Hive when memory is less than 15 percent, Impala at 15 to 75 percent and Shark at more than 75 percent.
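
Read literally, the selection rule quoted above reduces to a simple threshold check, sketched below in Python. The thresholds and the 20 GB per-server allocation come from the passage; the function itself is an illustrative reconstruction, not the authors' optimizer, which presumably weighs more signals.

```python
# Illustrative reconstruction of the engine-selection rule quoted above:
# Hive below 15% free memory, Impala from 15% to 75%, Shark above 75%.
def select_engine(free_gb, total_gb=20.0):
    free_pct = 100.0 * free_gb / total_gb
    if free_pct < 15:
        return "Hive"      # memory-tight: avoid Impala/Shark page swapping
    if free_pct <= 75:
        return "Impala"    # mid-range memory: Impala's sweet spot
    return "Shark"         # ample memory: Shark is fastest here

print(select_engine(1.0))    # Hive   (5% free)
print(select_engine(5.0))    # Impala (25% free)
print(select_engine(18.0))   # Shark  (90% free)
```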

Article
Publication date: 1 July 2024

Hashim Zameer, Ying Wang and Humaira Yasmeen

Abstract

Purpose

Big data capabilities have the potential to completely transform conventional methods of doing business. Nevertheless, the role of big data capabilities in fostering green marketing capabilities and improving green competitive advantage is still not fully understood. To add new knowledge, this paper proposes and empirically tests a moderated mediation model for strengthening green competitive advantage in a big data environment. The model introduces both the mediating role of green marketing capabilities and the moderating role of big data capabilities.

Design/methodology/approach

This study adopts a survey-based methodology. Data were collected from 337 managers and analyzed using structural equation modeling to test the theoretical moderated mediation model.
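
For readers unfamiliar with moderated mediation, the Python sketch below shows one common way such a model is checked: estimate the moderated a-path and the b-path, multiply them into an index of moderated mediation, and bootstrap a confidence interval. The variable names, the simulated data and the plain regression paths are illustrative assumptions standing in for the authors' full structural equation model.

```python
# Bootstrap sketch of moderated mediation: organizational learning (X) acts on
# green competitive advantage (Y) through green marketing capabilities (M),
# with big data capabilities (W) moderating the X -> M path. Simulated data
# and OLS paths stand in for the authors' structural equation model.
import numpy as np

rng = np.random.default_rng(0)
n = 337                                           # sample size from the study
X = rng.normal(size=n)                            # organizational learning
W = rng.normal(size=n)                            # big data capabilities
M = 0.5 * X + 0.3 * X * W + rng.normal(size=n)    # green marketing capabilities
Y = 0.6 * M + 0.1 * X + rng.normal(size=n)        # green competitive advantage

def mod_med_index(X, W, M, Y):
    # a-path with interaction: M ~ 1 + X + W + X*W  (a3 = moderation effect)
    A = np.column_stack([np.ones_like(X), X, W, X * W])
    a3 = np.linalg.lstsq(A, M, rcond=None)[0][3]
    # b-path: Y ~ 1 + M + X  (b = effect of M on Y, controlling for X)
    B = np.column_stack([np.ones_like(X), M, X])
    b = np.linalg.lstsq(B, Y, rcond=None)[0][1]
    return a3 * b          # change in the indirect effect per unit of W

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                     # resample cases
    boot.append(mod_med_index(X[i], W[i], M[i], Y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the moderated-mediation index: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero would indicate that the indirect effect genuinely shifts with the moderator, which is the pattern the study reports.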

Findings

The findings revealed that organizational learning improves green marketing capabilities, whereas the relationship between organizational learning and green competitive advantage is insignificant. The mediating role of green marketing capabilities in the relationship between organizational learning and green competitive advantage was statistically significant, indicating that green marketing capabilities serve as a bridge between organizational learning and green competitive advantage. Big data capabilities moderate the relationship between organizational learning and green marketing capabilities. The moderated mediation was also significant, highlighting that big data capabilities further strengthen the indirect effects of organizational learning on green competitive advantage via green marketing capabilities.

Originality/value

This paper delivers theoretical and practical understanding of the importance of organizational learning and big data capabilities. It also extends current knowledge and provides key insights for managerial decision-making.

Details

Business Process Management Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 8 August 2016

H. Frank Cervone

Abstract

Purpose

Organizations are beginning to realize the potential benefits of big data and harnessing all of the data they are creating. However, a major impediment for many organizations is understanding where to start in big data and analytics implementation. In many respects, starting a successful implementation is not much different from any other project managed within the organization. The major stumbling block is knowing what questions to ask to get things going. This paper aims to help libraries and information organizations that are considering big data and analytics implementation to begin their journey by following a checklist of eight aspects to be considered in the development of a big data and analytics strategy.

Design/methodology/approach

The eight aspects to consider in big data and analytics implementation were developed using a combination of existing project management common knowledge, consultant recommendations and real-life experiences.

Findings

Organizations considering big data and analytics implementation need to explore aspects related to the data they have, what organizational problems they are trying to solve, how data governance will work in the new environment, as well as how they will define success in terms of their implementation. These are in addition to the technical issues one would normally expect in a systems implementation.

Originality/value

While there have been many articles written about the implementation of big data and analytics in organizations, most of these focus on technical issues rather than managerial and organizational concerns. In addition, none of these other articles have been from the perspective of library and information science. In this article, the focus is specifically on how information professionals may approach this problem.

Article
Publication date: 1 February 2024

Hakeem A. Owolabi, Azeez A. Oyedele, Lukumon Oyedele, Hafiz Alaka, Oladimeji Olawale, Oluseyi Aju, Lukman Akanbi and Sikiru Ganiyu

Abstract

Purpose

Despite an enormous body of literature on conflict management and intra-group conflicts vis-à-vis team performance, there is currently no study investigating a conflict prevention approach to handling innovation-induced conflicts that may hinder the smooth implementation of big data technology in project teams.

Design/methodology/approach

This study uses constructs from conflict theory and team power relations to develop an explanatory framework. The study then formulates theoretical hypotheses covering task conflict, process conflict, relationship conflict and team power conflict. The hypotheses were tested using Partial Least Squares Structural Equation Modelling (PLS-SEM) to identify key preventive measures that can encourage conflict prevention in project teams when implementing big data technology.

Findings

Results from the structural model validated six out of seven theoretical hypotheses and identified relationship conflict prevention as the most important factor for promoting smooth implementation of Big Data Analytics technology in project teams, followed by power conflict prevention, prevention of task disputes and prevention of process conflicts, respectively. Results also show that relationship and power conflicts interact, as do task and relationship conflict prevention, suggesting that preventing one of these conflicts could minimise the outbreak of the other.

Research limitations/implications

The study has been conducted within the context of big data adoption in a project-based work environment and the need to prevent innovation-induced conflicts in teams. Similarly, the research participants examined are stakeholders within UK project-based organisations.

Practical implications

The study urges organisations wishing to embrace big data innovation to evolve a multipronged approach for facilitating smooth implementation through the prevention of conflicts among frontline project staff. It urges organisations to anticipate both subtle and overt frictions that can undermine relationships and team dynamics, impair effective task performance, derail processes and create unhealthy rivalry that undermines cooperation and collaboration in the team.

Social implications

The study also addresses the uncertainty and disruption that big data technology presents to employees in teams and explores conflict prevention measures that can mitigate these effects in project teams.

Originality/value

The study proposes a structural model for establishing conflict prevention strategies in project teams through a multidimensional framework that combines constructs such as team power conflict, process conflict, relationship conflict and task conflict to encourage Big Data implementation.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 24 October 2022

Lixu Li, Yeming Gong, Zhiqiang Wang and Shan Liu

Abstract

Purpose

Although big data may enhance the visibility, transparency and responsiveness of supply chains, whether it is effective for improving supply chain performance in a turbulent environment, especially in mitigating the impact of COVID-19, is unclear. The research question the authors address is: how do logistics firms improve supply chain performance during COVID-19 through big data and supply chain integration (SCI)?

Design/methodology/approach

The authors used a mixed-method approach with four rounds of data collection. A three-round survey of 323 logistics firms in 26 countries in Europe, America, and Asia was first conducted. The authors then conducted in-depth interviews with 55 logistics firms.

Findings

In the first, quantitative study, the authors find mediational mechanisms through which big data analytics technology capability (BDATC) and SCI influence supply chain performance. In particular, BDATC and SCI are two second-order capabilities that help firms develop three first-order capabilities (i.e. proactive capabilities, reactive capabilities and resource reconfiguration) and eventually lead to the innovation capability and disaster immunity that allowed firms to survive COVID-19 and improve supply chain performance. The results of the follow-up qualitative analysis not only confirm the inferences from the quantitative analysis but also provide complementary insights into organizational culture and the institutional environment.

Originality/value

The authors contribute to supply chain risk management by developing a three-level hierarchy of capabilities framework and identifying a mechanism that links big data and big disasters. The authors also provide managerial implications for logistics firms to address the new management challenges posed by COVID-19.

Details

International Journal of Operations & Production Management, vol. 43 no. 2
Type: Research Article
ISSN: 0144-3577
