Search results

1 – 10 of over 2000
Article
Publication date: 7 October 2014

Rieke Bärenfänger, Boris Otto and Hubert Österle

Abstract

Purpose

The purpose of this paper is to assess the business value of in-memory computing (IMC) technology by analyzing its organizational impact in different application scenarios.

Design/methodology/approach

This research applies a multiple-case study methodology analyzing five cases of IMC application scenarios in five large European industrial and service-sector companies.

Findings

Results show that IMC can deliver business value in various applications ranging from advanced analytic insights to support of real-time processes. This enables higher-level organizational advantages like data-driven decision making, superior transparency of operations, and experience with Big Data technology. The findings are summarized in a business value generation model which captures the business benefits along with preceding enabling changes in the organizational environment.

Practical implications

Results aid managers in identifying different application scenarios where IMC technology may generate value for their organizations from business and IT management perspectives. The research also sheds light on the socio-technical factors that influence the likelihood of success or failure of IMC initiatives.

Originality/value

This research is among the first to model the business value creation process of in-memory technology based on insights from multiple implemented applications in different industries.

Details

Industrial Management & Data Systems, vol. 114 no. 9
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 5 June 2017

Alexander J. McLeod, Michael Bliemel and Nancy Jones

Abstract

Purpose

The purpose of this paper is to explore the demand for big data and analytics curriculum, provide an overview of the curriculum available from the SAP University Alliances program, examine the evolving usage of such curriculum, and suggest an academic research agenda for this topic.

Design/methodology/approach

In this work, the authors reviewed recent academic utilization of big data and analytics curriculum in a large faculty-driven university program by examining school hosting request logs over a four-year period. The authors analyzed curriculum usage to determine how changes in big data and analytics are being introduced to academia.

Findings

Results indicate that there is a substantial shift toward curriculum focusing on big data and analytics.

Research limitations/implications

Because this research only considered data from one proprietary software vendor, the scope of this project is limited and may not generalize to other university software support programs.

Practical implications

Faculty interested in creating or furthering their business process programs to include big data and analytics will find practical information, materials, suggestions, as well as a research and curriculum development agenda.

Originality/value

Faculty interested in creating or furthering their programs to include big data and analytics will find practical information, materials, suggestions, and a research and curriculum agenda.

Details

Business Process Management Journal, vol. 23 no. 3
Type: Research Article
ISSN: 1463-7154

Details

Digital Transformation Management for Agile Organizations: A Compass to Sail the Digital World
Type: Book
ISBN: 978-1-80043-171-3

Article
Publication date: 4 September 2009

Zinaida Manžuch

Abstract

Purpose

The purpose of this paper is to evaluate current approaches to assessing digitisation activities in memory institutions.

Design/methodology/approach

Qualitative and quantitative analyses of digitisation surveys were performed. The analysis concentrated on several themes: general methodological solutions, digitisation objectives, users and usage of digitised content, budgeting and costs of digitisation, and volume and growth of digitised collections.

Findings

The analysis revealed an absence of sound methodological solutions, issues in constructing samples, a split between strategic and resource-management approaches to digitisation, low visibility of user-related evaluation criteria, and problems in developing quantitative measures.

Research limitations/implications

Approaches to evaluating digitisation are not restricted to digitisation surveys; to provide a more comprehensive analysis, these should be complemented by other data (e.g. interviews with digitisation experts). The identification of surveys was also limited by subjective factors such as the knowledge of national experts, the visibility of reports on the web, and the language of publication.

Practical implications

The paper assists in the development of digitisation surveys by highlighting previous gaps and achievements.

Originality/value

The paper is a first attempt to comprehend approaches to monitoring digitisation internationally. Gaps and issues identified in the research can guide studies on developing indicators and measures for specific digitisation activities.

Details

Journal of Documentation, vol. 65 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 31 December 2006

Hooman Homayounfar and Fangju Wang

Abstract

XML is becoming one of the most important structures for data exchange on the web. Despite having many advantages, the XML structure imposes several major obstacles to large document processing. Inconsistency between the linear nature of current algorithms (e.g. for caching and prefetching) used in operating systems and databases and the non-linear structure of XML data makes XML processing more costly. In addition to verbosity (e.g. tag redundancy), interpreting (i.e. parsing) the depth-first (DF) structure of XML documents is a significant overhead for processing applications (e.g. query engines). Recent research on XML query processing has shown that sibling clustering can improve performance significantly. However, the existing clustering methods cannot avoid parsing overhead, as they are limited by larger document sizes. In this research, we have developed a better data organization for native XML databases, named the sibling-first (SF) format, that improves query performance significantly. SF uses an embedded index for fast access to child nodes. It also compresses documents by eliminating extra information from the original DF format. The converted SF documents can be processed for XPath query purposes without being parsed. We have implemented the SF storage in virtual memory as well as a format on disk. Experimental results with real data have shown that significantly higher performance can be achieved when XPath queries are conducted on very large SF documents.
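The abstract does not spell out the SF layout itself; the following is a minimal sketch of the idea, assuming a breadth-first flattening in which siblings are stored contiguously and each record embeds the index of its first child. The function and record names are illustrative, not the authors' format.

```python
# Hypothetical sketch of a sibling-first (SF) layout; the paper's actual
# on-disk format is not given in the abstract. Nodes are flattened
# breadth-first so siblings are stored contiguously, and each record
# embeds the index of its first child (a minimal "embedded index").
from collections import deque
import xml.etree.ElementTree as ET

def to_sibling_first(root):
    """Return records (tag, text, first_child_index, n_children) in BFS order."""
    order = []                       # nodes in sibling-first order
    queue = deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(node)           # enqueue children after all current siblings
    index = {id(n): i for i, n in enumerate(order)}
    records = []
    for node in order:
        children = list(node)
        first = index[id(children[0])] if children else -1
        records.append((node.tag, (node.text or "").strip(), first, len(children)))
    return records

doc = ET.fromstring("<a><b>1</b><b>2</b><c><d>3</d></c></a>")
for rec in to_sibling_first(doc):
    print(rec)
# ('a', '', 1, 3)  ('b', '1', -1, 0)  ('b', '2', -1, 0)  ('c', '', 4, 1)  ('d', '3', -1, 0)
```

Because siblings land next to each other, a child step such as /a/b scans one contiguous run of records rather than chasing pointers through a depth-first tree.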

Details

International Journal of Web Information Systems, vol. 2 no. 3/4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 22 December 2023

Vaclav Snasel, Tran Khanh Dang, Josef Kueng and Lingping Kong

Abstract

Purpose

This paper aims to review in-memory computing (IMC) for machine learning (ML) applications in terms of its history, architectures and optimization options. In this review, the authors investigate different architectural aspects and provide comparative evaluations.

Design/methodology/approach

The authors collected over 40 recent IMC papers related to hardware design and optimization techniques, then classified them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data sets it was applied to, how it is designed and what its design contributes.
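As a hedged illustration of the second category (optimization through reduced precision), the sketch below applies generic symmetric int8 quantization to a float32 weight matrix with a per-tensor scale; the scheme and names are textbook material, not taken from any specific paper in the review.

```python
# Minimal sketch of reduced-precision optimization: symmetric int8
# quantization with a single per-tensor scale factor.
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus a scale factor for dequantization."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```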

Findings

ML algorithms are potent tools accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the cost of supporting excessive flexibility. Hardware accelerators (field-programmable gate arrays and application-specific integrated circuits), on the other hand, win on energy efficiency, but an individual accelerator is often adapted exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers.

Originality/value

IMC optimization enables high-speed processing, increased performance and real-time analysis of massive data volumes. This work reviews IMC and its evolution, then categorizes three optimization paths for improving the performance metrics of IMC architectures.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 23 November 2010

Nils Hoeller, Christoph Reinke, Jana Neumann, Sven Groppe, Christian Werner and Volker Linnemann

Abstract

Purpose

In the last decade, XML has become the de facto standard for data exchange in the world wide web (WWW). The benefits of data exchangeability for supporting system and software heterogeneity at the application level, together with easy WWW integration, make XML an ideal data format for many other application and network scenarios such as wireless sensor networks (WSNs). Moreover, the usage of XML encourages standardized techniques like SOAP for adapting the service-oriented paradigm to sensor network engineering. Nevertheless, integrating XML into WSN data management is constrained by scarce hardware resources, which demand efficient XML data management strategies capable of bridging this resource gap. The purpose of this paper is to present two separate strategies for integrating XML data management in WSNs.

Design/methodology/approach

The paper presents two separate strategies for integrating XML data management in WSNs that have been implemented and are running on today's sensor node platforms. The paper shows how XML data can be processed and how XPath queries can be evaluated dynamically. In an extended evaluation, the memory and energy efficiency of both strategies are compared, and both solutions are shown to have application domains fully applicable to today's sensor node products.

Findings

This work shows that dynamic XML data management and query evaluation is possible on sensor nodes with strict limitations in terms of memory, processing power and energy supply.

Originality/value

The paper presents an optimized stream‐based XML compression technique and shows how XML queries can be evaluated on compressed XML bit streams using generic pushdown automata. To the best of the authors' knowledge, this is the first complete approach on integrating dynamic XML data management into WSNs.
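The authors' automaton construction is not reproduced in the abstract. As an illustrative sketch of the general idea, a fixed child-axis XPath can be evaluated over a stream of SAX events using only a depth counter and a matched-steps counter standing in for the pushdown stack (sufficient here because every step uses the child axis), so no DOM tree is ever materialized; the class and path names below are hypothetical.

```python
# Streaming evaluation of a fixed child-axis path, e.g. /library/book/title,
# over SAX events; no tree is built.
import xml.sax

class PathMatcher(xml.sax.ContentHandler):
    def __init__(self, steps):
        super().__init__()
        self.steps = steps
        self.depth = 0       # current element depth in the stream
        self.matched = 0     # how many leading path steps are matched
        self.capture = False
        self.buf, self.results = [], []

    def startElement(self, name, attrs):
        self.depth += 1
        if (self.matched == self.depth - 1 and self.matched < len(self.steps)
                and name == self.steps[self.matched]):
            self.matched += 1
            if self.matched == len(self.steps):   # full path matched
                self.capture, self.buf = True, []

    def characters(self, content):
        if self.capture:
            self.buf.append(content)

    def endElement(self, name):
        if self.capture and self.depth == len(self.steps):
            self.results.append("".join(self.buf))
            self.capture = False
        if self.matched == self.depth:            # leaving a matched element
            self.matched -= 1
        self.depth -= 1

doc = b"<library><book><title>A</title></book><book><title>B</title></book></library>"
handler = PathMatcher(["library", "book", "title"])
xml.sax.parseString(doc, handler)
print(handler.results)   # ['A', 'B']
```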

Details

International Journal of Web Information Systems, vol. 6 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 20 July 2021

Chern Li Liew

Abstract

Purpose

While memory institutions' use of social media has proliferated, research and scholarly literature on the risks resulting from social media use, on memory institutions' social media risk-aware culture and, in particular, on social media risk management remains scant. This study addresses this knowledge gap and identifies aspects of social media risk management from other sectors that could inform the cultural heritage sector.

Design/methodology/approach

This research involves a review of the scholarly and professional literature that contributes to social media risk management discourses, including works that discuss the different categories of social media risks, social media policies, risk-aware culture, and social media risk management strategies and processes. Works discussing social media risk management models and frameworks are also included in the review. Based on the insights gained from these reviews, a pillar framework to guide social media risk management in memory institutions is developed.

Findings

The proposed framework outlines the baseline components relevant to the cultural heritage sector and underlines the evolving and continual nature of these components. Elements particularly important to memory institutions are highlighted: notably, social risks must be recognised as a risk category, and the conventional apolitical stance still taken by many memory institutions needs to be reviewed. The paper also discusses the importance of memory institutions not being so risk-averse that they fail to take advantage of the affordances of social media platforms, thereby stifling potential innovations in services and in engagement with their users/audience.

Originality/value

This research offers an extensive review of the social media risk management literature, both scholarly and professional across different domains. The ensuing insights inform the development of a pillar framework to guide social media risk management in memory institutions. The framework outlines a baseline mapping of the governance, processes and systems components. The expectation is that this framework could be extended to account for contextual and situational requirements at more granular levels to reflect the nuances, variances and complexities that exist among different types of memory institutions and to account for varying attributes, mandates and priorities in the cultural heritage sector.

Article
Publication date: 1 January 1986

Robert H. Dodds and Leonard A. Lopez

Abstract

The software virtual machine (SVM) concept is described as a methodology to reduce the manpower required to implement and maintain finite element software. A SVM provides the engineering programmer with high‐level languages to facilitate the structuring and management of data, to define and interface process modules, and to manage computer resources. A prototype finite element system has been successfully implemented using the SVM approach. Development effort is significantly reduced compared to a conventional all‐FORTRAN approach. The impact on execution efficiency of the SVM is described along with special procedures developed to minimize overhead in compute‐bound modules. Planned extensions of capabilities in the SVM used by the authors are outlined.

Details

Engineering Computations, vol. 3 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 2016

Bao-Rong Chang, Hsiu-Fen Tsai, Yun-Che Tsai, Chin-Fu Kuo and Chi-Chung Chen

Abstract

Purpose

The purpose of this paper is to integrate and optimize a multiple big data processing platform with the features of high performance, high availability and high scalability in big data environment.

Design/methodology/approach

First, the integration of Apache Hive, Cloudera Impala and BDAS Shark makes the platform support SQL-like queries. Next, users access a single interface, and the proposed optimizer automatically selects the best-performing big data warehouse platform. Finally, the distributed memory storage system Memcached, incorporated into the distributed file system Apache HDFS, is employed for fast caching of query results. Therefore, if users issue the same SQL command, the result is returned rapidly from the cache system instead of repeating the search in the big data warehouse and taking a longer time to retrieve.
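A minimal cache-aside sketch of that flow, assuming the pymemcache client: hash the SQL text, probe Memcached, and fall back to the warehouse on a miss. run_on_best_engine is a placeholder for the paper's Hive/Impala/Shark optimizer, which the abstract does not detail.

```python
# Cache-aside query caching: repeated SQL is answered from Memcached.
import hashlib
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def run_on_best_engine(sql):
    """Placeholder: dispatch the query to the selected warehouse engine."""
    return [{"rows": 0}]

def query(sql, ttl=300):
    key = hashlib.sha1(sql.encode("utf-8")).hexdigest()  # one key per SQL text
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)            # repeated SQL: answered from cache
    result = run_on_best_engine(sql)      # first time: pay the warehouse cost
    cache.set(key, json.dumps(result), expire=ttl)
    return result
```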

Findings

As a result, the proposed approach significantly improves overall performance and dramatically reduces search time when querying a database, especially for highly repeated SQL commands under multi-user mode.

Research limitations/implications

Currently, Shark's latest stable version 0.9.1 does not support the latest versions of Spark and Hive. In addition, this series of software only supports Oracle JDK7; using Oracle JDK8 or OpenJDK will cause serious errors, and some software will be unable to run.

Practical implications

One problem with this system is that some blocks are missing when too many blocks are stored in one result (about 100,000 records). Another problem is that sequential writing into the in-memory cache wastes time.

Originality/value

When the remaining memory capacity on each server is 2 GB or less, Impala and Shark will page-swap heavily, causing extremely low performance, and at larger data scales this may trigger a JVM I/O exception and crash the program. However, when the remaining memory capacity is sufficient, Shark is faster than Hive and Impala; Impala's memory consumption lies between those of Shark and Hive, and this amount of remaining memory is sufficient for Impala's maximum performance. In this study, each server allocates 20 GB of memory for cluster computing and sets remaining-memory critical points at Level 1: 3 percent (0.6 GB), Level 2: 15 percent (3 GB) and Level 3: 75 percent (15 GB). The program automatically selects Hive when remaining memory is below 15 percent, Impala at 15 to 75 percent and Shark above 75 percent.
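A minimal sketch of that selection rule, assuming remaining memory is sampled with psutil (the abstract does not say how the remaining capacity is measured) and using the 15 and 75 percent thresholds quoted above:

```python
# Threshold-based engine selection from the fraction of free memory.
import psutil

def pick_engine():
    vm = psutil.virtual_memory()
    free = vm.available / vm.total        # fraction of memory still free
    if free < 0.15:
        return "hive"                     # < 15% free: most memory-frugal engine
    if free < 0.75:
        return "impala"                   # 15-75% free
    return "shark"                        # > 75% free: fastest with ample memory

print(pick_engine())
```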
