Search results

1–10 of over 212,000
Article
Publication date: 6 February 2019

Ganjar Alfian, Muhammad Fazal Ijaz, Muhammad Syafrudin, M. Alex Syaekhoni, Norma Latif Fitriyani and Jongtae Rhee

Abstract

Purpose

The purpose of this paper is to propose a customer behavior analysis based on real-time data processing and association rules for a digital signage-based online store (DSOS). Real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is used to handle the vast amount of customer behavior data.
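
As a concrete illustration of this kind of ingestion pipeline (not code from the paper), the sketch below consumes browsing events from a Kafka topic and persists them in MongoDB; the topic name, connection settings and event fields are hypothetical.

```python
# Minimal sketch of a Kafka-to-MongoDB ingestion loop for behavior events.
# Topic name, URIs and event fields are assumptions for illustration only.
import json

from kafka import KafkaConsumer   # pip install kafka-python
from pymongo import MongoClient   # pip install pymongo

consumer = KafkaConsumer(
    "dsos-browsing-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
behavior = MongoClient("mongodb://localhost:27017")["dsos"]["behavior"]

for message in consumer:                         # one customer action per message
    event = message.value                        # e.g. {"customer": "c1", "product": "p9", "action": "view"}
    behavior.insert_one(event)                   # stored for later association-rule mining
```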

Design/methodology/approach

To extract customer behavior patterns, customers’ browsing history and transaction data from digital signage (DS) are used as input for decision making. First, the authors developed a DSOS and installed it in different locations, so that customers could experience browsing and buying a product. Second, the real-time data processing system gathered customers’ browsing history and transaction data as they occurred. In addition, the authors applied association rules to extract useful information from customer behavior, so that managers can use it to enhance service quality efficiently.
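
To make the association-rule step concrete, the following sketch computes support and confidence for simple one-to-one rules over a handful of invented purchase baskets; it illustrates the general technique only and is not the study’s implementation.

```python
# Toy support/confidence computation for one-to-one association rules.
# The transactions are invented; the study mines rules from real DSOS logs.
from itertools import permutations

transactions = [
    {"shoes", "socks"},
    {"shoes", "socks", "cap"},
    {"shoes", "cap"},
    {"socks"},
]

def rules(transactions, min_support=0.5, min_confidence=0.6):
    n = len(transactions)
    items = set().union(*transactions)
    for a, b in permutations(items, 2):
        support_a = sum(a in t for t in transactions) / n
        support_ab = sum(a in t and b in t for t in transactions) / n
        confidence = support_ab / support_a if support_a else 0.0
        if support_ab >= min_support and confidence >= min_confidence:
            yield f"{a} -> {b}", round(support_ab, 2), round(confidence, 2)

for rule in rules(transactions):
    print(rule)   # e.g. ('shoes -> socks', 0.5, 0.67)
```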

Findings

First, as the number of customers and DS units increases, the proposed system is able to process the resulting large volume of input data conveniently. Second, the data set showed that as the number of visits and the shopping duration increase, the chance of products being purchased also increases. Third, by combining customers’ purchasing and browsing data, association rules were derived from frequent transaction patterns; products recommended on the basis of these rules therefore have a high probability of being purchased.

Research limitations/implications

This research empirically supports association rule theory, namely that frequent patterns, correlations or causal relationships can be found in various kinds of databases. The scope of the present study is limited to the DSOS, although the findings can be interpreted and generalized in a global business scenario.

Practical implications

The proposed system is expected to help management in taking decisions such as improving the layout of the DS and providing better product suggestions to the customer.

Social implications

The proposed system may be utilized to promote green products to customers, with a positive impact on sustainability.

Originality/value

The key novelty of the present study lies in system development based on big data technology to handle enormous amounts of data and to analyze customer behavior in real time in the DSOS. Real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is used to handle the vast amount of customer behavior data. In addition, the present study applies association rules to extract useful information from customer behavior. These results can be used for promotions as well as relevant product recommendations to DSOS customers. Besides, in today’s changing retail environment, analyzing customer behavior in real time in the DSOS helps attract and retain customers more efficiently and effectively, giving retailers a competitive advantage.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 31 no. 1
Type: Research Article
ISSN: 1355-5855

Book part
Publication date: 1 November 2007

Irina Farquhar and Alan Sorkin

Abstract

This study proposes targeted modernization of the Department of Defense’s (DoD) Joint Forces Ammunition Logistics information system by implementing an optimized, innovative information technology open-architecture design and integrating Radio Frequency Identification Device (RFID) data technologies and real-time optimization and control mechanisms as the critical technology components of the solution. The innovative information technology, which pursues focused logistics, will be deployed in 36 months at an estimated cost of $568 million in constant dollars. We estimate that the Systems, Applications, Products (SAP)-based enterprise integration solution that the Army currently pursues will cost another $1.5 billion through the year 2014; however, it is unlikely to deliver the intended technical capabilities.

Details

The Value of Innovation: Impact on Health, Life Quality, Safety, and Regulatory Research
Type: Book
ISBN: 978-1-84950-551-2

Article
Publication date: 10 August 2021

Silvia Sagita Arumsari and Ammar Aamer

Abstract

Purpose

While several warehouses are now technologically equipped and smart, the implementation of real-time analytics in warehouse operations is scarcely reported in the literature. This study aims to develop a practical system for real-time analytics of process monitoring in an internet-of-things (IoT)-enabled smart warehouse environment.

Design/methodology/approach

A modified system development research process was used to carry out this research. A prototype system was developed that mimicked the actual warehouse operations of a case company, one of Indonesia’s manufacturing companies. The proposed system relied heavily on IoT technologies, wireless internet connections and web services to keep track of product movement and to provide real-time access to critical warehousing activities, helping make better, faster and more informed decisions.
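
As a minimal, hypothetical sketch of such a web-service layer (routes, payload fields and port are assumptions, not the paper’s actual system), the example below accepts product-movement events from IoT readers and exposes a running per-zone count for a dashboard to poll.

```python
# Sketch of a real-time warehouse monitoring endpoint: IoT readers POST
# product-movement events, and a per-zone running count is exposed for a
# dashboard to poll. All names and fields are illustrative assumptions.
from collections import Counter

from flask import Flask, jsonify, request   # pip install flask

app = Flask(__name__)
zone_counts = Counter()                      # movements observed per warehouse zone

@app.post("/movements")                      # e.g. {"sku": "A-42", "zone": "inbound"}
def record_movement():
    event = request.get_json(force=True)
    zone_counts[event["zone"]] += 1
    return jsonify(status="ok")

@app.get("/dashboard")                       # a dashboard polls this for a live summary
def dashboard():
    return jsonify(dict(zone_counts))

if __name__ == "__main__":
    app.run(port=5000)
```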

Findings

In the presented case company, the proposed system increased the visibility of real-time warehousing processes for stakeholders at different management levels, in the ways most convenient to them, by providing visual representations of crucial information. Numerical and textual data were converted into graphics for ease of understanding by stakeholders, including field operators. The key elements for feasible implementation of the proposed model in an industrial setting were discussed: strategic-level components, an IoT-enabled warehouse environment, customized middleware settings, real-time processing software and a visual dashboard configuration.

Research limitations/implications

While this study shows a prototype-based implementation of actual warehouse operations in one of Indonesia’s manufacturing companies, the architectural requirements are applicable and extensible by other companies. In this sense, the research offers significant economic advantages by using customized middleware to avoid the unnecessary waste brought by off-the-shelf generic middleware, which is not entirely suitable for system development.

Originality/value

This research’s findings contribute to filling the gap in the limited body of knowledge on real-time analytics implementation in warehousing operations. This should encourage other researchers to enhance and develop the devised elements to enrich the theoretical knowledge of smart warehousing. Besides, the successful proof-of-concept implementation reported in this research would allow other companies to gain valuable insights and experience.

Details

Journal of Science and Technology Policy Management, vol. 13 no. 2
Type: Research Article
ISSN: 2053-4620

Article
Publication date: 2 October 2018

Dawn M. Russell and David Swanson

Abstract

Purpose

The purpose of this paper is to investigate the mediators that occupy the gap between information processing theory and supply chain agility. In today’s Mach-speed business environment, managers often install new technology and expect an agile supply chain when they press <Enter>. This study reveals the naivety of such an approach, which has allowed new technology to be governed by old processes.

Design/methodology/approach

This work takes a qualitative approach to the dynamic conditions surrounding information processing and its connection to supply chain agility through the assessment of 60 exemplar cases. The situational conditions that have created the divide between information processing and supply chain agility are studied.

Findings

The agility adaptation typology (AAT), defining three types of adaptations and their mediating constructs, is presented. Type 1, information processing, is generally an exercise in synchronization that can be used to support assimilation. Type 2, demand sensing, is where companies incorporate real-time data into everyday processes to better understand demand and move toward a real-time environment. Type 3, supply chain agility, requires fundamentally new thinking in the areas of transformation, mindset and culture.

Originality/value

This work describes the reality of today’s struggle to achieve supply chain agility, providing guidelines and testable propositions while avoiding “ivory tower prescriptions” that exclude real-world details from the research process (Meredith, 1993). By including those messy real-world details, difficult as they are to understand and explain, the authors are able to make strides with the AAT toward theory that explains and guides the manager’s everyday reality.

Details

The International Journal of Logistics Management, vol. 30 no. 1
Type: Research Article
ISSN: 0957-4093

Article
Publication date: 30 March 2023

Rafael Diaz and Ali Ardalan

Abstract

Purpose

Motivated by recent research indicating that the operational performance of an enterprise can be enhanced by building a supporting data-driven environment in which to operate, this paper presents a simulation framework that enables an examination of the effects of applying smart manufacturing principles to conventional production systems that intend to transition to digital platforms.

Design/methodology/approach

To investigate the extent to which conventional production systems can be transformed into novel data-driven environments, the authors studied the well-known constant work-in-process (CONWIP) production system and production sequencing assignments in flowshops. As a result, a novel data-driven priority heuristic, Net-CONWIP, was designed and studied. It is based on the ability to collect real-time information about customer demand and work-in-process inventory, and it was applied as part of a distributed and decentralised production sequencing analysis. Applying heuristics such as Net-CONWIP is only possible through the ability to collect and use real-time data offered by a data-driven system. A four-stage application framework was created to assist practitioners in applying the proposed model.
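
As a rough, hypothetical illustration of a data-driven release priority of this kind (the paper’s exact Net-CONWIP rule is not reproduced here), the sketch below orders a release queue by unmet demand net of work-in-process, using invented figures, and contrasts it with FCFS.

```python
# Rough illustration of a data-driven release priority versus FCFS.
# This is not the paper's Net-CONWIP rule: it simply releases first the job
# whose product has the largest unmet demand net of work-in-process (WIP),
# using invented demand and WIP figures.
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    product: str
    arrival: int          # arrival order, used by FCFS

queue = [Job(1, "A", 1), Job(2, "B", 2), Job(3, "A", 3), Job(4, "C", 4)]
demand = {"A": 5, "B": 12, "C": 3}   # open customer demand (real-time data in the paper)
wip = {"A": 4, "B": 2, "C": 1}       # current work-in-process per product

fcfs_order = sorted(queue, key=lambda j: j.arrival)
net_order = sorted(queue, key=lambda j: demand[j.product] - wip[j.product], reverse=True)

print([j.job_id for j in fcfs_order])  # [1, 2, 3, 4]
print([j.job_id for j in net_order])   # [2, 4, 1, 3] -- jobs for product B jump ahead
```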

Findings

To assess the robustness of the Net-CONWIP heuristic under the simultaneous effects of different demand levels, different levels of demand variability and the presence of bottlenecks, its performance was compared with that of conventional CONWIP systems using a first-come, first-served (FCFS) priority rule. The results show that the Net-CONWIP priority rule significantly reduced customer wait time relative to FCFS in all cases.

Originality/value

Previous research suggests there is considerable value in creating data-driven environments. This study provides a simulation framework that guides the construction of a digital transformation environment. The suggested framework facilitates the inclusion and analysis of relevant smart manufacturing principles in production systems and enables the design and testing of new heuristics that employ real-time data to improve operational performance. An approach that can guide the structuring of data-driven environments in production systems is currently lacking. This paper bridges this gap by proposing a framework to facilitate the design of digital transformation activities, explore their impact on production systems and improve their operational performance.

Details

Industrial Management & Data Systems, vol. 123 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 February 2005

Yogesh Malhotra

Abstract

Purpose

To provide executives and scholars with a pragmatic understanding of integrating knowledge management strategy and technologies in business processes for successful performance.

Design/methodology/approach

A comprehensive review of theory, research and practice on knowledge management develops a framework that contrasts existing technology-push models with proposed strategy-pull models. The framework explains how the “critical gaps” between technology inputs, related knowledge processes and business performance outcomes can be bridged for the two types of models. Illustrative case studies of real-time enterprise (RTE) business model designs for both successful and unsuccessful companies are used to provide a real-world understanding of the proposed framework.

Findings

Suggests the superiority of strategy-pull models, made feasible by new “plug-and-play” information and communication technologies, over traditional technology-push models. The critical importance of strategic execution in guiding the design of enterprise knowledge processes, as well as the selection and implementation of related technologies, is explained.

Research limitations/implications

Given the limited number of cases, the framework is based on real-world evidence about companies most popularized for real-time technologies by some technology analysts. This limited sample helps in understanding the caveats in analysts’ advice by highlighting the critical importance of strategic execution over the selection of specific technologies. However, the framework needs to be tested with multiple enterprises to determine the contingencies that may be relevant to its application.

Originality/value

The first comprehensive analysis relating knowledge management and its integration into enterprise business processes to the agility and adaptability often associated with “real-time enterprise” business models. It constitutes critical knowledge for organizations that must depend on information and communication technologies to increase strategic agility and adaptability.

Details

Journal of Knowledge Management, vol. 9 no. 1
Type: Research Article
ISSN: 1367-3270

Open Access
Article
Publication date: 9 October 2023

Mingyao Sun and Tianhua Zhang

Abstract

Purpose

Real-time production scheduling for the semiconductor back-end manufacturing process is becoming increasingly important in Industry 4.0. The semiconductor back-end manufacturing process is always accompanied by order splitting and merging; moreover, at each stage of the process there are multiple machine groups with different production capabilities and capacities. This paper studies a multi-agent based scheduling architecture for the radio frequency identification (RFID)-enabled semiconductor back-end shopfloor, which integrates not only manufacturing resources but also human factors.

Design/methodology/approach

The architecture includes a task management (TM) agent, a staff instruction (SI) agent, a task scheduling (TS) agent, an information management center (IMC), a machine group (MG) agent and a production monitoring (PM) agent. Based on this architecture, the authors developed a scheduling method consisting of capability and capacity planning and machine configuration modules in the TS agent.

Findings

The authors used a greedy policy to assign each order to the appropriate machine groups based on the real-time utilization ratio of each MG in the capability and capacity (C&C) planning module, and used a particle swarm optimization (PSO) algorithm to schedule each split job on the identified machine based on the C&C planning results. Finally, a case study was conducted to demonstrate the proposed multi-agent based real-time production scheduling models and methods.
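
As a small, hypothetical illustration of the greedy capability-and-capacity step (the paper’s actual rule and its PSO stage are not reproduced), the sketch below assigns each order to the least-utilized machine group capable of the required operation.

```python
# Toy greedy assignment of orders to machine groups by current utilization.
# Capabilities, capacities and orders are invented; in the paper the split
# jobs within each group are then scheduled by a PSO algorithm.
groups = {
    "MG1": {"can_run": {"test", "mark"}, "capacity": 100, "load": 60},
    "MG2": {"can_run": {"test"},         "capacity": 80,  "load": 20},
    "MG3": {"can_run": {"pack"},         "capacity": 50,  "load": 10},
}
orders = [("o1", "test", 30), ("o2", "pack", 15), ("o3", "mark", 10)]

def assign(order_id, operation, qty):
    candidates = [g for g in groups if operation in groups[g]["can_run"]]
    # greedy: pick the capable group with the lowest utilization ratio
    best = min(candidates, key=lambda g: groups[g]["load"] / groups[g]["capacity"])
    groups[best]["load"] += qty
    return best

for order in orders:
    print(order[0], "->", assign(*order))   # o1 -> MG2, o2 -> MG3, o3 -> MG1
```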

Originality/value

This paper proposes a multi-agent based real-time scheduling framework for the semiconductor back-end industry. A C&C planning algorithm and a machine configuration algorithm are developed. The paper provides a feasible solution for the semiconductor back-end manufacturing process to realize real-time scheduling.

Details

IIMBG Journal of Sustainable Business and Innovation, vol. 1 no. 1
Type: Research Article
ISSN: 2976-8500

Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu

Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios that demand forecasting over large-scale data sources, e.g. sensor networks, securities exchanges, electric power secondary systems, etc. Concretely, the proposed framework must handle several difficult requirements, including the management of gigantic data sources, a fast self-adaptive algorithm, relatively accurate prediction of multiple time series and the demand for real-time processing.

Design/methodology/approach

First, an autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed; it includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is presented to verify the proposed framework.
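
The ARIMA step can be illustrated in isolation with statsmodels; the sketch below fits a model to a synthetic series and forecasts a few points ahead. It is a standalone illustration, not the paper’s HBase/Storm platform.

```python
# Minimal ARIMA forecast on a synthetic series, illustrating only the
# prediction step; the paper wraps such models in an HBase storage layer
# and a Storm-based distributed platform.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA   # pip install statsmodels

rng = np.random.default_rng(0)
series = np.linspace(100.0, 120.0, 200) + rng.normal(scale=2.0, size=200)  # synthetic signal

model = ARIMA(series, order=(1, 1, 1))   # (p, d, q) would be tuned per series
fitted = model.fit()
print(fitted.forecast(steps=10))         # forecast the next 10 points
```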

Findings

Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are largely consistent with the actual data, processing efficiency is relatively high and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time-series data, which meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 21 December 2021

Laouni Djafri

Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce or Spark. DDPML can also be deployed on other distributed systems such as P2P networks, clusters or cloud computing.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environments and anticipate associated phenomena, as the data turn into knowledge that can be used for prediction later. This knowledge thus becomes a great asset in companies’ hands, which is precisely the objective of data mining. But with data and knowledge being produced at an ever faster pace, one now speaks of Big Data mining. For this reason, the work proposed here mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is therefore how to make machine learning algorithms work in a distributed and parallel way without losing the accuracy of the classification results.

To solve this problem, the authors propose a system of Dynamic Distributed and Parallel Machine Learning (DDPML) algorithms. The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm, which in turn depends on a random sampling technique. This architecture is specially designed to handle big data processing coherently and efficiently with the sampling strategy proposed in this work; it also allows the classification results obtained with the representative learning base (RLB) to be verified. In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2). The experimental results show the efficiency of the proposed solution without significant loss of classification accuracy. In practical terms, the DDPML system is generally dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
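
As a small, hypothetical illustration of the stratified-sampling idea behind the representative learning base (DDPML’s two-level scheme and its Map-Reduce distribution are not reproduced here), the sketch below draws a class-balanced 10% subsample of a synthetic dataset and checks that the class proportions are preserved.

```python
# Illustrative stratified subsample of a synthetic dataset, in the spirit of
# extracting a smaller representative learning base (RLB). DDPML's actual
# two-level sampling and distributed execution are not modelled here.
from collections import Counter

from sklearn.datasets import make_classification      # pip install scikit-learn
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_classes=3, n_informative=5,
                           weights=[0.6, 0.3, 0.1], random_state=0)

# Keep 10% of the data while preserving class proportions (stratified sampling).
X_rlb, _, y_rlb, _ = train_test_split(X, y, train_size=0.1,
                                      stratify=y, random_state=0)

print(Counter(y))      # class counts in the full set
print(Counter(y_rlb))  # roughly the same proportions in the 10% sample
```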

Findings

The authors obtained very satisfactory classification results.

Originality/value

The DDPML system is specially designed to handle big data mining classification smoothly.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 11 September 2023

Zhongmei Zhang, Qingyang Hu, Guanxin Hou and Shuai Zhang

Abstract

Purpose

Vehicle companionship is one of the most common companion patterns in daily life and has great value for accident investigation, group tracking, carpooling recommendation and road planning. Due to the complexity and large scale of vehicle sensor streaming data, it has been difficult for existing work to ensure the efficiency and effectiveness of real-time vehicle companion discovery (VCD). This paper aims to provide a high-quality and low-cost method to discover vehicle companions in real time.

Design/methodology/approach

This paper provides a real-time VCD method based on pro-active data service collaboration. The study uses dynamic service collaboration to selectively process data produced by relevant sensors, and relaxes the temporal and spatial constraints of the vehicle companion pattern to discover more potential companion vehicles.
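
The companion notion can be made concrete with a toy check (thresholds and trajectory records are invented, and the paper’s service-collaboration mechanism is not modelled): two vehicles are flagged as companions if they are observed close together, within relaxed time and distance bounds, sufficiently often.

```python
# Toy vehicle-companion check: two vehicles are companions if they co-occur
# within a time window and distance threshold at least `min_hits` times.
# Thresholds and trajectories are invented for illustration.
import math

def companions(track_a, track_b, max_dt=30.0, max_dist=50.0, min_hits=3):
    hits = 0
    for t1, x1, y1 in track_a:            # (timestamp s, x m, y m) sensor readings
        for t2, x2, y2 in track_b:
            if abs(t1 - t2) <= max_dt and math.hypot(x1 - x2, y1 - y2) <= max_dist:
                hits += 1
                break                     # count each reading of track_a at most once
    return hits >= min_hits

track_a = [(0, 0, 0), (60, 500, 0), (120, 1000, 0), (180, 1500, 0)]
track_b = [(5, 10, 0), (65, 520, 0), (125, 990, 0), (300, 4000, 0)]
print(companions(track_a, track_b))       # True: three close co-occurrences
```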

Findings

Experiments on real and simulated data show that the method discovers 67% more companion vehicles with 62% less response time compared with a centralized method.

Originality/value

To reduce the amount of streaming data that must be processed, this study provides a service collaboration-based VCD method built on a pro-active data service model. The study also gives a new definition of vehicle companion by relaxing the temporal and spatial constraints, so as to discover as many companion vehicles as possible.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084
