Search results

1 – 10 of 273
Article
Publication date: 6 March 2024

Jayati Singh, Rupesh Kumar, Vinod Kumar and Sheshadri Chatterjee


Abstract

Purpose

The main aim of this study is to identify and prioritize the factors that influence the adoption of big data analytics (BDA) within the supply chain (SC) of the food industry in India.

Design/methodology/approach

The study is carried out in two distinct phases. In the first phase, barriers hindering BDA adoption in the Indian food industry are identified. Subsequently, the second phase rates/prioritizes these barriers using multicriteria methodologies such as the “analytical hierarchical process” (AHP) and the “fuzzy analytical hierarchical process” (FAHP). Fifteen barriers have been identified that collectively influence BDA adoption in the SC of the Indian food industry.
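The AHP prioritization step described above can be sketched as follows. The three barriers and the pairwise comparison values below are hypothetical illustrations, not the study's elicited judgements; the row geometric-mean method is one common approximation of the AHP priority vector.

```python
import math

# Approximate AHP priority weights from a pairwise comparison matrix
# using the row geometric-mean method (barriers and comparison values
# are hypothetical, not the study's elicited judgements).

def ahp_weights(matrix):
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(gms)
    return [g / total for g in gms]                        # normalise to sum to 1

# Saaty 1-9 scale: matrix[i][j] = how much more important barrier i is than j
barriers = ["lack of data security", "skilled IT shortage", "ROI uncertainty"]
matrix = [
    [1,     2,   3],
    [1 / 2, 1,   2],
    [1 / 3, 1 / 2, 1],
]
weights = ahp_weights(matrix)
print(sorted(zip(barriers, weights), key=lambda bw: -bw[1]))
```

The highest-weighted barriers then head the prioritized list reported in the findings.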

Findings

The findings suggest that the lack of data security, the limited availability of skilled IT professionals and uncertainty about return on investment (ROI) are the top three apprehensions of consultants and managers regarding BDA adoption in the Indian food industry SC.

Research limitations/implications

This research has identified several barriers to the adoption of big data analytics in food supply chain management in India. The study also highlights that big data analytics applications need specific skill sets, which are in short supply in this industry; organizations therefore need to enhance their employees' technical skills. Utilizing similar services offered by external agencies could also help organizations save time and resources for their in-house teams with a faster turnaround.

Originality/value

The present study will provide vital information to companies regarding roadblocks in BDA adoption in the Indian food industry SC and motivate academicians to explore this area further.

Details

British Food Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0007-070X


Article
Publication date: 11 October 2023

Ruchi Mishra, Hemlata Gangwar and Saumyaranjan Sahoo


Abstract

Purpose

The objective of this research is to evaluate and rank the factors influencing omnichannel (OC) logistics, while also investigating the significant impact of big data analytics in improving these drivers of OC logistics.

Design/methodology/approach

Using an exploratory sequential mixed-method design, in-person interviews were conducted to identify and stratify the enablers of OC retailing. These interviews were supplemented with a case study in an apparel firm to prioritise the enablers of OC logistics. Further, a survey was conducted to understand the role of big data analytics in improving drivers of OC logistics, as well as the role of individual and organisational capability in big data usage for omnichannel retailing.

Findings

The findings indicate that information management is the most important driver, followed by inventory management and network design, for improving OC logistics. Further, a significant relationship between big data analytics and the drivers of omnichannel logistics is reported.

Practical implications

This study identifies and classifies the drivers of OC retailing by their level of criticality in OC logistics, which will assist practitioners in prioritising their tasks for the successful development of OC logistics. The study will also help practitioners use BDA to develop the drivers of OC.

Originality/value

The study substantiates and adds to the BDA literature by emphasising the positive role of BDA in the development of OC drivers and highlighting the significant role of these drivers in BDA usage.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 21 November 2022

Babar Ali, Ajibade A. Aibinu and Vidal Paton-Cole


Abstract

Purpose

Delay and disruption claims involve a complex process that often results in disputes, unnecessary expenses and time loss on construction projects. This study aims to review and synthesize the contributions of previous research in this area and propose future directions for improving the process of delay and disruption claims.

Design/methodology/approach

This study adopted a holistic systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 230 articles related to delay and disruption claims in construction were shortlisted using the Scopus and Web of Science databases.

Findings

Six research themes were identified and critically reviewed: delay analysis, disruption analysis, claim management, contract administration, dispute resolution, and delay and disruption information and records. The systematic review showed a dearth of research on managing the wide-ranging information required for delay and disruption claims, on ensuring transparency and uniformity in that information, and on adopting an end-user-centred research approach to resolving problems in the claims process.

Practical implications

Complexities in delay and disruption claims are real-world problems faced by industry practitioners. The findings will help the research community and industry practitioners to prioritize their energies toward information management of delay and disruption claims.

Originality/value

This study contributes to the body of knowledge on delay and disruption claims by identifying the need for more research on their information requirements and management. It provides insight on the use of modern technologies, such as drones, building information modeling, radio frequency identifiers, blockchain, big data and machine learning, as tools for more structured and efficient attainment of the required information in a transparent and consistent manner. It also recommends greater use of the design science research approach for delay and disruption claims. This will help make delay and disruption claims a less complex and less dispute-prone process.

Details

Construction Innovation, vol. 24 no. 3
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 7 November 2019

Bijaya Kumar Panda



Abstract

Purpose

The purpose of this paper is to study the details of new age digital business using a freemium business model.

Design/methodology/approach

The study examines various aspects of digital business firms, such as revenues, customer base, share price and rank, and the use of the freemium business model to retain existing customers and attract new ones.

Findings

Innovative service or product offerings and a growth strategy are the basis of this business model, so businesses must assess their innovation strategy before deciding whether to adopt the freemium business model. Retaining existing users and constantly adding new users are its foundation stones. The value offering therefore has to be well perceived by customers so that their switching costs increase and they remain loyal.

Originality/value

Analyzing consumer behavior with recent analytical tools and techniques, such as web analytics and big data analytics, is required to gain deeper market knowledge. Knowledge of recent market trends, customer perceptions and customer journey mapping is crucial to running a business on the freemium model.

Details

Journal of Management Development, vol. 39 no. 4
Type: Research Article
ISSN: 0262-1711


Article
Publication date: 6 April 2020

Wei-Tek Tsai, Yong Luo, Enyan Deng, Jing Zhao, Xiaoqiang Ding, Jie Li and Bo Yuan


Abstract

Purpose

This paper aims to apply blockchains (BCs) for trade clearing and settlement in a realistic clearinghouse. The purpose is to demonstrate the feasibility and scalability of this approach.

Design/methodology/approach

The study uses account BCs and trading BCs as building blocks for trade clearing and settlement. Careful design is made to ensure that this approach is feasible and scalable.

Findings

A design has been proposed that can process hundreds of thousands of trades for a clearinghouse, addressing the performance, privacy and scalability requirements of realistic trade clearing and settlement. The design was implemented and trialled in a clearinghouse for over two months, processing over 3 billion real transactions from an exchange. The first month was used to exercise the system with historical data; the second, with real-time data during market trading hours. The system performed as designed and intended.

Research limitations/implications

This is the first large research study in the world to apply BCs to clearing. The authors applied the system to a clearinghouse and processed over 3 billion transactions, equivalent to 13 years of London Stock Exchange transaction volume, demonstrating that BCs can handle a large number of transactions.

Practical implications

The design can be replicated in many clearinghouses around the world, paving the way for BCs to be used in large financial institutions.

Social implications

An implication is that other trading firms, clearinghouses and banks can apply the same technology for trade clearing, ushering in the use of BCs in institutions. As clearing is a core function of business transactions, this has significant implications. The design can be discussed and improved in various communities.

Originality/value

This is the first application of BCs to large clearinghouses, and it uses unique BC designs, which gives it significant value. Many studies have been performed, but few have been reported in the scientific community. The system has been implemented, trialled and demonstrated in public for months.

Details

The Journal of Risk Finance, vol. 21 no. 5
Type: Research Article
ISSN: 1526-5943


Article
Publication date: 12 March 2020

José Francisco Martínez-Sánchez, Salvador Cruz-García and Francisco Venegas-Martínez


Abstract

Purpose

This paper aims to develop a regression tree model to quantify the money laundering (ML) risk associated with a customer profile and the customer's contracted products (the customer's inherent risk). ML is a risk to which different entities are exposed, but mainly financial ones because of the nature of their activity, so they are legally obliged to have an appropriate methodology to analyze and assess such risk.

Design/methodology/approach

This paper uses the regression tree technique to identify, measure and quantify the customer's inherent ML risk.

Findings

After classifying customers as high- or low-risk based on a probability threshold of 0.5, this study finds that customers with 56 months or more of seniority are more risky than those with less seniority; the variables “contracted product” and “customer seniority” are statistically significant; the variables origin, legal entity and economic activity are not statistically significant for classifying customers; institution collection, business products and individual product are the most risky; and the percentage of effectiveness, suggested by the decision tree technique, is around 89.5 per cent.
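The threshold-based classification in the findings can be sketched as a one-split "tree" on seniority. The 56-month split and the 0.5 cut-off come from the abstract, but the leaf probabilities below are hypothetical illustrations, not the paper's fitted values.

```python
# One-split "tree" mirroring the reported seniority finding; the leaf
# probabilities are hypothetical, not the paper's fitted values.

THRESHOLD = 0.5  # probability cut-off for labelling a customer high-risk

LEAF_PROBS = {"senior": 0.62, "junior": 0.21}  # hypothetical leaf ML-risk probabilities

def classify(seniority_months):
    # Route the customer down the split, then compare the leaf's
    # ML-risk probability with the threshold.
    leaf = "senior" if seniority_months >= 56 else "junior"
    p = LEAF_PROBS[leaf]
    return ("high-risk" if p >= THRESHOLD else "low-risk", p)

print(classify(60))  # → ('high-risk', 0.62)
print(classify(12))  # → ('low-risk', 0.21)
```

A fitted tree would have many such splits, each leaf carrying the empirical risk probability of the customers routed to it.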

Practical implications

In the daily practice of ML risk management, the two main issues to be considered are (1) knowledge of the customer and (2) detection of the customer's inherent risk elements.

Originality/value

Information from the customer portfolio and the customer's transaction profile is analyzed through big data and data mining.

Details

Journal of Money Laundering Control, vol. 23 no. 2
Type: Research Article
ISSN: 1368-5201


Open Access
Article
Publication date: 19 March 2021

Vicente Ramos, Woraphon Yamaka, Bartomeu Alorda and Songsak Sriboonchitta



Abstract

Purpose

This paper aims to illustrate the potential of high-frequency data for tourism and hospitality analysis through two research objectives: first, it describes and tests a novel high-frequency forecasting methodology applied to big data characterized by fine-grained time and spatial resolution; second, it elaborates on the usefulness of those estimates for visitors and for public and private tourism stakeholders, whose decisions increasingly focus on short time horizons.

Design/methodology/approach

This study uses the technical communications between mobile devices and WiFi networks to build high-frequency, precisely geolocated big data. The empirical section compares the forecasting accuracy of several artificial intelligence and time series models.

Findings

The results robustly indicate the superiority of the long short-term memory network model, both for in-sample and out-of-sample forecasting. Hence, the proposed methodology provides estimates that are remarkably better than making short-term decisions based on the current number of residents and visitors (the Naïve I model).
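The benchmark comparison implied above can be sketched as follows. The visitor counts are invented, and mean absolute error stands in for whichever accuracy metrics the paper actually uses; the point is only that candidate models are judged against the Naïve I baseline.

```python
# Naïve I benchmark: the forecast for t+1 is simply the value at t.
# A candidate model is worthwhile only if it beats this baseline.

def naive_forecast(series):
    return series[:-1]  # prediction for each t+1 is the value observed at t

def mae(pred, actual):
    # Mean absolute error between predictions and realised values
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

visitors = [100, 120, 150, 130, 170, 160]  # invented high-frequency counts
actual_next = visitors[1:]
print(mae(naive_forecast(visitors), actual_next))  # → 24.0
```

An LSTM (or any other model) producing forecasts with a lower error than this baseline would be preferred for short-horizon decisions.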

Practical implications

A discussion section exemplifies how high-frequency forecasts can be incorporated into tourism information and management tools to improve visitors’ experience and tourism stakeholders’ decision-making. Particularly, the paper details its applicability to managing overtourism and Covid-19 mitigating measures.

Originality/value

High-frequency forecasting is new in tourism studies, and the discussion sheds light on the relevance of this time horizon for dealing with current tourism challenges. For many tourism-related issues, what to do next no longer means what to do tomorrow or next week.

Plain Language Summary

This research initiates high-frequency forecasting in tourism and hospitality studies. Additionally, we detail several examples of how anticipating urban crowdedness requires high-frequency data and can improve visitors’ experience and public and private decision-making.

Details

International Journal of Contemporary Hospitality Management, vol. 33 no. 6
Type: Research Article
ISSN: 0959-6119


Article
Publication date: 8 February 2021

Muhammad Saad Amjad, Muhammad Zeeshan Rafique and Mohammad Aamir Khan


Abstract

Purpose

In the modern manufacturing environment, it is imperative to apply the manufacturing concepts of lean, agile, resilient and green, collectively known as LARG manufacturing, to achieve excellence: lean manufacturing eliminates waste; agile manufacturing makes processes fast, efficient and flexible; the resilient paradigm counters uncertainty; and green manufacturing improves environmental performance. The objective of this study is to develop an integration framework that synergizes LARG manufacturing with Industry 4.0.

Design/methodology/approach

Through a literature review, the authors have explored the possibility of collaboration between constituents of lean, agile, resilient and green manufacturing with the facets of Industry 4.0.

Findings

The authors have developed a comprehensive integration framework that has been divided into 11 phases and 31 steps in which the various Industry 4.0 facets have supplemented the lean, agile, resilient and green paradigms.

Practical implications

This investigation of technologically intensive automation will provide clarity to practitioners regarding the synergy of LARG manufacturing and Industry 4.0, so that fast and efficient manufacturing processes can be achieved.

Originality/value

The framework provides detailed insight towards implementation of LARG practices in a manufacturing organization in coalescence with Industry 4.0 practices.

Details

International Journal of Lean Six Sigma, vol. 12 no. 5
Type: Research Article
ISSN: 2040-4166


Article
Publication date: 18 July 2016

Maayan Zhitomirsky-Geffet, Judit Bar-Ilan and Mark Levene



Abstract

Purpose

One of the under-explored aspects of user information-seeking behaviour is the influence of time on relevance evaluation. Previous studies have shown that individual users might change their assessment of search results over time. It is also known that the aggregated judgements of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the “wisdom of crowds”. The purpose of this paper is to examine whether aggregated judgements are more stable, and thus more reliable, over time than individual user judgements.

Design/methodology/approach

In this study, two simple measures are proposed to calculate aggregated judgements of search results and to compare their reliability and stability with those of individual user judgements. In addition, the aggregated “wisdom of crowds” judgements were used to compare human assessments of search results with the search engine's rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgements were considered: relevance on a four-point scale and ranking on a ten-point scale without ties.
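One simple aggregation measure of the kind described can be sketched as below. The relevance scores are invented, and averaging each result over all users is an assumption about how such a measure might work, not the paper's exact definition.

```python
# Aggregate each result's relevance by averaging over all users, then
# compare how much aggregated vs. individual judgements drift between
# the two sessions (scores below are invented, four-point scale).

def aggregate(judgements):
    n_users, n_results = len(judgements), len(judgements[0])
    return [sum(u[i] for u in judgements) / n_users for i in range(n_results)]

session1 = [[4, 2, 1], [3, 3, 2], [4, 1, 1]]  # rows = users, cols = results
session2 = [[3, 3, 1], [4, 2, 2], [4, 1, 2]]  # same users, two months later

agg_drift = [abs(a - b) for a, b in zip(aggregate(session1), aggregate(session2))]
ind_drift = [sum(abs(a - b) for a, b in zip(u1, u2)) / 3
             for u1, u2 in zip(session1, session2)]
print(agg_drift, ind_drift)
```

With made-up data like this, individual changes tend to cancel out in the average, which is the stability effect the study tests for.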

Findings

It was found that aggregated judgements are much more stable than individual user judgements, yet they are quite different from search engine rankings.

Practical implications

The proposed “wisdom of crowds”-based approach provides a reliable reference point for the evaluation of search engines. This is also important for exploring the need for personalisation and for adapting a search engine's ranking over time to changes in users' preferences.

Originality/value

This is the first study to apply the notion of the “wisdom of crowds” to the under-explored phenomenon of change over time in user evaluations of relevance.

Details

Aslib Journal of Information Management, vol. 68 no. 4
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 14 July 2022

Nishad A. and Sajimon Abraham


Abstract

Purpose

A wide range of technologies is currently available to address the challenges posed by pandemic situations. As such diseases are transmitted through person-to-person contact or other means, the World Health Organization recommended tracking and tracing the locations of people either infected or in contact with patients as a standard operating procedure and also outlined protocols for incident management. Government agencies use different inputs, such as smartphone signals and details from respondents, to prepare the travel log of patients. Every event in their trace, such as stay points, revisited locations and meeting points, is important. Traditional contact tracing requires more trained staff and tools, and when patient counts spiral, time-bound tracing of primary and secondary contacts may not be possible; there are also chances of human error. In this context, the purpose of this paper is to propose an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and the vulnerability of locations.

Design/methodology/approach

Pandemic situations push the world into existential crises. This paper proposes an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and the vulnerability of locations. By exploring the daily mobility and activities of the general public, the system identifies multiple levels of contacts with respect to an infected person and extracts semantic information by considering vital factors that can induce virus spread. It grades different geographic locations according to a measure called weightage of participation, so that vulnerable locations can be easily identified. The paper discusses the advantages of using spatio-temporal aggregate queries to extract general characteristics of social mobility. The system also allows various information to be generated by combing through patients' medical reports.
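The location-grading idea can be sketched as below. The aggregation rule (summing stay minutes per visited location) is an illustrative assumption, not the authors' actual weightage-of-participation formula, and the trace data is invented.

```python
# Grade locations by the summed stay time of tracked visits, most
# vulnerable first (the weighting rule here is an illustrative
# assumption, not the paper's weightage-of-participation definition).

def rank_locations(visits):
    weights = {}
    for loc, stay_minutes in visits:
        weights[loc] = weights.get(loc, 0) + stay_minutes
    # Highest accumulated participation first
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)

visits = [("market", 45), ("clinic", 30), ("market", 20)]  # invented trace
print(rank_locations(visits))  # → [('market', 65), ('clinic', 30)]
```

A real implementation would weight visits by the semantic factors the paper names (contact presence, stay time of primary contacts, waypoint severity) rather than stay time alone.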

Findings

It was identified that the context of movement is important; hence, the existing SemTraClus algorithm is modified to account for four important factors: stay point, contact presence, stay time of primary contacts and waypoint severity. The priority level can be reconfigured according to the interests of the authority. This approach reduces the overwhelming task of contact tracing. Different functionalities provided by the system are also explained. As a real data set was not available, experiments were conducted with similar data, and results are shown for different types of journeys in different geographical locations. The proposed method efficiently handles movement computation and activity analysis by incorporating various relevant trajectory semantics. The incorporation of cluster-based aggregate queries into the model removes the computational burden of processing the entire mobility data set.

Research limitations/implications

As the trajectories of patients were not available, the authors used standard data sets for experimentation, which serve the purpose.

Originality/value

This paper proposes a framework that allows an emergency response team to obtain multiple kinds of information from a patient's tracked mobility details and supports various pandemic-mitigation activities, such as predicting hotspots, identifying stay locations, suggesting possible locations of primary and secondary contacts, creating clusters of hotspots and identifying nearby medical assistance. The system provides an efficient form of activity analysis by computing people's mobility and identifying features of the geographical locations they travelled through. While formulating the framework, the authors reviewed many different implementation plans and protocols and concluded that the core strategies followed are more or less the same. As a reference model, the Indian scenario is adopted for defining the concepts.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 4
Type: Research Article
ISSN: 1742-7371

