Search results
1 – 10 of over 1000
Ahmad Albqowr, Malek Alsharairi and Abdelrahim Alsoussi
Abstract
Purpose
The purpose of this paper is to analyse and classify the literature that addresses three questions: what are the benefits of big data analytics (BDA) in the field of supply chain management (SCM) and logistics, what are the challenges in applying BDA in this field, and what are the determinants of its successful application?
Design/methodology/approach
This paper conducts a systematic literature review (SLR) to analyse the findings of 44 selected papers published between 2016 and 2020 in the area of BDA and its impact on SCM. The designed protocol comprises 14 steps in total, following Tranfield et al. (2003). The selected research papers are categorized into four themes.
Findings
This paper identifies sets of benefits to be gained from the use of BDA in SCM, including benefits in data analytics capabilities, operational efficiency of logistical operations and supply chain/logistics sustainability and agility. It also documents challenges to be addressed in this application, and determinants of successful implementation.
Research limitations/implications
The scope of the paper is limited to the literature published before the beginning of the COVID-19 pandemic; it therefore does not cover literature published since the pandemic began.
Originality/value
This paper contributes to academic research by providing a roadmap for future empirical work in this field, summarising the findings of recent work on the uses of BDA in SCM and logistics. Specifically, it culminates in a summary of the most relevant benefits, challenges and determinants discussed in recent research. As BDA remains a newly established field with little practical application in SCM and logistics, this paper contributes by highlighting the most important developments and practical applications in the contemporary literature.
Subhodeep Mukherjee, Ramji Nagariya, K. Mathiyazhagan, Manish Mohan Baral, M.R. Pavithra and Andrea Appolloni
Abstract
Purpose
Reverse logistics services are designed to move goods from their point of consumption to an endpoint, either to capture value or to properly dispose of products and materials. Artificial intelligence (AI)-based reverse logistics will help Micro, Small and Medium Enterprises (MSMEs) adequately recycle and reuse materials in their firms. This research aims to measure the adoption of AI-based reverse logistics to improve circular economy (CE) performance.
Design/methodology/approach
In this study, ten hypotheses were proposed using the natural resource-based view and the technology, organization and environment (TOE) framework. Data were collected from 363 Indian MSMEs, as they are the backbone of the Indian economy and there is a need for digital transformation in MSMEs. A structural equation modeling approach was applied to analyze the data and test the hypotheses.
Findings
Nine of the ten proposed hypotheses were accepted, and one was rejected. The results revealed that relative advantage (RA), trust (TR), top management support (TMS), environmental regulations, industry dynamism (ID), compatibility, technology readiness and government support (GS) are positively related to AI-based reverse logistics adoption. AI-based reverse logistics, in turn, showed a positive relationship with CE performance. The mediation analysis revealed that RA, TR, TMS and technological readiness exhibit complementary mediation, whereas GS, ID, organizational flexibility, environmental uncertainty and technical capability show no mediation.
Practical implications
The study contributed to the CE performance and AI-based reverse logistics literature. The study will help managers understand the importance of AI-based reverse logistics for improving the performance of the CE in MSMEs. This study will help firms reduce their carbon footprint and achieve sustainable development goals.
Originality/value
Few studies have focused on CE performance, and none has measured the adoption of AI-based reverse logistics to enhance MSMEs’ CE performance.
H.P.M.N.L.B. Moragane, B.A.K.S. Perera, Asha Dulanjalie Palihakkara and Biyanka Ekanayake
Abstract
Purpose
Construction progress monitoring (CPM) is considered a difficult and tedious task in construction projects, focusing on identifying discrepancies between the as-built product and the as-planned design. Computer vision (CV) technology is applied to automate the CPM process. However, the synergy between CV and CPM is lacking in both the literature and industry practice. This study aims to fill this research gap.
Design/methodology/approach
A Delphi qualitative approach was used in this study, with two interview rounds. The collected data were analysed using manual content analysis.
Findings
This study identified seven stages of CPM: data acquisition, information retrieval, verification, progress estimation, comparison, visualisation of the results and schedule updating. Factors such as higher data accuracy, a less laborious process, efficiency and near real-time access are among the significant enablers of adopting CV for CPM. The major challenges identified were occlusions and lighting issues in site images and a lack of support from management. These challenges can be overcome by implementing suitable strategies, such as familiarising the workforce with CV technology and applying CV research to the construction industry so that it grows with the technology in line with other industries.
Originality/value
This study addresses the gap pertaining to the synergy between the CV-in-CPM literature and industry practice. It contributes by enabling construction personnel to identify the shortcomings and opportunities of applying automated technologies at each stage of the progress monitoring process.
Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang
Abstract
Purpose
The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, traditional monitoring with Electronic Total Station tools cannot provide real-time measurements, and monitoring with motion sensors or GPS is cumbersome to use.
Design/methodology/approach
This study proposes a monitoring method based on a series of computer vision (CV) technologies that can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three CV algorithms were developed and tested in a laboratory environment on a bridge scale model, to select the best-performing algorithm for monitoring rotation, velocity and inclination, respectively, as the final monitoring method. The selected method was then applied to monitor an actual bridge during its swivel construction to verify its applicability.
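The rotation-angle and velocity measurements described above reduce to simple plane geometry once a marker on the rotating span has been tracked in the image. A minimal illustrative sketch (the pivot/marker setup and all names are assumptions for illustration, not the authors' implementation; image y-axis orientation is ignored):

```python
import math

def rotation_angle(pivot, marker):
    # Angle of the pivot->marker vector in degrees, measured from the x-axis.
    dx, dy = marker[0] - pivot[0], marker[1] - pivot[1]
    return math.degrees(math.atan2(dy, dx))

def angular_velocity(pivot, p_prev, p_curr, dt):
    # Mean angular velocity (degrees per second) between two frames dt seconds apart.
    return (rotation_angle(pivot, p_curr) - rotation_angle(pivot, p_prev)) / dt

# Toy frames: a marker 100 px from the pivot rotates a quarter turn in 30 s.
pivot = (0.0, 0.0)
a0 = rotation_angle(pivot, (100.0, 0.0))   # 0 degrees
a1 = rotation_angle(pivot, (0.0, 100.0))   # 90 degrees
w = angular_velocity(pivot, (100.0, 0.0), (0.0, 100.0), 30.0)  # 3 deg/s
```

A production version would take the marker positions from a tracker rather than hard-coded points, and would unwrap angles across the ±180° boundary.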
Findings
In the laboratory study, the monitoring data obtained with the selected algorithms were compared with measurements from an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method proved feasible on a real construction project.
Originality/value
The optimal algorithms for bridge swivel construction were identified in a well-controlled laboratory, and the proposed method was verified on an actual project. The proposed CV method is complementary to Electronic Total Station tools, motion sensors and GPS for safety monitoring of bridge swivel construction, and it offers a possible approach that requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.
Pietro Pavone, Paolo Ricci and Massimiliano Calogero
Abstract
Purpose
This paper aims to investigate the literature corpus regarding the potential of big data to improve public decision-making processes and to direct these processes toward the creation of public value. It presents a map of current knowledge in a sample of selected articles and explores the intersections between private-sector data and the public dimension in relation to benefits for society.
Design/methodology/approach
A bibliometric analysis was performed to provide a retrospective review of published content in the past decade in the field of big data for the public interest. This paper describes citation patterns, key topics and publication trends.
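The publication-trend part of such a bibliometric analysis reduces to tallying records per year. A minimal sketch over a hypothetical record list (the records are invented for illustration, not drawn from the study's sample):

```python
from collections import Counter

# Hypothetical bibliographic records: (year, title) pairs from a search export.
records = [
    (2015, "Big data and public value"),
    (2018, "Open data ecosystems"),
    (2018, "Data sharing and accountability"),
    (2021, "Big data for the public good"),
]

per_year = Counter(year for year, _ in records)
trend = sorted(per_year.items())   # [(2015, 1), (2018, 2), (2021, 1)]
```

Citation patterns and key topics need richer fields (cited references, keywords) from the export, but follow the same counting pattern.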
Findings
The findings indicate a propensity in the current literature to deal with the issue of data value creation in the private dimension (data as input to improve business performance or customer relations). Research on data for the public good has so far been underestimated. Evidence shows that big data value creation is closely associated with a collective process in which multiple levels of interaction and data sharing develop between both private and public actors in data ecosystems that pose new challenges for accountability and legitimation processes.
Research limitations/implications
The bibliometric method focuses on academic papers. This paper does not include conference proceedings, books or book chapters. Consequently, a part of the existing literature was excluded from the investigation and further empirical research is required to validate some of the proposed theoretical assumptions.
Originality/value
Although this paper presents the main contents of previous studies, it highlights the need to systematize data-driven private practices for public purposes. This paper offers insights to better understand these processes from a public management perspective.
Anil Kumar Goswami, Anamika Sinha, Meghna Goswami and Prashant Kumar
Abstract
Purpose
This study aims to extend and explore patterns and trends of research in the linkage of big data and knowledge management (KM) by identifying growth in terms of numbers of papers and current and emerging themes and to propose areas of future research.
Design/methodology/approach
The study was conducted by systematically extracting, analysing and synthesizing the literature related to linkage between big data and KM published in top-tier journals in Web of Science (WOS) and Scopus databases by exploiting bibliometric techniques along with theory, context, characteristics, methodology (TCCM) analysis.
Findings
The study unfolds four major themes of linkage between big data and KM research, namely (1) conceptual understanding of big data as an enabler for KM, (2) big data–based models and frameworks for KM, (3) big data as a predictor variable in KM context and (4) big data applications and capabilities. It also highlights TCCM of big data and KM research through which it integrates a few previously reported themes and suggests some new themes.
Research limitations/implications
This study extends advances in the previous reviews by adding a new time line, identifying new themes and helping in the understanding of complex and emerging field of linkage between big data and KM. The study outlines a holistic view of the research area and suggests future directions for flourishing in this research area.
Practical implications
This study highlights the role of big data in KM context resulting in enhancement of organizational performance and efficiency. A summary of existing literature and future avenues in this direction will help, guide and motivate managers to think beyond traditional data and incorporate big data into organizational knowledge infrastructure in order to get competitive advantage.
Originality/value
To the best of the authors’ knowledge, the present study is the first to go deeper into the understanding of big data and KM research using bibliometric and TCCM analysis, and it thus adds a new theoretical perspective to the existing literature.
Bahman Arasteh and Ali Ghaffari
Abstract
Purpose
Reducing the number of generated mutants by clustering redundant mutants, reducing the execution time by decreasing the number of generated mutants and reducing the cost of mutation testing are the main goals of this study.
Design/methodology/approach
In this study, a method is suggested to identify and prune redundant mutants. First, the program source code is analyzed by the developed parser to filter out the effectless instructions; the remaining instructions are then mutated using the standard mutation operators. The single-line mutants are partially executed by the developed instruction evaluator. Next, a clustering method is used to group the single-line mutants that produce the same results, and only one complete run is performed per cluster.
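The clustering idea can be sketched abstractly: mutants whose partial (single-line) evaluation yields the same result fall into one cluster, and only one representative per cluster needs a complete run. A toy illustration (the expression-string mutants and the lambda evaluator are hypothetical stand-ins for the paper's instruction evaluator and mutant generator):

```python
from collections import defaultdict

def cluster_mutants(mutants, eval_line):
    # Group mutants by the result of partially evaluating their mutated line;
    # mutants in the same cluster are indistinguishable at this instruction.
    clusters = defaultdict(list)
    for m in mutants:
        clusters[eval_line(m)].append(m)
    return list(clusters.values())

# Toy example: each "mutant" is a mutated expression; the evaluator computes
# it for a fixed partial-execution state x = 3.
mutants = ["x + 1", "x - 1", "1 + x", "x * 2"]
groups = cluster_mutants(mutants, lambda expr: eval(expr, {"x": 3}))
# "x + 1" and "1 + x" evaluate identically, so only 3 complete runs are needed.
```

The real method evaluates Java instructions with a stack-based evaluator rather than Python's `eval`, but the grouping step is the same.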
Findings
The results of experiments on the Java benchmarks indicate that the proposed method yields a 53.51 per cent reduction in the number of mutants and a 57.64 per cent reduction in execution time compared to similar experiments in the MuJava and MuClipse tools.
Originality/value
The contributions are as follows: developing a classifier that takes the source code of the program and, using a dependency graph, classifies the program's instructions into effective and effectless classes, since filtering out the effectless instructions reduces the total number of mutants generated; developing and implementing an instruction parser and an instruction-level mutant generator for Java programs, where the mutant generator takes an instruction of the original program as a string and generates its single-line mutants based on the standard mutation operators in MuJava; and developing a stack-based evaluator that takes an instruction (original or mutant) and the test data and evaluates its result without executing the whole program.
Temidayo Oluwasola Osunsanmi, Timothy O. Olawumi, Andrew Smith, Suha Jaradat, Clinton Aigbavboa, John Aliu, Ayodeji Oke, Oluwaseyi Ajayi and Opeyemi Oyeyipo
Abstract
Purpose
The study aims to develop a model that supports the application of data science techniques by real estate professionals in the fourth industrial revolution (4IR) era. The 4IR era has given birth to big data sets that are beyond real estate professionals' analysis techniques. This has led to a situation where most real estate professionals rely on their intuition and neglect rigorous analysis for real estate investment appraisals. This heavy reliance on intuition has been responsible for the under-performance of real estate investment, especially in Africa.
Design/methodology/approach
This study utilised a survey questionnaire to randomly source data from real estate professionals. The questionnaire was analysed using a combination of the Statistical Package for the Social Sciences (SPSS) V24 and Analysis of Moment Structures (AMOS) Graphics V27 software. Exploratory factor analysis was employed to break down the variables (drivers) into meaningful dimensions to aid in developing the conceptual framework. The framework was validated using covariance-based structural equation modelling, assessed with fit indices such as discriminant validity, the standardised root mean square residual (SRMR), the comparative fit index (CFI) and the normed fit index (NFI).
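One common factor-retention step in such an exploratory factor analysis is the Kaiser criterion: retain factors whose correlation-matrix eigenvalue exceeds 1. A toy sketch on invented survey data (the items, sample size and two-factor structure are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical survey: 200 respondents rating 6 "driver" items; the first
# three items load on one latent factor, the last three on another.
f1, f2 = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([f1 + 0.3 * rng.normal(size=200) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=200) for _ in range(3)])

# Kaiser criterion: count correlation-matrix eigenvalues greater than 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
n_factors = int(np.sum(eigvals > 1.0))  # recovers the 2 latent dimensions
```

A full EFA would also rotate the loadings and check sampling adequacy; this sketch shows only why the variables collapse into a small number of dimensions.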
Findings
The model revealed that an inclusive educational system, decentralised real estate market and data management system are the major drivers for applying data science techniques to real estate professionals. Also, real estate professionals' application of the drivers will guarantee an effective data analysis of real estate investments.
Originality/value
Numerous studies have called for the adoption of data science techniques by real estate professionals, yet studies on the drivers that will guarantee successful adoption are lacking. The study also proposes a modern form of data analysis for real estate professionals.
Edoardo Trincanato and Emidia Vagnoni
Abstract
Purpose
Business intelligence (BI) systems and tools are deemed a transformative source with the potential to help reshape the way different healthcare organizations’ (HCOs) services are offered and managed. However, this emerging field of research still appears underdeveloped and fragmented. Hence, this paper aims to reconcile, analyze and synthesize different strands of managerial-oriented literature on BI in HCOs and to enhance both theoretical and applied future contributions.
Design/methodology/approach
A literature-based framework was developed to establish and guide a three-stage state-of-the-art systematic literature review (SLR). The SLR was undertaken adopting a hybrid methodology that combines a bibliometric and a content analysis.
Findings
In total, 34 peer-reviewed articles were included. Results revealed significant heterogeneity in theoretical bases and methodological strategies. Nonetheless, the knowledge structure of this research stream appears to comprise five clusters of interconnected topics: (1) decision-making, relevant capabilities and value creation; (2) user satisfaction and quality; (3) process management, organizational change and financial effectiveness; (4) decision-support information, dashboards and key performance indicators; and (5) performance management and organizational effectiveness.
Originality/value
To the authors’ knowledge, this is the first SLR providing a business- and management-related state of the art on the topic. Besides, the paper offers an original framework disentangling future research directions, from each emerged cluster, into issues pertaining to BI implementation, utilization and impact in HCOs. The paper also discusses the need for future contributions to explore possible integrations of BI with emerging data-driven technologies (e.g. artificial intelligence) in HCOs, as well as the role of BI in addressing sustainability challenges.
This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither…
Abstract
Purpose
This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work; instead, the OTV educates itself without human instruction or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems with autonomous operations is accurate perception. Perceptive technology typically focuses on obstacle avoidance, yet at high speeds the vehicle's shock is governed by the roughness of the terrain. The precision needed to recognize difficult terrain is far higher than the accuracy needed to avoid obstacles.
Design/methodology/approach
For the clearance of space debris, the Orbital Transfer Vehicle (OTV) should be a robot that can drive unattended in an unfamiliar environment. In recent years, OTV research has attracted more attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems, such as lane departure warning and intelligent speed adaptation, are eagerly sought by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research basis for advancements in machine learning, computer vision, sensor data fusion, path planning, decision making and intelligent autonomous behavior. In the framework of an autonomous OTV, this study offers a few perceptual technologies for autonomous driving.
Findings
One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.