Search results

1 – 10 of over 11000
Book part
Publication date: 4 April 2024

Ramin Rostamkhani and Thurasamy Ramayah

Abstract

This chapter seeks to use well-known mathematical functions (statistical distribution functions) to evaluate and analyze supply chain network data related to supply chain management (SCM) elements in organizations. In other words, its main purpose is to find the best-fitted statistical distribution functions for SCM data. Explaining how to best fit a statistical distribution function, together with all the relevant aspects of a function for selected SCM components, should make this chapter attractive to production and services experts who want to lead their organizations toward competitive excellence. The core of the chapter is the reliability values obtained from the reliability function via the relevant chart, along with further information extracted from other aspects of statistical distribution functions such as the probability density, cumulative distribution and failure functions. This chapter will turn readers into professional users of statistical distribution functions for analyzing supply chain element data.
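The four function aspects the abstract names (probability density, cumulative distribution, reliability and failure function) can be illustrated with a minimal sketch. The two-parameter Weibull distribution and the parameter values below are assumptions chosen for illustration, not the chapter's fitted results:

```python
import math

def weibull_pdf(t, beta, eta):
    """Probability density f(t) of a two-parameter Weibull distribution."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(-((t / eta) ** beta))

def weibull_cdf(t, beta, eta):
    """Cumulative distribution F(t): probability of failure by time t."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def weibull_reliability(t, beta, eta):
    """Reliability R(t) = 1 - F(t): probability of surviving past time t."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Failure (hazard) rate h(t) = f(t) / R(t)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# With shape beta = 1 the Weibull reduces to the exponential distribution,
# whose reliability at t = eta is exp(-1).
print(weibull_reliability(2.0, beta=1.0, eta=2.0))  # exp(-1)
```

In practice the chapter's "best-fitted" function would be chosen by estimating beta and eta from the SCM data and comparing goodness of fit across candidate distributions.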

Details

The Integrated Application of Effective Approaches in Supply Chain Networks
Type: Book
ISBN: 978-1-83549-631-2

Open Access
Article
Publication date: 4 May 2023

Marco Bettiol, Mauro Capestro, Eleonora Di Maria and Roberto Grandinetti

Abstract

Purpose

This paper aims to investigate the impact of Industry 4.0 (I4.0) technologies on knowledge creation for innovation purposes by assessing the relationships among the variety of I4.0 technologies adopted (breadth I4.0), the penetration of these technologies within the firm’s value chain activities (depth I4.0) and the mediating role of both internal (inter-functional, IF) and external (with knowledge-intensive business services, KIBS) collaborations in this process.

Design/methodology/approach

The study employed a quantitative research design. By administering a survey to entrepreneurs, chief operating officers or managers in charge of the operational and technological processes of Italian manufacturing firms, the authors collected 137 usable questionnaires. To test the study's theoretical framework and hypotheses, the authors ran regression and mediation analyses.
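The regression-and-mediation logic described above can be sketched with a toy example. The variable names and data here are illustrative stand-ins, not the authors' dataset or models; the mediator coefficient is recovered via the Frisch–Waugh residual trick, which gives the same answer as a multiple regression of the outcome on predictor and mediator:

```python
def slope(x, y):
    """Ordinary-least-squares slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after removing its linear dependence on x."""
    b = slope(x, y)
    n = len(x)
    a = sum(y) / n - b * sum(x) / n
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

X = [0, 1, 2, 3, 4, 5]                                        # e.g. breadth of I4.0 adoption
M = [2 * xi + e for xi, e in zip(X, [1, -1, 1, -1, 1, -1])]   # mediator: collaboration
Y = [3 * mi for mi in M]                                      # outcome driven by the mediator

a_path = slope(X, M)                                          # X -> M path
# Frisch-Waugh: the slope of Y-residuals on M-residuals (both purged of X)
# equals the coefficient of M in the multiple regression of Y on X and M.
b_path = slope(residuals(X, M), residuals(X, Y))              # M -> Y path
indirect = a_path * b_path                                    # mediated (indirect) effect
```

With the toy data the mediator fully transmits the effect, so the indirect effect equals the product of the two paths; real mediation analyses would add inference (e.g. bootstrapped confidence intervals) on that product.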

Findings

First, the results highlight the positive link between breadth I4.0 and depth I4.0. Moreover, the results show the key role played by increased collaboration among the firm’s business functions and by relationships with KIBS in creating knowledge to innovate processes and products when I4.0 technologies are adopted.

Research limitations/implications

The variety of I4.0 technologies adopted enables a firm to use such technologies in various value chain activities. However, the penetration of I4.0 into the firm’s value chain activities (depth I4.0) does not per se directly imply the production of new knowledge, for which a firm needs internal collaboration among different business functions, in particular with the production area, or collaboration with external partners that favor I4.0 implementation, such as KIBS.

Practical implications

To achieve innovation goals by creating new knowledge, especially in the manufacturing industries, firms should encourage internal and external collaboration when I4.0 technologies are adopted. Moreover, policy makers should not only consider fiscal incentives for the adoption of such technologies, but also encourage the building of networks between adopting firms and external actors.

Originality/value

The study is one of the first attempts to provide empirical evidence of how I4.0 enables the creation of knowledge to innovate processes and products, highlighting the relevance of collaboration both within the company and with external partners.

Details

European Journal of Innovation Management, vol. 26 no. 7
Type: Research Article
ISSN: 1460-1060

Article
Publication date: 30 January 2024

Mahnaz Ensafi, Walid Thabet and Deniz Besiktepe

Abstract

Purpose

The aim of this paper was to study current practices in FM work order processing to support and improve decision-making. Processing and prioritizing work orders constitute a critical part of facilities and maintenance management practices given the large amount of work orders submitted daily. User-driven approaches (UDAs) are currently more prevalent for processing and prioritizing work orders but have challenges including inconsistency and subjectivity. Data-driven approaches can provide an advantage over user-driven ones in work-order processing; however, specific data requirements need to be identified to collect and process the functional data needed while achieving more consistent and accurate results.

Design/methodology/approach

This paper presents the findings of an online survey conducted with facility management (FM) experts who are directly or indirectly involved in processing work orders in building maintenance.

Findings

The findings reflect the current practices of 71 survey participants regarding data requirements, criteria selection and ranking, along with current shortcomings and challenges in prioritizing work orders. In addition, differences between criteria and their rankings across participants’ experience, facility types and facility sizes are investigated. The findings provide a snapshot of current practice in FM work order processing, which aids in developing a comprehensive framework to support data-driven decision-making and address the challenges of UDAs.

Originality/value

Although previous studies have explored the use of selected criteria for processing and prioritizing work orders, this paper investigated a comprehensive list of criteria used by various facilities for processing work orders. Furthermore, previous studies are focused on the processing and prioritization stage, whereas this paper explored the data collected following the completion of the maintenance tasks and the benefits it can provide for processing future work orders. In addition, previous studies have focused on one specific stage of work order processing, whereas this paper investigated the common data between different stages of work order processing for enhanced FM.

Details

Facilities, vol. 42 no. 5/6
Type: Research Article
ISSN: 0263-2772

Article
Publication date: 8 November 2023

Sarah Amber Evans, Lingzi Hong, Jeonghyun Kim, Erin Rice-Oyler and Irhamni Ali

Abstract

Purpose

Data literacy empowers college students, equipping them with essential skills necessary for their personal lives and careers in today’s data-driven world. This study aims to explore how community college students evaluate their data literacy and further examine demographic and educational/career advancement disparities in their self-assessed data literacy levels.

Design/methodology/approach

An online survey presenting a data literacy self-assessment scale was distributed and completed by 570 students at four community colleges. Statistical tests were performed between the data literacy factor scores and students’ demographic and educational/career advancement variables.
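The group comparisons reported below (e.g. male vs. female self-ratings) rest on statistical tests between factor scores. A minimal sketch of one such test, Welch's t for two independent groups with unequal variances, is shown here; the scores are hypothetical, not the study's data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # unbiased sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical self-assessed data literacy factor scores for two groups
group_a = [3.8, 4.1, 3.9, 4.3, 4.0]
group_b = [3.5, 3.6, 3.9, 3.4, 3.7]
t = welch_t(group_a, group_b)  # positive t -> group_a rated itself higher on average
```

A real analysis would also compute the Welch–Satterthwaite degrees of freedom and a p-value, and would use ANOVA-style tests for comparisons across more than two groups (e.g. the four age groups).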

Findings

Male students rated their data literacy skills higher than females. The 18–19 age group has relatively lower confidence in their data literacy scores than other age groups. High school graduates do not feel proficient in data literacy to the level required for college and the workplace. Full-time employed students demonstrate more confidence in their data literacy than part-time and nonemployed students.

Originality/value

Given the lack of research on community college students’ data literacy, the findings of this study can be valuable in designing and implementing data literacy training programs for different groups of community college students.

Details

Information and Learning Sciences, vol. 125 no. 3/4
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 8 January 2024

Morteza Mohammadi Ostani, Jafar Ebadollah Amoughin and Mohadeseh Jalili Manaf

Abstract

Purpose

This study aims to adjust Thesis-type properties on Schema.org using metadata models and standards (MS) (Bibframe, electronic thesis and dissertations [ETD]-MS, Common European Research Information Format [CERIF] and Dublin Core [DC]) to enrich the Thesis-type properties for better description and processing on the Web.

Design/methodology/approach

This study is applied and descriptive-analytical in nature, based on content analysis as its method. The research population consisted of the elements and attributes of the metadata models and standards (Bibframe, ETD-MS, CERIF and DC) and the Thesis-type properties on Schema.org. The data collection tool was a researcher-made checklist, and the data collection method was structured observation.

Findings

The results show that the 65 Thesis-type properties, together with the two parent levels Thing and CreativeWork on Schema.org, correspond to the elements and attributes of the related models and standards. In addition, 12 properties specific to the Thesis type enable more comprehensive description and processing, and 27 properties are added to the CreativeWork type.

Practical implications

Enriching and expanding the Thesis-type properties on Schema.org is one of the practical applications of the present study; doing so enables more comprehensive description and processing and increases access points and visibility for ETDs in the Web environment and digital libraries.

Originality/value

This study offers some new Thesis-type properties and CreativeWork levels on Schema.org. To the best of the authors’ knowledge, this is the first time this issue has been investigated.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 23 March 2023

Mohd Naz’ri Mahrin, Anusuyah Subbarao, Suriayati Chuprat and Nur Azaliah Abu Bakar

Abstract

Purpose

Cloud computing promises dependable services offered through next-generation data centres based on virtualization technologies for computation, network and storage. Big data applications have been made viable by cloud computing technologies due to the tremendous expansion of data. Disaster management is one of the areas where big data applications are being rapidly deployed. This study looks at how big data is being used in conjunction with cloud computing to strengthen disaster risk reduction (DRR). This paper aims to explore and review existing frameworks for big data used in disaster management and to provide an insightful view of how cloud-based big data platforms are applied to DRR.

Design/methodology/approach

A systematic mapping study was conducted to answer four research questions, covering papers on big data analytics, cloud computing and disaster management published between 2013 and 2019. A total of 26 papers were finalised after the five steps of systematic mapping.

Findings

Findings are presented for each research question.

Research limitations/implications

Studies of big data platforms applied to disaster management in general are still limited. This gap leaves the field open for further research.

Practical implications

In terms of technology, DRR research that leverages existing big data platforms is still lacking. In terms of data, a great deal of disaster data is available, but scientists still struggle to learn from and listen to the data and to adopt more proactive disaster preparedness.

Originality/value

This study shows that the platform most frequently selected by researchers is the central processing unit (CPU)-based Apache Hadoop. Apache Spark, which uses in-memory processing, requires a large memory capacity and is therefore less preferred in research.

Details

Journal of Science and Technology Policy Management, vol. 14 no. 6
Type: Research Article
ISSN: 2053-4620

Article
Publication date: 16 April 2024

Sonali Khatua, Manoranjan Dash and Padma Charan Mishra

Abstract

Purpose

Ores and minerals are extracted from the earth’s crust depending on the type of deposit. Iron ore mines come under massive deposit patterns and have their own mine development and life cycles. This study aims to depict the development and life cycle of large open-pit iron ore mines and the intertwined organizational design of the departments/sections operated within the industry.

Design/methodology/approach

Primary data were collected on the site by participant observation, in-depth interviews of the field staff and executives, and field notes. Secondary data were collected from the literature review to compare and cite similar or previous studies on each mining activity. Finally, interactions were conducted with academic experts and top field executives to validate the findings. An organizational ethnography methodology was employed to study and analyse four large-scale iron ore mines of India’s largest iron-producing state, Odisha, from January to April 2023.

Findings

Six stages of mine development and life cycle were observed, and the operations are depicted in a schematic diagram for ease of understanding. The intertwined functioning of the organizational set-up is also uncovered.

Originality/value

The paper will benefit entrepreneurs, mining and geology students, new recruits, and professionals in allied services linked to large iron ore mines. It offers valuable insights for knowledge enhancement, operational manual preparation and further research endeavours.

Details

Journal of Organizational Ethnography, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6749

Article
Publication date: 24 October 2022

Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao

Abstract

Purpose

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents, providing accurate traffic information prediction and increasing overall transportation quality. It also helps reduce overall carbon dioxide emissions in the environment.

Design/methodology/approach

This study offered a real-time traffic model based on the analysis of numerous sensor data. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporated data from road sensors as well as a variety of other sources. Capturing and processing large amounts of sensor data in real time is difficult: sensor data is consumed by streaming analytics platforms built on big data technologies and is then processed using a range of deep learning and machine learning techniques.

Findings

The study presented in this paper fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things sensor data and other data sources. The method can also assist organizations such as transit agencies and public safety departments in making strategic decisions by being incorporated into their platforms.

Research limitations/implications

The model's main limitation is that its predictions for the period after January 2020 are not particularly accurate. This, however, reflects not a flaw in the model but the Covid-19 pandemic, which impacted the traffic scenario and produced erratic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model’s ability to produce accurate forecasts.

Practical implications

To help users choose when to travel, this study aimed to pinpoint the causes of traffic congestion on Bay Area highways and to forecast real-time traffic speeds. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and used multiple models. The authors developed a model that can forecast traffic speed while accounting for outside variables such as weather and incident data, with decent accuracy and generalizability. To help users determine traffic congestion at a certain location on a specific day, the forecast method uses a graphical user interface, designed to be readily expanded as the project’s scope and usefulness grow. The Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. The authors obtained excellent results by using five years of data (2015–2019) to train the models and forecasting outcomes for 2020 data; the algorithm produced highly accurate predictions when tested on data from January 2020. The benefits of this model include accurate traffic speed forecasts for California’s four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a certain date. The scalable model performs better than the vast majority of earlier models created by other scholars in the field. This initiative could be expanded to the full state of California, assisting the government in better planning and implementing new transportation projects.
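The evaluation protocol described above (train on 2015–2019, test on 2020) can be sketched minimally. The records and the historical-mean baseline below are illustrative stand-ins for the authors' PeMS features and learned models:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between observed and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# (year, observed speed in mph) - hypothetical values, not PeMS data
records = [
    (2015, 62.0), (2016, 61.5), (2017, 60.8), (2018, 61.2), (2019, 60.5),
    (2020, 58.0), (2020, 59.5),
]

# Time-based split: never let future observations leak into training
train = [speed for year, speed in records if year <= 2019]
test = [speed for year, speed in records if year == 2020]

baseline = sum(train) / len(train)                    # historical-mean "model"
mae = mean_absolute_error(test, [baseline] * len(test))
```

Any learned model (e.g. one that also consumes weather and incident features) would be evaluated the same way, and should beat this naive baseline's error on the held-out 2020 period.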

Social implications

To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.

Originality/value

The proposed work allows users to choose the optimum time and mode of transportation. The underlying idea behind this model is that the more time a car spends on the road, the more it contributes to traffic congestion, so the proposed system encourages users to reach their destination in a short period of time. Congestion is an indicator that public transportation needs to be expanded. The optimum route is compared to other kinds of public transit using this methodology (Greenfield, 2014). If the commute time is comparable to that of private car transportation during peak hours, consumers should take public transportation.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 3 April 2024

Mike Brookbanks and Glenn C. Parry

Abstract

Purpose

This study aims to examine the effect of Industry 4.0 technology on resilience in established cross-border supply chain(s) (SC).

Design/methodology/approach

A literature review provides insight into the resilience capabilities of cross-border SC. The research uses a case study of an operational international SC: the producers, importers, logistics companies and UK Government (UKG) departments. Semi-structured interviews determine the resilience capabilities and approaches of participants within cross-border SC and how implementing an Industry 4.0 Internet of Things (IoT) and distributed ledger (blockchain)-based technology platform changes SC resilience capabilities and approaches.

Findings

A blockchain-based platform introduces common assured data, reducing data duplication. When combined with IoT technology, the platform improves end-to-end SC visibility and information sharing. Industry 4.0 technology builds collaboration, trust, improved agility, adaptability and integration. It enables common resilience capabilities and approaches that reduce the de-coupling between government agencies and participants of cross-border SC.

Research limitations/implications

The case study presents challenges specific to UKG’s customs border operations; research needs to be repeated in different contexts to confirm findings are generalisable.

Practical implications

Operational SC and UKG customs and excise departments must align their resilience strategies to gain full advantage of Industry 4.0 technologies.

Originality/value

Case study research shows how Industry 4.0 technology reduces the de-coupling between the SC and UKG, enhancing common resilience capabilities within established cross-border operations. Improved information sharing and SC visibility provided by IoT and blockchain technologies support the development of resilience in established cross-border SC and enhance interactions with UKG at the customs border.

Details

Supply Chain Management: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1359-8546

Article
Publication date: 24 March 2023

Francesco Caputo, Barbara Keller, Michael Möhring, Luca Carrubbo and Rainer Schmidt

Abstract

Purpose

In recognising the key role of business intelligence and big data analytics in influencing companies’ decision-making processes, this paper aims to codify the main phases through which companies can approach, develop and manage big data analytics.

Design/methodology/approach

By adopting a research strategy based on case studies, this paper depicts the main phases and challenges that companies “live” through in approaching big data analytics as a way to support their decision-making processes. Case study analysis was chosen as the main research method because it allows multiple data sources to describe a phenomenon and, subsequently, theories to be developed and tested.

Findings

This paper provides a possible depiction of the main phases and challenges through which the approach(es) to big data analytics can emerge and evolve over time with reference to companies’ decision-making processes.

Research limitations/implications

This paper draws researchers’ attention to the need to define clear patterns through which technology-based approaches should be developed. In depicting the main phases of developing big data analytics in companies’ decision-making processes, it highlights the possible domains in which to define and renew approaches to value. The proposed conceptual model derives from an inductive approach; although discussed and questioned through multiple case studies, its generalisability requires further analysis in light of alternative interpretative perspectives.

Practical implications

The reflections herein offer practitioners interested in company management the possibility to develop performance measurement tools that can evaluate how each phase can contribute to companies’ value creation processes.

Originality/value

This paper contributes to the ongoing debate about the role of digital technologies in influencing managerial and social models. This paper provides a conceptual model that is able to support both researchers and practitioners in understanding through which phases big data analytics can be approached and managed to enhance value processes.

Details

Journal of Knowledge Management, vol. 27 no. 10
Type: Research Article
ISSN: 1367-3270
