Search results
1 – 10 of over 3000
Mahnaz Ensafi, Walid Thabet and Deniz Besiktepe
Abstract
Purpose
The aim of this paper was to study current practices in FM work order processing to support and improve decision-making. Processing and prioritizing work orders constitute a critical part of facilities and maintenance management practice, given the large volume of work orders submitted daily. User-driven approaches (UDAs) are currently more prevalent for processing and prioritizing work orders but suffer from inconsistency and subjectivity. Data-driven approaches can offer an advantage over user-driven ones in work-order processing; however, specific data requirements must be identified so that the necessary functional data can be collected and processed while achieving more consistent and accurate results.
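A minimal sketch can illustrate the kind of data-driven prioritization the paper argues for. The criteria, weights and work orders below are hypothetical illustrations, not the authors' framework:

```python
# Hypothetical data-driven work-order prioritization: each work order is
# scored as a weighted sum of normalized criteria, then sorted by score.
# Criteria names and weights are invented for illustration only.

CRITERIA_WEIGHTS = {
    "safety_risk": 0.4,        # risk to occupants if unresolved (0-1)
    "asset_criticality": 0.3,  # importance of the affected asset (0-1)
    "occupant_impact": 0.2,    # severity of impact on users (0-1)
    "age_days": 0.1,           # normalized age of the work order (0-1)
}

def priority_score(work_order: dict) -> float:
    """Weighted sum of normalized criteria; higher means more urgent."""
    return sum(weight * work_order.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

def prioritize(work_orders: list[dict]) -> list[dict]:
    """Return work orders sorted most-urgent first."""
    return sorted(work_orders, key=priority_score, reverse=True)

orders = [
    {"id": "WO-1", "safety_risk": 0.9, "asset_criticality": 0.5,
     "occupant_impact": 0.7, "age_days": 0.2},
    {"id": "WO-2", "safety_risk": 0.1, "asset_criticality": 0.9,
     "occupant_impact": 0.3, "age_days": 0.8},
]
ranked = prioritize(orders)
```

Unlike a user-driven judgment call, the same inputs always yield the same ranking, which is the consistency advantage the paper attributes to data-driven approaches.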
Design/methodology/approach
This paper presents the findings of an online survey conducted with facility management (FM) experts who are directly or indirectly involved in processing work orders in building maintenance.
Findings
The findings reflect the current practices of 71 survey participants regarding data requirements, criteria selection and rankings, along with current shortcomings and challenges in prioritizing work orders. In addition, differences in criteria and their rankings across participants' experience levels, facility types and facility sizes are investigated. The findings provide a snapshot of current practices in FM work order processing, which aids in developing a comprehensive framework to support data-driven decision-making and address the challenges of UDAs.
Originality/value
Although previous studies have explored the use of selected criteria for processing and prioritizing work orders, this paper investigated a comprehensive list of criteria used by various facilities. Furthermore, previous studies focused on the processing and prioritization stage, whereas this paper explored the data collected after completion of maintenance tasks and the benefits they can provide for processing future work orders. In addition, previous studies focused on one specific stage of work order processing, whereas this paper investigated the data common to different stages of work order processing for enhanced FM.
Sarah Amber Evans, Lingzi Hong, Jeonghyun Kim, Erin Rice-Oyler and Irhamni Ali
Abstract
Purpose
Data literacy empowers college students, equipping them with essential skills necessary for their personal lives and careers in today’s data-driven world. This study aims to explore how community college students evaluate their data literacy and further examine demographic and educational/career advancement disparities in their self-assessed data literacy levels.
Design/methodology/approach
An online survey presenting a data literacy self-assessment scale was distributed and completed by 570 students at four community colleges. Statistical tests were performed between the data literacy factor scores and students’ demographic and educational/career advancement variables.
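The kind of group comparison described above can be sketched with a Welch t-statistic, which does not assume equal group variances. The scores below are hypothetical illustrations, not the survey's data:

```python
import math

def welch_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical self-assessed data literacy factor scores (1-5 scale)
group_a = [3.8, 4.1, 3.5, 4.0, 3.9]  # e.g. full-time employed students
group_b = [3.1, 3.4, 2.9, 3.3, 3.0]  # e.g. nonemployed students
t = welch_t(group_a, group_b)
```

A large positive t indicates that group A rates its data literacy higher than group B; in practice the statistic would be compared against a t-distribution to obtain a p-value.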
Findings
Male students rated their data literacy skills higher than female students did. The 18–19 age group reported relatively lower confidence in their data literacy than other age groups. High school graduates do not feel proficient in data literacy at the level required for college and the workplace. Full-time employed students demonstrated more confidence in their data literacy than part-time and nonemployed students.
Originality/value
Given the lack of research on community college students’ data literacy, the findings of this study can be valuable in designing and implementing data literacy training programs for different groups of community college students.
Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao
Abstract
Purpose
The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents, predicting traffic information accurately and increasing overall transportation quality, while also helping to reduce overall carbon dioxide emissions in the environment.
Design/methodology/approach
This study offered a real-time traffic model based on the analysis of data from numerous sensors. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporated data from road sensors as well as a variety of other sources. Capturing and processing large amounts of sensor data in real time is difficult, so sensor data are ingested by streaming analytics platforms built on big data technologies and then processed using a range of deep learning and machine learning techniques.
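The streaming step described above can be sketched as a rolling window of speeds per lane. The threshold, lane names and readings are hypothetical; the study's actual pipeline uses big data platforms and deep learning models rather than this simple rule:

```python
from collections import defaultdict, deque

# Minimal sketch of a streaming pass over road-sensor readings: keep a
# rolling window of recent speeds per lane and flag congestion when the
# windowed mean drops below a threshold. All values are illustrative.

WINDOW = 3            # readings kept per lane
CONGESTED_MPH = 30.0  # windowed mean below this flags congestion

windows: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(lane: str, speed_mph: float) -> bool:
    """Add one reading; return True if the lane is currently congested."""
    w = windows[lane]
    w.append(speed_mph)
    return sum(w) / len(w) < CONGESTED_MPH

readings = [("101-N", 62.0), ("101-N", 28.0), ("101-N", 20.0),
            ("101-N", 18.0), ("880-S", 55.0)]
status = {lane: ingest(lane, speed) for lane, speed in readings}
```

The bounded deque means memory stays constant per lane no matter how long the stream runs, which is the essential property of streaming analytics over unbounded sensor feeds.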
Findings
The study presented in this paper fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things (IoT) sensor data and other data sources. Organizations such as transit agencies and public safety departments can also incorporate this method into their platforms to support strategic decisions.
Research limitations/implications
A notable limitation of the model is that its predictions for the period after January 2020 are not particularly accurate. This, however, reflects not a weakness of the model itself but the disruption caused by the Covid-19 pandemic, which produced erratic traffic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model's ability to produce accurate forecasts.
Practical implications
To help users choose when to travel, this study aimed to pinpoint the causes of traffic congestion on Bay Area highways and to forecast real-time traffic speeds. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and evaluated multiple models. The resulting model forecasts traffic speed while accounting for external variables such as weather and incident data, with good accuracy and generalizability. To help users determine congestion at a given location on a specific day, the forecast method provides a graphical user interface, designed to be readily extended as the project's scope and usefulness grow. The Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. Training the models on five years of data (2015–2019) and forecasting outcomes for 2020 produced excellent results, and the algorithm was highly accurate when tested on data from January 2020. The model delivers accurate traffic speed forecasts for four main California freeways (101, I-680, 880 and 280) for a specific place on a given date, and the scalable model outperforms the vast majority of earlier models created by other scholars in the field. Extending the programme across the entire state of California would help the government better plan and implement new transportation projects.
Social implications
To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.
Originality/value
The proposed work allows users to choose the optimum time and mode of transportation. The underlying idea is that the longer a car spends on the road, the more it contributes to congestion, so the proposed system helps users reach their destination quickly. Congestion is also an indicator that public transportation needs to be expanded. Using this methodology, the optimum route is compared against available public transit options (Greenfield, 2014); if public transport commute times during peak hours are comparable to those of private car travel, consumers should take public transportation.
Dimitrios Kafetzopoulos, Spiridoula Margariti, Chrysostomos Stylios, Eleni Arvaniti and Panagiotis Kafetzopoulos
Abstract
Purpose
The objective of this study is to improve food supply chain performance by taking into consideration the fundamental concepts of traceability and combining current frameworks, their principles and implications, and emerging technologies.
Design/methodology/approach
A narrative literature review of existing empirical research on traceability systems was conducted, yielding 862 relevant papers. Following a step-by-step sampling process, the authors arrived at a final sample of 46 papers for the review.
Findings
The main findings of this study include the various descriptions of the architecture of traceability systems, the different sources enabling this practice, the common desirable attributes, and the enabling technologies for the deployment and implementation of traceability systems. Moreover, several technological solutions are presented, which are currently available for traceability systems, and finally, opportunities for future research are provided.
Practical implications
The study provides insights that could inform the implementation of traceability in the food supply chain and, consequently, the effective management of a food traceability system (FTS). Managers will be able to create a traceability system that meets users' requirements, thus enhancing the value of products and food companies.
Originality/value
This study contributes to the food supply chain and the traceability systems literature by creating a holistic picture of where something has been and where it should go. It is a starting point for each food company to design and manage its traceability system more effectively.
Aamir Rashid, Rizwana Rasheed, Abdul Hafaz Ngah, Mahawattage Dona Ranmali Pradeepa Jayaratne, Samar Rahi and Muhammad Nawaz Tunio
Abstract
Purpose
Supply chain (SC) management is more challenging than ever. Significantly, the pandemic has provoked global and economic destruction that appeared in the manufacturing industry as a “black swan.” Therefore, the purpose of this study was to examine the role of information processing and digital supply chain in supply chain resilience through supply chain risk management.
Design/methodology/approach
This study examines SC risk management and resilience from an information processing theory perspective. Data collected from 251 SC professionals in the manufacturing industry were analyzed quantitatively using partial least squares structural equation modeling (PLS-SEM). To confirm the higher-order measurement model, the authors used SmartPLS version 4.
Findings
This study found that information processing capability (with disruptive orientation and visibility as higher-order components) and digital SC significantly and positively affect SC risk management and resilience. Similarly, SC risk management positively mediates the effects of information processing capability and digital SC on SC resilience. However, information processing capability was found to have a more substantial effect on SC risk management than digital SC.
Research limitations/implications
This study has both academic and practical contributions. It extends existing information processing theory, and, by recognizing the pivotal role of the study variables in risk management for a resilient SC, manufacturing firms can improve their performance through proactive responses to SC disruptions.
Originality/value
The conceptual model of this study is based on information processing theory, which asserts that synchronizing information processing capabilities and digital SCs allows a firm to deal with unplanned events. SC disruption orientation and visibility are considered risk controllers as they allow the firms to be more proactive. An integrated model of conceptualizing the disruption orientation, visibility (higher-order) and digital SC with information processing theory makes this research novel.
Fred Kyagante, Benjamin Tukamuhabwa, Joel Ngobi Makepu, Henry Mutebi and Colline Waiswa
Abstract
Purpose
This paper aims to investigate the relationship between information technology (IT) capabilities, information integration and supply chain resilience within the context of a developing country.
Design/methodology/approach
Employing a structured questionnaire survey, the study collected cross-sectional data from 205 agro-food processing firms in Uganda, drawn from a sample of 248. The data were subsequently analyzed using SPSS version 27 to validate the hypothesized relationships.
Findings
The study findings revealed that IT capabilities and information integration are positively and significantly associated with supply chain resilience. Moreover, it established a positive and significant link between IT capabilities and information integration. The results further revealed that IT capabilities and information integration together account for 62.2% of the variance in supply chain resilience (SCRES) in agro-food processing firms in Uganda. Notably, the findings revealed the partial mediating role of information integration, addressing the need to understand the mechanisms through which IT capabilities influence SCRES.
Research limitations/implications
First, the study used a cross-sectional design, which makes it difficult to test causality. Some of the study variables need to be studied over time owing to their inherently behavioral elements, such as collaboration and information sharing; future research that collects longitudinal data on these variables would add value to the findings. Second, the study was limited to agro-food processing firms in the Ugandan districts of Kampala, Wakiso, Mukono and Jinja. Further research in other sectors, such as the service industry, and in other geographical locations in Uganda and other developing economies would provide more generality. Third, the study was based on IT capabilities, information integration and supply chain resilience; other variables that affect supply chain resilience, such as business continuity planning strategy, interactions between teams within an organization in building resilience, supply chain velocity, system orientation and flexibility, could be interesting for further research.
Practical implications
Managers are advised to motivate their IT-related personnel. Efficient use of IT systems by staff, especially those who are skillful at self-study, enhances their ability to respond to disruptions and thus enhances SCRES. Additionally, to obtain feedback from supply chain stakeholders, agro-food processing firms should assess the quality of their supply chain services by using IT capabilities and integrating their information.
Originality/value
This study contributes to the existing literature by adopting an information processing perspective to provide an empirical understanding of IT capabilities and information integration as key resources and capabilities essential for information processing in building SCRES. Furthermore, the study introduces the novel insight of the mediating role of information integration as a pathway through which IT capabilities enhance SCRES in agro-food processing firms in Uganda.
Sena Başak, İzzet Kılınç and Aslıhan Ünal
Abstract
Purpose
The purpose of this paper is to examine the contribution of big data in the transforming process of an IT firm to a learning organization.
Design/methodology/approach
The authors adopted a qualitative research approach to define and interpret the ideas and experiences of the IT firm's employees and present them directly to readers. For this purpose, they followed a single-case study design, examining a small and medium-sized enterprise operating in the IT sector in Düzce province, Turkey. Semi-structured interviews and document analysis served as the data collection methods. In all, eight interviews were conducted with employees, and brochures and the organization's website were used as data sources for the document analysis.
Findings
Through in-depth interviews and document analysis, the authors formed five main themes describing the perception of big data and learning organization concepts, the methods and practices adopted in the transformation process, the usage areas of big data in the organization and how the sample organization uses big data as a learning organization. The findings show that the sample organization is a learning IT firm that has used big data both in transforming into a learning organization and in maintaining its learning culture.
Research limitations/implications
The findings contribute to the literature as one of the first studies to examine the influence of big data on an IT firm's transformation into a learning organization. They reveal that IT firms benefit from big data solutions while learning. However, as the research design is a single-case study, the findings may be specific to the sample organization; future studies should examine the subject with different samples and research designs.
Originality/value
In the literature, research on how IT firms' managers and employees use big data in the organizational learning process is limited. The authors expect this paper to shed light on future research examining the effect of big data on organizational learning.
Armando Calabrese, Antonio D'Uffizi, Nathan Levialdi Ghiron, Luca Berloco, Elaheh Pourabbas and Nathan Proudlove
Abstract
Purpose
The primary objective of this paper is to show a systematic and methodological approach for the digitalization of critical clinical pathways (CPs) within the healthcare domain.
Design/methodology/approach
The methodology entails the integration of service design (SD) and action research (AR) methodologies, characterized by iterative phases that systematically alternate between action and reflective processes, fostering cycles of change and learning. Within this framework, stakeholders are engaged through semi-structured interviews, while the existing and envisioned processes are delineated and represented using BPMN 2.0. These methodological steps emphasize the development of an autonomous, patient-centric web application alongside the implementation of an adaptable and patient-oriented scheduling system. Also, business process simulation is employed to measure key performance indicators of the processes and to test potential improvements. This method is implemented in the context of the CP addressing transient loss of consciousness (TLOC), within a publicly funded hospital setting.
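The business process simulation step can be sketched as a deterministic single-server queue that measures patient waiting time, a typical process KPI. The arrival and service times below are hypothetical, not measurements from the TLOC pathway:

```python
# Sketch of a single diagnostic step as a first-come first-served queue
# with one server and a fixed service time. Waiting time per patient is
# the KPI; all numbers are illustrative.

def simulate_fifo(arrivals: list[float], service_time: float) -> list[float]:
    """Return each patient's waiting time under first-come first-served."""
    waits = []
    server_free_at = 0.0
    for t in sorted(arrivals):
        start = max(t, server_free_at)  # wait until the server is free
        waits.append(start - t)
        server_free_at = start + service_time
    return waits

waits = simulate_fifo(arrivals=[0.0, 5.0, 6.0, 30.0], service_time=10.0)
avg_wait = sum(waits) / len(waits)
```

A fuller business process simulation would draw arrival and service times from fitted distributions and model multiple resources, but the same bookkeeping (arrival, start, completion) underlies those tools as well.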
Findings
The methodology integrating SD and AR enables the detection of pivotal bottlenecks within diagnostic CPs and proposes optimal corrective measures to ensure uninterrupted patient care, all the while advancing the digitalization of diagnostic CP management. This study contributes to theoretical discussions by emphasizing the criticality of process optimization, the transformative potential of digitalization in healthcare and the paramount importance of user-centric design principles, and offers valuable insights into healthcare management implications.
Originality/value
The study's relevance lies in its ability to enhance healthcare practices without necessitating disruptive and resource-intensive process overhauls. This pragmatic approach aligns with the imperative for healthcare organizations to improve their operations efficiently and cost-effectively.
Daria Arkhipova, Marco Montemari, Chiara Mio and Stefano Marasca
Abstract
Purpose
This paper aims to critically examine the accounting and information systems literature to understand the changes that are occurring in the management accounting profession. The changes the authors are interested in are linked to technology-driven innovations in managerial decision-making and in organizational structures. In addition, the paper highlights research gaps and opportunities for future research.
Design/methodology/approach
The authors adopted a grounded theory literature review method (Wolfswinkel et al., 2013) to achieve the study’s aims.
Findings
The authors identified four research themes that describe the changes in the management accounting profession due to technology-driven innovations: structured vs unstructured data, human vs algorithm-driven decision-making, delineated vs blurred functional boundaries and hierarchical vs platform-based organizations. The authors also identified tensions mentioned in the literature for each research theme.
Originality/value
Previous studies display a rather narrow focus on the role of digital technologies in accounting work and new competences that management accountants require in the digital era. By contrast, the authors focus on the broader technology-driven shifts in organizational processes and structures, which vastly change how accounting information is collected, processed and analyzed internally to support managerial decision-making. Hence, the paper focuses on how management accountants can adapt and evolve as their organizations transition toward a digital environment.
Hanuman Reddy N., Amit Lathigara, Rajanikanth Aluvalu and Uma Maheswari V.
Abstract
Purpose
Cloud computing (CC) refers to the use of virtualization technology to share computing resources over the internet. Task scheduling (TS) assigns computational resources to requests with a high volume of pending processing. CC relies on load balancing to ensure that resources such as servers and virtual machines (VMs) running on real servers share the same amount of load. VMs are an important part of virtualization, in which physical servers are transformed into VMs that act as physical servers. A user's request or data transmission in a cloud data centre may cause a VM to become under- or overloaded.
Design/methodology/approach
With a large number of VMs or jobs, existing methods produce long makespans and become very difficult to manage, so a new approach that balances cloud loads without increasing implementation time or resource consumption is needed. In this research, equilibrium optimization is first used to cluster the VMs into underloaded and overloaded groups. In the second stage, underloaded VMs are used to improve load balance and resource utilization. A hybrid of the BAT and artificial bee colony (ABC) algorithms then performs TS in a multi-objective system. The VM manager makes migration decisions to balance load among physical machines (PMs): when one PM is overburdened and another is underburdened, VMs are migrated under the appropriate conditions, achieving balanced load and reduced energy usage in the PMs. Manta ray foraging (MRF) is used to select VMs for migration, basing its decisions on a variety of factors.
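The first clustering stage can be sketched with a simple mean-based rule. Note that the paper uses equilibrium optimization rather than this threshold heuristic, and the VM names and loads below are hypothetical:

```python
# Sketch of splitting VMs into underloaded and overloaded groups by
# comparing each VM's load to a margin around the mean load. This is a
# simple stand-in for the equilibrium optimization clustering described
# above; names, loads and the margin are illustrative.

def classify_vms(loads: dict[str, float], margin: float = 0.2):
    """Return (underloaded, overloaded) VM sets relative to the mean load."""
    mean = sum(loads.values()) / len(loads)
    under = {vm for vm, load in loads.items() if load < mean * (1 - margin)}
    over = {vm for vm, load in loads.items() if load > mean * (1 + margin)}
    return under, over

loads = {"vm1": 0.9, "vm2": 0.1, "vm3": 0.5, "vm4": 0.5}
under, over = classify_vms(loads)
```

VMs in neither set are considered balanced; a scheduler would then shift work from the overloaded set toward the underloaded set, as the second stage above does.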
Findings
The proposed approach provides the best possible scheduling for both VMs and PMs. Task completion times are 42 s for the improved whale optimization algorithm for cloud TS, 48 s for the enhanced multi-verse optimizer, 50 s for hybrid electro search with a genetic algorithm and 38 s for adaptive benefit factor-based symbiotic organisms search, compared with 30 s for the proposed model, demonstrating its better performance.
Originality/value
A user's request or data transmission in a cloud data centre may cause VMs to become under- or overloaded. To identify VM load, the EQ algorithm is initially used for clustering. The hybrid BAT–ABC algorithm is then implemented to assess how well the proposed method performs when the system is heavily loaded. After the TS process, VM migration occurs in the final stage, where the optimal VM is identified using the MRF algorithm. The experimental analysis uses metrics such as execution time, transmission time, makespan over various iterations, resource utilization and load fairness. Load fairness is computed from individual task completion times, and a cloud system can achieve greater load fairness when tasks finish sooner.