Search results

1 – 10 of over 10000
Open Access
Article
Publication date: 31 July 2023

Jingrui Ge, Kristoffer Vandrup Sigsgaard, Bjørn Sørskot Andersen, Niels Henrik Mortensen, Julie Krogh Agergaard and Kasper Barslund Hansen

Abstract

Purpose

This paper proposes a progressive, multi-level framework for diagnosing maintenance performance: rapid health checks of key performance indicators for different equipment groups and end-to-end process diagnostics to further locate potential performance issues. A question-based performance evaluation approach is introduced to support the selection and derivation of case-specific indicators based on diagnostic aspects.

Design/methodology/approach

The case research method is used to develop the proposed framework. The generic parts of the framework are built on existing maintenance performance measurement theories through a literature review. In the case study, empirical maintenance data of 196 emergency shutdown valves (ESDVs) are collected over a two-year period to support the development and validation of the proposed approach.

Findings

To improve processes, companies need a separate performance measurement structure. This paper suggests a hierarchical model in four layers (objective, domain, aspect and performance measurement) to facilitate the selection and derivation of indicators, which could potentially reduce management complexity and help prioritize continuous performance improvement. Examples of new indicators are derived from a case study that includes 196 ESDVs at an offshore oil and gas production plant.
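The four-layer structure named in the findings (objective, domain, aspect, performance measurement) can be pictured as a nested mapping from which case-specific indicators are flattened. This is only a sketch: the example layers and the two ESDV indicators below are illustrative assumptions, not the paper's actual hierarchy.

```python
# A nested mapping for the four-layer model:
# objective -> domain -> aspect -> performance measurements (indicators).
# The concrete entries are invented for illustration.
hierarchy = {
    "Improve maintenance performance": {          # objective
        "Work-order process": {                   # domain
            "Timeliness": [                       # aspect
                "Mean days from ESDV fault detection to work-order close",
            ],
            "Backlog": [
                "Share of ESDV work orders overdue by more than 30 days",
            ],
        },
    },
}

def indicators(tree):
    """Flatten the hierarchy into (objective, domain, aspect, indicator) rows."""
    return [(obj, dom, asp, ind)
            for obj, domains in tree.items()
            for dom, aspects in domains.items()
            for asp, inds in aspects.items()
            for ind in inds]

for row in indicators(hierarchy):
    print(" / ".join(row))
```

Keeping the upper three layers generic and deriving only the leaf indicators per case is one way such a structure could reduce management complexity.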

Originality/value

Methodological approaches to deriving various performance indicators have rarely been addressed in the maintenance field. The proposed diagnostic framework provides a structured way to identify and locate process performance issues by creating indicators that can bridge generic evaluation aspects and maintenance data. The framework is highly adaptive as data availability functions are used as inputs to generate indicators instead of passively filtering out non-applicable existing indicators.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X

Open Access
Article
Publication date: 20 July 2020

Abdelghani Bakhtouchi

Abstract

With the progress of new information and communication technologies, the number of data producers keeps growing, and the web forms a huge repository for all these kinds of data. Unfortunately, existing data is often unreliable: the same information appears in different sources, and data may be erroneous or incomplete. The aim of data integration systems is to offer the user a single interface for querying a number of sources. A key challenge of such systems is to deal with conflicting information from the same source or from different sources. In this paper, we present instance-level conflict resolution in two stages: reference reconciliation and data fusion. Reference reconciliation methods seek to decide whether two data descriptions refer to the same real-world entity. We define the principles of reconciliation methods and then distinguish reference reconciliation methods, first by how they use reference descriptions and then by how they acquire knowledge. We close this part by discussing some open reconciliation issues that are the subject of current research. Data fusion, in turn, aims to merge duplicates into a single representation while resolving conflicts between the data. We first define the classification of conflicts, the strategies for dealing with them and the implementation of conflict management strategies. We then present the relational operators and data fusion techniques, and likewise close by discussing some open data fusion issues that are the subject of current research.
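The two-stage pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' method: the record fields, the name-similarity rule for reconciliation and the "most complete value wins" fusion strategy are all invented assumptions.

```python
# Stage 1: reference reconciliation, deciding whether two records describe
# the same real-world entity (here via a simple name-similarity threshold).
# Stage 2: data fusion, merging reconciled duplicates into one representation
# while resolving conflicts (here: the longest non-empty value wins).
from difflib import SequenceMatcher

def same_entity(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Reconcile two references by comparing their name fields."""
    ratio = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return ratio >= threshold

def fuse(records: list) -> dict:
    """Merge duplicate records, preferring the most complete value per field."""
    merged = {}
    for rec in records:
        for key, value in rec.items():
            if value and len(str(value)) > len(str(merged.get(key, ""))):
                merged[key] = value
    return merged

sources = [
    {"name": "Acme Corp.", "city": "Algiers", "phone": ""},
    {"name": "ACME Corp", "city": "", "phone": "+213-21-000000"},
    {"name": "Beta Ltd", "city": "Oran", "phone": ""},
]

# Group mutually reconciled references, then fuse each group.
groups = []
for rec in sources:
    for group in groups:
        if same_entity(group[0], rec):
            group.append(rec)
            break
    else:
        groups.append([rec])

fused = [fuse(g) for g in groups]
print(fused)  # two entities: the fused Acme record and Beta Ltd
```

Real systems replace both heuristics with richer techniques (knowledge-based reconciliation, source-trust or recency strategies for fusion), which is exactly the design space the paper surveys.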

Details

Applied Computing and Informatics, vol. 18 no. 3/4
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 9 March 2020

Rebecca Wolf, Joseph M. Reilly and Steven M. Ross

Abstract

Purpose

This article informs school leaders and staffs about existing research findings on the use of data-driven decision-making in creating class rosters. Given that teachers are the most important school-based educational resource, decisions regarding the assignment of students to particular classes and teachers are highly impactful for student learning. Classroom compositions of peers can also influence student learning.

Design/methodology/approach

A literature review was conducted on the use of data-driven decision-making in the rostering process. The review addressed the merits of using various quantitative metrics in the rostering process.

Findings

Findings revealed that, despite often being purposeful about rostering, school leaders and staffs have generally not engaged in data-driven decision-making in creating class rosters. Using data-driven rostering may have benefits, such as limiting the questionable practice of assigning the least effective teachers in the school to the youngest or lowest performing students. School leaders and staffs may also work to minimize negative peer effects due to concentrating low-achieving, low-income, or disruptive students in any one class. Any data-driven system used in rostering, however, would need to be adequately complex to account for multiple influences on student learning. Based on the research reviewed, quantitative data alone may not be sufficient for effective rostering decisions.
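One concrete way to avoid concentrating low-achieving students in any one class is a snake-draft assignment on a prior-achievement score. The sketch below is purely illustrative: the student scores, the two-class setup and the single metric are invented, and the review itself stresses that quantitative data alone is not sufficient for rostering decisions.

```python
# Snake-draft (serpentine) rostering: rank students by a prior-achievement
# score, then deal them out in alternating order so no class accumulates
# only high or only low scorers. Scores below are invented for illustration.
def serpentine_roster(students, n_classes):
    """Assign (name, score) pairs to classes in snake order by descending score."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    rosters = [[] for _ in range(n_classes)]
    for i, student in enumerate(ranked):
        rnd, pos = divmod(i, n_classes)
        idx = pos if rnd % 2 == 0 else n_classes - 1 - pos  # reverse every round
        rosters[idx].append(student)
    return rosters

students = [("Ana", 92), ("Ben", 55), ("Cam", 78), ("Dee", 61),
            ("Eli", 85), ("Fay", 70)]
rosters = serpentine_roster(students, 2)
averages = [sum(score for _, score in c) / len(c) for c in rosters]
print(rosters, averages)
```

The class averages come out nearly equal because strong and weak picks alternate across classes; a system adequate for real rostering would need many more inputs than one score.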

Practical implications

Given the rich data available to school leaders and staffs, data-driven decision-making could inform rostering and contribute to more efficacious and equitable classroom assignments.

Originality/value

This article is the first to summarize relevant research across multiple bodies of literature on the opportunities for and challenges of using data-driven decision-making in creating class rosters.

Details

Journal of Research in Innovative Teaching & Learning, vol. 14 no. 2
Type: Research Article
ISSN: 2397-7604

Open Access
Article
Publication date: 8 June 2015

Elisabeth Ilie-Zudor, Anikó Ekárt, Zsolt Kemeny, Christopher Buckingham, Philip Welch and Laszlo Monostori

Abstract

Purpose

The purpose of this paper is to examine challenges and potential of big data in heterogeneous business networks and relate these to an implemented logistics solution.

Design/methodology/approach

The paper establishes an overview of challenges and opportunities of current significance in the area of big data, specifically in the context of transparency and processes in heterogeneous enterprise networks. Within this context, the paper presents how existing components and purpose-driven research were combined for a solution implemented in a nationwide network for less-than-truckload consignments.

Findings

Aside from providing an extended overview of today’s big data situation, the findings show that technical means and methods available today can constitute a feasible process-transparency solution in a large heterogeneous network where legacy practices, reporting lags and incomplete data exist, yet processes are sensitive to inadequate policy changes.

Practical implications

The means introduced in the paper were found to be of utility value in improving process efficiency, transparency and planning in logistics networks. The particular system design choices in the presented solution allow an incremental introduction or evolution of resource handling practices, incorporating existing fragmentary, unstructured or tacit knowledge of experienced personnel into the theoretically founded overall concept.

Originality/value

The paper extends previous high-level view on the potential of big data, and presents new applied research and development results in a logistics application.

Details

Supply Chain Management: An International Journal, vol. 20 no. 4
Type: Research Article
ISSN: 1359-8546

Open Access
Article
Publication date: 14 October 2022

Thomas W. Jackson and Ian Richard Hodgkinson

Abstract

Purpose

In the pursuit of net-zero, the decarbonization activities of organizations are a critical feature of any sustainability strategy. However, government policy and recent technological innovations do not address the digital carbon footprint of organizations. The paper aims to present the concept of single-use dark data and how knowledge reuse by organizations is a means to digital decarbonization.

Design/methodology/approach

Businesses in all sectors must contribute to reducing digital carbon emissions globally, and to the best of the authors’ knowledge, this paper is the first to examine “how” from a knowledge (re)use perspective. Drawing on insights from the knowledge creation process, the paper presents a set of pathways to greater knowledge reuse for the reduction of organizations’ digital carbon footprint.

Findings

Businesses continually collect, process and store knowledge but generally fail to reuse these knowledge assets – referred to as dark data. Consequently, this dark data has a huge impact on energy use and global emissions. The model presented is the first to show explicit pathways that businesses can follow towards sustainable knowledge practices.

Practical implications

If businesses are to be proactive in their collective pursuit of net-zero, then it becomes paramount that reducing the digital carbon footprint becomes a key sustainability target. The paper presents how this might be accomplished, offering practical and actionable guidance to businesses for digital decarbonization.

Originality/value

Two critical questions face businesses: how can decarbonization be achieved, and can it be achieved at low cost? Awareness of the damaging impact digitalization may have on the environment is in its infancy, yet knowledge reuse is a proactive and cost-effective route to reducing carbon emissions, which the paper explores.

Open Access
Article
Publication date: 2 February 2023

Chiara Bertolin and Elena Sesana

Abstract

Purpose

The overall objective of this study is to provide decision makers with actionable insights and access to high-spatial-resolution multi-risk maps for the most endangered of the 28 existing stave churches (SCs), to better understand, reduce and mitigate single and multiple risks. In addition, the present contribution aims to provide decision makers with information to face the exacerbation of risk caused by expected climate change.

Design/methodology/approach

Material and data collection started with the consultation of the available literature related to: (1) SCs' conservation status, (2) available methodologies suitable for a multi-hazard approach and (3) leading vulnerability indicators to consider when dealing with the impact of natural hazards, specifically on immovable cultural heritage.

Findings

The paper contributes to a better understanding of place-based vulnerability through local-scale mapping that also considers future threats posed by climate change. The results highlight the danger that the SC of Røldal faces from floods, and that those of Ringebu, Torpo and Øye face from landslides, and stress the urgency of increasing awareness of and preparedness for these potential hazards.

Originality/value

For the first time, this contribution homogeneously collects and reports all the scattered existing information on architectural features, conservation status and geographical attributes for the whole group of SCs, accompanying it with as complete a collection of 2D sections as possible from existing drawings, together with novel 3D sketches drawn for this contribution. It then contributes to a better understanding of place-based vulnerability through local-scale mapping that also considers future threats posed by climate change, highlights the flood and landslide danger to which the 28 SCs are exposed, and finally reports how these risks will change under the ongoing impact of climate change.

Details

International Journal of Building Pathology and Adaptation, vol. 42 no. 1
Type: Research Article
ISSN: 2398-4708

Open Access
Article
Publication date: 20 January 2023

Marisa Agostini, Daria Arkhipova and Chiara Mio

Abstract

Purpose

This paper aims to identify, synthesise and critically examine the extant academic research on the relation between big data analytics (BDA), corporate accountability and non-financial disclosure (NFD) across several disciplines.

Design/methodology/approach

This paper uses a structured literature review methodology and applies the “insight-critique-transformative redefinition” framework to interpret the findings, develop a critique and formulate future research directions.

Findings

This paper identifies and critically examines 12 research themes across four macro categories. The insights presented in this paper indicate that the nature of the relationship between BDA and accountability depends on whether an organisation considers BDA as a value creation instrument or as a revenue generation source. This paper discusses how NFD can effectively increase corporate accountability for ethical, social and environmental consequences of BDA.

Practical implications

This paper presents the results of a structured literature review exploring the state of the art of academic research on the relation between BDA, NFD and corporate accountability. It uses a systematic approach to provide an exhaustive analysis of the phenomenon with rigorous and reproducible research criteria. It also presents a series of actionable insights into how corporate accountability for the use of big data and algorithmic decision-making can be enhanced.

Social implications

This paper discusses how NFD can reduce negative social and environmental impact stemming from the corporate use of BDA.

Originality/value

To the best of the authors’ knowledge, this paper is the first one to provide a comprehensive synthesis of academic literature, identify research gaps and outline a prospective research agenda on the implications of big data technologies for NFD and corporate accountability along social, environmental and ethical dimensions.

Details

Sustainability Accounting, Management and Policy Journal, vol. 14 no. 7
Type: Research Article
ISSN: 2040-8021

Open Access
Article
Publication date: 1 March 2022

Elisabetta Colucci, Francesca Matrone, Francesca Noardo, Vanessa Assumma, Giulia Datola, Federica Appiotti, Marta Bottero, Filiberto Chiabrando, Patrizia Lombardi, Massimo Migliorini, Enrico Rinaldi, Antonia Spanò and Andrea Lingua

Abstract

Purpose

The study, within the Increasing Resilience of Cultural Heritage (ResCult) project, aims to support civil protection to prevent, lessen and mitigate the impacts of disasters on cultural heritage using a unique standardised 3D geographical information system (GIS) that includes heritage information as well as risk and hazard information.

Design/methodology/approach

A top-down approach, starting from existing standards (an INSPIRE extension integrated with other parts of the standardised and shared structure), was completed with a bottom-up integration according to current requirements for disaster prevention procedures and risk analyses. The results were validated and tested in case studies (differentiated by hazard and by type of protected heritage) and refined during user forums.

Findings

Besides the ensuing reusable database structure, populating it with case study data underlined the tough challenges involved and made it possible to propose a sample of workflows and possible guidelines. Interfaces are provided to use the resulting knowledge base.

Originality/value

The increasing number of natural disasters could severely damage cultural heritage, causing permanent damage to movable and immovable assets and to tangible and intangible heritage. The study provides an original tool that properly relates the (spatial) information on cultural heritage to the risk factors in a unique archive, as a standard-based European tool to cope with these frequent losses by preventing risk.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 14 no. 2
Type: Research Article
ISSN: 2044-1266

Open Access
Article
Publication date: 31 October 2022

Sunday Adewale Olaleye, Emmanuel Mogaji, Friday Joseph Agbo, Dandison Ukpabi and Akwasi Gyamerah Adusei

Abstract

Purpose

The data economy mainly relies on the surveillance capitalism business model, which enables companies to monetize their data. Surveillance allows private human experiences to be transformed into behavioral data that can be harnessed in the marketing sphere. This study investigates the domain of the data economy through the methodological lens of a quantitative bibliometric analysis of published literature.

Design/methodology/approach

The bibliometric analysis seeks to unravel trends and timelines for the emergence of the data economy, its conceptualization, scientific progression and thematic synergy that could predict the future of the field. A total of 591 documents published between 2008 and June 2021 were analyzed with the web-based Biblioshiny app and VOSviewer version 1.6.16, using data from Web of Science and Scopus.

Findings

This study combined findable, accessible, interoperable and reusable (FAIR) data with the data economy and contributed to the literature on big data and on information discovery and delivery by shedding light on the conceptual, intellectual and social structure of the data economy and by demonstrating the relevance of data as a key strategic asset for companies and academia, now and in the future.

Research limitations/implications

Findings from this study provide a steppingstone for researchers who may engage in further empirical and longitudinal studies by employing, for example, a quantitative and systematic review approach. In addition, future research could expand the scope of this study beyond FAIR data and data economy to examine aspects such as theories and show a plausible explanation of several phenomena in the emerging field.

Practical implications

The researchers can use the results of this study as a steppingstone for further empirical and longitudinal studies.

Originality/value

This study confirmed the relevance of data to society and revealed some gaps to be addressed in future work.

Details

Information Discovery and Delivery, vol. 51 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 8 February 2024

Leo Van Audenhove, Lotte Vermeire, Wendy Van den Broeck and Andy Demeulenaere

Abstract

Purpose

The purpose of this paper is to analyse data literacy in the new Digital Competence Framework for Citizens (DigComp 2.2). In mid-2022, the Joint Research Centre of the European Commission published a new version of DigComp (EC, 2022). This new version focusses more on the datafication of society and on emerging technologies such as artificial intelligence. This paper analyses how DigComp 2.2 defines data literacy and how the framework views it through a societal lens.

Design/methodology/approach

This study critically examines DigComp 2.2, using the data literacy competence model developed by the Knowledge Centre for Digital and Media Literacy Flanders-Belgium. The examples of knowledge, skills and attitudes focussing on data literacy (n = 84) are coded and mapped onto the data literacy competence model, which differentiates between using data and understanding data.

Findings

Data literacy is well covered in the framework, but there is a stronger emphasis on understanding data than on using data; collecting data, for example, is coded only once. Thematically, DigComp 2.2 focusses primarily on security and privacy (31 codes), with less attention given to the societal impact of data, such as environmental impact or data fairness.
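The coding exercise behind these counts can be pictured as tagging each DigComp example with a competence and a theme, then tallying the tags. The sample tags below are invented for illustration; only the reported totals (84 coded examples, 31 security-and-privacy codes, a single collecting-data code) come from the study itself.

```python
# Tally coded examples by competence (using vs understanding data) and by
# theme. The four sample examples and their tags are invented assumptions;
# the real study coded 84 examples from DigComp 2.2.
from collections import Counter

coded_examples = [
    {"competence": "understanding data", "theme": "security and privacy"},
    {"competence": "understanding data", "theme": "security and privacy"},
    {"competence": "using data",         "theme": "collecting data"},
    {"competence": "understanding data", "theme": "societal impact"},
]

by_competence = Counter(e["competence"] for e in coded_examples)
by_theme = Counter(e["theme"] for e in coded_examples)
print(by_competence.most_common())
print(by_theme.most_common())
```

Such frequency tables make imbalances in a competence framework visible at a glance, which is essentially the analysis the paper reports.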

Originality/value

Given the datafication of society, data literacy has become increasingly important. DigComp is widely used across different disciplines and now integrates data literacy as a required competence for citizens. It is, thus, relevant to analyse its views on data literacy and emerging technologies, as it will have a strong impact on education in Europe.

Details

Information and Learning Sciences, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5348
