Search results
1 – 4 of 4
Diego Espinosa Gispert, Ibrahim Yitmen, Habib Sadri and Afshin Taheri
Abstract
Purpose
The purpose of this research is to develop a framework of an ontology-based Asset Information Model (AIM) for a Digital Twin (DT) platform and enhance predictive maintenance practices in building facilities that could enable proactive and data-driven decision-making during the Operation and Maintenance (O&M) process.
Design/methodology/approach
A scoping literature review was conducted to establish the theoretical foundation for the investigation. Building on this, a study on developing an ontology-based AIM for predictive maintenance in building facilities was carried out. Semi-structured interviews with industry professionals provided qualitative data for validating the ontology-based AIM framework and yielded further insights.
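To make the idea of an ontology-based AIM concrete, the following is a minimal sketch of how asset knowledge might be represented as subject-predicate-object triples, the basic structure underlying ontology languages such as RDF/OWL. All class, instance and relation names here (e.g. `AirHandlingUnit`, `hasSensor`) are invented for illustration, not taken from the paper.

```python
# Illustrative only: a tiny triple store standing in for an ontology-based
# Asset Information Model. Real implementations would use RDF/OWL tooling.
TRIPLES = {
    ("AHU-01", "rdf:type", "AirHandlingUnit"),
    ("AirHandlingUnit", "rdfs:subClassOf", "BuildingAsset"),
    ("AHU-01", "hasSensor", "TempSensor-07"),
    ("TempSensor-07", "monitors", "SupplyAirTemperature"),
    ("AHU-01", "hasMaintenanceTask", "FilterReplacement"),
}

def objects(subject, predicate, triples=TRIPLES):
    """Return all objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# A Digital Twin platform could traverse such triples to find which sensors
# and maintenance tasks relate to an asset flagged by a predictive model.
print(objects("AHU-01", "hasSensor"))           # {'TempSensor-07'}
print(objects("AHU-01", "hasMaintenanceTask"))  # {'FilterReplacement'}
```

The value of the ontology lies in exactly this kind of machine-readable linkage: a maintenance planner (human or automated) can follow relations from an asset to its sensors and tasks without hard-coded knowledge of each asset.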
Findings
The research findings indicate that while the development of ontology faced challenges in defining missing entities and relations in the context of predictive maintenance, insights gained from the interviews enabled the establishment of a comprehensive framework for ontology-based AIM adoption in the Facility Management (FM) sector.
Practical implications
The proposed ontology-based AIM has the potential to enable proactive and data-driven decision-making during the O&M process, optimizing predictive maintenance practices and ultimately enhancing energy efficiency and sustainability in the building industry.
Originality/value
The research contributes a practical guide for ontology development processes and presents a framework of an ontology-based AIM for a Digital Twin platform.
Jonathan David Schöps and Philipp Jaufenthaler
Abstract
Purpose
Large-scale text-based data increasingly poses methodological challenges due to its size, scope and nature, requiring sophisticated methods for managing, visualizing, analyzing and interpreting such data. This paper aims to propose semantic network analysis (SemNA) as one possible solution to these challenges, showcasing its potential for consumer and marketing researchers through three application areas in phygital contexts.
Design/methodology/approach
This paper outlines three general application areas for SemNA in phygital contexts and presents specific use cases, data collection methodologies, analyses, findings and discussions for each application area.
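As a rough illustration of the structure SemNA works with, the sketch below builds a simple word co-occurrence network from a toy corpus in plain Python. The corpus, the co-occurrence window (whole document) and the edge-weight threshold are all invented for demonstration; real SemNA studies involve much larger corpora and dedicated network-analysis tooling.

```python
# Illustrative sketch: a semantic network as weighted word co-occurrence ties.
from collections import Counter
from itertools import combinations

docs = [
    "phygital retail blends digital screens with physical stores",
    "consumers discuss phygital retail experiences online",
    "digital twins link physical assets with digital models",
]

edges = Counter()
for doc in docs:
    words = sorted(set(doc.split()))
    for a, b in combinations(words, 2):  # co-occurrence within one document
        edges[(a, b)] += 1

# Keep only ties observed in more than one document
network = {pair: w for pair, w in edges.items() if w > 1}
print(network)
```

Interpreting such a network (e.g. which terms cluster together, which act as bridges) is where the qualitative, interpretivist work described in the paper begins.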
Findings
The paper uncovers three application areas and use cases where SemNA holds promise for providing valuable insights and driving further adoption of the method: (1) Investigating phygital experiences and consumption phenomena; (2) Exploring phygital consumer and market discourse, trends and practices; and (3) Capturing phygital social constructs.
Research limitations/implications
The limitations section highlights the specific challenges of the qualitative, interpretivist approach to SemNA, along with general methodological constraints.
Practical implications
Practical implications highlight SemNA as a pragmatic tool for managers to analyze and visualize company-/brand-related data, supporting strategic decision-making in physical, digital and phygital spaces.
Originality/value
This paper contributes to the expanding body of computational, tool-based methods by providing an overview of application areas for the qualitative, interpretivist approach to SemNA in consumer and marketing research. It emphasizes the diversity of research contexts and data in which the boundaries between physical and digital spaces have become increasingly blurred, with physical and digital elements closely integrated – a phenomenon known as phygital.
Abstract
Purpose
This paper aims to examine the concept of standardization beyond its traditional use in generating and implementing standards and good practice guidelines (S&GPG) by looking at existing and emerging trends.
Design/methodology/approach
This paper utilizes two primary approaches to categorizing S&GPG for better comprehension: categorization based on provenance and categorization based on subject matter.
Findings
A significant concern with categorizing S&GPG based on provenance or subject matter is the constant proliferation of standards developed and introduced every year. This rapid growth requires frequent re-categorization to keep pace with the dynamic nature of the field. To address this problem, this paper explores emerging concepts, such as ontological representations and frameworks, that offer archives and records management (ARM) professionals more stable ways of organizing and applying these standards.
Practical implications
Standardization refers to establishing uniform rules through mutual agreement to ensure consistency. The study of standardization goes beyond the development of individual S&GPG, encompassing their practical application in work settings. Categorizing standards alone may not fully capture their actual use. However, abstraction mechanisms like ontological representations, models and frameworks can demonstrate how these standards have been leveraged. This paper provides illustrative examples rather than an exhaustive list to showcase how these mechanisms have been applied in research projects or as practical tools.
Originality/value
This paper explores the emerging topic of standardization from the perspective of ontological representations, models and frameworks. It also contributes to the discussion of the 2022 version of ARMA International’s Information Governance Implementation Model and the 2020 version of the World Bank Group's Records Management Roadmap, providing unique insights into these topics.
Christian Schwägerl, Peter Stücheli-Herlach, Philipp Dreesen and Julia Krasselt
Abstract
Purpose
This study operationalizes risks in stakeholder dialog (SD). It conceptualizes SD as co-produced organizational discourse and examines the capacities of organizers' and stakeholders' practices to create a shared understanding of an organization’s risks to their mutual benefit. The meetings and online forum of a German public service media (PSM) organization were used as a case study.
Design/methodology/approach
The authors applied corpus-driven linguistic discourse analysis (topic modeling) to analyze citizens' (n = 2,452) forum posts (n = 14,744). Conversation analysis was used to examine video-recorded online meetings.
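Topic models such as those used in corpus-driven discourse analysis operate on a document-term matrix. The sketch below shows only that preprocessing step in plain Python; the three forum posts are invented stand-ins, not data from the study, and a real analysis would feed the matrix into a topic-modeling library.

```python
# Illustrative preprocessing: forum posts -> document-term matrix,
# the input format a topic model (e.g. LDA) consumes.
from collections import Counter

posts = [
    "public media should cover regional news",
    "funding for public media is too high",
    "regional news coverage needs more funding",
]

vocab = sorted({w for p in posts for w in p.split()})
dtm = [[Counter(p.split())[w] for w in vocab] for p in posts]

print(vocab)
print(dtm)  # one row per post, one column per vocabulary term
```

At the scale reported in the paper (14,744 posts from 2,452 citizens), the same matrix is simply larger and sparser; the topic model then groups terms that tend to co-occur across posts into interpretable topics.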
Findings
Organizers suspended actors' reciprocity in meetings. In the forums, topics emerged autonomously. Citizens' articulation of their identities was more diverse than the categories the organizer provided, and organizers did not respond to the autonomous emergence of contextualizations of citizens' perceptions of PSM performance in relation to their identities. The results suggest that risks arise from interactionally achieved occasions that prevent reasoned agreement and from actors' practices, which constituted autonomous discursive formations of topics and identities in the forums.
Originality/value
This study disentangles actors' practices, mutuality orientation and risk enactment during SD. It advances the methodological knowledge of strategic communication research on SD, utilizing social constructivist research methods to examine the contingencies of organization-stakeholder interaction in SD.