Search results
1 – 10 of 43

Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji
Abstract
Purpose
The purpose of this study is to develop a domain independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts from unstructured text corpus. In the human disease domain, ontologies are found to be extremely useful for managing the diversity of technical expressions in favour of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.
Design/methodology/approach
This paper proposes a semi-automated approach that extracts entities/relations via text mining of scientific publications. A text-mining-based ontology (TmbOnt) code is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a collection of unstructured text files as input and projects them into high-value entities or relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the desired ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
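The predecessor/successor filtering step described above can be sketched as follows. This is a minimal illustration, not the actual TmbOnt code; the window-based extraction heuristic, the function name and the sample corpus are all assumptions:

```python
from collections import Counter
import re

def candidate_relations(text, seed, window=2):
    """Collect predecessor/successor phrases around a seed term."""
    tokens = re.findall(r"[a-z][a-z-]+", text.lower())
    pairs = Counter()
    for i, tok in enumerate(tokens):
        if tok == seed:
            pred = " ".join(tokens[max(0, i - window):i])
            succ = " ".join(tokens[i + 1:i + 1 + window])
            if pred:
                pairs[("pred", pred)] += 1
            if succ:
                pairs[("succ", succ)] += 1
    return pairs

corpus = ("open-angle glaucoma is a chronic optic neuropathy; "
          "glaucoma treatment lowers intraocular pressure")
print(candidate_relations(corpus, "glaucoma").most_common(3))
```

In the semi-automated workflow, a user would review such candidate phrases and keep only the taxonomically meaningful ones.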
Findings
The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. The TmbOnt-driven taxonomy demonstrated a 60%–100% coverage ratio against well-known medical thesauruses and ontology taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.
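The coverage ratio reported above can be understood as simple set overlap between the derived taxonomy and a reference vocabulary. A toy sketch (the term lists are invented for illustration, not taken from the study):

```python
def coverage(extracted, reference):
    """Share of reference vocabulary terms also found by the extractor."""
    extracted, reference = set(extracted), set(reference)
    return len(extracted & reference) / len(reference)

# Hypothetical term sets standing in for the derived taxonomy and a reference ontology.
tmbont_terms = {"glaucoma", "open-angle glaucoma", "ocular hypertension",
                "normal-tension glaucoma"}
reference_terms = {"glaucoma", "open-angle glaucoma", "ocular hypertension"}

print(coverage(tmbont_terms, reference_terms))  # 1.0 for this toy reference
```

Terms in the extracted set but absent from the reference (here, "normal-tension glaucoma") correspond to the additional terms recommended for ontology development.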
Originality/value
According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.
Miquel Centelles and Núria Ferran-Ferrer
Abstract
Purpose
This study develops a comprehensive framework for assessing knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval from a gender nonbinary perspective.
Design/methodology/approach
This study employs heuristic and inspection methods to assess Wikipedia’s KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedian category scheme, identifying limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them to Wikipedia’s taxonomy for advantages and enhancements.
Findings
This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.
Originality/value
The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.
We present configurational theorising as a novel approach to developing middle-range theory in two steps: (1) we illustrate configurational theorising as a new form of supply…
Abstract
Purpose
We present configurational theorising as a novel approach to developing middle-range theory in two steps: (1) we illustrate configurational theorising as a new form of supply chain inquiry by connecting its philosophical assumptions with a methodological execution, and (2) we generate new insights underpinning a middle-range theory for supply chain resilience.
Design/methodology/approach
We synthesise information from a range of sources and invoke “critical realism” to suggest a five-phase configurational theorising roadmap to develop middle-range theory. We demonstrate this roadmap to explain supply chain resilience by analysing qualitative data from 22 organisations within the Australian food supply chain.
Findings
Coopetition and supply chain collaboration are necessary causal conditions, but they must combine with either supply chain agility or a multi-sourcing strategy to build supply chain resilience. Asymmetrical analyses showed that the simultaneous absence of supply chain collaboration, supply chain agility and multi-sourcing results in low supply chain resilience, whereas coopetition was indifferent to low supply chain resilience. Similarly, high supply chain resilience is possible even in the absence of supply chain agility and multi-sourcing.
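The reported pattern of necessary and combinable conditions can be illustrated with a crisp-set, truth-table-style check in the spirit of configurational analysis. The cases below are invented for illustration and are not the study's data:

```python
# Crisp-set illustration: 1 = condition present; last column = high resilience.
cases = [
    # (coopetition, collaboration, agility, multi-sourcing, resilient)
    (1, 1, 1, 0, 1),
    (1, 1, 0, 1, 1),
    (1, 1, 0, 0, 0),
    (0, 1, 1, 0, 0),
]

def necessary(idx):
    """A condition is necessary if it is present in every resilient case."""
    return all(c[idx] == 1 for c in cases if c[-1] == 1)

print([necessary(i) for i in range(4)])  # → [True, True, False, False]
```

In this toy data, coopetition and collaboration appear in every resilient case (necessary), while agility and multi-sourcing each appear in only some of them, mirroring the "either/or" combination described above.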
Research limitations/implications
The configurational middle-range theorising roadmap presented and empirically tested in this paper constitutes a substantial advancement to both theory and the methodological domain.
Originality/value
This is the first attempt at developing a middle-range theory for supply chains by explicitly drawing on configurational theorising.
Security assurance evaluation (SAE) is a well-established approach for assessing the effectiveness of security measures in systems. However, one aspect that is often overlooked in…
Abstract
Purpose
Security assurance evaluation (SAE) is a well-established approach for assessing the effectiveness of security measures in systems. However, one aspect that is often overlooked in these evaluations is the assurance context in which they are conducted. This paper aims to explore the role of assurance context in system SAEs and proposes a conceptual model to integrate the assurance context into the evaluation process.
Design/methodology/approach
The conceptual model highlights the interrelationships between the various elements of the assurance context, including system boundaries, stakeholders, security concerns, regulatory compliance and assurance assumptions.
Findings
By introducing the proposed conceptual model, this research provides a framework for incorporating the assurance context into SAEs and offers insights into how it can influence the evaluation outcomes.
Originality/value
By delving into the concept of assurance context, this research seeks to shed light on how it influences the scope, methodologies and outcomes of assurance evaluations, ultimately enabling organizations to strengthen their system security postures and mitigate risks effectively.
Chi-Un Lei, Wincy Chan and Yuyue Wang
Abstract
Purpose
Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curriculum. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, the study examines how common core courses can be classified according to SDGs using a machine-learning approach.
Design/methodology/approach
In this report, the authors used machine learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results through visualizations. The training data set comes from the OSDG public community data set, which the community has verified. Key descriptions of the common core courses were used for the classification, which employed the multinomial logistic regression algorithm. Descriptive analyses at the course, theme and curriculum levels are included to illustrate the proposed approach's functions.
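The classification step can be sketched as TF-IDF features feeding a multinomial logistic regression, in the spirit of the approach described above. This is not the authors' code; the training snippets and labels below are invented stand-ins for the OSDG data:

```python
# Toy sketch: text features + multinomial logistic regression for SDG tagging.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training snippets standing in for OSDG community records.
train_texts = [
    "ending poverty and social protection programmes",
    "climate change mitigation and emission reduction",
    "gender equality and empowerment of women",
]
train_sdgs = ["SDG 1", "SDG 13", "SDG 5"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_sdgs)

# A course description would then be tagged with its most likely SDG.
course_description = "this course examines climate policy and emission trading"
print(clf.predict([course_description])[0])
```

With realistic data, the classifier would be trained on the verified OSDG records and applied to the key descriptions of the common core courses.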
Findings
The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.
Research limitations/implications
More accurate model training could be achieved by adopting advanced machine learning algorithms (e.g. deep learning or multioutput multiclass algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) carrying SDG classifications. The performance of the algorithm should also be compared with other computer-based and human-based SDG classification approaches, using a systematic evaluation framework, to cross-check the results. The study could further be extended by circulating results to students and examining how they interpret and use them when choosing courses. Moreover, the study mainly focused on classifying the topics taught in courses and cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in them. Analysis could also be conducted on course assessment tasks and rubrics to see whether they help students understand and take action on the SDGs.
Originality/value
The proposed approach explores the possibility of using machine learning for SDG classification at scale.
Gregory Vial and Camille Grange
Abstract
Purpose
This paper presents a new conceptualization of digital service anchored in a coconstitutive ontology of digital “x” phenomena, illuminating the pivotal role of the digital qualifier in the service context. Our objective is to provide a theoretically grounded conceptualization of digital service and its impact on the nature of the value cocreation process that characterizes digital phenomena.
Design/methodology/approach
Drawing from scholarly works on digital phenomena and fundamental principles of service-dominant logic, this paper delineates the essence of digital service based on the interplay between digitization and digitalization as well as the operational dynamics of generativity and its constitutive dimensions (architecture, community, governance).
Findings
The paper defines digital service as a sociotechnical process of value cocreation, where participants dynamically architect, govern and leverage digital resources. This perspective highlights the organic development of digital service and the prevalence of decentralized control mechanisms. It also underscores how the intersection between generativity’s dimensions—architecture, community and governance—shapes the dynamic evolution and outcomes of digital services.
Originality/value
Our conceptual framework sheds light on our understanding of digital service, offering a foundation to further explore its nature and implications for research and practice, which we illustrate using the case of ChatGPT.
Pierre Jouan and Pierre Hallot
Abstract
Purpose
The purpose of this paper is to address the challenging issue of developing a quantitative approach for the representation of cultural significance data in heritage information systems (HIS). The authors propose to provide experts in the field with a dedicated framework to structure and integrate targeted data about historical objects' significance in such environments.
Design/methodology/approach
This research seeks to identify key indicators that allow decision-makers to be better informed about cultural significance. The identified concepts are formalized in a data structure through conceptual data modeling, taking advantage of the unified modeling language (UML). The design science research (DSR) method is implemented to facilitate the development of the data model.
Findings
This paper proposes a practical solution for the formalization of data related to the significance of objects in HIS. The authors end up with a data model which enables multiple knowledge representations through data analysis and information retrieval.
Originality/value
The framework proposed in this article supports a more sustainable vision of heritage preservation as the framework enhances the involvement of all stakeholders in the conservation and management of historical sites. The data model supports explicit communications of the significance of historical objects and strengthens the synergy between the stakeholders involved in different phases of the conservation process.
This paper introduces a new approach to theorising and learning from Black, Asian and Minority Ethnic (BAME) women’s experiences of inequality in academia. It offers a versatile…
Abstract
Purpose
This paper introduces a new approach to theorising and learning from Black, Asian and Minority Ethnic (BAME) women’s experiences of inequality in academia. It offers a versatile model with which the structure of a particular racist-sexist inequality regime can be theorised from empirical evidence.
Design/methodology/approach
The paper presents composite, fictionalised accounts of intersectional discrimination which are then analysed through critical realist frameworks, employing critical race feminist theory insights. This novel “whisper network” method centres the knowledge of BAME women in academia, and is translatable to other marginalised actors, offering a more protective means by which to access their knowledge as a foundation for organisational change.
Findings
Through theorising the ontological arrangement of key causal mechanisms responsible for the reproduction of inequality regimes, the paper illuminates links between micro-level intersectional discrimination and meso-level institutional inequality.
Research limitations/implications
In order to preserve anonymity and reduce potential backlash, the vignettes in this paper are not intended to precisely capture specific empirical realities, but instead reflect wider patterns from the author's own whisper network knowledge. Nonetheless, the analytical method developed here could be applied to rigorously collected empirical data, with clear implications for improving organisational practice.
Practical implications
The paper offers a structured and systematic process by which qualitative data on institutional inequality can be analysed and stakeholders engaged to develop and propose solutions, even by individuals new to the field.
Social implications
A methodical basis for strategic action addressing the issues revealed through such an analysis can be developed in order to galvanise and steer organisational change.
Originality/value
The novelty of the paper is twofold: in its original synthesis of critical realist depth ontology and ontological insights from critical race feminist theory about social structures of oppression, and in the development of the innovative “whisper network” method based upon a critical race theory counter-storytelling epistemology, in conversation with the emergent stream of literature within feminist organisation studies regarding the importance of “writing differently”.
Linda Salma Angreani, Annas Vijaya and Hendro Wicaksono
Abstract
Purpose
A maturity model for Industry 4.0 (I4.0 MM) with influencing factors is designed to address maturity issues in adopting Industry 4.0. Standardisation in I4.0, embodied in reference architecture models (RAMs), supports manufacturing industry transformation. This paper aligns key factors and maturity levels in I4.0 MMs with reputable I4.0 RAMs to enhance strategies for I4.0 transformation and implementation.
Design/methodology/approach
The alignment consists of three steps: a systematic literature review (SLR) of currently published high-quality I4.0 MMs; the development of a taxonomy of I4.0 influencing factors by adapting and implementing the categorisation of system theories; and the alignment of I4.0 MMs with RAMs.
Findings
The study discovered that different I4.0 MMs lead to varied organisational interpretations. Challenges and insights arise when aligning I4.0 MMs with RAMs. Aligning MM levels with RAM stages is a crucial milestone in the journey toward I4.0 transformation. Evidence indicates that I4.0 MMs and RAMs often overlook the cultural domain.
Research limitations/implications
Findings contribute to the literature on aligning capabilities with implementation strategies while employing I4.0 MMs and RAMs. We use five RAMs (RAMI4.0, NIST-SME, IMSA, IVRA and IIRA), and as a common limitation in SLR, there could be a subjective bias in reading and selecting literature.
Practical implications
To fully leverage the capabilities of RAMs as part of the I4.0 implementation strategy, companies should initiate the process by undertaking a thorough needs assessment using I4.0 MMs.
Originality/value
The novelty of this paper lies in being the first to examine the alignment of I4.0 MMs with established RAMs. It offers valuable insights for improving I4.0 implementation strategies, especially for companies using both MMs and RAMs in their transformation efforts.