Search results
1–10 of 426
Li Chen, Sheng-Qun Chen and Long-Hao Yang
Abstract
Purpose
This paper aims to solve the assessment problem of matching volunteers to disaster rescue and recovery tasks so that both the volunteers' psychological gratification and their mission accomplishment are satisfied.
Design/methodology/approach
An extended belief rule-based (EBRB) method is applied, with the method's input and output parameters classified on the basis of expert knowledge and data from the literature. These parameters include volunteer self-satisfaction, experience, peer recognition and cooperation. First, the model parameters are set; then, they are optimized through data envelopment analysis (DEA) and a differential evolution (DE) algorithm. Finally, a numerical mountain-rescue example and a comparative analysis of the model with and without DEA demonstrate the efficiency of the proposed method. The proposed model is suitable for a two-way matching evaluation between rescue tasks and volunteers.
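As a rough illustration of the optimization step mentioned above, the following is a minimal DE/rand/1/bin sketch in Python. The objective function, bounds and parameter meanings are hypothetical stand-ins for the EBRB rule parameters, not the paper's actual model.

```python
import random

random.seed(0)  # reproducible illustration

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9, generations=200):
    """Minimal DE/rand/1/bin minimizer, of the kind used to tune rule parameters."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct vectors other than the current one
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            # mutation + binomial crossover, clipped to the bounds
            trial = [
                min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                if random.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            s = objective(trial)
            if s < scores[i]:  # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# hypothetical objective: drive the mean of four rule weights toward a target value
target = 0.75
best_x, best_err = differential_evolution(
    lambda x: (sum(x) / len(x) - target) ** 2,
    bounds=[(0.0, 1.0)] * 4,
)
```
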
Findings
Disasters are unexpected events in which emergency rescue is crucial to human survival. When a disaster occurs, volunteers provide crucial assistance to official rescue teams. This paper finds that decision-makers gain a better understanding of two-sided match objects through bilateral feedback over time. As the matching preference information between rescue tasks and volunteers changes, the satisfaction of volunteers' psychological gratification and mission accomplishment changes as well. Therefore, matching preference information and satisfaction must be considered simultaneously for both sides of the match to obtain reasonable target values of the matching results for rescue tasks and volunteers.
Originality/value
Based on the authors' novel EBRB method, a matching assessment model is constructed, with two-sided matching of volunteers to rescue tasks. This method will provide matching suggestions in the field of emergency dispatch and contribute to the assessment of emergency plans around the world.
Abstract
Purpose
The curation of ontologies and knowledge graphs (KGs) is an essential task for industrial knowledge-based applications, as they rely on the contained knowledge to be correct and error-free. Often, a significant amount of a KG is curated by humans. Established validation methods, such as Shapes Constraint Language, Shape Expressions or Web Ontology Language, can detect wrong statements only after their materialization, which can be too late. Instead, an approach that avoids errors and adequately supports users is required.
Design/methodology/approach
For solving that problem, Property Assertion Constraints (PACs) have been developed. PACs extend the range definition of a property with additional logic expressed with SPARQL. For the context of a given instance and property, a tailored PAC query is dynamically built and triggered on the KG. It can determine all values that will result in valid property value assertions.
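The abstract describes PACs as tailored SPARQL queries built for a given instance and property. The sketch below illustrates only the underlying idea (computing the set of valid values before an assertion is materialized) over a toy in-memory triple set; the names such as `ex:feedsInto` and the constraint itself are hypothetical, and the actual approach operates on a KG via SPARQL.

```python
# Toy triple store; the paper builds the equivalent check as a SPARQL query.
triples = {
    ("Pump1", "rdf:type", "ex:Pump"),
    ("TankA", "rdf:type", "ex:Tank"),
    ("TankB", "rdf:type", "ex:Tank"),
    ("TankB", "ex:isDecommissioned", "true"),
}

def instances_of(cls):
    """All subjects typed with the given class."""
    return {s for (s, p, o) in triples if p == "rdf:type" and o == cls}

def valid_values(subject, prop):
    """Hypothetical PAC for ex:feedsInto: the value must be an ex:Tank
    that is not decommissioned. Returns every value that would yield a
    valid assertion, before anything is materialized."""
    if prop == "ex:feedsInto":
        return {t for t in instances_of("ex:Tank")
                if (t, "ex:isDecommissioned", "true") not in triples}
    return set()

allowed = valid_values("Pump1", "ex:feedsInto")  # only the non-decommissioned tank
```
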
Findings
PACs can avoid the expansion of KGs with invalid property value assertions effectively, as their contained expertise narrows down the valid options a user can choose from. This simplifies the knowledge curation and, most notably, relieves users or machines from knowing and applying this expertise, but instead enables a computer to take care of it.
Originality/value
PACs are fundamentally different from existing approaches. Instead of detecting erroneous materialized facts, they can determine all semantically correct assertions before materializing them. This avoids invalid property value assertions and provides users with informed, purposeful assistance. To the author's knowledge, PACs are the only such approach.
Xiaoyan Jiang, Sai Wang, Yong Liu, Bo Xia, Martin Skitmore, Madhav Nepal and Amir Naser Ghanbaripour
Abstract
Purpose
With the increasing complexity of public–private partnership (PPP) projects, the amount of data generated during the construction process is massive. This paper aims to develop a new information management method to cope with the risk problems involved in dealing with such data, based on domain ontologies of the construction industry, to help manage PPP risks and to share and reuse risk knowledge.
Design/methodology/approach
Risk knowledge concepts are acquired and summarized through PPP failure cases and an extensive literature review to establish a domain framework for risk knowledge using ontology technology to help manage PPP risks.
Findings
The results indicate that the risk ontology is capable of capturing key concepts and relationships involved in managing PPP risks and can be used to facilitate knowledge reuse and storage beneficial to risk management.
Research limitations/implications
The classes in the risk knowledge ontology model constructed in this research do not yet cover all the information in PPP project risks and need to be further extended. Moreover, only the framework and the basic methods are developed here; building a working ontology model and relating implicit to explicit knowledge is a complicated process that requires repeated modification and evaluation before it can be implemented.
Practical implications
The ontology provides a basis for turning PPP risk information into risk knowledge to allow the effective sharing and communication of project risks between different project stakeholders. It also has the potential to reduce the dependence on subjectivity by mining, using and storing tacit knowledge in the risk management process.
Originality/value
The apparent suitability of the nine classes of PPP risk knowledge (project model, risk type, risk occurrence stage, risk source, risk consequence, risk likelihood, risk carrier, risk management measures and risk case) is identified, and the proposed construction method and steps for a complete domain ontology for PPP risk management are unique. A combination of criteria- and task-based evaluations is also developed for assessing the PPP risk ontology for the first time.
Ammar Chakhrit and Mohammed Chennoufi
Abstract
Purpose
This paper aims to enable reliability and safety analysts to assess criticality and prioritize failure modes accurately, so that appropriate actions can be chosen to control the risks of undesirable scenarios.
Design/methodology/approach
To resolve the uncertainty and ambiguity of the parameters considered in the traditional failure mode, effects and criticality analysis (FMECA) approach to risk evaluation (frequency, non-detection and severity), the authors use fuzzy logic, in which these parameters are represented as members of fuzzy sets fuzzified with appropriate membership functions. An adaptive neuro-fuzzy inference system is suggested as a dynamic, intelligently chosen model to improve and validate the results obtained by the fuzzy inference system and to effectively predict the criticality evaluation of failure modes. A new hybrid model combining the grey relational approach and the fuzzy analytic hierarchy process is proposed to improve on the conventional FMECA method.
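As a minimal sketch of the fuzzification step described above (not the paper's actual rule base), the following Python fragment uses hypothetical triangular membership functions on a 1–10 scale and a single Mamdani-style rule set with weighted-centroid defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical membership functions for each linguistic term (1-10 scale)
low  = lambda x: tri(x, 0, 1, 5)
med  = lambda x: tri(x, 1, 5, 9)
high = lambda x: tri(x, 5, 9, 10)

def fuzzy_criticality(freq, sev, nondet):
    """Illustrative rule base: criticality is high if any input is high,
    medium if any is medium, low only if all are low; defuzzified via
    the weighted centroids (2, 5, 8) of the output sets."""
    mu_high = max(high(freq), high(sev), high(nondet))
    mu_med  = max(med(freq),  med(sev),  med(nondet))
    mu_low  = min(low(freq),  low(sev),  low(nondet))
    num = mu_low * 2 + mu_med * 5 + mu_high * 8
    den = mu_low + mu_med + mu_high
    return num / den if den else 0.0

score = fuzzy_criticality(freq=7, sev=9, nondet=3)  # a high-severity failure mode
```
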
Findings
This research reflects a real case study of a gas turbine system. The analysis evaluates criticality effectively and provides an alternative prioritization to that obtained by the conventional method. The results show that integrating two multi-criteria decision methods and combining their results instills confidence in decision-makers regarding the criticality prioritization of failure modes. The approach also addresses two shortcomings of the traditional method: the lack of established inference-system rules, which otherwise demands considerable experience, and the equal weighting of the three parameters severity, detection and frequency, whose relative importance is now made explicit.
Originality/value
This paper provides encouraging results for risk evaluation and failure-mode prioritization, guiding decision-makers to refine the relevance of their decisions, reduce the probability of occurrence and the severity of undesirable scenarios, and handle various forms of ambiguity, uncertainty and divergent expert judgments.
Ying Tao Chai and Ting-Kwei Wang
Abstract
Purpose
Defects in concrete surfaces inevitably recur during construction and need to be checked and accepted during construction and at completion. Traditional manual inspection of surface defects requires inspectors to judge, evaluate and make decisions, which demands sufficient experience, is time-consuming and labor-intensive, and does not allow the expertise to be effectively preserved and transferred. In addition, the evaluation standards of different inspectors are not identical, which may lead to discrepancies in inspection results. Although computer vision can achieve defect recognition, there is a gap between the low-level semantics acquired by computer vision and the high-level semantics that humans understand from images. Therefore, computer vision and ontology are combined to achieve intelligent evaluation and decision-making and to bridge this gap.
Design/methodology/approach
Combining ontology and computer vision, this paper establishes an evaluation and decision-making framework for concrete surface quality. By building a concrete surface quality ontology model and a defect identification and quantification model, ontology reasoning technology is used to realize concrete surface quality evaluation and decision-making.
Findings
Computer vision can identify and quantify defects to obtain low-level image semantics, and ontology can structurally express expert knowledge in the field of defects. The proposed framework can automatically identify and quantify defects and infer their causes, responsibility, severity and repair methods. Case analyses of various scenarios show that the proposed evaluation and decision-making framework is feasible.
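The inference step this framework describes (from a quantified defect to severity and repair method) can be caricatured with a small rule base. The thresholds, defect types and repair methods below are illustrative assumptions standing in for the paper's ontology axioms, not its actual content:

```python
# hypothetical defect record, as a vision model might produce it
defect = {"type": "crack", "width_mm": 0.4, "length_mm": 120}

def evaluate(d):
    """Toy rule base standing in for ontology reasoning: map a quantified
    defect to a severity grade and a repair method."""
    if d["type"] == "crack":
        severity = "severe" if d["width_mm"] >= 0.3 else "minor"
        repair = "epoxy injection" if severity == "severe" else "surface sealing"
    elif d["type"] == "honeycomb":
        severity, repair = "severe", "chip out and patch"
    else:
        severity, repair = "minor", "cosmetic finishing"
    return {"severity": severity, "repair": repair}

decision = evaluate(defect)
```
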
Originality/value
This paper establishes an evaluation and decision-making framework for concrete surface quality, so as to improve the standardization and intelligence of surface defect inspection and potentially provide reusable knowledge for inspecting concrete surface quality. The research results can be used to inspect concrete surface quality, reduce the subjectivity of evaluation and improve inspection efficiency. In addition, the proposed framework enriches the application scenarios of ontology and computer vision and, to a certain extent, bridges the gap between the image features extracted by computer vision and the information that people obtain from images.
Abstract
Purpose
Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Also, most non-computer-science students are not taught about algorithms as part of the regular curriculum. This exploratory, qualitative study examines subject-matter experts' insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations that can aid faculty in teaching algorithmic literacy to postsecondary students.
Design/methodology/approach
Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was performed manually on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software program Dedoose (2021) to determine frequency totals for occurrences of each code across all participants, along with how many times specific participants mentioned a code. Findings were then organized around the three themes of knowledge components, coping behaviors and pedagogy.
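The frequency totals described here (occurrences of a code overall, and how many participants mention it) amount to simple counting over coded transcript segments. A stdlib-only Python sketch with hypothetical participants and codes:

```python
from collections import Counter

# hypothetical coded transcript segments: (participant, code)
codings = [
    ("P01", "knowledge"), ("P01", "coping"), ("P02", "knowledge"),
    ("P02", "pedagogy"), ("P03", "knowledge"), ("P03", "coping"),
]

# total occurrences of each code across all participants
total_per_code = Counter(code for _, code in codings)

# number of distinct participants who mentioned each code
participants_per_code = {
    code: len({p for p, c in codings if c == code})
    for code in total_per_code
}
```
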
Findings
The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.
Originality/value
This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.
Ruan Wang, Jun Deng, Xinhui Guan and Yuming He
Abstract
Purpose
With the development of data mining technology, diverse and broader domain knowledge can be extracted automatically. However, the research on applying knowledge mapping and data visualization techniques to genealogical data is limited. This paper aims to fill this research gap by providing a systematic framework and process guidance for practitioners seeking to uncover hidden knowledge from genealogy.
Design/methodology/approach
Based on a literature review of genealogy's current knowledge reasoning research, the authors constructed an integrated framework for knowledge inference and visualization application using a knowledge graph. Additionally, the authors applied this framework in a case study using “Manchu Clan Genealogy” as the data source.
Findings
The case study shows that the proposed framework can effectively decompose and reconstruct genealogy. It demonstrates the process of reasoning over, discovering and visualizing on the web the implicit information in genealogy. It enhances the effective utilization of Manchu genealogy resources by highlighting the intricate relationships among person, place and time entities.
Originality/value
This study proposed a framework for genealogy knowledge reasoning and visual analysis utilizing a knowledge graph, including five dimensions: the target layer, the resource layer, the data layer, the inference layer, and the application layer. It helps to gather the scattered genealogy information and establish a data network with semantic correlations while establishing reasoning rules to enable inference discovery and visualization of hidden relationships.
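A minimal example of the kind of reasoning rule such an inference layer applies is deriving an implicit relation from explicit genealogy triples. The names and relations below are illustrative assumptions, not taken from the "Manchu Clan Genealogy" data:

```python
# hypothetical genealogy triples: (subject, relation, object)
facts = {
    ("Nurhaci", "fatherOf", "Hong Taiji"),
    ("Hong Taiji", "fatherOf", "Fulin"),
}

def infer_grandfathers(triples):
    """Rule: fatherOf(x, y) AND fatherOf(y, z) -> grandfatherOf(x, z)."""
    fathers = {(s, o) for (s, p, o) in triples if p == "fatherOf"}
    return {(a, "grandfatherOf", c)
            for (a, b) in fathers
            for (b2, c) in fathers
            if b == b2}

inferred = infer_grandfathers(facts)  # the hidden grandfather relationship
```
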
Debolina Dutta and Anasha Kannan Poyil
Abstract
Purpose
The importance of learning and development (L&D) in increasingly dynamic contexts can help individuals and organizations adapt to disruption. Artificial intelligence (AI) is emerging as a disruptive technology, with increasing adoption across human resource management (HRM) functions. However, L&D adoption of AI is lagging, and this low adoption needs to be understood in terms of internal/external contexts and organization types. Building on open system theory and adopting a technology-in-practice lens, the authors examine the various L&D approaches and the roles of human and technology agencies, enabled by differing structures, different types of organizations and the use of AI in L&D.
Design/methodology/approach
Through a qualitative interview design, data were collected from 27 key stakeholders and L&D professionals in MSME, NGO and MNE organizations. The authors used Gioia's qualitative research approach for the thematic analysis of the collected data.
Findings
The authors argue that human and technology agencies develop organizational protocols and structures consistent with their internal/external contexts, resource availability and technology adoptions. While the reasons for lagging AI adoption in L&D were determined, the future potential of AI to support L&D also emerges. The authors theorize about the socialization of human and technology-mediated interactions to develop three emerging structures for L&D in organizations of various sizes, industries, sectors and internal/external contexts.
Research limitations/implications
The study hinges on open system theory (OST) and technology-in-practice to demonstrate the interdependence and inseparability of human activity, technological advancement and capability, and structured contexts. The authors examine the reasons for lagging AI adoption in L&D and how agentic focus shifts contingent on the organization's internal/external contexts.
Originality/value
While AI-HRM scholarship has primarily relied on psychological theories to examine impact and outcomes, the authors adopt the OST and technology in practice lens to explain how organizational contexts, resources and technology adoption may influence L&D. This study investigates the use of AI-based technology and its enabling factors for L&D, which has been under-researched.
Richard G. Mathieu and Alan E. Turovlin
Abstract
Purpose
Cyber risk has significantly increased over the past twenty years. In many organizations, data and operations are managed through a complex technology stack underpinned by an enterprise resource planning (ERP) system such as SAP (Systemanalyse Programmentwicklung). The ERP environment by itself can be overwhelming for a typical ERP Manager; coupled with the cybersecurity issues that arise, it creates periods of intense time pressure, stress and workload, increasing risk to the organization. This paper aims to identify a pragmatic approach to prioritizing vulnerabilities for the ERP Manager.
Design/methodology/approach
Applying attention-based theory, a pragmatic approach is developed to prioritize an organization’s response to the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) vulnerabilities using a Classification and Regression Tree (CART).
Findings
Applying classification and regression trees (CART) to the National Institute of Standards and Technology's National Vulnerability Database yields a prioritization unavailable within NIST's own categorization.
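CART grows a tree by repeatedly choosing the split that minimizes impurity. Below is a stdlib-only sketch of one Gini-based split search over hypothetical vulnerability records (a CVSS-like base score, an exploit-availability flag and a priority label); the data and the resulting threshold are illustrative, not drawn from the NVD:

```python
# hypothetical records: (cvss_base_score, exploit_available, prioritize_label)
data = [
    (9.8, 1, 1), (8.1, 1, 1), (7.5, 0, 1),
    (5.3, 0, 0), (4.3, 1, 0), (3.1, 0, 0),
]

def gini(rows):
    """Gini impurity of a binary-labeled set of rows (label in last column)."""
    if not rows:
        return 0.0
    p = sum(r[-1] for r in rows) / len(rows)
    return 2 * p * (1 - p)

def best_split(rows, feature):
    """Find the threshold on one feature minimizing weighted Gini impurity
    of the resulting left/right partition -- the core CART splitting step."""
    best = (float("inf"), None)
    for t in sorted({r[feature] for r in rows}):
        left = [r for r in rows if r[feature] <= t]
        right = [r for r in rows if r[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if score < best[0]:
            best = (score, t)
    return best

impurity, threshold = best_split(data, feature=0)  # split on the score column
```

With this toy data the search finds a pure split: records at or below the threshold are deprioritized and those above are flagged, which is the kind of actionable cut the abstract says NIST's categorization alone does not provide.
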
Practical implications
The ERP Manager sits at the intersection of technology, functionality, centralized control and organizational data. Without CART, vulnerability handling remains reactive and subject to overwhelming periods of intense time pressure, stress and workload.
Originality/value
To the best of the authors' knowledge, this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. CART has not previously been applied to prioritizing cybersecurity vulnerabilities.