Search results
11 – 20 of over 1000
Sara El-Ateif, Ali Idri and José Luis Fernández-Alemán
Abstract
Purpose
COVID-19 continues to spread and cause increasing deaths. Physicians diagnose COVID-19 using not only real-time polymerase chain reaction but also the computed tomography (CT) and chest X-ray (CXR) modalities, depending on the stage of infection. However, with so many patients and so few doctors, it has become difficult to keep abreast of the disease. Deep learning models have been developed to assist in this respect, and vision transformers are currently state-of-the-art methods, but most techniques currently focus on only one modality (CXR).
Design/methodology/approach
This work aims to leverage the benefits of both CT and CXR to improve COVID-19 diagnosis. This paper studies the differences between using convolutional MobileNetV2, ViT DeiT and Swin Transformer models when training from scratch and pretraining on the MedNIST medical dataset rather than the ImageNet dataset of natural images. The comparison is made by reporting six performance metrics, the Scott–Knott Effect Size Difference, Wilcoxon statistical test and the Borda Count method. We also use the Grad-CAM algorithm to study the model's interpretability. Finally, the model's robustness is tested by evaluating it on Gaussian noised images.
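The comparison protocol above pairs a statistical test with a noise-robustness probe. As an illustrative sketch only (not the authors' code; the per-fold accuracies below are invented), the Wilcoxon signed-rank comparison and Gaussian-noise corruption could look like this:

```python
# Illustrative sketch: compare two models' per-fold accuracies with the
# Wilcoxon signed-rank test, and corrupt inputs with Gaussian noise to
# probe robustness. Accuracy values here are made up for demonstration.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-fold accuracies for two models on the same folds.
acc_swin = np.array([0.93, 0.94, 0.92, 0.95, 0.93])
acc_mobilenet = np.array([0.912, 0.921, 0.903, 0.914, 0.905])

stat, p = wilcoxon(acc_swin, acc_mobilenet)
print(f"Wilcoxon p-value: {p:.4f}")  # small p => significant difference

def add_gaussian_noise(images, sigma=0.1):
    """Corrupt images (floats in [0, 1]) with zero-mean Gaussian noise."""
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

batch = rng.random((4, 224, 224))        # 4 fake grayscale CXR images
noisy_batch = add_gaussian_noise(batch)  # feed to the model under test
```

The noisy copies are then scored with the same metrics as the clean images, so the drop in accuracy quantifies robustness.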
Findings
Although the pretrained MobileNetV2 achieved the best raw performance, the best model in terms of performance, interpretability and robustness to noise combined is the Swin Transformer trained from scratch, using the CXR (accuracy = 93.21 per cent) and CT (accuracy = 94.14 per cent) modalities.
Originality/value
Models compared are pretrained on MedNIST and leverage both the CT and CXR modalities.
Ziming Zeng, Yu Shi, Lavinia Florentina Pieptea and Junhua Ding
Abstract
Purpose
Aspects extracted from a user's historical records are widely used to define the user's fine-grained preferences for building interpretable recommendation systems. Because the aspects are extracted from historical records, aspects that represent a user's negative preferences cannot be identified, owing to their absence from those records. However, these latent aspects are as important for building a recommendation system as the aspects representing the user's positive preferences. This paper aims to identify users' positive and negative preferences for building an interpretable recommendation system.
Design/methodology/approach
First, high-frequency tags are selected as aspects to describe user preferences at the aspect level. Second, users' positive and negative preferences are calculated according to the positive and negative preference model, and the interaction between similar aspects is adopted to address the aspect sparsity problem. Finally, an experiment is designed to evaluate the effectiveness of the model. The code and experiment data are available at: https://github.com/shiyu108/Recommendation-system
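The abstract does not give the exact preference formulas, so the following is only a hypothetical sketch of the aspect-level idea: an observed positive-preference score per tag, with aspects absent from the user's records left as latent (to be filled in from similar aspects). The function name, rating threshold and data layout are all assumptions.

```python
# Hypothetical sketch of aspect-level preference scoring. Aspects never
# seen in a user's records are marked latent; the paper fills these in
# via interactions between similar aspects.
from collections import Counter

def aspect_preferences(user_records, all_aspects):
    """user_records: list of (aspects, rating) pairs; rating in 1..5."""
    pos = Counter()
    seen = Counter()
    for aspects, rating in user_records:
        for a in aspects:
            seen[a] += 1
            if rating >= 4:          # treat 4-5 stars as positive feedback
                pos[a] += 1
    prefs = {}
    for a in all_aspects:
        if seen[a]:
            prefs[a] = pos[a] / seen[a]  # observed preference ratio
        else:
            prefs[a] = None              # latent aspect: absent from records
    return prefs

records = [({"battery", "screen"}, 5), ({"battery"}, 2), ({"screen"}, 4)]
print(aspect_preferences(records, ["battery", "screen", "camera"]))
```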
Findings
Experimental results show that the proposed approach outperformed state-of-the-art methods on widely used public datasets, confirming that the latent negative-preference aspects are as important for building a recommendation system as the aspects representing users' positive preferences.
Originality/value
This paper provides a new approach that identifies and uses not only users’ positive preferences but also negative preferences, which can capture user preference precisely. Besides, the proposed model provides good interpretability.
Hui Yuan and Weiwei Deng
Abstract
Purpose
Recommending suitable doctors to patients on healthcare consultation platforms is important to both the patients and the platforms. Although doctor recommendation methods have been proposed, they fail to explain recommendations and to address the data sparsity problem, i.e. most patients on the platforms are new and provide little information except disease descriptions. This research aims to develop an interpretable doctor recommendation method based on knowledge graph and interpretable deep learning techniques to fill these research gaps.
Design/methodology/approach
This research proposes an advanced doctor recommendation method that leverages a health knowledge graph to overcome the data sparsity problem and uses deep learning techniques to generate accurate and interpretable recommendations. The proposed method extracts interactive features from the knowledge graph to indicate implicit interactions between patients and doctors and identifies individual features that signal the doctors' service quality. Then, the authors feed the features into a deep neural network with layer-wise relevance propagation to generate readily usable and interpretable recommendation results.
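Layer-wise relevance propagation, the interpretability technique named above, redistributes a network's output score backwards through the layers. As a generic illustration of the technique on a toy fully connected network (not the authors' implementation; weights and inputs are random), the epsilon rule for one dense layer can be sketched as:

```python
# Sketch of layer-wise relevance propagation (LRP, epsilon rule) on a toy
# two-layer network: relevance flowing from the output scores back to the
# input features, approximately conserving total relevance.
import numpy as np

def lrp_dense(a, w, r_out, eps=1e-6):
    """Redistribute output relevance r_out to the inputs of one dense layer.
    a: input activations (n,), w: weights (n, m), r_out: relevance (m,)."""
    z = a @ w                                   # pre-activations (m,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilize small denominators
    s = r_out / z                               # per-output scaling
    return a * (w @ s)                          # relevance per input (n,)

rng = np.random.default_rng(1)
w1, w2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
x = rng.random(4)
h = np.maximum(x @ w1, 0)                       # ReLU hidden layer
y = h @ w2                                      # output scores

r_hidden = lrp_dense(h, w2, y)                  # start from the output scores
r_input = lrp_dense(x, w1, r_hidden)
print(r_input)                                  # per-feature relevance
```

The sum of the input relevances stays close to the sum of the output scores, which is the conservation property that makes LRP scores readable as per-feature contributions.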
Findings
The proposed method produces more accurate recommendations than diverse baseline methods and can provide interpretations for the recommendations.
Originality/value
This study proposes a novel doctor recommendation method. Experimental results demonstrate the effectiveness and robustness of the method in generating accurate and interpretable recommendations. The research provides a practical solution and some managerial implications to online platforms that confront information overload and transparency issues.
Kuldeep Lamba and Surya Prakash Singh
Abstract
Purpose
The purpose of this paper is to identify and analyse the interactions among various enablers which are critical to the success of big data initiatives in operations and supply chain management (OSCM).
Design/methodology/approach
Fourteen enablers of big data in OSCM have been selected from the literature and subsequent deliberations with experts from industry. Three different multi-criteria decision-making (MCDM) techniques, namely, interpretive structural modeling (ISM), fuzzy total interpretive structural modeling (fuzzy-TISM) and decision-making trial and evaluation laboratory (DEMATEL), have been used to identify driving enablers. Further, common enablers from each technique, their hierarchies and inter-relationships have been established.
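Of the three techniques, DEMATEL has a compact numerical core: expert judgments form a direct-influence matrix, which is normalized and expanded into a total-relation matrix whose row/column sums separate driving from dependent factors. A sketch with an invented 4-enabler matrix (the paper uses fourteen enablers and real expert inputs):

```python
# Sketch of the DEMATEL computation: direct-influence matrix -> total-relation
# matrix T = N(I - N)^{-1} -> prominence (D+R) and relation (D-R) scores.
# Positive relation marks a driving enabler; negative marks a dependent one.
import numpy as np

A = np.array([            # expert-judged direct influence, 0 (none) to 4 (very high)
    [0, 3, 2, 4],
    [1, 0, 3, 2],
    [0, 1, 0, 3],
    [1, 0, 1, 0],
], dtype=float)

n = A.shape[0]
norm = max(A.sum(axis=1).max(), A.sum(axis=0).max())
N = A / norm                                  # normalized direct-influence matrix
T = N @ np.linalg.inv(np.eye(n) - N)          # total-relation matrix

D, R = T.sum(axis=1), T.sum(axis=0)           # dispatched vs received influence
for i, (prom, rel) in enumerate(zip(D + R, D - R)):
    role = "driver" if rel > 0 else "dependent"
    print(f"enabler {i}: prominence={prom:.2f}, relation={rel:+.2f} ({role})")
```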
Findings
The enabler modeling using ISM, fuzzy-TISM and DEMATEL shows that top management commitment, financial support for big data initiatives, big data/data science skills, organizational structure and change management program are the most influential/driving enablers. Across all three techniques, these five enablers have been identified as the most promising ones for implementing big data in OSCM. On the other hand, interpretability of analysis, big data quality management, data capture and storage, and data security and privacy have been commonly identified across all three modeling techniques as the most dependent big data enablers for OSCM.
Research limitations/implications
The MCDM models of big data enablers have been formulated based on the inputs from a few domain experts and may not reflect the opinion of the whole practitioner community.
Practical implications
The findings enable decision-makers to appropriately choose the desired enablers and drop the undesired ones when implementing big data initiatives to improve the performance of OSCM. The most common driving big data enablers can be given high priority over others and can significantly enhance the performance of OSCM.
Originality/value
MCDM-based hierarchical models and a causal diagram for big data enablers depicting contextual inter-relationships have been proposed, which is a new effort for the implementation of big data in OSCM.
Abstract
Purpose
Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space; it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage these tensions to optimize AI system performance and trustworthiness.
Design/methodology/approach
The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.
Findings
The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).
Research limitations/implications
As in other systematic literature review studies, the results are limited by the content of the selected papers.
Practical implications
The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.
Originality/value
This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.
Hamid Hassani, Azadeh Mohebi, M.J. Ershadi and Ammar Jalalimanesh
Abstract
Purpose
The purpose of this research is to provide a framework in which new data quality dimensions are defined. The new dimensions provide new metrics for the assessment of lecture video indexing. As lecture video indexing involves various steps, the proposed framework, containing the new dimensions, introduces a new integrated approach for evaluating an indexing method or algorithm from beginning to end.
Design/methodology/approach
The emphasis in this study is on the fifth step of the design science research methodology (DSRM), known as evaluation. That is, methods developed in the field of lecture video indexing, as artifacts, should be evaluated from different aspects. In this research, nine data quality dimensions, including accuracy, value-added, relevancy, completeness, appropriate amount of data, conciseness, consistency, interpretability and accessibility, have been redefined based on previous studies and the nominal group technique (NGT).
Findings
The proposed dimensions are implemented as new metrics to evaluate a newly developed lecture video indexing algorithm, LVTIA, and numerical values have been obtained based on the proposed definitions for each dimension. In addition, the new dimensions are compared with each other in various respects. The comparison shows that each dimension used for assessing lecture video indexing reflects a different weakness or strength of an indexing method or algorithm.
Originality/value
Despite the development of different methods for indexing lecture videos, the issue of data quality and its various dimensions has not been studied. Since low-quality data can affect the process of scientific lecture video indexing, the issue of data quality in this process requires special attention.
Abstract
Purpose
The purpose of this paper is to critically analyze the value of the written comments section on student evaluations of teaching and develop a framework to improve the interpretability of such data.
Design/methodology/approach
The paper reviews past investigations into the reliability and interpretability of student evaluations of teaching, and then constructs a framework that can potentially improve the value of data gathered from written comments.
Findings
It is shown that including information about the congruence of the comment writer's empirical ratings with those of the average class participant may help instructors separate thoughtful comments that represent the majority sentiment from attitudes of a vocal minority or those with personal biases.
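The congruence idea above reduces to comparing each comment author's numeric rating with the class average. A hypothetical sketch (the paper describes a framework, not code; the function name, threshold and labels are invented for illustration):

```python
# Hypothetical sketch: tag written comments by how close the author's
# numeric rating sits to the class average, so instructors can separate
# majority sentiment from outlier or vocal-minority views.
def tag_comments(comments, class_mean, max_gap=1.0):
    """comments: list of (text, author_rating) pairs on the same scale
    as class_mean; max_gap is an illustrative congruence threshold."""
    tagged = []
    for text, rating in comments:
        label = "majority-aligned" if abs(rating - class_mean) <= max_gap else "outlier"
        tagged.append((text, label))
    return tagged

result = tag_comments([("Great pacing", 4.5), ("Waste of time", 1.0)], class_mean=4.2)
print(result)
```

Because only the gap to the class mean is reported, the scheme can be implemented electronically without revealing who wrote which comment.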
Practical implications
The proposed scheme can be implemented electronically while preserving the confidentiality of the evaluators.
Originality/value
The paper offers constructive suggestions on improving the written comments section, a component of student evaluations of teaching that has so far received little systematic appraisal.
James Wakiru, Liliane Pintelon, Peter Muchiri and Peter Chemweno
Abstract
Purpose
The purpose of this paper is to develop a maintenance decision support system (DSS) framework using in-service lubricant data for fault diagnosis. The DSS reveals embedded patterns in the data (knowledge discovery) and automatically quantifies the influence of lubricant parameters on the unhealthy state of the machine using alternative classifiers. The classifiers are compared for robustness from which decision-makers select an appropriate classifier given a specific lubricant data set.
Design/methodology/approach
The DSS embeds a framework integrating cluster and principal component analysis, for feature extraction, and eight classifiers among them extreme gradient boosting (XGB), random forest (RF), decision trees (DT) and logistic regression (LR). A qualitative and quantitative criterion is developed in conjunction with practitioners for comparing the classifier models.
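The pipeline shape described above, dimensionality reduction followed by a bake-off of candidate classifiers on common folds, can be sketched as follows. This is an illustration only: synthetic data stands in for the in-service lubricant dataset, and XGB is omitted in favor of the scikit-learn classifiers the paper also lists.

```python
# Sketch of the comparison stage: reduce features with PCA, then score
# several candidate classifiers on the same cross-validation folds.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for labeled lubricant condition data.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

candidates = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, clf in candidates.items():
    pipe = make_pipeline(PCA(n_components=5), clf)
    scores = cross_val_score(pipe, X, y, cv=5)   # accuracy per fold
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In the paper the quantitative scores are weighed together with qualitative criteria developed with practitioners before a classifier is selected.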
Findings
The results show the importance of embedded knowledge, explored via a knowledge discovery approach, and emphasize its efficacy for maintenance DSS. Importantly, the proposed framework is shown to be plausible for decision support owing to its high accuracy and its consideration of practitioners' needs.
Practical implications
The proposed framework will potentially assist maintenance managers in accurately exploiting lubricant data for maintenance DSS, while offering insights with reduced time and errors.
Originality/value
Lubricant-based intelligent approaches for fault diagnosis are seldom utilized in practice; however, they may be incorporated into information management systems, offering high predictive accuracy. The classification model comparison approach will assist the industry in selecting among divergent models for DSS.
Lei Zhao, Yingyi Zhang and Chengzhi Zhang
Abstract
Purpose
To understand the meaning of a sentence, humans focus on the important words in it, which is reflected in how long, or how many times, our eyes dwell on each word. Thus, some studies utilize eye-tracking values to optimize the attention mechanism in deep learning models, but they do not explain the rationale for this approach. Whether the attention mechanism possesses this feature of human reading needs to be explored.
Design/methodology/approach
The authors conducted experiments on a sentiment classification task. Firstly, they obtained eye-tracking values from two open-source eye-tracking corpora to describe the feature of human reading. Then, the machine attention values of each sentence were learned from a sentiment classification model. Finally, a comparison was conducted to analyze machine attention values and eye-tracking values.
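The final comparison step amounts to correlating per-word attention weights with per-word gaze measurements. A minimal sketch, with invented values standing in for a model's softmax attention and the corpora's fixation times (the paper's actual comparison may use different statistics):

```python
# Sketch of the comparison step: rank-correlate per-word machine attention
# weights with per-word gaze durations. Values are invented for illustration.
from scipy.stats import spearmanr

words     = ["the", "movie", "was", "absolutely", "wonderful"]
attention = [0.05, 0.20, 0.05, 0.25, 0.45]   # softmax weights from a model
gaze_ms   = [120, 260, 110, 310, 420]        # total fixation time per word

rho, p = spearmanr(attention, gaze_ms)
print(f"Spearman rho = {rho:.2f}")           # high rho => attention tracks reading
```

A high rank correlation would indicate that the attention mechanism dwells on the same words human readers do.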
Findings
Through experiments, the authors found that the attention mechanism can focus on important words, such as adjectives, adverbs and sentiment words, which are valuable for judging the sentiment of sentences in the sentiment classification task. It thus possesses the human-reading feature of focusing on important words in a sentence. However, owing to insufficient learning, the attention mechanism wrongly focuses on some words; eye-tracking values can help it correct these errors and improve model performance.
Originality/value
This research not only provides a reasonable explanation for studies that use eye-tracking values to optimize the attention mechanism but also offers new inspiration for the interpretability of the attention mechanism.
Leony Derick, Gayane Sedrakyan, Pedro J. Munoz-Merino, Carlos Delgado Kloos and Katrien Verbert
Abstract
Purpose
The purpose of this paper is to evaluate four visualizations that represent affective states of students.
Design/methodology/approach
An empirical-experimental study approach was used to assess the usability of affective state visualizations in a learning context. The first study was conducted with students who had knowledge of visualization techniques (n=10). The insights from this pilot study were used to improve the interpretability and ease of use of the visualizations. The second study was conducted with the improved visualizations with students who had no or limited knowledge of visualization techniques (n=105).
Findings
The results indicate that usability, measured by perceived usefulness and insight, is overall acceptable. However, the findings also suggest that interpretability of some visualizations, in terms of the capability to support emotional awareness, still needs to be improved. The level of students’ awareness of their emotions during learning activities based on the visualization interpretation varied depending on previous knowledge of information visualization techniques. Awareness was found to be high for the most frequently experienced emotions and activities that were the most frustrating, but lower for more complex insights such as interpreting differences with peers. Furthermore, simpler visualizations resulted in better outcomes than more complex techniques.
Originality/value
Detection of affective states of students, and visualizations of these states in computer-based learning environments, have been proposed to support student awareness and improve learning. However, the evaluation of such visualizations with students, to support awareness in real-life settings, remains an open issue.