Search results

1 – 10 of over 1000
Article
Publication date: 26 July 2021

Zekun Yang and Zhijie Lin

Abstract

Purpose

Tags help promote customer engagement on video-sharing platforms. Video tag recommender systems are artificial intelligence-enabled frameworks that strive to recommend precise tags for videos. Extant video tag recommender systems are uninterpretable, which leads to distrust of the recommendation outcome, hesitation in tag adoption and difficulty in the system debugging process. This study aims to construct an interpretable and novel video tag recommender system to assist video-sharing platform users in tagging their newly uploaded videos.

Design/methodology/approach

The proposed interpretable video tag recommender system is a multimedia deep learning framework composed of convolutional neural networks (CNNs), which receives texts and images as inputs. The interpretability of the proposed system is realized through layer-wise relevance propagation.
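
The explanation technique named here, layer-wise relevance propagation (LRP), can be sketched in a few lines of NumPy. The toy two-layer ReLU network, random weights and epsilon rule below are illustrative stand-ins, not the authors' multimedia CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network; in the paper the weights come from a trained
# multimedia CNN, here they are random stand-ins.
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 3))

def forward(x):
    a1 = np.maximum(0.0, x @ W1)        # hidden ReLU activations
    z2 = a1 @ W2                        # output scores (one per tag)
    return a1, z2

def lrp_epsilon(x, target, eps=1e-6):
    """Redistribute the target tag's score back onto the inputs (epsilon rule)."""
    a1, z2 = forward(x)
    r2 = np.zeros_like(z2)
    r2[target] = z2[target]             # relevance starts at the target score
    # output layer -> hidden layer
    z = a1[:, None] * W2                # each hidden unit's contribution
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    r1 = (z / denom) @ r2
    # hidden layer -> input layer
    z = x[:, None] * W1
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    return (z / denom) @ r1             # per-input relevance scores

x = rng.normal(size=4)
a1, z2 = forward(x)
relevance = lrp_epsilon(x, target=0)
```

Relevance is approximately conserved (the input relevances sum to the target tag's score), so the inputs with the largest relevance, the "keywords" and "key patches" of the abstract, genuinely account for the prediction.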

Findings

The case study and user study demonstrate that the proposed interpretable multimedia CNN model could effectively explain its recommended tag to users by highlighting keywords and key patches that contribute the most to the recommended tag. Moreover, the proposed model achieves an improved recommendation performance by outperforming state-of-the-art models.

Practical implications

The interpretability of the proposed recommender system makes its decision process more transparent, builds users’ trust in the recommender systems and prompts users to adopt the recommended tags. Through labeling videos with human-understandable and accurate tags, the exposure of videos to their target audiences would increase, which enhances information technology (IT) adoption, customer engagement, value co-creation and precision marketing on the video-sharing platform.

Originality/value

To the best of the authors' knowledge, the proposed model is not only the first explainable video tag recommender system but also the first explainable multimedia tag recommender system.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 4 December 2020

Sergei O. Kuznetsov, Alexey Masyutin and Aleksandr Ageev

Abstract

Purpose

The purpose of this study is to show that closure-based classification and regression models provide both high accuracy and interpretability.

Design/methodology/approach

Pattern structures allow one to approach the knowledge extraction problem in the case of partially ordered descriptions. They provide a way to apply techniques based on closed descriptions to non-binary data. To provide scalability of the approach, the authors introduce a lazy (query-based) classification algorithm.
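
The lazy, query-based idea can be sketched with interval pattern structures over numeric data: the meet of two descriptions is the componentwise interval hull, and a training object votes only when the closed pattern it generates with the query covers objects of a single class. The toy data and plain voting rule below are illustrative assumptions, not the authors' exact algorithm:

```python
# Lazy classification with interval pattern structures (illustrative sketch).
# A description is a tuple of numbers; the meet of two descriptions is the
# componentwise interval hull.

def meet(d1, d2):
    """Meet of two interval descriptions: the componentwise hull."""
    return [(min(a, c), max(b, d)) for (a, b), (c, d) in zip(d1, d2)]

def extent(pattern, objects):
    """Indices of objects whose values fall inside the pattern."""
    return [i for i, obj in enumerate(objects)
            if all(lo <= v <= hi for v, (lo, hi) in zip(obj, pattern))]

def lazy_classify(query, X, y):
    """Query-based classification: no global model is ever built."""
    votes = {}
    for xi, yi in zip(X, y):
        pattern = meet([(v, v) for v in query], [(v, v) for v in xi])
        ext = extent(pattern, X)
        # A training object votes only if the closed pattern it generates
        # with the query covers objects of a single class.
        if all(y[j] == yi for j in ext):
            votes[yi] = votes.get(yi, 0) + 1
    return max(votes, key=votes.get) if votes else None

X = [(0, 0), (1, 0), (0, 1), (5, 5)]
y = [0, 0, 0, 1]
label = lazy_classify((0.5, 0.5), X, y)
```

Each vote comes with the interval pattern that justified it, which is what gives the closure-based model its interpretability.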

Findings

The experiments support the hypothesis that closure-based classification and regression achieve higher accuracy in scoring models than classical banking models while retaining the interpretability of the results, whereas black-box methods gain accuracy at the cost of interpretability.

Originality/value

This is original research showing the advantage of closure-based classification and regression models in the banking sphere.

Details

Asian Journal of Economics and Banking, vol. 4 no. 3
Type: Research Article
ISSN: 2615-9821

Article
Publication date: 25 March 2021

Per Hilletofth, Movin Sequeira and Wendy Tate

Abstract

Purpose

This paper investigates the suitability of fuzzy-logic-based support tools for initial screening of manufacturing reshoring decisions.

Design/methodology/approach

Two fuzzy-logic-based support tools are developed together with experts from a Swedish manufacturing firm. The first uses a complete rule base and the second a reduced rule base. Sixteen inference settings are used in both of the support tools.
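
A minimal sketch of the kind of fuzzy inference such a support tool performs, assuming Mamdani min-max inference, triangular membership functions and only two illustrative criteria (cost and quality) on invented 0-10 scales rather than the paper's six primary criteria:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function evaluated at x."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else (x >= a).astype(float)
    right = (c - x) / (c - b) if c > b else (x <= c).astype(float)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

U = np.linspace(0.0, 10.0, 201)              # output universe: attractiveness
OUT_LOW = tri(U, 0, 0, 5)                    # "do not evaluate further"
OUT_HIGH = tri(U, 5, 10, 10)                 # "evaluate further"

def screen(cost, quality):
    """Mamdani min-max inference with two illustrative rules."""
    cost_good = tri(cost, 5, 10, 10)
    cost_bad = tri(cost, 0, 0, 5)
    quality_good = tri(quality, 5, 10, 10)
    # Rule 1: IF cost is good AND quality is good THEN screen in.
    r1 = min(cost_good, quality_good)
    # Rule 2: IF cost is bad THEN screen out.
    r2 = cost_bad
    agg = np.maximum(np.minimum(r1, OUT_HIGH), np.minimum(r2, OUT_LOW))
    if agg.sum() == 0:
        return 5.0                           # no rule fired: stay neutral
    return float((U * agg).sum() / agg.sum())  # centroid defuzzification

decision = screen(cost=9, quality=9)         # high score: evaluate further
```

A reduced rule base simply means fewer rules of this human-readable IF-THEN form, which is what keeps the inference easy to inspect.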

Findings

The findings show that fuzzy-logic-based support tools are suitable for initial screening of manufacturing reshoring decisions. The developed support tools are capable of suggesting whether a reshoring decision should be further evaluated or not, based on six primary competitiveness criteria. In contrast to the existing literature, this research shows that accuracy does not depend on whether a complete or a reduced rule base is used: the developed support tools perform similarly, with no statistically significant differences. However, since a reduced rule base is far more interpretable and requires fewer resources to develop, the second tool is preferable for initial screening purposes.

Research limitations/implications

The developed support tools are implemented at a primary-criteria level and to make them more applicable, they should also include the sub-criteria level. The support tools should also be expanded to not only consider competitiveness criteria, but also other criteria related to availability of resources and strategic orientation of the firm. This requires further research with regard to multi-stage architecture and automatic generation of fuzzy rules in the manufacturing reshoring domain.

Practical implications

The support tools help managers to invest their scarce time on the most promising reshoring projects and to make timely and resilient decisions by taking a holistic perspective on competitiveness. Practitioners are advised to choose the type of support tool based on the available data.

Originality/value

There is a general lack of decision support tools in the manufacturing reshoring domain. This paper addresses the gap by developing fuzzy-logic-based support tools for initial screening of manufacturing reshoring decisions.

Details

Industrial Management & Data Systems, vol. 121 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 31 July 2019

Zhe Zhang and Yue Dai

Abstract

Purpose

For classification problems of customer relationship management (CRM), the purpose of this paper is to propose a method with interpretability of the classification results that combines multiple decision trees based on a genetic algorithm.

Design/methodology/approach

In the proposed method, multiple decision trees are combined in parallel. Subsequently, a genetic algorithm is used to optimize the weight matrix in the combination algorithm.
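
The combination scheme can be sketched as follows, assuming a simple genetic algorithm (truncation selection, averaging crossover, Gaussian mutation) that tunes the weights applied to each tree's class probabilities; the "tree" outputs below are random stand-ins rather than trained decision trees:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the class-probability outputs of three trained decision
# trees on a validation set: shape (trees, samples, classes).
tree_probs = rng.dirichlet(np.ones(2), size=(3, 50))
y_val = tree_probs[0].argmax(axis=1)   # toy labels that tree 0 predicts well

def fitness(weights):
    """Accuracy of the weighted parallel combination of the trees."""
    combined = np.tensordot(weights, tree_probs, axes=1)  # (samples, classes)
    return (combined.argmax(axis=1) == y_val).mean()

def ga_optimize(pop_size=20, generations=30, sigma=0.1):
    pop = rng.random((pop_size, 3))                       # candidate weights
    for _ in range(generations):
        fit = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]   # selection
        i = rng.integers(0, len(parents), size=(2, pop_size - len(parents)))
        children = (parents[i[0]] + parents[i[1]]) / 2    # crossover
        children += rng.normal(0.0, sigma, children.shape)  # mutation
        pop = np.vstack([parents, np.clip(children, 0.0, None)])
    fit = np.array([fitness(w) for w in pop])
    return pop[fit.argmax()]

best_weights = ga_optimize()
```

Because each component remains an ordinary decision tree, the classification rules stay readable after the weights are optimized; only the vote weights change.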

Findings

The method is applied to customer credit rating assessment and customer response behavior pattern recognition. The results demonstrate that compared to a single decision tree, the proposed combination method improves the predictive accuracy and optimizes the classification rules, while maintaining interpretability of the classification results.

Originality/value

The findings of this study contribute to research methodologies in CRM. Specifically, the study proposes a new interpretable method that combines multiple decision trees based on genetic algorithms for customer classification.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 32 no. 5
Type: Research Article
ISSN: 1355-5855

Article
Publication date: 9 April 2019

Helena Webb, Menisha Patel, Michael Rovatsos, Alan Davoust, Sofia Ceppi, Ansgar Koene, Liz Dowthwaite, Virginia Portillo, Marina Jirotka and Monica Cano

Abstract

Purpose

The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be achieved remains unclear.

Design/methodology/approach

An empirical study was conducted to begin unpacking issues around algorithmic interpretability and transparency. The study involved discussion-based experiments centred around a limited resource allocation scenario which required participants to select their most and least preferred algorithms in a particular context. In addition to collecting quantitative data about preferences, qualitative data captured participants’ expressed reasoning behind their selections.

Findings

Even when provided with the same information about the scenario, participants made different algorithm preference selections and rationalised their selections differently. The study results revealed diversity in participant responses but consistency in the emphasis they placed on normative concerns and the importance of context when accounting for their selections. The issues raised by participants as important to their selections resonate closely with values that have come to the fore in current debates over algorithm prevalence.

Originality/value

This work developed a novel empirical approach that demonstrates the value in pursuing algorithmic interpretability and transparency while also highlighting the complexities surrounding their accomplishment.

Details

Journal of Information, Communication and Ethics in Society, vol. 17 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 11 June 2021

Wei Du, Qiang Yan, Wenping Zhang and Jian Ma

Abstract

Purpose

Patent trade recommendations necessitate recommendation interpretability in addition to recommendation accuracy because of patent transaction risks and the technological complexity of patents. This study designs an interpretable knowledge-aware patent recommendation model (IKPRM) for patent trading. IKPRM first creates a patent knowledge graph (PKG) for patent trade recommendations and then leverages paths in the PKG to achieve recommendation interpretability.

Design/methodology/approach

First, we construct a PKG to integrate online company behaviors and patent information using natural language processing techniques. Second, a bidirectional long short-term memory network (BiLSTM) is utilized with an attention mechanism to establish the connecting paths of a company-patent pair in the PKG. Finally, the prediction score of a company-patent pair is calculated by assigning different weights to their connecting paths. The semantic relationships in the connecting paths help explain why a candidate patent is recommended.
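
The path-weighting step can be sketched as follows; a mean over step embeddings stands in for the BiLSTM encoder, and all embeddings are random illustrative values rather than anything learned from a patent graph:

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 8

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode_path(steps):
    """Encode one connecting path. The paper uses a BiLSTM; a mean over
    the step embeddings stands in for it here."""
    return steps.mean(axis=0)

def score_pair(paths, attn_vec, out_vec):
    """Attention-weighted aggregation of path encodings -> match score.
    The attention weights double as the explanation: they say which
    connecting path contributed most to the recommendation."""
    enc = np.stack([encode_path(p) for p in paths])   # (n_paths, DIM)
    attn = softmax(enc @ attn_vec)                    # one weight per path
    pooled = attn @ enc                               # (DIM,)
    score = 1.0 / (1.0 + np.exp(-pooled @ out_vec))   # sigmoid match score
    return score, attn

# Two toy connecting paths between one company and one patent,
# each a sequence of random step embeddings.
paths = [rng.normal(size=(3, DIM)), rng.normal(size=(4, DIM))]
attn_vec, out_vec = rng.normal(size=DIM), rng.normal(size=DIM)
score, attn = score_pair(paths, attn_vec, out_vec)
```

The highest-weighted connecting path supplies the semantic relationship that is shown to the user as the explanation.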

Findings

Experiments on a real dataset from a patent trading platform verify that IKPRM significantly outperforms baseline methods in terms of hit ratio and normalized discounted cumulative gain (nDCG). The analysis of an online user study verified the interpretability of our recommendations.

Originality/value

Meta-path-based recommendation can achieve some explainability but suffers from low flexibility when reasoning over heterogeneous information. To bridge this gap, we propose IKPRM, which explains the full paths in the knowledge graph. IKPRM demonstrates good performance and transparency and provides a solid foundation for integrating interpretable artificial intelligence into complex tasks such as intelligent recommendation.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 20 September 2019

Tingyu Weng, Wenyang Liu and Jun Xiao

Abstract

Purpose

The purpose of this paper is to design a model that can accurately forecast the supply chain sales.

Design/methodology/approach

This paper proposes a new model based on LightGBM and LSTM to forecast supply chain sales. To verify the accuracy and efficiency of this model, three representative supply chain sales data sets are selected for experiments.
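
One simple way to combine two forecasters is a convex blend whose weight is chosen on hold-out data; the sketch below uses synthetic stand-ins for the LightGBM and LSTM forecasts, and the paper's actual combination may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hold-out sales series and stand-ins for the two component forecasts
# (in the paper these would come from trained LightGBM and LSTM models).
y_true = 50 + 10 * np.sin(np.linspace(0, 6, 60))
pred_gbm = y_true + rng.normal(0.0, 1.0, 60)
pred_lstm = y_true + rng.normal(0.0, 2.0, 60)

def mse(a, b):
    return float(((a - b) ** 2).mean())

def blend_weight(y, p1, p2):
    """Convex weight w minimizing MSE of w*p1 + (1-w)*p2 on hold-out data."""
    grid = np.linspace(0.0, 1.0, 101)
    errors = [mse(y, w * p1 + (1 - w) * p2) for w in grid]
    return float(grid[int(np.argmin(errors))])

w = blend_weight(y_true, pred_gbm, pred_lstm)
blended = w * pred_gbm + (1 - w) * pred_lstm
```

Because w = 0 and w = 1 are in the search grid, the blend is never worse on the hold-out set than either component alone, and the single weight w is itself easy to interpret.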

Findings

The experimental results show that the combined model can forecast supply chain sales with high accuracy, efficiency and interpretability.

Practical implications

With the rapid development of big data and AI, using big data analysis and algorithmic technology to accurately forecast long-term sales of goods will provide the data foundation for the supply chain and key technical support for enterprises establishing supply chain solutions. This paper provides an effective method for supply chain sales forecasting that can help enterprises forecast long-term commodity sales scientifically and reasonably.

Originality/value

The proposed model not only inherits the ability of the LSTM model to automatically mine high-level temporal features but also retains the advantages of the LightGBM model, such as high efficiency and strong interpretability, making it suitable for industrial production environments.

Details

Industrial Management & Data Systems, vol. 120 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 17 May 2021

Ziming Zeng, Yu Shi, Lavinia Florentina Pieptea and Junhua Ding

Abstract

Purpose

Aspects extracted from the user's historical records are widely used to define the user's fine-grained preferences for building interpretable recommendation systems. Because the aspects are extracted from historical records, aspects that represent the user's negative preferences cannot be identified, owing to their absence from the records. However, these latent aspects are as important for building a recommendation system as the aspects representing the user's positive preferences. This paper aims to identify the user's positive and negative preferences for building an interpretable recommendation system.

Design/methodology/approach

First, high-frequency tags are selected as aspects to describe user preferences at the aspect level. Second, positive and negative user preferences are calculated according to the positive and negative preference model, and the interaction between similar aspects is adopted to address the aspect sparsity problem. Finally, an experiment is designed to evaluate the effectiveness of the model. The code and experiment data are available at: https://github.com/shiyu108/Recommendation-system
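
A minimal sketch of the aspect-level idea, under loudly illustrative assumptions (toy tags, a hand-written similarity matrix and an arbitrary 0.15 threshold, none of which are the paper's actual model):

```python
import numpy as np

# Aspects (high-frequency tags) and a toy user history.
aspects = ["action", "comedy", "horror", "romance"]
history_counts = np.array([8, 2, 0, 0])   # aspect frequencies in the history

# Positive preference: normalized aspect frequency.
pos = history_counts / history_counts.sum()

# Aspect-aspect similarity matrix (in practice learned, e.g. from
# embeddings); the values here are invented for illustration.
sim = np.array([[1.0, 0.3, 0.4, 0.1],
                [0.3, 1.0, 0.1, 0.5],
                [0.4, 0.1, 1.0, 0.1],
                [0.1, 0.5, 0.1, 1.0]])

# Smooth sparse aspects with similar ones to address aspect sparsity,
# then flag aspects whose smoothed score stays below a chosen threshold
# as latent negative preferences.
smoothed = sim @ pos / sim.sum(axis=1)
negative = [a for a, s in zip(aspects, smoothed) if s < 0.15]
```

Note how the similarity smoothing rescues "horror" (similar to the frequent "action") while "romance" stays low, illustrating how interaction between similar aspects separates genuinely negative preferences from merely sparse ones.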

Findings

Experimental results show the proposed approach outperforms state-of-the-art methods on widely used public data sets, and confirm that latent negative-preference aspects are as important as positive-preference aspects for building a recommendation system.

Originality/value

This paper provides a new approach that identifies and uses not only users’ positive preferences but also negative preferences, which can capture user preference precisely. Besides, the proposed model provides good interpretability.

Article
Publication date: 8 June 2021

Hui Yuan and Weiwei Deng

Abstract

Purpose

Recommending suitable doctors to patients on healthcare consultation platforms is important to both the patients and the platforms. Although doctor recommendation methods have been proposed, they fail to explain their recommendations and to address the data sparsity problem, i.e. most patients on the platforms are new and provide little information beyond disease descriptions. This research aims to develop an interpretable doctor recommendation method based on knowledge graph and interpretable deep learning techniques to fill these research gaps.

Design/methodology/approach

This research proposes an advanced doctor recommendation method that leverages a health knowledge graph to overcome the data sparsity problem and uses deep learning techniques to generate accurate and interpretable recommendations. The proposed method extracts interactive features from the knowledge graph to indicate implicit interactions between patients and doctors and identifies individual features that signal the doctors' service quality. Then, the authors feed the features into a deep neural network with layer-wise relevance propagation to generate readily usable and interpretable recommendation results.
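
The role of the knowledge graph against data sparsity can be illustrated with a toy interactive feature: counting patient-disease-doctor connecting paths via a matrix product. All entities and adjacency values below are invented for illustration:

```python
import numpy as np

# Toy health knowledge graph as adjacency matrices.
# Rows: patients; columns: diseases mentioned in their descriptions.
patient_disease = np.array([[1, 0, 0],
                            [0, 1, 1]])
# Rows: diseases; columns: doctors who treat them.
disease_doctor = np.array([[1, 1, 0],
                           [0, 1, 1],
                           [0, 0, 1]])

# Interactive feature: the number of patient -> disease -> doctor
# connecting paths, an implicit interaction signal that exists even for
# brand-new patients who only provided a disease description.
interaction = patient_disease @ disease_doctor
# interaction[i, j] counts the 2-hop links between patient i and doctor j.
```

Features of this kind, together with per-doctor quality features, are what a downstream network with layer-wise relevance propagation can then attribute the recommendation to.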

Findings

The proposed method produces more accurate recommendations than diverse baseline methods and can provide interpretations for the recommendations.

Originality/value

This study proposes a novel doctor recommendation method. Experimental results demonstrate the effectiveness and robustness of the method in generating accurate and interpretable recommendations. The research provides a practical solution and some managerial implications to online platforms that confront information overload and transparency issues.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage these tensions to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243
