Search results

1 – 10 of over 2000
Article
Publication date: 11 September 2019

Duen-Ren Liu, Yu-Shan Liao and Jun-Yi Lu

Abstract

Purpose

Providing online news recommendations to users has become an important trend for online media platforms, enabling them to attract more users. The purpose of this paper is to propose an online news recommendation system for recommending news articles to users when browsing news on online media platforms.

Design/methodology/approach

A Collaborative Semantic Topic Modeling (CSTM) method and an ensemble model (EM) are proposed to predict user preferences based on the combination of matrix factorization with articles’ semantic latent topics derived from word embedding and latent topic modeling. The proposed EM further integrates an online interest adjustment (OIA) mechanism to adjust users’ online recommendation lists based on their current news browsing.
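
A minimal sketch of the kind of ensemble described above, assuming a simple weighted blend of a matrix-factorization preference score and a topic-similarity score, plus an OIA-like boost toward the article currently being browsed; the function names, vectors and the alpha/beta weights are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def ensemble_score(user_latent, item_latent, user_topics, item_topics, alpha=0.5):
    # Assumed blend: alpha * matrix-factorization preference + (1 - alpha) * topic similarity.
    return alpha * float(user_latent @ item_latent) + (1 - alpha) * cosine(user_topics, item_topics)

def online_adjust(scores, item_topics, current_topics, beta=0.3):
    # OIA-like re-ranking (assumed form): boost candidates whose topics resemble
    # the article the user is browsing right now.
    return {i: s + beta * cosine(item_topics[i], current_topics) for i, s in scores.items()}
```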

Findings

This study evaluated the proposed approach using offline experiments as well as an online evaluation on an existing online media platform. The evaluation shows that the proposed method can improve recommendation quality and achieve better performance than other recommendation methods. The online evaluation also shows that integrating the proposed method with OIA can improve the click-through rate for online news recommendation.

Originality/value

The novel CSTM and EM combined with OIA are proposed for news recommendation. The proposed novel recommendation system can improve the click-through rate of online news recommendations, thus increasing online media platforms’ commercial value.

Details

Industrial Management & Data Systems, vol. 119 no. 8
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 26 October 2017

Sudhanshu Joshi, Manu Sharma and Shalu Rathi

Abstract

The chapter presents a comprehensive review of cross-disciplinary literature in the domain of supply chain forecasting over the research period 1991–2017, with the primary aim of exploring the growth of literature from operational to demand-centric forecasting and decision making in service supply chain systems. A list of 15,000 articles drawn from journals and search results in academic databases (viz. Science Direct, Web of Science) is used. Out of various content analysis techniques (Seuring & Gold, 2012), latent semantic analysis (LSA) is used as the content analysis tool (Wei, Yang, & Lin, 2008; Kundu et al., 2015). The reason for adopting LSA over existing bibliometric techniques is its combination of text analysis and text mining methods to formulate latent factors. LSA creates the scientific grounding to understand the trends. Using LSA to understand future research trends will assist researchers in the area of service supply chain forecasting. The study will be beneficial for practitioners of the strategic and operational aspects of service supply chain decision making. The chapter comprises four sections. The first section introduces service supply chain management and research development in this domain. The second section describes the use of LSA in the current study. The third section presents the findings and results. The fourth and final section concludes the chapter with a brief discussion of the research findings, their limitations, and the implications for future research. The outcomes of the analysis presented in this chapter also provide opportunities for researchers and professionals to position their future service supply chain research and/or implementation strategies.
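
An illustrative sketch of the standard LSA recipe referred to above (TF-IDF weighting followed by truncated SVD), using scikit-learn on toy documents; the corpus and the choice of two latent factors are assumptions, not the chapter's actual data or code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [  # toy stand-ins for article abstracts
    "demand forecasting in service supply chains",
    "collaborative planning, forecasting and replenishment in supply chains",
    "latent semantic analysis of research abstracts",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                  # document-term matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_factors = lsa.fit_transform(X)             # document loadings on latent factors
term_factors = lsa.components_                 # factor-by-term loadings
print(doc_factors.round(2))
print(tfidf.get_feature_names_out())
```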

Article
Publication date: 20 July 2023

Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab

Abstract

Purpose

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Design/methodology/approach

This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.
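
A minimal sketch of co-word (keyword co-occurrence) network construction of the sort described, written in Python with networkx rather than the VOSviewer/UCINet tools the study used; the toy records and keywords are assumptions:

```python
from collections import Counter
from itertools import combinations
import networkx as nx

records = [  # toy author-keyword sets, one set per publication
    {"linked data", "ontology", "semantic web"},
    {"ontology", "semantic web", "OWL"},
    {"linked data", "RDF", "ontology"},
]
cooc = Counter()
for keywords in records:
    cooc.update(combinations(sorted(keywords), 2))   # count each keyword pair once per record

G = nx.Graph()
for (a, b), weight in cooc.items():
    G.add_edge(a, b, weight=weight)

print(nx.degree_centrality(G))                       # simple centrality view of the co-word network
```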

Findings

The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science. Besides, the USA was the most prolific country. The keyword “ontology” had the highest co-occurrence frequency, and “ontology” and “semantic” formed the most frequent co-word pair. In terms of the network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. Comparisons between the two clustering techniques indicated that three clusters, namely semantic bioinformatics, knowledge representation, and semantic tools, were common to both. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax, and ontology-based deep learning.

Originality/value

This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization, and hierarchical clustering, to present a suitable, visual, methodical, and comprehensive perspective on linked data.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Book part
Publication date: 13 December 2017

Qiongwei Ye and Baojun Ma

Abstract

Internet + and Electronic Business in China is a comprehensive resource that provides insight and analysis into E-commerce in China and how it has revolutionized and continues to revolutionize business and society. Split into four distinct sections, the book first lays out the theoretical foundations and fundamental concepts of E-Business before moving on to look at internet+ innovation models and their applications in different industries such as agriculture, finance and commerce. The book then provides a comprehensive analysis of E-business platforms and their applications in China before finishing with four comprehensive case studies of major E-business projects, providing readers with successful examples of implementing E-Business entrepreneurship projects.

Details

Internet+ and Electronic Business in China: Innovation and Applications
Type: Book
ISBN: 978-1-78743-115-7

Article
Publication date: 22 February 2011

Lin‐Chih Chen

Abstract

Purpose

Term suggestion is a very useful information retrieval technique that tries to suggest relevant terms for users' queries, to help advertisers find more appropriate terms relevant to their target market. This paper aims to focus on the problem of using several semantic analysis methods to implement a term suggestion system.

Design/methodology/approach

Three semantic analysis techniques are adopted – latent semantic indexing (LSI), probabilistic latent semantic indexing (PLSI), and a keyword relationship graph (KRG) – to implement a term suggestion system.
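
A hedged sketch of term suggestion via latent semantic indexing, the first of the three techniques listed: terms are embedded by a truncated SVD of the term-document matrix and ranked by cosine similarity to the query term. The toy corpus and the suggest helper are illustrative, not the paper's system:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["cheap flights and hotel deals", "hotel booking discounts",
        "flight tickets and airline deals", "semantic analysis of search queries"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                 # document-term matrix
svd = TruncatedSVD(n_components=2, random_state=0)
term_vecs = svd.fit_transform(X.T)          # term vectors in the latent (LSI) space
terms = list(vec.get_feature_names_out())

def suggest(term, k=3):
    # Rank other terms by cosine similarity to the query term in LSI space.
    i = terms.index(term)
    sims = term_vecs @ term_vecs[i]
    sims = sims / (np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(term_vecs[i]) + 1e-12)
    order = np.argsort(-sims)
    return [terms[j] for j in order if j != i][:k]

print(suggest("hotel"))
```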

Findings

This paper shows that using multiple semantic analysis techniques can give significant performance improvements.

Research limitations/implications

The suggested terms returned by the system may be out of date, since the system uses a batch processing mode to update the training parameters.

Originality/value

The paper shows that the benefit of the techniques is to overcome the problems of synonymy and polysemy in the information retrieval field by using a vector space model. Moreover, an intelligent stopping strategy is proposed to reduce the number of iterations required for probabilistic latent semantic indexing.

Details

Online Information Review, vol. 35 no. 1
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 February 2016

Manoj Manuja and Deepak Garg

Abstract

Purpose

Syntax-based text classification (TC) mechanisms have been overtly replaced by semantic-based systems in recent years. Semantic-based TC systems are particularly useful in those scenarios where similarity among documents is computed considering semantic relationships among their terms. Kernel functions have received major attention because of the unprecedented popularity of SVMs in the field of TC. Most of the kernel functions exploit syntactic structures of the text, but quite a few also use a priori semantic information for knowledge extraction. The purpose of this paper is to investigate semantic kernel functions in the context of TC.

Design/methodology/approach

This work presents a performance and accuracy analysis of seven semantic kernel functions (Semantic Smoothing Kernel, Latent Semantic Kernel, Semantic WordNet-based Kernel, Semantic Smoothing Kernel having Implicit Superconcept Expansions, Compactness-based Disambiguation Kernel Function, Omiotis-based S-VSM semantic kernel function and Top-k S-VSM semantic kernel) implemented with SVM as the kernel method. All seven semantic kernels are implemented in the SVM-Light tool.
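
An illustrative sketch of a generic semantic kernel of the form K = X·S·Xᵀ (a document-term matrix smoothed by a term-similarity matrix S) plugged into an SVM as a precomputed Gram matrix; the paper's kernels were implemented in SVM-Light, so the scikit-learn code and the toy matrices below are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 0, 1],          # toy document-term counts (4 documents, 3 terms)
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
y = np.array([0, 1, 0, 1])        # toy class labels
S = np.array([[1.0, 0.2, 0.5],    # assumed term-similarity matrix (e.g. WordNet- or corpus-derived)
              [0.2, 1.0, 0.3],
              [0.5, 0.3, 1.0]])
K = X @ S @ X.T                   # "semantic" Gram matrix between documents

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K))             # predictions on the same toy documents
```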

Findings

Performance and accuracy parameters of the seven semantic kernel functions have been evaluated and compared. The experimental results show that the Top-k S-VSM semantic kernel has the highest performance and accuracy among all the evaluated kernel functions, which makes it a preferred building block for kernel methods for TC and retrieval.

Research limitations/implications

A combination of semantic kernel functions with syntactic kernel functions needs to be investigated, as there is scope for further improvement in the accuracy and performance of all seven semantic kernel functions.

Practical implications

This research provides an insight into TC using a priori semantic knowledge. Three commonly used data sets are exploited. It will be quite interesting to explore these kernel functions on live web data, which may test their actual utility in real business scenarios.

Originality/value

Comparison of performance and accuracy parameters is the novel point of this research paper. To the best of the authors’ knowledge, this type of comparison has not been done previously.

Details

Program, vol. 50 no. 1
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 21 June 2019

Aniruddha Anil Wagire, A.P.S. Rathore and Rakesh Jain

Abstract

Purpose

In recent years, Industry 4.0 has received immense attention from the academic community, practitioners and governments across nations, resulting in explosive growth in the publication of articles and making it imperative to reveal and discern the core research areas and research themes of the extant Industry 4.0 literature. The purpose of this paper is to discuss research dynamics and to propose a taxonomy of the Industry 4.0 research landscape along with future research directions.

Design/methodology/approach

A data-driven text mining approach, Latent Semantic Analysis (LSA), is used to review and extract knowledge from a large corpus of 503 abstracts of academic papers published in various journals and conference proceedings. The adopted technique extracts several latent factors that characterise the emerging pattern of research. A cross-loading analysis of highly loaded papers is performed to identify the semantic links between research areas and themes.
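
A sketch, under assumed toy numbers, of the cross-loading step described above: after the LSA factorization, papers that load highly on more than one latent factor are flagged as links between research areas.

```python
import numpy as np

doc_loadings = np.array([[0.82, 0.10, 0.05],   # toy paper-by-factor loading matrix from an LSA run
                         [0.55, 0.60, 0.02],
                         [0.08, 0.12, 0.90]])
threshold = 0.5                                 # assumed cut-off for a "high" loading

for paper, row in enumerate(doc_loadings):
    high = np.where(row >= threshold)[0]
    if len(high) > 1:
        print(f"paper {paper} cross-loads on factors {high.tolist()}")  # -> paper 1, factors [0, 1]
```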

Findings

LSA results uncover 13 principal research areas and 100 research themes. The study identifies “smart factory” and “new business model” as the dominant research areas. A taxonomy is developed which contains five topical areas of the Industry 4.0 field.

Research limitations/implications

The data set developed is based on a systematic article-refining process, which includes keyword searches in selected electronic databases, and is limited to English-language articles only. So, there is a possibility that related work published in databases other than those examined, or in languages other than English, is not captured in the data set.

Originality/value

To the best of the authors’ knowledge, this study is the first of its kind to use the LSA technique to reveal research trends in the Industry 4.0 domain. This review will help scholars and practitioners understand the diversity of Industry 4.0 research and draw a roadmap for it. The taxonomy and the outlined future research agenda could help practitioners and academicians position their research work.

Details

Journal of Manufacturing Technology Management, vol. 31 no. 1
Type: Research Article
ISSN: 1741-038X

Book part
Publication date: 20 September 2018

Arthur C. Graesser, Nia Dowell, Andrew J. Hampton, Anne M. Lippert, Haiying Li and David Williamson Shaffer

Abstract

This chapter describes how conversational computer agents have been used in collaborative problem-solving environments. These agent-based systems are designed to (a) assess the students’ knowledge, skills, actions, and various other psychological states on the basis of the students’ actions and the conversational interactions, (b) generate discourse moves that are sensitive to the psychological states and the problem states, and (c) advance a solution to the problem. We describe how this was accomplished in the Programme for International Student Assessment (PISA) for Collaborative Problem Solving (CPS) in 2015. In the PISA CPS 2015 assessment, a single human test taker (15-year-old student) interacts with one, two, or three agents that stage a series of assessment episodes. This chapter proposes that this PISA framework could be extended to accommodate more open-ended natural language interaction for those languages that have developed technologies for automated computational linguistics and discourse. Two examples support this suggestion, with associated relevant empirical support. First, there is AutoTutor, an agent that collaboratively helps the student answer difficult questions and solve problems. Second, there is CPS in the context of a multi-party simulation called Land Science in which the system tracks progress and knowledge states of small groups of 3–4 students. Human mentors or computer agents prompt them to perform actions and exchange open-ended chat in a collaborative learning and problem-solving environment.

Details

Building Intelligent Tutoring Systems for Teams
Type: Book
ISBN: 978-1-78754-474-1

Article
Publication date: 29 August 2008

Marco Kalz, Jan van Bruggen, Bas Giesbers, Wim Waterink, Jannes Eshuis and Rob Koper

Abstract

Purpose

The purpose of this paper is twofold: first the paper aims to sketch the theoretical basis for the use of electronic portfolios for prior learning assessment; second it endeavours to introduce latent semantic analysis (LSA) as a powerful method for the computation of semantic similarity between texts and a basis for a new observation link for prior learning assessment.

Design/methodology/approach

A short literature review about e-assessment was conducted, with the result that none of the reviews included new and innovative methods for the assessment of learners' open responses and narratives. On a theoretical basis, the connection between e-portfolio research and research about prior learning assessment is explained, based on existing literature. After that, LSA is introduced and several examples from similar educational applications are provided. A model for prior learning assessment on the basis of LSA is presented. A case study at the Open University of the Netherlands is presented and preliminary results are discussed.
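
A minimal sketch of the LSA-based similarity measurement described, assuming scikit-learn and toy course texts rather than the study's actual portfolio data: course chapters and a learner's portfolio text are folded into a low-dimensional LSA space and compared with cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

chapters = ["introduction to classical conditioning",            # toy course chapters
            "operant conditioning and reinforcement schedules",
            "cognitive theories of learning and memory"]
portfolio = ["my work experience involved reinforcement and behaviour shaping"]

vec = TfidfVectorizer(stop_words="english").fit(chapters)
svd = TruncatedSVD(n_components=2, random_state=0).fit(vec.transform(chapters))

chapter_lsa = svd.transform(vec.transform(chapters))
portfolio_lsa = svd.transform(vec.transform(portfolio))
print(cosine_similarity(portfolio_lsa, chapter_lsa))   # similarity of the portfolio to each chapter
```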

Findings

A first inspection of the results shows that the similarity measurement that is produced by the system can differentiate between learners who sent in different material and between the learning activities and chapters.

Originality/value

The paper is original because it combines research from natural language processing with very practical educational problems in higher education and technology-enhanced learning. For faculty members, the presented model and technology can support the assessment phase of an APL procedure. In addition, the presented model offers a dynamic method for reasoning about prior knowledge in adaptive e-learning systems.

Details

Campus-Wide Information Systems, vol. 25 no. 4
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 8 May 2017

Panagiotis Mazis and Andrianos Tsekrekos

Abstract

Purpose

The purpose of this paper is to analyze the content of the statements that are released by the Federal Open Market Committee (FOMC) after its meetings, identify the main textual associative patterns in the statements and examine their impact on the US treasury market.

Design/methodology/approach

Latent semantic analysis (LSA), a language processing technique that allows recognition of the textual associative patterns in documents, is applied to all the statements released by the FOMC between 2003 and 2014, so as to identify the main textual “themes” used by the Committee in its communication to the public. The importance of the main identified “themes” is tracked over time, before examining their (collective and individual) effect on treasury market yield volatility via time-series regression analysis.
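
An illustrative sketch, on synthetic data, of the final step described above: regressing treasury-yield volatility on the per-statement importance of the identified themes via ordinary least squares (statsmodels). The number of statements, the number of themes and the coefficients are assumptions, not the authors' data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
theme_importance = rng.random((96, 6))            # toy: 96 statements x 6 theme loadings
yield_vol = (0.3 * theme_importance[:, 0]         # synthetic volatility driven by two themes
             - 0.2 * theme_importance[:, 3]
             + rng.normal(0.0, 0.05, 96))

model = sm.OLS(yield_vol, sm.add_constant(theme_importance)).fit()
print(model.params.round(3))                      # recovered constant and per-theme coefficients
```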

Findings

The authors find that FOMC statements incorporate multiple, multifaceted and recurring textual themes, with six of them able to characterize most of the communicated monetary policy in the sample period. The themes are statistically significant in explaining the variation in three-month, two-year, five-year and ten-year treasury yields, even after controlling for monetary policy uncertainty and the concurrent economic outlook.

Research limitations/implications

The main research implication of the authors’ study is that LSA can successfully identify the most economically significant themes underlying the Fed’s communication, as the latter is expressed in monetary policy statements. The authors feel that the findings of the study would be strengthened if the analysis were repeated using intra-day (tick-by-tick or five-minute) data on treasury yields.

Social implications

The authors’ findings are consistent with the notion that the move to “increased transparency” by the Fed is important and meaningful for financial and capital markets, as suggested by the significant effect that the most important identified textual themes have on treasury yield volatility.

Originality/value

This paper makes a timely contribution to a fairly recent stream of research that combines specific textual and statistical techniques so as to conduct content analysis. To the best of their knowledge, the authors’ study is the first that applies the LSA to the statements released by the FOMC.

Details

Review of Accounting and Finance, vol. 16 no. 2
Type: Research Article
ISSN: 1475-7702
