Search results
1 – 10 of 34

Abhijit Thakuria, Indranil Chakraborty and Dipen Deka
Abstract
Purpose
Websites, search engines, recommender systems, artificial intelligence and digital libraries have the potential to support serendipity through unexpected interactions with information and ideas that can lead to valuable information discoveries. This paper aims to explore the current state of research into serendipity, particularly as it relates to information encountering.
Design/methodology/approach
This study provides a bibliometric review of 166 studies on serendipity extracted from the Web of Science. Two bibliometric analysis tools, HistCite and RStudio (Biblioshiny), are applied to 30 years of data. Citation counts and bibliographic records of the papers are assessed using HistCite, while the prominent sources, countries and keywords, along with the collaborative networks of authors and institutions, are visualized using RStudio (Biblioshiny). A total of 166 papers on serendipity published between 1989 and 2022 were found, and the most influential authors, articles, journals, institutions and countries among these were determined.
Findings
The highest annual output, 11 papers, was recorded in 2019. Makri and Erdelez are the most influential authors contributing studies on serendipity. “Journal of Documentation” is the top-ranking journal, and University College London is the affiliation contributing the highest number of studies on serendipity. The UK and the USA are the nations contributing the most research. The authorship pattern for research on serendipity reveals that the majority of studies involve a single author. The Green open access model is the most preferred route for archiving research articles among authors working on serendipity. In addition, the majority of the research outputs have received between 0 and 50 citations.
Originality/value
To the best of the authors’ knowledge, this paper may be the first bibliometric analysis of serendipity research using bibliometric tools in library and information science. The paper should open new avenues for other serendipity researchers.
Ville Jylhä, Noora Hirvonen and Jutta Haider
Abstract
Purpose
This study addresses how algorithmic recommendations and their affordances shape everyday information practices among young people.
Design/methodology/approach
Thematic interviews were conducted with 20 Finnish young people aged 15–16 years. The material was analysed using qualitative content analysis, with a focus on everyday information practices involving online platforms.
Findings
The key finding of the study is that the current affordances of algorithmic recommendations enable users to engage in more passive practices instead of active search and evaluation practices. Two major themes emerged from the analysis: “enabling not searching, inviting high trust”, which highlights how the affordances of algorithmic recommendations enable the delegation of search to a recommender system and, at the same time, invite trust in the system; and “constraining finding, discouraging diversity”, which focuses on the constraining aspects of these affordances and the breakdowns associated with algorithmic recommendations.
Originality/value
This study contributes new knowledge regarding the ways in which algorithmic recommendations shape information practices in young people's everyday lives, specifically addressing the constraining nature of affordances.
Akinade Adebowale Adewojo, Adetola Adebisi Akanbiemu and Uloma Doris Onuoha
Abstract
Purpose
This study explores the implementation of personalised information access, driven by machine learning, in Nigerian public libraries. The purpose of this paper is to address existing challenges, enhance the user experience and bridge the digital divide by leveraging advanced technologies.
Design/methodology/approach
This study assesses the current state of Nigerian public libraries, emphasising challenges such as underfunding and lack of technology adoption. It proposes the integration of machine learning to provide personalised recommendations, predictive analytics for collection development and improved information retrieval processes.
Findings
The findings underscore the transformative potential of machine learning in Nigerian public libraries, offering tailored services, optimising resource allocation and fostering inclusivity. Challenges, including financial constraints and ethical considerations, are acknowledged.
Originality/value
This study contributes to the literature by outlining strategies for responsible implementation and emphasising transparency, user consent and diversity. The research highlights future directions, anticipating advancements in recommendation systems and collaborative efforts for impactful solutions.
Jayesh Prakash Gupta, Hongxiu Li, Hannu Kärkkäinen and Raghava Rao Mukkamala
Abstract
Purpose
In this study, the authors sought to investigate how the implicit social ties of both project owners and potential backers are associated with crowdfunding project success.
Design/methodology/approach
Drawing on social ties theory and factors that affect crowdfunding success, in this research, the authors developed a model to study how project owners' and potential backers' implicit social ties are associated with crowdfunding projects' degrees of success. The proposed model was empirically tested with crowdfunding data collected from Kickstarter and social media data collected from Twitter. The authors performed the test using an ordinary least squares (OLS) regression model with fixed effects.
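The centrality measures mentioned above are standard social network analysis quantities. As a minimal illustration (not the authors' code, and the account names are invented), normalised degree centrality over a small follower graph can be computed in plain Python:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalised degree centrality, degree(v) / (n - 1), for an
    undirected graph given as (u, v) edge pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Toy follower graph among four hypothetical accounts
edges = [("owner", "a"), ("owner", "b"), ("owner", "c"), ("a", "b")]
dc = degree_centrality(edges)
```

Here "owner" is connected to all three other nodes, so its centrality is 1.0; betweenness centrality, also used in the study, would additionally count shortest paths passing through each node.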
Findings
The authors found that project owners' implicit social ties (specifically, their social media activities, degree centrality and betweenness centrality) are significantly and positively associated with crowdfunding projects' degrees of success. Meanwhile, potential project backers' implicit social ties (their social media activities and degree centrality) are negatively associated with crowdfunding projects' degrees of success. The authors also found that project size moderates the effects of project owners' social media activities on projects' degrees of success.
Originality/value
This work contributes to the crowdfunding literature by investigating how the implicit social ties of both potential backers and project owners on social media are associated with crowdfunding project success. It extends previous research on the role of social ties in explaining crowdfunding project success by including implicit social ties, whereas the prior literature has explored only explicit social ties.
Donghee Shin, Kulsawasd Jitkajornwanich, Joon Soo Lim and Anastasia Spyridou
Abstract
Purpose
This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.
Design/methodology/approach
We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence the heuristic evaluation and systematic discernment of misinformation by users. To test moderating and mediating effects, PROCESS Macro Model 4 was used.
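PROCESS Macro Model 4 estimates simple mediation: path a (predictor to mediator), path b (mediator to outcome, controlling for the predictor) and the indirect effect a × b. A hand-rolled sketch on noiseless synthetic data, purely illustrative and unrelated to the study's actual data or software:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance."""
    mx, my = mean(xs), mean(ys)
    return sum((p - mx) * (q - my) for p, q in zip(xs, ys)) / len(xs)

def simple_mediation(x, m, y):
    """Simple mediation (the layout of PROCESS Model 4): path a is the
    slope of M on X; path b is the partial slope of M in Y ~ X + M,
    obtained from the closed-form two-predictor OLS solution."""
    a = cov(x, m) / cov(x, x)
    vx, vm, cxm = cov(x, x), cov(m, m), cov(x, m)
    cxy, cmy = cov(x, y), cov(m, y)
    b = (vx * cmy - cxm * cxy) / (vx * vm - cxm ** 2)
    return a, b, a * b

# Synthetic data: M = 2X + e (e uncorrelated with X) and Y = 3M,
# so the model should recover a = 2, b = 3, indirect effect = 6.
x = [1, 2, 3, 4]
m = [3, 3, 5, 9]
y = [9, 9, 15, 27]
a, b, indirect = simple_mediation(x, m, y)
```

Real mediation analysis would add inference (bootstrap confidence intervals for a × b), which PROCESS provides out of the box.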
Findings
The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation: misinformation is more likely to be correctly diagnosed and checked if it aligns with users’ heuristics or is validated by the diagnosticity they perceive.
Research limitations/implications
When exposed to misinformation through algorithmic recommendations, users’ perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity, in turn, positively influences their assessment of the accuracy and credibility of the misinformation.
Practical implications
Perceived diagnosticity plays a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.
Social implications
Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes when it comes to identifying health misinformation and the diffusion mechanism from which such misinformation is framed and subsequently spread.
Originality/value
The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167
Ismael Gómez-Talal, Lydia González-Serrano, José Luis Rojo-Álvarez and Pilar Talón-Ballestero
Abstract
Purpose
This study aims to address the global food waste problem in restaurants by analyzing customer sales information provided by restaurant tickets to gain valuable insights into directing sales of perishable products and optimizing product purchases according to customer demand.
Design/methodology/approach
A system based on unsupervised machine learning (ML) data models was created to provide a simple and interpretable management tool. This system performs analysis based on two elements: first, it consolidates and visualizes mutual and nontrivial relationships between information features extracted from tickets using multicomponent analysis, bootstrap resampling and ML domain description. Second, it presents statistically relevant relationships in color-coded tables that provide food waste-related recommendations to restaurant managers.
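Bootstrap resampling, one ingredient of the system described above, can be sketched in a few lines of standard-library Python; the daily sales figures below are invented for illustration:

```python
import random

def bootstrap_ci(values, stat=lambda v: sum(v) / len(v),
                 n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic
    (default: the mean) of a small sample."""
    rng = random.Random(seed)
    boots = sorted(stat(rng.choices(values, k=len(values)))
                   for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical daily units sold of one perishable product
daily_sales = [12, 15, 9, 22, 17, 14, 19, 11, 16, 13]
lo, hi = bootstrap_ci(daily_sales)
```

Resampling like this lets a manager judge whether an apparent sales pattern is statistically stable before acting on it, which is the role bootstrap plays in the authors' relationship tables.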
Findings
The study identified relationships between products and customer sales in specific months. Relationships among other ticket elements were also identified, such as products with days, hours or functional areas, and products with other products (cross-selling). Big data (BD) technology helped analyze restaurant tickets and obtain information on product sales behavior.
Research limitations/implications
This study addresses food waste in restaurants using BD and unsupervised ML models. Despite limitations in ticket information and lack of product detail, it opens up research opportunities in relationship analysis, cross-selling, productivity and deep learning applications.
Originality/value
The value and originality of this work lie in the application of BD and unsupervised ML technologies to analyze restaurant tickets and obtain information on product sales behavior. Better sales projection can adjust product purchases to customer demand, reducing food waste and optimizing profits.
Manpreet Kaur, Amit Kumar and Anil Kumar Mittal
Abstract
Purpose
In past decades, artificial neural network (ANN) models have revolutionised various stock market operations due to their superior ability to deal with nonlinear data and garnered considerable attention from researchers worldwide. The present study aims to synthesize the research field concerning ANN applications in the stock market to a) systematically map the research trends, key contributors, scientific collaborations, and knowledge structure, and b) uncover the challenges and future research areas in the field.
Design/methodology/approach
To provide a comprehensive appraisal of the extant literature, the study adopted the mixed approach of quantitative (bibliometric analysis) and qualitative (intensive review of influential articles) assessment to analyse 1,483 articles published in the Scopus and Web of Science indexed journals during 1992–2022. The bibliographic data was processed and analysed using VOSviewer and R software.
Findings
The results revealed the proliferation of articles since 2018, with China as the dominant country, Wang J as the most prolific author, “Expert Systems with Applications” as the leading journal, “computer science” as the dominant subject area, and “stock price forecasting” as the predominantly explored research theme in the field. Furthermore, “portfolio optimization”, “sentiment analysis”, “algorithmic trading”, and “crisis prediction” are found as recently emerged research areas.
Originality/value
To the best of the authors’ knowledge, the current study is a novel attempt to holistically assess the existing literature on ANN applications across the entire domain of the stock market. The main contribution of the current study lies in discussing the challenges along with viable methodological solutions and in providing application-area-wise knowledge gaps for future studies.
Abstract
Purpose
The Internet has changed consumer decision-making and influenced business behaviour. User-generated product information is abundant and readily available. This paper argues that user-generated content can be efficiently utilised for business intelligence using data science and develops an approach to demonstrate the methods and benefits of the different techniques.
Design/methodology/approach
Using Python Selenium, Beautiful Soup and various text mining approaches in R to access, retrieve and analyse user-generated content, we argue that (1) companies can extract information about the product attributes that matter most to consumers and (2) user-generated reviews enable the use of text mining results in combination with other demographic and statistical information (e.g. ratings) as an efficient input for competitive analysis.
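While the paper's pipeline relies on Selenium, Beautiful Soup and R, the core idea of mining attribute mentions from already-retrieved review text can be sketched with nothing but the Python standard library (the reviews and attribute list below are invented):

```python
import re
from collections import Counter

def attribute_frequencies(reviews, attributes):
    """Count how often each candidate product attribute is mentioned
    across a corpus of user reviews (simple term-frequency mining)."""
    counts = Counter()
    for review in reviews:
        tokens = re.findall(r"[a-z]+", review.lower())
        counts.update(tok for tok in tokens if tok in attributes)
    return counts

reviews = [
    "Battery life is great but the screen scratches easily.",
    "Love the screen; battery could be better.",
    "Fast shipping, battery lasts all day.",
]
freq = attribute_frequencies(reviews, {"battery", "screen", "shipping"})
```

Combined with numerical fields such as star ratings, frequency tables like this one are the kind of input the authors propose feeding into competitive analysis.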
Findings
The paper shows that combining different types of data (textual and numerical data) and applying and combining different methods can provide organisations with important business information and improve business performance.
Originality/value
The study makes several contributions to the marketing and management literature, mainly by illustrating the methodological advantages of text mining and accompanying statistical analysis, the different types of distilled information and their use in decision-making.
Ke Zhang and Ailing Huang
Abstract
Purpose
The purpose of this paper is to provide a guiding framework for studying the travel patterns of public transit (PT) users. Combining PT users’ travel data with user profiling (UP) technology to draw a portrait of PT users makes it possible to understand users’ travel patterns effectively, which in turn helps optimize the scheduling of PT operations and the planning of the network.
Design/methodology/approach
To achieve this purpose, the paper presents a three-level classification method to construct the labeling framework. A station area attribute mining method based on the term frequency-inverse document frequency (TF-IDF) weighting algorithm is proposed to determine the point-of-interest attributes of users’ travel stations, and the spatial correlation patterns of users’ travel stations are calculated using Moran’s Index. User travel feature labels are extracted from one consecutive week of Beijing PT travel data.
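The TF-IDF weighting step can be sketched in plain Python; the station tag lists below are hypothetical, and the real method additionally applies Moran's Index for spatial correlation:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights per document, where each document is the list of
    point-of-interest tags observed around one transit station."""
    n = len(docs)
    df = Counter(tag for doc in docs for tag in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({tag: (cnt / total) * math.log(n / df[tag])
                        for tag, cnt in tf.items()})
    return weights

stations = [
    ["office", "office", "cafe"],          # hypothetical business district stop
    ["mall", "cafe", "cinema"],            # hypothetical leisure stop
    ["office", "residential", "residential"],
]
weights = tf_idf(stations)
```

Tags that are frequent at one station but rare across the network (e.g. "cinema" above) receive high weights, which is what lets the method assign each station a dominant area attribute.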
Findings
This paper develops a universal PT user labeling system together with related methods, including the mining of four categories of user-preferred travel area patterns and a station area attribute mining method. In the application to the Beijing case, a precise exploration of the spatiotemporal characteristics of PT users is conducted, resulting in the final Beijing PTUP system.
Originality/value
This paper combines UP technology with big data analysis techniques to study the travel patterns of PT users. A user profile label framework is constructed, and data visualization, statistical analysis and K-means clustering are applied to extract the specific labels defined by this framework. Through these analytical processes, the user labeling system is improved, and its applicability is validated through the analysis of a Beijing PT case.
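The K-means step used to extract rider labels can be illustrated with a plain-Python Lloyd's algorithm; the feature values and initial centroids below are invented for the sketch:

```python
def kmeans(points, centroids, iters=20):
    """Plain k-means (Lloyd's algorithm) on 2-D points, deterministic
    because the initial centroids are fixed by the caller."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                    + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical rider features: (average departure hour, trips per week)
points = [(7.0, 10), (7.5, 9), (8.0, 11), (18.0, 3), (17.5, 2), (19.0, 4)]
cents, clus = kmeans(points, centroids=[(7.0, 10), (18.0, 3)])
```

On this toy data the algorithm separates morning commuters from occasional evening riders, the kind of behavioural label the profiling framework attaches to users.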
Mitali Desai, Rupa G. Mehta and Dipti P. Rana
Abstract
Purpose
Scholarly communications, particularly questions and answers (Q&A) on digital scholarly platforms, provide a new avenue for gaining knowledge. However, several studies have raised concerns about content anomalies in these Q&A and suggested proper validation before utilizing them in scholarly applications such as influence analysis and content-based recommendation systems. These content anomalies are referred to as disinformation in this research. The purpose of this research is, first, to assess scholarly communications in order to identify disinformation and, second, to help scholarly platforms determine the scholars who probably disseminate such disinformation. These scholars are referred to as the probable sources of disinformation.
Design/methodology/approach
To identify disinformation, the proposed model deduces (1) content redundancy and contextual redundancy in questions, (2) contextual nonrelevance in answers with respect to the questions and (3) the quality of answers with respect to the expertise of the answering scholars. The model then determines the probable sources of disinformation using statistical analysis.
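The paper's contextual checks rely on an advanced word embedding technique; as a simplified stand-in, a bag-of-words cosine similarity between a question and candidate answers illustrates the idea of flagging contextually nonrelevant answers (the texts below are invented):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity; an embedding-based model would
    replace the word-count vectors with dense semantic vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

q = "how to normalize citation counts across disciplines"
a_relevant = "you can normalize citation counts by field average"
a_offtopic = "the weather in paris is lovely in spring"
sim_good = cosine_similarity(q, a_relevant)
sim_bad = cosine_similarity(q, a_offtopic)
```

An answer whose similarity to its question falls below a threshold would be a candidate for the "contextual nonrelevance" flag in step (2) above; embeddings make this robust to paraphrase, which raw word counts are not.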
Findings
The model is evaluated on ResearchGate (RG) data. Results suggest that the model efficiently identifies disinformation from scholarly communications and accurately detects the probable sources of disinformation.
Practical implications
Different platforms with communication portals can use this model as a regulatory mechanism to restrict the propagation of disinformation. Scholarly platforms can use it to build an accurate influence assessment mechanism and to generate relevant recommendations for their scholars.
Originality/value
The existing studies majorly deal with validating the answers using statistical measures. The proposed model focuses on questions as well as answers and performs a contextual analysis using an advanced word embedding technique.