Search results

1 – 10 of 154
Article
Publication date: 1 June 2015

Tadahiko Kumamoto, Hitomi Wada and Tomoya Suzuki


Abstract

Purpose

The purpose of this paper is to propose a Web application system for visualizing Twitter users based on temporal changes in the impressions received from the tweets posted by the users on Twitter.

Design/methodology/approach

The system collects a specified user’s tweets posted during a specified period using Twitter API, rates each tweet based on three distinct impressions using an impression mining system, and then generates pie and line charts to visualize results of the previous processing using Google Chart API.
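The rate-then-aggregate step this abstract describes can be sketched as follows. Everything here is an illustrative stand-in, not the authors' code: the lexicons, scores, and tweets are toy values, whereas the real system builds its impression lexicons from a newspaper database and fetches tweets via the Twitter API.

```python
# Minimal sketch: score each tweet on three bipolar impression scales,
# then average per day -- the data behind the pie and line charts.
from collections import defaultdict

# Hypothetical lexicons: word -> score in [-1, 1], positive pole first
# ("Happy/Sad", "Glad/Angry", "Peaceful/Strained").
LEXICONS = {
    "happy_sad":         {"great": 0.8, "sad": -0.9, "fun": 0.6},
    "glad_angry":        {"thanks": 0.7, "furious": -0.8},
    "peaceful_strained": {"calm": 0.9, "deadline": -0.5},
}

def rate_tweet(text):
    """Score one tweet on each scale by averaging matched lexicon terms."""
    words = text.lower().split()
    scores = {}
    for scale, lex in LEXICONS.items():
        hits = [lex[w] for w in words if w in lex]
        scores[scale] = sum(hits) / len(hits) if hits else 0.0
    return scores

def aggregate_by_day(tweets):
    """Average per-scale scores per day, ready for a line chart."""
    buckets = defaultdict(list)
    for day, text in tweets:
        buckets[day].append(rate_tweet(text))
    return {
        day: {s: sum(r[s] for r in rows) / len(rows) for s in LEXICONS}
        for day, rows in buckets.items()
    }

tweets = [("2015-05-01", "great fun today"),
          ("2015-05-02", "so sad about the deadline")]
daily = aggregate_by_day(tweets)
```

The temporal-change visualization then reduces to plotting each scale's daily average over the specified period.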

Findings

Because there are more news articles featuring somber topics than those featuring cheerful topics, the impression mining system, which uses impression lexicons created from a newspaper database, is considered to be more effective for analyzing negative tweets.

Research limitations/implications

The system uses the Twitter API to collect tweets from Twitter. This means that the system cannot collect the tweets of users who maintain private timelines. According to our questionnaire, about 30 per cent of Twitter users’ timelines are private. This is one of the limitations of using the system.

Originality/value

The system enables people to grasp the personality of Twitter users by visualizing the impressions received from tweets the users normally post on Twitter. The target impressions are limited to those represented by three bipolar scales of impressions: “Happy/Sad”, “Glad/Angry” and “Peaceful/Strained”. The system also enables people to grasp the context in which keywords are used by visualizing the impressions from tweets in which the keywords were found.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 2
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 5 August 2014

Fei Xu


Abstract

Purpose

The purpose of this paper is to explore methods of producing Quick Response (QR) Code, its customization, artistic look and applications and elaborate the technique of generating QR Code for library bibliographic records.

Design/methodology/approach

Through literature review, the study explored methods of generating QR Code and its applications in academic libraries. Based on research work and implementation experience, an efficient procedure for generating QR Code for bibliographic records was developed.

Findings

The study identified methods of generating QR Code, its customization and applications, and established the technique of generating QR Code for library bibliographic records.
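The record-to-QR-Code step can be sketched roughly as follows. The record fields, catalog URL, and payload layout are illustrative assumptions, not the paper's procedure; the widely used `qrcode` Python package stands in for whatever encoder a library adopts.

```python
# Hedged sketch: flatten a bibliographic record into the text payload a
# QR Code will carry, then hand it to an off-the-shelf encoder.
def record_payload(record):
    """Join the record fields a patron would want after scanning."""
    parts = [record["title"], record["author"], record["call_number"]]
    if "url" in record:
        parts.append(record["url"])  # a catalog permalink scans best
    return "\n".join(parts)

record = {  # hypothetical bibliographic record
    "title": "Introduction to Information Science",
    "author": "Bawden, D.",
    "call_number": "Z665 .B39 2012",
    "url": "https://catalog.example.edu/record/123",
}
payload = record_payload(record)

try:
    import qrcode  # pip install qrcode[pil]
    img = qrcode.make(payload)  # a PIL image, ready to save or embed
except ImportError:
    img = None  # encoder not installed; the payload is the portable part
```

Batch-generating codes for an entire catalog is then a loop over records, which matches the efficiency concern the paper raises.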

Originality/value

The study is expected to facilitate the growth of QR Code’s visibility and success, and its mainstream adoption. The technique of generating QR Code for library bibliographic records in the study should be instructive for similar projects.

Details

VINE, vol. 44 no. 3
Type: Research Article
ISSN: 0305-5728


Article
Publication date: 8 June 2012

Joyce Chapman and David Woodbury


Abstract

Purpose

The purpose of this paper is to encourage administrators of device‐lending programs to leverage existing quantitative data for management purposes by integrating analysis of quantitative data into the day‐to‐day workflow.

Design/methodology/approach

This is a case study of NCSU Libraries' efforts to analyze and visualize transactional data to aid in the on‐going management of a device‐lending program.

Findings

Analysis and visualization of quantitative data related to technology lending revealed patterns in lending over the course of the semester, day, and week that had previously gone unrecognized. With more concrete data about trends in wait times, capacity lending, and circulation volume, staff are now able to make more informed purchasing decisions, modify systems and workflows to better meet user needs, and begin to explore new ideas for services and staffing models.
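The kind of transactional analysis described can be sketched in a few lines. The loan records below are invented stand-ins for circulation logs; the point is that bucketing checkout timestamps by hour (or weekday) surfaces lending peaks that ad hoc observation misses.

```python
# Minimal sketch: count device checkouts per hour to find peak demand.
from collections import Counter
from datetime import datetime

loans = [  # (device, checkout time) -- toy circulation-log rows
    ("ipad-01", "2012-03-05 11:15"),
    ("ipad-02", "2012-03-05 13:40"),
    ("cam-01",  "2012-03-05 13:55"),
    ("ipad-01", "2012-03-06 13:05"),
]

by_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for _, ts in loans
)
peak_hour, peak_count = by_hour.most_common(1)[0]
```

The same grouping, run over weekdays or semester weeks, yields the lending-pattern charts that can inform purchasing and staffing decisions.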

Practical implications

The concepts and processes described here can be replicated by other libraries that wish to leverage transactional data analysis and data visualization to aid in management of a device‐lending program.

Originality/value

Although much literature exists on the implementation and qualitative evaluation of device‐lending programs, this paper is the first to provide librarians with ideas for leveraging analysis of transactional data to improve management of a device‐lending program.

Article
Publication date: 8 June 2010

Paula MacKinnon and Cathy Sanford


Abstract

Purpose

The purpose of this paper is to describe why and how Contra Costa County Library is using two‐dimensional barcodes called quick response (QR) codes and a mobile patron support system to deliver library service to mobile phone users through a service called “Snap & Go”.

Design/methodology/approach

The paper uses a case study to review the process of defining and delivering mobile library service through the use of QR codes.

Findings

QR codes provide a quick and easy way for library patrons with mobile phones to access relevant information and service both inside the library and out in the community.

Originality/value

The paper discusses one library's initiative to pilot the use of QR codes to deliver mobile library service.

Details

Library Hi Tech News, vol. 27 no. 4/5
Type: Research Article
ISSN: 0741-9058


Article
Publication date: 22 January 2018

Richard Manly Adams Jr


Abstract

Purpose

The purpose of this paper is to argue that academic librarians must learn to use web service APIs and to introduce APIs to a non-technical audience.

Design/methodology/approach

This paper is a viewpoint that argues for the importance of APIs by identifying the shifting paradigms of libraries in the digital age. Arguing that the primary function of librarians will be to share and curate digital content, the paper shows that APIs empower librarians to do so.

Findings

The implementation of web service APIs is within the reach of librarians who are not trained as software developers. Online documentation and free courses offer sufficient training for librarians to learn these new ways of sharing and curating digital content.
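For a non-programmer, a typical web service API interaction is shorter than it sounds: request JSON from an endpoint, then pick out the fields you need. The sketch below is illustrative; the URL is hypothetical, and a canned response stands in for a live call, but the parsing step is the same for any JSON-returning service.

```python
# Hedged sketch of the request-and-parse pattern behind most API work.
import json
from urllib.request import urlopen

def fetch_titles(url):
    """GET a JSON payload and return the title of each item in it."""
    with urlopen(url) as resp:           # e.g. a catalog's search endpoint
        data = json.load(resp)
    return [item["title"] for item in data["items"]]

# Offline, the same parsing works on a canned response:
canned = '{"items": [{"title": "Open Access and the Humanities"}]}'
titles = [item["title"] for item in json.loads(canned)["items"]]
```

Most of the effort in real use goes into reading the service's documentation for its URL parameters and response shape, which is exactly the skill the paper argues is learnable from online documentation.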

Research limitations/implications

The argument of this paper depends upon an assumption of a shift in the paradigm of libraries away from collections of materials to access points of information. The need for librarians to learn APIs depends upon a new role for librarians that anecdotal evidence suggests is emerging.

Practical implications

By learning a few technical skills, librarians can help patrons find relevant information within a world of proliferating information sources.

Originality/value

The literature on APIs is highly technical and overwhelming for those without training in software development. This paper translates technical language for those who have not programmed before.

Details

Library Hi Tech, vol. 36 no. 1
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 9 February 2018

Arshad Ahmad, Chong Feng, Shi Ge and Abdallah Yousif


Abstract

Purpose

Software developers extensively use Stack Overflow (SO) for knowledge sharing on software development. Thus, software engineering researchers have started mining the structured/unstructured data present in certain software repositories, including the Q&A software developer community SO, with the aim of improving software development. The purpose of this paper is to show how academics and practitioners can benefit from the valuable user-generated content shared on various online social networks, specifically from the Q&A community SO, for software development.

Design/methodology/approach

A comprehensive literature review was conducted, and 166 research papers on SO relating to software development, published from the inception of SO until June 2016, were categorized.

Findings

Most of the studies revolve around a limited number of software development tasks; approximately 70 percent of the papers used data sets of millions of posts, applied basic machine learning methods, and conducted semi-automatic, quantitative investigations. Future research should therefore focus on overcoming the identified challenges and gaps.

Practical implications

The work on SO is classified into two main categories: “SO design and usage” and “SO content applications.” These categories not only give Q&A forum providers insights into the shortcomings in the design and usage of such forums but also provide ways to overcome them in the future. They also enable software developers to exploit such forums for the identified under-utilized tasks of software development.

Originality/value

The study is the first of its kind to explore the work on SO about software development and makes an original contribution by presenting a comprehensive review, design/usage shortcomings of Q&A sites, and future research challenges.

Details

Data Technologies and Applications, vol. 52 no. 2
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 13 September 2019

Collins Udanor and Chinatu C. Anyanwu


Abstract

Purpose

Hate speech in recent times has become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms like Twitter and, where it is present, to determine to what degree. It also intends to find out what monitoring mechanisms social media platforms like Facebook and Twitter have put in place to combat hate speech. “Lexalytics” is a term the authors coined from the words “lexical analytics” for the purpose of opinion mining unstructured texts like tweets.

Design/methodology/approach

This research developed a Python software tool called the polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual’s behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applies the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filters the tweets against the custom dictionary, using unsupervised classification to label the texts as either positive or negative sentiments. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out using R-Studio code; both results were compared, and a t-test was applied to determine whether there was a significant difference between them. The research methodology can be classified as both qualitative and quantitative: qualitative in terms of data classification, and quantitative in terms of being able to identify the results as either negative or positive from the computation of text to vector.
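The lexicon-filtering step can be sketched as follows. The dictionary terms and tweets below are neutral placeholders, not the paper's actual hate-term dictionary; the sketch only shows the mechanics of matching unigrams and bigrams against a custom list and labelling each tweet accordingly.

```python
# Simplified sketch of dictionary-based tweet classification.
HATE_NGRAMS = {"badword", "slur term"}  # placeholder unigrams and bigrams

def classify(tweet):
    """Label a tweet 'negative' if any dictionary n-gram appears in it."""
    words = tweet.lower().split()
    ngrams = set(words) | {" ".join(p) for p in zip(words, words[1:])}
    return "negative" if ngrams & HATE_NGRAMS else "positive"

tweets = [
    "a badword example",
    "a perfectly civil remark",
    "some slur term here",
]
labels = [classify(t) for t in tweets]
hate_pct = 100 * labels.count("negative") / len(labels)
```

Aggregating the labels per Twitter handle yields the percentage-of-hate-content figures the findings report.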

Findings

The findings from two sets of experiments on POSA and R are as follows. In the first experiment, the POSA software found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. A t-test on both positive and negative scores for POSA and R-Studio reveals p-values of 0.389 and 0.289, respectively, at an α value of 0.05, implying that there is no significant difference between the results from POSA and R. From the second experiment, performed on 11 local handles with 1,207 tweets, the authors deduce the following: the percentage of hate content classified by POSA is 40 percent, while the percentage classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent, and 86 percent for free speech; and the accuracy of hate speech classification predicted by R is 65 percent, and 74 percent for free speech. This study reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and no benchmark is set to decide the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent.
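The POSA-versus-R comparison rests on a two-sample t-test. The score lists below are illustrative, not the study's data, and `scipy.stats.ttest_ind` is what one would normally use for the p-value; the sketch computes only the Welch t-statistic to show what is actually being compared.

```python
# Hedged sketch: two-sample (Welch) t-statistic over sentiment scores
# from two implementations of the same classifier.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """t statistic for two samples without assuming equal variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

posa_scores = [0.41, 0.38, 0.55, 0.47, 0.33]  # toy per-handle scores
r_scores    = [0.51, 0.44, 0.62, 0.58, 0.38]

t = welch_t(posa_scores, r_scores)
# |t| below the critical value at alpha = 0.05 -> no significant
# difference, which is the shape of the paper's conclusion.
```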

Research limitations/implications

This study establishes the fact that hate speech is on the increase on social media. It also shows that hate mongers can actually be pinned down by the contents of their messages. The POSA system can be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only. N-grams are effective features for word-sense disambiguation, but when using N-grams, the feature vector can take on enormous proportions and in turn increase the sparsity of the feature vectors.

Practical implications

The findings of this study show that if urgent measures are not taken to combat hate speech, there could be dire consequences, especially in highly polarized societies that are frequently heated up along religious and ethnic lines. On a daily basis, tempers flare on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech on a micro-blog like Twitter. This can also be extended to other social media platforms.

Social implications

This study will help to promote a more positive society, ensuring that social media is utilized positively to the benefit of mankind.

Originality/value

The findings can be used by social media companies to monitor user behaviors, and pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.

Details

Data Technologies and Applications, vol. 53 no. 4
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 8 June 2021

Boby John


Abstract

Purpose

The purpose of this paper is to develop a control chart pattern recognition methodology for monitoring the weekly customer complaints of outsourced information technology-enabled service (ITeS) processes.

Design/methodology/approach

A two-step methodology is used to classify the processes as having natural or unnatural variation based on the past 20 weeks' customer complaints. Step one is to simulate data for various control chart patterns, namely natural variation, upward shift, upward trend, etc. Then a deep learning neural network model consisting of two dense layers is developed to classify the patterns as showing natural or unnatural variation.
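Step one (pattern simulation) can be sketched as below. The window length matches the 20-week horizon in the abstract, but the noise level, shift size, trend slope, and sample counts are assumed values; the resulting labelled series are the kind of training data a small two-dense-layer network would then classify.

```python
# Hedged sketch: simulate labelled control chart patterns over 20 weeks.
import random

random.seed(0)
WEEKS = 20

def simulate(pattern):
    """One 20-point series: white noise plus an optional shift or trend."""
    noise = [random.gauss(0, 1) for _ in range(WEEKS)]
    if pattern == "upward_shift":
        # step change of +2 sigma halfway through the window
        return [x + (2.0 if t >= WEEKS // 2 else 0.0)
                for t, x in enumerate(noise)]
    if pattern == "upward_trend":
        # steady drift of +0.2 sigma per week
        return [x + 0.2 * t for t, x in enumerate(noise)]
    return noise  # natural variation

dataset = [(simulate(p), p)
           for p in ("natural", "upward_shift", "upward_trend")
           for _ in range(100)]
```

Each series would be fed to the classifier as a 20-value input vector, with "natural" versus the rest as the output classes.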

Findings

Validation of the methodology on telecom vertical processes correctly detected unnatural variation in two terminated processes. Implementation of the methodology on banking and financial vertical processes detected unnatural variation in one of the processes. This helped the company management to take remedial actions, renegotiate the deal and get it renewed for another period.

Practical implications

This study provides valuable information on controlling information technology-enabled processes using pattern recognition methodology. The methodology gives a lot of flexibility to managers to monitor multiple processes collectively and avoids the manual plotting and interpretation of control charts.

Originality/value

Applications of control chart pattern recognition methodology for monitoring service industry processes are rare. This study applies the methodology to controlling information technology-enabled processes and also demonstrates the usefulness of deep learning techniques for process control.

Details

International Journal of Productivity and Performance Management, vol. 71 no. 8
Type: Research Article
ISSN: 1741-0401


Article
Publication date: 27 April 2020

Marcus Renatus Johannes Wolkenfelt and Frederik Bungaran Ishak Situmeang


Abstract

Purpose

The purpose of this paper is to contribute to the marketing literature and practice by examining the effect of product pricing on consumer behaviours with regard to the assertiveness and the sentiments expressed in their product reviews. In addition, the paper uses new data collection and machine learning tools that can also be extended for other research of online consumer reviewing behaviours.

Design/methodology/approach

Using web crawling techniques, a large data set was extracted from the Google Play Store. Following this, the authors created machine learning algorithms to identify topics from product reviews and to quantify assertiveness and sentiments from the review texts.

Findings

The results indicate that product pricing models affect consumer review sentiment, assertiveness and topics. Removing upfront payment obligations positively impacts overall and pricing-specific consumer sentiment and reduces assertiveness.

Research limitations/implications

The results reveal new effects of pricing models on the nature of consumer reviews of products and form a basis for future research. The study was conducted in the gaming category of the Google Play Store and the generalisability of the findings for other app segments or marketplaces should be further tested.

Originality/value

The findings can help companies that create digital products in choosing a pricing strategy for their apps. The paper is the first to investigate how pricing modes affect the nature of online reviews written by consumers.

Details

Journal of Research in Interactive Marketing, vol. 14 no. 1
Type: Research Article
ISSN: 2040-7122


Article
Publication date: 16 February 2021

Elena Villaespesa and Seth Crider


Abstract

Purpose

Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keywords tags assigned by the museum and those produced by three computer vision systems.

Design/methodology/approach

This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.

Findings

This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges about the accuracy and lack of context within the computer vision results.
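The comparison the study performs can be illustrated with a small sketch. The tags below are invented, and simple lowercased set overlap stands in for the study's reconciliation of terms through the Getty AAT; the split into shared versus vision-only tags is what reveals both the vocabulary expansion and the accuracy risks.

```python
# Hedged sketch: compare museum-assigned tags with computer vision tags.
def tag_overlap(museum_tags, cv_tags):
    """Return (shared, cv-only) tag lists, compared case-insensitively."""
    m = {t.lower() for t in museum_tags}
    c = {t.lower() for t in cv_tags}
    return sorted(m & c), sorted(c - m)

museum = ["Portraits", "Oil paint", "Men"]           # toy catalog terms
vision = ["portraits", "beard", "painting", "men"]   # toy CV output

shared, novel = tag_overlap(museum, vision)
# `shared` measures agreement; `novel` holds candidate expansion terms
# that still need human review for accuracy and context.
```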

Practical implications

This finding has important implications for how these machine-generated tags complement the current taxonomies and vocabularies entered in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to its collection. Furthermore, it also needs to think critically about the kinds of tags it wishes to use, such as colors, materials or objects.

Originality/value

The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.

Details

Journal of Documentation, vol. 77 no. 4
Type: Research Article
ISSN: 0022-0418

