Search results

11 – 15 of 15
Article
Publication date: 1 June 2015

Tadahiko Kumamoto, Hitomi Wada and Tomoya Suzuki

Abstract

Purpose

The purpose of this paper is to propose a Web application system for visualizing Twitter users based on temporal changes in the impressions received from the tweets posted by the users on Twitter.

Design/methodology/approach

The system collects a specified user’s tweets posted during a specified period using Twitter API, rates each tweet based on three distinct impressions using an impression mining system, and then generates pie and line charts to visualize results of the previous processing using Google Chart API.
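
A loose, hypothetical illustration of this pipeline is sketched below in Python: it scores a few invented tweets against a made-up "Happy/Sad" lexicon and aggregates daily averages of the kind the line charts would plot. The lexicon, the scoring rule and the tweets are stand-ins, not the paper's newspaper-derived impression mining system, and the Twitter API and Google Chart API calls are omitted.

```python
# Minimal sketch of the tweet-impression pipeline described above.
# The lexicon values, the score_tweet() rule and the input tweets are
# hypothetical stand-ins; the paper's impression mining system is built
# from a newspaper database and is not reproduced here.
from collections import defaultdict
from datetime import date

# Hypothetical lexicon: word -> score on the "Happy/Sad" scale,
# where 1.0 is maximally "Happy" and 0.0 is maximally "Sad".
HAPPY_SAD_LEXICON = {"great": 0.9, "fun": 0.8, "tired": 0.3, "awful": 0.1}

def score_tweet(text: str) -> float:
    """Average the lexicon scores of known words; 0.5 is neutral."""
    hits = [HAPPY_SAD_LEXICON[w] for w in text.lower().split() if w in HAPPY_SAD_LEXICON]
    return sum(hits) / len(hits) if hits else 0.5

# The tweets would normally be collected via the Twitter API for a
# specified user and period.
tweets = [
    (date(2015, 3, 1), "such a fun day great weather"),
    (date(2015, 3, 2), "feeling tired and the news is awful"),
]

# Aggregate per-day averages -- the series a line chart would visualize.
daily = defaultdict(list)
for day, text in tweets:
    daily[day].append(score_tweet(text))
for day in sorted(daily):
    print(day, round(sum(daily[day]) / len(daily[day]), 2))
```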

Findings

Because there are more news articles featuring somber topics than those featuring cheerful topics, the impression mining system, which uses impression lexicons created from a newspaper database, is considered to be more effective for analyzing negative tweets.

Research limitations/implications

The system uses the Twitter API to collect tweets from Twitter, which means that it cannot collect tweets from users who keep their timelines private. According to our questionnaire, about 30 per cent of Twitter users' timelines are private. This is one of the limitations of using the system.

Originality/value

The system enables people to grasp the personality of Twitter users by visualizing the impressions received from tweets the users normally post on Twitter. The target impressions are limited to those represented by three bipolar scales of impressions: “Happy/Sad”, “Glad/Angry” and “Peaceful/Strained”. The system also enables people to grasp the context in which keywords are used by visualizing the impressions from tweets in which the keywords were found.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 11 November 2014

Mai Miyabe, Akiyo Nadamoto and Eiji Aramaki

Abstract

Purpose

The aim of this paper is to elucidate rumor propagation on microblogs and to assess a system for collecting rumor information to prevent rumor spreading.

Design/methodology/approach

We present a case study of how rumors spread on Twitter during a disaster situation, the Great East Japan earthquake of March 11, 2011, compared with a normal situation. We specifically examine rumor disaffirmation because automatic rumor extraction is difficult: extracting rumor disaffirmation is easier than extracting the rumors themselves. We classify tweets posted in the disaster situation, analyze them based on users' impressions and compare the spread of rumor tweets in the disaster situation to that in a normal situation.
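
A small, purely illustrative Python sketch of the disaffirmation-first idea follows: rather than detecting rumors directly, it flags tweets that deny or correct a rumor. The phrase list and the example tweets are hypothetical stand-ins, not the authors' actual extraction procedure.

```python
# Toy sketch: collect rumor-disaffirmation tweets with a phrase heuristic.
# The denial phrases below are invented English stand-ins for the cues a
# real system would use; they are not taken from the paper.
import re

DISAFFIRMATION_PATTERNS = [
    r"\bfalse rumor\b",
    r"\bnot true\b",
    r"\bthis is a hoax\b",
    r"\bplease stop spreading\b",
]
DISAFFIRMATION_RE = re.compile("|".join(DISAFFIRMATION_PATTERNS), re.IGNORECASE)

def is_disaffirmation(tweet: str) -> bool:
    """Return True if the tweet looks like a rumor denial or correction."""
    return bool(DISAFFIRMATION_RE.search(tweet))

tweets = [
    "RT: the zoo animals escaped after the quake!!",
    "That escaped-animals story is a false rumor, please stop spreading it.",
]
print([t for t in tweets if is_disaffirmation(t)])
```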

Findings

The analysis revealed the following characteristics of rumors in a disaster situation. Information-transmission tweets account for 74.9 per cent of the tweets, the largest share in our data set. Rumor tweets strongly facilitate user behavior, make users feel negative and foment disorder. Rumors in a normal situation spread through many hierarchies, whereas rumors in the disaster situation spread through only two or three, meaning that the rumor-spreading style differs between disaster and normal situations.

Originality/value

The originality of this paper lies in targeting rumors on Twitter and analyzing their characteristics from multiple aspects, using not only rumor tweets but also disaffirmation tweets as the object of investigation.

Details

International Journal of Web Information Systems, vol. 10 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 June 2015

Robin Mueller, Sebastian Schrittwieser, Peter Fruehwirt, Peter Kieseberg and Edgar Weippl

Abstract

Purpose

This paper aims to give an overview of a number of selected applications in comparison to a previous evaluation conducted two years ago, as well as to analyze several new applications. Mobile messaging and VoIP applications for smartphones have seen a massive surge in popularity, which has also sparked interest in research on their security and privacy protection, leading to in-depth analyses of specific applications or vulnerabilities.

Design/methodology/approach

The evaluation methods mostly focus on known vulnerabilities in connection with authentication and validation mechanisms but also describe some newly identified attack vectors.

Findings

The results show a positive trend for new applications, which are mostly being developed with security and privacy features, whereas some of the older applications have shown little progress or have even introduced new vulnerabilities. In addition, this paper shows privacy implications of smartphone messaging that are not solved even by today’s most sophisticated “secure” smartphone messaging applications, and it discusses methods for protecting user privacy during the creation of the user network.

Research limitations/implications

Currently, there is no perfect solution available; thus, further research on this topic needs to be conducted.

Originality/value

In addition to a security evaluation of existing applications and of newer messengers that were built with security in mind, several methods for protecting user privacy are discussed and some new attack vectors are described.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 11 November 2014

Hao Han, Hidekazu Nakawatase and Keizo Oyama

Abstract

Purpose

The purpose of this article was to confirm whether users’ interests are reflected by tweeted Web pages, and to evaluate the credibility of interest reflection of tweeted Web pages.

Design/methodology/approach

Interest reflection on Twitter is investigated based on the context of sharing behavior. A context-oriented approach based on machine learning is proposed to evaluate the interest reflection of tweeted Web pages. Several distribution models of similarity are presented, and whether tweeted Web pages reflect the respective users' interests is inferred by analyzing user access profiles.
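
As a simplified illustration of this context-oriented idea, the Python sketch below compares the text of a tweeted Web page with a small, invented access profile using TF-IDF cosine similarity. The documents, the threshold and the single similarity feature are hypothetical and merely stand in for the paper's machine-learning approach over user access profiles.

```python
# Toy sketch: does a tweeted Web page resemble the pages a user usually
# accesses? High similarity is taken here as weak evidence that the tweet
# reflects the user's interests. All inputs and the 0.2 threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

access_profile_pages = [
    "camera lens review mirrorless photography tips",
    "landscape photography tutorial exposure settings",
]
tweeted_page = "new mirrorless camera announced with improved autofocus"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(access_profile_pages + [tweeted_page])
profile_vectors, page_vector = matrix[:-1], matrix[-1]

# Highest similarity between the tweeted page and any page in the profile.
score = cosine_similarity(page_vector, profile_vectors).max()
print(f"similarity to access profile: {score:.2f}")
print("reflects interests" if score > 0.2 else "unclear interest reflection")
```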

Findings

The analysis of browsing behaviors finds that many users partially hide their own concerns, hobbies and interests and emphasize concerns about social phenomena. Extensive experimental results show that the context-oriented approach is effective on real net-view data.

Originality/value

This is the first study of its kind to evaluate the credibility of interest reflection on Twitter, and extensive experiments have been conducted on data sets containing real net-view data. For higher accuracy and less subjectivity, various features are generated from users' Web views and Twitter submission backgrounds, combined with several different context factors.

Details

International Journal of Web Information Systems, vol. 10 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 11 May 2020

Bojan Bozic, Andre Rios and Sarah Jane Delany

Abstract

Purpose

This paper aims to investigate methods for predicting tags on a textual corpus that describes diverse data sets based on short messages; as an example, the authors demonstrate the use of the methods on hotel staff inputs from a ticketing system as well as on the publicly available StackOverflow corpus. The aim is to improve the tagging process and to find the most suitable method for suggesting tags for a new text entry.

Design/methodology/approach

The paper consists of two parts: exploration of existing sample data, which includes statistical analysis and visualisation of the data to provide an overview, and evaluation of tag prediction approaches. The authors have included different approaches from different research fields to cover a broad spectrum of possible solutions. As a result, the authors have tested a machine learning model for multi-label classification (using gradient boosting), a statistical approach (using frequency heuristics) and three similarity-based classification approaches (nearest centroid, k-nearest neighbours (k-NN) and naive Bayes). The experiment that compares the approaches uses recall to measure the quality of results. Finally, the authors provide a recommendation of the modelling approach that produces the best accuracy in terms of tag prediction on the sample data.
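
As a rough illustration of this kind of comparison, the hypothetical Python sketch below predicts tags for a handful of invented ticket-style texts with k-NN and naive Bayes and reports micro-averaged recall. The data, features and split are toy stand-ins for the dmbook pro and StackOverflow corpora, and the gradient-boosting, frequency-heuristic and nearest-centroid baselines are omitted.

```python
# Toy multi-label tag prediction: compare k-NN and naive Bayes by recall.
# Texts and tags are invented; real experiments would use the paper's corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import recall_score

texts = [
    "wifi not working in room 204", "tv remote batteries empty",
    "shower drain clogged in room 310", "no wifi signal on third floor",
    "tv screen stays black", "bathroom sink is leaking",
]
tags = [["network"], ["tv"], ["plumbing"], ["network"], ["tv"], ["plumbing"]]

X = TfidfVectorizer().fit_transform(texts)
Y = MultiLabelBinarizer().fit_transform(tags)
X_train, Y_train, X_test, Y_test = X[:4], Y[:4], X[4:], Y[4:]

models = {
    "k-NN (k=1)": KNeighborsClassifier(n_neighbors=1),    # handles multi-label output
    "naive Bayes": OneVsRestClassifier(MultinomialNB()),  # one binary NB per tag
}
for name, model in models.items():
    model.fit(X_train, Y_train)
    pred = model.predict(X_test)
    print(name, "recall:", recall_score(Y_test, pred, average="micro", zero_division=0))
```

Sweeping n_neighbors from 1 to 10 in the same loop would mirror the further k-NN experiments reported in the Findings below.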

Findings

The authors have calculated the performance of each method against the test data set by measuring recall. They show recall for each method with different features (except for frequency heuristics, which does not provide the option to add additional features) for the dmbook pro and StackOverflow data sets. k-NN clearly provides the best recall. As k-NN turned out to provide the best results, the authors have performed further experiments with values of k from 1 to 10. This helped them to observe the impact of the number of neighbours used on performance and to identify the best value for k.

Originality/value

The value and originality of the paper lie in extensive experiments with several methods from different domains. The authors have used probabilistic methods, such as naive Bayes, statistical methods, such as frequency heuristics, and similarity approaches, such as k-NN. Furthermore, the authors have produced results on an industrial-scale data set that was provided by a company and used directly in their project, as well as on a community-based data set with a large amount of data and high dimensionality. The study results can be used to select a model for a specific use case based on diverse corpora, taking into account the advantages and disadvantages of applying the model to one's own data.

Details

International Journal of Web Information Systems, vol. 16 no. 2
Type: Research Article
ISSN: 1744-0084
