Search results

1 – 10 of over 31000
Article
Publication date: 2 August 2022

Seema Rani and Mukesh Kumar

Abstract

Purpose

Community detection is a significant research field in the study and analysis of social networks because of its tremendous applicability in multiple domains such as recommendation systems, link prediction and information diffusion. The majority of present community detection methods consider either node information only or edge information only, but not both, which can result in the loss of important information regarding network structures. In real-world social networks such as Facebook and Twitter, there are many heterogeneous aspects of the entities that connect them together, such as the different types of interactions occurring, which are difficult to study with the help of homogeneous network structures. The purpose of this study is to explore a multilayer network design that captures these heterogeneous aspects by combining different modalities of interaction in a single network.

Design/methodology/approach

In this work, a multilayer network model is designed that takes into account node information as well as edge information. Existing community detection algorithms are applied to the designed multilayer network to find the densely connected nodes. Community scoring functions and partition comparison are used to further analyze the community structures. In addition, a framework based on the analytic hierarchy process and the technique for order preference by similarity to ideal solution (AHP-TOPSIS) is proposed for selecting an optimal community detection algorithm.
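The abstract does not detail the AHP-TOPSIS computation, but the TOPSIS ranking step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the decision matrix, the criteria (modularity to maximise, conductance to minimise) and the weights are hypothetical, with the weights standing in for an AHP-derived priority vector.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Relative closeness of each alternative to the TOPSIS ideal solution.

    matrix:  rows = alternatives (algorithms), columns = criteria
    weights: criterion weights (e.g. from AHP pairwise comparisons), summing to 1
    benefit: per-criterion flag, True = maximise, False = minimise (a cost)
    """
    n_crit = len(weights)
    # Vector-normalise each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*v))
    # Ideal best/worst per criterion depend on benefit vs cost.
    best = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # Closeness in [0, 1]; higher means nearer the ideal solution.
    return [dist(r, worst) / (dist(r, best) + dist(r, worst)) for r in v]

# Hypothetical example: three community detection algorithms scored on
# modularity (maximise) and conductance (minimise).
matrix = [[0.62, 0.30], [0.45, 0.20], [0.70, 0.55]]
closeness = topsis_rank(matrix, weights=[0.6, 0.4], benefit=[True, False])
ranking = sorted(range(len(matrix)), key=lambda i: -closeness[i])  # best first
```

The AHP step (pairwise criterion comparison and consistency checking) would produce the `weights` vector; TOPSIS then turns multiple community scoring functions into a single ranking.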

Findings

In the absence of reliable ground-truth communities, it becomes hard to perform evaluation of generated network communities. To overcome this problem, in this paper, various community scoring functions are computed and studied for different community detection methods.

Research limitations/implications

In this study, the evaluation criteria are assumed to be independent. The authors observed that the criteria used have some interdependencies, which could not be captured by the AHP method. Therefore, in future work, the analytic network process may be explored to capture these interdependencies among the decision attributes.

Practical implications

The proposed ranking can be used to improve the search strategy of algorithms, decreasing the time needed to find the one best fitting a given case study. The suggested study ranks existing community detection algorithms to find the most appropriate one.

Social implications

Community detection is useful in many applications such as recommendation systems, health care, politics, economics, e-commerce, social media and communication networks.

Originality/value

Ranking of the community detection algorithms is performed using community scoring functions as well as AHP-TOPSIS methods.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 17 August 2018

Guillaume Gadek, Alexandre Pauchet, Nicolas Malandain, Laurent Vercouter, Khaled Khelif, Stéphan Brunessaux and Bruno Grilhères

Abstract

Purpose

Most of the existing literature on online social networks (OSNs) either focuses on community detection in graphs without considering the topic of the messages exchanged, or concentrates exclusively on the messages without taking into account the social links. The purpose of this paper is to characterise the semantic cohesion of such groups through the introduction of new measures.

Design/methodology/approach

A theoretical model for social links and salient topics on Twitter is proposed, and measures to evaluate the topical cohesiveness of a group are introduced. Inspired by precision and recall, the proposed measures, called expertise and representativeness, assess how well a set of groups matches the topic distribution. An adapted measure is also introduced for when a topic similarity can be computed. Finally, a topic relevance measure is defined, similar to tf-idf (term frequency, inverse document frequency).
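The abstract does not give the formal definitions, but precision- and recall-style group measures of the kind described can be sketched as follows. This is an illustrative reading, not the authors' formulas: here expertise is taken as the share of a group's messages that are on a topic, and representativeness as the group's share of all messages on that topic.

```python
def expertise(group_topics, topic):
    """Precision-like: share of the group's messages that are about `topic`."""
    return group_topics.count(topic) / len(group_topics) if group_topics else 0.0

def representativeness(group_topics, all_topics, topic):
    """Recall-like: share of all messages about `topic` posted by the group."""
    total = all_topics.count(topic)
    return group_topics.count(topic) / total if total else 0.0

# Toy corpus: each message reduced to its dominant topic label.
all_topics = ["sport"] * 6 + ["politics"] * 4
group = ["sport", "sport", "sport", "politics"]
e = expertise(group, "sport")                       # 3 of the group's 4 messages
r = representativeness(group, all_topics, "sport")  # 3 of the 6 "sport" messages
```

A topic-focused group would score high on both measures at once, which is the property the paper's combined view of topology and semantics is after.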

Findings

The measures yield interesting results, notably on a large tweet corpus: the metrics accurately describe the topics discussed in the tweets and enable the identification of topic-focused groups. Combined with topological measures, they provide a global and concise view of the detected groups.

Originality/value

Many algorithms applied to OSNs detect communities that often lack meaning and internal semantic cohesion. This paper is among the first to quantify this aspect, and more precisely the topical cohesion and topical relevance of a group. Moreover, the proposed indicators can be exploited for social media monitoring, to investigate the impact of a group of people: for instance, they could be used for journalism, marketing and security purposes.

Details

Data Technologies and Applications, vol. 52 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 1 June 1997

James L. Price

Abstract

Addresses the standardization of the measurements and the labels for concepts commonly used in the study of work organizations. As a reference handbook and research tool, seeks to improve measurement in the study of work organizations and to facilitate the teaching of introductory courses in this subject. Focuses solely on work organizations, that is, social systems in which members work for money. Defines measurement and distinguishes four levels: nominal, ordinal, interval and ratio. Selects specific measures on the basis of quality, diversity, simplicity and availability and evaluates each measure for its validity and reliability. Employs a set of 38 concepts, ranging from “absenteeism” to “turnover”, as the handbook’s frame of reference. Concludes by reviewing organizational measurement over the past 30 years and recommending future measurement research.

Details

International Journal of Manpower, vol. 18 no. 4/5/6
Type: Research Article
ISSN: 0143-7720

Article
Publication date: 5 October 2021

Hongming Gao, Hongwei Liu, Haiying Ma, Cunjun Ye and Mingjun Zhan

Abstract

Purpose

A good decision support system for credit scoring enables telecom operators to measure subscribers' creditworthiness in a fine-grained manner. This paper aims to propose a robust credit scoring system by leveraging latent information embedded in the telecom subscriber relation network, based on multiple data sources including internal telecom data, online app usage and offline consumption footprint.

Design/methodology/approach

Rooted in network science, the relation network model and singular value decomposition are integrated to infer different subscriber subgroups. Employing the results of network inference, the paper proposes a network-aware credit scoring system to predict continuous credit scores by implementing several state-of-the-art techniques, i.e. multivariate linear regression, random forest regression, support vector regression, multilayer perceptron and a deep learning algorithm. The authors use a data set consisting of 926 users of a major Chinese telecom operator within one month of 2018 to verify the proposed approach.
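The abstract leaves the network inference itself abstract; a toy sketch of separating a connected subgroup from a discrete (isolated) one via the leading spectral component of the relation network is shown below. This is not the paper's method in detail: power iteration is used here merely as a dependency-free stand-in for a full singular value decomposition, and the tiny call-graph is invented.

```python
def leading_singular_vector(A, iters=200):
    """Power iteration for the leading component of a symmetric adjacency
    matrix (for symmetric A the leading singular vector coincides with the
    dominant eigenvector up to sign)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            return v
        v = [x / norm for x in w]
    return v

# Toy subscriber relation network: subscribers 0-2 interact with each other
# (connected subgroup); subscriber 3 has no interactions (discrete subgroup).
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0]]
v = leading_singular_vector(A)
# Split subscribers by their weight in the leading component.
connected = [i for i, x in enumerate(v) if abs(x) > 1e-6]
discrete = [i for i, x in enumerate(v) if abs(x) <= 1e-6]
```

Each subgroup can then be fed to its own downstream regressor, which is the "network-aware" structure of the proposed scoring system.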

Findings

The distribution of the telecom subscriber relation network follows a power-law function instead of the Gaussian function previously assumed. This network-aware inference divides the subscriber population into a connected subgroup and a discrete subgroup. The findings also demonstrate that the network-aware decision support system achieves better and more accurate prediction performance. In particular, the results show that the approach considering stochastic equivalence significantly reduces the forecasting error of the connected-subgroup model, by 7.89-25.64% compared to the benchmark. Deep learning performs best, which might indicate that a non-linear relationship exists between telecom subscribers' credit scores and their multi-channel behaviours.

Originality/value

This paper contributes to the existing literature on business intelligence analytics and continuous credit scoring by incorporating latent information of the relation network and external information from multi-source data (e.g. online app usage and offline consumption footprint). Also, the authors have proposed a power-law distribution-based network-aware decision support system to reinforce the prediction performance of individual telecom subscribers' credit scoring for the telecom marketing domain.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 34 no. 5
Type: Research Article
ISSN: 1355-5855

Article
Publication date: 8 March 2022

Nany Yuliastuti, Ega Varian Okta, Vica Gitya Haryanti and Farhan Afif

Abstract

Purpose

Tanjung Mas, an urban village located in the northern part of Semarang city, has been facing a major impact of the coastal inundation occurring along the North Java Coastline. This by-product of global climate change is also affecting a 37-hectare slum, one of the largest slums in Semarang city. As the coastal flooding tends to escalate every year, the affected areas must have a coping ability to reduce its impact, while also having adequate resources to recover. Considering Tanjung Mas’ dense demographic conditions and its function as the city’s seaport, social vulnerability and capability play a significant role in mitigating and recovering from flood impacts, supplementing the local government’s effort of strengthening the Northern Java Seawall. Therefore, this study aims to score and correlate Tanjung Mas’ social vulnerability index (SoVI) and community capability index to assess how well its population can recover from tidal flooding in the future.

Design/methodology/approach

This study used the SoVI framework analysis to synthesize relevant social vulnerability indicators and community capability indicators in Tanjung Mas. The two sets of indicators were correlated with the Pearson (R-squared) correlation method to seek a possible non-causal relation. A bivariate index mapping method exhibits the SoVI and community capability index spatially, showing each area's vulnerability and capability level.
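The Pearson correlation step can be illustrated directly. The per-area index values below are invented for the sketch and are not the study's data; a strongly negative coefficient would indicate that more vulnerable areas tend to be less capable.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two index series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-area indices: social vulnerability (SoVI) vs community
# capability, both normalised to [0, 1].
sovi       = [0.82, 0.74, 0.55, 0.40, 0.31]
capability = [0.20, 0.28, 0.45, 0.61, 0.70]
r = pearson_r(sovi, capability)   # strongly negative for these toy values
r_squared = r ** 2
```

Pairing each area's two index values also yields the bivariate map classes the study describes (e.g. high vulnerability with low capability marking the worst combination).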

Findings

The vulnerability and capability levels in Tanjung Mas vary across its smaller areas, as six combinations of social vulnerability and community capability levels were found. The worst combination was found in areas closer to the coastline, with a high social vulnerability and a low community capability level. These areas need to be strengthened in both their capability and their coping ability toward coastal flooding to realize a resilient community.

Originality/value

This study will be useful for local governments as a supplement to the strategic spatial plan, predominantly in prioritizing vulnerable-area treatment prior to the completion of the Northern Java Seawall in 2025. It provides information and a simplified quantitative scoring of vulnerability and capability levels in a slum area, customized according to Indonesia's demographic characteristics. These results and the framework might be relevant to SoVI and capability scoring in developing countries.

Details

Journal of Financial Management of Property and Construction, vol. 28 no. 2
Type: Research Article
ISSN: 1366-4387

Article
Publication date: 7 November 2008

David M. Simpson

Abstract

Purpose

This paper sets out to develop disaster preparedness measurement methodology using a small test case of two communities. It is aimed at furthering discussion of the issues and complexities of developing measurement of preparedness indicators for application and utilization.

Design/methodology/approach

The study used a multi-modal approach, utilizing several data sources, including: a survey of essential facility managers in the two communities; document data extracted from the two cities' Comprehensive Plans, Budgets and Emergency Operation Plans; and key informant interviews. Data collected from these sources formed the basis of the model construction and testing.

Findings

The primary conclusion is that a preparedness measurement model, while inherently difficult to construct and execute, has the potential to assist in the comparison and evaluation of community preparedness. Further such development requires additional refinement, calibration, and applied testing.

Research limitations/implications

In terms of future research, this type of effort is preliminary and needs to be tested across a larger number of communities to gauge its accuracy; it would benefit most from the creation of consistent baseline scores for a larger cross-section of communities. Baseline scores could be examined for disasters that affect multiple communities, and comparisons and evaluations of the preparedness measures could be applied. Future research should calibrate the model using expert and community feedback.

Practical implications

Should a standardized measurement and indicator system be developed with wide application, there would be effects in the insurance, regulatory and management sectors.

Originality/value

The paper creates a measurement and indexing process for discussion and evaluation in the hazards research community.

Details

Disaster Prevention and Management: An International Journal, vol. 17 no. 5
Type: Research Article
ISSN: 0965-3562

Article
Publication date: 27 July 2022

Svetlozar Nestorov, Dinko Bačić, Nenad Jukić and Mary Malliaris

Abstract

Purpose

The purpose of this paper is to propose an extensible framework for extracting data set usage from research articles.

Design/methodology/approach

The framework uses a training set of manually labeled examples to identify word features surrounding data set usage references. Using the word features and general entity identifiers, candidate data sets are extracted and scored separately at the sentence and document levels. Finally, the extracted data set references can be verified by the authors using a web-based verification module.
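The abstract describes sentence- and document-level scoring of candidate data sets from surrounding word features; a highly simplified sketch is given below. The cue-word list and the scoring rules are assumptions for illustration only, not the paper's trained features or actual scoring functions.

```python
# Hypothetical cue words assumed to signal data set usage near a candidate.
CUE_WORDS = {"data", "dataset", "corpus", "survey", "sample", "collected"}

def sentence_score(sentence, candidate):
    """Score a candidate entity by the density of cue words in its sentence."""
    tokens = sentence.lower().split()
    if candidate.lower() not in tokens:
        return 0.0
    cues = sum(1 for t in tokens if t in CUE_WORDS)
    return cues / len(tokens)

def document_score(sentences, candidate):
    """Aggregate sentence-level evidence across the whole document."""
    scores = [sentence_score(s, candidate) for s in sentences]
    return max(scores) if scores else 0.0

sentences = ["we collected the GSS survey data for analysis",
             "results are reported below"]
score = document_score(sentences, "GSS")
```

The real framework would learn the features from its manually labeled training set and pass high-scoring candidates to the web-based author-verification module.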

Findings

This paper successfully addresses a significant gap in entity extraction literature by focusing on data set extraction. In the process, this paper: identified an entity-extraction scenario with specific characteristics that enable a multiphase approach, including a feasible author-verification step; defined the search space for word feature identification; defined scoring functions for sentences and documents; and designed a simple web-based author verification step. The framework is successfully tested on 178 articles authored by researchers from a large research organization.

Originality/value

Whereas previous approaches focused on completely automated large-scale entity recognition from text snippets, the proposed framework is designed for longer, high-quality text, such as a research publication. The framework includes a verification module that enables the discovered entities to be validated by the authors of the research publications. This module shares some similarities with general crowdsourcing approaches, but the target scenario increases the likelihood of meaningful author participation.

Article
Publication date: 20 July 2012

Paul Laughton

Abstract

Purpose

The purpose of this paper is to develop a test for data centres, repositories and archives to determine OAIS functional model conformance. The test developed was carried out among the World Data Centre (WDC) member data centres. The method used to develop the OAIS functional model conformance test is discussed, along with the test results.

Design/methodology/approach

To conduct the OAIS functional model conformance test, a quantitative approach in the format of an online survey was used. This was part of a mixed methods research project.

Findings

The test developed did produce a means for quantifying OAIS functional model conformance. The mean score for the 26 WDC member data centres that completed the test was 62.08 out of a possible 92. The highest scoring WDC member data centre obtained a score of 90, while the lowest score obtained was 27.

Research limitations/implications

This test was only conducted among a relatively small sample, making it difficult to generalise the results obtained and determine how effectively the OAIS functional model conformance test measured conformance. It remains a challenge to quantify data curation practices with regard to the OAIS functional model.

Originality/value

The OAIS functional model conformance test is the first attempt at quantifying OAIS functional model compliance.

Details

Program, vol. 46 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 4 April 2016

Margam Madhusudhan and Vikas Singh

Abstract

Purpose

The purpose of this paper is to analyze the various features and functions of Koha, Libsys, NewGenLib and Virtua with the help of a specially designed evaluation checklist, and to rank them based on the features/functions of an integrated library management system (ILMS).

Design/methodology/approach

The evaluation approach taken in this paper is similar to that of Singh and Sanaman (2012) and Madhusudhan and Shalini (2014), with minor modifications, comprising 306 features/functions grouped into ten broad categories.

Findings

The paper explores different features of open source (OS) and commercial ILMSs, revealing that Virtua got the highest total score of 218 (77.86 per cent), followed by Koha with a score of 204 (72.86 per cent). Interestingly, NewGenLib got the lowest total score, 163 (58.21 per cent). The ILMSs under study lag behind in exploiting the full potential of Web 2.0 features, including cloud computing, which needs to be addressed in their future development.
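The reported percentages are consistent with a 280-point maximum for the scored checklist totals (218/280 ≈ 77.86 per cent); treating that maximum as an inferred assumption, the ranking arithmetic can be reproduced in a few lines.

```python
def percentage_scores(totals, max_score):
    """Convert raw checklist totals to percentages and rank descending."""
    pct = {name: round(100 * score / max_score, 2) for name, score in totals.items()}
    return sorted(pct.items(), key=lambda kv: -kv[1])

# Raw totals from the abstract; the 280-point maximum is inferred from the
# reported percentages, not stated explicitly.
totals = {"Virtua": 218, "Koha": 204, "NewGenLib": 163}
ranking = percentage_scores(totals, max_score=280)
# ranking reproduces the abstract's ordering and percentages.
```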

Practical implications

It is hoped that both the OS and commercial software will attend to the lacunae and soon develop fully functional Web 2.0/3.0 and cloud-based technologies.

Originality/value

The findings of this paper will not only guide librarians in the selection of a good ILMS that can cater to the needs of their libraries, but also keep students of Library and Information Science abreast of ILMS evaluation. The findings will also help ILMS vendors recognize the limitations of their products, so that they can overcome the limitations faced by users and improve them.

Details

The Electronic Library, vol. 34 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 28 September 2007

Andreas Langegger and Wolfram Wöß

Abstract

Purpose

There is still little support for the consumer decision-making process on the web, especially when price is not the primary property of a product. Reasons for this include complex product specifications as well as often deliberately weak interoperability between e-commerce sites. This paper aims to address this issue.

Design/methodology/approach

The semantic web is supposed to make product information more interoperable between different sites. Additionally, some products with limited time frames of availability, like real estate or second-hand cars, require periodical searches over several days, weeks or even months. Existing systems cannot be applied to these kinds of products, so instant information about new offers on the market is crucial. Wireless access to the web enables services to become instantaneous and to provide up-to-date information to users.

Findings

This paper presents a framework based on multivariate product comparison that allows users to delegate search requests to an agent. The success of the agent depends heavily on the matching algorithm. Fuzzy utility functions and the analytic hierarchy process are a very feasible combination for the scoring of offers.
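The combination of fuzzy utility functions with AHP-style weights can be sketched minimally as below. The triangular membership form, the attributes and all numbers are illustrative assumptions, not the paper's actual model.

```python
def triangular_utility(x, lo, peak, hi):
    """Triangular fuzzy utility: 0 outside [lo, hi], 1 at the preferred peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def score_offer(offer, prefs, weights):
    """Weighted aggregate of per-attribute fuzzy utilities.

    `weights` would come from an AHP pairwise comparison of the attributes.
    """
    return sum(w * triangular_utility(offer[attr], *prefs[attr])
               for attr, w in weights.items())

# Hypothetical real-estate example: preferred size around 100 m2,
# preferred price around 200 (thousand), price weighted more heavily.
prefs = {"size": (60, 100, 140), "price": (120, 200, 320)}
weights = {"size": 0.4, "price": 0.6}
offer = {"size": 90, "price": 240}
score = score_offer(offer, prefs, weights)
```

An agent running this scoring over a stream of new listings could then notify the user as soon as a sufficiently high-scoring offer appears, matching the instantaneous-service scenario described above.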

Originality/value

The proposed system supports users in finding products on the web matching specific user preferences and instantly informs them when new items become available on the virtual market. As a specific use case, the framework is being applied to the real estate sector, because several shortcomings of the current support have been identified especially for this sector.

Details

International Journal of Web Information Systems, vol. 3 no. 1/2
Type: Research Article
ISSN: 1744-0084
