Search results

1 – 10 of 87
Article
Publication date: 1 July 2014

Byung-Won On, Gyu Sang Choi and Soo-Mok Jung

Abstract

Purpose

The purpose of this paper is to collect and understand the nature of real cases of author name variants that have often appeared in bibliographic digital libraries (DLs) as a case study of the name authority control problem in DLs.

Design/methodology/approach

To find a sample of name variants across DLs (e.g. DBLP and ACM) and in a single DL (e.g. ACM), the approach is based on two bipartite matching algorithms: Maximum Weighted Bipartite Matching and Maximum Cardinality Bipartite Matching.
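As an illustration of the second of these, a minimal augmenting-path sketch of Maximum Cardinality Bipartite Matching (the candidate-pair input and all names are hypothetical; the weighted variant is typically solved with the Hungarian algorithm, which is not shown here):

```python
def max_cardinality_matching(adj, n_left, n_right):
    """Maximum cardinality bipartite matching via augmenting paths.

    adj[u] lists the right-side vertices compatible with left vertex u,
    e.g. name records in one DL that may refer to the same author as
    record u in another DL (a hypothetical input encoding)."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matched = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matched += 1
    return matched, match_right
```

For example, with `adj = [[0, 1], [0], [2]]` the matching re-routes the first record so all three pairs are matched.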

Findings

The authors validated the effectiveness and efficiency of the bipartite matching algorithms, and studied the nature of real cases of author name variants found across DLs (e.g. ACM, CiteSeer and DBLP) and within a single DL.

Originality/value

To the best of the authors' knowledge, little research effort has been devoted to understanding the nature of author name variants in DLs. A thorough analysis can help focus research effort on the real problems that arise when duplicate detection methods are applied.

Details

Program, vol. 48 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 4 September 2009

Jianjun Yang and Zongming Fei

Abstract

Purpose

Wireless mesh networks (WMNs) have evolved quickly during the last several years and are widely used in many fields. Channel allocation provides a basic means to guarantee good mesh network performance, such as efficient routing. The purpose of this paper is to study channel allocation in mesh networks.

Design/methodology/approach

First, papers in the channel allocation field are surveyed and the limitations of existing methods noted. Graph theory is then used to build a better model of the problem, and algorithms are proposed based on this model. Simulations show that these algorithms outperform previous conflict graph-based approaches.
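To illustrate the general idea only (this is not the authors' bipartite graph-based algorithm), channel allocation can be sketched as a greedy assignment of channels to links that minimizes conflicts with already-assigned interfering links; the `interferes` predicate and all names are hypothetical:

```python
def greedy_channel_allocation(links, channels, interferes):
    """Greedily assign each mesh link a channel, preferring the channel
    with the fewest conflicts among already-assigned interfering links.

    interferes(a, b) -> True when links a and b are close enough to
    interfere if they share a channel (hypothetical predicate)."""
    assignment = {}
    for link in links:
        best, best_conflicts = None, None
        for ch in channels:
            conflicts = sum(
                1 for other, other_ch in assignment.items()
                if other_ch == ch and interferes(link, other)
            )
            if best_conflicts is None or conflicts < best_conflicts:
                best, best_conflicts = ch, conflicts
        assignment[link] = best
    return assignment
```

With three mutually interfering links and two channels, one conflict is unavoidable but the greedy pass spreads the load across channels.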

Findings

The paper analyzes the conflict graph-based model, identifies its limitations and proposes a bipartite graph-based model. Algorithms are devised based on this model. Simulation results show that these algorithms reduce the starvation ratio and improve bandwidth utilization compared with previous conflict graph-based algorithms.

Research limitations/implications

The research in this paper assumes an ideal network environment without interference or noise. Future work should take noise into account.

Practical implications

To study routing strategies for WMNs, it is not sufficient to consider only path length as the routing metric, since the nodes are heterogeneous. Routing metrics should also include the channel bandwidths that result from channel allocation.

Originality/value

This paper presents a new bipartite graph-based model to represent the channel allocation problem in mesh networks. This model is more efficient and carries more information than the conflict graph model, and the paper also proposes channel allocation algorithms based on it. The algorithms reduce the starvation ratio and improve bandwidth utilization.

Details

International Journal of Pervasive Computing and Communications, vol. 5 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 6 March 2017

Jihua Wang and Huayu Wang

Abstract

Purpose

This study aims to compute 3D model similarity by extracting and comparing shape features from the neutral files.

Design/methodology/approach

In this work, the clear-text STEP (Standard for the Exchange of Product model data) encoding of 3D models was analysed, and the models were characterized by two-depth trees consisting of surface and shell nodes. All surfaces in the STEP files can be subdivided into three kinds: free, analytical and loop surfaces. Surface similarity is defined by the variation coefficients of distances between data points on two surfaces; shell similarity and 3D model similarity are then determined using an optimal algorithm for bipartite graph matching.
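A minimal sketch of the variation-coefficient idea, assuming paired sample points on the two surfaces (the paper's exact definition may differ; all names and the similarity scaling are illustrative):

```python
import math

def coefficient_of_variation(distances):
    """CV = standard deviation / mean of the point-to-point distances."""
    mean = sum(distances) / len(distances)
    var = sum((d - mean) ** 2 for d in distances) / len(distances)
    return math.sqrt(var) / mean

def surface_similarity(points_a, points_b):
    """Pair up sampled data points on two surfaces and score similarity
    from the spread of their pairwise distances: a low coefficient of
    variation means the surfaces differ mostly by a rigid offset."""
    distances = [math.dist(p, q) for p, q in zip(points_a, points_b)]
    return 1.0 / (1.0 + coefficient_of_variation(distances))
```

Two parallel rows of points, for instance, give identical pairwise distances, a zero variation coefficient, and a similarity of 1.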

Findings

Experiments verify the effectiveness of the 3D model similarity algorithm.

Originality/value

The novelty of this study lies in the computation of 3D model similarity by comparison of all surfaces. In addition, the study makes several key observations: surfaces reflect the most information concerning the functions and attributes of a 3D model, so similarity between surfaces generates more comprehensive content (both external and internal); semantic-based 3D retrieval can be obtained under the premise of comparison of surface semantics; and more accurate similarity of 3D models can be obtained using the optimal algorithm of bipartite graph matching for all surfaces.

Details

Engineering Computations, vol. 34 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 August 2008

Wilma Penzo

Abstract

Purpose

The semantic and structural heterogeneity of large Extensible Markup Language (XML) digital libraries emphasizes the need to support approximate queries, i.e. queries where the matching conditions are relaxed so as to retrieve results that possibly partially satisfy the user's requests. The paper aims to propose a flexible query answering framework which efficiently supports complex approximate queries on XML data.

Design/methodology/approach

To reduce the number of relaxations applicable to a query, the paper relies on the specification of user preferences about the types of approximations allowed. A specifically devised index structure which efficiently supports both semantic and structural approximations, according to the specified user preferences, is proposed. Also, a ranking model to quantify approximations in the results is presented.

Findings

Personalized queries, on the one hand, effectively narrow the space of query reformulations and, on the other, enhance the user's query capabilities with a great deal of flexibility and control over requests. As to the quality of results, the retrieval process benefits considerably from the presence of user preferences in the queries. Experiments demonstrate the effectiveness and efficiency of the proposal, as well as its scalability.

Research limitations/implications

Future developments concern the evaluation of the effectiveness of personalization on queries through additional examinations of the effects of the variability of parameters expressing user preferences.

Originality/value

The paper is intended for the research community and proposes a novel query model which incorporates user preferences about query relaxations on large heterogeneous XML data collections.

Details

International Journal of Web Information Systems, vol. 4 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 July 2014

Wen-Feng Hsiao, Te-Min Chang and Erwin Thomas

Abstract

Purpose

The purpose of this paper is to propose an automatic metadata extraction and retrieval system to extract bibliographical information from digital academic documents in portable document formats (PDFs).

Design/methodology/approach

The authors use PDFBox to extract text and font size information, a rule-based method to identify titles, and a Hidden Markov Model (HMM) to extract the titles and authors. Finally, the extracted titles and authors (possibly incorrect or incomplete) are sent as query strings to digital libraries (e.g. ACM, IEEE, CiteSeerX, SDOS, and Google Scholar) to retrieve the rest of the metadata.
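The decoding step of such an HMM is typically the Viterbi algorithm; a toy sketch with hypothetical TITLE/AUTHOR states and made-up probabilities (not the authors' trained model):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely label sequence (e.g. TITLE/AUTHOR) for observed tokens."""
    # probability of the best path ending in each state at step 0
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states
            )
            V[t][s], back[t][s] = prob, prev
    # trace the best path backwards from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 1 - 1, -1):
        if t > 0:
            path.append(back[t][path[-1]])
    return list(reversed(path))
```

With emission tables that favour title words early and name words late, the decoder labels a header token stream accordingly.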

Findings

Four experiments were conducted to examine the feasibility of the proposed system. The first compares two HMM designs: a multi-state model and the proposed one-state model. The results show that the one-state model performs comparably to the multi-state model but is better suited to real-world unknown states. The second experiment shows that the proposed model (without the aid of online queries) matches the performance of other researchers' models on the Cora paper header dataset. The third experiment examines the system's performance on a small dataset of 43 real PDF research papers; the proposed system (with online queries) performs well on bibliographical data extraction and even outperforms the free citation management tool Zotero 3.0. The fourth experiment, on a larger dataset of 103 papers, shows that the system significantly outperforms Zotero 4.0. The feasibility of the proposed model is thus justified.

Research limitations/implications

In academic terms, the system is unique in two respects: first, it uses only the Cora header set for HMM training, without other tagged datasets or gazetteer resources, which makes the system light and scalable. Second, the system is workable and can be applied to extracting metadata from real-world PDF files. The extracted bibliographical data can then be imported into citation software such as EndNote or RefWorks to increase researchers' productivity.

Practical implications

In practical terms, the system outperforms the existing tool Zotero 4.0. This gives practitioners a good opportunity to develop similar products for real applications, though it may require some knowledge of HMM implementation.

Originality/value

The HMM implementation is not novel in itself; what is innovative is the combination of two HMM models. The main model is adapted from Freitag and McCallum (1999), to which the authors add the word features of the Nymble HMM (Bikel et al., 1997). The system works without manually tagging datasets before training (only the Cora dataset is used for training, with testing on real-world PDF papers), which differs significantly from prior work. The experimental results provide sufficient evidence of the feasibility of the proposed method in this respect.

Details

Program, vol. 48 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 February 1983

G.D. Hachtel and S.W. Director

Abstract

Results are given which establish a computational foundation for simplicial approximation and design centering of a convex body. A simplicial polyhedron is used to approximate the convex body and the “design center”, i.e. the point inside the body furthest in some norm from its exterior, is approximated by the point in the polyhedron furthest from its exterior. A point representation of the polyhedron is used, so that there is no necessity for computing or storing the faces of the approximation. Since in N space there can be factorially more faces than points, we are able to achieve significant efficiencies in both operation count and storage requirements, compared to previously reported methods. We give results for the 2 norm and the max norm, and demonstrate that our new method is operable in the nonconvex case, and can handle a mixed basis of faces and points as well.
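As a conceptual sketch only (not the simplicial, point-representation method of the paper): for a 2-D convex body given by bounding half-planes, the design center can be approximated by a grid search maximizing the minimum 2-norm distance to the edges. All names and the grid resolution are illustrative:

```python
import math

def design_center(edges, grid=50):
    """Approximate the design center of a 2-D convex body: the interior
    point furthest (in the 2-norm) from the exterior, found by a coarse
    grid search over the unit square.

    edges: list of (a, b, c) with a*x + b*y <= c defining the body."""
    best, best_margin = None, -math.inf
    for i in range(grid + 1):
        for j in range(grid + 1):
            x, y = i / grid, j / grid
            # signed distance to the nearest edge; negative outside the body
            margin = min((c - a * x - b * y) / math.hypot(a, b)
                         for a, b, c in edges)
            if margin > best_margin:
                best, best_margin = (x, y), margin
    return best, best_margin
```

For the unit square, the search recovers the obvious center (0.5, 0.5) at distance 0.5 from every side.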

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 2 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 27 July 2010

Hassan Naderi and Beatrice Rumpler

Abstract

Purpose

This paper aims to discuss and test the claim that utilization of the personalization techniques can be valuable to improve the efficiency of collaborative information retrieval (CIR) systems.

Design/methodology/approach

A new personalized CIR system, called PERCIRS, is presented, based on user profile similarity calculation (UPSC) formulas. To this end, the paper proposes several UPSC formulas as well as two techniques to evaluate them. As the proposed CIR system is personalized, it cannot be evaluated with Cranfield-like evaluation techniques (e.g. TREC). Hence, the paper proposes a new user-centric mechanism that enables PERCIRS to be evaluated. This mechanism is generic and can be used to evaluate any other personalized IR system.
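One simple UPSC-style formula, shown purely for illustration (the paper's formulas, including the graph-based one it favours, differ), is cosine similarity over weighted term profiles:

```python
import math

def profile_similarity(profile_a, profile_b):
    """Cosine similarity between two user profiles represented as
    term -> weight dictionaries (a hypothetical profile encoding)."""
    common = set(profile_a) & set(profile_b)
    dot = sum(profile_a[t] * profile_b[t] for t in common)
    norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
    norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Identical profiles score 1.0 and disjoint profiles score 0.0, so the value can rank candidate collaborators directly.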

Findings

The results show that among the UPSC formulas proposed in this paper, the (query-document)-graph based formula is the most effective. After integrating this formula into PERCIRS and comparing it with nine other IR systems, it is concluded that the system's results are better than those of the other IR systems. In addition, the paper shows that the complexity of the system is lower than that of the other CIR systems.

Research limitations/implications

The system asks users to explicitly rank the returned documents, while explicit ranking is still not widespread. However, the authors believe that users should actively participate in the IR process in order to properly satisfy their information needs.

Originality/value

The value of this paper lies in combining collaborative and personalized IR, as well as introducing a mechanism which enables the personalized IR system to be evaluated. The proposed evaluation mechanism is very valuable for developers of personalized IR systems. The paper also introduces some significant user profile similarity calculation formulas, and two techniques to evaluate them. These formulas can also be used to find the user's community in the social networks.

Details

Journal of Documentation, vol. 66 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 14 June 2013

Edgardo Molina, Alpha Diallo and Zhigang Zhu

Abstract

Purpose

The purpose of this paper is to propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a blind or low‐vision user.

Design/methodology/approach

The authors consider three types of "visual noun" features: signage, visual-text, and visual-icons, proposed as a low-cost method for augmenting environments. These are used in combination with an RGB-D sensor and a simplified SLAM algorithm to develop a navigation assistance framework suitable for blind and low-vision users.

Findings

It was found that signage detection can not only help a blind user find a location, but can also give accurate orientation and location information to guide the user through a complex environment. The combination of visual nouns for orientation and RGB-D sensing for traversable path finding can be a cost-effective solution for navigation assistance for blind and low-vision users.

Research limitations/implications

This is the first step for a new approach to self-localization and local navigation of a blind user using both signs and 3D data. The approach is meant to be cost-effective, but it works only in man-made scenes where many signs exist or can be placed, and where the signs are relatively permanent in appearance and location.

Social implications

According to 2012 World Health Organization figures, 285 million people are visually impaired, of whom 39 million are blind. This project will have a direct impact on this community.

Originality/value

Signage detection has been widely studied for assisting visually impaired people in finding locations, but this paper provides the first attempt to use visual nouns as visual features to accurately locate and orient a blind user. The combination of visual nouns with 3D data from an RGB‐D sensor is also new.

Details

Journal of Assistive Technologies, vol. 7 no. 2
Type: Research Article
ISSN: 1754-9450

Book part
Publication date: 25 July 2008

Martin J. Conyon and Mark R. Muldoon

Abstract

In this chapter we investigate the ownership and control of UK firms using contemporary methods from computational graph theory. Specifically, we analyze a ‘small-world’ model of ownership and control. A small-world is a network whose actors are linked by a short chain of acquaintances (short path lengths), but at the same time have a strongly overlapping circle of friends (high clustering). We simulate a set of corporate worlds using an ensemble of random graphs introduced by Chung and Lu (2002a, 2002b). We find that the corporate governance network structures analyzed here are more clustered (‘clubby’) than would be predicted by the random-graph model. Path lengths, though, are generally not shorter than expected. In addition, we investigate the role of financial institutions: potentially important conduits creating connectivity in corporate networks. We find such institutions give rise to systematically different network topologies.
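The two small-world diagnostics mentioned here, clustering and characteristic path length, can be computed directly; a minimal sketch on an adjacency-set graph (illustrative only, not the Chung-Lu ensemble comparison):

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves linked."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path length over all reachable ordered pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0
```

A small-world diagnosis compares these two statistics against their expected values under a comparable random graph: higher clustering, similar path length.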

Details

Network Strategy
Type: Book
ISBN: 978-0-7623-1442-3

Article
Publication date: 26 September 2019

Asma Ayari and Sadok Bouamama

Abstract

Purpose

The multi-robot task allocation (MRTA) problem is a challenging issue in the robotics area with plentiful practical applications. Expanding the number of tasks and robots increases the size of the state space significantly and degrades MRTA performance. As this process requires high computational time, this paper describes a technique that minimizes the size of the explored state space by partitioning the tasks into clusters. The authors address the MRTA problem by putting forward a new automatic clustering algorithm for the robots' tasks based on a dynamic-distributed double-guided particle swarm optimization, namely, ACD3GPSO.

Design/methodology/approach

This approach consists of two phases: phase I groups the tasks into clusters using the ACD3GPSO algorithm, and phase II allocates the robots to the clusters. Four factors are introduced in ACD3GPSO for better results. First, ACD3GPSO uses the k-means algorithm to improve the initial generation of particles. The second factor is distribution, using a multi-agent approach to reduce the run time. The third is the diversification introduced by two local optimum detectors, LODpBest and LODgBest. The last is based on the concept of templates and the guidance probability Pguid.
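The k-means seeding step of phase I can be sketched as plain k-means on task coordinates (the PSO refinement, distribution and guidance mechanisms are omitted; all names and the 2-D task encoding are hypothetical):

```python
import math
import random

def kmeans(tasks, k, iters=20, seed=0):
    """Plain k-means on 2-D task coordinates, as a stand-in for the
    k-means seeding of the initial particle generation."""
    rng = random.Random(seed)
    centers = rng.sample(tasks, k)
    for _ in range(iters):
        # assign each task to its nearest cluster center
        clusters = [[] for _ in range(k)]
        for t in tasks:
            i = min(range(k), key=lambda i: math.dist(t, centers[i]))
            clusters[i].append(t)
        # move each center to the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers, clusters
```

On two well-separated groups of tasks, the centers converge to the group means, after which robots would be allocated per cluster in phase II.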

Findings

Computational experiments were carried out to prove the effectiveness of this approach. It is compared against two state-of-the-art solutions of the MRTA and against two evolutionary methods under five different numerical simulations. The simulation results confirm that the proposed method is highly competitive in terms of the clustering time, clustering cost and MRTA time.

Practical implications

The proposed algorithm is quite useful for real-world applications, especially the scenarios involving a high number of robots and tasks.

Originality/value

In this methodology, the ACD3GPSO algorithm reduces the run time of task allocation. The proposed method can therefore be considered a viable alternative in the field of MRTA with growing numbers of both robots and tasks. In PSO, stagnation and local optima are avoided by adding diversity to the population, without losing its fast convergence.

Details

Assembly Automation, vol. 40 no. 2
Type: Research Article
ISSN: 0144-5154
