Search results
1 – 10 of over 8,000

Jihong Guan, Jiaogen Zhou and Shuigeng Zhou
Abstract
The rapid emergence of the Mobile Internet and the constantly growing number of wireless subscribers bring new opportunities and challenges to geographic information sharing and access. Current Web GISs, which are accessed through connection-based approaches, are very inefficient at fulfilling the requirements of GIS applications in open, dynamic, heterogeneous and distributed computing environments such as the (Mobile) Internet. In this paper, we propose a new system for accessing and sharing distributed geographic information using mobile agent and GML technologies: mobile agents overcome the limitations of traditional distributed computing paradigms in the (Mobile) Internet context, GML is adopted as the common format for wrapping and mediating spatial information, and SVG is used as the web-map publishing format that can be processed and displayed in a Web browser. A prototype implementation demonstrates the effectiveness and feasibility of the proposed method.
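The GML-to-SVG pipeline the abstract describes can be sketched in a few lines: extract a coordinate list from a GML fragment and re-express it as an SVG polyline for browser display. The fragment below is a toy example; the element names follow GML conventions, but the document structure is illustrative, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Toy GML fragment with a coordinate list (illustrative, not the paper's schema).
GML = """<gml:LineString xmlns:gml="http://www.opengis.net/gml">
  <gml:posList>10 20 30 40 50 25</gml:posList>
</gml:LineString>"""

def gml_to_svg_polyline(gml_text):
    """Extract a GML posList and re-express it as an SVG <polyline> element."""
    root = ET.fromstring(gml_text)
    ns = {"gml": "http://www.opengis.net/gml"}
    coords = root.find("gml:posList", ns).text.split()
    # Pair up (x, y) values and format them as SVG "x,y x,y ..." points.
    points = " ".join(f"{x},{y}" for x, y in zip(coords[::2], coords[1::2]))
    return f'<polyline points="{points}" fill="none" stroke="black"/>'

svg = gml_to_svg_polyline(GML)
```

A real mediator would of course handle full GML feature collections, styling and coordinate transformation; the point here is only the wrap-then-publish flow.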
Zongda Wu, Jian Xie, Xinze Lian and Jun Pan
Abstract
Purpose
The security of archival privacy data in the cloud has become the main obstacle to the application of cloud computing in archives management. To this end, aiming at XML archives, this paper aims to present a privacy protection approach that can ensure the security of privacy data in the untrusted cloud, without compromising the system availability.
Design/methodology/approach
The basic idea of the approach is as follows. First, the privacy data is strictly encrypted on a trusted client before being submitted to the cloud, ensuring its security. Then, to query the encrypted data efficiently, the approach constructs key feature data for the encrypted data, so that each XML query defined on the privacy data can be executed correctly in the cloud.
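The encrypt-then-index idea can be illustrated with a minimal stand-in, not the paper's actual construction: the client encrypts each value and additionally stores a deterministic keyed digest as its "feature", so the cloud can answer equality predicates without seeing plaintext. The cipher below is a toy XOR keystream purely for illustration; a real system would use an authenticated cipher such as AES-GCM.

```python
import hmac, hashlib

SECRET = b"client-side key (never leaves the trusted client)"

def encrypt(value: str) -> bytes:
    # Stand-in cipher: XOR with a SHA-256-derived keystream (illustration only;
    # a real deployment would use a proper authenticated cipher).
    stream = hashlib.sha256(SECRET).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(value.encode()))

def feature(value: str) -> str:
    # Deterministic keyed digest: lets the cloud match equality predicates
    # without ever learning the plaintext.
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

# The untrusted cloud stores only ciphertext plus feature data.
cloud_store = [
    {"cipher": encrypt("Alice"), "feat": feature("Alice")},
    {"cipher": encrypt("Bob"),   "feat": feature("Bob")},
]

def query(plaintext: str):
    # The trusted client translates an XML predicate such as [name='Bob']
    # into a feature match executed in the cloud.
    target = feature(plaintext)
    return [row for row in cloud_store if row["feat"] == target]

hits = query("Bob")
```

Range and structural XML predicates need richer feature data than this equality digest, which is part of what makes the paper's construction non-trivial.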
Findings
Finally, both theoretical analysis and experimental evaluation demonstrate the overall performance of the approach in terms of security, efficiency and accuracy.
Originality/value
This paper presents a valuable study attempting to protect privacy for the management of XML archives in a cloud environment, so it has a positive significance to promote the application of cloud computing in a digital archive system.
Ana Maria de Carvalho Moura, Fabio Porto, Vania Vidal, Regis Pires Magalhães, Macedo Maia, Maira Poltosi and Daniele Palazzi
Abstract
Purpose
The purpose of this paper is to present a four-level architecture that aims at integrating, publishing and retrieving ecological data making use of linked data (LD). It allows scientists to explore taxonomical, spatial and temporal ecological information, access trophic-chain relations between species and complement this information with other data sets published on the Web of data. The development of ecological information repositories is a crucial step in organizing and cataloging natural reserves. However, such repositories face challenges in providing a shared and global view of biodiversity data, such as data heterogeneity, lack of metadata standardization and poor data interoperability. LD has emerged as a promising technology for addressing some of these challenges.
Design/methodology/approach
Ecological data, which is produced and collected from different media resources, is stored in distinct relational databases and published as RDF triples, using a relational-to-Resource Description Framework mapping language. An application ontology reflects a global view of these datasets and shares the same vocabulary with them. Scientists specify their data views by selecting their objects of interest in a user-friendly way. A data view is internally represented as an algebraic scientific workflow that applies data transformation operations to integrate the data sources.
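The relational-to-RDF step can be sketched as follows, in the spirit of an R2RML-style mapping: each row becomes a subject URI, and each column value becomes a triple. The rows, species names and vocabulary URIs below are invented for illustration; a real deployment would reuse terms from the shared application ontology.

```python
# Toy relational rows, standing in for one of the ecological databases.
species_rows = [
    {"id": 1, "name": "Micropogonias furnieri", "feeds_on": 2},
    {"id": 2, "name": "Farfantepenaeus paulensis", "feeds_on": None},
]

# Hypothetical vocabulary namespace (illustrative only).
ECO = "http://example.org/eco#"

def rows_to_triples(rows):
    """Map relational rows to RDF (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subj = f"{ECO}species/{row['id']}"
        triples.append((subj, f"{ECO}name", row["name"]))
        if row["feeds_on"] is not None:
            # Trophic-chain relation between species.
            triples.append((subj, f"{ECO}feedsOn", f"{ECO}species/{row['feeds_on']}"))
    return triples

triples = rows_to_triples(species_rows)
```

Because every dataset is mapped into the same vocabulary, a query over the application ontology can be rewritten against any of the underlying sources.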
Findings
Despite years of investment, data integration continues to offer scientists challenges in obtaining consolidated data views over a large number of heterogeneous scientific data sources. The semantic integration approach presented in this paper simplifies this process both in terms of mappings and of query answering through data views.
Social implications
This work provides knowledge about the Guanabara Bay ecosystem and serves as a source of answers to questions about the anthropic and climatic impacts on the bay ecosystem. Additionally, it will enable evaluating the adequacy of the actions being taken to clean up Guanabara Bay with regard to the marine ecology.
Originality/value
Mapping complexity is traded for the process of generating the exported ontology. The approach reduces the problem of integration to that of mappings between homogeneous ontologies. As a byproduct, data views are easily rewritten into queries over the data sources. The architecture is general: although applied here to the ecological context, it can be extended to other domains.
Abstract
Purpose
The purpose of this paper is to examine the way in which end user searching on the web has become the primary method of locating digital images for many people. This paper seeks to investigate how users structure these image queries.
Design/methodology/approach
This study investigates the structure and formation of image queries on the web by mapping a sample of web queries to three known query classification schemes for image searching (i.e. Enser and McGregor, Jörgensen, and Chen).
Findings
The results indicate that the features and attributes of web image queries differ relative to image queries utilized on other information retrieval systems and by other user populations. This research points to the need for five additional attributes (i.e. collections, pornography, presentation, URL, and cost) in order to classify web image queries, which were not present in any of the three prior classification schemes.
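A classifier over the web-specific attributes named in these findings might be sketched as keyword heuristics. The attribute labels below come from the findings; the regular-expression rules and sample queries are invented for illustration (the fifth attribute, pornography, is omitted here), and the study's actual coding was done manually against published schemes.

```python
import re

# Illustrative heuristics for four of the five web-specific attributes;
# these rules are NOT the study's coding scheme.
RULES = [
    ("URL",          re.compile(r"https?://|www\.|\.(com|org|edu)\b")),
    ("cost",         re.compile(r"\b(free|cheap|price|buy)\b", re.I)),
    ("collections",  re.compile(r"\b(gallery|collection|archive|clipart)\b", re.I)),
    ("presentation", re.compile(r"\b(wallpaper|background|thumbnail)\b", re.I)),
]

def classify(query: str) -> str:
    """Assign a web image query to the first matching attribute, else 'other'."""
    for label, pattern in RULES:
        if pattern.search(query):
            return label
    return "other"

labels = [classify(q) for q in
          ["free dog pictures", "www.nasa.gov images", "sunset wallpaper"]]
```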
Research limitations/implications
Patterns in web searching for image content do emerge that inform the design of web‐based multimedia systems, namely, that there is a high interest in locating image collections by web searchers. Objects and people images are the predominant interest for web searchers. Cost is a factor for web searching. This knowledge of the structure of web image queries has implications for the design of image information retrieval systems and repositories, especially in the area of automatic tagging of images with metadata.
Originality/value
This is the first research to examine whether image query classification schemes can be applied to web image queries.
Daifeng Li, Andrew Madden, Chaochun Liu, Ying Ding, Liwei Qian and Enguo Zhou
Abstract
Purpose
Internet technology allows millions of people to find high quality medical resources online, with the result that personal healthcare and medical services have become one of the fastest growing markets in China. Data relating to healthcare search behavior may provide insights that could lead to better provision of healthcare services. However, discrepancies often arise between terminologies derived from professional medical domain knowledge and the more colloquial terms that users adopt when searching for information about ailments. This can make it difficult to match healthcare queries with doctors’ keywords in online medical searches. The paper aims to discuss these issues.
Design/methodology/approach
To help address this problem, the authors propose transfer learning using a latent factor graph (TLLFG), which can learn the descriptions of ailments used in internet searches and match them to the most appropriate formal medical keywords.
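The bridging idea can be illustrated with a much simpler stand-in than TLLFG: place lay queries and formal medical keywords in one shared latent space and match by cosine similarity. The vectors, query phrases and keywords below are all invented; TLLFG itself learns such a unified latent layer from domain knowledge and patient-doctor Q&A data rather than using hand-picked vectors.

```python
import math

# Hand-picked toy latent vectors (illustration only; TLLFG learns these).
lay_queries = {
    "my head is pounding": [0.9, 0.1, 0.0],
    "can't stop coughing": [0.0, 0.2, 0.9],
}
medical_keywords = {
    "migraine":   [0.8, 0.2, 0.1],
    "bronchitis": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def best_keyword(query_vec):
    """Map a lay query vector onto the closest formal medical keyword."""
    return max(medical_keywords, key=lambda k: cosine(query_vec, medical_keywords[k]))

match = best_keyword(lay_queries["my head is pounding"])
```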
Findings
Experiments show that the TLLFG outperforms competing algorithms in incorporating both medical domain knowledge and patient-doctor Q&A data from online services into a unified latent layer capable of bridging the gap between lay enquiries and professionally expressed information sources, and in making more accurate analyses of online users' symptom descriptions. The authors conclude with a brief discussion of some of the ways in which the model may support online applications and connect with offline medical services.
Practical implications
The authors used an online medical search application to verify the proposed model. The model can bridge users' long-tailed descriptions to doctors' formal medical keywords. Online experiments show that TLLFG can significantly improve the search experience of both users and medical service providers compared with traditional machine learning methods. The research provides a helpful example of the use of domain knowledge to optimize search and recommendation experiences.
Originality/value
The authors use transfer learning to map online users’ long-tail queries onto medical domain knowledge, significantly improving the relevance of queries and keywords in a search system reliant on sponsored links.
Michelle Dalmau, Randall Floyd, Dazhi Jiao and Jenn Riley
Abstract
Purpose
Seeks to share with digital library practitioners the development process of an online image collection that integrates the syndetic structure of a controlled vocabulary to improve end‐user search and browse functionality.
Design/methodology/approach
Surveys controlled vocabulary structures and their utility for catalogers and end‐users. Reviews research literature and usability findings that informed the specifications for integration of the controlled vocabulary structure into search and browse functionality. Discusses database functions facilitating query expansion using a controlled vocabulary structure, and web application handling of user queries and results display. Concludes with a discussion of open‐source alternatives and reuse of database and application components in other environments.
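The query-expansion step this paragraph describes can be sketched with the standard syndetic relations of a controlled vocabulary: broader (BT), narrower (NT) and related (RT) terms. The thesaurus entries and terms below are invented examples, not the collection's actual vocabulary.

```python
# Miniature thesaurus with BT/NT/RT structure (invented example terms).
THESAURUS = {
    "portraits": {"BT": ["images"], "NT": ["self-portraits"], "RT": ["figures"]},
    "images":    {"BT": [], "NT": ["portraits", "landscapes"], "RT": []},
}

def expand_query(term: str, relations=("NT", "RT")) -> set:
    """Expand a user query with narrower and related terms, so a search for
    a broad term also retrieves records indexed under its specializations."""
    expanded = {term}
    entry = THESAURUS.get(term)
    if entry:
        for rel in relations:
            expanded.update(entry.get(rel, []))
    return expanded

terms = expand_query("portraits")
```

In a database this expansion typically becomes an OR over the expanded term set, which is how the hierarchy surfaces in both search and browse views.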
Findings
Affirms that structured forms of browse and search can be successfully integrated into digital collections to significantly improve the user's discovery experience. Establishes ways in which the technologies used in implementing enhanced search and browse functionality can be abstracted to work in other digital collection environments.
Originality/value
Significant amounts of research on integrating thesauri structures into search and browse functionalities exist, but examples of online resources that have implemented this approach are few in comparison. The online image collection surveyed in this paper can serve as a model to other designers of digital library resources for integrating controlled vocabularies and metadata structures into more dynamic search and browse functionality for end‐users.
Abstract
Many professional searchers and end-users do not use controlled vocabulary terms when formulating their queries, despite software solutions in various online information retrieval services that guide users to the most appropriate terms. Two search software packages (KnowledgeFinder and PubMed) have been developed to allow users to express their queries in natural language. These are discussed and evaluated.
Abstract
Purpose
Search engines and web applications have evolved to be more tailored toward individual users' needs, including their personal preferences and geographic location. By integrating the free Google Maps Application Program Interface with locally stored metadata, the author created an interactive map search with which users can locate, and navigate to, destinations on the University of New Mexico (UNM) campus. The purpose of this paper is to identify the characteristics of UNM map search queries, the options for and prioritization of the metadata augmentation, and the usefulness and possible improvement of the interface.
Design/methodology/approach
Queries, search date/time, and the number of results found were logged and examined. Queries’ search frequency and characteristics were analyzed and categorized.
Findings
From November 1, 2012 to September 15, 2013, there were a total of 14,097 visits to the SearchUNM Maps page (http://search.unm.edu/maps/) and a total of 5,868 searches (41 percent of all page visits). Of all the search instances, 2,297 (39 percent) did not retrieve any results. By analyzing the failed queries, the author was able to develop a strategy to increase successful searches.
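The failed-query analysis behind these figures amounts to filtering the log for zero-result searches and ranking them by frequency. The log records and field names below are invented stand-ins for the logged query, date/time and result count described above.

```python
from collections import Counter

# Toy search-log records (field names and queries are illustrative).
log = [
    {"query": "zimmerman library", "results": 3},
    {"query": "zimermann library", "results": 0},
    {"query": "parking",           "results": 12},
    {"query": "zimermann library", "results": 0},
]

failed = [r["query"] for r in log if r["results"] == 0]
failure_rate = len(failed) / len(log)

# Ranking failed queries by frequency shows which metadata additions
# (e.g. alternate spellings of building names) would help most.
top_failures = Counter(failed).most_common()
```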
Originality/value
Many academic institutions have implemented interactive map searches for users to find locations and navigate on campus. However, to date there is no related research on how users conduct their searches in this context. Based on the query analysis, this paper identifies users' search behavior and discusses strategies for improving the search results of campus interactive maps.
Mark Stewart and Peter Willett
Abstract
This paper describes the simulation of a nearest neighbour searching algorithm for document retrieval using a pool of microprocessors. The documents in a database are organised in a multi‐dimensional binary search tree, and the algorithm identifies the nearest neighbour for a query by a backtracking search of this tree. Three techniques are described which allow parallel searching of the tree. A PASCAL‐based, general purpose simulation system is used to simulate these techniques, using a pool of Transputer‐like microprocessors with three standard document test collections. The degree of speed‐up and processor utilisation obtained is shown to be strongly dependent upon the characteristics of the documents and queries used. The results support the use of pooled microprocessor systems for searching applications in information retrieval.
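The sequential algorithm that the paper parallelises can be sketched directly: build a multi-dimensional binary search tree (k-d tree) over document vectors, then find a query's nearest neighbour by descending towards the query and backtracking into the far subtree only when the splitting plane lies closer than the best match so far. The two-dimensional document vectors below are invented for illustration.

```python
import math

def build(points, depth=0):
    """Build a k-d tree over vectors; the split dimension cycles with depth."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left":  build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Backtracking nearest-neighbour search: descend towards the query, then
    unwind, exploring the far subtree only when the splitting plane is closer
    than the best distance found so far."""
    if node is None:
        return best
    d = math.dist(node["point"], query)
    if best is None or d < best[1]:
        best = (node["point"], d)
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:          # backtrack across the splitting plane
        best = nearest(far, query, best)
    return best

docs = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build(docs)
hit, dist = nearest(tree, (9.0, 2.0))
```

The parallel techniques in the paper amount to distributing these subtree descents and backtracking steps across a pool of processors.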
Jyri Saarikoski, Jorma Laurikkala, Kalervo Järvelin and Martti Juhola
Abstract
Purpose
The aim of this paper is to explore the possibility of retrieving information with Kohonen self-organising maps, which are known to be effective at grouping objects according to their similarity or dissimilarity.
Design/methodology/approach
After conventional preprocessing, such as transformation into vector space, documents from a German document collection were used to train a neural network of the Kohonen self-organising map type. Such an unsupervised network forms a document map from which objects relevant to queries can be found.
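A minimal version of this train-then-retrieve loop can be sketched with a 1-D self-organising map: each map node holds a weight vector, the best-matching unit (BMU) and its grid neighbours are pulled towards each training document, and a query retrieves the documents that share its BMU. The document vectors below are invented toy data, not the German collection used in the study.

```python
import random

random.seed(0)  # reproducible toy run

def dist2(u, v):
    """Squared Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def train_som(docs, grid=4, iters=400):
    """Train a small 1-D Kohonen self-organising map over document vectors."""
    dim = len(docs[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        doc = random.choice(docs)
        lr = 0.5 * (1 - t / iters)                        # decaying learning rate
        radius = max(1, int(grid / 2 * (1 - t / iters)))  # shrinking neighbourhood
        bmu = min(range(grid), key=lambda i: dist2(nodes[i], doc))
        for i in range(grid):
            if abs(i - bmu) <= radius:    # pull BMU and grid neighbours closer
                nodes[i] = [w + lr * (x - w) for w, x in zip(nodes[i], doc)]
    return nodes

def retrieve(nodes, docs, query):
    """Map the query to its BMU, then return the documents sharing that node."""
    bmu = min(range(len(nodes)), key=lambda i: dist2(nodes[i], query))
    return [d for d in docs
            if min(range(len(nodes)), key=lambda i: dist2(nodes[i], d)) == bmu]

# Two crude "topics" in a three-term vector space (invented data).
docs = [(1.0, 0.0, 0.1), (0.9, 0.1, 0.0),   # topic A
        (0.0, 1.0, 0.9), (0.1, 0.9, 1.0)]   # topic B
som = train_som(docs)
hits = retrieve(som, docs, (1.0, 0.0, 0.1))  # query equal to the first document
```

The long training times noted under the research limitations come from exactly this inner loop, which scales with the number of documents, map nodes and iterations.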
Findings
Self‐organising maps ordered documents into groups from which it was possible to find relevant targets.
Research limitations/implications
The number of documents used was moderate, owing to the limited number of documents associated with the test topics. The training of self‐organising maps entails rather long running times, which is their practical limitation. In future, the aim will be to build larger networks by compressing document matrices, and to develop document searching within them.
Practical implications
With self‐organising maps the distribution of documents can be visualised and relevant documents found in document collections of limited size.
Originality/value
The paper reports on an approach that can be used especially to group documents, and also for information searching. So far, self‐organising maps have rarely been studied for information retrieval; instead, they have been applied to document grouping tasks.