Search results

1 – 10 of over 3000
Article
Publication date: 1 March 1999

Norman Paskin


Abstract

The management of intellectual content in a digital environment (the Internet) requires persistent, reliable, unique identifiers for each distinguishable piece of content, together with associated services activated by those identifiers to manage access and other rights. The digital object identifier (DOI) is a major initiative from the content industries which is now being implemented widely. The DOI is a unique identifier of any piece of intellectual content (in any form), together with a system for using that identifier to locate digital services (on the Internet) associated with that content. This paper describes, as separate strands, the approaches of the technology and content communities, and how these have been brought together in the initial DOI implementation (as a reliable location tool) and in future implementations of other services. The DOI has strong support from many quarters and is funded by a not‐for‐profit independent foundation.
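In practice, a DOI is resolved through the public doi.org proxy, which answers with a redirect to the current location of the content or service. The following is a minimal sketch of such a lookup, assuming the requests library; the DOI shown is a placeholder, not a real identifier.

```python
# Minimal sketch: resolving a DOI via the public doi.org proxy.
# The DOI used below is a placeholder and will not actually resolve.
import requests

def resolve_doi(doi: str) -> str:
    """Return the URL that the DOI proxy currently redirects to."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False)
    resp.raise_for_status()
    # The proxy answers with an HTTP redirect whose Location header
    # points at the current landing page registered for the content.
    return resp.headers["Location"]

if __name__ == "__main__":
    print(resolve_doi("10.1234/example"))  # hypothetical DOI
```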

Details

Interlending & Document Supply, vol. 27 no. 1
Type: Research Article
ISSN: 0264-1615


Book part
Publication date: 17 December 2004

John A. Pandiani, Steven M. Banks and Monica M. Simon


Abstract

The relationship between employment services and employment outcomes has been the subject of research for a number of years (Bond et al., 2001; Drake et al., 1996). More recently, the competitive employment of service recipients has become an important indicator of community mental health program and service system performance. The National Association of State Mental Health Program Directors’ President’s Task Force on Performance Measures, for instance, recognized the importance of monitoring employment rates for adults with serious mental illness: “For payers, this is the payoff…Monitoring this outcome for populations with mental illness…is critical. This was considered a critical outcome to track.” For similar reasons, the new federal Performance Partnership (Block) Grant program (Federal Register, 2002) requires annual reporting by all states of employment rates for recipients of publicly funded mental health services.

Details

Research on Employment for Persons with Severe Mental Illness
Type: Book
ISBN: 978-1-84950-286-3

Article
Publication date: 1 March 1999

Michael Day, Rachel Heery and Andy Powell


Abstract

This paper reviews BIBLINK, an EC funded project that is attempting to create links between national bibliographic agencies and the publishers of electronic resources. The project focuses on the flow of information, primarily in the form of metadata, between publishers and national libraries. The paper argues that in the digital information environment, the role of national bibliographic agencies will become increasingly dependent upon the generation of electronic links between publishers and other agents in the bibliographic chain. Related work carried out by the Library of Congress with regard to its Electronic CIP Program is described. The core of the paper outlines studies produced by the BIBLINK project as background to the production of a demonstrator that will attempt to establish some of these links. This research includes studies of metadata formats in use and an investigation of the potential for format conversion, including an outline of the BIBLINK Core metadata elements and comments on their potential conversion into UNIMARC. BIBLINK studies on digital identifiers and authentication are also outlined.
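The BIBLINK Core elements are a compact descriptive set broadly comparable to Dublin Core, so conversion into UNIMARC amounts to mapping each element onto a tag and subfield. The toy sketch below illustrates only that kind of mapping; the element names and the tag/subfield choices are assumptions for illustration, not the project's actual conversion tables.

```python
# Illustrative sketch only: a toy mapping from Dublin-Core-style metadata
# elements to UNIMARC tags. The choices below are assumptions, not the
# BIBLINK project's conversion tables.
FIELD_MAP = {
    "title":      ("200", "a"),  # Title and statement of responsibility
    "creator":    ("700", "a"),  # Personal name, primary responsibility
    "identifier": ("010", "a"),  # ISBN
    "language":   ("101", "a"),  # Language of the item
    "publisher":  ("210", "c"),  # Publication, distribution, etc.
}

def to_unimarc(record: dict) -> list[str]:
    """Render a flat metadata dict as simple 'tag $x value' field strings."""
    fields = []
    for element, value in record.items():
        if element in FIELD_MAP:
            tag, subfield = FIELD_MAP[element]
            fields.append(f"{tag} ${subfield} {value}")
    return fields

print(to_unimarc({"title": "An Electronic Resource", "language": "eng"}))
```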

Details

Journal of Documentation, vol. 55 no. 1
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 30 March 2012

José L. Navarro‐Galindo and José Samos


Abstract

Purpose

Nowadays, the use of WCMS (web content management systems) is widespread. The conversion of this infrastructure into its semantic equivalent (a semantic WCMS) is a critical issue, as this enables the benefits of the semantic web to be extended. The purpose of this paper is to present FLERSA (Flexible Range Semantic Annotation), a tool for flexible range semantic annotation of web content.

Design/methodology/approach

FLERSA is presented as a user‐centred annotation tool for Web content expressed in natural language. The tool has been built in order to illustrate how a WCMS called Joomla! can be converted into its semantic equivalent.

Findings

The development of the tool shows that it is possible to build a semantic WCMS through a combination of semantic components and other resources such as ontologies and related technologies, including XML, RDF, RDFa and OWL.

Practical implications

The paper provides a starting‐point for further research in which the principles and techniques of the FLERSA tool can be applied to any WCMS.

Originality/value

The tool allows both manual and automatic semantic annotations, as well as providing enhanced search capabilities. For manual annotation, a new flexible range markup technique is used, based on the RDFa standard, to support the evolution of annotated Web documents more effectively than XPointer. For automatic annotation, a hybrid approach based on machine learning techniques (Vector‐Space Model + n‐grams) is used to determine the concepts that the content of a Web document deals with (from an ontology which provides a taxonomy), based on previous annotations that are used as a training corpus.
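As a rough illustration of the automatic-annotation idea described above (previously annotated documents as a training corpus, word n-gram features in a vector space, and the concept of the most similar document suggested for new content), a minimal scikit-learn sketch follows. It is not the authors' implementation, and the concept IRIs are hypothetical.

```python
# Rough sketch of a vector-space + n-gram concept suggester: previously
# annotated documents form the training corpus; new content is mapped to
# the concept of its nearest neighbour. Illustration only, not FLERSA code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_texts = ["wine regions of southern Spain", "olive oil production methods"]
train_concepts = ["ontology:Wine", "ontology:OliveOil"]  # hypothetical concept IRIs

vectorizer = TfidfVectorizer(ngram_range=(1, 2))         # unigrams + bigrams
train_vectors = vectorizer.fit_transform(train_texts)

def suggest_concept(new_text: str) -> str:
    """Return the concept of the most similar previously annotated document."""
    new_vector = vectorizer.transform([new_text])
    similarities = cosine_similarity(new_vector, train_vectors)[0]
    return train_concepts[similarities.argmax()]

print(suggest_concept("harvesting olives for oil"))
```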

Article
Publication date: 11 November 2014

Joachim Schopfel, Stéphane Chaudiron, Bernard Jacquemin, Hélène Prost, Marta Severo and Florence Thiault


Abstract

Purpose

Print theses and dissertations have regularly been submitted together with complementary material, such as maps, tables, speech samples, photos or videos, in various formats and on different media. In the digital environment of open repositories and open data, this complementary material could become a rich source of research results and data sets, available for reuse and other forms of exploitation. The paper aims to discuss these issues.

Design/methodology/approach

After introducing electronic theses and dissertations (ETD) into the context of eScience, the paper investigates some aspects that impact the availability and openness of data sets and other supplemental files related to ETD (system architecture, metadata and data retrieval, legal aspects).

Findings

These items are part of the so-called “small data” of eScience, with a wide range of contents and formats. Their heterogeneity and their link to ETD call for specific approaches to data curation and management, with specific metadata and identifiers and with specific services, workflows and systems. One size may not fit all, but it seems appropriate to separate text and data files. Regarding copyright and licensing, data sets must be evaluated carefully but should not be processed and disseminated under the same conditions as the related PhD theses. Some examples are presented.

Research limitations/implications

The paper concludes with recommendations for further investigation and development to foster open access to research results produced along with PhD theses.

Originality/value

ETDs are an important part of the content of open repositories. Yet, their potential as a gateway to underlying research results has not really been explored so far.

Article
Publication date: 1 February 1975

D. Diane Beale and Michael F. Lynch


Abstract

Ayers’ recent suggestions for a Universal Standard Book Number, logically generated from a catalogue entry and therefore applicable retrospectively to bibliographic files, have been implemented and tested on two one‐year cumulations of BNB MARC files. The proportion of unique entries provided by the USBN was found to be about 91%. Revisions to the coding tables were made on the basis of a detailed analysis of the results and of determinations of the frequencies of characters in the data elements used. These revisions improved the method, increasing the proportion of unique entries to approximately 96%.
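The essence of the proposal is that a fixed-length number can be generated deterministically from the data elements of a catalogue entry, so the same entry always yields the same number and the scheme can be applied retrospectively to an existing bibliographic file. The toy sketch below illustrates only that principle; Ayers' coding tables and the revisions described above are not reproduced.

```python
# Toy illustration only: derive a fixed-length key deterministically from
# catalogue data elements (title, author, date). This is not Ayers' method;
# it simply shows that identical entries always produce identical "numbers".
def usbn_like_key(title: str, author: str, year: str, length: int = 10) -> str:
    # Keep only letters and digits, upper-cased, from the chosen data elements.
    source = "".join(ch for ch in (title + author + year).upper() if ch.isalnum())
    # Sample characters at evenly spaced positions to spread the selection
    # across the whole entry (a crude stand-in for weighted coding tables).
    step = max(1, len(source) // length)
    return source[::step][:length].ljust(length, "0")

print(usbn_like_key("A History of Printing", "Smith, J.", "1974"))
```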

Details

Program, vol. 9 no. 2
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 April 2006


Details

Assembly Automation, vol. 26 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 4 February 2014

Carly C. Dearborn, Amy J. Barton and Neal A. Harmeyer


Abstract

Purpose

The purpose of this case study is to discuss the creation of robust preservation functionality within the Purdue University Research Repository (PURR). The study seeks to discuss the customization of the HUBzero platform, the composition of digital preservation policies, and the creation of a novel, machine-actionable metadata model for PURR's unique digital content. Additionally, the study will trace the implementation of the Open Archival Information System (OAIS) model and track PURR's progress towards Trustworthy Digital Repository certification.

Design/methodology/approach

This case study discusses the use of the Center for Research Libraries Trusted Repository Audit Checklist (TRAC) certification process and ISO 16363 as a rubric to build an OAIS institutional repository for the publication, preservation, and description of unique datasets.

Findings

ISO 16363 continues to serve as a rubric, barometer and set of goals for PURR as development continues. To become a trustworthy repository, the PURR project team has consistently worked to build a robust, secure, and long-term home for collaborative research. In order to fulfill its mandate, the project team constructed policies, strategies, and activities designed to guide a systematic digital preservation environment. PURR expects to undertake the full ISO 16363 audit process at a future date, with the expectation of being certified as a Trustworthy Digital Repository. Through its efforts in digital preservation, the Purdue University Research Repository expects to better serve Purdue researchers and their collaborators, and to move scholarly research efforts forward worldwide.

Originality/value

PURR is a customized instance of HUBzero®, an open source software platform that supports scientific discovery, learning, and collaboration. HUBzero was a research project funded by the United States National Science Foundation (NSF) and is a product of the Network for Computational Nanotechnology (NCN), a multi-university initiative of eight member institutions. PURR is only one instance of HUBzero customization; versions have been implemented in many disciplines nationwide. PURR maintains the core functionality of HUBzero, but has been modified to publish datasets and to support their preservation. Long-term access to published data is an essential component of PURR services and of Purdue University Libraries' mission. Preservation in PURR is vital not only to the Purdue University research community, but also to the larger digital preservation issues surrounding dynamic datasets and their long-term usability.
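As a purely hypothetical illustration of what a machine-actionable metadata record for a published dataset might contain (persistent identifier, file description, fixity information), a small sketch follows. The field names are assumptions loosely inspired by OAIS/PREMIS practice, not PURR's actual metadata model.

```python
# Hypothetical sketch of a machine-actionable preservation record for one
# file in a published dataset. Field names are assumptions for illustration,
# loosely inspired by OAIS/PREMIS concepts; this is not PURR's model.
import hashlib
import json

def describe_file(file_name: str, data: bytes, dataset_doi: str) -> dict:
    return {
        "identifier": dataset_doi,              # persistent identifier of the dataset
        "file_name": file_name,
        "size_bytes": len(data),
        "fixity": {                             # checksum supports integrity audits
            "algorithm": "sha256",
            "value": hashlib.sha256(data).hexdigest(),
        },
        "preservation_level": "bit-level",      # assumed policy label
    }

record = describe_file("results.csv", b"col_a,col_b\n1,2\n", "10.1234/example")  # hypothetical inputs
print(json.dumps(record, indent=2))
```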

Details

OCLC Systems & Services, vol. 30 no. 1
Type: Research Article
ISSN: 1065-075X


Article
Publication date: 7 March 2008

Sooyong Lee


Abstract

Purpose

The purpose of this paper is to present a novel localization scheme using infrared identification (IRID) fused with encoder information.

Design/methodology/approach

IRID emitters are mounted on the ceiling in order to divide the floor workspace into sectors. Encoding the IRID signal then allows the mobile robot to identify which sector it is in. The sector information is fused with the dead‐reckoning results to estimate the robot's configuration, exploiting the fact that the IRID measurements are highly deterministic.

Findings

Fusing the dead‐reckoning result with the IRID information bounds, and in some cases reduces, the size of the uncertainty, enabling a more accurate estimate of the robot's configuration.
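A hedged sketch of the general idea, not the paper's estimator: dead reckoning integrates encoder increments while its uncertainty grows, and an IRID reading identifies the ceiling sector the robot is under, so the estimate and its uncertainty can be clipped to that sector's deterministic boundaries.

```python
# Sketch of sector-bounded dead reckoning (illustration only, not the
# paper's estimator): odometry updates inflate the uncertainty, and an
# IRID sector reading clamps the estimate and bounds the uncertainty.
def dead_reckon(x, y, dx, dy, sigma, step_noise):
    """One odometry update: move the estimate, inflate the uncertainty."""
    return x + dx, y + dy, sigma + step_noise

def fuse_with_sector(x, y, sigma, sector):
    """Clamp the estimate into the sector and bound sigma by the sector size."""
    x_min, x_max, y_min, y_max = sector
    x = min(max(x, x_min), x_max)
    y = min(max(y, y_min), y_max)
    half_span = max(x_max - x_min, y_max - y_min) / 2.0
    return x, y, min(sigma, half_span)

x, y, sigma = 0.0, 0.0, 0.05
for _ in range(20):                                  # drive forward with growing drift
    x, y, sigma = dead_reckon(x, y, 0.1, 0.0, sigma, 0.02)
x, y, sigma = fuse_with_sector(x, y, sigma, (1.5, 2.5, -0.5, 0.5))  # IRID says: this sector
print(round(x, 2), round(y, 2), round(sigma, 2))
```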

Originality/value

A new artificial landmark, IRID, is developed for mobile robot localization. The paper also demonstrates a framework that fuses the deterministic IRID information with the stochastic dead‐reckoning result.

Details

Industrial Robot: An International Journal, vol. 35 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 7 April 2015

Ebrahim Karan, Javier Irizarry and John Haymaker


Abstract

Purpose

This paper aims to develop a framework to represent semantic web query results as Industry Foundation Class (IFC) building models. The subject of interoperability has received considerable attention in the construction literature in recent years. Given the distributed, semantically heterogeneous data sources, the problem is to retrieve information accurately and with minimal human intervention by considering their semantic descriptions.

Design/methodology/approach

This paper provides a framework to translate semantic web query results into the XML representations of IFC schema and data. Using the concepts and relationships in an IFC schema, the authors first develop an ontology to specify an equivalent IFC entity in the query results. Then, a mapping structure is defined and used to translate and fill all query results into an ifcXML document. For query processing, the proposed framework implements a set of predefined query mappings between the source schema and a corresponding IFC output schema. The resulting ifcXML document is validated with an XML schema validating parser and then loaded into a building information modeling (BIM) authoring tool.
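The sketch below illustrates the general pattern of such a translation, assuming rdflib for SPARQL querying and the standard library for XML output. The namespace, property names and element names are invented for illustration, and the result is not schema-valid ifcXML.

```python
# Illustrative sketch only: run a SPARQL query over a tiny in-memory RDF
# graph and wrap each result row in IFC-flavoured XML elements. Names are
# assumptions; the output is not schema-valid ifcXML.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/building#")       # hypothetical namespace
g = Graph()
g.add((EX["wall_1"], EX["hasName"], Literal("Exterior Wall")))

rows = g.query("""
    PREFIX ex: <http://example.org/building#>
    SELECT ?wall ?name WHERE { ?wall ex:hasName ?name . }
""")

root = ET.Element("ifcXML")                          # placeholder root element
for wall, name in rows:
    elem = ET.SubElement(root, "IfcWall")            # IFC entity chosen for illustration
    elem.set("id", str(wall))
    ET.SubElement(elem, "Name").text = str(name)

print(ET.tostring(root, encoding="unicode"))
```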

Findings

The research findings indicate that semantic web technology can be used, accurately and with minimal human intervention, to maintain semantic-level information when transforming information between web-based and BIM formats. The developed framework for representing IFC-compatible outputs allows BIM users to query and access building data at any time over the web from data providers.

Originality/value

Currently, the results of semantic web queries are not supported by BIM authoring tools. Thus, the proposed framework uses semantic web and query technologies to transform the query results into an XML representation of IFC data.

Details

Construction Innovation, vol. 15 no. 2
Type: Research Article
ISSN: 1471-4175

