Search results
Abstract
Purpose
The International Conference on Dublin Core and Metadata Applications (DC‐2008) is being held this year in Berlin. The purpose of this paper is to describe the evolution of the Dublin Core effort from an initial focus on “core” elements for resource description towards a more comprehensive framework for developing application profiles that use multiple vocabularies on the basis of the W3C Resource Description Framework (RDF) model.
Design/methodology/approach
A Dublin Core application profile describes a metadata application, from functional requirements, via a domain model of entities to be described, to the formal specification of constraints on the basis of the DCMI Abstract Model.
Findings
Dublin Core application profiles are designed to be interoperable on the basis of W3C's RDF model and principles of Web architecture, such as consistent use of URIs, in order to facilitate the integration of metadata from multiple sources – a common requirement in today's Web.
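The statement model described above can be sketched in a few lines of plain Python: RDF metadata is a set of (subject, predicate, object) triples whose predicates are full URIs, which is what makes merging records from independent sources trivial. The resource URI and values below are invented for illustration.

```python
# Minimal sketch (invented URIs): RDF-style metadata as triples whose
# predicates are globally unique URIs from the Dublin Core terms namespace.
DCT = "http://purl.org/dc/terms/"          # Dublin Core terms namespace
doc = "http://example.org/docs/1"          # hypothetical resource URI

source_a = {
    (doc, DCT + "title", "Dublin Core survey"),
    (doc, DCT + "creator", "Example Author"),
}
source_b = {
    (doc, DCT + "language", "en"),
}

# Consistent use of URIs makes integrating metadata from multiple
# sources a simple set union of statements about the same resource.
merged = source_a | source_b
print(len(merged))  # 3 statements about one resource
```

Because both sources identify the document with the same URI, no record-level reconciliation is needed; the statements simply accumulate.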
Originality/value
The paper offers insights into the evolution of the Dublin Core.
Abstract
An e‐mail survey was conducted by the Dublin Core Libraries Working Group to collect examples of Dublin Core use in libraries, and to provide input for the development of a Dublin Core application profile for libraries. A total of 29 responses were received from nine countries, describing 33 separate implementations of Dublin Core. The most commonly cited reasons for selecting Dublin Core were its international acceptance, flexibility and likelihood of future interoperability. Each of the 15 core elements was in use by between 59 percent and 97 percent of the projects in the survey. There was a high incidence (73 percent) of projects that use metadata elements in addition to the DC elements and approved qualifiers. The two most widely reported challenges involved in implementing Dublin Core were that there are too few elements and qualifiers, and the lack of usage guidelines.
Abstract
Purpose
Library‐world “languages of description” are increasingly being expressed using the resource description framework (RDF) for compatibility with linked data approaches. This article aims to look at how issues around the Dublin Core, a small “metadata element set,” exemplify issues that must be resolved in order to ensure that library data meet traditional standards for quality and consistency while remaining broadly interoperable with other data sources in the linked data environment.
Design/methodology/approach
The article focuses on how the Dublin Core – originally seen, in traditional terms, as a simple record format – came increasingly to be seen as an RDF vocabulary for use in metadata based on a “statement” model, and how new approaches to metadata evolved to bridge the gap between these models.
Findings
The translation of library standards into RDF involves the separation of languages of description, per se, from the specific data formats into which they have for so long been embedded. When defined with “minimal ontological commitment,” languages of description lend themselves to the sort of adaptation that is inevitably a part of any human linguistic activity. With description set profiles, the quality and consistency of data traditionally required for sharing records among libraries can be ensured by placing precise constraints on the content of data records – without compromising the interoperability of the underlying vocabularies in the wider linked data context.
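The idea of constraining data records without constraining the underlying vocabularies can be caricatured in a short sketch. The profile structure below is a hypothetical simplification, not DCMI's actual description set profile syntax: each property gets occurrence bounds, while the properties themselves remain ordinary Dublin Core terms usable anywhere.

```python
# Hypothetical constraint check: (min, max) occurrence bounds per property.
# None as a maximum means "repeatable without limit".
profile = {
    "http://purl.org/dc/terms/title":   (1, 1),     # exactly one title
    "http://purl.org/dc/terms/creator": (1, None),  # at least one creator
    "http://purl.org/dc/terms/subject": (0, None),  # optional, repeatable
}

def conforms(record: dict, profile: dict) -> bool:
    """Check a record (property URI -> list of values) against the profile."""
    for prop, (lo, hi) in profile.items():
        n = len(record.get(prop, []))
        if n < lo or (hi is not None and n > hi):
            return False
    return True

record = {
    "http://purl.org/dc/terms/title": ["A title"],
    "http://purl.org/dc/terms/creator": ["Author One", "Author Two"],
}
print(conforms(record, profile))  # True
```

The constraint lives entirely in the profile; a consumer who ignores the profile can still interpret every statement in the record using the shared vocabulary.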
Practical implications
In today's environment, library data must continue to meet high standards of consistency and quality, yet it must be possible to link or merge the data with sources that follow other standards. Placing constraints on the data created, more than on the underlying vocabularies, allows both requirements to be met.
Originality/value
This paper examines how issues around the Dublin Core exemplify issues that must be resolved to ensure library data meet quality and consistency standards while remaining interoperable with other data sources.
Abstract
Access to educational material has become an important issue for many stakeholders and the focus of much research worldwide. Resource discovery in educational gateways is usually based on metadata and this is an area of important developments. Resource metadata has a central role in the management of educational material and as a result there are several important metadata standards in use in the educational domain. One of the most widely used general metadata standards for learning material is the Dublin Core Metadata Element Set. The application of this general-purpose metadata standard for complex and heterogeneous educational material is not straightforward. This paper will give an overview of some practical issues and necessary steps in deploying Dublin Core based on the LITC experience in the EASEL (Educators Access to Services in the Electronic Landscape) project.
Chariya Nonthakarn and Vilas Wuwongse
Abstract
Purpose
The purpose of this paper is to design an application profile that will enable interoperability among research management systems, support research collaboration, and facilitate the management of research information.
Design/methodology/approach
The approach is based on the Singapore Framework for Dublin Core Application Profiles, a framework for designing metadata schemas for maximum interoperability. The application profile is built by gathering stakeholders’ requirements in the research community and integrates four types of research information, i.e., information on researchers, research projects, research outputs, and research reports, which benefits researchers, research managers, and funding agencies.
Findings
The resultant application profile is evaluated against widely used similar metadata schemas and the collected requirements; it is found to be more comprehensive than the existing schemas and to meet the requirements. Furthermore, the application profile is deployed in a prototype research management system and is found to work appropriately.
Practical implications
The designed application profile has implications for further development of research management systems that would lead to the enhancement of research collaboration and the efficiency of research information management.
Originality/value
The proposed application profile covers information across the entire research development lifecycle. Both the schema and the information can be represented in Resource Description Framework format for reuse and for linking with other information. This enables users to share research information and cooperate with other researchers, funding agencies, and the community at large, thereby allowing a research management system to increase collaboration and the efficiency of research management. Furthermore, researchers and research information can be linked by means of Linked Open Data technology.
Chunqiu Li and Shigeo Sugimoto
Abstract
Purpose
Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
Design/methodology/approach
The DSP-PROV model is developed by applying the World Wide Web Consortium's general provenance description standard, PROV, to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
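The core move of layering PROV on a schema can be illustrated with a couple of triples: schema versions are treated as PROV entities, and a revision links the new version to the old one. The URIs and the revision activity below are invented for illustration; DSP-PROV itself is defined in the paper.

```python
# Hypothetical provenance statements for two versions of a metadata schema,
# using properties from the (real) W3C PROV namespace.
PROV = "http://www.w3.org/ns/prov#"
v1 = "http://example.org/profile/v1"   # hypothetical schema version URIs
v2 = "http://example.org/profile/v2"

provenance = {
    (v2, PROV + "wasDerivedFrom", v1),
    (v2, PROV + "wasGeneratedBy", "http://example.org/activity/revise"),
}

# Unlike an English change log, a formal description can be queried:
# which version was v2 derived from?
derived_from = {o for (s, p, o) in provenance
                if s == v2 and p == PROV + "wasDerivedFrom"}
print(derived_from)  # the v1 URI
```

This is the advantage the Findings section claims for formal over semi-formal description: machine-checkable links between schema versions rather than free text.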
Findings
Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English to keep metadata schemas consistent over time.
Research limitations/implications
The DSP-PROV model is applicable to keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
Originality/value
This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model over conventional semi-formal provenance description.
Abstract
Purpose
The aim of this study is to assess the metadata element sets of electronic theses and dissertations that are currently used at Canadian academic institutional repositories, and to discuss issues related to variations and inconsistencies in Dublin Core data used by participating repositories.
Design/methodology/approach
The formats and usage patterns of metadata elements at ten participating institutional repositories are identified and analyzed. Additionally, metadata element variations are grouped by different types.
Findings
Current metadata elements have a significant level of inconsistency and variation.
Research limitations/implications
The observations drawn from this study are limited to Canadian cases only. However, the results provide insights into developing a metadata framework for institutional repositories in other countries.
Originality/value
This study examines empirical data collected from data providers among Canadian institutional repositories. The result of this study may be beneficial to the achievement of interoperability across institutional repositories and to the development of a standardized application profile for Canadian institutional repositories.
Thomas Baker, Pierre-Yves Vandenbussche and Bernard Vatant
Abstract
Purpose
The paper seeks to analyze the health of the vocabulary ecosystem in terms of requirements, addressing its various stakeholders such as maintainers of linked open vocabularies, linked data providers who use those vocabularies in their data and memory institutions which, it is hoped, will eventually provide for the long-term preservation of vocabularies.
Design/methodology/approach
This paper builds on requirements formulated more tersely in the DCMI generic namespace policy for RDF vocabularies. The examination of requirements for linked open vocabularies focuses primarily on property-and-class vocabularies in RDFS or OWL (sometimes called metadata element sets), with some consideration of SKOS concept schemes and Dublin Core application profiles. It also discusses lessons learned through two years of development of Linked Open Vocabularies (LOV), whose main features and key findings are described.
Findings
Key findings about the current practices of vocabulary managers regarding metadata, policy and versioning are presented, as well as how such practices can be improved to ensure better discoverability and usability of RDF vocabularies. The paper presents new ways to assess the links and dependencies between vocabularies. It also stresses the necessity and importance of a global governance of the ecosystem in which vocabulary managers, standard bodies, and memory institutions should engage.
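One of the link-assessment ideas mentioned above can be sketched simply: a vocabulary's dependencies can be inferred from the namespaces of the external terms it references. The term list below is invented for illustration; a LOV-style analysis would extract it from the vocabulary's RDF.

```python
# Hypothetical sketch: terms referenced by some vocabulary under analysis.
terms_used = [
    "http://purl.org/dc/terms/title",
    "http://www.w3.org/2000/01/rdf-schema#label",
    "http://www.w3.org/2002/07/owl#equivalentProperty",
]

def namespace(uri: str) -> str:
    """Strip the local name after the last '#' or '/' to get the namespace."""
    cut = max(uri.rfind("#"), uri.rfind("/"))
    return uri[:cut + 1]

# The set of namespaces reused is a first approximation of the
# vocabulary's dependency links within the ecosystem.
dependencies = {namespace(t) for t in terms_used}
print(sorted(dependencies))  # dc terms, rdfs, owl namespaces
```

Dependency sets like this are what make versioning and preservation an ecosystem-wide concern: a change in one namespace ripples into every vocabulary that reuses it.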
Research limitations/implications
The current paper is focused on requirements related to a single type of vocabulary but could and should be extended to other types such as thesauri, classifications, and other semantic assets.
Practical implications
Practical technical guidelines and good social practices are proposed for promotion in the vocabulary ecosystem (for example, by vocabulary managers).
Originality/value
This paper brings together the research and action of several important actors in the vocabulary management and governance field, and is intended to be the basis of a roadmap for action presented at the Dublin Core conference of September 2013 in Lisbon (DC 2013).
Emad Khazraee, Saeed Moaddeli, Azadeh Sanjari and Shadi Shakeri
Abstract
Purpose
The purpose of this paper is to provide a clear image of the information architecture used in the Encyclopedia of Iranian Architectural History (EIAH) and to show how it was crafted to meet the need for accessibility, expressiveness and interoperability.
Design/methodology/approach
In order to assess the level of interoperability in the system, two essential concepts of the system are identified and traced at every level of the three‐layer information architecture. Federated repositories are studied for the level of accessibility that they can offer. The knowledge representation level, the mediator level and the semantic portal are studied for expressiveness capabilities.
Findings
The EIAH information architecture is capable of establishing links among resources available in the information pools connected to the system by using the EIAH metadata application profile (EMAP). Different modules in this architecture, which are localized for the Persian language, can work in similar environments for other languages, for example Arabic.
Originality/value
EIAH is the first example of a digital encyclopedia for the history of Iranian architecture, and it differs fundamentally from other digital encyclopedias in the way it offers information to users. EIAH is aimed at domain experts and provides them not with pre‐written, quality articles but with a wide range of resources and documents relevant to what they are seeking.
Abstract
Purpose
To explore the impact of using metadata in finding and ranking web pages through search engines.
Design/methodology/approach
The study has been divided into two phases. In phase one, the use of metadata schemes and the impact of overlapped documents were examined by employing the usability technique. Phase two examined the impact of adding metadata elements to web pages on their original rank order, using the experimental method. This study focuses on indexing web pages using metadata and its impact on search engine rankings.
Findings
Meta tags are more widely used than Dublin Core. The overlapped pages tend to include metadata. The second phase shows that adding metadata elements to web pages raises their rank order; however, this depends on the quality of the description and the metadata schemes used. The study shows no great difference in page ranking between adding generic meta tags and Dublin Core elements.
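The two styles of embedded metadata the study compares look like this in a page's head, and a simple parse shows what an indexer would extract from each. The page content is invented; Dublin Core elements in HTML are conventionally written as meta tags with a "DC." prefix.

```python
from html.parser import HTMLParser

# Invented page fragment using both a generic meta tag and a Dublin Core
# ("DC."-prefixed) meta tag for the same description field.
head = """
<meta name="description" content="A survey of metadata use">
<meta name="keywords" content="metadata, Dublin Core">
<meta name="DC.description" content="A survey of metadata use">
"""

class MetaCollector(HTMLParser):
    """Collect name -> content pairs from <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if "name" in a:
                self.meta[a["name"]] = a.get("content", "")

p = MetaCollector()
p.feed(head)
print(p.meta["description"] == p.meta["DC.description"])  # True
```

Both forms carry the same descriptive content, which is consistent with the finding that the choice of scheme matters less to ranking than the quality of the description itself.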
Practical implications
To maximize the impact of metadata, more attention should be given to keyword and descriptive fields.
Originality/value
The hypothetical relationship between overlapped pages and the inclusion of metadata and indexing by search engines had not been previously examined.