Search results: 1–3 of 3
Information from Current Research Information Systems (CRIS) is stored in different formats, on incompatible platforms, or even in independent networks. A well-defined methodology for processing management data from a single site would make it possible to link dispersed data held in different systems, platforms, sources and/or formats. Drawing on the functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.
After reviewing the state of the art on mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of reuse (copyright and licenses); and the "dimensions" included in the data set, as well as the semantics of these dimensions (the syntax and the reference taxonomies). The semantic interoperability framework presented here applies semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases into a single RDF file.
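The crosswalk-and-fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's actual crosswalk: the ABCD, Moodle and DSpace field names and their Dublin Core alignments are assumptions chosen for the example.

```python
# Minimal metadata-crosswalk sketch: map fields from heterogeneous systems
# onto Dublin Core properties and merge everything into one set of
# RDF-style triples (which could then be serialized to a single RDF file).

DC = "http://purl.org/dc/elements/1.1/"

# Hypothetical crosswalk tables: source field -> Dublin Core property URI.
CROSSWALK = {
    "abcd":   {"titulo": DC + "title", "autor": DC + "creator"},
    "moodle": {"fullname": DC + "title", "teacher": DC + "creator"},
    "dspace": {"dc.title": DC + "title", "dc.contributor.author": DC + "creator"},
}

def fuse(records):
    """Convert records from several systems into one set of (s, p, o) triples."""
    triples = set()
    for system, rec_id, fields in records:
        subject = f"http://example.org/{system}/{rec_id}"
        for field, value in fields.items():
            prop = CROSSWALK[system].get(field)
            if prop is not None:          # ignore fields with no crosswalk entry
                triples.add((subject, prop, value))
    return triples

records = [
    ("abcd", "1", {"titulo": "Semantic CRIS", "autor": "Doe, J."}),
    ("dspace", "42", {"dc.title": "Open Data"}),
]
print(sorted(fuse(records)))
```

Because every system's fields land on the same target vocabulary, the merged triple set can be queried uniformly regardless of which source a record came from.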
The paper also includes an evaluation that compares the proposed model, by means of recall and precision calculations, against identical queries run on Open Archives Initiative and SQL interfaces, in order to estimate its efficiency. The results were satisfactory, since semantic interoperability facilitates the exact retrieval of information.
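The recall and precision measures used in the comparison are standard information-retrieval metrics; a small sketch, with illustrative document counts that are not the paper's data:

```python
# Precision = |retrieved ∩ relevant| / |retrieved|
# Recall    = |retrieved ∩ relevant| / |relevant|

def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query's result set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 8 documents retrieved, 10 relevant in total, 6 of the retrieved
# ones are relevant -> precision 6/8 = 0.75, recall 6/10 = 0.6.
retrieved = [f"doc{i}" for i in range(1, 9)]    # doc1..doc8
relevant  = [f"doc{i}" for i in range(3, 13)]   # doc3..doc12
p, r = precision_recall(retrieved, relevant)
print(p, r)  # 0.75 0.6
```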
The proposed model enhances management of the syntactic and semantic interoperability of the designed CRIS system. In a real usage setting it achieves very positive results.
The purpose of this paper is to examine the latest advances in ontology-based text summarization systems, with emphasis on socio-cognitive methodologies, structural discourse models and ontology-based summarization techniques.
The paper analyzes the main literature in this field and presents the structure and features of Texminer, a software tool that facilitates the summarization of texts on Port and Coastal Engineering. Texminer combines several techniques, including socio-cognitive user models, natural language processing, disambiguation and ontologies. After processing a corpus, the system was evaluated against the clustering evaluation experiments conducted by Arco (2008) and Hennig et al. (2008). The results were checked with a support vector machine, Rouge metrics, the F-measure and calculations of precision and recall.
The experiment demonstrates the superiority of summaries obtained with the assistance of ontology-based techniques.
The authors were able to corroborate that the summaries produced by Texminer are more efficient than those derived from systems whose summarization models do not use ontologies. Thanks to ontologies, the main sentences can be selected within a broad rhetorical structure, especially for a specific knowledge domain.
The purpose of this paper is to propose a tool that generates authority files to be integrated with linked data by means of learning rules. AUTHORIS is a software tool developed to enhance authority control and information exchange among bibliographic and non-bibliographic entities.
The article analyzes different methods previously developed for authority control, as well as IFLA and ALA standards for managing bibliographic records. Semantic Web technologies are also evaluated. AUTHORIS relies on Drupal and incorporates the Dublin Core, SIOC, SKOS and FOAF vocabularies. The tool also takes into account the obsolescence of MARC and its substitution by FRBR and RDA. Its effectiveness was evaluated by applying a learning test proposed by RDA; over 80 percent of the actions were carried out correctly.
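An authority record of the kind such a tool manages can be expressed with the FOAF and SKOS vocabularies named above. The sketch below is illustrative only: the URIs and the exact properties AUTHORIS emits are assumptions.

```python
# Build a person authority record as RDF-style triples: one SKOS preferred
# label (the authorized heading) plus SKOS alternative labels (variant forms).

FOAF = "http://xmlns.com/foaf/0.1/"
SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def authority_record(uri, preferred, variants):
    """Return triples for one person authority entry."""
    triples = [
        (uri, RDF_TYPE, FOAF + "Person"),
        (uri, SKOS + "prefLabel", preferred),
    ]
    triples += [(uri, SKOS + "altLabel", v) for v in variants]
    return triples

rec = authority_record(
    "http://example.org/auth/cervantes",          # hypothetical URI
    "Cervantes Saavedra, Miguel de, 1547-1616",
    ["Cervantes, Miguel de", "Saavedra, Miguel de Cervantes"],
)
for t in rec:
    print(t)
```

Publishing headings this way lets other systems link to the authority URI rather than copying name strings, which is the core idea behind linked-data authority control.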
The use of learning rules and the facilities of linked data make it easier for information organizations to reuse authority-control products and distribute them fairly and efficiently.
The ISAD(G) records presented the most errors, with EAD second in the number of errors produced. The remaining formats – MARC 21, Dublin Core, FRAD, RDF, OWL, XBRL and FOAF – showed fewer than 20 errors in total.
AUTHORIS offers institutions a means of sharing data with a high level of stability, helping to detect duplicate records and contributing to lexical disambiguation and data enrichment.
The software combines the facilities of linked data, the power of algorithms for converting bibliographic data, and the precision of learning rules.
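The duplicate detection mentioned in the findings can be sketched as grouping records by a normalized form of their heading. The normalization rules below are assumptions for illustration, not AUTHORIS's actual algorithm.

```python
import unicodedata
from collections import defaultdict

def normalize(name):
    """Lowercase, strip accents and punctuation, collapse whitespace."""
    nfkd = unicodedata.normalize("NFKD", name)
    ascii_only = nfkd.encode("ascii", "ignore").decode("ascii")
    cleaned = "".join(c if c.isalnum() else " " for c in ascii_only.lower())
    return " ".join(cleaned.split())

def find_duplicates(records):
    """Group record ids whose normalized headings collide."""
    groups = defaultdict(list)
    for rec_id, heading in records:
        groups[normalize(heading)].append(rec_id)
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    ("a1", "Cervantes Saavedra, Miguel de"),
    ("a2", "cervantes saavedra  miguel de"),
    ("a3", "García Márquez, Gabriel"),
]
print(find_duplicates(records))  # [['a1', 'a2']]
```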