Search results
21 – 30 of over 2000
José L. Navarro‐Galindo and José Samos
Abstract
Purpose
Nowadays, the use of WCMS (web content management systems) is widespread. The conversion of this infrastructure into its semantic equivalent (a semantic WCMS) is a critical issue, as this enables the benefits of the semantic web to be extended. The purpose of this paper is to present FLERSA (Flexible Range Semantic Annotation), a tool for flexible range semantic annotation.
Design/methodology/approach
FLERSA is presented as a user‐centred annotation tool for Web content expressed in natural language. The tool has been built to illustrate how a WCMS called Joomla! can be converted into its semantic equivalent.
Findings
The development of the tool shows that it is possible to build a semantic WCMS by combining semantic components with other resources such as ontologies and emerging technologies, including XML, RDF, RDFa and OWL.
Practical implications
The paper provides a starting‐point for further research in which the principles and techniques of the FLERSA tool can be applied to any WCMS.
Originality/value
The tool allows both manual and automatic semantic annotations, as well as providing enhanced search capabilities. For manual annotation, a new flexible range markup technique is used, based on the RDFa standard, to support the evolution of annotated Web documents more effectively than XPointer. For automatic annotation, a hybrid approach based on machine learning techniques (Vector‐Space Model + n‐grams) is used to determine the concepts that the content of a Web document deals with (from an ontology which provides a taxonomy), based on previous annotations that are used as a training corpus.
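As an illustration of the hybrid approach described above, a minimal sketch of concept assignment with a Vector-Space Model over unigrams and word bigrams follows; the training corpus, concept names and tokenization are illustrative assumptions, not FLERSA's actual implementation:

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Word n-grams (bigrams by default) of a lowercased text."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def vectorize(text):
    """Vector-Space Model: a text as a bag of unigrams plus bigrams."""
    return Counter(text.lower().split() + ngrams(text))

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def classify(doc, training):
    """Assign the concept whose previously annotated documents are most similar to doc."""
    dv = vectorize(doc)
    return max(training, key=lambda concept: cosine(dv, vectorize(" ".join(training[concept]))))

# Toy training corpus of previously annotated documents (illustrative only).
training = {
    "Sport": ["the team won the match", "players scored in the final match"],
    "Finance": ["the bank raised interest rates", "markets fell as rates rose"],
}
print(classify("the final match was won by the home team", training))  # Sport
```

The previously annotated documents act as the training corpus, and each new document is assigned the concept whose corpus it most resembles in the shared term space.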
Chao Wang, Jie Lu and Guangquan Zhang
Abstract
Purpose
Matching relevant ontology data for integration is vitally important as the amount of ontology data increases along with the evolving Semantic web, in which data are published by different individuals or organizations in a decentralized environment. For any domain that has developed a suitable ontology, its ontology-annotated data (or simply ontology data) from different sources often overlap and need to be integrated. The purpose of this paper is to develop an intelligent web ontology data matching method and a framework for data integration.
Design/methodology/approach
This paper develops an intelligent matching method to solve the issue of ontology data matching. Based on the matching method, it also proposes a flexible peer‐to‐peer framework to address the issue of ontology data integration in a distributed Semantic web environment.
Findings
The proposed matching method differs from existing data matching or merging methods applied to data warehouses in that it employs a machine learning approach and additional similarity measurements that exploit ontology features.
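A minimal sketch of how several per-property similarity measurements might be combined into a match score follows; the Publication properties, the hand-set weights and the string-similarity measure are illustrative assumptions, and a learned matcher would fit the weights from training data rather than fix them by hand:

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(inst_a, inst_b, properties, weights):
    """Weighted combination of per-property similarity features.

    In a learned matcher the weights would come from training data."""
    feats = [string_sim(str(inst_a.get(p, "")), str(inst_b.get(p, ""))) for p in properties]
    return sum(w * f for w, f in zip(weights, feats))

# Instances of an illustrative Publication class from two different sources.
a = {"title": "Semantic Web Ontology Matching", "year": "2009", "publisher": "Emerald"}
b = {"title": "Semantic web ontology matching", "year": "2009", "publisher": "Emerald Group"}
c = {"title": "Introduction to Databases", "year": "1999", "publisher": "Springer"}

props = ["title", "year", "publisher"]
weights = [0.6, 0.2, 0.2]  # illustrative, hand-set; a real system would learn these
print(match_score(a, b, props, weights))  # high: a and b likely describe the same entity
print(match_score(a, c, props, weights))  # low: different entities
```

Instances whose score exceeds a threshold would be treated as the same real-world entity and merged during integration.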
Research limitations/implications
The proposed method and framework will be further tested for some more complicated real cases in the future.
Originality/value
The experiments show that this proposed intelligent matching method increases ontology data matching accuracy.
Abstract
Purpose
This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal, and multimodality is the use of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.
Design/methodology/approach
This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.
Findings
Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.
Research limitations/implications
A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all the expert and related knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.
Practical implications
A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform specific suites for specially selected operators to integrate with and leverage this emerging technology. The possession of a flexible communications means that readily adapts to virtual training will enhance planning and mission rehearsals tremendously.
Social implications
A multimodal communication system based on interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting through multiple options, suggestions and recommendations will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.
Originality/value
The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness, and virtual environments. At this time, there is no integrated approach to multimodal human robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.
Paul A. Watters and Malti Patel
Abstract
The Internet has the potential to facilitate understanding across cultures and languages by removing the physical barriers to intercultural communication. One possible contributor to this development has been the recent release of freely‐available automated direct machine translation systems, such as AltaVista with SYSTRAN, which translates from English to five other European languages (French, German, Italian, Spanish and Portuguese), and vice versa. However, concerns have recently been raised over the performance of these systems, and the potential for confusion that can be created when the intended meaning of sentences is not correctly translated (i.e. semantic processing errors). In this paper, we use an iterative paradigm to examine errors associated with interlingual divergence in meaning arising from the automated machine translation of English proverbs. The need for the development of Web‐based translation systems, which have an explicit cross‐linguistic representation of meaning for successful intercultural communication, is discussed.
Jinju Chen and Shiyan Ou
Abstract
Purpose
The purpose of this paper is to semantically annotate the content of digital images with the use of Semantic Web technologies and thus facilitate retrieval, integration and knowledge discovery.
Design/Methodology/Approach
After a review and comparison of existing semantic annotation models for images and a deep analysis of the characteristics of image content, a multi-dimensional and hierarchical general semantic annotation framework for digital images was proposed. On this basis, taking historical images, advertising images and biomedical images as examples, and by integrating the characteristics of images in these specific domains with related domain knowledge, the general semantic annotation framework was customized into a domain annotation ontology for the images of each specific domain. Applications of the semantic annotation of digital images, such as semantic retrieval, visual analysis and semantic reuse, were also explored.
Findings
The results showed that the semantic annotation framework for digital images constructed in this paper provides a solution for the semantic organization of image content. On this basis, deep knowledge services such as semantic retrieval and visual analysis can be provided.
Originality/Value
The semantic annotation framework for digital images can reveal the fine-grained semantics in a multi-dimensional and hierarchical way, which can thus meet the demand for enrichment and retrieval of digital images.
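A multi-dimensional, hierarchical annotation of this kind can be sketched as a nested record that flattens into RDF-style triples for semantic reuse; the dimension names, property names and image content below are illustrative assumptions, not the paper's actual ontology:

```python
def to_triples(subject, tree, prefix=""):
    """Flatten a hierarchical annotation into RDF-style
    (subject, predicate, object) triples for reuse."""
    triples = []
    for key, value in tree.items():
        pred = f"{prefix}{key}"
        if isinstance(value, dict):
            triples += to_triples(subject, value, pred + "/")
        elif isinstance(value, list):
            triples += [(subject, pred, v) for v in value]
        else:
            triples.append((subject, pred, value))
    return triples

# An illustrative multi-dimensional, hierarchical annotation of one image.
annotation = {
    "visual": {"objects": ["horse", "rider"], "colors": ["ochre", "azure"]},
    "semantic": {"theme": "procession", "period": "Tang dynasty"},
    "administrative": {"creator": "unknown", "rights": "public domain"},
}
triples = to_triples("mural_scene_042.jpg", annotation)
for t in triples:
    print(t)
```

Flattened triples of this shape are what makes fine-grained, per-dimension retrieval and integration across image collections possible.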
Abstract
Purpose
Recent archiving and curatorial practices have taken advantage of advances in digital technologies, creating immersive and interactive experiences to emphasize the plurality of memory materials, encourage personalized sense-making, and extract, manage and share the ever-growing surrounding knowledge. Audiovisual (AV) content, despite its growing importance and popularity, is less explored on that end than texts and images. This paper examines the trend of datafication in AV archives and answers the critical question, “What to extract from AV materials and why?”.
Design/methodology/approach
This study is rooted in a comprehensive state-of-the-art review of digital methods and curatorial practices in AV archives. The thinking model for mapping AV archive data to purposes is based on pre-existing models for understanding multimedia content and metadata standards.
Findings
The thinking model connects AV content descriptors (data perspective) and purposes (curatorial perspective) and provides a theoretical map of how information extracted from AV archives should be fused and embedded for memory institutions. The model is constructed by looking into three broad dimensions of audiovisual content: archival; affective and aesthetic; social and historical.
Originality/value
This paper contributes uniquely to the intersection of computational archives, audiovisual content and public sense-making experiences. It provides updates and insights to work towards datafied AV archives and cope with the increasing needs in the sense-making end using AV archives.
Abstract
Purpose
This paper aims to present an epistemological perspective of how web mining can be performed by human agents, and a technical overview of a possible approach to achieve this.
Design/methodology/approach
The concept of visual mining of information is based on the principle of creating a fused information space that could be navigated using visual perception, just as is possible within a natural environment. It is argued that, as the information available through human‐created artefacts increases, conventional methods of information acquisition will fail to circumvent the bottle‐neck created by the associated information overflow.
Findings
Four types of human‐created artefacts are identified: cognitive externalisations, artistic expressions, communicative accounts, and factual records. It is the communicative artefacts that are responsible for the information overflow. As a possible way forward, it is suggested that an information space could be constructed in two layers: a perceptual layer and a cognitive layer. The information in the perceptual layer could be encoded in iconic cues to create an information landscape that could be navigated visually. The cognitive layer, on the other hand, will operate on the information‐engendering data structures of the communicative artefacts.
Practical implications
When developed, the perceptual layer could provide a subject domain landscape that induces social familiarity with frequently traversed information environments. This should be particularly helpful for learning where frequent traversal provides opportunities for repeated rehearsal, a necessary condition for long‐term retention.
Originality/value
The objective of this paper is to facilitate a shift in cognitive loading associated with knowledge acquisition from post‐perceptual to perceptual operations. The appealing nature of perceptual processing underpins the claim about enhancing long‐term retention.
Tomás Lopes and Sérgio Guerreiro
Abstract
Purpose
Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.
Design/methodology/approach
The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.
Findings
Results of this study indicate that three distinct business process model testing types can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and auxiliary representations used. However, most solutions pose notable hindrances, such as BPMN element limitations, that lead to limited practicality.
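As a minimal illustration of one coverage criterion used in process-model testing, the sketch below enumerates all execution paths of a small acyclic BPMN-like flow graph; the process, node names and graph encoding are illustrative assumptions, not part of the bPERFECT framework:

```python
def execution_paths(flows, node, end, path=None):
    """All start-to-end execution paths of an acyclic process graph
    (the all-paths coverage criterion)."""
    path = (path or []) + [node]
    if node == end:
        return [path]
    paths = []
    for target in flows.get(node, []):
        paths.extend(execution_paths(flows, target, end, path))
    return paths

# Tiny illustrative process: start -> review -> XOR gateway -> approve | reject -> end
flows = {
    "start": ["review"],
    "review": ["gateway"],
    "gateway": ["approve", "reject"],
    "approve": ["end"],
    "reject": ["end"],
}
for p in execution_paths(flows, "start", "end"):
    print(" -> ".join(p))
```

Each enumerated path corresponds to one test case; a black/gray-box suite would then supply test data that forces the gateway down each branch.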
Research limitations/implications
The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.
Originality/value
Three main originality aspects are identified in this study as follows: (1) the classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.
Ziming Zeng, Shouqiang Sun, Jingjing Sun, Jie Yin and Yueyan Shen
Abstract
Purpose
Dunhuang murals are rich in cultural and artistic value. The purpose of this paper is to construct a novel mobile visual search (MVS) framework for Dunhuang murals, enabling users to efficiently search for similar, relevant and diversified images.
Design/methodology/approach
The convolutional neural network (CNN) model is fine-tuned in the data set of Dunhuang murals. Image features are extracted through the fine-tuned CNN model, and the similarities between different candidate images and the query image are calculated by the dot product. Then, the candidate images are sorted by similarity, and semantic labels are extracted from the most similar image. Ontology semantic distance (OSD) is proposed to match relevant images using semantic labels. Furthermore, the improved DivScore is introduced to diversify search results.
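A minimal sketch of the dot-product ranking step follows; the short vectors stand in for features extracted by the fine-tuned CNN, and the image names and values are illustrative assumptions:

```python
def dot(u, v):
    """Dot product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def rank_by_similarity(query_vec, candidates):
    """Sort (name, feature-vector) candidates by similarity to the query, best first."""
    return sorted(candidates, key=lambda item: dot(query_vec, item[1]), reverse=True)

# Toy 4-dimensional features standing in for fine-tuned CNN embeddings.
query = [0.9, 0.1, 0.0, 0.2]
candidates = [
    ("mural_017", [0.8, 0.2, 0.1, 0.1]),
    ("mural_203", [0.1, 0.9, 0.3, 0.0]),
    ("mural_064", [0.7, 0.0, 0.1, 0.3]),
]
ranked = rank_by_similarity(query, candidates)
print([name for name, _ in ranked])  # most similar image first
```

Semantic labels would then be read from the top-ranked image, after which OSD matching and DivScore re-ranking operate on the sorted list.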
Findings
The results illustrate that the fine-tuned ResNet152 is the best choice for searching for similar images at the visual feature level, and that OSD is an effective method for searching for relevant images at the semantic level. After re-ranking based on DivScore, the diversification of search results is improved.
Originality/value
This study collects and builds the Dunhuang mural data set and proposes an effective MVS framework for Dunhuang murals to protect and inherit Dunhuang cultural heritage. Similar, relevant and diversified Dunhuang murals are searched to meet different demands.
Elham Mahmoudi, Marcel Stepien and Markus König
Abstract
Purpose
A principal prerequisite for designing and constructing an underground structure is to estimate the subsurface's properties and obtain a realistic picture of the stratigraphy. Obtaining direct measurements of these values at every location of the built environment is not affordable. Therefore, any evaluation is afflicted with uncertainty, and all available measurements, observations and previous knowledge need to be combined to achieve an informed estimate and to quantify the uncertainties involved. This study aims to enhance geotechnical surveys by mapping a spatial estimation of the subsoil to customised data structures and by integrating the ground models into digital design environments.
Design/methodology/approach
A ground model consisting of voxels is developed via Revit-Dynamo to represent spatial uncertainties, employing the kriging interpolation method. The local arrangement of new surveys is then evaluated and optimised.
Findings
The visualisation model's computational performance is improved by using an octree structure; the results show that this allows the structure to be modelled more efficiently. The proposed concept can identify risky locations in geological models for further geological investigation and reveal an optimised experimental design. The modification criteria are defined with respect to both global and local considerations.
Originality/value
It provides a transparent and repeatable approach to constructing a spatial ground model for subsequent experimental or numerical analysis. In the first attempt, the ground model was discretised by a grid of voxels. In general, the required computing time primarily depends on the size of the voxels. This issue is addressed by implementing octree voxels to reduce the computational effort, which applies especially to cases where a higher resolution is required. Investigations using a synthetic soil model showed that the developed methodology fulfils the kriging method's requirements. The effects of variogram parameters, such as the range and the covariance function, were investigated in parameter studies. Moreover, a synthetic model is used to demonstrate the optimal experimental design concept. Through the implementation, alternative locations for new boreholes are generated and their uncertainties are quantified. The impact of a new borehole on the uncertainty measures is quantified using both local and global approaches. For further research into identifying risky spots in geological models, this approach should be developed with additional criteria regarding the search neighbourhood, and with consideration of barriers and trends in real cases (by employing different interpolation methodologies).
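As a minimal sketch of the kriging interpolation underlying such a ground model, the following estimates a soil property at an unsampled location by ordinary kriging with a Gaussian covariance model; the borehole locations, measured values and variogram parameters are illustrative assumptions, not the study's data:

```python
import math

def gaussian_cov(h, sill=1.0, rng=10.0):
    """Gaussian covariance model: C(h) = sill * exp(-(h/rng)^2)."""
    return sill * math.exp(-(h / rng) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(points, values, target):
    """Estimate the value at `target` and the kriging variance.

    Solves the ordinary kriging system: data-to-data covariances plus a
    Lagrange multiplier row enforcing that the weights sum to one."""
    n = len(points)
    A = [[gaussian_cov(math.dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gaussian_cov(math.dist(p, target)) for p in points] + [1.0]
    sol = solve(A, b)
    weights, mu = sol[:n], sol[n]
    estimate = sum(w * v for w, v in zip(weights, values))
    variance = gaussian_cov(0.0) - sum(w * c for w, c in zip(weights, b[:n])) - mu
    return estimate, variance

# Illustrative boreholes: (x, y) locations with one measured soil property each.
points = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
values = [2.0, 4.0, 3.0]
est, var = ordinary_kriging(points, values, (2.0, 2.0))
print(round(est, 2), round(var, 3))
```

The kriging variance is what drives the optimal experimental design: candidate borehole locations that most reduce it are the risky spots worth investigating next.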