Search results
1 – 10 of over 30,000
Abstract
Purpose
The definition of modeling languages is a key prerequisite for model‐driven engineering. In this respect, Domain‐Specific Modeling Languages (DSMLs) defined from scratch in terms of metamodels and the extension of the Unified Modeling Language (UML) by profiles are the proposed options. For interoperability reasons, however, the need arises to bridge modeling languages originally defined as DSMLs to UML. Therefore, the paper aims to propose a semi‐automatic approach for bridging DSMLs and UML by employing model‐driven techniques.
Design/methodology/approach
The paper discusses problems of the ad hoc integration of DSMLs and UML and, from this discussion, derives a systematic and semi‐automatic integration approach consisting of two phases. In the first phase, the correspondences between the modeling concepts of the DSML and UML are defined manually. In the second phase, these correspondences are used for automatically producing UML profiles to represent the domain‐specific modeling concepts in UML, as well as model transformations for transforming DSML models to UML models and vice versa. The paper presents the ideas within a case study for bridging Computer Associates' DSML of the AllFusion Gen CASE tool with IBM's Rational Software Modeler for UML.
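The two-phase idea can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual tooling: the concept names, metaclass names and data structures are invented for the example.

```python
# Hedged sketch of the two phases; all concept and metaclass names are
# illustrative, not taken from AllFusion Gen or Rational Software Modeler.

# Phase 1 (manual): each DSML concept is mapped to its closest UML metaclass.
correspondences = {
    "BusinessEntity": "Class",
    "EntityAttribute": "Property",
    "EntityRelation": "Association",
}

def generate_profile(correspondences):
    """Phase 2a: derive one stereotype per DSML concept, extending its UML metaclass."""
    return [{"stereotype": concept, "extends": metaclass}
            for concept, metaclass in correspondences.items()]

def dsml_to_uml(model, correspondences):
    """Phase 2b: transform DSML model elements into stereotyped UML elements."""
    return [{"uml_type": correspondences[el["type"]],
             "applied_stereotype": el["type"],
             "name": el["name"]}
            for el in model]

profile = generate_profile(correspondences)
uml_model = dsml_to_uml([{"type": "BusinessEntity", "name": "Customer"}],
                        correspondences)
```

The point of the sketch is that once the correspondences exist, both the profile and the transformation are mechanical, which is what makes the second phase automatable.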
Findings
The ad hoc definition of UML profiles and model transformations for achieving interoperability is typically a tedious and error‐prone task. By employing a semi‐automatic approach one gains several advantages. First, the integrator only has to deal with the correspondences between the DSML and UML on a conceptual level. Second, all repetitive integration tasks are automated by using model transformations. Third, well‐defined guidelines support the systematic and comprehensible integration.
Research limitations/implications
The paper focuses on the integrating direction DSMLs to UML, but not on how to derive a DSML defined in terms of a metamodel from a UML profile.
Originality/value
Although DSMLs defined as metamodels and UML profiles are frequently applied in practice, only a few attempts have been made to provide interoperability between these two worlds. The contribution of this paper is to integrate the so far competing worlds of DSMLs and UML by proposing a semi‐automatic approach, which allows exchanging models between these two worlds without loss of information.
Ademar Crotti Junior, Christophe Debruyne, Rob Brennan and Declan O’Sullivan
Abstract
Purpose
This paper aims to evaluate the state-of-the-art in CSV uplift tools. Based on this evaluation, a method that incorporates data transformations into uplift mapping languages by means of functions is proposed and evaluated. Typically, tools that map non-Resource Description Framework (RDF) data into RDF format rely on the technology native to the source of the data when data transformation is required. Depending on the data format, data manipulation can be performed using the underlying technology, such as a relational database management system (RDBMS) for relational databases or XPath for XML. For CSV/Tabular data, there is no such underlying technology; instead, either a transformation of the source data into another format or pre/post-processing techniques are required.
Design/methodology/approach
To evaluate the state-of-the-art in CSV uplift tools, the authors present a comparison framework and have applied it to such tools. A key feature evaluated in the comparison framework is data transformation functions. The authors argue that existing approaches to transformation functions are complex, in that a number of steps and tools are required. The proposed method, FunUL, in contrast, defines functions independently of the source data being mapped into RDF, as resources within the mapping itself.
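The core idea of functions as named, reusable resources inside the mapping can be sketched as follows. This does not reproduce FunUL's actual syntax; the function name `ex:trim_upper`, the predicates and the subject template are all illustrative assumptions.

```python
import csv
import io

# Hedged sketch of function-based uplift, not FunUL's real notation:
# transformation functions are named resources in the mapping itself,
# so the same function can be reused across several column mappings.

functions = {
    "ex:trim_upper": lambda v: v.strip().upper(),  # named, reusable resource
}

mapping = [  # column -> predicate, with an optional function resource
    {"column": "name", "predicate": "foaf:name", "function": "ex:trim_upper"},
    {"column": "age",  "predicate": "foaf:age",  "function": None},
]

def uplift(csv_text, mapping, subject_template="ex:person/{row}"):
    """Produce (subject, predicate, object) triples from CSV rows."""
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        subject = subject_template.format(row=i)
        for m in mapping:
            value = row[m["column"]]
            if m["function"]:  # apply the named transformation, if any
                value = functions[m["function"]](value)
            triples.append((subject, m["predicate"], value))
    return triples

triples = uplift("name,age\n  alice ,30\n", mapping)
```

Because the transformation lives inside the mapping rather than in a separate pre-processing script, the CSV-to-RDF step stays transparent and traceable, which is the property the paper emphasizes.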
Findings
The approach was evaluated using two typical real-world use cases. The authors have compared how well their approach and others (that include transformation functions as part of the uplift mapping) could implement an uplift mapping from CSV/Tabular into RDF. This comparison indicates that the authors' approach performs well for these use cases.
Originality/value
This paper presents a comparison framework and applies it to the state-of-the-art in CSV uplift tools. Furthermore, the authors describe FunUL, which, unlike other related work, defines functions as resources within the uplift mapping itself, integrating data transformation functions and mapping definitions. This makes the generation of RDF from source data transparent and traceable. Moreover, as functions are defined as resources, these can be reused multiple times within mappings.
Kenneth H. Baldwin and Howard L. Schreyer
Abstract
A finite element mesh generator is described based on a conformal mapping which requires little more than the definition of boundary geometry to generate a mesh. The method is restricted to plane regions which are simply connected. The interior of a region containing a uniform mesh of regularly shaped 8‐node quadrilateral elements is mapped conformally to the physical domain with the result that bandwidth is automatically minimized and that smooth transitions are made between large and small elements. Although the procedure is not satisfactory for general applications, most common geometrical shapes can be modelled with meshes of good quality. The method is based primarily on boundary data but the user can specify a region of high mesh density. Examples are given to illustrate typical results.
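The core idea can be illustrated in a few lines. This is not the paper's actual algorithm; the map f(z) = z² is a stand-in conformal map and the grid dimensions are arbitrary, but it shows how pushing a uniform computational grid through a conformal map preserves element angles while grading element sizes smoothly.

```python
# Illustrative sketch, not the paper's method: a uniform grid in the
# computational domain is pushed through a conformal map (here f(z) = z*z
# as a stand-in), preserving angles and grading element sizes smoothly.

def uniform_grid(nx, ny):
    """Node coordinates of a uniform grid on [1, 2] x [0, 1], as complex numbers."""
    return [[complex(1 + i / nx, j / ny) for i in range(nx + 1)]
            for j in range(ny + 1)]

def map_grid(grid, f=lambda z: z * z):
    """Apply a conformal map to every node; grid connectivity is unchanged."""
    return [[f(z) for z in row] for row in grid]

physical = map_grid(uniform_grid(4, 4))
```

Note that only the node coordinates change; the element connectivity (and hence the bandwidth of the assembled system) is inherited from the regular computational grid, which is why the paper's approach minimizes bandwidth automatically.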
Melissa Cheung and Jan Hidders
Abstract
Purpose
This paper aims to present how iterative round‐trip modelling between two different business process modelling tools can be enabled on a conceptual level. Iterative round‐trip modelling addresses model transformations between high‐level business and executable process models, and how to maintain these transformations as the models change over time. Currently, the development of these process models is supported by different tools. To the authors' best knowledge, no coherent collaborative tool environment exists that supports iterative round‐trip modelling.
Design/methodology/approach
This paper is primarily based on a literature review of state‐of‐the‐art business to IT transformations regarding business process modelling. The architecture of integrated information systems (ARIS) and Cordys tools are used as an example case in this research. ARIS is a business process analysis (BPA) tool suited for analyzing and designing business processes, while Cordys, a business process management suite (BPMS), supports the execution and monitoring of these processes. The theory is used for transforming between ARIS event‐driven process chains from the business perspective and business process modelling notation in Cordys from the IT perspective.
Findings
A conceptual framework is proposed to couple a BPA and BPMS tool for round‐trip business process modelling. The framework utilizes concepts from the model‐driven architecture for structurally addressing interoperability and model transformations. Ensuring iterative development with two tools requires traceability of model transformations.
Practical implications
In many organizations, BPA and BPMS tools are used for business process modelling. In practice, these are often two different worlds, even though they concern the same business processes. Maintaining multiple versions of the same process models across two tools is a considerable task, as they are often subject to design changes. Interoperability between a BPA and a BPMS tool will minimize redundant activities and reduce business‐to‐IT deployment time.
Originality/value
This research provides a theoretical base for coupling a BPA and BPMS tool regarding iterative round‐trip modelling. It provides an overview of the current state‐of‐the‐art literature of business process modelling transformations, and what is necessary for maintaining interoperability between tools. The findings indicate what is expected in tool support for iterative development in business process modelling from analysis and design to execution.
Sai Deng and Terry Reese
Abstract
Purpose
The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to the Online Computer Library Center (OCLC), which aim to improve the Electronic Theses and Dissertations (ETD) workflow at libraries using DSpace to store theses and dissertations, by automating the process of generating MARC records from Dublin Core (DC) metadata in DSpace and exporting them to OCLC.
Design/methodology/approach
This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest theses data from the DSpace platform using the Metadata Harvester in MarcEdit developed by Terry Reese at OSU Libraries. It analyzes certain challenges in transformation of harvested data including handling of authorized data, dealing with data ambiguity and string processing. It addresses how these two institutions customize Library of Congress's XSLT (eXtensible Stylesheet Language Transformations) mapping to transfer DC metadata to MarcXML metadata and how they export MARC data to OCLC and Voyager.
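The crosswalk step can be sketched in miniature. This is a hedged illustration, far simpler than the Library of Congress XSLT the paper customizes: the two field mappings and the sample record are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of the DC-to-MARC crosswalk idea; the field choices below are
# illustrative and much simpler than the Library of Congress XSLT mapping.

DC = "{http://purl.org/dc/elements/1.1/}"
CROSSWALK = {"title": ("245", "a"), "creator": ("100", "a")}

def dc_to_marc(dc_xml):
    """Map Dublin Core elements to MARC tag/subfield pairs, trimming strings."""
    root = ET.fromstring(dc_xml)
    fields = []
    for name, (tag, subfield) in CROSSWALK.items():
        for el in root.findall(DC + name):
            fields.append({"tag": tag, "subfield": subfield,
                           "value": el.text.strip()})
    return fields

record = dc_to_marc(
    '<record xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:title> A Thesis </dc:title>'
    '<dc:creator>Doe, Jane</dc:creator></record>')
```

Even this toy version shows why the paper stresses string processing and case-by-case analysis: real DC values carry trailing whitespace, ambiguous name forms and unauthorized headings that a plain one-to-one mapping cannot resolve.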
Findings
The customized mapping and data transformation for ETD data can be standardized while also requiring case‐by‐case analysis. By describing two institutions' experiences, the paper provides information on the benefits and limitations for institutions interested in using MarcEdit and customized XSLT to transform their ETDs from DSpace to OCLC and Voyager.
Originality/value
The new method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs and significantly improve the ETD workflow. It offers perspectives on repurposing and managing metadata in a standard and customizable way.
Toni Ahlqvist, Asta Bäck, Sirkka Heinonen and Minna Halonen
Abstract
Purpose
This paper seeks to discuss the outcomes of a road‐mapping research project on social media completed at VTT Technical Research Centre of Finland. Social media refers to a combination of three elements: content, user communities, and Web 2.0 technologies.
Design/methodology/approach
The paper utilizes socio‐technical road‐mapping to study the potential transformations of social media in the virtual and physical spheres.
Findings
Road‐maps were constructed in three thematic areas: society, companies, and local environment. The results were crystallized into five development lines. The first development line is transparency and its increasing role in society. The second is the rise of a ubiquitous participatory communication model. The third is the reflexive empowerment of citizens. The fourth is the duality of personalization/fragmentation vs mass effects/integration. The fifth is the new relations between the physical and virtual worlds.
Originality/value
The study of social media has so far focused mainly on its technological aspects from the perspective of the present. This paper forms a future‐oriented perspective on social media in a wider societal context.
Jia‐Lang Seng, Yu Lin, Jessie Wang and Jing Yu
Abstract
XML is emerging and evolving quickly as Web and wireless technology penetrates further into the consumer marketplace. Database technology faces new challenges. It has to change to play the supportive role. Web and wireless applications master the technology paradigm shift. XML and database connectivity and transformation become critical. Heterogeneity and interoperability must be distinctly tackled. In this paper, we provide an in‐depth technical review of XML and XML database technology. An analytic and comparative framework is developed, formulated around storage method, mapping technique, and transformation paradigm. We collect and compile the IBM, Oracle, Sybase, and Microsoft XML database products and use the framework to analyze each of these XML database techniques. The comparison and contrast aim to provide an insight into the structural and methodological paradigm shift in XML database technology.
Nikolaos Lagos, Adrian Mos and Mario Cortes-cornax
Abstract
Purpose
Domain-specific process modeling has been proposed in the literature as a solution to several problems in business process management. These problems, which include domain ambiguity and difficult long-term model evolution, arise when only the generic Business Process Model and Notation (BPMN) standard is used for modeling. Domain-specific modeling involves developing concept definitions, domain-specific processes and, eventually, industry-standard BPMN models. This entails a multi-layered modeling approach, where any of these artifacts can be modified by various stakeholders and changes made by one person may influence models used by others. There is therefore a need for tool support to keep track of changes and their potential impacts. The paper aims to discuss these issues.
Design/methodology/approach
The authors use a multi-context systems-based approach to infer the impacts that changes may cause in the models, and they also incrementally map components of business process models to ontologies.
Findings
Advantages of the framework include: identifying conflicts/inconsistencies across different business modeling layers; expressing rich information on the relations between two layers; calculating the impact of changes taking place in one layer to the rest of the layers; and selecting incrementally the most appropriate semantic models on which the transformations can be based.
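At its simplest, calculating the impact of a change in one layer on the layers below amounts to a transitive traversal over dependency links. The paper's multi-context systems machinery is far richer than this; the sketch below, with invented artifact names for the concept, domain-specific-process and BPMN layers, only illustrates the propagation step.

```python
# Hedged sketch of cross-layer change impact; the paper uses multi-context
# systems, while this is only a plain traversal over illustrative links.

depends_on = {  # artifact -> artifacts derived from it in lower layers
    "concept:Order":           ["dsp:HandleOrder"],
    "dsp:HandleOrder":         ["bpmn:HandleOrderProcess"],
    "bpmn:HandleOrderProcess": [],
}

def impacted(changed, graph):
    """All artifacts transitively affected by changing `changed`."""
    seen, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

affected = impacted("concept:Order", depends_on)
```

The value of the richer multi-context formulation is that it can also express why two layers conflict, not just which artifacts are reachable from a change.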
Research limitations/implications
The authors consider this work as one of the foundational bricks that will enable further advances toward the governance of multi-layer business process modeling systems. Extensive usability tests would enable to further confirm the findings of the paper.
Practical implications
The approach described here should improve the maintainability, reuse and clarity of business process models. This can improve data governance in large organizations and for large collections of processes by aiding various stakeholders to understand problems with process evolutions, changes and inconsistencies with business goals.
Originality/value
This paper fills an identified gap by enabling semantically aided domain‐specific process modeling.
Zongda Wu, Jian Xie, Xinze Lian and Jun Pan
Abstract
Purpose
The security of archival privacy data in the cloud has become the main obstacle to the application of cloud computing in archives management. To this end, aiming at XML archives, this paper aims to present a privacy protection approach that can ensure the security of privacy data in the untrusted cloud, without compromising the system availability.
Design/methodology/approach
The basic idea of the approach is as follows. First, before being submitted to the cloud, the privacy data are strictly encrypted on a trusted client to ensure security. Then, to query the encrypted data efficiently, the approach constructs key feature data for the encrypted data, so that each XML query defined on the privacy data can be executed correctly in the cloud.
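The general encrypt-then-index pattern behind this idea can be sketched as follows. The paper's actual construction is richer and targets XML queries; here the "encryption" is only a base64 placeholder and the feature data are keyed digests, both invented for the illustration.

```python
import base64
import hashlib
import hmac

# Hedged sketch of the encrypt-then-index pattern, not the paper's actual
# construction: values are encrypted on a trusted client, and a keyed hash
# is stored alongside as "feature data" so the untrusted cloud can answer
# equality queries without ever seeing plaintext.

INDEX_KEY = b"client-side-secret"  # never leaves the trusted client

def feature(value):
    """Deterministic keyed digest: usable for matching, not for decryption."""
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

def encrypt(value):
    """Placeholder only; a real system would use a proper cipher such as AES."""
    return base64.b64encode(value.encode()).decode()

# The client uploads (ciphertext, feature) pairs; the cloud stores them blindly.
cloud_store = [(encrypt(v), feature(v)) for v in ["Alice", "Bob"]]

def cloud_query(target_feature):
    """The cloud matches on features only; plaintext never appears server-side."""
    return [ct for ct, ft in cloud_store if ft == target_feature]

hits = cloud_query(feature("Alice"))  # client computes the feature, cloud matches it
```

The trade-off the paper evaluates follows directly from this shape: the feature data must be informative enough for correct query execution, yet leak as little as possible about the encrypted values.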
Findings
Finally, both theoretical analysis and experimental evaluation demonstrate the overall performance of the approach in terms of security, efficiency and accuracy.
Originality/value
This paper presents a valuable study attempting to protect privacy for the management of XML archives in a cloud environment, so it has a positive significance to promote the application of cloud computing in a digital archive system.
Iván Manuel De la Vega Hernández and Juan Diáz Amorin
Abstract
Purpose
The purpose of this study is to analyze the technological change under development linked to the convergence of the Internet of Things (IoT) and digital transformation (DT) from the perspective of a scientific mapping, in a context marked by the occurrence of an unexpected event that accelerated this process, namely the SARS-CoV-2 pandemic and its variants.
Design/methodology/approach
The study was developed under the longitudinal scientific mapping approach and considered the period 1990–2021, using the descriptors DT and IoT as a basis. The steps followed were: identification and selection of keywords; design and application of an algorithm to identify these selected keywords in the titles, abstracts and keywords of Web of Science (WoS) records in order to contrast them; and data processing based on the journals in the Journal Citation Report for 2022. The longitudinal study uses scientific mapping to analyze the evolution of the scientific literature, seeking to understand the acceleration in the integration of technology and its impact on the human factor, processes and organizational culture.
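The keyword-screening step described above can be sketched as a simple filter. The descriptors and sample records below are illustrative assumptions, not the study's actual search strings or data.

```python
import re

# Hedged sketch of descriptor matching over bibliographic records; the
# descriptors and records are illustrative, not the study's actual WoS query.

DESCRIPTORS = ["digital transformation", "internet of things", "iot"]

def matches(record):
    """True if any descriptor occurs in the record's searchable fields."""
    text = " ".join(record.get(f, "") for f in ("title", "abstract", "keywords"))
    return any(re.search(r"\b" + re.escape(d) + r"\b", text.lower())
               for d in DESCRIPTORS)

corpus = [
    {"title": "IoT sensors in logistics", "abstract": "", "keywords": ""},
    {"title": "Medieval trade routes", "abstract": "", "keywords": ""},
]
selected = [r for r in corpus if matches(r)]
```

Word-boundary matching matters here: without `\b`, a short descriptor like "iot" would spuriously match unrelated words that merely contain those letters.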
Findings
This study showed that the technologies converging around IoT form the basis of the main DT processes being experienced on a global scale; furthermore, it was shown that the pandemic accelerated the convergence and application of new technologies to support the major changes required for a world with new needs. Finally, China and the USA differ significantly in the production of scientific knowledge with respect to the first eight followers.
Originality/value
The knowledge gap addressed by this study is to identify the production of scientific knowledge related to IoT and its impact on DT processes at the scale of individuals, organizations and the new way of delivering value to society. This knowledge about researchers, institutions and countries, and the derivation of multiple indicators, allows improving decision-making at multiple scales on these issues.