Search results
1 – 10 of 186

Mu‐Huan Chiang and Gregory T. Byrd
Abstract
Data‐centric storage is an efficient scheme for storing and retrieving event data in sensor networks, but given the multi‐hop routing nature of sensor networks, the communication cost of the home nodes and their neighboring nodes tends to be much higher than that of the other nodes. These hot spots can adversely impact system lifetime by rapidly draining their limited energy. In this paper, we present Zone Repartitioning, a load‐balancing mechanism that reduces the energy consumption of the hot spots by distributing their communication load when event frequency is high. The trade‐off between event storage cost and query cost makes Zone Repartitioning a competitive approach across different kinds of applications. We compare the performance of Zone Repartitioning against GHT and show that Zone Repartitioning provides better adaptability in various sensor network scenarios.
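As a rough sketch of the mechanism described above, the toy code below hashes event names to geographic home locations (GHT-style) and, once an event name becomes hot, scatters writes across extra replica homes. The grid size, threshold and replica count are illustrative assumptions, not values from the paper:

```python
import hashlib

GRID = 100  # sensor field modelled as a GRID x GRID area (illustrative)

def home_location(event_name: str, replica: int = 0) -> tuple[int, int]:
    """Hash an event name to a geographic home location (GHT-style).
    A nonzero `replica` index derives an alternative home, one plausible
    way to spread load when the primary home node becomes a hot spot."""
    digest = hashlib.sha256(f"{event_name}:{replica}".encode()).digest()
    x = int.from_bytes(digest[:4], "big") % GRID
    y = int.from_bytes(digest[4:8], "big") % GRID
    return (x, y)

class Store:
    """Toy store that repartitions a hot event name across several home
    locations once its observed event rate crosses a threshold."""
    HOT_THRESHOLD = 50  # events per epoch; illustrative value
    REPLICAS = 4        # homes used while an event name is hot

    def __init__(self):
        self.rate = {}  # event name -> events seen this epoch

    def put(self, event_name: str):
        count = self.rate.get(event_name, 0) + 1
        self.rate[event_name] = count
        replicas = self.REPLICAS if count > self.HOT_THRESHOLD else 1
        # In a real network the reading would be geo-routed to this target.
        return home_location(event_name, count % replicas)

    def get_targets(self, event_name: str):
        # Queries must visit every replica while a name is repartitioned:
        # the storage-vs-query trade-off the abstract mentions.
        hot = self.rate.get(event_name, 0) > self.HOT_THRESHOLD
        replicas = self.REPLICAS if hot else 1
        return [home_location(event_name, r) for r in range(replicas)]
```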
Wirat Jareevongpiboon and Paul Janecek
Abstract
Purpose
The purpose of this paper is to propose a solution to the problem of a lack of machine processable semantics in business process management.
Design/methodology/approach
The paper introduces a methodology that combines domain and company‐specific ontologies and databases to obtain multiple levels of abstraction for process mining and analysis. The authors evaluated this approach with a real case study from the apparel domain, using a prototype system and techniques developed in the Process Mining Framework (ProM). The results of this approach are compared with similar research.
Findings
Semantically enriching process execution data can successfully raise analysis from the syntactic to the semantic level, and enable multiple perspectives of analysis on business processes. Combining this approach with complementary research in semantic business process management (SBPM) can provide results comparable to multidimensional analysis in data warehouse and online analytical processing (OLAP) technologies.
Originality/value
The approach and prototype described in this paper improve the richness of semantics available for open‐source process mining and analysis tools like ProM, and the richness and detail of the resulting analysis.
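To make the levels-of-abstraction idea concrete, here is a minimal sketch of lifting event labels through a concept hierarchy, in the spirit of the semantic annotation described above; the apparel-flavoured ontology and the `lift` helper are invented for illustration and are not ProM's API:

```python
# Hypothetical concept hierarchy (child -> parent); labels loosely
# styled after the apparel case study, not taken from the paper.
ONTOLOGY = {
    "CutFabric": "Production",
    "SewSeams": "Production",
    "InspectGarment": "QualityControl",
    "Production": "OrderFulfilment",
    "QualityControl": "OrderFulfilment",
}

def lift(trace, depth=1):
    """Replace each event label with the ontology concept `depth` steps
    up the hierarchy, so one log supports analysis at several levels."""
    lifted = []
    for event in trace:
        concept = event
        for _ in range(depth):
            concept = ONTOLOGY.get(concept, concept)
        lifted.append(concept)
    return lifted

raw = ["CutFabric", "SewSeams", "InspectGarment"]  # syntactic level
print(lift(raw, depth=1))  # ['Production', 'Production', 'QualityControl']
print(lift(raw, depth=2))  # ['OrderFulfilment', 'OrderFulfilment', 'OrderFulfilment']
```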
Keywords
- Semantics
- Process analysis
- Business process
- Process mining and analysis
- Semantic process mining and analysis
- Semantic business process management
- Ontological approach
- Multi‐perspective process analysis
- Multidimensional analysis
- Semantic enhancement
- Semantic annotation log
- Ontology‐database mapping
- Ontology layers
Tassos Dimitriou and Ioannis Krontiris
Abstract
Nodes in sensor networks do not have enough topology information to make efficient routing decisions. Geographic routing has been proposed as a solution for relaying messages through intermediate sensors. Its greedy nature, however, makes routing inefficient, especially in the presence of topology voids or holes. In this paper we present GRAViTy (Geographic Routing Around Voids In any TopologY of sensor networks), a simple greedy forwarding algorithm that combines compass routing with a mechanism that allows packets to explore the area around voids and bypass them without significant communication overhead. Using extensive simulation results we show that our mechanism outperforms the right‐hand rule for bypassing voids and that the resulting paths closely approximate the corresponding shortest paths. GRAViTy uses a cross‐layered approach to improve routing paths for subsequent packets based on experience gained from former routing decisions. Furthermore, our protocol responds to topology changes, i.e. failure of nodes, and efficiently adjusts routing paths towards the destination.
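For orientation, the sketch below implements plain greedy geographic forwarding with a crude loop-avoiding fallback at local minima; it is a stand-in for, not a reproduction of, GRAViTy's compass routing and void-exploration mechanism:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(pos, neighbors, src, dst, max_hops=100):
    """Greedy geographic forwarding: each node hands the packet to its
    neighbour closest to the destination. At a void (no neighbour is
    closer), the unvisited least-bad neighbour is tried instead, a
    crude approximation of exploring the area around the void."""
    path, current, visited = [src], src, {src}
    for _ in range(max_hops):
        if current == dst:
            return path
        candidates = [n for n in neighbors[current] if n not in visited]
        if not candidates:
            return None  # dead end: routing failed
        current = min(candidates, key=lambda n: dist(pos[n], pos[dst]))
        visited.add(current)
        path.append(current)
    return None

# Tiny example: the route must detour through B and C to reach D.
pos = {"A": (0, 0), "B": (1, 1), "C": (2, 0), "D": (3, 0)}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(greedy_route(pos, neighbors, "A", "D"))  # ['A', 'B', 'C', 'D']
```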
Tsung Teng Chen and David C. Yen
Abstract
Purpose
The paper's aim is to document the development of a novel tool to address the inadequacies of existing cocitation visualization tools.
Design/methodology/approach
The paper demonstrates the visualized effects of this tool and supplements the results with a case study that utilizes a large data set to explore the cross‐field studies among different computer science fields.
Findings
The tool displays cocitation graphs with latent visual cues and allows direct manipulation of the visualized graphs. The tool also facilitates the exploration of the relationships between articles in the graphs.
Research limitations/implications
The indirect cocitation relationships are vividly visualized by the citation network itself. The context lost by the conventional cocitation network may be preserved. Instead of being linked by explicit lines, the implicit cocitation relationships are shown by the closeness among the cocited nodes.
Practical implications
The preserved context of a cocitation network may facilitate the exploration of latent cross‐field studies. The cocitation visualization tool demonstrates that the context of the cocitation graph is preserved by using the citation network itself to reveal cocitation relationships.
Originality/value
The cocitation relationships are implied by the closeness among cocited nodes in a citation graph. The paper documents this novel approach.
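The closeness cue rests on cocitation strength, which is straightforward to derive from a raw citation graph: two papers are cocited once for every third paper that cites both. A minimal sketch with toy data (not the paper's tool):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(citations):
    """citations: mapping paper -> set of papers it cites. A layout
    that weights edges by these counts pulls strongly cocited nodes
    close together (the closeness cue described above)."""
    counts = Counter()
    for cited in citations.values():
        for a, b in combinations(sorted(cited), 2):
            counts[(a, b)] += 1
    return counts

citations = {
    "P1": {"A", "B"},
    "P2": {"A", "B", "C"},
    "P3": {"B", "C"},
}
print(cocitation_counts(citations))
# Counter({('A', 'B'): 2, ('B', 'C'): 2, ('A', 'C'): 1})
```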
Aidan Jungo, Mengmeng Zhang, Jan B. Vos and Arthur Rizzi
Abstract
Purpose
The purpose of this paper is to present the status of the ongoing development of the new computerized environment for aircraft synthesis and integrated optimization methods (CEASIOM) and to compare results of different aerodynamic tools. The concurrent design of aircraft is an extremely interdisciplinary activity incorporating simultaneous consideration of complex, tightly coupled systems, functions and requirements. The design task is to achieve an optimal integration of all components into an efficient, robust and reliable aircraft with high performance that can be manufactured with low technical and financial risks, and has an affordable life-cycle cost.
Design/methodology/approach
CEASIOM (www.ceasiom.com) is a framework that integrates discipline-specific tools like computer-aided design, mesh generation, computational fluid dynamics (CFD), stability and control analysis and structural analysis, all for the purpose of aircraft conceptual design.
Findings
A new CEASIOM version is under development within EU Project AGILE (www.agile-project.eu), by adopting the CPACS XML data-format for representation of all design data pertaining to the aircraft under development.
Research limitations/implications
Results obtained from different methods have been compared and analyzed. Some differences have been observed; however, they are mainly due to the different physical models used by each of these methods.
Originality/value
This paper summarizes the current status of the development of the new CEASIOM software, in particular for the following modules:
- CPACS file visualizer and editor CPACSupdater (Matlab)
- Automatic unstructured (Euler) and hybrid (RANS) mesh generation by sumo
- Multi-fidelity CFD solvers: Digital Datcom (empirical), Tornado (VLM), Edge-Euler and SU2-Euler, Edge-RANS and SU2-RANS
- Data fusion tool: fuses aerodynamic coefficients from the variable-fidelity CFD tools above to compile a complete aero-table for flight analysis and simulation
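As a hedged illustration of what a data fusion step of this kind can look like, the sketch below uses a generic additive-correction scheme: learn the discrepancy between a few expensive high-fidelity points and a dense low-fidelity sweep, then interpolate that correction over the sweep. The numbers are invented and the scheme is a textbook technique, not necessarily the one CEASIOM implements:

```python
import numpy as np

# Illustrative data: lift coefficient CL vs angle of attack (deg).
alpha_lo = np.arange(0.0, 11.0, 1.0)   # dense low-fidelity sweep
cl_lo = 0.10 * alpha_lo                # e.g. a VLM code like Tornado
alpha_hi = np.array([0.0, 5.0, 10.0])  # sparse high-fidelity points
cl_hi = np.array([0.02, 0.54, 1.08])   # e.g. an Euler solver

# Additive correction: discrepancy at the high-fidelity points,
# interpolated across the dense sweep.
delta = cl_hi - np.interp(alpha_hi, alpha_lo, cl_lo)
cl_fused = cl_lo + np.interp(alpha_lo, alpha_hi, delta)

print(np.round(cl_fused, 3))  # dense aero-table nudged towards high fidelity
```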
Hooran MahmoudiNasab and Sherif Sakr
Abstract
Purpose
The purpose of this paper is to present a two‐phase approach for designing an efficient tailored but flexible storage solution for resource description framework (RDF) data based on its query workload characteristics.
Design/methodology/approach
The approach consists of two phases. The vertical partitioning phase aims to reduce the number of join operations in the query evaluation process, while the adjustment phase maintains efficient query processing by adapting the underlying schema to cope with the dynamic nature of the query workloads.
Findings
The authors perform comprehensive experiments on two real‐world RDF datasets to demonstrate that the approach is superior to the state‐of‐the‐art techniques in this domain.
Originality/value
The main motivation behind the authors' approach is that several benchmarking studies have recently shown that each RDF dataset requires a tailored table schema in order to achieve efficient performance during query processing. None of the previous approaches have considered this limitation.
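For readers unfamiliar with vertical partitioning, the classic layout that the first phase builds on stores one narrow (subject, object) table per predicate, so single-predicate lookups scan one small table and star queries become joins on subject. A minimal sketch with toy data (not the authors' system):

```python
import sqlite3

triples = [
    ("alice", "knows", "bob"),
    ("alice", "age", "34"),
    ("bob", "knows", "carol"),
]

conn = sqlite3.connect(":memory:")
# Vertical partitioning: one (s, o) table per predicate.
for s, p, o in triples:
    conn.execute(f'CREATE TABLE IF NOT EXISTS "{p}" (s TEXT, o TEXT)')
    conn.execute(f'INSERT INTO "{p}" VALUES (?, ?)', (s, o))

# Whom do the people alice knows know? A single self-join on "knows".
rows = conn.execute(
    'SELECT k2.o FROM "knows" k1 JOIN "knows" k2 ON k1.o = k2.s '
    "WHERE k1.s = ?", ("alice",)
).fetchall()
print(rows)  # [('carol',)]
```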
Rene Kaiser, Stefan Thalmann and Viktoria Pammer-Schindler
Abstract
Purpose
This paper aims to report an interview study investigating knowledge protection practices in a collaborative research and innovation project centred around the semiconductor industry. The authors explore which knowledge protection practices are applied and how, and zoom in on a particular one to investigate the perspectives of three collaborating stakeholders: the SUPPLIER of a specialised machine, the APPLIER of this machine and a SCHOLAR who collaborates with both, in an effort to develop a grey-box model of the machine and its operation.
Design/methodology/approach
A total of 33 interviews were conducted in two rounds: 30 interviews explore the knowledge protection practices applied across a large project, with qualitative content analysis used to identify practices not well covered by the research community; three follow-up interviews examine one specific collaboration case between the three partners. Quotes from all interviews are used to illustrate the participants’ viewpoints and motivations.
Findings
SCHOLAR and APPLIER communicate using a data-centric knowledge protection practice: concrete parameter values are sensitive, so they are hidden by communicating data within a wider parameter range. This practice preserves the benefit all three stakeholders draw from communicating about specifics of machine design and operations. The grey-box model combines engineering knowledge of both SUPPLIER and APPLIER.
Practical implications
The line of thought described in this study is applicable to comparable collaboration constellations of a SUPPLIER of a machine, an APPLIER of that machine and a SCHOLAR who analyses the data and draws insights from them.
Originality/value
The paper fills a research gap by reporting on applied knowledge protection practices and characterising a data-centric knowledge protection practice around a grey-box model.
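A toy illustration of the data-centric practice the findings describe: rather than the concrete parameter value, a randomly offset range containing it is communicated, so collaboration on the data can proceed while the exact operating point stays protected. The function and the numbers are hypothetical:

```python
import random

def mask_parameter(value: float, rel_width: float = 0.2) -> tuple[float, float]:
    """Return a range containing `value`, offset at random so the true
    setting cannot simply be read off as the midpoint of the range."""
    width = abs(value) * rel_width
    low = value - random.uniform(0.0, width)
    return (round(low, 2), round(low + width, 2))

# Hypothetical sensitive machine setting (e.g. a process temperature).
print(mask_parameter(342.5))  # e.g. (301.73, 370.23)
```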
Francesco Leoni, Martina Carraro, Erin McAuliffe and Stefano Maffei
Abstract
Purpose
The purpose of this paper is three-fold. Firstly, through selected case studies, to provide an overview of how non-traditional data from digital public services were used as a source of knowledge for policymaking. Secondly, to argue for a design for policy approach to support the successful integration of non-traditional data into policymaking practice, thus supporting data-driven innovation for policymaking. Thirdly, to encourage a vision of the relation between data-driven innovation and public policy that considers policymaking outside the authoritative instrumental logic perspective.
Design/methodology/approach
A qualitative small-N case study analysis based on desk research data was developed to provide an overview of how data-centric public services could become a source of knowledge for policymaking. The analysis was based on an original theoretical-conceptual framework that merges the policy cycle model and the policy capacity framework.
Findings
This paper identifies three potential areas of contribution of a design for policy approach in a scenario of data-driven innovation for policymaking practice: the development of sensemaking and prefiguring activities to shape a shared rationale behind intra-/inter-organisational data sharing and data collaboratives; the realisation of collaborative experimentations for enhancing the systemic policy analytical capacity of a governing body, e.g. by integrating non-traditional data into new and trusted indicators for policy evaluation; and service design as approach for data-centric public services that connects policy decisions to the socio-technical context in which data are collected.
Research limitations/implications
The small-N sample (four cases) selected is not representative of a broader population but isolates exemplary initiatives. Moreover, the analysis was based on secondary sources, limiting the assessment quality of the real use of non-traditional data for policymaking. This level of empirical understanding is considered sufficient for an explorative analysis that supports the original perspective proposed here. Future research will need to collect primary data about the potential and dynamics of how data from data-centric public services can inform policymaking and substantiate the proposed areas of a design for policy contribution with practical experimentations and cases.
Originality/value
This paper proposes a convergence, as yet largely underexplored, between two emerging perspectives on innovation in policymaking: data for policy and design for policy. This convergence helps to address the design of data-driven innovations for policymaking, while offering practitioners pragmatic indications of socially acceptable practices in this space.
Deepjyoti Kalita and Dipen Deka
Abstract
Purpose
The purpose of this paper is to make a systematic review of the history of library metadata development, listing the most significant landmarks and influencing events from Thomas Bodley's rules to the latest BIBFRAME architecture, and to compare their significance and suitability in the modern-day Web environment.
Design/methodology/approach
Four time divisions were identified, namely the pre-1900 era, 1900–1950, post-1950 to the pre-Web era and the post-Web era, based on pre-set information available to the authors regarding catalogue rules. Under these four divisions, relevant information sources for the purpose of the study were identified, and the various metadata standards released at different times were consulted.
Findings
Library catalogue standards have undergone transitive changes from one form to another, primarily influenced by the changing work environment and the different forms of resource availability in libraries. Modern-day metadata standards are shaped by the opportunities the World Wide Web offers libraries and serve as a suitable base for data organisation on par with Semantic Web standards.
Research limitations/implications
Information organisation has moved from its earlier document-centric nature towards a more data-centric approach in the current Semantic Web environment. Libraries had to make a move in this process, and modern-day guidelines in this regard bring the possibility of large-scale discovery services built on curated information resources.
Originality/value
The study uncovers relationships between key events in the course of the development of metadata standards and provides suggestions and predictions regarding its future development.