Search results
1 – 10 of over 8000
Aymen Gammoudi, Allel Hadjali and Boutheina Ben Yaghlane
Time modeling is a crucial feature in many application domains. However, temporal information is often not crisp but subjective and fuzzy. The purpose of this paper is to…
Abstract
Purpose
Time modeling is a crucial feature in many application domains. However, temporal information is often not crisp but subjective and fuzzy. The purpose of this paper is to address the issue related to the modeling and handling of the imperfection inherent to both temporal relations and intervals.
Design/methodology/approach
On the one hand, fuzzy extensions of Allen temporal relations are investigated and, on the other hand, extended temporal relations to define the positions of two fuzzy time intervals are introduced. Then, a database system, called Fuzzy Temporal Information Management and Exploitation (Fuzz-TIME), is developed for the purpose of processing fuzzy temporal queries.
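One simple way to picture a fuzzified Allen relation is sketched below. The representation (trapezoidal fuzzy intervals) and all function names are illustrative assumptions for this sketch, not the actual Fuzz-TIME API: a fuzzy interval `(a, b, c, d)` certainly holds on `[b, c]` and possibly holds on `[a, d]`, and the degree to which one interval is "before" another is graded rather than Boolean.

```python
def before_degree(x, a, b):
    """Degree to which instant x is clearly before instant a,
    with a fuzzy transition zone of width b (b > 0)."""
    if x <= a - b:
        return 1.0
    if x >= a:
        return 0.0
    return (a - x) / b

def fuzzy_before(i1, i2, slack=1.0):
    """Degree to which fuzzy interval i1 ends before fuzzy interval i2
    begins; i1 and i2 are trapezoids (a, b, c, d)."""
    end_of_i1 = i1[3]    # latest possible end of i1
    start_of_i2 = i2[0]  # earliest possible start of i2
    return before_degree(end_of_i1, start_of_i2, slack)

# An event that ended by 1450 vs. one starting around 1451:
# the relation holds only to degree 0.2 with a 5-year slack.
print(fuzzy_before((1400, 1410, 1440, 1450), (1451, 1455, 1460, 1465), slack=5.0))  # 0.2
```

A fuzzy temporal query then ranks answers by such degrees instead of filtering on a crisp yes/no relation.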
Findings
To evaluate the proposal, the authors have implemented a Fuzz-TIME system and created a fuzzy historical database for querying purposes. Some demonstrative scenarios from the history domain are proposed and discussed.
Research limitations/implications
The authors have conducted some experiments on archaeological data to show the effectiveness of the Fuzz-TIME system. However, thorough experiments on large-scale databases are highly desirable to show the behavior of the tool with respect to performance and execution-time criteria.
Practical implications
The tool developed (Fuzz-TIME) can have many practical applications where time information has to be dealt with, particularly in real-world domains such as history, medicine, criminal investigation and finance, where time is often perceived or expressed in an imprecise/fuzzy manner.
Social implications
The social implications of this work can be expected, more particularly, in two domains: in museums, to manage, exploit and analyze information related to archives and historical data; and in hospitals/medical organizations, to deal with the time information inherent to data about patients and diseases.
Originality/value
This paper presents the design and characterization of a novel and intelligent database system to process and manage the imperfection inherent to both temporal relations and intervals.
Hind Hamrouni, Fabio Grandi and Zouhaier Brahmia
A temporal XML database could become an inconsistent model of the represented reality after a retroactive update. Such an inconsistency state must be repaired by performing…
Abstract
Purpose
A temporal XML database could become an inconsistent model of the represented reality after a retroactive update. Such an inconsistency state must be repaired by performing corrective actions (e.g. payment of arrears after a retroactive salary increase) either immediately (i.e. at inconsistency detection time) or in a deferred manner, at one or several chosen repair times according to application requirements. The purpose of this work is to deal with deferred and multi-step repair of detected data inconsistencies.
Design/methodology/approach
A general approach for deferred and stepwise repair of inconsistencies that result from retroactive updates of currency data (e.g. the salary of an employee) in a valid-time or bitemporal XML database is proposed. The approach separates the inconsistency repairs from the inconsistency detection phase and deals with the execution of corrective actions, which also take into account enterprise’s business rules that define some relationships between data.
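The canonical corrective action mentioned above (payment of arrears after a retroactive salary increase) can be sketched as follows. This is a minimal illustration under assumed names, not the paper's Deferred-Repair Manager: the repair amount is driven by the gap between the new value's valid-time start and the transaction time at which the update was actually recorded.

```python
from datetime import date

def months_between(start, end):
    """Number of whole months from start up to (and excluding) end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def arrears(old_salary, new_salary, valid_from, recorded_on):
    """Arrears owed for the months already paid at the old salary
    before the retroactive increase was recorded."""
    overdue_months = months_between(valid_from, recorded_on)
    return (new_salary - old_salary) * overdue_months

# A raise valid from January but recorded only in April
# leaves three months of arrears to repair.
print(arrears(3000, 3200, date(2024, 1, 1), date(2024, 4, 1)))  # 600
```

In a deferred or multi-step strategy, such a corrective action is not executed at detection time but scheduled for one or several later repair times.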
Findings
Algorithms, methods and support data structures for deferred and multi-step inconsistency repair of currency data are presented. The feasibility of the approach has been shown through the development and testing of a system prototype, named Deferred-Repair Manager.
Originality/value
The proposed approach implements a new, general and flexible strategy for repairing detected inconsistencies in a deferred manner and possibly in multiple steps, according to varying users' requirements and to specifications that are customary in the real world.
Xu Rui, Cui Ping‐yuan, Xu Xiao‐fei and Cui Hu‐tao
Because of the indeterminateness of the environment and communication delays, deep-space spacecraft are required to be autonomous. Planning technology is studied in order to…
Abstract
Because of the indeterminateness of the environment and communication delays, deep-space spacecraft are required to be autonomous. Planning technology is studied in order to realize spacecraft autonomy. First, a multi-agent planning system (MAPS) based on temporal constraint satisfaction is proposed to handle the concurrency and distribution of the spacecraft system. Second, the timeline concept is used to describe simultaneous activities, continuous time, resources and temporal constraints. Third, for every planning agent in the MAPS, a layered architecture is designed and a planning algorithm based on temporal constraint satisfaction is given in detail. Finally, taking some key subsystems of a deep-space explorer as an example, a prototype of the MAPS is implemented. The results show that, through the communication and cooperation of the planning agents, the MAPS is able to quickly produce a complete plan for the explorer mission under complex time and resource constraints.
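A minimal sketch of the kind of temporal constraint satisfaction such a planner builds on is given below, assuming a Simple Temporal Network formulation (the paper's own formulation may differ). Constraints of the form t_j − t_i ≤ w define a weighted graph over time points; the network is consistent exactly when the graph has no negative cycle, which Floyd–Warshall detects.

```python
INF = float("inf")

def stn_consistent(n, constraints):
    """constraints: list of (i, j, w) meaning t_j - t_i <= w
    over n time points. Returns True iff the network is consistent."""
    # Distance matrix of the constraint graph.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in constraints:
        d[i][j] = min(d[i][j], w)
    # Floyd-Warshall shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative self-distance signals a negative cycle: inconsistency.
    return all(d[i][i] >= 0 for i in range(n))

# Activity A runs from t0 to t1, lasting 5 to 10 units,
# and must finish before activity B starts at t2.
print(stn_consistent(3, [(0, 1, 10), (1, 0, -5), (2, 1, 0)]))  # True
```

Planning agents can exchange and merge such constraint sets, re-checking consistency as activities are added.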
Isabelle Boydens and Seth van Hooland
This paper seeks to present a conceptual framework to analyze and improve the quality of empirical databases throughout time – with operational results which are measurable in…
Abstract
Purpose
This paper seeks to present a conceptual framework to analyze and improve the quality of empirical databases throughout time – with operational results which are measurable in terms of cost‐benefit.
Design/methodology/approach
Basing themselves on the general approach of hermeneutics and, more specifically, on Fernand Braudel's concept of “temporalités étagées” and Norbert Elias's “evolutive continuum”, the authors develop a temporal framework consisting of three stratified time levels in order to interpret shifts in the quality of databases. The soundness of the framework and its capability of delivering operational results are demonstrated by the development of a case study focusing on social security databases. A second case study in the context of digital cultural heritage is also developed to illustrate the general applicability of this interdisciplinary approach in the context of empirical information systems.
Findings
Contrary to the assertions of common theories that postulate a permanent bijective relationship between records in a database and the corresponding reality, this paper provides insights which demonstrate that a database evolves over time along with the interpretation of the values that it allows one to determine. These interdisciplinary insights, when applied practically to concrete case studies, give rise to original operational results in the ICT field of data quality.
Practical implications
The framework helps both the managers and the users of empirical databases to understand the necessity to integrate unforeseen observations, neglected a priori by virtue of the closed world assumption, and to develop operational recommendations to enhance the quality of databases.
Originality/value
This paper is the first to show the potential of hermeneutics for the task of understanding the evolution of an empirical information system, and also the first to deliver operational outcomes.
With the rapid development of indoor positioning technologies such as radio-frequency identification (RFID), Bluetooth and Wi-Fi, the locations of indoor spatial…
Abstract
Purpose
With the rapid development of indoor positioning technologies such as radio-frequency identification (RFID), Bluetooth and Wi-Fi, the locations of indoor spatial objects (static or moving) constitute an important foundation for a variety of applications. However, there are many challenges and limitations associated with the structuring and querying of spatial objects in indoor spaces. The purpose of this study is to address the current trends, limitations and future challenges associated with the structuring and querying of spatial objects in indoor spaces, as well as related features of indoor spaces such as indoor structures, positioning technologies and others.
Design/methodology/approach
In this paper, the author focuses on understanding the aspects and challenges of spatial database management in indoor spaces. The author explains the differences between indoor and outdoor spaces, examines the issues pertaining to indoor positioning and the impact of the different shapes and structures within these spaces, and considers the varieties of spatial queries that relate specifically to indoor spaces.
Findings
Most of the research on data management in indoor spaces does not consider the issues and challenges associated with indoor positioning, such as the overlapping of Wi-Fi signals. Future work on indoor spaces will need to include differently shaped indoor environments beyond the 2D indoor spaces on which the majority of data structures and query processing for spatial objects have focused. The diversity of indoor environment features, such as directed floors and multi-floor cases, should be considered and studied. Furthermore, indoor environments involve many special queries besides the common queries used in outdoor spaces, such as kNN, range and temporal queries. These special queries need to be considered in the data management and querying of indoor environments.
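The contrast between outdoor and indoor query processing can be made concrete with a small sketch (all data structures and names here are illustrative assumptions, not from the paper): an outdoor range query filters by straight-line distance, whereas an indoor range query must follow the walkable topology of rooms, doors and corridors.

```python
import math

def euclidean_range(objects, center, radius):
    """Outdoor-style range query: straight-line distance only."""
    return [name for name, (x, y) in objects.items()
            if math.dist((x, y), center) <= radius]

def indoor_range(objects, rooms, reachable, center_room, radius_hops):
    """Indoor-style range query: objects within radius_hops rooms of
    center_room, following the door/corridor adjacency graph."""
    frontier, seen = {center_room}, {center_room}
    for _ in range(radius_hops):
        frontier = {r2 for r in frontier for r2 in reachable.get(r, ())} - seen
        seen |= frontier
    return [name for name, room in rooms.items() if room in seen]

objects = {"cart": (1.0, 1.0), "bed": (3.0, 1.0)}
rooms = {"cart": "101", "bed": "102"}
reachable = {"101": ["corridor"], "corridor": ["101", "102"], "102": ["corridor"]}

print(euclidean_range(objects, (0.0, 0.0), 2.5))          # ['cart']
print(indoor_range(objects, rooms, reachable, "101", 2))  # ['cart', 'bed']
```

The "bed" is close in Euclidean terms but two hops away through the corridor, which is why outdoor query processing does not carry over directly to indoor spaces.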
Originality/value
To the best of the author’s knowledge, this paper successfully addresses the current trends, limitations and future challenges associated with the structuring and querying of spatial objects in indoor spaces.
A wide number of technologies are currently in store to harness the challenges posed by pandemic situations. As such diseases transmit by way of person-to-person contact or by any…
Abstract
Purpose
A wide range of technologies is currently in store to harness the challenges posed by pandemic situations. As such diseases transmit by way of person-to-person contact or by other means, the World Health Organization recommended location tracking and tracing of people either infected or in contact with patients as one of the standard operating procedures, and also outlined protocols for incident management. Government agencies use different inputs, such as smartphone signals and details from respondents, to prepare the travel logs of patients. Every event of their trace, such as stay points, revisited locations and meeting points, is important. More trained staff and tools are required under the traditional system of contact tracing. When the patient count spirals, the time-bound tracing of primary and secondary contacts may not be possible, and there are chances of human error as well. In this context, the purpose of this paper is to propose an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and the vulnerability of locations.
Design/methodology/approach
Pandemic situations push the world into existential crises. In this context, this paper proposes an algorithm called SemTraClus-Tracer, an efficient approach for computing the movement of individuals and analysing the possibility of pandemic spread and vulnerability of the locations. By exploring the daily mobility and activities of the general public, the system identifies multiple levels of contacts with respect to an infected person and extracts semantic information by considering vital factors that can induce virus spread. It grades different geographic locations according to a measure called weightage of participation so that vulnerable locations can be easily identified. This paper gives directions on the advantages of using spatio-temporal aggregate queries for extracting general characteristics of social mobility. The system also facilitates room for the generation of various information by combing through the medical reports of the patients.
Findings
It is identified that the context of movement is important; hence, the existing SemTraClus algorithm is modified to account for four important factors: stay point, contact presence, stay time of primary contacts and waypoint severity. The priority level can be reconfigured according to the interest of the authority. This approach reduces the overwhelming task of contact tracing. Different functionalities provided by the system are also explained. As the real data set is not available, experiments are conducted with similar data, and results are shown for different types of journeys in different geographical locations. The proposed method efficiently handles movement computation and activity analysis by incorporating various relevant semantics of trajectories. The incorporation of cluster-based aggregate queries in the model does away with the computational headache of processing the entire mobility data.
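Stay-point extraction, the first of the four factors above, can be sketched as follows. The function, thresholds and sample track are assumptions for illustration, not taken from the paper: a stay point is a region where a person lingered within a distance threshold for at least a time threshold, summarized by the mean position of the lingering sub-track.

```python
import math

def stay_points(track, dist_thresh=50.0, time_thresh=600):
    """track: time-ordered list of (x, y, t) samples, t in seconds.
    Returns mean positions of sub-tracks where the person lingered
    within dist_thresh metres for at least time_thresh seconds."""
    points, i = [], 0
    while i < len(track):
        # Extend the window while samples stay near the anchor point.
        j = i + 1
        while j < len(track) and math.dist(track[i][:2], track[j][:2]) <= dist_thresh:
            j += 1
        if track[j - 1][2] - track[i][2] >= time_thresh:
            xs = [p[0] for p in track[i:j]]
            ys = [p[1] for p in track[i:j]]
            points.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        i = j
    return points

# Eleven minutes near the origin, then a jump 700 m away:
# one stay point is detected around the origin.
track = [(0, 0, 0), (10, 0, 300), (5, 5, 700), (500, 500, 760)]
print(stay_points(track))
```

Contact presence can then be graded by checking which other trajectories have stay points overlapping in both space and time.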
Research limitations/implications
As the trajectory of patients is not available, the authors have used the standard data sets for experimentation, which serve the purpose.
Originality/value
This paper proposes a framework infrastructure that allows the emergency response team to grab multiple information based on the tracked mobility details of a patient and facilitates room for various activities for the mitigation of pandemics such as the prediction of hotspots, identification of stay locations and suggestion of possible locations of primary and secondary contacts, creation of clusters of hotspots and identification of nearby medical assistance. The system provides an efficient way of activity analysis by computing the mobility of people and identifying features of geographical locations where people travelled. While formulating the framework, the authors have reviewed many different implementation plans and protocols and arrived at the conclusion that the core strategy followed is more or less the same. For the sake of a reference model, the Indian scenario is adopted for defining the concepts.
The purpose of this paper is to explore the data connection, spatial distribution characteristics and trends in genealogical information. First, it implements a spatial-temporal…
Abstract
Purpose
The purpose of this paper is to explore the data connection, spatial distribution characteristics and trends in genealogical information. First, it implements a spatial-temporal visualization of the Hakka genealogical information system that makes these individual family pedigree charts appear as one seamless genealogy to family and researchers seeking connections and family history all over the world. Second, this study applies migration analysis by applying big data technologies to Hakka genealogies to investigate the migration patterns of the Hakka ethnic group in Taiwan between 1954 and 2014. This innovative library service enhances the Hakka genealogical migration analysis using big data.
Design/methodology/approach
The platform is designed for the exchange of genealogical data to be used in big data analysis. This study integrates big data and geographic information systems (GIS) to map the population distribution themes. The general procedure included collecting genealogical big data, geographic encoding, gathering the map information, GIS layer integration and migration map production.
Findings
The analytical results demonstrate that big data technology is highly appropriate for family migration history analysis, given the increasing volume, velocity and variety of genealogical data. The spatial-temporal visualization of the genealogical research platform can follow family history and migration paths, and dynamically generate roadmaps to simplify the cartographic steps.
Practical implications
Technology that combines big data and GIS is suitable for performing migration analysis based on genealogy. A web-based application for spatial-temporal genealogical information also demonstrates the contribution of innovative library services.
Social implications
Big data play a dominant role in library services, and in turn, provide an active library service. These findings indicate that big data technology can provide a suitable tool for improving library services.
Originality/value
Online genealogy and family trees are linked with large-volume, growing data sets that are complex and have multiple, autonomous sources. The migration analysis using big data has the potential to help genealogy researchers to construct minority ethnic history.
Linked data technologies promise different ways of querying and retrieving information that enable individuals to have search experiences that are broader and more coordinated…
Abstract
Purpose
Linked data technologies promise different ways of querying and retrieving information that enable individuals to have search experiences that are broader and more coordinated than those common in current library technologies. It is vital that information technologies be able to incorporate temporal capabilities or reasoning to allow for the more nuanced interactions with resources, particularly as they change over time. The purpose of this paper is to assess methods currently in use that allow for temporal querying of resources serialized as linked data.
Design/methodology/approach
This paper examines philosophical models, experimental approaches and common standards to identify areas of alignment and divergence in their orientations toward serializing time and change as linked data. By framing approaches and standards within the context of philosophical theories, a clear preference for certain models of time emerges.
Findings
While there have been several approaches to serializing time as linked data, none has found its way into a full implementation by standards in common use. Further, approaches to the issue are largely rooted in one school of philosophical thought that is particularly oriented to computational approaches. As such, there is a gap between methods and standards, and ample room for further investigation into temporal models that may be applicable in different contexts. A call for investigation into a model that can cascade into different temporal approaches is provided.
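One common serialization pattern in this space can be sketched as follows (the data model and names are illustrative assumptions, not any particular standard's): each statement is reified with a validity interval, and a temporal query keeps only the statements whose interval covers the queried instant.

```python
from collections import namedtuple

# A reified statement: a triple plus its valid-time interval.
Statement = namedtuple("Statement", "subject predicate obj valid_from valid_to")

def at_time(graph, instant):
    """Return the (s, p, o) triples that held at the given instant."""
    return [(s.subject, s.predicate, s.obj)
            for s in graph
            if s.valid_from <= instant < s.valid_to]

graph = [
    Statement("lib:Item42", "dct:title", "First edition", 1990, 2005),
    Statement("lib:Item42", "dct:title", "Second edition", 2005, 9999),
]

print(at_time(graph, 2000))  # [('lib:Item42', 'dct:title', 'First edition')]
```

Reification of this kind is one of the approaches the paper's survey covers; the open question it raises is which temporal model such intervals should be grounded in.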
Originality/value
While there are many papers concerning the serialization of time as linked data, none has tried to thoroughly align these approaches with philosophical theories of time and, further, with the standards currently in use.