Search results
Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang
Abstract
Purpose
This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.
Design/methodology/approach
In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query stream to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed.
Findings
Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension have also been discussed.
Originality/value
To revolutionize our modern society by using the wisdom of Big Data, considerable KBs have been constructed to feed the massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been devoted to both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system is proposed, which consists of two phases, namely, knowledge extraction and truth discovery, to construct a broader KB, called GrandBase.
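The multi-valued truth discovery problem the abstract describes can be illustrated with a toy sketch (this is not GrandBase's actual graph-based algorithm; the sources, predicate and threshold below are invented for illustration). Each source claims a *set* of values for one predicate, and candidate values are accepted when enough source trust supports them, rather than forcing a single true value:

```python
from collections import defaultdict

# Hypothetical sources claiming value sets for the multi-valued
# predicate "children_of(X)". A single-truth model would be forced
# to pick exactly one value, which is wrong here.
claims = {
    "site_a": {"Malia", "Sasha"},
    "site_b": {"Malia"},
    "site_c": {"Malia", "Sasha", "Bo"},  # "Bo" is an erroneous claim
}

# Score each candidate value by the summed trust of the sources
# asserting it; uniform trust priors keep the sketch simple.
trust = {source: 1.0 for source in claims}
support = defaultdict(float)
for source, values in claims.items():
    for v in values:
        support[v] += trust[source]

# Accept every value backed by a majority of the total trust.
threshold = 0.5 * sum(trust.values())
accepted = {v for v, s in support.items() if s > threshold}
print(sorted(accepted))  # ['Malia', 'Sasha'] — the error is filtered out
```

A real system would additionally iterate, re-estimating source trust from how often each source agrees with the currently accepted values.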
Nikitas N. Karanikolas and Michael Vassilakopoulos
Abstract
Purpose
The purpose of this paper is to compare the use of two Object-Relational models against the use of a post-Relational model for a realistic application. Although real-world applications, in most cases, can be adequately modeled by the Entity-Relationship (ER) model, the transformation to the popular Relational model alters the representation of structures common in reality, like multi-valued and composite fields. Alternative database models have been developed to overcome these shortcomings.
Design/methodology/approach
Based on the ER model of a medical application, this paper compares the information representation, manipulation and enforcement of integrity constraints through PostgreSQL and Oracle, against the use of a post-Relational model composed of the Conceptual Universal Database Language (CUDL) and the Conceptual Universal Database Language Abstraction Level (CAL).
Findings
The CAL/CUDL pair, although more periphrastic for data definition, is simpler for data insertions, does not require the use of procedural code for data updates, produces clearer output for retrieval of attributes, can accomplish retrieval of rows based on conditions that address composite data with declarative statements and supports data validation for relationships between composite data without the need for procedural code.
Research limitations/implications
To verify, in practice, the conclusions of the paper, complete implementation of a CAL/CUDL system is needed.
Practical implications
The use of the CAL/CUDL pair would advance the productivity of database application development.
Originality/value
This paper highlights the properties of realistic database-applications modelling and management that are desirable by developers and shows that these properties are better satisfied by the CAL/CUDL pair.
Zeno Toffano and François Dubois
Abstract
Purpose
The purpose of this paper is to apply the quantum “eigenlogic” formulation to behavioural analysis. Agents, represented by Braitenberg vehicles, are investigated in the context of the quantum robot paradigm. The agents are processed through quantum logical gates with fuzzy and multivalued inputs; this makes it possible to enlarge the behavioural possibilities and the associated decisions for these simple vehicles.
Design/methodology/approach
In eigenlogic, the eigenvalues of the observables are the truth values and the associated eigenvectors are the logical interpretations of the propositional system. Logical observables belong to families of commuting observables for binary logic and many-valued logic. By extension, a fuzzy logic interpretation is proposed by using vectors outside the eigensystem of the logical connective observables. The fuzzy membership function is calculated by the quantum mean value (Born rule) of the logical projection operators and is associated with a quantum probability. The methodology of this paper is based on quantum measurement theory.
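The Born-rule fuzzy reading described above can be sketched numerically (a minimal illustration, not the paper's own operator catalogue; the choice of connective and input probabilities is assumed). In eigenlogic the connective AND on two propositions corresponds to the projector with eigenvalue 1 only on the state |11⟩, and feeding it a superposed, "fuzzy" input state yields a membership degree via the quantum mean value:

```python
import numpy as np

# AND as a logical observable: eigenvalue 1 only for |11>, i.e. diag(0,0,0,1).
AND = np.diag([0.0, 0.0, 0.0, 1.0])

def fuzzy_qubit(p):
    """A fuzzy input: a superposition with probability p of being True."""
    return np.array([np.sqrt(1.0 - p), np.sqrt(p)])

p, q = 0.8, 0.5
psi = np.kron(fuzzy_qubit(p), fuzzy_qubit(q))  # two-proposition product state

# Quantum mean value (Born rule) of the projector = fuzzy membership degree.
membership = psi @ AND @ psi
print(membership)  # 0.8 * 0.5 = 0.4, the product t-norm for AND
```

The state lies outside the projector's eigensystem, so the mean value falls strictly between the classical truth values 0 and 1, which is exactly the fuzziness the Findings section attributes to such states.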
Findings
Fuzziness arises naturally when considering systems described by state vectors not in the considered logical eigensystem. These states correspond to incompatible and complementary systems outside the realm of classical logic. Considering these states allows the detection of new Braitenberg vehicle behaviours related to identified emotions; these are linked to quantum-like effects.
Research limitations/implications
The method does not deal at this stage with first-order logic and is limited to different families of commuting logical observables. An extension to families of non-commuting logical operators associated with predicate quantifiers could profit from the “quantum advantage” due to effects such as superposition, parallelism, non-commutativity and entanglement. This direction of research has a variety of applications, including robotics.
Practical implications
The goal of this research is to show the multiplicity of behaviours obtained by using fuzzy logic along with quantum logical gates in the control of simple Braitenberg vehicle agents. By changing and combining different quantum control gates, one can tune small changes in the vehicle’s behaviour and hence obtain specific features around the robot’s main basic emotions.
Originality/value
This paper presents a new mathematical formulation of propositional logic based on linear algebra. This methodology demonstrates the potential of the formalism for behavioural agent models (quantum robots).
Robert Gaizauskas and Yorick Wilks
Abstract
In this paper we give a synoptic view of the growth of the text processing technology of information extraction (IE) whose function is to extract information about a pre‐specified set of entities, relations or events from natural language texts and to record this information in structured representations called templates. Here we describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 70s till the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or are about to be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies, such as information retrieval, machine translation and data mining.
José Luis Usó Doménech, Josué Antonio Nescolarde-Selva, Hugh Gash and Lorena Segura-Abad
Abstract
Purpose
The distinction between essence and existence cannot be a distinction in God: in the actual infinite, essence and existence coincide and are one. In it, maximum and minimum coincide. Coincidentia oppositorum is a Latin phrase meaning coincidence of opposites. It is a neo-Platonic term, attributed to the fifteenth-century German scholar Nicholas of Cusa in his essay, Docta Ignorantia. God (coincidentia oppositorum) is the synthesis of opposites in a unique and absolutely infinite being. God transcends all distinctions and oppositions that are found in creatures. The purpose of this paper is to study Cusanus’s thought in respect to infinity (actual and potential), Spinoza’s relationship with Cusanus, and present a mathematical theory of coincidentia oppositorum based on complex numbers.
Design/methodology/approach
Mathematical development of a dialectical logic is carried out with truth values in a complex field.
Findings
The conclusion is the same as has been made by thinkers and mystics throughout time: the inability to know and understand the idea of God.
Originality/value
The history of the Infinite thus reveals in both mathematics and philosophy a development of increasingly subtle thought in the form of a dialectical dance around the ineffable and incomprehensible Infinite. First, the authors step toward it, reaching with their intuition beyond the limits of rationality and thought into the realm of the paradoxical. Then, they step back, struggling to express their insight within the limited scope of reason. But the Absolute Infinite remains, at the border of comprehensibility, inviting them with its paradoxes, to once again step forward and transcend the apparent division between finite and Infinite.
Abstract
Purpose
Identifying the fundamental characteristics of meaning and deriving an automated meaning‐analysis procedure for machine intelligence.
Design/methodology/approach
Semantic category theory (SCT) is an original testable scientific theory, based on readily available data: not assumptions or axioms. SCT can therefore be refuted by irreconcilable data: not opinion.
Findings
Human language involves four totally independent semantic categories (SC), each of which has its own distinctive form of “Truth”. Any sentence that assigns the characteristics of one SC to another SC involves what is termed here “Semantic Intertwine”. Semantic intertwine often lies at the core of semantic ambiguity, sophistry and paradox: problems that have plagued human reason since antiquity.
Research limitations/implications
SCT is applicable to any endeavour involving human language. Research applications are therefore somewhat extensive. For example, identifying metaphors posing as science, or natural language processing/translation, or solving disparate paradox types, as illustrated by worked examples from: The Liar Group, Sorites Inductive, Russell's Set Theoretic and Zeno's Paradoxes.
Practical implications
To interact successfully with human language, behaviour, and belief systems, as well as their own environment, intelligent machines will need to resolve the semantic component/intertwines of any sentence. Semantic category analysis (SCA), derived from SCT, and also described here, can be used to analyse any sentence or argument, however complex.
Originality/value
Both SCT and SCA are original. Whilst “category error” is an intuitive notion, the observably precise nature, number and modes of interaction of such categories have never previously been presented. With SCT/SCA the rigorous analysis of any argument, whether foisted, valid, or obfuscating, is now possible: by man or machine.
Abstract
Purpose
The recent scientific observation that human information processing involves four independent data types has pinpointed a source of fallacious arguments within many domains of human thought. The species-unique ability to assign observable characteristics to purely conceptual entities has created beautiful poetry and literature. However, this ability to generate “Semantic Intertwine” has also created the most incomprehensible paradoxes and conundrums. The paper aims to discuss these issues.
Design/methodology/approach
Semantic Intertwine can be created between, or within, Semantic Categories; and in either case it then lies at the heart of fallacious, yet often very persuasive, reasoning.
Findings
This paper describes how to detect mathematically related Semantic Intertwine in erroneous arguments involving: operands (VIII), mathematical induction (VIIA), orthogonal axiom sets (VIIB), continuous functions (VIIA), exclusive disjunction (VIIA), propositional calculus (VI) and the hitherto thorny problems arising from ambiguous intra-category use of “infinity” (VIIB).
Originality/value
The applications of Semantic Category Analysis (SCA) are manifold. Determine the Semantic Categories involved in an argument and their modes of combination, and any Semantic Intertwine revealed pinpoints erroneous reasoning. SCA can be applied to any domain of human thought.
Abstract
In this paper a new approach is proposed for dealing with uncertainty in reasoning. A numerical quantification based on possibility theory is used in the representation of uncertain facts or rules. The chaining of uncertain rules and the combination of results obtained from different chains of inference, are discussed at length. Only min or max operations are used in the chaining and in the combining processes for computing the possibility degrees corresponding to the different alternatives. Partial similarities with other approaches are pointed out.
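The min/max machinery the abstract describes can be sketched as follows (a hedged illustration of standard possibilistic reasoning, not necessarily the paper's exact formulation; the rules and degrees are invented). A conclusion reached through one chain of uncertain rules gets the min of the fact's degree and the rules' degrees, and the same conclusion reached via several chains gets the max over those chains:

```python
def chain(fact_degree, rule_degrees):
    """Possibility degree through one chain of uncertain rules: min."""
    return min([fact_degree] + list(rule_degrees))

def combine(chain_degrees):
    """Combining the same conclusion from several chains: max."""
    return max(chain_degrees)

# Two independent inference chains supporting the same conclusion:
c1 = chain(0.9, [0.7, 0.8])  # min(0.9, 0.7, 0.8) = 0.7
c2 = chain(0.6, [1.0])       # min(0.6, 1.0)      = 0.6
print(combine([c1, c2]))     # max(0.7, 0.6)      = 0.7
```

Only min and max are used, so no numerical precision is lost through the chaining and combining steps, in contrast to probabilistic propagation schemes that multiply degrees.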