Search results

1 – 10 of over 15000
Article
Publication date: 7 April 2020

Sivasankari S, Dinah Punnoose and Krishnamoorthy D

Abstract

Purpose

Erythemato-squamous disease (ESD) is one of the complex diseases in the dermatology field. Because the ESD subtypes share common morphological features, their diagnosis is difficult and prone to inconsistency. Moreover, diagnosis has traditionally relied on visible symptoms interpreted through the expertise of the physician. Constructing an ontology for ESD is therefore essential to ensure credibility and consistency, to compensate for shortages of time, labor and expertise, and to reduce human error.

Design/methodology/approach

This paper presents the design of an automatic ontology-construction framework based on data mining techniques and then demonstrates the diagnosis of ESD using the resulting knowledge- and rule-based system.

Findings

The rule language (Semantic Web Rule Language) and the rule engines (Jess and Drools) are integrated to assess the severity of the ESD and to predict the most appropriate class to suggest.

Social implications

The authors evaluate the efficiency of the rule engines and investigate the performance of the computational techniques in predicting ESD using three different measures.

Originality/value

The first measure assesses the transfer time for the total number of axioms exported to the rule engine (Jess and Drools); the second measures the number of inferred axioms produced by the rule engine (process time); and the third calculates the time to translate the inferred axioms back into OWL knowledge (execution time).
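
As a rough illustration of these three measures (not the authors' implementation; the axioms and the symptom-to-class rule below are purely hypothetical), the following plain-Java sketch times a toy export/inference/translation pipeline:

```java
// Sketch only: a toy stand-in for an OWL-to-rule-engine pipeline, used to
// illustrate the three measures (transfer, process, execution time).
// The Axiom type and the single rule are hypothetical, not the Jess/Drools API.
import java.util.ArrayList;
import java.util.List;

public class RulePipelineTiming {

    record Axiom(String subject, String predicate, String object) {}

    public static void main(String[] args) {
        List<Axiom> owlAxioms = List.of(
                new Axiom("patient1", "hasSymptom", "scaling"),
                new Axiom("patient1", "hasSymptom", "erythema"));

        // 1) Transfer time: exporting axioms into the rule engine's working memory.
        long t0 = System.nanoTime();
        List<Axiom> workingMemory = new ArrayList<>(owlAxioms);
        long transferTime = System.nanoTime() - t0;

        // 2) Process time: firing an illustrative rule and collecting inferred axioms.
        long t1 = System.nanoTime();
        List<Axiom> inferred = new ArrayList<>();
        boolean scaling = workingMemory.stream().anyMatch(a -> a.object().equals("scaling"));
        boolean erythema = workingMemory.stream().anyMatch(a -> a.object().equals("erythema"));
        if (scaling && erythema) {
            // Invented rule for illustration only, not clinical guidance.
            inferred.add(new Axiom("patient1", "suggestedClass", "psoriasis"));
        }
        long processTime = System.nanoTime() - t1;

        // 3) Execution time: translating inferred facts back into OWL-style statements.
        long t2 = System.nanoTime();
        List<String> owlOut = inferred.stream()
                .map(a -> a.subject() + " " + a.predicate() + " " + a.object())
                .toList();
        long executionTime = System.nanoTime() - t2;

        System.out.printf("transfer=%dns process=%dns execution=%dns inferred=%s%n",
                transferTime, processTime, executionTime, owlOut);
    }
}
```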

Details

International Journal of Intelligent Unmanned Systems, vol. 8 no. 4
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 5 December 2018

Christian Janiesch and Jörn Kuhlenkamp

Abstract

Purpose

Changes in workflow relevant data of business processes at run-time can hinder their completion or impact their profitability as they have been instantiated under different circumstances. The purpose of this paper is to propose a context engine to enhance a business process management (BPM) system’s context-awareness. The generic architecture provides the flexibility to configure processes during initialization as well as to adapt running instances at decision gates or during execution due to significant context change.

Design/methodology/approach

The paper discusses context-awareness as the conceptual background. The technological capabilities of business rules and complex event processing (CEP) are outlined in an architecture design. A reference process is proposed and discussed in an exemplary application.
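
To make the idea of a context engine concrete, here is a minimal, hypothetical sketch in plain Java: context events arrive from a simulated stream, and business-rule-like predicates decide whether a running process instance should be adapted. The class names, rules and events are invented for illustration and are not the reference architecture itself.

```java
// Sketch only: a minimal context engine in the spirit of a CEP-plus-business-rules
// design. All class, rule and event names are hypothetical.
import java.util.List;
import java.util.function.Predicate;

public class ContextEngineSketch {

    record ContextEvent(String processInstance, String variable, double value) {}

    // A "business rule" here is just a condition plus an adaptation action.
    record ContextRule(String name, Predicate<ContextEvent> condition, String adaptation) {}

    public static void main(String[] args) {
        List<ContextRule> rules = List.of(
                new ContextRule("fuel-price-spike",
                        e -> e.variable().equals("fuelPrice") && e.value() > 2.0,
                        "re-plan delivery route at next decision gate"),
                new ContextRule("temperature-drop",
                        e -> e.variable().equals("temperature") && e.value() < 0.0,
                        "switch to cold-chain handling"));

        // Simulated event stream; a CEP engine would correlate and filter these.
        List<ContextEvent> stream = List.of(
                new ContextEvent("order-42", "fuelPrice", 2.3),
                new ContextEvent("order-42", "temperature", 5.0));

        for (ContextEvent event : stream) {
            rules.stream()
                 .filter(r -> r.condition().test(event))
                 .forEach(r -> System.out.printf(
                         "instance %s: rule '%s' fired -> %s%n",
                         event.processInstance(), r.name(), r.adaptation()));
        }
    }
}
```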

Findings

The results provide an improvement over the current situation of static variable instantiation of business processes with local information. The proposed architecture extends the well-known combination of business rules and BPM systems with a context engine based on CEP.

Research limitations/implications

The resulting architecture for a BPM system using a context engine is generic in nature and, hence, needs to be contextualized for situated implementations. Implementation success depends on the availability of context information and process compensation options.

Practical implications

Practitioners receive advice on a reference architecture and technology choices for implementing systems that can provide and monitor context information for business processes as well as intervene in and adapt their execution.

Originality/value

Currently, there is no multi-purpose, non-proprietary context engine for BPM, based on CEP or any other technology, that facilitates the adaptation of processes at run-time in response to changes in context variables. This paper is intended to stimulate a debate between research and practice on suitable designs and technologies.

Details

Business Process Management Journal, vol. 25 no. 6
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 1 January 2006

Dan Eigeles

Abstract

Purpose

Aims to present intelligent authentication, authorization, and administration (I3A), a new concept that establishes trust and information security between the involved parties by agreement, rather than by over-exercised enforcement. To explain the needs and motivators behind the concept, the paper discusses the areas of technology, policy, law, and human mindsets.

Design/methodology/approach

Discusses two examples of possible solutions that would apply the concept in e-commerce.

Findings

Offers an open platform for enabling I3A of cryptographic keys, certificates, and privileges, and for integrating their use with secured applications on a wide variety of devices and environments.

Originality/value

Probably the first real exposition of the new concept I3A.

Details

Information Management & Computer Security, vol. 14 no. 1
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 23 August 2013

Changhyun Byun, Hyeoncheol Lee, Yanggon Kim and Kwangmi Ko Kim

Abstract

Purpose

It is difficult to build one's own social data set because data in social media are generally vast and noisy. The aim of this study is to specify the design and implementation details of a Twitter data collecting tool with a rule-based filtering module. Additionally, the paper aims to examine, in a case study with rule-based analysis, how people communicate with each other through social networks.

Design/methodology/approach

The authors developed a Java-based data gathering tool with a rule-based filtering module for collecting data from Twitter. The paper introduces the design specifications and explains the implementation details of the Twitter Data Collecting Tool with detailed Unified Modeling Language (UML) diagrams. The Model View Controller (MVC) framework is applied in the system to support various types of user interfaces.
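
A minimal sketch of what a rule-based filtering module can look like, assuming invented tweet fields and filter rules (this is not the authors' tool or its UML design):

```java
// Sketch only: the general shape of rule-based filtering over collected tweets.
// Fields, rules and the topic keyword are hypothetical.
import java.util.List;
import java.util.function.Predicate;

public class TweetFilterSketch {

    record Tweet(String user, String text, int followers, boolean isRetweet) {}

    public static void main(String[] args) {
        // Filtering rules combined with AND semantics; real rules could be
        // loaded from configuration instead of being hard-coded.
        List<Predicate<Tweet>> rules = List.of(
                t -> !t.isRetweet(),                                // drop retweets
                t -> t.followers() >= 10,                           // drop likely spam accounts
                t -> t.text().toLowerCase().contains("olympics"));  // topic filter

        List<Tweet> collected = List.of(
                new Tweet("alice", "Watching the Olympics opening!", 120, false),
                new Tweet("bot123", "buy followers now", 2, false));

        List<Tweet> kept = collected.stream()
                .filter(t -> rules.stream().allMatch(r -> r.test(t)))
                .toList();

        kept.forEach(t -> System.out.println(t.user() + ": " + t.text()));
    }
}
```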

Findings

The Twitter Data Collecting Tool is able to gather a huge amount of data from Twitter and to filter it with a modest set of rules that capture complex logic. The case study shows that a historical event creates buzz on Twitter and that people's interest in the event is reflected in their Twitter activity.

Research limitations/implications

Applying data-mining techniques to social network data has considerable potential. A possible improvement to the Twitter Data Collecting Tool would be the addition of a built-in data-mining module.

Originality/value

This paper focuses on designing a system that handles massive amounts of Twitter data. It is the first approach to embed a rule engine for filtering and analyzing social data. The paper will be valuable to those who want to build their own Twitter data set, apply customized filtering options to remove unnecessary, noisy data, and analyze social data to discover new knowledge.

Details

International Journal of Web Information Systems, vol. 9 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 September 2022

Ronald Ojino, Luisa Mich and Nerey Mvungi

Abstract

Purpose

The increasingly competitive hotel industry and emerging customer trends, where guests are more discerning and want a personalized experience, have led to the need for innovative applications. Personalization has become even more important for hotels in the post-COVID lockdown era, as it challenges their business model. However, personalization is difficult to design and realize because of the variety of factors and requirements to be considered: in the accommodation domain, differences exist both in the offer (hotels and their rooms) and in the demand (customers' profiles and needs). On the implementation side, critical issues lie in hardware-dependent, vendor-specific Internet of Things devices, which are difficult to program, and in the complexity of realizing applications that account for varying customer needs and context through existing personalization options. This paper aims to propose an ontological framework that enhances the capability of hotels to offer accommodation and personalization options based on a guest's characteristics, activities and needs.

Design/methodology/approach

A research approach combining quantitative and qualitative methods was used to develop a hotel room personalization framework. The core of the framework is a hotel room ontology (HoROnt) that supports well-defined, machine-readable descriptions of hotel rooms and guest profiles. Guest profiles are modeled via logical rules in an inference engine whose reasoning functionality is used to recommend hotel room services and features.
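
As a loose illustration of rule-driven personalization (the profile attributes and rules below are hypothetical and not drawn from HoROnt itself), a few such rules might look like this in plain Java:

```java
// Sketch only: how logical rules over a guest profile might drive room-feature
// recommendations. Profile fields and rules are invented for illustration.
import java.util.ArrayList;
import java.util.List;

public class RoomPersonalizationSketch {

    record GuestProfile(String name, boolean businessTrip, boolean hasAllergies, int age) {}

    public static void main(String[] args) {
        GuestProfile guest = new GuestProfile("guest-7", true, true, 68);

        List<String> recommendations = new ArrayList<>();
        // Each if-block plays the role of one semantic rule in the inference engine.
        if (guest.businessTrip())  recommendations.add("desk and high-speed Wi-Fi");
        if (guest.hasAllergies())  recommendations.add("hypoallergenic bedding");
        if (guest.age() >= 65)     recommendations.add("room near the elevator");

        System.out.println(guest.name() + " -> " + recommendations);
    }
}
```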

Findings

Both the ontology and the inference engine module have been validated, with promising results that demonstrate high accuracy. The framework leverages user characteristics and dynamic contextual data to satisfy guests' needs for personalized service provision. The semantic rules provide recommendations to both new and returning guests, thereby also addressing the cold-start issue.

Originality/value

This paper extends HoROnt in two ways: by adding instances of its concepts (room characteristics and services; guest profiles), thereby creating a knowledge base, and by adding logical rules to an inference engine that models guests' profiles and is used to offer personalized hotel rooms. Thanks to the standards adopted to implement personalization, the framework can be integrated into existing reservation systems. It can also be adapted to any type of accommodation, since it is broad-based and personalizes a variety of room features and amenities.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 27 May 2014

Sylva Girtelschmid, Matthias Steinbauer, Vikash Kumar, Anna Fensel and Gabriele Kotsis

Abstract

Purpose

The purpose of this article is to propose and evaluate a novel system architecture for Smart City applications that uses ontology reasoning and a distributed stream processing framework on the cloud. In the Smart City domain, methodologies of semantic modeling and automated inference are often applied. However, semantic models frequently face performance problems when applied at large scale.

Design/methodology/approach

The problem domain is addressed by combining methods from Big Data processing with semantic models. The architecture is designed so that traditional semantic models and rule engines can still be used for the Smart City model, while the sensor data produced in such Smart Cities are pre-processed by a Big Data streaming platform to reduce the workload passed to the rule engine.
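
A minimal sketch of the pre-processing idea, assuming a made-up change threshold and sensor names rather than the authors' streaming platform configuration:

```java
// Sketch only: pre-filtering a sensor stream so that only significant changes
// reach the rule engine. Thresholds and sensor names are hypothetical.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SensorPreFilterSketch {

    record Reading(String sensorId, double value) {}

    public static void main(String[] args) {
        double threshold = 0.5;                 // minimum change worth reasoning about
        Map<String, Double> lastForwarded = new HashMap<>();

        List<Reading> stream = List.of(
                new Reading("room1.temp", 21.0),
                new Reading("room1.temp", 21.1),   // below threshold, dropped
                new Reading("room1.temp", 22.0));  // significant, forwarded

        for (Reading r : stream) {
            Double prev = lastForwarded.get(r.sensorId());
            if (prev == null || Math.abs(r.value() - prev) >= threshold) {
                lastForwarded.put(r.sensorId(), r.value());
                System.out.println("forward to rule engine: " + r);
            } else {
                System.out.println("filtered out in streaming layer: " + r);
            }
        }
    }
}
```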

Findings

By creating a real-world implementation of the proposed architecture and running simulations of Smart Cities of different sizes on top of it, the authors found that combining Big Data streaming platforms with semantic reasoning is a valid approach to the problem.

Research limitations/implications

In this article, real-world sensor data from only two buildings were extrapolated for the simulations. Obviously, real-world scenarios will have a more complex set of sensor input values, which needs to be addressed in future work.

Originality/value

The simulations show that merely using a streaming platform as a buffer for sensor input values already increases the sensor data throughput and that by applying intelligent filtering in the streaming platform, the actual number of rule executions can be limited to a minimum.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 October 2020

Kan Ngamakeur and Sira Yongchareon

Abstract

Purpose

The paper aims to study realization requirements for the flexible enactment of artifact-centric business processes in a dynamic, collaborative environment and to develop a workflow execution framework that can effectively address those requirements.

Design/methodology/approach

This study proposes a framework and a contract-based, event-driven architecture, together with an implementation, that can directly realize collaborative artifact-centric business processes in a service-oriented architecture (SOA) without any model conversion.
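
To illustrate the artifact-centric idea in general terms (the artifact, states and events below are invented for this sketch and are not the authors' framework), an artifact's lifecycle can be modeled as event-driven state transitions:

```java
// Sketch only: a business artifact carries data plus a lifecycle, and events
// arriving from collaborating services drive its state transitions.
import java.util.Map;

public class ArtifactLifecycleSketch {

    enum State { CREATED, QUOTED, ORDERED, SHIPPED }

    // Lifecycle: current state -> (event -> next state).
    static final Map<State, Map<String, State>> TRANSITIONS = Map.of(
            State.CREATED, Map.of("quoteAccepted", State.QUOTED),
            State.QUOTED,  Map.of("paymentReceived", State.ORDERED),
            State.ORDERED, Map.of("goodsDispatched", State.SHIPPED));

    public static void main(String[] args) {
        State current = State.CREATED;
        // Events would arrive from collaborating services in an SOA setting.
        for (String event : new String[] {"quoteAccepted", "paymentReceived", "goodsDispatched"}) {
            State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
            if (next != null) {
                System.out.println(current + " --" + event + "--> " + next);
                current = next;
            } else {
                System.out.println("event " + event + " ignored in state " + current);
            }
        }
    }
}
```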

Findings

The results show that the approach is feasible and offers several key benefits over using existing workflow systems to run artifact-centric processes.

Originality/value

Most existing approaches require an artifact-centric model to be transformed into an executable workflow language to run on existing workflow management systems. This study argues that such model conversion can incur information loss and impair the traceability and monitorability of workflows, especially in an SOA where a workflow can span multiple business entities.

Details

International Journal of Web Information Systems, vol. 16 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 May 2005

Hao Ding

Abstract

Purpose

To propose methods for expressing and operating on semantics in large, distributed environments, such as peer-to-peer (P2P) based digital libraries (DLs), where heterogeneous schemas may exist and the relationships among them must be made explicit for better information-searching performance.

Design/methodology/approach

In conventional solutions, a mediator is adopted to create and maintain the matching between relevant terms so that distinct but related metadata schemas can be integrated according to the mapping relationships held in the mediator. However, such solutions suffer from problems originating in the mediator's static matching. This paper proposes using facts to express the relationships among heterogeneous schemas and conducting the reasoning dynamically with inference engines.
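
A small sketch of the fact-based idea, assuming hypothetical schema terms and a single transitive-closure rule rather than the paper's actual fact base or inference engine:

```java
// Sketch only: schema correspondences expressed as facts, with new mappings
// inferred by a simple rule applied to a fixpoint. Terms are hypothetical.
import java.util.ArrayList;
import java.util.List;

public class SchemaMappingSketch {

    record SameAs(String termA, String termB) {}

    public static void main(String[] args) {
        List<SameAs> facts = new ArrayList<>(List.of(
                new SameAs("dc:creator", "marc:100a"),
                new SameAs("marc:100a", "lib:authorName")));

        // Rule: sameAs(x, y) AND sameAs(y, z) => sameAs(x, z), applied until no new facts appear.
        boolean changed = true;
        while (changed) {
            changed = false;
            List<SameAs> snapshot = List.copyOf(facts);
            for (SameAs f1 : snapshot) {
                for (SameAs f2 : snapshot) {
                    if (f1.termB().equals(f2.termA())) {
                        SameAs derived = new SameAs(f1.termA(), f2.termB());
                        if (!facts.contains(derived)) {
                            facts.add(derived);
                            changed = true;
                        }
                    }
                }
            }
        }
        facts.forEach(f -> System.out.println(f.termA() + " sameAs " + f.termB()));
    }
}
```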

Findings

Using facts and inference engines to express and operate on the semantics among heterogeneous but related information resources is shown to be justified. The user can choose to convert only part of an XML document into facts if she can unwrap deeply nested XML tags. Additionally, the user can manually edit (assert, update or retract) the facts as needed during reasoning.

Research limitations/implications

The study assumes that peers are clustered according to shared topics or interests. An exhaustive evaluation has not been conducted.

Practical implications

Each node can publish its schema to the peer community involved so that other peers can automatically discover that schema. A local matchmaking engine is also adopted to automatically generate the relations between a node's own schema and the retrieved ones.

Originality/value

This paper provides a framework for semantic data integration in P2P networks.

Details

Library Management, vol. 26 no. 4/5
Type: Research Article
ISSN: 0143-5124

Article
Publication date: 7 August 2009

Yu‐Liang Chi and Hsiao‐Chi Chen

Abstract

Purpose

The purpose of this paper is to demonstrate how semantic rules, in conjunction with an ontology, can be applied to infer new facts for dispatching news items to the corresponding departments.

Design/methodology/approach

Within a specific task domain, the proposed design comprises building a glossary from electronic resources, gathering organizational functions as controlled vocabularies, and linking the glossary to the controlled vocabularies. The Web Ontology Language (OWL) is employed to represent this knowledge as an ontology, and the Semantic Web Rule Language (SWRL) is used to infer implicit facts among instances.
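
As a simplified illustration of the dispatching step (the vocabulary, departments and matching logic below are invented and far cruder than OWL/SWRL reasoning), term-to-department rules might be applied like this:

```java
// Sketch only: glossary terms found in a news item are mapped to departments
// through rule-like associations. Terms and departments are hypothetical.
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class NewsDispatchSketch {

    // Controlled-vocabulary term -> responsible department (one "rule" per entry).
    static final Map<String, String> TERM_TO_DEPARTMENT = Map.of(
            "tax reform", "Finance Department",
            "product recall", "Quality Assurance",
            "new hire", "Human Resources");

    public static void main(String[] args) {
        String news = "Government announces tax reform affecting exporters.";

        Set<String> departments = new LinkedHashSet<>();
        TERM_TO_DEPARTMENT.forEach((term, dept) -> {
            if (news.toLowerCase().contains(term)) {
                departments.add(dept);
            }
        });

        System.out.println("dispatch to: " + departments);
    }
}
```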

Findings

Document dispatching is highly domain dependent, and human perspectives, adopted as predefined knowledge for understanding document meanings, are important. Knowledge-intensive approaches such as ontologies can model and represent this expertise as reusable components. Together, ontology and rules extend inference capabilities over the semantic relationships between instances.

Practical implications

Empirical lessons reveal that an ontology with semantic rules can be used to model subjective human judgement as a knowledge base. An example based on news dispatching, including the ontology and rules, is provided.

Originality/value

An organization can classify and deliver documents to corresponding departments based on known facts by following the described procedure.

Details

The Electronic Library, vol. 27 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 December 1994

A.D. Kwok and Douglas H. Norrie

Abstract

The intelligent agent object (IAO) system is a multi-paradigm development environment which can be used to create intelligent agent systems for manufacturing or other domains. The IAO system was developed from the rule-based object (RBO) system, a programming environment integrating the rule-based and object-oriented paradigms. Propagation-oriented programming, access-oriented programming and group-oriented programming are among the extensions included in the IAO system. Its most distinctive contribution is the propagation-oriented programming paradigm, which is not found in most systems. A key application is the messenger inferencing structure, a user-extendable framework supporting multiple knowledge representations, meta-inference control and distributed inference. This allows the IAO system to go beyond predicate-logic-based production-rule programming. New developments are also introduced for access-oriented programming. The IAO system can be used to develop integrated manufacturing systems such as the prototype automated guided vehicle planning and control system, which is briefly described.
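
As a loose, hypothetical illustration of the propagation-oriented idea (not the IAO system's actual mechanism), an object can push attribute changes to registered dependents:

```java
// Sketch only: combining object-oriented state with propagation-oriented updates,
// where changing one object's attribute notifies all dependents. Names are invented.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PropagationSketch {

    static class AgvAgent {
        private String location;
        private final List<Consumer<String>> watchers = new ArrayList<>();

        void onLocationChange(Consumer<String> watcher) { watchers.add(watcher); }

        void setLocation(String newLocation) {
            this.location = newLocation;
            // Propagation-oriented step: the change is pushed to all dependents.
            watchers.forEach(w -> w.accept(newLocation));
        }
    }

    public static void main(String[] args) {
        AgvAgent vehicle = new AgvAgent();
        vehicle.onLocationChange(loc -> System.out.println("scheduler notified: AGV at " + loc));
        vehicle.onLocationChange(loc -> System.out.println("monitor updated: AGV at " + loc));
        vehicle.setLocation("loading bay 3");
    }
}
```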

Details

Integrated Manufacturing Systems, vol. 5 no. 4/5
Type: Research Article
ISSN: 0957-6061

1 – 10 of over 15000