Search results

1 – 10 of over 15000
Article
Publication date: 19 June 2017

Khai Tan Huynh, Tho Thanh Quan and Thang Hoai Bui

Abstract

Purpose

Service-oriented architecture is an emerging software architecture in which the web service (WS) plays a crucial role. In this architecture, WS composition and verification are required when handling complex service requirements from users. When the number of WSs becomes very large in practice, the complexity of composition and verification grows correspondingly. In this paper, the authors propose a logic-based clustering approach that tackles this problem by separating the original repository of WSs into clusters. Moreover, they propose a so-called quality-controlled clustering approach to ensure the quality of the generated clusters within a reasonable execution time.

Design/methodology/approach

The approach represents WSs as logical formulas on which the authors conduct the clustering task. It also combines the two most popular clustering approaches, hierarchical agglomerative clustering (HAC) and k-means, to ensure the quality of the generated clusters.
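One common way to combine HAC and k-means, as described above, is to use the centroids of an initial HAC partition to seed k-means. The sketch below illustrates this with scikit-learn on toy feature vectors; the vector encoding is a stand-in, since the paper clusters logical formulas and its similarity measure is not reproduced here.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Toy feature vectors standing in for the paper's logical-formula
# representation of web services (the actual encoding is not shown here).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, size=(20, 4)),   # services similar to group A
    rng.normal(3.0, 0.3, size=(20, 4)),   # services similar to group B
])

# Step 1: hierarchical agglomerative clustering yields an initial partition.
hac_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Step 2: seed k-means with the HAC centroids; k-means then refines the
# partition at low cost, which is the quality/runtime trade-off at stake.
centroids = np.vstack([X[hac_labels == k].mean(axis=0) for k in range(2)])
kmeans = KMeans(n_clusters=2, init=centroids, n_init=1).fit(X)

print(sorted(np.bincount(kmeans.labels_)))  # cluster sizes
```

Seeding with HAC centroids removes the sensitivity of k-means to random initialization, which is one plausible reading of the "quality-controlled" strategy.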

Findings

The logic-based clustering approach significantly increases the performance of WS composition and verification. Furthermore, the logic-based approach preserves the soundness and completeness of the composition solution. Finally, the quality-controlled strategy ensures the quality of the generated clusters in low time complexity.

Research limitations/implications

The work discussed in this paper is currently implemented only as a research tool, known as WSCOVER. More work is needed to make it a practical and usable system for real-life applications.

Originality/value

In this paper, the authors propose a logic-based paradigm to represent and cluster WSs. Moreover, they propose a quality-controlled clustering approach that combines and takes advantage of the two most popular clustering approaches, HAC and k-means.

Article
Publication date: 18 June 2018

Efthimia Mavridou, Konstantinos M. Giannoutakis, Dionysios Kehagias, Dimitrios Tzovaras and George Hassapis

Abstract

Purpose

Semantic categorization of Web services comprises a fundamental requirement for enabling more efficient and accurate search and discovery of services in the semantic Web era. However, to efficiently deal with the growing presence of Web services, more automated mechanisms are required. This paper aims to introduce an automatic Web service categorization mechanism, by exploiting various techniques that aim to increase the overall prediction accuracy.

Design/methodology/approach

The paper proposes the use of Error Correcting Output Codes on top of a Logistic Model Trees-based classifier, in conjunction with a data pre-processing technique that reduces the original feature-space dimension without affecting data integrity. The proposed technique is generalized so as to apply to any Web service with a description file. A semantic matchmaking scheme is also proposed for enabling the semantic annotation of the input and output parameters of each operation.
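The Error Correcting Output Codes (ECOC) wrapper described above can be sketched with scikit-learn. Logistic Model Trees is a Weka learner with no scikit-learn equivalent, so logistic regression stands in as the base classifier, and TruncatedSVD stands in for the paper's unspecified dimension-reduction step; both substitutions are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for vectorized Web service descriptions; the
# paper's own feature extraction is not reproduced here.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=4, random_state=0)

model = make_pipeline(
    TruncatedSVD(n_components=10, random_state=0),  # reduce feature space
    OutputCodeClassifier(                           # ECOC over binary learners
        LogisticRegression(max_iter=1000),          # substitute base learner
        code_size=2.0, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy on the toy data
```

ECOC trains several binary classifiers against a code matrix and decodes predictions by nearest codeword, which is what gives it error-correcting behaviour over a single multiclass learner.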

Findings

The proposed Web service categorization framework was tested with the OWLS-TC v4.0, as well as a synthetic data set with a systematic evaluation procedure that enables comparison with well-known approaches. After conducting exhaustive evaluation experiments, categorization efficiency in terms of accuracy, precision, recall and F-measure was measured. The presented Web service categorization framework outperformed the other benchmark techniques, which comprise different variations of it and also third-party implementations.

Originality/value

The proposed three-level categorization approach is a significant contribution to the Web service community, as it allows the automatic semantic categorization of all functional elements of Web services that are equipped with a service description file.

Details

International Journal of Web Information Systems, vol. 14 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 February 2024

Lin Xue and Feng Zhang

Abstract

Purpose

With the increasing number of Web services, correct and efficient classification of Web services is crucial to improve the efficiency of service discovery. However, existing Web service classification approaches ignore the class overlap in Web services, resulting in poor accuracy of classification in practice. This paper aims to provide an approach to address this issue.

Design/methodology/approach

This paper proposes a label confusion and priori correction-based Web service classification approach. First, functional semantic representations of Web service descriptions are obtained based on BERT. Then, the model's ability to recognize and classify overlapping instances is enhanced by using label confusion learning techniques. Finally, the predicted results are corrected based on the label prior distribution to further improve service classification effectiveness.
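The final prior-correction step can be illustrated as follows. The paper's exact correction rule is not given in the abstract; the sketch below uses one common Bayes-style form, reweighting each predicted class probability by the ratio of the deployment-time prior to the training-time prior and renormalizing.

```python
import numpy as np

def prior_correct(probs, train_prior, true_prior):
    """Reweight predicted class probabilities by a label-prior ratio.

    Illustrative only: the paper's actual correction may differ. Each
    class probability is scaled by true_prior / train_prior, then the
    rows are renormalized to sum to 1.
    """
    adjusted = probs * (true_prior / train_prior)
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# A classifier trained on a skewed label distribution over-predicts
# class 0; correction shifts mass back toward the minority class.
probs = np.array([[0.6, 0.4]])
train_prior = np.array([0.8, 0.2])   # class balance seen in training
true_prior = np.array([0.5, 0.5])    # balance expected at prediction time

corrected = prior_correct(probs, train_prior, true_prior)
print(corrected.round(3))
```

Here the under-represented class gains probability mass, which is the effect that helps with overlapping or skewed categories.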

Findings

Experiments on the ProgrammableWeb data set show that the proposed model improves the Macro-F1 value by 4.3%, 3.2% and 1% over ServeNet-BERT, BERT-DPCNN and CARL-NET, respectively.

Originality/value

This paper proposes a Web service classification approach for overlapping categories of Web services and improves the accuracy of Web service classification.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 24 August 2012

Federica Paganelli, Terence Ambra and David Parlanti

Abstract

Purpose

The purpose of this paper is to propose a novel quality of service (QoS)‐aware service composition approach, called SEQOIA, capable of defining at run‐time a service composition plan meeting both functional and non‐functional constraints and optimizing the overall quality of service.

Design/methodology/approach

SEQOIA is a semantic‐driven, QoS‐aware dynamic composition approach based on an integer linear programming (ILP) technique. It exploits the expressiveness of an ontology‐based service profile model that handles structural and semantic properties of service descriptions, and it represents the service composition problem as a set of functional and non‐functional constraints plus an objective function.
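The ILP formulation can be sketched with SciPy's mixed-integer solver. The numbers and tasks below are made up for illustration: one candidate service is chosen per task so that the aggregate QoS score is maximized under a cost budget. SEQOIA's real model also encodes functional and semantic constraints, which are omitted here.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Two tasks, two candidate services each (values are illustrative).
qos  = np.array([0.90, 0.70, 0.60, 0.75])   # per-candidate QoS score
cost = np.array([5.0, 2.0, 3.0, 6.0])       # per-candidate cost
budget = 8.0

res = milp(
    c=-qos,  # milp minimizes, so negate to maximize total QoS
    constraints=[
        LinearConstraint(cost, ub=budget),           # stay within budget
        LinearConstraint([1, 1, 0, 0], lb=1, ub=1),  # exactly one for task A
        LinearConstraint([0, 0, 1, 1], lb=1, ub=1),  # exactly one for task B
    ],
    integrality=np.ones(4),   # binary decision variables
    bounds=Bounds(0, 1),
)
print(res.x)  # 0/1 selection vector over the four candidates
```

Because the selection is solved as one ILP, the optimum is global over the whole composition, which matches the guarantee the Findings section attributes to SEQOIA over flow-then-select approaches.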

Findings

The authors developed a proof of concept implementing SEQOIA, as well as an alternative composition solution based on state‐of‐the‐art AI planning and ILP techniques. Results of testing activities show that SEQOIA performs better than the alternative solution over a limited set of candidate services. This behaviour was expected, as SEQOIA guarantees to find the service composition providing the optimal QoS value, while the alternative approach does not provide this guarantee, as it handles separately the specification of the functional service composition flow and the QoS‐based service selection step.

Originality/value

SEQOIA leverages semantic annotations to make service composition feasible by coping with the syntactic and structural differences that typically exist across different, even similar, service implementations. To ease the adoption of SEQOIA in real enterprise scenarios, the authors chose to build on an XML‐based message model of service interfaces (including, but not strictly requiring, the use of WSDL).

Open Access
Article
Publication date: 21 March 2022

Wei Xiong, Ziyi Xiong and Tina Tian

Abstract

Purpose

The performance of behavioral targeting (BT) mainly relies on the effectiveness of user classification since advertisers always want to target their advertisements to the most relevant users. In this paper, the authors frame the BT as a user classification problem and describe a machine learning–based approach for solving it.

Design/methodology/approach

To perform such a study, two major research questions are investigated: the first question is how to represent a user’s online behavior. A good representation strategy should be able to effectively classify users based on their online activities. The second question is how different representation strategies affect the targeting performance. The authors propose three user behavior representation methods and compare them empirically using the area under the receiver operating characteristic curve (AUC) as a performance measure.
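Comparing representation strategies by AUC, as the second research question requires, can be sketched as follows. The features are synthetic stand-ins: the first block mimics one representation (e.g. queries only) and the full set mimics the combined representation; the real study uses search queries, clicked URLs and clicked ads.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic user-behavior features; the label depends on columns 0-7,
# so a representation that omits some of them should score lower.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=2.0, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def auc_for(cols):
    """Train a classifier on a feature subset and report test AUC."""
    clf = LogisticRegression().fit(X_tr[:, cols], y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])

auc_partial  = auc_for(list(range(5)))    # one representation strategy
auc_combined = auc_for(list(range(10)))   # combined representation
print(round(auc_partial, 3), round(auc_combined, 3))
```

Holding the classifier fixed and varying only the feature representation isolates the representation's contribution, mirroring the paper's empirical comparison.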

Findings

The experimental results indicate that ad campaign effectiveness can be significantly improved by combining user search queries, clicked URLs and clicked ads as a user profile. In addition, the authors also explore the temporal aspect of user behavior history by investigating the effect of history length on targeting performance. The authors note that an improvement of approximately 6.5% in AUC is achieved when user history is extended from 1 day to 14 days, which is substantial in targeting performance.

Originality/value

This paper confirms the effectiveness of BT on user classification and provides a validation of BT for Internet advertising.

Details

Journal of Internet and Digital Economics, vol. 2 no. 1
Type: Research Article
ISSN: 2752-6356

Article
Publication date: 13 June 2008

Joanne Evans, Barbara Reed and Sue McKemmish

Abstract

Purpose

The ability to establish sustainable frameworks for creating and managing recordkeeping metadata is one of the key challenges for recordkeeping in digital and networked environments. The purpose of this article is to give an overview of the Clever Recordkeeping Metadata Project, an Australian research project which sought to investigate how the movement of recordkeeping metadata between systems could be automated.

Design/methodology/approach

The project adopted an action research approach to the research, utilising a systems development method within this framework to iteratively build a prototype demonstrating how recordkeeping metadata could be created once in particular application environments, then used many times to meet a range of business and recordkeeping purposes.

Findings

Recordkeeping metadata interoperability, like recordkeeping metadata itself, is complex and dynamic. The research identifies the need for standards and tools to reflect and have the capacity to handle this complexity.

Originality/value

This paper provides insights into the complex nature of recordkeeping metadata and the kind of infrastructure that needs to be developed to support its automated capture and re‐use in integrated systems environments.

Details

Records Management Journal, vol. 18 no. 2
Type: Research Article
ISSN: 0956-5698

Article
Publication date: 12 March 2021

Milton Secundino de Souza-Júnior, Nelson Souto Rosa and Fernando Antônio Aires Lins

Abstract

Purpose

This paper aims to present Long4Cloud (long-running workflow execution environment for the cloud), a distributed and adaptive long-running workflow (LRW) execution environment delivered as an "as a service" solution.

Design/methodology/approach

LRWs last for hours, days or even months, and their duration opens the possibility of changes in business rules, service interruptions or even alterations of formal business regulations before the workflow completes. These events can lead to problems such as loss of intermediary results or exhaustion of the computational resources used to manage the workflow execution. Existing solutions face these problems by merely allowing the replacement (at runtime) of services associated with activities of the LRW.

Findings

Long4Cloud extends previous work in two main aspects, namely, the inclusion of dynamic reconfiguration capabilities and the adoption of an "as a service" delivery mode. The reconfiguration mechanism uses quiescence principles and data and state management, and it provides multiple adaptive strategies. Long4Cloud also adopts a scenario-based analysis to decide which adaptation to perform. Events such as changes in business rules or service failures trigger reconfigurations supported by the environment. These features are brought together in a solution delivered "as a service" that takes advantage of cloud elasticity and allows cloud resources to be allocated to better fit the demands of LRWs.

Originality/value

The original contribution of Long4Cloud is to incorporate adaptive capabilities into the LRW execution environment as an effective way to handle the specificities of this kind of workflow. Experiments using current data of a Brazilian health insurance company were carried out to evaluate Long4Cloud and show performance gains in the execution of LRWs submitted to the proposed environment.

Article
Publication date: 5 June 2009

Antonio Ruiz‐Martínez, Óscar Cánovas and Antonio F. Gómez‐Skarmeta

Abstract

Purpose

This paper aims to present a viable approach for designing and implementing a generic per‐fee‐link framework. It also aims to design this framework to be used with any payment protocol and test it with two existing ones.

Design/methodology/approach

The paper presents a per‐fee‐link framework based on several generic components. These components have been developed and tested in order to prove the viability of the proposed framework.

Findings

The results show that it is possible to establish a per‐fee‐link framework. Four core components are defined: first, the modules needed for browsers and web servers; second, an extended payment protocol (EPP), which negotiates the payment protocol to use and encapsulates its related messages; third, an API for e‐wallets, independent of the payment protocol, to incorporate the protocols to use with EPP; and finally, the definition of a per‐fee‐link that associates payment information with a link.

Practical implications

The framework presented shows a uniform way of using payment protocols that can increase the trust of end users. Furthermore, it has been developed and tested.

Originality/value

The contribution describes the components needed for supporting the framework. Its feasibility has been checked through an implementation and it facilitates the payment for content on the web. Thus, content providers can obtain an alternative revenue source to advertisement or subscription. Furthermore, developers, vendors and customers can see that the incorporation of payment protocols to the system is facilitated. Finally, the users obtain a uniform way to make payments that increases the perception of trust.

Details

Internet Research, vol. 19 no. 3
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 8 February 2013

Stefan Dietze, Salvador Sanchez‐Alonso, Hannes Ebner, Hong Qing Yu, Daniela Giordano, Ivana Marenzi and Bernardo Pereira Nunes

Abstract

Purpose

Research in the area of technology‐enhanced learning (TEL) throughout the last decade has largely focused on sharing and reusing educational resources and data. This effort has led to a fragmented landscape of competing metadata schemas and interface mechanisms. More recently, semantic technologies have been taken into account to improve interoperability, and the linked data approach has emerged as the de facto standard for sharing data on the web. The application of linked data principles therefore offers large potential for solving interoperability issues in the field of TEL. This paper aims to address this issue.

Design/methodology/approach

In this paper, approaches are surveyed that are aimed towards a vision of linked education, i.e. education which exploits educational web data. It particularly considers the exploitation of the wealth of already existing TEL data on the web by allowing its exposure as linked data and by taking into account automated enrichment and interlinking techniques to provide rich and well‐interlinked data for the educational domain.

Findings

So far, web‐scale integration of educational resources has not been achieved, mainly owing to the limited take‐up of shared principles, datasets and schemas. However, linked data principles are increasingly recognized by the TEL community. The paper provides a structured assessment and classification of existing challenges and approaches, serving as a potential guideline for researchers and practitioners in the field.

Originality/value

Being one of the first comprehensive surveys on the topic of linked data for education, the paper has the potential to become a widely recognized reference publication in the area.

Article
Publication date: 26 June 2007

Q.T. Tho, A.C.M. Fong and S.C. Hui

Abstract

Purpose

The semantic web gives meaning to information so that humans and computers can work together better. Ontology is used to represent knowledge on the semantic web. Web services have been introduced to make the knowledge conveyed by the ontology on the semantic web accessible across different applications. This paper seeks to present the use of these latest advances in the context of a scholarly semantic web (or SSWeb) system, which can support advanced search functions such as expert finding and trend detection in addition to basic functions such as document and author search as well as document and author clustering search.

Design/methodology/approach

A distributed architecture of the proposed SSWeb is described, as well as semantic web services that support scholarly information retrieval on the SSWeb.

Findings

Initial experimental results indicate that the proposed method is effective.

Research limitations/implications

The work reported is experimental in nature. More work is needed, but early results are encouraging and the authors wish to make their work known to the research community by publishing this paper so that further progress can be made in this area of research.

Originality/value

The work is presented in the context of scholarly document retrieval, but it could also be adapted to other types of documents, such as medical records, machine‐fault records and legal documents. This is because the basic principles are the same.

Details

Online Information Review, vol. 31 no. 3
Type: Research Article
ISSN: 1468-4527
