Search results

1 – 10 of over 15000
Article
Publication date: 1 March 2013

Hu Xia, Yan Fu, Junlin Zhou and Qi Xia

Abstract

Purpose

The purpose of this paper is to provide an intelligent spam filtering method to meet the real‐time processing requirement of the massive short message stream and reduce manual operation of the system.

Design/methodology/approach

An integrated framework based on a series of algorithms is proposed. The framework consists of a message filtering module, a log analysis module and a rule-handling module, and dynamically filters short message spam while generating the filtering rules. Experiments implemented in Java are used to evaluate the proposed approach.
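
The module interplay the abstract describes (filter messages, analyse the log of flagged ones, feed new rules back into the filter) can be sketched as follows. This is a hypothetical illustration in Python, not the authors' Java implementation; the class names and the keyword-counting heuristic are assumptions.

```python
# Hypothetical sketch of the three-module loop: filter, log analysis,
# rule handling. Names and heuristics are not the paper's.
from collections import Counter

class RuleStore:
    """Holds keyword rules; the log-analysis step can add new ones."""
    def __init__(self, rules=None):
        self.rules = set(rules or [])

def filter_message(text, store):
    """Flag a message as spam if any rule keyword occurs in it."""
    lowered = text.lower()
    return any(kw in lowered for kw in store.rules)

def analyse_log(flagged_messages, store, min_count=2):
    """Toy log analysis: promote words recurring across flagged
    messages into new filtering rules."""
    counts = Counter(w for m in flagged_messages for w in m.lower().split())
    for word, n in counts.items():
        if n >= min_count and len(word) > 3:
            store.rules.add(word)

store = RuleStore({"prize"})
assert filter_message("You won a PRIZE!", store)   # caught by seed rule
analyse_log(["cheap loans today", "cheap loans now"], store)
assert filter_message("get cheap loans", store)    # caught by learned rule
```

The point of the loop is that rules are generated dynamically from the filtering log rather than maintained by hand, which is the manual operation the paper aims to reduce.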

Findings

The experiments are carried out both on a simulation model (off-line) and on the actual system (on-line). The experiment data comprise both normal and spam real short messages. The results show that the integrated framework achieves comparable accuracy and meets the real-time filtering requirement.

Originality/value

The approach taken in the design of the filtering system is novel. In addition, the proposed integrated framework not only reduces computational cost, which leads to high processing speed, but also filters spam messages with high accuracy.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 6 April 2021

Hesham I. AlMujamed

Abstract

Purpose

The purpose of this research was to examine the effectiveness of filter rules and investigate the weak form of the efficient market hypothesis (EMH) on sample shares of shariah-compliant vs. conventional banks listed on the Gulf Cooperation Council (GCC) stock market.

Design/methodology/approach

Nine trading filter strategies with different statistical analyses were used as defined in the literature (Fifield et al., 2005; Almujamed et al., 2018). Daily closing equity prices of a sample of twenty shariah-compliant banks and twenty conventional banks were recorded over the 18-year period ending 31 December 2017.

Findings

Shares of shariah-compliant banks in the GCC were not weak-form efficient, since trading based on past information was predictable, profitable and outperformed the corresponding naïve buy-and-hold trading strategy. Shares of conventional GCC banks underperformed.

Research limitations/implications

This paper’s findings should be useful for central banks and capital market authorities in GCC countries when considering new regulations or process changes. Limitations include small sample sizes and the need for more recent evaluations of accounting disclosure levels. A wider range of data, statistical analyses and other trading strategies is needed. Potential investors (Muslim and non-Muslim), shariah supervisory boards and preparers of financial statements can also benefit from this study.

Practical implications

The results suggest that selection of trading strategy affects the success of the rule and that mid-sized filters are the best.

Originality/value

This is an innovative study comparing performance of shariah-compliant and conventional banks under different filter rules.

Details

Journal of Investment Compliance, vol. 22 no. 1
Type: Research Article
ISSN: 1528-5812

Article
Publication date: 28 December 2018

Hesham I. Almujamed

Abstract

Purpose

The purpose of this paper is to examine the predictability of nine filter rules and test the validity of the weak form of the efficient market hypothesis for the Qatar Stock Exchange (QSE).

Design/methodology/approach

This study adopts the filter rule strategy employed by Fifield et al. (2005), which suggests that a buy signal occurs when a share’s price increases by X percent from the previous price. This strategy recommends that the share is held until its price declines by X percent from the subsequent high price. Any price changes below X percent are ignored. Additionally, using the theory of weak-form efficiency, this paper suggests that, if a stock market is efficient, an investor cannot achieve superior results by using these trading rules. However, if market inefficiencies are present, profitable opportunities may arise. To this end, the daily closing share prices of 44 companies listed on QSE are explored for 2004–2017.
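
The X percent filter rule described above can be sketched as a short backtest. This is a generic reading of the rule, not the authors' code; in particular, how the first trough is seeded and the treatment of a position still open at the end of the sample are assumptions.

```python
# Generic X% filter rule sketch (hypothetical, not the paper's code).
def filter_rule_return(prices, x):
    """Buy when the price rises x (as a fraction) above the latest
    trough; sell when it falls x below the subsequent peak. Returns
    the total return over completed round trips (open positions and
    transaction costs are ignored)."""
    position = None              # entry price while holding, else None
    trough = peak = prices[0]
    total = 1.0
    for p in prices[1:]:
        if position is None:
            trough = min(trough, p)
            if p >= trough * (1 + x):
                position, peak = p, p        # buy signal
        else:
            peak = max(peak, p)
            if p <= peak * (1 - x):
                total *= p / position        # sell signal
                position, trough = None, p
    return total - 1.0

# Buy at 100 (more than 10% above the 90 trough), sell at 107
# (more than 10% below the 120 peak): a 7% round trip.
ret = filter_rule_return([100, 90, 100, 120, 107, 100], 0.10)
assert abs(ret - 0.07) < 1e-9
```

Comparing this return against a buy-and-hold return over the same series is the weak-form efficiency test the paper applies across filter sizes.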

Findings

The findings suggest that the QSE is not weak-form efficient because security prices are predictable. As such, investors who followed filter strategies based on past price information could have made a profit. Sectoral analysis further suggests that firms in the consumer goods and services, industrial and insurance sectors are the most efficiently priced amongst QSE-traded companies.

Practical implications

The evidence may be helpful to market participants and academics, as it suggests that the choice of filter strategy is extremely important in determining the overall profitability of a trading strategy.

Originality/value

To the best of my knowledge, this is the first study on Qatar that examines the performance of filter rules relative to a passive investor in the context of trading rules with individual share prices for a new stock market. Furthermore, this study adds to the literature through the empirical finding that technical analysts using filter strategies could generate excess returns relative to the buy-and-hold strategy on new emerging stock markets. This study also suggests the levels of transparency and accounting disclosure are limited, which may help Qatari policy makers understand the QSE context. Therefore, it might lead them to introduce regulatory changes to improve the QSE’s efficiency level.

Details

International Journal of Productivity and Performance Management, vol. 68 no. 1
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 24 August 2019

Ling Xin, Kin Lam and Philip L.H. Yu

Abstract

Purpose

Filter trading is a technical trading rule that has been used extensively to test the efficient market hypothesis in the context of long-term trading. In this paper, the authors adopt the rule to analyze intraday trading, in which an open position is not left overnight. This paper aims to explore the relationship between intraday filter trading profitability and intraday realized volatilities. The bivariate thin plate spline (TPS) model is chosen to fit the predictor-response surface for high frequency data from the Hang Seng index futures (HSIF) market. The hypotheses follow the adaptive market hypothesis, arguing that intraday filter trading differs in profitability under different market conditions as measured by realized volatility and, furthermore, that the optimal filter size for trading on each day is related to the realized volatility. The empirical results furnish new evidence that range-based realized volatilities (RaV) are more efficient in identifying trading profit than return-based volatilities (ReV). These results shed light on the efficiency of intraday high frequency trading in the HSIF market. Some trading suggestions are given based on the findings.

Design/methodology/approach

Among all the factors that affect the profit of filter trading, intraday realized volatility stands out as an important predictor. The authors explore several intraday volatilities measures using range-based or return-based methods of estimation. The authors then study how the filter trading profit will depend on realized volatility and how the optimal filter size is related to the realized volatility. The bivariate TPS model is used to model the predictor-response relationship.
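
The two families of volatility measures can be illustrated with minimal estimators. The return-based measure below is the usual root sum of squared intraday log returns; for the range-based one, the Parkinson (1980) high-low estimator is shown as one common choice. The paper's exact RaV and ReV definitions may differ.

```python
# Minimal illustrations of return-based vs range-based realized
# volatility; the paper's exact estimators may differ.
import math

def rev(prices):
    """Return-based realized volatility: root of the sum of squared
    intraday log returns."""
    rets = (math.log(b / a) for a, b in zip(prices, prices[1:]))
    return math.sqrt(sum(r * r for r in rets))

def rav_parkinson(highs, lows):
    """Range-based realized volatility via the Parkinson high-low
    estimator: sqrt( sum ln(H/L)^2 / (4 ln 2) )."""
    k = 1.0 / (4.0 * math.log(2.0))
    return math.sqrt(k * sum(math.log(h / l) ** 2
                             for h, l in zip(highs, lows)))

# A symmetric up-then-down move: ReV is sqrt(2) * ln(1.01).
assert abs(rev([100, 101, 100]) - math.sqrt(2) * math.log(1.01)) < 1e-12
```

The intuition behind the paper's finding is visible here: the range-based estimator uses the full high-low excursion of each interval, while the return-based one sees only close-to-close moves, so RaV retains intraday information that ReV discards.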

Findings

The empirical results show that range-based realized volatility has higher predictive power for filter rule trading profit than return-based realized volatility.

Originality/value

First, the authors contribute to the literature by investigating the profitability of the filter trading rule on high frequency tick-by-tick data from the HSIF market. Second, the authors test the assumption that the magnitude of the intraday momentum trading profit depends on the realized volatilities and aim to identify a relationship between them. Furthermore, the authors consider several intraday realized volatilities and find that RaV has higher predictive power than ReV. Finally, the authors find a relationship between the optimal filter size and the realized volatilities. Based on these observations, the authors also give some trading suggestions to intraday filter traders.

Details

Studies in Economics and Finance, vol. 38 no. 3
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 10 December 2019

Xiaoming Zhang, Mingming Meng, Xiaoling Sun and Yu Bai

Abstract

Purpose

With the advent of the era of Big Data, the scale of knowledge graphs (KGs) in various domains is growing rapidly, and they hold a huge amount of knowledge that benefits question answering (QA) research. However, a KG, which is constituted of entities and relations, is structurally inconsistent with a natural language query. Thus, QA systems based on KGs still face difficulties. The purpose of this paper is to propose a method to answer domain-specific questions based on a KG, providing convenience for information queries over domain KGs.

Design/methodology/approach

The authors propose a method, FactQA, to answer factual questions about a specific domain. A series of logical rules is designed to transform factual questions into triples, in order to resolve the structural inconsistency between the user’s question and the domain knowledge. Then, query expansion strategies and filtering strategies are proposed at two levels (i.e. words and triples in the question). For matching the question with domain knowledge, not only the similarity values between the words in the question and the resources in the domain knowledge but also the tag information of these words is considered; the tag information is obtained by parsing the question with Stanford CoreNLP. In this paper, a KG in the metallic materials domain is used to illustrate the FactQA method.
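
The question-to-triple step can be sketched minimally. The pattern rules below are hypothetical (the abstract does not list the actual logical rules), and the similarity- and tag-based matching against the KG is omitted.

```python
# Hypothetical question-to-triple rules; FactQA's actual rules,
# expansion and matching stages are not reproduced here.
import re

PATTERNS = [
    (re.compile(r"what is the (?P<rel>[\w ]+) of (?P<ent>[\w ]+)\?", re.I),
     lambda m: (m.group("ent").strip(), m.group("rel").strip(), "?x")),
    (re.compile(r"who (?P<rel>discovered|invented) (?P<ent>[\w ]+)\?", re.I),
     lambda m: ("?x", m.group("rel").strip(), m.group("ent").strip())),
]

def question_to_triple(question):
    """Apply the first matching rule; '?x' marks the unknown slot
    to be filled by querying the KG."""
    for pattern, build in PATTERNS:
        m = pattern.match(question.strip())
        if m:
            return build(m)
    return None

assert question_to_triple("What is the melting point of titanium?") == \
    ("titanium", "melting point", "?x")
```

In the full method, the resulting triple's words would then be expanded with synonyms, filtered, and matched against KG resources by similarity and part-of-speech tag.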

Findings

The designed logical rules are stable over time for transforming factual questions into triples. Additionally, after filtering the synonym-expansion results of the words in the question, the expansion quality of the triple representation of the question is improved. The tag information of the words in the question is considered during data matching, which helps to filter out wrong matches.

Originality/value

Although FactQA is proposed for domain-specific QA, it can also be applied to domains other than metallic materials. For a question that cannot be answered, FactQA generates a new related question to answer, providing the user with as much of the information they probably need as possible. FactQA could facilitate the user’s information queries over emerging KGs.

Details

Data Technologies and Applications, vol. 54 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 23 August 2013

Changhyun Byun, Hyeoncheol Lee, Yanggon Kim and Kwangmi Ko Kim

Abstract

Purpose

It is difficult to build one’s own social data set because data in social media are generally too vast and noisy. The aim of this study is to specify the design and implementation details of a Twitter data collecting tool with a rule-based filtering module. Additionally, the paper aims to see, in a case study with rule-based analysis, how people communicate with each other through social networks.

Design/methodology/approach

The authors developed a Java-based data gathering tool with a rule-based filtering module for collecting data from Twitter. This paper introduces the design specifications and explains the implementation details of the Twitter Data Collecting Tool with detailed Unified Modeling Language (UML) diagrams. The Model View Controller (MVC) framework is applied in this system to support various types of user interfaces.
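
A rule-based filtering module of this kind can be sketched with composable predicate rules. The rule constructors and tweet fields below are hypothetical (shown in Python for brevity, though the tool is Java-based); the paper's embedded rule engine and its syntax are not given in the abstract.

```python
# Hypothetical composable filtering rules over a tweet stream; the
# actual tool embeds a rule engine whose syntax is not shown here.
def contains(keyword):
    """Rule: tweet text mentions the keyword (case-insensitive)."""
    return lambda tweet: keyword.lower() in tweet["text"].lower()

def min_followers(n):
    """Rule: author has at least n followers (a crude noise filter)."""
    return lambda tweet: tweet["followers"] >= n

def all_of(*rules):
    """Combine rules conjunctively into one rule."""
    return lambda tweet: all(r(tweet) for r in rules)

def collect(stream, rule):
    """Keep only the tweets that satisfy the filtering rule."""
    return [t for t in stream if rule(t)]

stream = [
    {"text": "Olympics opening tonight!", "followers": 500},
    {"text": "buy followers cheap", "followers": 3},
]
rule = all_of(contains("olympics"), min_followers(10))
assert collect(stream, rule) == [stream[0]]
```

Separating rule construction from collection is what lets users apply customized filtering options to discard noisy data before analysis.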

Findings

The Twitter Data Collecting Tool is able to gather a huge amount of data from Twitter and to filter the data with modest rules for complex logic. The case study shows that a historical event creates buzz on Twitter and that people’s interest in the event is reflected in their Twitter activity.

Research limitations/implications

Applying data-mining techniques to social network data has great potential. A possible improvement to the Twitter Data Collecting Tool would be the addition of a built-in data-mining module.

Originality/value

This paper focuses on designing a system that handles massive amounts of Twitter data. It is the first approach to embed a rule engine for filtering and analyzing social data. This paper will be valuable to those who want to build their own Twitter data set, apply customized filtering options to remove unnecessary, noisy data, and analyze social data to discover new knowledge.

Details

International Journal of Web Information Systems, vol. 9 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 31 August 2005

Daniel Lemire, Harold Boley, Sean McGrath and Marcel Ball

Abstract

Learning objects strive for reusability in e‐Learning to reduce cost and allow personalization of content. We show why learning objects require adapted Information Retrieval systems. In the spirit of the Semantic Web, we discuss the semantic description, discovery, and composition of learning objects. As part of our project, we tag learning objects with both objective (e.g., title, date, and author) and subjective (e.g., quality and relevance) metadata. We present the RACOFI (Rule‐Applying Collaborative Filtering) Composer prototype with its novel combination of two libraries and their associated engines: a collaborative filtering system and an inference rule system. We developed RACOFI to generate context‐aware recommendation lists. Context is handled by multidimensional predictions produced from a database‐driven scalable collaborative filtering algorithm. Rules are then applied to the predictions to customize the recommendations according to user profiles. The RACOFI Composer architecture has been developed into the context‐aware music portal inDiscover.
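
RACOFI's two-stage design (collaborative filtering predictions first, then inference rules rewriting the ranked list) can be sketched as follows. The prediction table, profile shape and single rule below are placeholders for the two real engines the paper combines.

```python
# Placeholder prediction table and rule; RACOFI pairs a real
# collaborative filtering engine with an inference rule engine.
predictions = {  # user -> {item: predicted rating}
    "alice": {"song_a": 4.6, "song_b": 3.9, "song_c": 2.1},
}
profiles = {"alice": {"blocked_items": {"song_c"}}}

def rule_drop_blocked(user, ranked):
    """Inference-rule stand-in: drop items the user's profile blocks."""
    blocked = profiles[user]["blocked_items"]
    return [(item, score) for item, score in ranked if item not in blocked]

def recommend(user, rules, top_n=2):
    """Rank CF predictions, then let each rule rewrite the list."""
    ranked = sorted(predictions[user].items(), key=lambda p: -p[1])
    for rule in rules:
        ranked = rule(user, ranked)
    return [item for item, _ in ranked[:top_n]]

assert recommend("alice", [rule_drop_blocked]) == ["song_a", "song_b"]
```

Keeping the rules outside the predictor is the design point: the scalable CF engine stays generic, while profile- and context-specific customization lives in declarative rules applied afterwards.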

Details

Interactive Technology and Smart Education, vol. 2 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 15 June 2015

Angela Carrillo-Ramos, Luis Guillermo Torres-Ribero, María Paula Arias-Báez, Alexandra Pomares Quimbaya, Enrique González, Julio Carreño, Juan Pablo Garzón Ruiz and Hervé Martin

Abstract

Purpose

This paper aims to present a detailed description of Agents for Enriching Services (AES), an agent-oriented framework that allows adapting a service in an information system. AES provides an adaptation logic that can be instantiated and extended to be useful in different domains. In previous works, we presented the adaptation mechanism of AES, which considers context aspects such as location and infrastructure; user aspects such as preferences and interests; and device aspects such as hardware and software features.

Design/methodology/approach

The first step was the definition of different profiles, mainly user and context profiles. Then the adaptation mechanism, which considers these profiles, was defined. With this mechanism, the adaptation filters to be applied to the initial queries were specified. Finally, feedback was provided, which included implicit and explicit information from the user and the system. AES is an agent-based framework implemented in Java, using the multi-agent platform BESA and the rule-based engine Drools.
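
The query-enrichment step that such adaptation filters perform can be sketched minimally. The profile shapes and field names are hypothetical, and AES itself is agent-based (BESA plus Drools), which this plain-function sketch does not reproduce.

```python
# Hypothetical profile shapes; the real AES applies rule-driven
# adaptation filters via agents rather than a plain function.
def enrich_query(query, user_profile, context_profile):
    """Append user interests and context keywords to a keyword query,
    in the spirit of AES adaptation filters."""
    terms = list(query)
    extras = (user_profile.get("interests", []) +
              context_profile.get("keywords", []))
    for term in extras:
        if term not in terms:       # avoid duplicate keywords
            terms.append(term)
    return terms

q = enrich_query(["databases"], {"interests": ["sql"]},
                 {"keywords": ["mobile"]})
assert q == ["databases", "sql", "mobile"]
```

This matches the limitation noted below: the adaptation only enhances keyword queries and relies on the host system's own query mechanism to execute them.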

Findings

AES can be used as a starting point to adapt services by enriching them in response to different stimuli, whether they come from the environment, devices or user preferences.

Research limitations/implications

This work was tested in an academic environment and was only applied to enhance queries by using keywords. AES uses the query mechanism implemented in the system that invokes it.

Originality/value

This paper presents an integrated view of AES, including its formal description and details of its implementation. In particular, it includes an exhaustive and formal definition of the filters used to create the adaptation rules, and three different scenarios applying AES to adapt content according to user and context features. Finally, a comparative analysis is presented to highlight the strengths of the framework, especially its capacity for integration with systems that must provide user- and context-oriented services.

Details

International Journal of Web Information Systems, vol. 11 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 February 2004

J. Wang, B.M. Burton and G.M. Hannah

Abstract

This study examines differences in the extent of predictability in the pricing of the two main classes of equity traded in China, namely: A shares (available to Chinese investors) and B shares (traditionally available only to non‐Chinese investors). The study extends previous work by conducting a wider range of analyses and extending the sample period until the relaxation of rules preventing domestic investors from purchasing B shares. The results suggest that earlier evidence of greater predictability in the pricing of B shares is not entirely robust to changes in the method of analysis, and may only partially explain why Chinese authorities have recently decided to widen participation in the B market.

Details

Studies in Economics and Finance, vol. 22 no. 2
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 21 August 2007

Gregorio Martínez Pérez, Félix J. García Clemente and Antonio F. Gómez Skarmeta

Abstract

Purpose

The purpose of the paper is to provide a two‐tier framework for managing semantic‐aware distributed firewall policies to be applied to the devices existing in one administrative domain.

Design/methodology/approach

Special attention is paid to the CIM-based information model defined as the ontology used in this framework, and to the AI-based reasoning mechanisms and components used to perform the conflict discovery tasks over the distributed firewall policies.

Findings

The mechanisms presented allow solving some of the current issues of the network-centric security model used on the Internet. The two-tier framework provides semantic-aware mechanisms to perform conflict detection and automatic enforcement of policy rules in the distributed firewall scenario. This framework is based on a standard information model and a semantic-aware policy language to formally define (and then process) firewall policies.
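
One class of conflict such a framework must discover, shadowing, can be illustrated on a toy packet-filtering model. This direct pairwise comparison is only a stand-in for the paper's CIM-based ontology and semantic reasoning.

```python
# Toy packet-filtering model: a rule never fires ("is shadowed") if an
# earlier rule with a different action matches every packet it matches.
def covers(outer, inner):
    """True if `outer` matches every packet that `inner` matches.
    Each field is '*' (any) or a list of allowed values."""
    for field in ("src", "dst", "port"):
        if outer[field] != "*":
            if inner[field] == "*" or not set(inner[field]) <= set(outer[field]):
                return False
    return True

def shadowed(rules):
    """Report rule ids that can never fire because an earlier rule
    with a different action covers them."""
    out = []
    for i, low in enumerate(rules):
        for high in rules[:i]:
            if high["action"] != low["action"] and covers(high, low):
                out.append(low["id"])
                break
    return out

rules = [
    {"id": "r1", "src": "*", "dst": ["10.0.0.1"], "port": "*", "action": "deny"},
    {"id": "r2", "src": ["1.2.3.4"], "dst": ["10.0.0.1"], "port": [80], "action": "allow"},
]
assert shadowed(rules) == ["r2"]   # r2 is unreachable behind the deny
```

Detecting such anomalies before deployment is what lets the framework guarantee that the distributed firewall policies produce the administrator's intended effects.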

Research limitations/implications

Ongoing work is focused on identifying all kinds of conflicts and anomalies that may exist in firewall systems; in parallel, a semi-automatic resolver of conflicting policies is currently under design.

Practical implications

Network and security administrators can specify firewall policies and validate them to find syntactic and semantic errors (i.e. policy conflicts). A framework for automated validation and distribution of policies at different levels is included. This ensures that firewall policies produce the desired effects, facilitating the creation and maintenance of firewall rules in one administrative domain.

Originality/value

A practical and novel two-tier system that detects conflicts among the rules of a distributed firewall scenario and deploys these rules automatically and securely. A packet-filtering model, simple yet powerful enough for the conflict discovery and rule analysis processes, is proposed. Moreover, ontology and rule reasoning are proposed as techniques for the conflict detection problem in this particular scenario.

Details

Internet Research, vol. 17 no. 4
Type: Research Article
ISSN: 1066-2243
