Search results

1 – 10 of over 2000
Article
Publication date: 15 February 2008

Andrew Adamatzky

Abstract

Purpose

The purpose of this paper is to address the novel issues of executing graph optimization tasks on distributed simple growing biological systems.

Design/methodology/approach

The author utilizes biological and physical processes to implement non‐classical, and in principle more powerful, computing devices, and experimentally verifies his previously discovered techniques for approximating spanning trees during single-cell ontogeny. Plasmodium, the vegetative stage of the slime mold Physarum polycephalum, is used as an experimental computing substrate to approximate spanning trees. Points of a given data set are represented by the positions of nutrient sources; a plasmodium is then placed on one of the data points. The plasmodium develops and spans all sources of nutrients, connecting them by protoplasmic strands. The protoplasmic strands represent the edges of the computed spanning tree.
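
As a point of reference for the biological computation described above, the tree the plasmodium approximates can be computed classically. A minimal sketch using Prim's algorithm over Euclidean distances, growing the tree from a single starting point just as the plasmodium spreads from its initial source; the nutrient-source coordinates are hypothetical, not data from the paper:

```python
import math

def prim_spanning_tree(points):
    """Prim's algorithm: grow a minimum spanning tree from the first
    point, mirroring the plasmodium spreading from its starting source."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing the cut between tree and non-tree points.
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Hypothetical nutrient-source positions (illustrative only).
sources = [(0, 0), (1, 0), (1, 1), (3, 1)]
tree = prim_spanning_tree(sources)
```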

Findings

The paper offers an experimental implementation of plasmodium devices for the approximation of spanning trees.

Practical implications

The techniques discussed in the paper can be used in the design and development of soft-bodied robotic devices, including gel-based robots, massively reconfigurable robots and hybrid wet-hardware robots.

Originality/value

The paper discusses original ideas on growing spanning trees and provides an innovative experimental implementation.

Details

Kybernetes, vol. 37 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 10 January 2020

Khawla Asmi, Dounia Lotfi and Mohamed El Marraki

Abstract

Purpose

The state-of-the-art methods designed for overlapping community detection are limited by their high execution time, as in CPM, or by the need to supply parameters such as the number of communities in Bigclam and Nise_sph, which is nontrivial information. Hence, there is a need to improve accuracy, the primary goal, since the current state-of-the-art methods fail to achieve high correspondence with the ground truth for many instances of networks. The paper aims to address this issue.

Design/methodology/approach

The authors offer a new method that explores the union of all maximum spanning trees (UMST) and models the strength of links between nodes. In addition, each node in the UMST is linked with its most similar neighbor. From this model, the authors extract a local community for each node, and then combine the produced communities according to their number of shared nodes.
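
The UMST construction itself can be sketched classically. An edge belongs to some maximum spanning tree exactly when its endpoints are not already connected by strictly heavier edges, so processing edges in descending weight order with a union-find suffices. The similarity weights in the example graph below are invented, not from the paper:

```python
from itertools import groupby

class DSU:
    """Minimal union-find with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb

def umst(n, edges):
    """Union of all maximum spanning trees: an edge (u, v, w) is kept
    iff u and v are in different components after merging all strictly
    heavier edges."""
    dsu = DSU(n)
    result = []
    for _, group in groupby(sorted(edges, key=lambda e: -e[2]),
                            key=lambda e: e[2]):
        group = list(group)
        # Test the whole weight class before merging it in, so equal-weight
        # edges do not block each other.
        result += [e for e in group if dsu.find(e[0]) != dsu.find(e[1])]
        for u, v, _ in group:
            dsu.union(u, v)
    return result

# Hypothetical weighted similarity graph (node, node, similarity).
union = umst(4, [(0, 1, 3), (1, 2, 3), (0, 2, 1), (2, 3, 2)])
```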

Findings

The experiments on eight real-world data sets and four sets of artificial networks show that the proposed method achieves clear improvements over six state-of-the-art methods (BigClam, OSLOM, Demon, SE, DMST and ST) in terms of the F-score and ONMI for the networks with ground truth (Amazon, Youtube, LiveJournal and Orkut). For the other networks, it provides communities with good overlapping modularity.

Originality/value

In this paper, the authors investigate the UMST for overlapping community detection.

Article
Publication date: 4 October 2011

Khaldoun Khashanah and Linyan Miao

Abstract

Purpose

This paper empirically investigates the structural evolution of the US financial systems. It particularly aims to explore if the structure of the financial systems changes when the economy enters a recession.

Design/methodology/approach

The empirical analysis is conducted through the statistical approach of principal components analysis (PCA) and the graph theoretic approach of minimum spanning trees (MSTs).
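
The MST step of such analyses is commonly built from a correlation matrix via the distance transform d_ij = sqrt(2(1 − ρ_ij)), so that highly correlated assets sit close together in the tree. A sketch using Kruskal's algorithm; the asset names and correlation values are invented for illustration, not the paper's data:

```python
import math
from itertools import combinations

def correlation_mst(labels, corr):
    """Minimum spanning tree over assets, with edge lengths given by the
    common distance transform d_ij = sqrt(2 * (1 - rho_ij))."""
    n = len(labels)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # Kruskal: try pairs in order of increasing distance (decreasing correlation).
    pairs = sorted(combinations(range(n), 2),
                   key=lambda ij: math.sqrt(2 * (1 - corr[ij[0]][ij[1]])))
    tree = []
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((labels[i], labels[j]))
    return tree

# Hypothetical correlations between four markets (illustrative only).
labels = ["equities", "bonds", "credit", "fx"]
corr = [[1.0, 0.2, 0.6, 0.3],
        [0.2, 1.0, 0.4, 0.1],
        [0.6, 0.4, 1.0, 0.5],
        [0.3, 0.1, 0.5, 1.0]]
tree = correlation_mst(labels, corr)
```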

Findings

The PCA results suggest that the VIX was the dominant factor influencing the financial system prior to the recession; however, the monetary policy represented by the three‐month T‐bill yield became the leading factor in the system during the recession. By analyzing the MSTs, we find evidence that the structure of the financial system during the economic recession is substantially different from that during the period of economic expansion. Moreover, we discover that the financial markets are more integrated during the economic recession. The much stronger integration of the financial system was found to start right before the advent of the recession.

Practical implications

Research findings will help individuals, institutions, regulators and central bankers better understand the market structure under economic turmoil, so that more efficient strategies can be used to minimize systemic risk.

Originality/value

This study compares the structure of the US financial markets in economic expansion and contraction periods. The structural dynamics of the financial system are explored, focusing on the recent economic recession triggered by the US subprime mortgage crisis. We introduce a new systemic risk measure.

Details

Studies in Economics and Finance, vol. 28 no. 4
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 23 October 2007

Matthias Wählisch and Thomas C. Schmidt

Abstract

Purpose

This paper aims to discuss problems, requirements and current trends for deploying group communication in real‐world scenarios from an integrated perspective.

Design/methodology/approach

The Hybrid Shared Tree is introduced – a new architecture and routing approach that combines network- and subnetwork-layer multicast services in end-system domains with transparent, structured overlays on the inter-domain level.

Findings

The paper finds that the Hybrid Shared Tree solution is highly scalable and robust, and offers provider-oriented features to stimulate deployment.

Originality/value

A straightforward perspective is indicated in the paper for a mobility‐agnostic routing layer for future use.

Details

Internet Research, vol. 17 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 February 1977

C.J. VAN RIJSBERGEN

Abstract

This paper provides a foundation for a practical way of improving the effectiveness of an automatic retrieval system. Its main concern is the weighting of index terms as a device for increasing retrieval effectiveness. Previously, index terms have been assumed to be independent, for the good reason that a very simple weighting scheme can then be used. In reality, index terms are most unlikely to be independent. This paper explores one way of removing the independence assumption: instead, the extent of the dependence between index terms is measured and used to construct a non-linear weighting function. In a practical situation the values of some of the parameters of such a function must be estimated from small samples of documents, so a number of estimation rules are discussed and one in particular is recommended. Finally, the feasibility of the computations required for a non-linear weighting scheme is examined.
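
One standard way to capture pairwise term dependence in this spirit is a maximum spanning tree over estimated mutual information between terms, so that only the strongest dependencies are retained. The following is a sketch under that assumption, not the paper's exact estimator, and the toy document collection is invented:

```python
import math
from itertools import combinations

def mutual_information(docs, t1, t2):
    """Empirical mutual information between two binary term-occurrence
    variables, estimated from a collection of documents (sets of terms)."""
    n = len(docs)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = sum((t1 in d) == a and (t2 in d) == b for d in docs) / n
            p_a = sum((t1 in d) == a for d in docs) / n
            p_b = sum((t2 in d) == b for d in docs) / n
            if p_ab > 0:
                mi += p_ab * math.log(p_ab / (p_a * p_b))
    return mi

def dependence_tree(terms, docs):
    """Maximum spanning tree over pairwise mutual information: Kruskal
    on pairs sorted by decreasing dependence strength."""
    parent = {t: t for t in terms}
    def find(t):
        while parent[t] != t:
            t = parent[t]
        return t
    pairs = sorted(combinations(terms, 2),
                   key=lambda p: -mutual_information(docs, *p))
    tree = []
    for u, v in pairs:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Tiny invented collection: documents as sets of index terms.
docs = [{"graph", "tree"}, {"graph", "tree", "search"},
        {"search", "index"}, {"index"}]
tree = dependence_tree(["graph", "tree", "search", "index"], docs)
```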

Details

Journal of Documentation, vol. 33 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 March 2001

Ching‐Yun Lee, Yi‐Shiung Yeh and Deng‐Jyi Chen

Abstract

Information technologies have ushered in a new era for computer-related communications, and use of the Internet for commercial applications and resource sharing has accelerated in recent years. Owing to such developments, computer security has become a critical issue. In some applications, a critical message can be divided into pieces and allocated to several different sites over the Internet for reasons of access security. To access such an important message, one must retrieve the divided pieces from the different sites to reconstruct it. In this paper, we propose model calculations to evaluate the probability of secret reconstruction, and a weighted share assignment algorithm for assigning shares to hosts in such a way that the probability of being able to reconstruct the secret is maximized. Examples are presented to illustrate the feasibility of the proposed approach.
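
A minimal model of the reconstruction probability might look as follows, assuming independent host failures, replication of each piece across a set of hosts, and that every piece must be retrievable. The assignment and availability figures are hypothetical, not the paper's model calculations:

```python
import math

def reconstruction_probability(assignment, availability):
    """Probability that every piece of the secret can be retrieved:
    a piece is available if at least one host holding it is up.
    Host failures are assumed independent (a simplifying assumption)."""
    prob = 1.0
    for hosts in assignment.values():
        # P(piece retrievable) = 1 - P(all its hosts are down).
        p_piece_up = 1.0 - math.prod(1.0 - availability[h] for h in hosts)
        prob *= p_piece_up
    return prob

# Hypothetical share assignment and per-host availabilities.
assignment = {"piece1": ["A", "B"], "piece2": ["C"]}
availability = {"A": 0.9, "B": 0.8, "C": 0.95}
prob = reconstruction_probability(assignment, availability)
```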

Details

Information Management & Computer Security, vol. 9 no. 1
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 5 March 2018

Stéphane Brisset and Tuan-Vu Tran

Abstract

Purpose

This paper aims to propose a multiobjective branch and bound (MOBB) algorithm with new criteria for the branching and discarding of nodes, based on Pareto dominance and a contribution metric.

Design/methodology/approach

A multiobjective branch and bound (MOBB) method is presented and applied to the bi-objective combinatorial optimization of a safety transformer. A comparison with exhaustive enumeration and non-dominated sorting genetic algorithm (NSGA2) confirms the solutions.
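
The Pareto-dominance test underlying such branching and discarding criteria can be sketched as follows (minimization is assumed, and the objective vectors are illustrative, not the transformer design data):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of objective vectors down to its Pareto front;
    a branch-and-bound node whose bound is dominated can be discarded."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cost, loss) vectors for candidate designs.
front = nondominated([(1, 5), (2, 3), (3, 3), (4, 1)])
```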

Findings

It appears that MOBB and NSGA2 are both sensitive to their control parameters. The parameters for the MOBB algorithm are the number of starting points and the number of solutions on the relaxed Pareto front. The parameters of NSGA2 are the population size and the number of generations.

Originality/value

The comparison with exhaustive enumeration confirms that the proposed algorithm is able to find the complete set of non-dominated solutions in about 235 times fewer evaluations. Because exhaustive enumeration is exact, the confidence in this validation is high.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 January 1989

EDIE M. RASMUSSEN and PETER WILLETT

Abstract

Hierarchic agglomerative methods of cluster analysis are very demanding of computational resources when implemented for large datasets on conventional computers. The ICL Distributed Array Processor (DAP) allows many of the scanning and matching operations required in clustering to be carried out in parallel. Experiments are described using the single linkage and Ward's hierarchical agglomerative clustering methods on both real and simulated datasets. Clustering runs on the DAP are compared with the most efficient algorithms currently available implemented on an IBM 3083 BX. The DAP is found to be 2.9–7.9 times as fast as the IBM, the exact degree of speed-up depending on the size of the dataset, the clustering method, and the serial clustering algorithm that is used. An analysis of the cycle times of the two machines suggests that further, very substantial speed-ups could be obtained from array processors of this type if they were based on more powerful processing elements.
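
Single linkage, one of the two methods studied, is equivalent to cutting the longest edges of a minimum spanning tree of the pairwise distances. A serial sketch of that equivalence; the data points are invented for illustration:

```python
import math

def single_linkage_clusters(points, k):
    """Single-linkage clustering via the MST equivalence: build the
    minimum spanning tree of pairwise distances, then drop the k-1
    longest edges; the remaining components are the k clusters."""
    n = len(points)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # Kruskal's algorithm over all pairwise Euclidean distances.
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    mst = []
    for d, i, j in edges:
        if find(i) != find(j):
            parent[find(i)] = find(j)
            mst.append((d, i, j))
    # Re-merge using only the n-k shortest MST edges.
    parent = list(range(n))
    for d, i, j in sorted(mst)[: n - k]:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Two well-separated groups of points (illustrative data).
labels = single_linkage_clusters([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)], 2)
```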

Details

Journal of Documentation, vol. 45 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 20 March 2019

Yanchao Sun, Liangliang Chen and Hongde Qin

Abstract

Purpose

This paper aims to investigate the distributed coordinated fuzzy tracking problems for multiple mechanical systems with nonlinear model uncertainties under a directed communication topology.

Design/methodology/approach

The dynamic leader case is considered while only a subset of the follower mechanical systems can obtain the leader information. First, this paper approximates the system uncertainties with finite fuzzy rules and proposes a distributed adaptive tracking control scheme. Then, this paper makes a detailed classification of the system uncertainties and uses different fuzzy systems to approximate different kinds of uncertainties. Further, an improved distributed tracking strategy is proposed. Closed-loop systems are investigated using graph theory and Lyapunov theory. Numerical simulations are performed to verify the effectiveness of the proposed methods.
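
The leader-following setup, in which only a subset of followers observe the leader, can be illustrated with a toy discrete-time linear consensus protocol. This is far simpler than the paper's adaptive fuzzy design; the topology, gain and states below are invented:

```python
def consensus_tracking_step(x, leader, adj, sees_leader, gain=0.2):
    """One step of a discrete-time leader-following consensus protocol:
    each follower moves toward the states of its in-neighbors and, if it
    observes the leader, toward the leader's state as well."""
    return [xi + gain * (sum(x[j] - xi for j in adj[i])
                         + (leader - xi if sees_leader[i] else 0.0))
            for i, xi in enumerate(x)]

# Directed chain of three followers; only follower 0 observes the leader.
adj = [[], [0], [1]]
sees_leader = [True, False, False]
x = [0.0, 0.0, 0.0]
for _ in range(300):
    x = consensus_tracking_step(x, 1.0, adj, sees_leader)
```

With a connected directed spanning tree rooted at the leader, every follower's state converges to the leader's value, which is the structural condition the paper's graph-theoretic analysis relies on.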

Findings

Based on fuzzy control and adaptive control theories, the desired distributed coordinated tracking control strategies for multiple uncertain mechanical systems are developed.

Originality/value

Compared with most existing literature, the proposed distributed tracking algorithms use fuzzy control and adaptive control techniques to cope with system nonlinear uncertainties of multiple mechanical systems. Moreover, the improved control strategy not only reduces fuzzy rules but also has higher control accuracy.

Details

Assembly Automation, vol. 39 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 April 2002

Mike Thelwall

Abstract

Aggregates of links are of interest to information scientists in the same way as citation counts: as potential sources of data from which new knowledge can be mined. The paper builds on the recent discovery of a correlation between a Web link count measure and the research quality of British universities by applying a range of multivariate statistical techniques to counts of links between pairs of universities, as an initial attempt at developing an understanding of this phenomenon. Plausible results are extracted, and the techniques also identify outliers in the data, some of which were verified by being tracked down to identifiable Web phenomena. This is an important outcome, because successful anomaly identification is a precondition for more effective analysis of this kind of data. The identification of groupings is encouraging evidence that Web links between universities can be mined for significant results, although it is clear that more methodological development is needed if any but the simplest patterns are to be extracted. Finally, based upon the types of patterns extracted, it is argued that none of the methods used is capable of fully analysing link structures on its own.

Details

Aslib Proceedings, vol. 54 no. 2
Type: Research Article
ISSN: 0001-253X
