Search results

1 – 10 of over 1000
Article
Publication date: 5 September 2016

Qingyuan Wu, Changchen Zhan, Fu Lee Wang, Siyang Wang and Zeping Tang

Abstract

Purpose

The quick growth of web-based and mobile e-learning applications such as massive open online courses has created a large volume of online learning resources. Confronted with such a large amount of learning data, it is important to develop effective clustering approaches for user group modeling and intelligent tutoring. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, a minimum spanning tree based approach is proposed for the clustering of online learning resources. The novel clustering approach has two main stages, namely an elimination stage and a construction stage. During the elimination stage, the Euclidean distance is adopted as the metric to measure the density of learning resources. Resources with very low densities are identified as outliers and removed. During the construction stage, a minimum spanning tree is built by initializing the centroids according to the degree of freedom of the resources. Online learning resources are subsequently partitioned into clusters by exploiting the structure of the minimum spanning tree.
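The abstract above does not give the authors' two-stage algorithm in detail. As an illustrative sketch only (not the paper's exact elimination/construction procedure), a common MST-clustering variant builds a minimum spanning tree over the points with Kruskal's algorithm and cuts the longest tree edges to form clusters; the point set below is hypothetical:

```python
from itertools import combinations
from math import dist

def mst_clusters(points, k):
    """Cluster points by building an MST (Kruskal) over the complete
    graph of Euclidean distances, then keeping only the n-k lightest
    MST edges; the resulting connected components are the clusters."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    # Cutting the k-1 heaviest MST edges splits the tree into k parts.
    mst.sort()
    parent = list(range(n))
    for w, i, j in mst[:n - k]:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Two well-separated toy groups; expect two clusters.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = mst_clusters(pts, 2)
```

Outlier elimination as in the paper would run before this step; here every point is kept for simplicity.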

Findings

Conventional clustering algorithms have a number of shortcomings that prevent them from handling online learning resources effectively. On the one hand, extant partitional clustering methods use a randomly assigned centroid for each cluster, which usually causes ineffective clustering results. On the other hand, classical density-based clustering methods are computationally expensive and time-consuming. Experimental results indicate that the proposed algorithm outperforms traditional clustering algorithms for online learning resources.

Originality/value

The effectiveness of the proposed algorithms has been validated on several data sets. Moreover, the proposed clustering algorithm has great potential in e-learning applications. It has been demonstrated how the novel technique can be integrated into various e-learning systems. For example, the clustering technique can classify learners into groups so that homogeneous grouping can improve the effectiveness of learning. Moreover, clustering of online learning resources is valuable for decision making on tutorial strategies and instructional design for intelligent tutoring. Lastly, a number of directions for future research have been identified in the study.

Details

Asian Association of Open Universities Journal, vol. 11 no. 2
Type: Research Article
ISSN: 1858-3431

Article
Publication date: 15 August 2016

Changqing Luo, Mengzhen Li and Zisheng Ouyang

Abstract

Purpose

The purpose of this paper is to study the correlation structure of the credit spreads.

Design/methodology/approach

The minimal spanning tree is used to find the risk center node and the basic correlation structure of the credit spreads. The dynamic copula and pair copula models are applied to capture the dynamic and non-linear correlation structure.

Findings

The authors take enterprise bonds with trading data from January 2013 to December 2013 as the research sample. The empirical study of the minimum spanning tree shows that the credit risk of corporate bonds forms a network structure with a center node. Meanwhile, the correlation between credit spreads shows dynamic characteristics. Under the framework of the dynamic copula, the lower tail dependence is less than the upper tail dependence; thus, in economic boom periods, the dynamic correlation is more significant than in recession periods. The authors also find that the centrality of the credit risk network is not significant according to the pair copula and Granger causality test. The empirical study shows that the goodness-of-fit of the D-vine is superior to that of the Canonical vine, and the Granger causality test additionally proves that the center node influences only a few other nodes in the risk network. Thus the center node captured by the minimum spanning tree is a weak center node, and this characteristic indicates that the risk network of credit spreads is generated mostly by external shocks rather than by internal risk contagion.

Originality/value

This paper provides new ideas for investors and researchers to analyze the credit risk correlation or contagion.

Details

China Finance Review International, vol. 6 no. 3
Type: Research Article
ISSN: 2044-1398

Book part
Publication date: 15 August 2006

Steven Cosares and Fred J. Rispoli

Abstract

We address the problem of selecting a topological design for a network having a single traffic source and uncertain demand at the remaining nodes. Solving the associated fixed charge network flow (FCF) problem requires finding a network design that limits both the fixed costs of establishing links and the variable costs of sending flow to the destinations. In this paper, we discuss how to obtain a sequence of optimal solutions that arise as the demand intensity varies from low levels to high. One of the network design alternatives associated with these solutions will be chosen based upon the dominant selection criteria of the decision maker. We consider both probabilistic and non-probabilistic criteria and compare the network designs associated with each. We show that the entire sequence of optimal solutions can be identified with little more effort than solving a single FCF problem instance. We also provide solution approaches that are relatively efficient and suggest good design alternatives based upon approximations to the optimal sequence.

Details

Applications of Management Science: In Productivity, Finance, and Operations
Type: Book
ISBN: 978-0-85724-999-9

Article
Publication date: 15 February 2008

Andrew Adamatzky

Abstract

Purpose

The purpose of this paper is to address the novel issues of executing graph optimization tasks on distributed simple growing biological systems.

Design/methodology/approach

The author utilizes biological and physical processes to implement non-classical, and in principle more powerful, computing devices. The author experimentally verifies his previously discovered techniques for approximating spanning trees during single-cell ontogeny. Plasmodium, a vegetative stage of the slime mold Physarum polycephalum, is used as an experimental computing substrate to approximate spanning trees. Points of a given data set are represented by the positions of nutrient sources; a plasmodium is then placed on one of the data points. The plasmodium develops and spans all sources of nutrients, connecting them by protoplasmic strands. The protoplasmic strands represent the edges of the computed spanning tree.

Findings

The paper offers an experimental implementation of plasmodium devices for the approximation of spanning trees.

Practical implications

The techniques discussed in the paper can be used in the design and development of soft-bodied robotic devices, including gel-based robots, massively reconfigurable robots, and hybrid wet-hardware robots.

Originality/value

The paper discusses original ideas on growing spanning trees and provides an innovative experimental implementation.

Details

Kybernetes, vol. 37 no. 2
Type: Research Article
ISSN: 0368-492X

Book part
Publication date: 5 July 2012

Delphine Lautier and Franck Raynaud

Abstract

In this chapter, we propose a nonconventional methodology, graph theory, which is especially relevant for the study of high-dimensional financial data. We illustrate the advantages of this method in the context of systemic risk in derivative markets, a major subject in finance today. A key point is that this methodology can be used in various areas. Numerous applications now face the challenge of analyzing gigantic financial data sets, which are more and more frequent. We offer a pedagogical introduction to the use of graph theory in finance and to some tools provided by this method. As we focus on systemic risk, we first examine correlation-based graphs in order to investigate market integration and inter/cross-market linkages. We then restrict the analysis to a subset of these graphs, the so-called “minimum spanning trees.” We study their topological and dynamic properties and discuss the relevance of these tools as well as the robustness of the empirical results relying on them.

Details

Derivative Securities Pricing and Modelling
Type: Book
ISBN: 978-1-78052-616-4

Article
Publication date: 8 May 2019

Youli Wang, Liming Dai, Xueliang Zhang and Xiaohui Wang

Abstract

Purpose

The purpose of this paper is to obtain the reasonable dimensioning for each part and a full-dimension model of assembly dimensions.

Design/methodology/approach

The relational path graph of assembly dimension, the shortest-path spanning tree of functional dimension and a revised spanning tree are established in this paper.

Findings

The proposed method can obtain reasonable dimensioning of parts and establish a dimension model in an assembly.

Originality/value

The proposed method can easily be realised by computer and is well suited to automatic dimensioning and the establishment of dimension models of parts.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 10 January 2020

Khawla Asmi, Dounia Lotfi and Mohamed El Marraki

Abstract

Purpose

The state-of-the-art methods designed for overlapping community detection are limited by their high execution time, as in CPM, or by the need to provide parameters such as the number of communities in Bigclam and Nise_sph, which is nontrivial information. Hence, there is a need to improve accuracy, the primary goal, as the current state-of-the-art methods fail to achieve high correspondence with the ground truth for many instances of networks. The paper aims to discuss this issue.

Design/methodology/approach

The authors offer a new method that explores the union of all maximum spanning trees (UMST) and models the strength of links between nodes. In addition, each node in the UMST is linked with its most similar neighbor. From this model, the authors extract a local community for each node, and then combine the produced communities according to their number of shared nodes.
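Computing the full union of all maximum spanning trees is more involved than this summary suggests; as a hedged sketch of the core building block only, a single maximum spanning tree can be obtained by running Kruskal's algorithm on edges sorted by descending similarity (the toy similarity graph below is hypothetical, not the paper's data):

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm over descending similarities: keep the
    strongest links that connect all n nodes without forming cycles."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Toy graph: 4 nodes, edges given as (similarity, u, v).
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.3, 0, 2), (0.7, 2, 3), (0.1, 0, 3)]
tree = maximum_spanning_tree(4, edges)
```

The UMST of the paper would union the trees produced over all tie-breaking orders; this sketch shows only one such tree.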

Findings

The experiments on eight real-world data sets and four sets of artificial networks show that the proposed method achieves obvious improvements over state-of-the-art methods (BigClam, OSLOM, Demon, SE, DMST and ST) in terms of the F-score and ONMI for the networks with ground truth (Amazon, Youtube, LiveJournal and Orkut). For the other networks, it also provides communities with good overlapping modularity.

Originality/value

In this paper, the authors investigate the UMST for the overlapping community detection.

Article
Publication date: 4 October 2011

Khaldoun Khashanah and Linyan Miao

Abstract

Purpose

This paper empirically investigates the structural evolution of the US financial system. It particularly aims to explore whether the structure of the financial system changes when the economy enters a recession.

Design/methodology/approach

The empirical analysis is conducted through the statistical approach of principal components analysis (PCA) and the graph theoretic approach of minimum spanning trees (MSTs).
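A common way to obtain an MST from a correlation matrix (the abstract does not state the paper's exact distance choice, so this is an assumption) is to map each correlation coefficient to a metric distance before running the tree algorithm:

```python
from math import sqrt

def corr_to_distance(rho):
    """Map a correlation rho in [-1, 1] to a metric distance in [0, 2]:
    perfectly correlated assets sit at distance 0, perfectly
    anti-correlated ones at distance 2."""
    return sqrt(2.0 * (1.0 - rho))

corr_to_distance(1.0)   # identical assets -> distance 0
corr_to_distance(-1.0)  # opposite assets  -> distance 2
```

Running any standard MST algorithm on the resulting distance matrix yields the tree whose topology and dynamics the paper studies.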

Findings

The PCA results suggest that the VIX was the dominant factor influencing the financial system prior to the recession; however, monetary policy, represented by the three-month T-bill yield, became the leading factor in the system during the recession. By analyzing the MSTs, we find evidence that the structure of the financial system during the economic recession is substantially different from that during the period of economic expansion. Moreover, we discover that the financial markets are more integrated during the economic recession. This much stronger integration of the financial system was found to start right before the advent of the recession.

Practical implications

Research findings will help individuals, institutions, regulators, and central bankers better understand the market structure under economic turmoil, so that more efficient strategies can be used to minimize systemic risk.

Originality/value

This study compares the structure of the US financial markets in economic expansion and contraction periods. The structural dynamics of the financial system are explored, focusing on the recent economic recession triggered by the US subprime mortgage crisis. We introduce a new systemic risk measure.

Details

Studies in Economics and Finance, vol. 28 no. 4
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 5 February 2018

Shashank Gupta and Piyush Gupta

Abstract

Purpose

Material handling (MH) is an important facility in any manufacturing system. It facilitates the transport of in-process material from one workstation (WS) to another. MH devices incur capital costs; therefore, minimizing their deployment without compromising smooth material flow ensures savings in addition to the optimal use of productive shop floor space, and avoids space cluttering. The purpose of this paper is to evaluate the minimal network that connects all the WSs, thereby ensuring economic and safe manufacturing operations.

Design/methodology/approach

A graph-theoretical approach and Prim's algorithm for the minimal spanning tree are used to evaluate the minimal span of the MH devices. The algorithm is initialized by translating the graph of WSs into a distance matrix to evaluate the minimal MH network.
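As a hedged sketch of the approach described above (the four-workstation layout below is hypothetical, not the paper's case study), Prim's algorithm can be run directly on such a distance matrix:

```python
INF = float("inf")

def prim_mst(dmatrix):
    """Prim's algorithm on a symmetric distance matrix: grow the tree
    from workstation 0, always attaching the cheapest outside node.
    Returns the MST edges and the total length of handling track."""
    n = len(dmatrix)
    in_tree = [False] * n
    best = [INF] * n        # cheapest known connection cost into the tree
    link = [-1] * n         # which tree node that connection attaches to
    best[0] = 0.0
    edges, total = [], 0.0
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        if link[u] != -1:
            edges.append((link[u], u))
        # Relax connection costs for the remaining nodes.
        for v in range(n):
            if not in_tree[v] and dmatrix[u][v] < best[v]:
                best[v] = dmatrix[u][v]
                link[v] = u
    return edges, total

# Hypothetical 4-workstation layout (pairwise distances in metres).
d = [[0, 2, 6, 3],
     [2, 0, 5, 8],
     [6, 5, 0, 4],
     [3, 8, 4, 0]]
edges, total = prim_mst(d)  # total span of MH track = 9 metres
```

For n workstations this matrix form runs in O(n²), which is adequate for shop-floor-sized instances.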

Findings

The minimal length of the MH devices is evaluated for a typical case study.

Research limitations/implications

The step-by-step methodology explained in the manuscript acts as a good guide for practicing operational managers. The shortcoming of the methodology is that it presumes the use of modular MH devices that will need to be reconfigured based on dynamic changes to the manufacturing system.

Practical implications

The methodology is explained in detail to enable the practicing managers to use it for designing their MH networks in any manufacturing system.

Originality/value

There is no evidence to indicate the use of the minimal spanning tree algorithm for the design of MH networks in a manufacturing system. This paper attempts to fill this gap.

Details

Journal of Advances in Management Research, vol. 15 no. 1
Type: Research Article
ISSN: 0972-7981

Article
Publication date: 18 October 2011

Andrew Adamatzky and Pedro P.B. de Oliveira

Abstract

Purpose

This paper seeks to develop experimental laboratory biological techniques for approximating existing road networks, optimizing transport links, and designing alternative optimal solutions to current transport problems. It studies how the slime mould Physarum polycephalum approximates the highway networks of Brazil.

Design/methodology/approach

The 21 most populous urban areas in Brazil are considered and represented by sources of nutrients placed at the positions of the slime mould's growing substrate corresponding to those areas. At the beginning of each experiment, the slime mould is inoculated in the São Paulo area. The slime mould exhibits foraging behavior and spans the sources of nutrients (which represent urban areas) with a network of protoplasmic tubes (which approximate vehicular transport networks). The structures of the transport networks developed by the slime mould are analyzed and compared with families of known proximity graphs. The paper also imitates the slime-mould response to a simulated disaster.

Findings

It was found that the plasmodium of P. polycephalum develops a minimal approximation of a transport network spanning the urban areas. The Physarum-developed network matches the man-made highway network very well. The high degree of similarity is preserved even when high-demand constraints are placed on the repeatability of links in the experiments. Physarum approximates almost all major transport links. In response to a sudden disaster gradually spreading from its epicenter, the Physarum transport networks react by abandoning transport links affected by the disaster zone, enhancing those not directly affected by the disaster, sprouting massively from the epicenter, and increasing scouting activity in the regions distant from the epicenter of the disaster.

Originality/value

The experimental methods and computer analysis techniques presented in the paper lay the foundation for novel biological laboratory approaches to the imitation and prognostication of socio-economic developments.

Details

Kybernetes, vol. 40 no. 9/10
Type: Research Article
ISSN: 0368-492X

