Search results

1 – 10 of 112
Article
Publication date: 18 February 2021

Rafael Renteria, Mario Chong, Irineu de Brito Junior, Ana Luna and Renato Quiliche

This paper aims to design a vulnerability assessment model that takes a multidimensional and systematic approach to disaster risk and vulnerability. This model serves both…

Abstract

Purpose

This paper aims to design a vulnerability assessment model that takes a multidimensional and systematic approach to disaster risk and vulnerability. The model serves both the risk mitigation and disaster preparedness phases of humanitarian logistics.

Design/methodology/approach

A survey of 27,218 households in Pueblo Rico and Dosquebradas was conducted to obtain information about disaster risk for landslides, floods and collapses. We adopted a cross-entropy-based approach to measure disaster vulnerability (Kullback–Leibler divergence) and a maximum-entropy estimation to reconstruct the a priori risk categorization (logistic regression). Sen's capabilities approach theoretically supported our multidimensional assessment of disaster vulnerability.
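
For reference, the Kullback–Leibler divergence that underlies the vulnerability measure has the standard form below; the notation is generic, and the specific household and reference distributions being compared follow the paper's own construction:

```latex
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}
```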

Findings

Disaster vulnerability is shaped by economic indicators, such as the physical attributes of households, and by health indicators, specifically morbidity indicators, which seem to affect vulnerability outputs. Vulnerability is heterogeneous across communities/districts according to formal comparisons of Kullback–Leibler divergence. Neither the social dimension nor chronic illness indicators seem to shape vulnerability, at least for Pueblo Rico and Dosquebradas.

Research limitations/implications

The results need a qualitative or case study validation at the community/district level.

Practical implications

We discuss how risk mitigation policies and disaster preparedness strategies can be driven by the empirical results. For example, the type of stock to preposition can vary according to the disaster, and alternative policies can be formulated on the basis of the strong relationship between morbidity and disaster risk.

Originality/value

Entropy-based metrics, like empirical data-driven techniques, are not widely used in the humanitarian logistics literature.

Details

Journal of Humanitarian Logistics and Supply Chain Management, vol. 11 no. 3
Type: Research Article
ISSN: 2042-6747


Article
Publication date: 28 August 2009

Vassiliki A. Koutsonikola, Sophia G. Petridou, Athena I. Vakali and Georgios I. Papadimitriou

Web users' clustering is an important mining task since it contributes to identifying usage patterns, a task beneficial for a wide range of applications that rely on the web. The…

Abstract

Purpose

Web users' clustering is an important mining task since it contributes to identifying usage patterns, a task beneficial for a wide range of applications that rely on the web. The purpose of this paper is to examine the use of Kullback–Leibler (KL) divergence, an information-theoretic distance, as an alternative option for measuring distances in web users' clustering.

Design/methodology/approach

KL‐divergence is compared with other well‐known distance measures and clustering results are evaluated using a criterion function, validity indices, and graphical representations. Furthermore, the impact of noise (i.e. occasional or mistaken page visits) is evaluated, since it is imperative to assess whether a clustering process exhibits tolerance in noisy environments such as the web.
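
A minimal sketch of how such a KL-based distance could be used for web users' clustering is given below; the toy visit counts, the smoothing constant and the average-linkage clustering step are illustrative assumptions, not the paper's exact procedure:

```python
# Illustrative sketch (not the authors' code): clustering web users by
# Kullback-Leibler divergence between their page-visit distributions.
import numpy as np
from scipy.stats import entropy                      # entropy(p, q) = KL(p || q)
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical usage data: rows = users, columns = page-visit counts.
counts = np.array([[5, 1, 0, 2],
                   [4, 2, 1, 1],
                   [0, 1, 6, 3],
                   [1, 0, 5, 4]], dtype=float)

eps = 1e-9                                           # smoothing so KL stays finite
p = (counts + eps) / (counts + eps).sum(axis=1, keepdims=True)

n = p.shape[0]
kl = np.array([[entropy(p[i], p[j]) for j in range(n)] for i in range(n)])
sym_kl = 0.5 * (kl + kl.T)                           # symmetrise for hierarchical clustering

labels = fcluster(linkage(squareform(sym_kl, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(labels)                                        # e.g. [1 1 2 2]
```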

Findings

The proposed KL clustering approach performs similarly to other distance measures under both synthetic and real data workloads. Moreover, when extra noise is imposed on the real data, the approach shows minimal deterioration compared with most of the other conventional distance measures.

Practical implications

The experimental results show that a probabilistic measure such as KL-divergence proves quite efficient in noisy environments and thus constitutes a good alternative for the web users' clustering problem.

Originality/value

This work is inspired by the use of KL-divergence in clustering biological data, and the authors introduce it to the area of web clustering. According to the experimental results presented in this paper, KL-divergence can be considered a good alternative for measuring distances in noisy environments such as the web.

Details

International Journal of Web Information Systems, vol. 5 no. 3
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 4 August 2020

Mehmet Caner Akay and Hakan Temeltaş

Heterogeneous teams consisting of unmanned ground vehicles and unmanned aerial vehicles are being used for different types of missions such as surveillance, tracking and…


Abstract

Purpose

Heterogeneous teams consisting of unmanned ground vehicles and unmanned aerial vehicles are being used for different types of missions such as surveillance, tracking and exploration. Exploration missions with heterogeneous robot teams (HeRTs) should acquire a common map for a better understanding of the surroundings. The purpose of this paper is to provide a unique approach in which cooperating agents obtain a well-detailed observation of environments that involve challenging details and complex structures. The method is also suitable for real-time applications and autonomous path planning for exploration.

Design/methodology/approach

Lidar odometry and mapping and various similarity metrics, such as Shannon entropy, Kullback–Leibler divergence, Jeffrey divergence, K divergence, Topsoe divergence, Jensen–Shannon divergence and Jensen divergence, are used to construct a common height map of the environment. Furthermore, the authors present a layering method that provides more accuracy and a better understanding of the common map.
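
As a reference point, a brief sketch of a few of these entropy-based metrics applied to two hypothetical height-bin histograms is shown below; the histograms and the binning are illustrative assumptions, not the authors' data or implementation:

```python
# Illustrative sketch: comparing two local height-map histograms with some of
# the entropy-based similarity metrics named in the abstract.
import numpy as np
from scipy.stats import entropy          # entropy(p) = Shannon entropy, entropy(p, q) = KL(p || q)

def normalise(hist, eps=1e-12):
    h = np.asarray(hist, dtype=float) + eps
    return h / h.sum()

# Hypothetical histograms over height bins from two robots' local maps.
p = normalise([12, 30, 25, 8, 1])
q = normalise([10, 28, 27, 9, 2])
m = 0.5 * (p + q)

metrics = {
    "shannon_p": entropy(p),                          # Shannon entropy of one map
    "kl_pq": entropy(p, q),                           # Kullback-Leibler divergence
    "jeffrey": entropy(p, q) + entropy(q, p),         # symmetrised KL (Jeffrey divergence)
    "jensen_shannon": 0.5 * entropy(p, m) + 0.5 * entropy(q, m),
}
print(metrics)
```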

Findings

In summary, in the experiments the authors observed features located beneath and above trees and roofed areas without any need for a global positioning system signal. Additionally, with the determined similarity metric and the layering method, a more effective common map is obtained that enables planning trajectories for both vehicles.

Originality/value

In this study, the authors present a unique solution that implements various entropy-based similarity metrics with the aim of constructing common maps of the environment with HeRTs. To create common maps, Shannon entropy-based similarity metrics can be used, as Shannon entropy is the only measure that satisfies the chain rule of conditional probability exactly. Seven distinct similarity metrics are compared, and the most effective one is chosen to obtain a more comprehensive and valid common map. Moreover, different from other studies in the literature, the layering method is used to compute the similarities of each local map obtained by a HeRT. This method also improves the accuracy of the merged common map, as the robots' fields of view prevent identical observations of the environment around features such as roofed areas or trees. This novel approach can also be used in global positioning system-denied and closed environments. The results are verified with experiments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 4 January 2021

Ben Mansour Dia

The author examines the sequestration of CO2 in abandoned geological formations where leakages are permitted up to only a certain threshold to meet the international CO2 emissions…

Abstract

Purpose

The author examines the sequestration of CO2 in abandoned geological formations where leakages are permitted up to only a certain threshold to meet international CO2 emissions standards. Technically, the author addresses a Bayesian experimental design problem to optimally mitigate uncertainties and to perform risk assessment on a CO2 sequestration model, where the parameters to be inferred are random subsurface properties while the quantity of interest is to be kept within safety margins.

Design/methodology/approach

The author starts with a probabilistic formulation of learning the leakage rate and later relaxes it to a Bayesian experimental design for learning the formation's geophysical properties. The injection rate is the design parameter, and the learned properties are used to estimate the leakage rate by means of a nonlinear operator. The forward model governs a two-phase, two-component flow in a porous medium with no solubility of CO2 in water. The Laplace approximation is combined with Monte Carlo sampling to estimate the expectation of the Kullback–Leibler divergence, which serves as the objective function.
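
In generic notation (ours, not the paper's), the design objective being estimated is the expected information gain, with the inner posterior replaced by its Laplace (Gaussian) approximation and the outer expectation by Monte Carlo sampling:

```latex
U(\xi) \;=\; \mathbb{E}_{y \mid \xi}\!\left[ D_{\mathrm{KL}}\!\big( p(\theta \mid y, \xi) \,\big\|\, p(\theta) \big) \right]
\;\approx\; \frac{1}{N}\sum_{n=1}^{N} D_{\mathrm{KL}}\!\big( \hat{p}_{\mathrm{Lap}}(\theta \mid y^{(n)}, \xi) \,\big\|\, p(\theta) \big)
```

Here ξ is the injection rate (the design parameter), θ the uncertain subsurface properties, and y^(n) ~ p(y | ξ) are simulated observations.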

Findings

Different scenarios of confining CO2 while measuring the risk of harmful leakages are analyzed numerically. The efficiency of inverting the CO2 leakage rate improves with the injection rate, as large gains in the accuracy of the estimated formation properties are observed. However, this study shows that those results do not imply that the learned value of the CO2 leakage rate exhibits the same behavior. This study also supports the implementation of CO2 sequestration by extending the duration allowed by the reservoir capacity, controlling the injection so that emissions remain in agreement with international standards.

Originality/value

Uncertainty quantification of the reservoir properties is addressed. The nonlinear goal-oriented inverse problem of estimating the leakage rate is known to be very challenging. This study presents a relaxation of the probabilistic design of learning the leakage rate to a Bayesian experimental design of learning the reservoir's geophysical properties.

Details

Engineering Computations, vol. 38 no. 3
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 10 August 2021

Elham Amirizadeh and Reza Boostani

The aim of this study is to propose a deep neural network (DNN) method that uses side information to improve clustering results for big datasets; also, the authors show that…

Abstract

Purpose

The aim of this study is to propose a deep neural network (DNN) method that uses side information to improve clustering results for big datasets; the authors also show that applying this information improves clustering performance and increases the speed of network training convergence.

Design/methodology/approach

In data mining, semisupervised learning is an interesting approach because good performance can be achieved with a small subset of labeled data; one reason is that data labeling is expensive, and semisupervised learning does not need all labels. One type of semisupervised learning is constrained clustering; this type of learning does not use class labels for clustering. Instead, it uses information about some pairs of instances (side information), where these instances may be in the same cluster (must-link [ML]) or in different clusters (cannot-link [CL]). Constrained clustering has been studied extensively; however, few works have focused on constrained clustering for big datasets. In this paper, the authors present a constrained clustering method for big datasets that uses a DNN. The authors inject the constraints (ML and CL) into this DNN to promote clustering performance and call it constrained deep embedded clustering (CDEC). In this manner, an autoencoder is implemented to elicit informative low-dimensional features in the latent space, and the encoder network is then retrained using a proposed Kullback–Leibler divergence objective function, which captures the constraints in order to cluster the projected samples. The proposed CDEC was compared with the adversarial autoencoder, constrained 1-spectral clustering and autoencoder + k-means on the well-known MNIST, Reuters-10k and USPS datasets, and their performance was assessed in terms of clustering accuracy. Empirical results confirmed the statistical superiority of CDEC over the counterparts in terms of clustering accuracy.
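
For orientation, deep embedded clustering methods of this kind typically minimize a KL divergence between soft cluster assignments Q in the latent space and a sharpened target distribution P; a generic form is sketched below, where z_i is the encoder output for sample i and μ_j the j-th cluster centroid. The authors' constrained objective adds ML/CL terms whose exact form is not reproduced here:

```latex
q_{ij} \;=\; \frac{\big(1 + \lVert z_i - \mu_j \rVert^2\big)^{-1}}{\sum_{j'} \big(1 + \lVert z_i - \mu_{j'} \rVert^2\big)^{-1}},
\qquad
L \;=\; D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{i}\sum_{j} p_{ij}\,\log\frac{p_{ij}}{q_{ij}}
```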

Findings

First, this is the first DNN-based constrained clustering method that uses side information to improve clustering performance without using labels on big, high-dimensional datasets. Second, the authors define a formula to inject the side information into the DNN. Third, the proposed method improves clustering performance and network convergence speed.

Originality/value

Few works have focused on constrained clustering for big datasets; also, studies of DNNs for clustering with a specific loss function that simultaneously extracts features and clusters the data are rare. The method improves the performance of big data clustering without using labels, which is important because data labeling is expensive and time-consuming, especially for big datasets.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 4 June 2021

Lixue Zou, Xiwen Liu, Wray Buntine and Yanli Liu

Full text of a document is a rich source of information that can be used to provide meaningful topics. The purpose of this paper is to demonstrate how to use citation context (CC…

Abstract

Purpose

The full text of a document is a rich source of information that can be used to provide meaningful topics. The purpose of this paper is to demonstrate how to use the citation context (CC) in the full text to identify the cited topics and citing topics efficiently and effectively by employing automatic text analysis algorithms.

Design/methodology/approach

The authors present two novel topic models, Citation-Context-LDA (CC-LDA) and Citation-Context-Reference-LDA (CCRef-LDA). CC is leveraged to extract the citing text from the full text, which makes it possible to discover topics accurately. CC-LDA incorporates CC, citing text and their latent relationship, while CCRef-LDA incorporates CC, citing text, their latent relationship and reference information in CC. Collapsed Gibbs sampling is used to achieve an approximate estimation. The capacity of CC-LDA to simultaneously learn cited topics and citing topics together with their links is investigated. Moreover, a topic influence measure based on CC-LDA is proposed and applied to create links between the two levels of topics. In addition, the capacity of CCRef-LDA to discover topic-influential references is also investigated.

Findings

The results indicate that CC-LDA and CCRef-LDA achieve improved or comparable performance in terms of both perplexity and symmetric Kullback–Leibler (sKL) divergence. Moreover, CC-LDA is effective in discovering the cited topics and citing topics with topic influence, and CCRef-LDA is able to find influential references for the cited topics.
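
The symmetric Kullback–Leibler divergence used for evaluation takes, in its standard form,

```latex
\mathrm{sKL}(P, Q) \;=\; D_{\mathrm{KL}}(P \,\|\, Q) + D_{\mathrm{KL}}(Q \,\|\, P) \;=\; \sum_{x}\big(P(x) - Q(x)\big)\,\log\frac{P(x)}{Q(x)}
```

with P and Q the distributions being compared.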

Originality/value

The automatic method provides novel knowledge for cited topics and citing topics discovery. Topic influence learnt by our model can link two-level topics and create a semantic topic network. The method can also use topic specificity as a feature to rank references.

Details

Library Hi Tech, vol. 39 no. 4
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 1 March 1995

L. Pardo, D. Morales and I.J. Taneja

Fisher's amount of information is the best-known parametric measure in the statistical literature. However, not for every family of probability density functions do the well‐known…


Abstract

Fisher's amount of information is the best-known parametric measure in the statistical literature. However, the well-known regularity assumptions do not hold for every family of probability density functions. To avoid this problem, several parametric measures have been proposed on the basis of divergence measures. In this work, parametric measures of information are obtained on the basis of the generalized Jensen difference divergence measures. When the regularity assumptions hold, their relations with Fisher's amount of information are also studied.
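
For context, the basic (non-generalized) forms of the quantities involved are the Jensen difference divergence between two distributions and Fisher's amount of information for a parametric family; the generalized measures studied in the paper extend the former:

```latex
J(P, Q) \;=\; H\!\left(\tfrac{P+Q}{2}\right) - \tfrac{1}{2}\big(H(P) + H(Q)\big),
\qquad
I_X(\theta) \;=\; \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\right]
```

where H denotes Shannon entropy and f(·; θ) the density of the family.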

Details

Kybernetes, vol. 24 no. 2
Type: Research Article
ISSN: 0368-492X


Book part
Publication date: 15 April 2020

Badi H. Baltagi, Georges Bresson and Jean-Michel Etienne

This chapter proposes semiparametric estimation of the relationship between growth rate of GDP per capita, growth rates of physical and human capital, labor as well as other…

Abstract

This chapter proposes semiparametric estimation of the relationship between the growth rate of GDP per capita, the growth rates of physical and human capital, labor as well as other covariates and common trends for a panel of 23 OECD countries observed over the period 1971–2015. The observed differentiated behaviors by country reveal strong heterogeneity. This is the motivation behind using a mixed fixed- and random-coefficients model to estimate this relationship. In particular, this chapter uses a semiparametric specification with random intercept and slope coefficients. Motivated by Lee and Wand (2016), the authors estimate a mean field variational Bayes semiparametric model with random coefficients for this panel of countries. Results reveal nonparametric specifications for the common trends. The use of this flexible methodology may enrich the empirical growth literature by underlining a large diversity of responses across variables and countries.
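
As an illustration only (the chapter's exact specification may differ), a random intercept-and-slope growth equation of the kind described could be written as

```latex
y_{it} \;=\; \beta_0 + u_{0i} + \sum_{k}\big(\beta_k + u_{ki}\big)\,x_{kit} + f(t) + \varepsilon_{it},
\qquad u_i \sim \mathcal{N}(0, \Sigma), \quad \varepsilon_{it} \sim \mathcal{N}(0, \sigma^2_{\varepsilon})
```

where y_it is the growth rate of GDP per capita for country i in year t, x_kit collects the growth rates of physical and human capital, labor and other covariates, f(·) is a nonparametric common trend (e.g. a penalized spline), and the model is fitted by mean field variational Bayes.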

Open Access
Article
Publication date: 31 December 2018

Khuram Ali Khan, Tasadduq Niaz, Đilda Pečarić and Josip Pečarić

In this work, we estimate different entropies, such as Shannon entropy, Rényi divergence and Csiszár divergence, by using Jensen-type functionals. The Zipf–Mandelbrot law and…

Abstract

In this work, we estimate different entropies, such as Shannon entropy, Rényi divergence and Csiszár divergence, by using Jensen-type functionals. The Zipf–Mandelbrot law and the hybrid Zipf–Mandelbrot law are used to estimate the Shannon entropy. The Abel–Gontscharoff Green functions and Fink's identity are used to construct new inequalities and generalize them for m-convex functions.
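
For reference, the Zipf–Mandelbrot law and the Shannon entropy being estimated take the standard forms

```latex
p_i \;=\; \frac{(i + q)^{-s}}{\sum_{j=1}^{N} (j + q)^{-s}}, \quad i = 1,\dots,N,
\qquad
H(p) \;=\; -\sum_{i=1}^{N} p_i \log p_i
```

with parameters q ≥ 0 and s > 0.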

Details

Arab Journal of Mathematical Sciences, vol. 26 no. 1/2
Type: Research Article
ISSN: 1319-5166


Open Access
Article
Publication date: 7 October 2021

Jianran Liu and Wen Ji

In recent years, with the increase in computing power, artificial intelligence can gradually be regarded as intelligent agents that interact with humans, and this interactive network…

Abstract

Purpose

In recent years, with the increase in computing power, artificial intelligence can gradually be regarded as intelligent agents that interact with humans, and this interactive network has become increasingly complex. Therefore, it is necessary to model and analyze this complex interactive network. This paper aims to model and demonstrate the evolution of crowd intelligence using visual complex networks.

Design/methodology/approach

This paper uses a complex network to model and observe the collaborative evolution behavior and self-organizing system of crowd intelligence.

Findings

The authors use a complex network to construct the cooperative behavior and self-organizing system in crowd intelligence. The evolution mode of each node is determined by constructing the interactive relationships between nodes, and the global evolution state is observed through a force layout.
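
A minimal sketch of this kind of force-directed visualization is shown below; the graph, the agent names and the edge weights are hypothetical, and the layout call stands in for the paper's own force layout:

```python
# Illustrative sketch (not the authors' system): visualising an agent
# interaction network with a force-directed layout.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical interaction graph: nodes are intelligent agents,
# edges are observed interactions (weights = interaction strength).
G = nx.Graph()
G.add_weighted_edges_from([
    ("agent_a", "agent_b", 3.0),
    ("agent_b", "agent_c", 1.0),
    ("agent_c", "agent_d", 2.0),
    ("agent_a", "agent_d", 0.5),
])

pos = nx.spring_layout(G, weight="weight", seed=42)   # force-directed (Fruchterman-Reingold) layout
nx.draw_networkx(G, pos, node_color="lightsteelblue")
plt.axis("off")
plt.show()
```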

Practical implications

The simulation results show that the state evolution map can effectively simulate the distribution, interaction and evolution of crowd intelligence through the force layout and the intelligent agents' link mode that the authors propose.

Originality/value

Based on the complex network, this paper constructs the interactive behavior and organization system in crowd intelligence and visualizes the evolution process.

Details

International Journal of Crowd Science, vol. 5 no. 3
Type: Research Article
ISSN: 2398-7294

