Search results

1 – 10 of over 23,000
Article
Publication date: 5 June 2017

Eugene Yujun Fu, Hong Va Leong, Grace Ngai and Stephen C.F. Chan


Abstract

Purpose

Social signal processing under affective computing aims at recognizing and extracting useful patterns of human social interaction. Fighting is a common social interaction in real life, and a fight detection system finds wide application. This paper aims to detect fights in a natural and low-cost manner.

Design/methodology/approach

Research on fight detection is often based on visual features, demanding substantial computation and good video quality. In this paper, the authors propose an approach to detect fight events through motion analysis. Most existing works evaluate their algorithms on public data sets containing simulated fights acted out by actors. To evaluate real fights, the authors collected videos of real fights to form a data set. On the two types of data sets, the authors evaluated the performance of their motion signal analysis algorithm and compared it with the state-of-the-art approach based on MoSIFT descriptors with a Bag-of-Words mechanism, and with basic motion signal analysis with Bag-of-Words.
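
The abstract does not detail the motion features used; as a rough illustration only, the following Python sketch reduces a clip to a one-dimensional motion-magnitude signal by frame differencing and summarizes it with simple statistics that a classifier could consume. The feature choices here are assumptions, not the authors' method.

```python
import numpy as np

def motion_signal(frames):
    """Collapse a clip into a 1-D motion signal: mean absolute
    intensity change between consecutive grayscale frames."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def motion_features(signal):
    """Summary statistics of the motion signal; a hypothetical
    feature set, not the paper's."""
    return np.array([
        signal.mean(),   # overall activity level
        signal.std(),    # burstiness
        signal.max(),    # strongest motion peak
        (signal > signal.mean() + 2 * signal.std()).sum(),  # peak count
    ])

# Toy usage: 60 random 64x64 "frames" stand in for a video clip.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(60, 64, 64))
print(motion_features(motion_signal(clip)))
```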

Findings

The experimental results indicate that the proposed approach accurately detects fights in real scenarios and performs better than the MoSIFT approach.

Originality/value

By collecting and annotating real surveillance videos containing real fight events and augmenting them with well-known data sets, the authors proposed, implemented and evaluated a low-computation approach and compared it with the state-of-the-art approach. They uncovered fundamental differences between real and simulated fights and initiated a new study on discriminating real from simulated fight events, with very good performance.

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 12 December 2017

Miguel Torres-Ruiz, Marco Moreno-Ibarra, Wadee Alhalabi, Rolando Quintero and Giovanni Guzmán


Abstract

Purpose

To date, the simulation of pedestrian behavior is used to support the design and analysis of urban infrastructure and public facilities. The purpose of this paper is to present a microscopic model that describes pedestrian behavior in a two-dimensional space. It is based on multi-agent systems and cellular automata theory. The concept of layered-intelligent terrain from the video game industry is reused, and concepts such as tracing, evasion and rejection effects related to interactive pedestrian behavior are incorporated. In a simulation scenario, an agent represents a pedestrian with homogeneous physical characteristics such as walking speed and height. The agents move through a discrete space formed by a lattice of hexagonal cells, each of which can hold at most one agent at a time (a minimal sketch of such a lattice update appears below). The model was validated with a test composed of 17 real data sets of unidirectional pedestrian flow. Each data set was extracted from laboratory-controlled scenarios with up to 400 people walking through a corridor whose entrance- and exit-door widths varied from one experiment to another. Moreover, each data set contained groups of coordinates composing pedestrian trajectories. The scenarios were replicated and simulated with the proposed model, yielding 17 simulated data sets. In addition, a measurement methodology based on Voronoi diagrams was used to compute the velocity, density and specific flow of pedestrians, from which a time-series graphic and a set of heat maps were built for each of the real and simulated data sets.
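
The abstract does not specify the update rules, so the sketch below is an illustration under assumed rules, not the authors' model. It shows the mechanics the abstract describes: agents on a hexagonal lattice in axial coordinates, at most one agent per cell, each stepping toward an exit.

```python
# The six neighbors of a hexagonal cell in axial coordinates.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_distance(a, b):
    """Distance in cells between two hexes in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def step(agents, exit_cell):
    """One update sweep: each agent in turn moves to a free neighboring
    cell strictly closer to the exit (at most one agent per cell)."""
    occupied = set(agents)
    new_agents = []
    for cell in agents:
        free = [(cell[0] + dq, cell[1] + dr) for dq, dr in HEX_DIRS
                if (cell[0] + dq, cell[1] + dr) not in occupied]
        if free:
            best = min(free, key=lambda c: hex_distance(c, exit_cell))
            if hex_distance(best, exit_cell) < hex_distance(cell, exit_cell):
                occupied.discard(cell)
                occupied.add(best)
                cell = best
        new_agents.append(cell)
    return new_agents

# Toy usage: five agents walking toward an exit at the origin.
agents = [(5, 0), (4, 2), (6, -1), (3, 3), (5, 1)]
for _ in range(10):
    agents = step(agents, exit_cell=(0, 0))
print(agents)
```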

Design/methodology/approach

The approach combines a multi-agent system with cellular automata theory. The obtained results were compared with other studies, and a statistical analysis based on similarity measurement is presented; a sketch of the Voronoi-based measurement follows.
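
The Voronoi-based measurement mentioned in the Purpose section can be illustrated generically as follows: a pedestrian's local density is taken as the inverse area of their Voronoi cell. This SciPy sketch is an assumption-laden illustration of the idea, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(poly):
    """Shoelace formula; vertices are sorted by angle around the
    centroid first, which is valid because Voronoi cells are convex."""
    c = poly.mean(axis=0)
    order = np.argsort(np.arctan2(poly[:, 1] - c[1], poly[:, 0] - c[0]))
    x, y = poly[order, 0], poly[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def voronoi_densities(positions):
    """Per-pedestrian density as 1 / (Voronoi cell area); pedestrians
    on the boundary (unbounded cells) are assigned NaN."""
    vor = Voronoi(positions)
    out = np.full(len(positions), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if region and -1 not in region:      # keep finite cells only
            area = polygon_area(vor.vertices[region])
            if area > 0:
                out[i] = 1.0 / area
    return out

# Toy usage: 50 pedestrians in a 10 m x 4 m corridor section.
rng = np.random.default_rng(1)
pts = rng.uniform([0, 0], [10, 4], size=(50, 2))
print(np.nanmean(voronoi_densities(pts)), "ped/m^2 over interior cells")
```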

Findings

A microscopic mobility model that describes pedestrian behavior in a two-dimensional space is presented. It is based on multi-agent systems and cellular automata theory. The concept of layered-intelligent terrain from the video game industry is reused, and concepts such as tracing, evasion and rejection effects related to interactive pedestrian behavior are incorporated. On average, the simulated data sets match the real data sets with 82 per cent similarity in density and 62 per cent in velocity. It was observed that the relation between velocity and density in the real scenarios could not be replicated.

Research limitations/implications

The main limitations concern the simulated speeds. Although the obtained results behave similarly to reality, more variables need to be introduced into the model to improve its precision and calibration. Another limitation is the dimensionality of the simulation, which is currently two-dimensional; refining the cell resolution so that a pedestrian occupies several cells at once, and adding a third dimension to the terrain, remain challenges for future work.

Practical implications

In total, 17 data sets were generated as a case study. They contain information on speed, trajectories, and start and end points. The data sets were used to calibrate the model and analyze pedestrian behavior. Geospatial data were used to simulate the public infrastructure in which pedestrians navigate, taking the start and end points into account.

Social implications

The social impact is directly related to analyzing pedestrian behavior to identify tendencies, trajectories and other features that help improve public facilities. The results could be used to generate policies oriented toward greater awareness in public infrastructure development.

Originality/value

The general methodology is the main value of this work. Several approaches were designed and implemented for analyzing pedestrian behavior, and all the methods were implemented as a plug-in for Quantum GIS. The analysis is described with heat maps and statistical approaches, and the obtained results focus on density, speed and the relationship between these features.

Details

Journal of Science and Technology Policy Management, vol. 9 no. 2
Type: Research Article
ISSN: 2053-4620

Keywords

Article
Publication date: 9 October 2019

Rokas Jurevičius and Virginijus Marcinkevičius


Abstract

Purpose

The purpose of this paper is to present a new data set of aerial imagery from a robotics simulator (AIR). The AIR data set aims to provide a starting point for localization system development and to become a standard benchmark for the accuracy comparison of map-based localization algorithms, visual odometry and SLAM for high-altitude flights.

Design/methodology/approach

The presented data set contains over 100,000 aerial images captured in the Gazebo robotics simulator using orthophoto maps as the ground plane. Flights with three different trajectories were performed over urban and forest maps at different altitudes, totaling over 33 kilometers of flight distance.

Findings

A review of previous research shows that the presented data set is the largest currently available public data set with downward-facing camera imagery.

Originality/value

This paper addresses the problem of missing publicly available data sets for high-altitude (100–3,000 meters) UAV flights; current state-of-the-art research on map-based localization systems for UAVs depends on real-life test flights and custom simulated data sets for accuracy evaluation of the algorithms. The presented data set fills this gap and aims to help researchers improve and benchmark new algorithms for high-altitude flights.

Details

International Journal of Intelligent Unmanned Systems, vol. 8 no. 3
Type: Research Article
ISSN: 2049-6427

Keywords

Book part
Publication date: 11 August 2016

Kousik Guhathakurta, Basabi Bhattacharya and A. Roy Chowdhury


Abstract

It has long been argued that empirical return distributions do not follow the log-normal distribution on which many celebrated results in finance, including the Black–Scholes option-pricing model, are based. Borland (2002) succeeds in obtaining alternative closed-form solutions for European options based on the Tsallis distribution, which allows for statistical feedback as a model of the underlying stock returns. Motivated by this, we simulate two distinct time series based on initial data from NIFTY daily close values, one based on a Gaussian return distribution and the other on a non-Gaussian distribution. Using techniques of non-linear dynamics, we examine the underlying dynamic characteristics of both simulated time series and compare them with the characteristics of the actual data. Our findings give a definite edge to the non-Gaussian model over the Gaussian one.
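
As a hedged illustration of the comparison described above (not the authors' code, and with invented parameters): for 1 < q < 3 the Tsallis q-Gaussian coincides with a rescaled Student-t with ν = (3 − q)/(q − 1) degrees of freedom, so heavy-tailed returns can be simulated alongside Gaussian ones and compared, for example, on excess kurtosis.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
n, mu, sigma = 2500, 0.0005, 0.015   # ~10 years of daily returns (assumed values)

# Gaussian model: i.i.d. normal log-returns.
gauss_returns = rng.normal(mu, sigma, n)

# Non-Gaussian model: q-Gaussian noise. For 1 < q < 3 a q-Gaussian is a
# rescaled Student-t with nu = (3 - q) / (q - 1) degrees of freedom.
q = 1.3
nu = (3 - q) / (q - 1)
t_noise = rng.standard_t(nu, n)
tsallis_returns = mu + sigma * t_noise / np.sqrt(nu / (nu - 2))  # unit-variance scaling

for name, r in [("Gaussian", gauss_returns), ("q-Gaussian", tsallis_returns)]:
    price = 100 * np.exp(np.cumsum(r))   # simulated index level from log-returns
    print(f"{name}: excess kurtosis = {kurtosis(r):.2f}, final level = {price[-1]:.1f}")
```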

Details

The Spread of Financial Sophistication through Emerging Markets Worldwide
Type: Book
ISBN: 978-1-78635-155-5

Keywords

Book part
Publication date: 2 November 2009

Adrian R. Fleissig and Gerald A. Whitney


Abstract

A new nonparametric procedure is developed to evaluate the significance of violations of weak separability. The procedure correctly detects weak separability with high probability using simulated data whose violations of weak separability are caused by adding measurement error. Results are not very sensitive when the amount of measurement error is misspecified by the researcher. The methodology also correctly rejects weak separability for nonseparable simulated data. We fail to reject weak separability for a monetary and consumption data set that has violations of revealed preference, which suggests that measurement error may be the source of the observed violations.
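
The abstract builds on revealed-preference tests. As a generic illustration of how such violations are detected (a standard GARP check, not the authors' new significance procedure), consider the following sketch:

```python
import numpy as np

def garp_violations(P, X):
    """Find violations of the Generalized Axiom of Revealed Preference.
    P, X: (T, n) arrays of prices and chosen quantity bundles.
    (i, j) is a violation if x_i is (transitively) revealed preferred
    to x_j, yet at prices p_j the bundle x_i was strictly cheaper than
    the bundle x_j actually chosen."""
    T = len(X)
    cost = P @ X.T                       # cost[i, j] = p_i . x_j
    own = np.diag(cost)                  # own[i] = p_i . x_i
    R = own[:, None] >= cost             # direct revealed preference
    for k in range(T):                   # Warshall transitive closure
        R = R | (R[:, k:k+1] & R[k:k+1, :])
    return [(i, j) for i in range(T) for j in range(T)
            if i != j and R[i, j] and own[j] > cost[j, i]]

# Toy usage: two choices that form a preference cycle.
P = np.array([[1.0, 1.0], [1.0, 2.0]])
X = np.array([[4.0, 0.0], [0.0, 3.0]])
print(garp_violations(P, X))             # -> [(0, 1), (1, 0)]
```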

Details

Measurement Error: Consequences, Applications and Solutions
Type: Book
ISBN: 978-1-84855-902-8


Details

Travel Survey Methods
Type: Book
ISBN: 978-0-08-044662-2

Book part
Publication date: 23 November 2011

Gayaneh Kyureghian, Oral Capps and Rodolfo M. Nayga


Abstract

The objective of this research is to examine, validate and recommend techniques for handling missingness in observational data. We use a rich observational data set, the Nielsen HomeScan data set, which lets us combine the strengths usually reserved for simulated data (large numbers of observations, data sets and variables, and elements of “design”) with an observational setting. We created random 20% and 50% uniform missingness in our data sets and employed several widely used single-imputation methods, such as mean, regression and stochastic regression imputation, as well as multiple-imputation methods, to fill in the data gaps. We compared these methods by measuring the error in predicting the missing values and by examining the parameter estimates from the subsequent regression analysis using the imputed values. We also compared coverage, or the percentage of intervals that covered the true parameter, in both cases. Based on our results, single regression (conditional mean) imputation provided the best predictions of the missing price values, with 28.34 and 28.59 mean absolute percent errors in the 20% and 50% missingness settings, respectively. Imputation from the conditional distribution had the best rate of coverage. The parameter estimates based on data sets imputed by the conditional mean method were consistently unbiased and had the smallest standard deviations. The multiple-imputation methods had the best coverage of both the parameter estimates and the predictions of the dependent variable.
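
As a hedged, self-contained illustration of the comparison described above (synthetic data, not the Nielsen HomeScan data, and only the 20% missingness setting): mean imputation versus conditional-mean (regression) imputation, scored by mean absolute percent error.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
income = rng.normal(60, 15, n)                      # hypothetical covariate
price = 2.0 + 0.05 * income + rng.normal(0, 1, n)   # "price" depends on income

miss = rng.random(n) < 0.20                         # 20% uniform missingness
obs = ~miss

# Single imputation 1: unconditional mean of the observed prices.
mean_imp = np.full(miss.sum(), price[obs].mean())

# Single imputation 2: conditional mean via OLS of price on income.
b1, b0 = np.polyfit(income[obs], price[obs], 1)     # slope, intercept
reg_imp = b0 + b1 * income[miss]

def mape(truth, imputed):
    return 100 * np.mean(np.abs((truth - imputed) / truth))

print(f"mean imputation MAPE:       {mape(price[miss], mean_imp):.2f}%")
print(f"regression imputation MAPE: {mape(price[miss], reg_imp):.2f}%")
```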

Details

Missing Data Methods: Cross-sectional Methods and Applications
Type: Book
ISBN: 978-1-78052-525-9

Keywords

Book part
Publication date: 21 December 2010

Chandra R. Bhat, Cristiano Varin and Nazneen Ferdous


Abstract

This chapter compares the performance of the maximum simulated likelihood (MSL) approach with the composite marginal likelihood (CML) approach in multivariate ordered-response situations. The ability of the two approaches to recover model parameters in simulated data sets is examined, as is the efficiency of the estimated parameters and the computational cost. Overall, the simulation results demonstrate the ability of the CML approach to recover the parameters very well in a 5–6-dimensional ordered-response choice model context. In addition, the CML approach recovers parameters as well as the MSL estimation approach in the simulation contexts used in this study, while doing so at a substantially reduced computational cost. Further, any reduction in the efficiency of the CML approach relative to the MSL approach ranges from nonexistent to small. Taken together with its conceptual and implementation simplicity, the CML approach appears to be a promising approach for the estimation of not only the multivariate ordered-response model considered here, but also other analytically intractable econometric models.
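
For readers unfamiliar with the CML idea, here is a minimal sketch (generic and simplified, not the chapter's implementation, and omitting the MSL side): a pairwise likelihood for a bivariate ordered probit with known thresholds, estimating the latent correlation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize_scalar

CUTS = np.array([-np.inf, -0.5, 0.5, np.inf])   # thresholds: 3 ordered categories

def cell_prob(i, j, rho):
    """P(y1 = i, y2 = j) for a standard bivariate normal latent with corr rho."""
    mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    F = lambda a, b: mvn.cdf(np.clip([a, b], -8, 8))   # clip +-inf limits
    return (F(CUTS[i + 1], CUTS[j + 1]) - F(CUTS[i], CUTS[j + 1])
            - F(CUTS[i + 1], CUTS[j]) + F(CUTS[i], CUTS[j]))

def neg_pairwise_loglik(rho, y1, y2):
    counts = np.zeros((3, 3))
    np.add.at(counts, (y1, y2), 1)                     # contingency table
    return -sum(c * np.log(cell_prob(i, j, rho))
                for (i, j), c in np.ndenumerate(counts) if c > 0)

# Toy data: discretize a correlated latent normal sample (true rho = 0.6).
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=300)
y1, y2 = np.digitize(z[:, 0], CUTS[1:-1]), np.digitize(z[:, 1], CUTS[1:-1])

res = minimize_scalar(neg_pairwise_loglik, args=(y1, y2),
                      bounds=(-0.95, 0.95), method="bounded")
print(f"CML estimate of the latent correlation: {res.x:.3f}")   # ~0.6
```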

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Article
Publication date: 27 August 2021

Rui Xiang, Colin Jones, Rogemar Mamon and Marierose Chavez


Abstract

Purpose

This paper aims to put forward and compare two accessible approaches to model and forecast spot prices in the fishing industry. The first modelling approach is a Markov-switching model (MSM) in which a Markov chain captures different economic regimes and a stochastic convenience yield is embedded in the spot price. The second approach is based on a multi-factor model (MFM) featuring three correlated stochastic factors.
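
As a rough, generic sketch of the first ingredient (a two-regime Markov-switching spot price, with all parameter values invented and the stochastic convenience-yield component omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented two-regime parameters: (drift, volatility) per regime.
PARAMS = [(0.10, 0.15), (-0.05, 0.40)]          # calm vs. turbulent regime
TRANS = np.array([[0.98, 0.02], [0.05, 0.95]])  # weekly transition matrix
dt, n = 1 / 52, 260                             # five years of weekly prices

regime, log_s = 0, np.log(40.0)                 # start: calm regime, level 40
path = []
for _ in range(n):
    mu, sigma = PARAMS[regime]
    log_s += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    path.append(np.exp(log_s))
    regime = rng.choice(2, p=TRANS[regime])     # Markov-chain regime switch

print(f"final simulated spot price: {path[-1]:.2f}")
```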

Design/methodology/approach

The two proposed approaches are analysed in terms of parameter-estimation accuracy, information criteria and prediction performance. For the MSM’s calibration, the quasi-log-likelihood method was applied directly, while for the MFM’s parameter estimation, this paper designs an enhanced multivariate maximum likelihood method aided by moment matching. The numerical experiments make use of both simulated data and actual data compiled by Fish Pool ASA. Data on Fish Pool forwards and Norwegian T-bill yields were additionally used in the MFM’s implementation.

Findings

Using simulated data sets, the MSM estimation gives more accurate results than the MFM estimation in terms of the ℓ2 norm between the “true” and computed parameter estimates, with significantly lower standard errors. With actual data sets used to evaluate the forecast values, both approaches perform similarly under the error analysis. Under metrics balancing goodness of fit against model complexity, however, the MFM outperforms the MSM.

Originality/value

With the aid of the simulated and observed data sets examined in this paper, insights are gained into the appropriateness, benefits and weaknesses of the two proposed approaches. The modelling and estimation methodologies serve as a prelude to reliable frameworks that will support the pricing and risk management of derivative contracts on fish-price evolution, creating price-risk transfer mechanisms from the fisheries/aquaculture sector to the financial industry.

Article
Publication date: 7 February 2019

Youngjin Lee


Abstract

Purpose

The purpose of this paper is to investigate an efficient means of estimating the ability of students solving problems in the computer-based learning environment.

Design/methodology/approach

Item response theory (IRT) and TrueSkill were applied to simulated and real problem-solving data to estimate the ability of students solving homework problems in a massive open online course (MOOC). Based on the estimated abilities, data mining models predicting whether students can correctly solve homework and quiz problems in the MOOC were developed. The predictive power of the IRT- and TrueSkill-based data mining models was compared in terms of the area under the receiver operating characteristic curve (AUC).
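
As a hedged illustration of the TrueSkill side only (assuming the open-source Python trueskill package; the IRT side and the paper's exact setup are omitted), each attempt can be scored as a "match" between a student and a problem:

```python
from trueskill import Rating, rate_1vs1   # third-party: pip install trueskill

student, problem = Rating(), Rating()     # default mu = 25, sigma = 25/3

# Hypothetical attempt log: True = the student solved the problem.
attempts = [True, False, True, True, False, True]

for solved in attempts:
    if solved:                            # student "beats" the problem
        student, problem = rate_1vs1(student, problem)
    else:                                 # problem "beats" the student
        problem, student = rate_1vs1(problem, student)

# mu is the running ability estimate; sigma is its remaining uncertainty,
# which shrinks with every observed attempt.
print(f"ability: mu = {student.mu:.2f}, sigma = {student.sigma:.2f}")
```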

Findings

The correlation between students’ abilities estimated from IRT and TrueSkill was strong. In addition, the IRT- and TrueSkill-based data mining models showed comparable predictive power when the data included a large number of students. While IRT failed to estimate students’ abilities and could not predict their problem-solving performance when the data included a small number of students, TrueSkill did not experience such problems.

Originality/value

Estimating students’ ability is critical to determine the most appropriate time for providing instructional scaffolding in the computer-based learning environment. The findings of this study suggest that TrueSkill can be an efficient means for estimating the ability of students solving problems in the computer-based learning environment regardless of the number of students.

Details

Information Discovery and Delivery, vol. 47 no. 2
Type: Research Article
ISSN: 2398-6247

Keywords
