Search results
1 – 10 of over 1000
Shi-quan Jiang, Si-feng Liu, Zhi-geng Fang and Zhong-xia Liu
Abstract
Purpose
The purpose of this paper is to study distance measuring and sorting method of general grey number.
Design/methodology/approach
First, the concept of the general grey number, based on grey system theory, is given. Second, from the perspective of the kernel and the degree of greyness of a general grey number, a method for measuring the distance between general grey numbers is given, together with its properties. At the same time, the concepts of the kernel expectation and the kernel variance of the general grey number are proposed.
Findings
A method for measuring the distance between general grey numbers and for sorting them is thereby established. Thus, the difficult problem of ordering general grey numbers has been solved to a certain degree.
Research limitations/implications
The method presented in this paper can be used to integrate information from different sources. The distance-measuring and sorting method for general grey numbers could also be extended to grey algebraic equations, grey differential equations and grey matrices that contain general grey numbers.
Originality/value
The concepts of the kernel expectation and the kernel variance of the general grey number are proposed for the first time in this paper; novel sorting rules for general grey numbers are also constructed.
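The kernel-and-greyness machinery can be sketched in code. The definitions below are illustrative stand-ins, not the paper's exact formulas: the kernel is taken as the interval midpoint, the degree of greyness as the interval width normalised by a background domain, and the distance and sorting rule simply combine the two.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GreyNumber:
    """Interval grey number [lower, upper] on a background domain [dom_lo, dom_hi]."""
    lower: float
    upper: float
    dom_lo: float = 0.0
    dom_hi: float = 100.0

    @property
    def kernel(self) -> float:
        # Kernel: the most representative crisp value; midpoint for a plain interval.
        return 0.5 * (self.lower + self.upper)

    @property
    def greyness(self) -> float:
        # Degree of greyness: interval width normalised by the background domain.
        return (self.upper - self.lower) / (self.dom_hi - self.dom_lo)

def grey_distance(a: GreyNumber, b: GreyNumber) -> float:
    # Illustrative distance mixing kernel difference and greyness difference.
    return abs(a.kernel - b.kernel) + abs(a.greyness - b.greyness)

def sort_grey(numbers):
    # Rank by kernel first; break ties with smaller greyness (less uncertainty wins).
    return sorted(numbers, key=lambda g: (g.kernel, g.greyness))
```

For example, sort_grey([GreyNumber(10, 20), GreyNumber(12, 14), GreyNumber(5, 25)]) puts GreyNumber(12, 14) first (smallest kernel, 13) and orders the two kernel-15 numbers by greyness.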
David P. Brown and Jens Carsten Jackwerth
Abstract
The pricing kernel puzzle of Jackwerth (2000) concerns the fact that the empirical pricing kernel implied in S&P 500 index options and index returns is not monotonically decreasing in wealth as standard economic theory would suggest. Thus, those options are currently priced in a way such that any risk-averse investor would increase his/her utility by trading in them. We provide a representative agent model where volatility is a function of a second momentum state variable. This model is capable of generating the empirical patterns in the pricing kernel, albeit only for parameter constellations that are not typically observed in the real world.
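Up to a discount factor, the empirical pricing kernel is the ratio of the risk-neutral density (implied by option prices) to the physical density of index returns. The numpy sketch below uses made-up densities, not S&P 500 estimates, to reproduce the qualitative puzzle: extra risk-neutral mass near the money makes the ratio locally increasing rather than monotonically decreasing in wealth.

```python
import numpy as np

def lognorm_pdf(s, mu, sigma):
    # Density of exp(N(mu, sigma^2)) evaluated at s > 0.
    return np.exp(-(np.log(s) - mu) ** 2 / (2 * sigma ** 2)) / (s * sigma * np.sqrt(2 * np.pi))

s = np.linspace(0.5, 2.0, 400)            # gross index return (wealth)
p = lognorm_pdf(s, 0.05, 0.20)            # physical (real-world) return density
# Risk-neutral density: shifted left (risk premium) plus extra mass near the
# money -- a stylised stand-in for what index-option prices imply.
q = 0.7 * lognorm_pdf(s, 0.00, 0.20) + 0.3 * lognorm_pdf(s, 0.02, 0.07)

m = q / p                                 # pricing kernel, up to a discount factor
locally_increasing = bool(np.any(np.diff(m) > 0))   # the puzzle: not monotone
```

With a monotone likelihood-ratio pair of densities the flag would be False; the near-the-money hump is exactly what makes it True here.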
Abstract
Purpose
The paper aims to compare and clarify the differences between two well-known spectral decomposition techniques: the Wiener–Chaos expansion (WCE) and the Wiener–Hermite expansion (WHE). The details of the two decompositions are outlined. The difficulties that arise when using the two techniques are also mentioned, along with the convergence orders. The reader can also find a collection of references tracing the two decompositions to their origins. Geometric Brownian motion is considered as an example of an important process with an exact solution for the sake of comparison. The two decompositions are found practical in analysing SDEs. The WCE is, in general, simpler, while the WHE is more efficient, being the limit of the WCE as the number of random variables goes to infinity. Burgers turbulence is considered as a nonlinear example, and the WHE is shown to be more efficient in detecting the turbulence. In general, the WHE is more efficient, especially for nonlinear and/or non-Gaussian processes.
Design/methodology/approach
The paper outlines the two techniques and reviews the literature on the WCE and the WHE. Linear and nonlinear processes are analysed to compare the two techniques and their convergence.
Findings
The paper shows that both decompositions are practical for solving stochastic differential equations. The WCE is found to be simpler, while the WHE is the limit of the WCE as the number of random variables goes to infinity. The WHE is more efficient, especially for nonlinear problems.
Research limitations/implications
Applicable to SDEs with square-integrable processes and coefficients satisfying Lipschitz conditions.
Originality/value
This paper fulfils a comparison required by the researchers in the stochastic analysis area. It also introduces a simple efficient technique to model the flow turbulence in the physical domain.
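The geometric Brownian motion comparison can be reproduced in miniature. With a single Gaussian variable xi and W_t = sqrt(t)*xi, the exact solution S_t = S0*exp((mu - sigma^2/2)t + sigma*sqrt(t)*xi) has the one-dimensional chaos expansion exp(a*xi) = e^(a^2/2) * sum_n (a^n/n!) He_n(xi) in probabilists' Hermite polynomials. The sketch below (parameter values are arbitrary) truncates the expansion and checks that the error falls with the truncation order.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n
from math import exp, factorial, sqrt

S0, mu, sigma, t = 1.0, 0.05, 0.3, 1.0
a = sigma * sqrt(t)                       # W_t = sqrt(t) * xi with xi ~ N(0, 1)
drift = S0 * exp((mu - 0.5 * sigma ** 2) * t)

rng = np.random.default_rng(0)
xi = rng.standard_normal(20000)
exact = drift * np.exp(a * xi)            # exact GBM solution S_t

def chaos_approx(xi, order):
    # Truncated expansion: exp(a*xi) = e^{a^2/2} * sum_n (a^n / n!) He_n(xi)
    coeffs = [exp(a ** 2 / 2) * a ** n / factorial(n) for n in range(order + 1)]
    return drift * hermeval(xi, coeffs)

err = {N: np.mean((chaos_approx(xi, N) - exact) ** 2) for N in (1, 2, 4, 8)}
```

The mean-squared error drops by orders of magnitude as the truncation order grows, illustrating the spectral convergence both expansions enjoy in the linear Gaussian case.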
Jaya Mamta Prosad, Sujata Kapoor and Jhumur Sengupta
Abstract
Purpose
The purpose of this paper is to capture the presence and impact of optimism in the Indian equity market.
Design/methodology/approach
The data set comprises the daily values of the Nifty 50 index, index options and the Treasury-bill index for a period of five years (2006-2011). The focus of this paper is two-pronged. It first investigates the presence of optimism (pessimism) using the pricing kernel technique suggested by Barone-Adesi et al. (2012). Second, it analyzes the relationship of this bias with stock market indicators such as the risk premium, market return and volatility using time-series regression.
Findings
The findings indicate that the Indian equity market was predominantly pessimistic over the period 2006 to 2011. The interaction of this bias with market indicators also unveils some interesting insights. The study shows that high past volatility can lead to pessimism in the Indian equity market and vice versa. It further shows that when investors are rational, the risk-return relationship is positive, while it tends to be negative when they are irrational. The impact of investors' irrationality on asset valuation has also been documented by Brown and Cliff (2005).
Research limitations/implications
The findings of the paper have significant implications for fund managers and asset management companies. It is recommended that they should try to identify behavioral biases in their clients before designing their portfolios.
Originality/value
This study is one of the very few attempts to capture the presence and impact of optimism (pessimism) in the Indian equity market.
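The second step of the design, regressing the sentiment bias on lagged market indicators, can be sketched with synthetic data. The series and the 0.6 loading below are invented for illustration, not the Nifty results.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 250                                     # roughly one year of daily data
volatility = 0.02 * np.abs(rng.standard_normal(T))      # stand-in volatility series
# Assumed data-generating process: today's pessimism rises with yesterday's
# volatility (the 0.6 loading is invented for illustration).
pessimism = 0.6 * np.roll(volatility, 1) + 0.005 * rng.standard_normal(T)
y, x = pessimism[1:], volatility[:-1]       # drop t=0 to align bias with lagged vol

X = np.column_stack([np.ones_like(x), x])   # intercept + lagged volatility
(beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)
```

The estimated slope beta1 recovers the assumed positive volatility-to-pessimism link, mirroring the paper's finding that high past volatility precedes pessimism.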
Seng Bee Ng, Ze Yee Pong, Kian Aun Chang, Yun Ping Neo, Lye Yee Chew, Hock Eng Khoo and Kin Weng Kong
Abstract
Purpose
Mango seed, an agricultural food (agri-food) waste obtained during mango processing, is generally discarded and causes environmental stress. In the present study, mango kernel was processed into flour and incorporated into a macaron formulation. This study aimed to investigate the antioxidant potential, physicochemical properties and sensory qualities of macarons after substitution of almond flour (AF) with 0% (control, MC-0), 20% (MC-20), 40% (MC-40), 60% (MC-60), 80% (MC-80) and 100% (MC-100) (w/w) mango kernel flour (MKF).
Design/methodology/approach
Sensory evaluation was conducted using a nine-point hedonic scale (n = 90 young adults) to evaluate the acceptance towards MC-0 (control), MC-20, MC-40, MC-60, MC-80 and MC-100 macaron shells (without mango-flavoured ganache). The most preferred formulated and control macarons were subjected to proximate analyses according to the methods of the Association of Official Analytical Chemists for moisture, ash and protein, and of the American Oil Chemists' Society for fat. Carbohydrate content was estimated by difference. Physical analyses such as colour, pH and water activity (aw) were performed using various instrumental techniques. Antioxidant activity of all macaron shell formulations was assessed using the 2,2-diphenyl-1-picrylhydrazyl radical scavenging activity (DPPH-RSA) assay. Sensory evaluation was repeated using a five-point hedonic scale to determine the acceptance of the most preferred macaron (with mango-flavoured ganache) among consumer panels of all age groups (n = 80).
Findings
The first sensory acceptance test revealed that the macaron shell with 40% MKF substitution (MC-40) was the most preferred formulation among young adults aged 18–35 years. Moisture and ash contents of MC-0 and MC-40 were found to be similar (p > 0.05), while significant differences (p < 0.05) were observed for the fat and protein contents. Antioxidant activity increased proportionally with increasing MKF substitution and exhibited significant improvement (p < 0.05) in MC-40. Approximately 93% of the panellists expressed their liking towards MC-40 in the second sensory acceptance test. Overall, this study demonstrated that 40% MKF substitution in macarons improved their antioxidant performance without compromising consumers' acceptance.
Originality/value
This innovative research features the incorporation of MKF in the development of an economical and healthier macaron. The results constitute industrially relevant findings on its application feasibility and nutritional properties. This research contributes knowledge to the existing literature and helps food manufacturers develop value-added products that meet the needs and expectations of interested parties, including consumers, governmental organisations, health-conscious advocates and healthcare providers.
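Two of the calculations in the design are simple formulas: carbohydrate estimated by difference from the proximate fractions, and DPPH radical scavenging activity from absorbance readings (commonly taken at 517 nm). The readings below are hypothetical, not the study's data.

```python
def carbohydrate_by_difference(moisture, ash, protein, fat):
    """Carbohydrate (g/100 g) estimated by difference, as in proximate analysis."""
    return 100.0 - (moisture + ash + protein + fat)

def dpph_rsa_percent(abs_control, abs_sample):
    """DPPH radical scavenging activity (%) from control and sample absorbances."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical readings for illustration only (not the study's measurements):
carbs = carbohydrate_by_difference(moisture=12.1, ash=1.8, protein=8.4, fat=19.6)
rsa = dpph_rsa_percent(abs_control=0.80, abs_sample=0.35)
```

A stronger antioxidant lowers the sample absorbance, so a higher RSA percentage means greater scavenging activity.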
Jianghao Chu, Tae-Hwy Lee and Aman Ullah
Abstract
In this chapter we consider the “Regularization of Derivative Expectation Operator” (Rodeo) of Lafferty and Wasserman (2008) and propose a modified Rodeo algorithm for semiparametric single index models (SIMs) in a big-data environment with many regressors. The method assumes sparsity, i.e. that many of the regressors are irrelevant. It uses a greedy algorithm: to estimate the semiparametric SIM of Ichimura (1993), all coefficients of the regressors initially start from near zero, and we then test iteratively whether the derivative of the regression-function estimator with respect to each coefficient is significantly different from zero. The basic idea of the modified Rodeo algorithm for SIMs (called SIM-Rodeo) is to view local bandwidth selection as a variable-selection scheme which amplifies the coefficients of relevant variables while keeping the coefficients of irrelevant variables relatively small or at their initial starting values near zero. For sparse semiparametric SIMs, the SIM-Rodeo algorithm is shown to attain consistency in variable selection. In addition, the algorithm completes the greedy steps quickly. We compare SIM-Rodeo with the SIM-Lasso method of Zeng et al. (2012). Our simulation results demonstrate that the proposed SIM-Rodeo method is consistent for variable selection and has smaller integrated mean squared error (IMSE) than SIM-Lasso.
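The flavour of the greedy derivative test can be seen in a toy version of the original local-constant Rodeo (not the chapter's SIM-Rodeo): start with large bandwidths and keep shrinking a coordinate's bandwidth only while the derivative of the fit with respect to that bandwidth is significantly nonzero. All constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = rng.uniform(0, 1, size=(n, 2))            # x1 is relevant, x2 is irrelevant
y = np.sin(8 * X[:, 0]) + 0.5 * rng.standard_normal(n)
x0 = np.array([0.5, 0.5])                     # point of estimation
sigma = 0.5                                   # noise level, assumed known here

h = np.array([1.0, 1.0])                      # start from large bandwidths
active = [0, 1]
beta = 0.9                                    # multiplicative shrink factor
while active:
    u = (X - x0) / h
    w = np.exp(-0.5 * (u ** 2).sum(axis=1))   # product Gaussian kernel weights
    S = w.sum()
    still_active = []
    for j in active:
        # Z_j: derivative of the local-constant fit at x0 w.r.t. bandwidth h_j,
        # written as a linear functional Z_j = G @ y of the responses.
        dw = w * (X[:, j] - x0[j]) ** 2 / h[j] ** 3
        G = (dw * S - w * dw.sum()) / S ** 2
        Z = G @ y
        lam = sigma * np.sqrt((G ** 2).sum() * 2 * np.log(n))   # noise threshold
        if abs(Z) > lam and h[j] > 0.05:
            h[j] *= beta                      # still significant: keep shrinking
            still_active.append(j)
        # otherwise freeze h_j: locally irrelevant direction (or floor reached)
    active = still_active
```

The relevant coordinate ends with a small bandwidth while the irrelevant one keeps a large bandwidth, which is exactly the variable-selection reading of bandwidth selection the chapter exploits.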
Yulia Kotlyarova, Marcia M. A. Schafgans and Victoria Zinde-Walsh
Abstract
For kernel-based estimators, smoothness conditions ensure that the asymptotic rate at which the bias goes to zero is determined by the kernel order. In a finite sample, the leading term in the expansion of the bias may provide a poor approximation. We explore the relation between smoothness and bias and provide estimators for the degree of the smoothness and the bias. We demonstrate the existence of a linear combination of estimators whose trace of the asymptotic mean-squared error is reduced relative to the individual estimator at the optimal bandwidth. We examine the finite-sample performance of a combined estimator that minimizes the trace of the MSE of a linear combination of individual kernel estimators for a multimodal density. The combined estimator provides a robust alternative to individual estimators that protects against uncertainty about the degree of smoothness.
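A minimal sketch of the combination idea for a multimodal density, with leave-one-out likelihood standing in for the trace-of-MSE criterion the chapter actually minimises: two Gaussian kernel density estimators with different bandwidths are mixed, and the mixing weight is chosen on a grid. All bandwidths and sample settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
# Bimodal data: a narrow mode and a wide mode, so no single bandwidth fits both.
x = np.concatenate([rng.normal(-2.0, 0.3, 150), rng.normal(2.0, 1.0, 150)])

def kde(grid, data, h):
    # Gaussian kernel density estimate with bandwidth h, evaluated on `grid`.
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def loo_values(data, h):
    # Leave-one-out density values f_{-i}(x_i), used to score a bandwidth mix.
    u = (data[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)
    return K.sum(axis=1) / (len(data) - 1)

h1, h2 = 0.15, 0.8                            # one narrow, one wide bandwidth
f1, f2 = loo_values(x, h1), loo_values(x, h2)
weights = np.linspace(0.0, 1.0, 21)
best_w = max(weights, key=lambda w: np.log(w * f1 + (1 - w) * f2 + 1e-300).sum())
combined = lambda g: best_w * kde(g, x, h1) + (1 - best_w) * kde(g, x, h2)
```

Because the weight grid includes 0 and 1, the combined estimator can never score worse than either individual estimator under this criterion, which is the robustness property the abstract emphasises.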
Abstract
This paper gives a selective review on some recent developments of nonparametric methods in both continuous and discrete time finance, particularly in the areas of nonparametric estimation and testing of diffusion processes, nonparametric testing of parametric diffusion models, nonparametric pricing of derivatives, nonparametric estimation and hypothesis testing for nonlinear pricing kernel, and nonparametric predictability of asset returns. For each financial context, the paper discusses the suitable statistical concepts, models, and modeling procedures, as well as some of their applications to financial data. Their relative strengths and weaknesses are discussed. Much theoretical and empirical research is needed in this area, and more importantly, the paper points to several aspects that deserve further investigation.
Abstract
Applied econometric analysis is often performed using data collected from large-scale surveys. These surveys use complex sampling plans in order to reduce costs and increase the estimation efficiency for subgroups of the population. These sampling plans result in unequal inclusion probabilities across units in the population. The purpose of this paper is to derive the asymptotic properties of a design-based nonparametric regression estimator under a combined inference framework. The nonparametric regression estimator considered is the local constant estimator. This work contributes to the literature in two ways. First, it derives the asymptotic properties for the multivariate mixed-data case, including the asymptotic normality of the estimator. Second, I use least squares cross-validation for selecting the bandwidths for both continuous and discrete variables. I run Monte Carlo simulations designed to assess the finite-sample performance of the design-based local constant estimator versus the traditional local constant estimator for three sampling methods, namely, simple random sampling, exogenous stratification and endogenous stratification. Simulation results show that the estimator is consistent and that efficiency gains can be achieved by weighting observations by the inverse of their inclusion probabilities if the sampling is endogenous.
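The design-based local constant estimator weights each kernel term by the inverse of the unit's inclusion probability. The sketch below fabricates an endogenously stratified sample (selection depends on the regression error) to show the naive estimator's bias and the weighted correction; all numbers are illustrative, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4000                                   # population size
x = rng.uniform(0, 1, N)
eps = 0.5 * rng.standard_normal(N)
y = x + eps                                # true regression: E[y | x] = x

# Endogenous sampling: units with positive shocks are far more likely sampled.
p = np.where(eps > 0, 0.9, 0.1)            # inclusion probabilities
sampled = rng.uniform(size=N) < p
xs, ys, ps = x[sampled], y[sampled], p[sampled]

def local_constant(x0, xd, yd, h, w=None):
    # (Design-weighted) Nadaraya-Watson: weight = kernel * survey weight.
    w = np.ones_like(yd) if w is None else w
    k = w * np.exp(-0.5 * ((xd - x0) / h) ** 2)
    return (k * yd).sum() / k.sum()

h = 0.1
naive = local_constant(0.5, xs, ys, h)             # ignores the sampling design
weighted = local_constant(0.5, xs, ys, h, w=1/ps)  # inverse-probability weighted
```

Because selection favours positive errors, the naive estimate sits well above the true value 0.5 at x = 0.5, while the inverse-probability-weighted estimate recovers it, illustrating the efficiency and consistency point made for endogenous stratification.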
Abdullah Alharbi, Wajdi Alhakami, Sami Bourouis, Fatma Najar and Nizar Bouguila
Abstract
We propose in this paper a novel, reliable detection method to recognize forged inpainted images. Detecting potential forgeries and authenticating the content of digital images is extremely challenging and important for many applications. The proposed approach develops new probabilistic support vector machine (SVM) kernels from a flexible generative statistical model named the “bounded generalized Gaussian mixture model”. The developed learning framework has the advantage of properly combining the benefits of both discriminative and generative models and of including prior knowledge about the nature of the data. It can effectively recognize whether an image has been tampered with and can distinguish forged from authentic images. The obtained results confirm that the developed framework performs well on numerous inpainted images.