Search results

1–10 of over 21,000
Book part
Publication date: 16 December 2009

Zongwu Cai, Jingping Gu and Qi Li

Abstract

The literature on nonparametric econometrics has grown rapidly over the past two decades. Given space limitations, it is impossible to survey all of the important recent developments, so we limit our focus to the following areas. In Section 2, we review recent developments in nonparametric estimation and testing of regression functions with mixed discrete and continuous covariates. We discuss nonparametric estimation and testing of econometric models for nonstationary data in Section 3. Section 4 is devoted to surveying the literature on nonparametric instrumental variable (IV) models. We review nonparametric estimation of quantile regression models in Section 5. Throughout Sections 2–5, we also point out open research problems, which may help graduate students review the important papers in this field and identify their own research interests, particularly dissertation topics for doctoral students. Finally, in Section 6 we highlight some important research areas that space constraints prevent us from covering; we plan to discuss some of these omitted topics in a separate survey paper.

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 30 June 2021

Zhiwei Liu, Jianjun Chen, Yifan Xia and Yao Zheng

Abstract

Purpose

Sizing functions are crucial inputs for unstructured mesh generation, since they largely determine the element distributions of the resulting meshes. Meanwhile, automating the creation of a sizing function is a prerequisite for a fully automatic mesh generation pipeline. In this paper, an automatic algorithm is proposed to create a high-quality sizing function for unstructured surface and volume mesh generation, using a triangular mesh as the background mesh.

Design/methodology/approach

A practically efficient and effective solution is developed by carefully applying local operators to re-mesh the tessellation of the input Computer Aided Design (CAD) models. A nonlinear programming (NLP) problem is formulated to limit the gradient of the sizing function; in this study, the objective function of this NLP is replaced by an analytical equation that predicts the number of elements. For querying the sizing value, an improved algorithm is developed using an axis-aligned bounding box (AABB) tree structure.

Findings

The local re-meshing operations effectively and efficiently resolve the banding issue caused by using the default tessellation of the model to define a sizing function. Experiments show that the solution of the revised NLP, in most cases, provides a better solution at lower computational cost. With the help of the AABB tree, the sizing function defined on a surface background mesh can also be used as the input for volume mesh generation.

Originality/value

Theoretical analysis reveals that the construction of the initial sizing function can be reduced to the solution of an optimization problem. Definitions of banding elements and surface proximity are also given. Guided by this theoretical analysis, re-meshing and ray-casting techniques are designed to initialize the sizing function. By smoothing with the revised NLP and querying via the AABB tree, the paper provides an automatic method for obtaining a high-quality sizing function for both surface and volume mesh generation.
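The sizing-value query via an AABB tree described above can be sketched as follows. This is a minimal, hypothetical two-dimensional illustration (all names are our own, and the median-split tree and barycentric interpolation are assumptions, not the authors' implementation):

```python
import numpy as np

def build(faces, verts, ids, leaf=4):
    """Recursively build an AABB tree over the triangle ids."""
    pts = verts[faces[ids].ravel()].reshape(len(ids), 3, 2)
    lo, hi = pts.min(axis=(0, 1)), pts.max(axis=(0, 1))
    if len(ids) <= leaf:
        return {"lo": lo, "hi": hi, "ids": ids}
    axis = np.argmax(hi - lo)                    # split along the widest axis
    order = ids[np.argsort(pts.mean(axis=1)[:, axis])]
    mid = len(order) // 2
    return {"lo": lo, "hi": hi,
            "left": build(faces, verts, order[:mid], leaf),
            "right": build(faces, verts, order[mid:], leaf)}

def bary(p, a, b, c):
    """Barycentric coordinates of p in triangle (a, b, c)."""
    u, v = np.linalg.solve(np.column_stack([b - a, c - a]), p - a)
    return np.array([1 - u - v, u, v])

def query(node, p, faces, verts, sizes):
    """Interpolated sizing value at p (sizes assumed positive), else None."""
    if np.any(p < node["lo"] - 1e-12) or np.any(p > node["hi"] + 1e-12):
        return None
    if "ids" in node:
        for t in node["ids"]:
            a, b, c = verts[faces[t]]
            w = bary(p, a, b, c)
            if np.all(w >= -1e-12):              # p lies inside triangle t
                return float(w @ sizes[faces[t]])
        return None
    return query(node["left"], p, faces, verts, sizes) or \
           query(node["right"], p, faces, verts, sizes)

# Toy background mesh: two triangles tiling the unit square,
# with sizing values stored at the corners.
verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
sizes = np.array([1.0, 2.0, 3.0, 4.0])
tree = build(faces, verts, np.arange(len(faces)))
h = query(tree, np.array([0.25, 0.25]), faces, verts, sizes)  # 1.5
```

The AABB filter prunes subtrees whose bounding box cannot contain the query point, so only a few candidate triangles are tested exactly.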

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 June 1997

B.X. Zhang, B.T.F. Chung and Edward T. Lee

Abstract

Presents an efficient method that utilizes a “max‐pro” optimum scheme to solve the “max‐min” decision function in a fuzzy optimization environment. The proposed method significantly simplifies the “max‐min” optimization problem, especially when the number of objectives and constraints is large. Presents illustrative examples. The technique may also have valuable applications in solving general optimization problems with a piecewise‐smoothed objective function.
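For background, the “max‐min” decision function referred to above takes the minimum of all objective and constraint membership functions and seeks the point maximizing that minimum. A brute-force sketch with made-up triangular membership functions (our own illustration, not the paper's “max‐pro” scheme, which is designed to avoid exactly this kind of exhaustive search):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

# Max-min fuzzy decision: mu_D(x) = min_i mu_i(x); pick x maximizing mu_D.
x = np.linspace(0, 10, 1001)
mu_obj = tri(x, 0, 4, 8)           # objective satisfaction (made up)
mu_con = tri(x, 2, 6, 10)          # constraint satisfaction (made up)
mu_d = np.minimum(mu_obj, mu_con)  # fuzzy decision function
x_star = x[np.argmax(mu_d)]        # max-min optimum: 5.0, mu_D = 0.75
```

The grid search is O(grid size × number of membership functions), which is why schemes that exploit the piecewise structure become attractive as the problem grows.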

Details

Kybernetes, vol. 26 no. 4
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 1 February 2016

Manoj Manuja and Deepak Garg

Abstract

Purpose

Syntax-based text classification (TC) mechanisms have largely been replaced by semantic-based systems in recent years. Semantic-based TC systems are particularly useful in scenarios where similarity among documents is computed by considering semantic relationships among their terms. Kernel functions have received major attention because of the unprecedented popularity of SVMs in the field of TC. Most kernel functions exploit the syntactic structure of the text, but quite a few also use a priori semantic information for knowledge extraction. The purpose of this paper is to investigate semantic kernel functions in the context of TC.

Design/methodology/approach

This work presents performance and accuracy analysis of seven semantic kernel functions (Semantic Smoothing Kernel, Latent Semantic Kernel, Semantic WordNet-based Kernel, Semantic Smoothing Kernel having Implicit Superconcept Expansions, Compactness-based Disambiguation Kernel Function, Omiotis-based S-VSM semantic kernel function and Top-k S-VSM semantic kernel) being implemented with SVM as kernel method. All seven semantic kernels are implemented in SVM-Light tool.
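A semantic smoothing kernel of the general kind surveyed above can be sketched as K(d1, d2) = d1ᵀSSᵀd2, where S is a term-by-term semantic proximity matrix. The tiny vocabulary and similarity values below are invented for illustration; this is not one of the paper's seven kernels nor its SVM-Light implementation:

```python
import numpy as np

# Term-by-term semantic proximity matrix over a made-up 3-term
# vocabulary: ["car", "automobile", "banana"].
S = np.array([[1.0, 0.8, 0.0],   # "car" is close to "automobile"
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # "banana" is unrelated to both

def semantic_kernel(d1, d2, S):
    """Semantic smoothing kernel between bag-of-words vectors."""
    return float(d1 @ S @ S.T @ d2)

d_car   = np.array([1.0, 0.0, 0.0])  # document mentioning "car"
d_auto  = np.array([0.0, 1.0, 0.0])  # document mentioning "automobile"
d_fruit = np.array([0.0, 0.0, 1.0])  # document mentioning "banana"

# Semantically related documents score high despite sharing no terms:
k_related = semantic_kernel(d_car, d_auto, S)     # 1.6
k_unrelated = semantic_kernel(d_car, d_fruit, S)  # 0.0
```

Plugging such a kernel into an SVM is what lets classification exploit term relatedness that a plain dot product of term vectors would miss.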

Findings

Performance and accuracy parameters of the seven semantic kernel functions have been evaluated and compared. The experimental results show that the Top-k S-VSM semantic kernel has the highest performance and accuracy among all the evaluated kernel functions, which makes it a preferred building block for kernel methods for TC and retrieval.

Research limitations/implications

A combination of semantic and syntactic kernel functions needs to be investigated, as there is scope for further improvement in accuracy and performance across all seven semantic kernel functions.

Practical implications

This research provides an insight into TC using a priori semantic knowledge. Three commonly used data sets are employed. It would be interesting to explore these kernel functions on live web data, which would test their actual utility in real business scenarios.

Originality/value

Comparison of performance and accuracy parameters is the novel point of this research paper. To the best of the authors’ knowledge, this type of comparison has not been done previously.

Details

Program, vol. 50 no. 1
Type: Research Article
ISSN: 0033-0337

Keywords

Book part
Publication date: 16 December 2009

Daniel J. Henderson and Christopher F. Parmeter

Abstract

Economic conditions such as convexity, homogeneity, homotheticity, and monotonicity are all important assumptions on, or consequences of assumptions on, the economic functionals to be estimated. Recent research has seen a renewed interest in imposing constraints in nonparametric regression. We survey the available methods in the literature, discuss the challenges that arise when empirically implementing these methods, and extend an existing method to handle general nonlinear constraints. A heuristic discussion of the empirical implementation of methods that use sequential quadratic programming is provided for the reader, and simulated and empirical evidence on the distinction between constrained and unconstrained nonparametric regression surfaces is presented.
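As a concrete taste of constrained regression, the simplest shape constraint named above, monotonicity, can be imposed in one dimension by isotonic regression via the pool-adjacent-violators algorithm. This standard textbook device is offered only as background; it is not the sequential-quadratic-programming approach the chapter surveys:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares monotone (increasing) fit."""
    out = []  # list of [block mean, block size]
    for v in y:
        out.append([v, 1])
        # Merge while the last two block means violate monotonicity.
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    fit = []
    for mean, size in out:
        fit.extend([mean] * size)
    return fit

# Noisy, non-monotone values are projected onto the increasing cone:
print(pava([1.0, 3.0, 2.0, 4.0]))   # [1.0, 2.5, 2.5, 4.0]
```

The projection averages adjacent violating blocks, which is exactly the least-squares fit subject to the monotonicity constraint.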

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 1 December 2003

A.G. Adeagbo‐Sheikh

Abstract

Considers a conceptual model that leads to the notions of a “distance function” g(t) and a “controlled‐disturbance function” δ(t)=h(g(t)). Using these notions we begin a mathematical theory of a system that is self‐organizing to achieve a given state of affairs in a given environment. Obtains, in terms of the functions δ(t) and g(t), a condition under which the system always progresses towards the goal. We also establish the form of expression for the distance function g(t). This serves as a major tool in the proofs of the so‐called goal‐state‐description theorems. These theorems facilitate the determination of the “working functions” of the self‐organizing system (SOS). When they exist, the “working functions” specify a goal‐path for the SOS to learn to adopt.

Details

Kybernetes, vol. 32 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords

Details

Nonlinear Time Series Analysis of Business Cycles
Type: Book
ISBN: 978-0-44451-838-5

Article
Publication date: 30 September 2014

Chihiro Shimizu, Koji Karato and Kiyohiko Nishimura

Abstract

Purpose

The purpose of this article was, starting from linear regression, to estimate a switching regression model, a nonparametric model and a generalized additive model (as a semi-parametric model), to perform function estimation with multiple nonlinear estimation methods, and to conduct a comparative analysis of their predictive accuracy. The theoretical importance of estimating hedonic functions using a nonlinear functional form has been pointed out in ample previous research (e.g. Heckman et al., 2010).

Design/methodology/approach

The distinctive features of this study include not only the estimation of multiple nonlinear functional forms but also the method of verifying predictive accuracy. Using out-of-sample testing, we verified predictive accuracy by randomly sampling, 500 times without replacement, 9,682 data items (the same number used in model estimation) from data for the years before and after the year used for model estimation.
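The out-of-sample verification loop described above can be sketched generically as follows. The synthetic data and the plain linear fit are placeholders chosen for brevity; they stand in for, and are not, the authors' hedonic specifications or housing data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit on one period, then score repeated random subsamples of another
# period. All data here are synthetic stand-ins for illustration.
n, n_draws, n_sub = 2000, 500, 1000
true_beta = np.array([1.0, -2.0, 0.5])
x_train = rng.normal(size=(n, 3))
y_train = x_train @ true_beta + rng.normal(scale=0.1, size=n)
x_test = rng.normal(size=(n, 3))                 # "another year"
y_test = x_test @ true_beta + rng.normal(scale=0.1, size=n)

beta, *_ = np.linalg.lstsq(x_train, y_train, rcond=None)

rmses = []
for _ in range(n_draws):                         # 500 random draws
    idx = rng.choice(n, size=n_sub, replace=False)  # without replacement
    err = y_test[idx] - x_test[idx] @ beta
    rmses.append(np.sqrt(np.mean(err ** 2)))
print(f"mean out-of-sample RMSE: {np.mean(rmses):.3f}")
```

Repeating the draw stabilizes the accuracy estimate and gives a spread, so competing models can be compared on the distribution of RMSEs rather than a single split.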

Findings

As a result of estimating multiple models, we believe that for hedonic function estimation, nonlinear models appear superior on the basis of statistical measures of fit and graphical comparisons. However, when we examined predictive accuracy using out-of-sample testing, we found that all nonlinear models had inferior predictive accuracy to the linear models.

Research limitations/implications

The inferior predictive accuracy may be due to overfitting in the function estimation. Because this research was conducted for a specific period of time, it needs to be extended to multiple periods over which the market fluctuates dynamically, with further analysis conducted.

Practical implications

Many studies compare predictive accuracy by separating the estimation model and verification model using data at the same point in time. However, when attempting practical application for auto-appraisal systems and the like, it is necessary to estimate a model using past data and make predictions with respect to current transactions. It is possible to apply this study to auto-appraisal systems.

Social implications

It is recognized that housing price fluctuations caused by the subprime crisis had a massive impact on the financial system. The findings of this study are expected to serve as a tool for measuring housing price fluctuation risks in the financial system.

Originality/value

While the importance of nonlinear estimation when estimating hedonic functions has been pointed out in theoretical terms, there is a noticeable lag when it comes to testing based on actual data. Given this, we believe that our verification of the validity of nonlinear estimation using multiple nonlinear models is significant not only from an academic perspective but also for its potential practical applications.

Details

International Journal of Housing Markets and Analysis, vol. 7 no. 4
Type: Research Article
ISSN: 1753-8270

Keywords

Article
Publication date: 12 April 2013

Abdelraheem M. Aly, Mitsuteru Asai and Yoshimi Sonda

Abstract

Purpose

The purpose of this paper is to show how a surface tension model and an eddy viscosity based on the Smagorinsky sub‐grid scale model, which belongs to Large‐Eddy Simulation (LES) theory for turbulent flow, have been introduced into the ISPH (incompressible smoothed particle hydrodynamics) method. In addition, a small modification of the source term of the pressure Poisson equation has been introduced as a stabilizer for robust simulations. This stabilization generates a smoothed pressure distribution and conserves the total volume of fluid, and it is analogous to a recent modification in MPS.

Design/methodology/approach

The surface tension force in free-surface flow is evaluated without directly modeling the surrounding air, which decreases computational costs. The proposed model was validated by calculating the surface tension force at the free-surface interface for a cubic droplet under zero gravity and for the milk crown problem at different model resolutions. Finally, the effects of the eddy viscosity are discussed through a fluid-fluid interaction simulation.

Findings

From the numerical tests, the surface tension model can handle free-surface-tension problems, including those with high curvature, without special treatment. The eddy viscosity has a clear effect in adjusting the splashes and reduces the deformation of the free surface in the interaction. Finally, the proposed stabilization, which appears in the source term of the pressure Poisson equation, plays an important role in conserving the total volume of fluid in the simulation.
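In the ISPH literature, a volume-conserving stabilizer of the kind described above is commonly written as a relaxed source term that blends the velocity-divergence condition with a density-invariance term. A hedged reconstruction of that common form (the relaxation coefficient $\gamma$ and the exact weighting are assumptions on our part, not necessarily the equation used in this paper):

$$\nabla^{2} p^{\,n+1} = (1-\gamma)\,\frac{\rho_{0}}{\Delta t}\,\nabla\cdot\mathbf{u}^{*} + \gamma\,\frac{\rho_{0}-\rho^{\,n}}{\Delta t^{2}}$$

The second term nudges the particle density $\rho^{n}$ back toward its reference value $\rho_{0}$, which smooths the pressure field and limits volume drift.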

Originality/value

An incompressible smoothed particle hydrodynamics method is developed to simulate the milk crown problem using a surface tension model and eddy viscosity.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 23 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Book part
Publication date: 19 December 2012

Liangjun Su and Halbert L. White

Abstract

We provide straightforward new nonparametric methods for testing conditional independence using local polynomial quantile regression, allowing weakly dependent data. Inspired by Hausman's (1978) specification testing ideas, our methods essentially compare two collections of estimators that converge to the same limits under correct specification (conditional independence) and that diverge under the alternative. To establish the properties of our estimators, we generalize the existing nonparametric quantile literature not only by allowing for dependent heterogeneous data but also by establishing a weak consistency rate for the local Bahadur representation that is uniform in both the conditioning variables and the quantile index. We also show that, despite our nonparametric approach, our tests can detect local alternatives to conditional independence that decay to zero at the parametric rate. Our approach gives the first nonparametric tests for time-series conditional independence that can detect local alternatives at the parametric rate. Monte Carlo simulations suggest that our tests perform well in finite samples. We apply our test to test for a key identifying assumption in the literature on nonparametric, nonseparable models by studying the returns to schooling.
