Search results

1 – 10 of 64
Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, to avoid the non-negligible estimation bias such a transformation would induce. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both the stochastic noise and the inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends on related determinants, with a wide range of choices, via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution of the composite error. The model provides flexible structures for both the production frontier and the inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries over 1970–2019.
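As a purely illustrative aside, the idea of fitting a CES frontier in levels (without log-transformation) by nonlinear least squares could be sketched as below. This is a toy grid search, not the authors' series-based estimator; all names, grids and parameter values are assumptions.

```python
import random

def ces_output(a, delta, rho, k, l):
    # CES production function evaluated in levels (no log-transformation)
    return a * (delta * k ** rho + (1 - delta) * l ** rho) ** (1.0 / rho)

def fit_ces_nls(data, a_grid, d_grid, r_grid):
    # Nonlinear least squares by exhaustive grid search (illustrative only;
    # a real estimator would use a proper NLS optimizer).
    best, best_sse = None, float("inf")
    for a in a_grid:
        for d in d_grid:
            for r in r_grid:
                sse = sum((y - ces_output(a, d, r, k, l)) ** 2 for k, l, y in data)
                if sse < best_sse:
                    best, best_sse = (a, d, r), sse
    return best

# Simulate data from a known CES frontier and recover the parameters.
rng = random.Random(0)
data = []
for _ in range(300):
    k, l = rng.uniform(1.0, 5.0), rng.uniform(1.0, 5.0)
    data.append((k, l, ces_output(2.0, 0.4, 0.5, k, l) + rng.gauss(0.0, 0.01)))
estimate = fit_ces_nls(data, [1.5, 2.0, 2.5], [0.2, 0.4, 0.6], [-0.5, 0.5, 1.0])
```

With small noise, the grid point matching the true parameters minimizes the sum of squared errors, which is the core of the levels-based NLS idea.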

Book part
Publication date: 5 April 2024

Luis Orea, Inmaculada Álvarez-Ayuso and Luis Servén

Abstract

This chapter provides an empirical assessment of the effects of infrastructure provision on structural change and aggregate productivity using industry-level data for a set of developed and developing countries over 1995–2010. A distinctive feature of the empirical strategy followed is that it allows the measurement of the resource reallocation directly attributable to infrastructure provision. To achieve this, a two-level top-down decomposition of aggregate productivity that combines and extends several strands of the literature is proposed. The empirical application reveals significant production losses attributable to misallocation of inputs across firms, especially among African countries. The results also show that infrastructure provision has stimulated aggregate total factor productivity growth through both within-industry and between-industry productivity gains.

Book part
Publication date: 5 April 2024

Bruce E. Hansen and Jeffrey S. Racine

Abstract

Classical unit root tests are known to suffer from potentially crippling size distortions, and a range of procedures have been proposed to attenuate this problem, including the use of bootstrap procedures. It is also known that the estimating equation’s functional form can affect the outcome of the test, and various model selection procedures have been proposed to overcome this limitation. In this chapter, the authors adopt a model averaging procedure to deal with model uncertainty at the testing stage. In addition, the authors leverage an automatic model-free dependent bootstrap procedure where the null is imposed by simple differencing (the block length is automatically determined using recent developments for bootstrapping dependent processes). Monte Carlo simulations indicate that this approach exhibits the lowest size distortions among its peers in settings that confound existing approaches, while it has superior power relative to those peers whose size distortions do not preclude their general use. The proposed approach is fully automatic, and there are no nuisance parameters that have to be set by the user, which ought to appeal to practitioners.
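The null-imposed-by-differencing bootstrap idea can be sketched in a simplified form: a fixed-block bootstrap of a constant-free Dickey–Fuller t-statistic. The chapter's automatic block-length selection and model averaging are omitted here, and all function names are hypothetical.

```python
import random

def dickey_fuller_stat(y):
    # t-statistic of rho in the constant-free regression dy_t = rho * y_{t-1} + e_t
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(x * x for x in ylag)
    rho = sum(x * d for x, d in zip(ylag, dy)) / sxx
    resid = [d - rho * x for x, d in zip(ylag, dy)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return rho / (s2 / sxx) ** 0.5

def block_bootstrap_unit_root_pvalue(y, block_len=8, reps=199, seed=0):
    # Impose the null by simple differencing: resample blocks of the
    # differences, rebuild a unit-root series by cumulation, and compare
    # the bootstrap statistics against the sample statistic.
    rng = random.Random(seed)
    stat = dickey_fuller_stat(y)
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    n = len(dy)
    count = 0
    for _ in range(reps):
        boot_dy = []
        while len(boot_dy) < n:
            start = rng.randrange(n)
            boot_dy.extend(dy[start:start + block_len])  # blocks near the end may be short
        boot_y = [0.0]
        for d in boot_dy[:n]:
            boot_y.append(boot_y[-1] + d)
        if dickey_fuller_stat(boot_y) <= stat:
            count += 1
    return (1 + count) / (1 + reps)
```

For a clearly stationary series the sample statistic is far in the left tail of the bootstrap distribution, so the bootstrap p-value is small; for a random walk it is not.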

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8

Book part
Publication date: 5 April 2024

Hung-pin Lai

Abstract

The standard method of estimating a stochastic frontier (SF) model is the maximum likelihood (ML) approach, under distributional assumptions of a symmetric two-sided stochastic error v and a one-sided inefficiency random component u. When v or u has a nonstandard distribution, such as when v follows a generalized t distribution or u has a χ2 distribution, the likelihood function can be complicated or intractable. This chapter introduces the use of indirect inference to estimate SF models, in which only least squares estimation is used. There is no need to derive the density or likelihood function, so it is easier to handle a model with complicated distributions in practice. The author examines the finite-sample performance of the proposed estimator and also compares it with the standard ML estimator as well as the maximum simulated likelihood (MSL) estimator using Monte Carlo simulations. The indirect inference estimator is found to perform quite well in finite samples.
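A stripped-down sketch of the indirect inference idea: match simple auxiliary statistics (here, residual moments) between the data and model simulations by least squares, never evaluating a density or likelihood. The half-normal example, the grid search and all names are assumptions for illustration only, not the chapter's estimator.

```python
import random

def aux_stats(resid):
    # Auxiliary statistics: second and third central moments of the residuals.
    n = len(resid)
    m = sum(resid) / n
    m2 = sum((e - m) ** 2 for e in resid) / n
    m3 = sum((e - m) ** 3 for e in resid) / n
    return m2, m3

def simulate_composite(n, su, sv, rng):
    # Composite error v - u with v ~ N(0, sv^2) and u ~ |N(0, su^2)| (half-normal).
    return [rng.gauss(0.0, sv) - abs(rng.gauss(0.0, su)) for _ in range(n)]

def indirect_inference_su_sv(resid, grid, sims=5, seed=1):
    # Choose (su, sv) whose simulated auxiliary statistics best match the
    # data's in a least squares sense (unweighted here, for simplicity).
    target = aux_stats(resid)
    best, best_dist = None, float("inf")
    for su in grid:
        for sv in grid:
            rng = random.Random(seed)  # common random numbers across grid points
            sim = [aux_stats(simulate_composite(len(resid), su, sv, rng))
                   for _ in range(sims)]
            m2 = sum(s[0] for s in sim) / sims
            m3 = sum(s[1] for s in sim) / sims
            dist = (m2 - target[0]) ** 2 + (m3 - target[1]) ** 2
            if dist < best_dist:
                best, best_dist = (su, sv), dist
    return best
```

The third moment pins down the one-sided component and the second moment then identifies the noise scale, which is why no likelihood evaluation is needed.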

Book part
Publication date: 5 April 2024

Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang

Abstract

Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.

Book part
Publication date: 5 April 2024

Mike G. Tsionas

Abstract

In this chapter, we consider the possibility that a firm may use costly resources to improve its technical efficiency. Results from static analyses imply that technical efficiency is determined by the configuration of factor prices. A dynamic model of the firm is developed under the assumption that managerial skill contributes to technical efficiency. Dynamic analysis shows that the firm can never be technically efficient if it maximizes profits, and that the steady state is always inefficient and locally stable. In terms of empirical analysis, we show how likelihood-based methods can be used to uncover, in a semi-non-parametric manner, important features of the inefficiency-management relationship using a flexible functional form that accounts for the endogeneity of inputs in a production function. Managerial compensation can also be identified and estimated using the new techniques. The new empirical methodology is applied to a data set previously analyzed by Bloom and van Reenen (2007) on managerial practices of manufacturing firms in the UK, US, France and Germany.

Article
Publication date: 11 October 2023

Radha Subramanyam, Y. Adline Jancy and P. Nagabushanam

Abstract

Purpose

A cross-layer approach in the media access control (MAC) layer will address interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in wireless sensor network (WSN) and Internet of Things (IoT) applications. Choosing the correct objective function in the Nash equilibrium for game theory will address the fairness index and resource allocation to the nodes. Game theory optimization for distributed MAC may increase network performance. The purpose of this study is to survey the various operations that can be carried out using distributive and adaptive MAC protocols. Hill-climbing distributed MAC does not need a central coordination system, and location-based transmission with neighbor awareness reduces transmission power.

Design/methodology/approach

Distributed MAC in wireless networks is used to address challenges such as network lifetime, reduced energy consumption and improved delay performance. In this paper, a survey is made of various cooperative communications in MAC protocols, the optimization techniques used to improve MAC performance in various applications and the mathematical approaches involved in game theory optimization for MAC protocols.

Findings

Spatial reuse of the channel is improved by 3%–29%, and multichannel operation improves throughput by 8%, using distributed MAC protocols. The Nash equilibrium is found to perform well, focusing on the energy utility contributed to the network by individual players. Fuzzy logic improves channel selection by 17% and secondary users' involvement by 8%. A cross-layer approach in the MAC layer will address interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in WSN and IoT applications. Cross-layer and cooperative communication give energy savings of 27% and reduce hop distance by 4.7%. Choosing the correct objective function in the Nash equilibrium for game theory will address the fairness index and resource allocation to the nodes.

Research limitations/implications

Other optimization techniques can be applied for WSN to analyze the performance.

Practical implications

Game theory optimization for distributed MAC may increase network performance. Optimal cuckoo search improves throughput by 90% and reduces delay by 91%. Stochastic approaches detect 80% of attacks, even with 90% malicious nodes.

Social implications

Channel allocation in a centralized or static manner must be based on traffic demands, whether the traffic is dynamic or fluctuating. The use of multimedia devices has also increased, which in turn has increased the demand for high throughput. Co-channel interference keeps changing, and its mitigation can be handled by proper resource allocation. Network survival relies on the efficient use of valid paths in the network, avoiding transmission failures, and the effective use of time slots.

Originality/value

A literature survey is carried out to identify the methods that give better performance.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 25 April 2024

Metin Uzun

Abstract

Purpose

This research study aims to minimize the autonomous flight cost and maximize the autonomous flight performance of a slung-load-carrying rotary wing mini unmanned aerial vehicle (i.e. UAV) by stochastically optimizing autonomous flight control system (AFCS) parameters. To this end, a stochastic design approach is applied over certain parameters (i.e. the gains of the longitudinal PID controller of a hierarchical autopilot system) while lower and upper constraints exist on these design parameters.

Design/methodology/approach

A rotary wing mini UAV was produced in the drone laboratory of Iskenderun Technical University. This rotary wing UAV has a three-blade main rotor, a fuselage, landing gear and a tail rotor, and it is able to carry slung loads. The AFCS variables (i.e. the gains of the longitudinal PID controller of the hierarchical autopilot system) are stochastically optimized to minimize the autonomous flight cost, capturing rise time, settling time and overshoot during longitudinal flight, and to maximize autonomous flight performance. The resulting values are then applied in composing autonomous flight simulations of the rotary wing mini UAV.
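The gain-tuning step described above could be sketched, in a purely illustrative way, as a random search over bounded PID gains that minimizes a cost built from rise time, settling time and overshoot. The plant below is a hypothetical first-order system, not the UAV's actual dynamics, and the bounds and weights are assumptions.

```python
import random

def step_response_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    # Simulate the step response of a hypothetical first-order plant
    # dy/dt = -y + u under PID control, then score it by rise time,
    # settling time (2% band) and overshoot.
    y, integ, prev_err = 0.0, 0.0, 1.0
    ys = []
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt
        if abs(y) > 10.0:          # unstable gains get a prohibitive cost
            return 1e6
        ys.append(y)
    overshoot = max(0.0, max(ys) - 1.0)
    rise = next((i * dt for i, v in enumerate(ys) if v >= 0.9), t_end)
    settle = t_end
    for i in range(len(ys) - 1, -1, -1):
        if abs(ys[i] - 1.0) > 0.02:
            settle = (i + 1) * dt
            break
    return rise + settle + 10.0 * overshoot

def random_search(bounds, iters=300, seed=3):
    # Stochastic optimization of the PID gains within lower/upper bounds.
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        gains = [rng.uniform(lo, hi) for lo, hi in bounds]
        cost = step_response_cost(*gains)
        if cost < best_cost:
            best, best_cost = gains, cost
    return best, best_cost
```

A weak proportional-only controller never reaches the setpoint and thus scores the maximal rise and settling times; the random search finds bounded gains with a strictly lower cost.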

Findings

By stochastically optimizing the AFCS of a slung-load-carrying rotary wing mini UAV over the aforementioned longitudinal PID controller gains, subject to lower and upper constraints on these variables, a rotary wing mini UAV with high autonomous flight performance is obtained.

Research limitations/implications

Approval of the Directorate General of Civil Aviation of the Republic of Türkiye is essential for real-time autonomous flights of rotary wing mini UAVs.

Practical implications

Stochastic optimization of the AFCS for rotary wing mini UAVs carrying slung loads is highly valuable for improving the autonomous flight performance and cost of any rotary wing mini UAV.

Originality/value

This study establishes a novel procedure for improving the autonomous flight performance and cost of a rotary wing mini UAV carrying slung loads, and introduces a new process for performing stochastic optimization of the AFCS for such UAVs while upper and lower bounds exist on the design variables.

Details

Aircraft Engineering and Aerospace Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 15 February 2023

Tiago F.A.C. Sigahi and Laerte Idal Sznelwar

Abstract

Purpose

The purpose of this paper is twofold: (1) to map and analyze existing complexity typologies and (2) to develop a framework for characterizing complexity-based approaches.

Design/methodology/approach

This study was conducted in three stages: (1) initial identification of typologies related to complexity following a structured procedure based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol; (2) backward and forward review to identify additional relevant typologies and (3) content analysis of the selected typologies, categorization and framework development.

Findings

Based on 17 selected typologies, a comprehensive overview of complexity studies is provided. Each typology is described in terms of its key concepts and contributions, as well as the convergences and differences between typologies. The epistemological, theoretical and methodological diversity of complexity studies is explored, allowing the identification of the main schools of thought and authors. A framework for characterizing complexity-based approaches is proposed, comprising the following perspectives: ontology of complexity, epistemology of complexity, purpose and object of interest, methodology and methods, and theoretical pillars.

Originality/value

This study examines the main typologies of complexity from an integrated and multidisciplinary perspective and, based on that, proposes a novel framework for understanding and characterizing complexity-based approaches.

Details

Kybernetes, vol. 53 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 2 May 2024

Ali Hashemi Baghi and Jasmin Mansour

Abstract

Purpose

Fused Filament Fabrication (FFF) is one of the growing technologies in additive manufacturing that can be used in a number of applications. In this method, process parameters can be customized, and their simultaneous variation has conflicting impacts on various properties of printed parts, such as dimensional accuracy (DA) and surface finish. These properties can be improved by optimizing the values of these parameters.

Design/methodology/approach

In this paper, four process parameters, namely, print speed, build orientation, raster width and layer height, referred to as “input variables,” were investigated. The conflicting influence of their simultaneous variations on the DA of printed parts was investigated and predicted. To achieve this goal, a hybrid Genetic Algorithm–Artificial Neural Network (GA-ANN) model was developed in C#.net, and three geometries, namely, U-shape, cube and cylinder, were selected. To investigate the DA of printed parts, samples were printed with a central through hole. Design of Experiments (DoE), specifically the Rotatable Central Composite Design method, was adopted to establish the number of parts to be printed (30 for each selected geometry) and the value of each input process parameter. The dimensions of printed parts were accurately measured by a shadowgraph and were used as the input data set for the training phase of the developed ANN to predict the behavior of the process parameters. The predicted values were then used as input to the Desirability Function tool, which resulted in a mathematical model that optimizes the input process variables for the selected geometries. A mean square error of 0.0528 was achieved, which is indicative of the accuracy of the developed model.
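The desirability-function step can be illustrated with a minimal, smaller-is-better Derringer-type desirability; the targets, limits and names below are hypothetical, not those used in the study.

```python
def desirability_smaller_is_better(y, target, upper, weight=1.0):
    # Derringer-style "smaller is better" desirability: 1 at or below the
    # target value, 0 at or above the upper acceptability limit, and a
    # power-weighted ramp in between.
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def overall_desirability(individual):
    # Composite desirability: geometric mean of the individual desirabilities,
    # so any single zero (an unacceptable response) zeroes the composite.
    d = 1.0
    for v in individual:
        d *= v
    return d ** (1.0 / len(individual))
```

A multi-response optimizer would evaluate the composite desirability at each candidate setting of the process parameters (e.g. over the predicted dimensional error and surface roughness) and keep the setting that maximizes it.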

Findings

The results showed that print speed is the most dominant input variable compared to the others, and increasing its value resulted in considerable variations in DA. The inaccuracy increased especially for parts of circular cross-section. In addition, if there is no need to print parts in a vertical position, the build orientation should be set at 0° to achieve the highest DA. Finally, optimized values of raster width and layer height improved the DA, especially when the print speed was set at a high value.

Originality/value

By using an ANN, it is possible to investigate the impact of simultaneous variations of FFF machines' input process parameters on the DA of printed parts; by optimizing these parameters, parts with highly accurate dimensions can be printed. These findings will be of significant value to industries that need to produce parts of high DA on FFF machines.

Details

Rapid Prototyping Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2546
