Search results

1 – 10 of over 5000
Article
Publication date: 1 May 2009

E.M. Osman, A.A. El-Ebissy and M.N. Michael

Abstract

The objective of this paper is to establish a quantitative method to determine the levelness (L) of coloration by spectrophotometric measurements. Previously, the L of coloration has mainly been evaluated by visual assessment.

Visual assessment cannot produce quantitative L data, because evaluations of the same colored material by different observers can differ considerably. Color levelness describes the uniformity of color shade across different areas of a fabric. It is a very important parameter for the quality of textile coloration, for quality control, and for communication between laboratories.

This research work therefore evaluates the L parameters using different variables: a) three different natural fabrics, namely wool, silk and cotton, dyed with a yellow natural dye from onion skins under the effect of different mordants; and b) three different natural dyes, namely onion skins, turmeric and madder, applied to wool fabric samples under the effect of different mordants.

The results show that the dyed samples with the highest color strength (K/S) have the highest unlevelness (U) and the lowest color difference (ΔE) values (i.e. the highest light fastness). These results hold regardless of the fabric type or dye used.
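
As a rough illustration of how such spectrophotometric quantities could be computed, the sketch below implements the CIE76 color-difference formula and one plausible unlevelness index (the standard deviation of Kubelka-Munk K/S readings across measurement spots). All readings are hypothetical, and the paper's own levelness index may be defined differently.

```python
import math

def delta_e_cie76(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def unlevelness(ks_values):
    # One plausible unlevelness index: the standard deviation of the
    # Kubelka-Munk color strength K/S across measurement spots
    n = len(ks_values)
    mean = sum(ks_values) / n
    return math.sqrt(sum((k - mean) ** 2 for k in ks_values) / n)

# Hypothetical K/S readings from five spots on a dyed wool swatch
spots_ks = [4.1, 4.3, 4.0, 4.2, 4.4]
u = unlevelness(spots_ks)

# Hypothetical CIELAB coordinates of a sample before and after exposure
de = delta_e_cie76((52.0, 10.5, 38.2), (51.4, 10.9, 37.6))
```

A perfectly level dyeing would give U = 0; larger spreads in the spot readings give larger U.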

Article
Publication date: 2 April 2019

Yunyong Li, Yong Zhao, Haidong Yu and Xinmin Lai

Abstract

Purpose

A new deviation propagation model that accounts for form defects in the compliant assembly process is proposed. The purpose of this paper is to analyze deviation propagation using basic deviation fields. In particular, each basic deviation field is defined with a generalized compliance matrix of the part.

Design/methodology/approach

First, the form defects of parts may be decomposed into a linear combination of basic deviation fields, which are constructed by the eigen-decomposition of the structural stiffness of parts with ideal dimensions. Each basic deviation field is defined with a generalized compliance of the part. Moreover, by analyzing the relationship between the basic deviation fields before and after the assembly process, a new sensitivity matrix is obtained in which each value expresses the correlation of a basic deviation field between the parts and the assembly.
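
A minimal sketch of the decomposition step, assuming a small hypothetical stiffness matrix: the eigenvectors of the stiffness serve as basic deviation fields, the reciprocal eigenvalues as generalized compliances, and a measured form defect is projected onto the fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small symmetric positive-definite stiffness matrix K,
# standing in for the structural stiffness of an ideal-dimension part
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)

# Basic deviation fields: eigenvectors of K; the reciprocal eigenvalues
# play the role of generalized compliances of the part
eigvals, modes = np.linalg.eigh(K)
compliances = 1.0 / eigvals

# A measured form defect decomposes into a linear combination of the
# fields; the projection coefficients recover it exactly because the
# eigenvectors of a symmetric matrix are orthonormal
defect = rng.standard_normal(6)
coeffs = modes.T @ defect
reconstructed = modes @ coeffs
```

The linear-superposition hypothesis noted in the limitations is visible here: the decomposition is exact only while the part deforms in the linear elastic range.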

Findings

This model can solve deviation propagation problems in compliant assembly while accounting for form defects. One case is used to illustrate deviation propagation in the assembly process. The results indicate that the proposed method has higher accuracy than the method of influence coefficients when the entire deviation fields of parts are considered. Moreover, the numerical results of the proposed method broadly agree with the experimental measurements.

Research limitations/implications

Owing to the hypothesis of linear superposition of basic deviation fields, the research in this paper is limited to parts with linear elastic deformation. However, the entire form defects of parts are considered rather than only the deviations of local feature points. The method may be extended to analyze the three-dimensional deviations of complex thin-walled parts.

Originality/value

A deviation propagation model that accounts for part form defects is developed to achieve more accurate predictions of assembly deviation using basic deviation fields.

Details

Assembly Automation, vol. 39 no. 1
Type: Research Article
ISSN: 0144-5154

Book part
Publication date: 16 September 2022

Pedro Brinca, Nikolay Iskrev and Francesca Loria

Abstract

Since its introduction by Chari, Kehoe, and McGrattan (2007), Business Cycle Accounting (BCA) exercises have become widespread. Much attention has been devoted to the results of such exercises and to methodological departures from the baseline methodology. Little attention has been paid to identification issues within these classes of models. In this chapter, the authors investigate whether such issues are of concern in the original methodology and in an extension proposed by Šustek (2011) called Monetary Business Cycle Accounting. The authors resort to two types of identification tests in population. One concerns strict identification, as theorized by Komunjer and Ng (2011), while the other deals with both strict and weak identification, as in Iskrev (2010). Most importantly, the authors explore the extent to which weak identification problems affect the main economic takeaways and find that the identification deficiencies are not relevant for the standard BCA model. Finally, the authors compute some statistics of interest to practitioners of the BCA methodology.
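
Iskrev's (2010) test, loosely speaking, checks whether the Jacobian of the model-implied moments with respect to the deep parameters has full column rank: rank deficiency signals non-identification, and near-deficiency signals weak identification. The toy moment mapping below is purely illustrative, not the BCA model.

```python
import numpy as np

def moments(theta):
    # Toy mapping from deep parameters to model-implied moments; a and b
    # enter only through their sum, so they cannot be identified separately
    a, b, c = theta
    return np.array([a + b, c ** 2, c * (a + b)])

def jacobian(f, theta, h=1e-6):
    # Central finite-difference Jacobian of f at theta
    theta = np.asarray(theta, dtype=float)
    m = len(f(theta))
    J = np.empty((m, len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        J[:, j] = (f(tp) - f(tm)) / (2 * h)
    return J

J = jacobian(moments, [0.5, 0.3, 1.2])
rank = np.linalg.matrix_rank(J, tol=1e-8)
# rank < len(theta) flags a local identification failure
```

In practice the test is run at the estimated parameter vector, and near-zero singular values of J (rather than an exact rank drop) indicate weak identification.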

Details

Essays in Honour of Fabio Canova
Type: Book
ISBN: 978-1-80382-636-3

Book part
Publication date: 1 July 2015

Enrique Martínez-García

Abstract

The global slack hypothesis is central to the discussion of the trade-offs that monetary policy faces in an increasingly integrated world. The workhorse New Open Economy Macro (NOEM) model of Martínez-García and Wynne (2010), which fleshes out this hypothesis, shows how expected future local inflation and global slack affect current local inflation. In this chapter, I propose the use of the orthogonalization method of Aoki (1981) and Fukuda (1993) on the workhorse NOEM model to further decompose local inflation into a global component and an inflation differential component. I find that the log-linearized rational expectations model of Martínez-García and Wynne (2010) can be solved with two separate subsystems to describe each of these two components of inflation.

I estimate the full NOEM model with Bayesian techniques using data for the United States and an aggregate of its 38 largest trading partners from 1980Q1 until 2011Q4. The Bayesian estimation recognizes the parameter uncertainty surrounding the model and calls on the data (inflation and output) to discipline the parameterization. My findings show that the strength of the international spillovers through trade – even in the absence of common shocks – is reflected in the response of global inflation and is incorporated into local inflation dynamics. Furthermore, I find that key features of the economy can have different impacts on global and local inflation – in particular, I show that the parameters that determine the import share and the price-elasticity of trade matter in explaining the inflation differential component but not the global component of inflation.
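
The Aoki (1981) orthogonalization rests on a simple symmetric transformation: for a two-bloc world, the global component of a variable is the cross-bloc average and the differential component is half the cross-bloc difference, and the mapping is invertible. A minimal numerical sketch with hypothetical inflation series:

```python
import numpy as np

# Aoki (1981)-style orthogonalization for a symmetric two-bloc model.
# Hypothetical home and foreign inflation series:
pi_home = np.array([2.0, 2.5, 3.0, 1.5])
pi_foreign = np.array([1.0, 2.0, 2.5, 2.0])

pi_world = 0.5 * (pi_home + pi_foreign)   # global component
pi_diff = 0.5 * (pi_home - pi_foreign)    # inflation differential component

# The transformation is invertible: local inflation is the global
# component plus the differential, so the two subsystems jointly
# carry the same information as the original variables.
pi_home_back = pi_world + pi_diff
pi_foreign_back = pi_world - pi_diff
```

The same change of variables applied to the model's equations is what splits the system into the two separate subsystems described above.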

Details

Monetary Policy in the Context of the Financial Crisis: New Challenges and Lessons
Type: Book
ISBN: 978-1-78441-779-6

Book part
Publication date: 21 September 2022

Michael Chin, Ferre De Graeve, Thomai Filippeli and Konstantinos Theodoridis

Abstract

Long-term interest rates of small open economies (SOEs) correlate strongly with the US long-term rate. Can central banks in those countries decouple from the United States? An estimated Dynamic Stochastic General Equilibrium (DSGE) model for the UK (vis-à-vis the USA) establishes three structural empirical results: (1) comovement arises due to nominal fluctuations, not through real rates or term premia; (2) the cause of comovement is the central bank of the SOE accommodating foreign inflation trends, rather than systematically curbing them; and (3) SOEs may find themselves much more affected by changes in US inflation trends than the United States itself. All three results are shown to be intuitive and backed by off-model evidence.

Details

Essays in Honour of Fabio Canova
Type: Book
ISBN: 978-1-80382-832-9

Book part
Publication date: 6 January 2016

Jens H. E. Christensen and Glenn D. Rudebusch

Abstract

Recent U.S. Treasury yields have been constrained to some extent by the zero lower bound (ZLB) on nominal interest rates. Therefore, we compare the performance of a standard affine Gaussian dynamic term structure model (DTSM), which ignores the ZLB, to a shadow-rate DTSM, which respects the ZLB. Near the ZLB, we find notable declines in the forecast accuracy of the standard model, while the shadow-rate model forecasts well. However, 10-year yield term premiums are broadly similar across the two models. Finally, in applying the shadow-rate model, we find no gain from estimating a slightly positive lower bound on U.S. yields.
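
The shadow-rate idea underlying such models (due to Black, 1995) can be sketched in a few lines: an unconstrained shadow rate evolves freely, while the observed short rate is the shadow rate censored at the lower bound. The AR(1) dynamics and parameter values below are hypothetical, not the paper's estimated DTSM.

```python
import numpy as np

rng = np.random.default_rng(42)

# Shadow-rate model (Black, 1995): the observed short rate is the
# shadow rate censored at the lower bound, r_t = max(s_t, r_lb)
r_lb = 0.0
T = 200
s = np.empty(T)
s[0] = 2.0
for t in range(1, T):
    # hypothetical AR(1) shadow-rate dynamics
    s[t] = 0.1 + 0.9 * s[t - 1] + 0.5 * rng.standard_normal()

r = np.maximum(s, r_lb)  # observed rates respect the ZLB
```

Near the lower bound the max() mapping is flat, which is why a Gaussian model that ignores the censoring tends to forecast poorly there, as the chapter finds.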

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Article
Publication date: 28 June 2022

Peter Wanke, Sahar Ostovan, Mohammad Reza Mozaffari, Javad Gerami and Yong Tan

Abstract

Purpose

This paper aims to present two-stage network models in the presence of stochastic ratio data.

Design/methodology/approach

Black-box, free-link and fix-link techniques are used to model the internal relations of the two-stage network. A deterministic linear programming model is derived from a stochastic two-stage network data envelopment analysis (DEA) model by assuming that some basic stochastic elements are related to the inputs, outputs and intermediate products. The linkages between the overall process and the two subprocesses are proposed. The authors obtain the relation between the efficiency scores from the stochastic two-stage network DEA-ratio model under three different strategies: black box, free-link and fix-link. The authors applied the proposed approach to 11 airlines in Iran.
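
For readers unfamiliar with DEA, the building block behind such models is the Charnes, Cooper and Rhodes (1978) efficiency score, which the paper extends to two-stage stochastic networks. A sketch of the basic input-oriented CCR envelopment LP on toy data (not the airline data set):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    # Input-oriented CCR envelopment LP (Charnes, Cooper and Rhodes, 1978):
    # minimize theta s.t. X @ lam <= theta * x_k, Y @ lam >= y_k, lam >= 0.
    # X is m x n inputs, Y is s x n outputs; columns index the n DMUs.
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                       # decision vector: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = -X[:, k]
    A_ub[:m, 1:] = X                 # X @ lam - theta * x_k <= 0
    A_ub[m:, 1:] = -Y                # -Y @ lam <= -y_k
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: three DMUs with one input and one output
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 4.0]])
scores = [ccr_efficiency(X, Y, k) for k in range(3)]
```

A score of 1 marks an efficient DMU; the two-stage network models in the paper add constraints linking the first stage's outputs to the second stage's inputs.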

Findings

In most of the scenarios, in particular when alpha takes any value between 0.1 and 0.4, the three models (Charnes, Cooper and Rhodes (1978), free-link and fix-link) generate similar efficiency scores for the decision-making units (DMUs), while a relatively higher degree of variation in efficiency scores among the DMUs is generated when alpha takes the value of 0.5. When alpha takes a value between 0.1 and 0.4, the DMUs have the same ranking in terms of their efficiency scores.

Originality/value

The authors innovatively propose a deterministic linear programming model and, to the best of the authors' knowledge, for the first time analyze the internal relationships of a two-stage network with different techniques. The comparison of the results provides insights from both the policy and the methodological perspective.

Details

Journal of Modelling in Management, vol. 18 no. 3
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 23 October 2023

Morten I. Lau, Hong Il Yoo and Hongming Zhao

Abstract

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of decision tasks that allows one to identify a full set of structural parameters characterizing risk preferences under Cumulative Prospect Theory (CPT), including loss aversion. We consider temporal stability in those structural parameters at both population and individual levels. The population-level stability pertains to whether the distribution of risk preferences across individuals in the subject population remains stable over time. The individual-level stability pertains to within-individual correlation in risk preferences over time. We embed the CPT structure in a random coefficient model that allows us to evaluate temporal stability at both levels in a coherent manner, without having to switch between different sets of models to draw inferences at a specific level.
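
The CPT structural parameters mentioned above typically enter through a value function with loss aversion and an inverse-S probability weighting function. A sketch using the Tversky and Kahneman (1992) functional forms and their original parameter estimates (the chapter's estimated values will differ):

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Tversky-Kahneman (1992) value function: concave over gains,
    # convex over losses, and steeper for losses (loss aversion lam > 1)
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

def cpt_weight(p, gamma=0.61):
    # Inverse-S probability weighting: overweights small probabilities,
    # underweights moderate-to-large ones
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A symmetric 50/50 gamble over +100/-100 has negative CPT value:
# loss aversion makes the loss branch dominate
u = cpt_weight(0.5) * cpt_value(100.0) + cpt_weight(0.5) * cpt_value(-100.0)
```

A random coefficient model, as used in the chapter, treats parameters such as alpha, lam and gamma as varying across individuals with a population distribution, which is what lets stability be assessed at both levels at once.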

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Book part
Publication date: 1 December 2016

Roman Liesenfeld, Jean-François Richard and Jan Vogler

Abstract

We propose a generic algorithm for numerically accurate likelihood evaluation of a broad class of spatial models characterized by a high-dimensional latent Gaussian process and non-Gaussian response variables. The class of models under consideration includes specifications for discrete choices, event counts and limited-dependent variables (truncation, censoring, and sample selection) among others. Our algorithm relies upon a novel implementation of efficient importance sampling (EIS) specifically designed to exploit typical sparsity of high-dimensional spatial precision (or covariance) matrices. It is numerically very accurate and computationally feasible even for very high-dimensional latent processes. Thus, maximum likelihood (ML) estimation of high-dimensional non-Gaussian spatial models, hitherto considered to be computationally prohibitive, becomes feasible. We illustrate our approach with ML estimation of a spatial probit for US presidential voting decisions and spatial count data models (Poisson and Negbin) for firm location choices.
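
The core of any importance-sampling likelihood evaluation is reweighting draws from a tractable proposal to integrate the latent process out. The toy example below evaluates a one-dimensional latent-Gaussian likelihood whose exact value is known; EIS proper would additionally fit the proposal to minimize the variance of the weights, and the spatial models above involve high-dimensional sparse precision matrices rather than a scalar latent variable.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy importance-sampling evaluation of a latent-Gaussian likelihood:
# L = integral of Phi(x) * N(x; 0, 1) dx, whose exact value is 1/2.
# A fixed Gaussian proposal q = N(1, 1) illustrates the estimator.
draws = rng.normal(loc=1.0, scale=1.0, size=100_000)
weights = norm.pdf(draws, loc=0.0, scale=1.0) / norm.pdf(draws, loc=1.0, scale=1.0)
lik_hat = np.mean(norm.cdf(draws) * weights)
```

The accuracy of the estimate is governed by the variance of the weights, which is exactly what the EIS proposal-fitting step is designed to reduce.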

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Open Access
Article
Publication date: 21 September 2021

Xiangfei Chen, David Trafimow, Tonghui Wang, Tingting Tong and Cong Wang

Abstract

Purpose

The authors derive the necessary mathematics, provide computer simulations, provide links to free and user-friendly computer programs, and analyze real data sets.

Design/methodology/approach

Cohen's d, which indexes the difference in means in standard deviation units, is the most popular effect size measure in the social sciences and economics. Not surprisingly, researchers have developed statistical procedures for estimating the sample sizes needed to have a desirable probability of rejecting the null hypothesis given assumed values for Cohen's d, or for estimating the sample sizes needed to have a desirable probability of obtaining a confidence interval of a specified width. However, for researchers interested in using the sample Cohen's d to estimate the population value, these are insufficient. Therefore, it would be useful to have a procedure for obtaining the sample sizes needed to be confident that the sample Cohen's d to be obtained is close to the population parameter the researcher wishes to estimate, an expansion of the a priori procedure (APP). The authors derive the necessary mathematics, provide computer simulations and links to free and user-friendly computer programs, and analyze real data sets to illustrate the main results.

Findings

In this paper, the authors answer the following two questions. The precision question: how close do I want my sample Cohen's d to be to the population value? The confidence question: what probability do I want of being within the specified distance?
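
The two questions map onto a simple computation: for a candidate sample size, estimate the probability that the sample d lands within the desired distance f of the population value, and increase n until that probability reaches the desired confidence c. The Monte Carlo sketch below illustrates the logic (it is not the authors' closed-form APP mathematics), with hypothetical values f = 0.2, c = 0.95 and a population d of 0.5.

```python
import numpy as np

rng = np.random.default_rng(7)

def coverage(n, pop_d=0.5, f=0.2, reps=2000):
    # Monte Carlo estimate of the probability that the sample Cohen's d
    # lands within f of the population value, with n observations per group
    hits = 0
    for _ in range(reps):
        g1 = rng.standard_normal(n) + pop_d   # group 1: mean pop_d, sd 1
        g2 = rng.standard_normal(n)           # group 2: mean 0, sd 1
        pooled_sd = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
        d_hat = (g1.mean() - g2.mean()) / pooled_sd
        hits += abs(d_hat - pop_d) < f
    return hits / reps

# Smallest n per group (searched in steps of 10) giving at least a 95%
# probability that the sample d is within 0.2 of the population d
n = 10
while coverage(n) < 0.95:
    n += 10
```

Tightening f or raising c pushes the required n up quickly, which is the trade-off the APP makes explicit.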

Originality/value

To the best of the authors' knowledge, this is the first paper to estimate Cohen's effect size using the APP method. The online computing packages make the method convenient for researchers and practitioners to use.

Details

Asian Journal of Economics and Banking, vol. 5 no. 3
Type: Research Article
ISSN: 2615-9821
