Search results

1 – 10 of over 18000
Open Access
Article
Publication date: 19 April 2024

Thi Bich Tran and Duy Khoi Nguyen


Abstract

Purpose

This study investigates the optimum size for manufacturing firms and the impact of subcontracting on firms' likelihood of achieving their optimal scale in Vietnam.

Design/methodology/approach

Using data from the enterprise censuses of 2017 and 2021, the paper first estimates the production function to identify the optimum firm size for manufacturing firms and then applies the logit model to investigate factors associated with the optimal firm size.
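The second step of this design, a logit on a firm-level "reached optimal scale" outcome, can be sketched as follows. The data, the regressor name and the coefficient values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Estimate logistic-regression coefficients by Newton-Raphson."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)   # Newton step
    return beta

# Synthetic illustration: a subcontracting dummy raises the probability
# that a firm operates at its optimal scale (coefficients are made up).
rng = np.random.default_rng(0)
n = 5000
subcontract = rng.integers(0, 2, n)           # hypothetical regressor
logit_p = -2.0 + 1.5 * subcontract            # assumed true coefficients
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)
beta = fit_logit(subcontract, y)              # recovers roughly (-2.0, 1.5)
```

On synthetic data of this size the Newton iterations converge quickly and the estimated slope lands near the assumed value.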

Findings

The study reveals that medium-sized firms exhibit the highest level of productivity. Nevertheless, a consistent trend emerges, indicating that nearly 90% of manufacturing firms in Vietnam operated below their optimal scale in both 2017 and 2021. An analysis of the impact of subcontracting on firms' likelihood of achieving their optimal scale emphasizes its crucial role, especially for foreign firms, exerting an influence nearly five times greater than that of the judiciary system.

Practical implications

The paper's findings offer crucial policy implications, suggesting that initiatives aimed at enhancing the overall productivity of the manufacturing sector should prioritise facilitating contract arrangements to encourage firms to reach their optimal size. These insights are also valuable for other countries with comparable firm size distributions.

Originality/value

This paper provides the first empirical evidence on the relationship between firm size and productivity as well as the role of subcontracting in firms' ability to reach their optimal scale in a country with a right-skewed distribution of firm sizes.

Details

Journal of Economics and Development, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1859-0020


Article
Publication date: 1 February 1971

CESAR M. SABULAO and G. ALAN HICKROD


Abstract

Economic efficiency of public school districts was explored by utilization of the concept of economies and dis‐economies of scale. An optimum size relative to costs was discovered by analyzing the data with curvilinear least squares regression and also with the differential calculus. The sample was taken from elementary, high school, and K–12 (unit) districts in the state of Illinois, U.S.A. Suggestions for further research on the general notion of optimum size of school districts are presented.
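The two analytical steps named in the abstract, curvilinear least squares followed by differential calculus, can be sketched on made-up data: fit a quadratic cost curve, then set its derivative to zero to locate the cost-minimizing district size. All numbers below are illustrative, not the Illinois data.

```python
import numpy as np

# Hypothetical per-pupil cost data with a U-shape: economies of scale
# at small enrolments, diseconomies at large ones (numbers invented).
size = np.linspace(200, 10000, 50)                    # district enrolment
true_cost = 900 - 0.08 * size + 1e-5 * size**2        # assumed true curve
rng = np.random.default_rng(1)
cost = true_cost + rng.normal(0, 10, size.shape)      # add noise

# Curvilinear least squares: fit cost = c0 + c1*size + c2*size^2
c2, c1, c0 = np.polyfit(size, cost, 2)

# Differential calculus: d(cost)/d(size) = c1 + 2*c2*size = 0
optimum = -c1 / (2 * c2)                              # ~4000 pupils here
```

With the assumed coefficients the true minimum sits at 0.08 / (2 × 1e-5) = 4000 pupils, and the fitted optimum recovers it closely.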

Details

Journal of Educational Administration, vol. 9 no. 2
Type: Research Article
ISSN: 0957-8234

Article
Publication date: 10 August 2018

Richard D. Sudduth


Abstract

Purpose

The importance of maximizing the particle packing fraction in a suspension by maximizing the average particle size ratio D̅5/D̅1 has been adequately demonstrated in the literature. This study aims to extend that analysis to include the best formulation approach to maximize the packing fraction with a minimum number of monodisperse particle sizes.

Design/methodology/approach

An existing model previously developed by this author was modified theoretically to optimize the ratio used between consecutive monodisperse particle sizes. This process was found to apply to a broad range of particle configurations and applications. In addition, five different approaches for maximizing average particle size ratio D̅5/D̅1 were addressed for blending several different particle size distributions. Maximizing average particle size ratio D̅5/D̅1 has been found to result in an optimization of the packing fraction. Several new concepts were also introduced in the process of maximizing the packing fraction for these different approaches.
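One way to picture "the ratio used between consecutive monodisperse particle sizes" is a geometric progression spanning a fixed size range. The geometric spacing rule and the numbers below are illustrative assumptions, not Sudduth's optimized ratio from the paper.

```python
def consecutive_size_ratio(d_max, d_min, n_sizes):
    """Ratio between consecutive monodisperse diameters when n_sizes
    sizes are spaced geometrically from d_max down to d_min.
    Illustrative spacing only, not the paper's optimized value."""
    return (d_max / d_min) ** (1.0 / (n_sizes - 1))

# e.g. spanning a 100:1 size range with five monodisperse sizes
r = consecutive_size_ratio(100.0, 1.0, 5)    # each size is r times the next
sizes = [100.0 / r**k for k in range(5)]     # 100, ~31.6, 10, ~3.16, 1
```

Fewer sizes over the same range force a larger consecutive ratio; the paper's contribution is choosing that ratio to maximize the packing fraction.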

Findings

The critical part of the analysis to maximize the packing fraction with a minimum number of particle sizes was the theoretical optimization of the ratio used between consecutive monodisperse particle sizes. This analysis was also found to be effectively independent of the maximum starting particle size. The study also clarifies a recent incorrect claim in the literature that Furnas in 1931 was the first to generate the maximum theoretical packing fraction possible for n different particle sizes; that result was actually developed originally in conjunction with the Sudduth generalized viscosity equation. In addition, the Furnas-generated equation was shown to give significantly different results from the Sudduth-generated equation.

Research limitations/implications

Experimental data involving monodisperse particles of different blends with a minimum number of particle sizes that are truly monodisperse are often extremely difficult to obtain. However, the theoretical general concepts can still be applicable.

Practical implications

The expanded model presented in this article provides practical guidelines for blending pigments using a minimum number of monodisperse particle sizes that can yield much higher ratios of the particle size averages D̅5/D̅1 and thus potentially achieve significantly improved properties such as viscosity.

Originality/value

The model presented in this article provides the first apparent guidelines to control the blending of pigments in coatings by the optimization of the ratio used between consecutive monodisperse particle sizes. This analysis was also found to be effectively independent of the maximum starting particle size.

Details

Pigment & Resin Technology, vol. 48 no. 1
Type: Research Article
ISSN: 0369-9420


Article
Publication date: 19 June 2009

Zesheng Sun and Xiangdong Xu


Abstract

Purpose

The purpose of this paper is to empirically study whether China could achieve strong export market power considering its highly decentralized coke production and trade.

Design/methodology/approach

By using time series data, this paper econometrically estimates coke export market power with the Hall model; then, through analysis of micro trade data and public policy, it tries to explain the co‐existing dilemma of China's highly decentralized coke production/export and its strong market power in the world market; lastly, using Stigler's survival technique, it explores the optimum size of China's coke production and export.

Findings

The paper finds that the market power of Chinese coke exports is quite strong, even though the micro market structure is highly decentralized; the expansion of China's coke export market power is mainly explained by its oligopolistic position in the world coke market and its strong industrial policy and trade policy restrictions. It is also found that the optimum size in the coke industry corresponds to a market share below 0.5 percent or between 1 and 10 percent, while other market sizes exhibit diseconomies of scale.
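Stigler's survival technique can be sketched in a few lines: size classes whose share of industry output grows over time are judged efficient, while shrinking classes are presumed to operate at a scale disadvantage. The shares below are invented to mirror the reported pattern, not the paper's data.

```python
# Hypothetical output shares by market-share size class at two dates.
shares_t0 = {"<0.5%": 0.30, "0.5-1%": 0.20, "1-10%": 0.35, ">10%": 0.15}
shares_t1 = {"<0.5%": 0.36, "0.5-1%": 0.16, "1-10%": 0.40, ">10%": 0.08}

# Survivor principle: classes that gain output share are the efficient ones.
surviving = [c for c in shares_t0 if shares_t1[c] > shares_t0[c]]
# Here the surviving classes are "<0.5%" and "1-10%".
```

The appeal of the technique is that it infers efficient scale from observed market outcomes rather than from cost accounting.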

Practical implications

Such findings provide evidence for China's policy adjustment regarding maintaining strong coke export market power, while eliminating economic distortions and negative production externality.

Originality/value

This paper highlights the co‐existing issues of micro competitive structure and nationally oligopolistic position in an industry. This study is the first try to combine market power and economies of scale, through empirical analysis and optimum size estimation, to generate implications for optimal government public policy.

Details

Journal of Chinese Economic and Foreign Trade Studies, vol. 2 no. 2
Type: Research Article
ISSN: 1754-4408


Article
Publication date: 6 April 2012

Jos L.T. Blank, Bart L. van Hulst, Patrick M. Koot and Ruud van der Aa


Abstract

Purpose

The purpose of this paper is to focus on the efficiency of Dutch secondary schools. In particular, the size of the schools' management is benchmarked.

Design/methodology/approach

The methodology used is an advanced micro‐econometric technique called stochastic frontier analysis.

Findings

The method is suitable for identifying the optimum allocation, in particular the size of management. The overall result is that there is no systematic over- or under-allocation of management in Dutch secondary schools.

Practical implications

Each school received an individual benchmark. Schools can position themselves with respect to other schools and have information on how to adjust their allocation of resources.

Originality/value

The paper contributes to the discussion about the size of management costs in Dutch secondary schools. The analysis is based on state‐of‐the‐art methodologies that had not previously been applied to the educational process.

Details

Benchmarking: An International Journal, vol. 19 no. 2
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 8 February 2018

David G. Carmichael and Nur Kamaliah Mustaffa


Abstract

Purpose

The performance of earthmoving operations, in terms of emissions, production and cost, is dependent on many variables and has been the study of a number of publications. Such publications look at typical operation design and management, without establishing what the penalties or bonuses might be for non-standard, but still observed, practices. To fill this gap in knowledge, this paper examines alternative loading policies of zero waiting-time loading, fractional loading and double-sided loading, and compares the performance of these with standard single-sided loading.

Design/methodology/approach

Original recursive relationships, that are amenable to Monte Carlo simulation, are derived. Case study data are used to illustrate the emissions, production and cost penalties or bonuses.
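The Monte Carlo treatment of a loader-and-trucks operation can be sketched with a crude single-sided loading simulation: trucks queue at one loader, and production rate rises with fleet size until the loader saturates. The time distributions and fleet sizes are illustrative assumptions, not the paper's case-study data or its recursive relationships.

```python
import random

def simulate_single_sided(n_trucks, n_cycles=2000, seed=42):
    """Crude Monte Carlo of a one-loader, n-truck earthmoving loop
    (single-sided loading). Times in minutes, distributions assumed."""
    random.seed(seed)
    loader_free = 0.0
    back = [0.0] * n_trucks          # time each truck returns to the loader
    t_end = 0.0
    for _ in range(n_cycles):
        i = min(range(n_trucks), key=lambda k: back[k])  # next truck in queue
        start = max(back[i], loader_free)                # wait if loader busy
        load = random.uniform(2.0, 4.0)                  # loading time
        haul = random.uniform(10.0, 14.0)                # haul + dump + return
        loader_free = start + load
        back[i] = start + load + haul
        t_end = back[i]
    return n_cycles / t_end                              # loads per minute

rate_small = simulate_single_sided(2)   # loader idle much of the time
rate_large = simulate_single_sided(6)   # loader saturated: ~1 load / 3 min
```

Even this sketch reproduces the qualitative trade-off the paper quantifies: adding trucks raises production but, past saturation, only adds truck waiting time (and hence idle emissions and cost).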

Findings

Double-sided loading contributes the least impact to the environment and is the most cost effective. Zero waiting-time loading performs the worst in terms of environmental impact and cost. Minimizing truck waiting times through fractional loading is generally not an attractive policy because it leads to an increase in unit emissions and unit costs. The consequences of adopting fractional loading are detailed. Optimum unit emissions and optimum unit cost are coincident with respect to fleet size for single- and double-sided loading policies. That is, minimizing unit cost, as in traditional practice, also yields the least environmental impact; not minimizing unit cost will lead to unnecessary emissions.

Practical implications

The results of this paper will be of interest to those designing and managing earthmoving operations.

Originality/value

All modeling and results presented in the paper do not exist elsewhere in the literature.

Details

Construction Innovation, vol. 18 no. 2
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 1 April 1974

A.C.N. Bailey and I.M. Gascoigne


Abstract

This article describes an investigation into the factors affecting load and vehicle sizes for road delivery of oil products. An overall framework of analysis was developed, a sampling method was used, and potential cost savings identified and achieved. The ability to construct an analytical framework of sufficient accuracy depended on a reasonably convincing costing formula for part loads, which is also described.

Details

International Journal of Physical Distribution, vol. 4 no. 5
Type: Research Article
ISSN: 0020-7527

Article
Publication date: 1 April 2005

Rajeevan Chandel, S. Sarkar and R.P. Agarwal



Abstract

Purpose

Delay and power dissipation are the two major design constraints in very large scale integration (VLSI) circuits. These arise from the millions of active devices and the interconnections connecting this gigantic number of devices on the chip. The important technique of inserting repeaters in long interconnections to reduce delay in VLSI circuits has been reported during the last two decades. This paper deals with delay, power dissipation and the role of voltage‐scaling in repeater-loaded long interconnects in VLSI circuits for a low-power environment.

Design/methodology/approach

The trade-off between delay and power dissipation in repeater-inserted long interconnects is reviewed here with a bibliographic survey. SPICE simulations have been used to validate the findings.

Findings

Inserting the optimum number of uniform-sized CMOS repeaters in long interconnects minimizes delay. Voltage‐scaling is highly effective in reducing power dissipation in repeater-loaded long interconnects. The new finding given here is that the optimum number of repeaters required for delay minimization decreases with voltage‐scaling, which leads to area and further power savings.
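For context, the classical delay-only estimate of the optimum repeater count (in the style of Bakoglu and Meindl) can be computed from the line and driver RC parameters. This is the textbook formula, not necessarily the paper's model, and the parameter values are hypothetical; note it contains no supply-voltage term, whereas the paper's finding concerns how voltage-scaling shifts the optimum.

```python
import math

def optimal_repeaters(R_int, C_int, R0, C0):
    """Classical delay-minimizing repeater count for an RC interconnect.
    R_int, C_int: total line resistance and capacitance.
    R0, C0: repeater output resistance and input capacitance."""
    return math.sqrt((0.4 * R_int * C_int) / (0.7 * R0 * C0))

# Hypothetical long line in an older-node process
k = optimal_repeaters(R_int=5e3, C_int=2e-12, R0=1e3, C0=5e-15)
n_rep = round(k)   # ~34 repeaters for these assumed values
```

The square-root form shows why long lines need many repeaters: doubling both line R and C doubles the optimum count.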

Research limitations

The bibliographic survey needs to be revised in future, taking other aspects of VLSI interconnects, viz. noise, crosstalk, etc., into account.

Originality/value

The paper is of high significance in VLSI design and low‐power high‐speed applications. It is also valuable for new researchers in this emerging field.

Details

Microelectronics International, vol. 22 no. 1
Type: Research Article
ISSN: 1356-5362


Article
Publication date: 29 February 2008

Elizabeth Bye, Karen LaBat, Ellen McKinney and Dong‐Eun Kim



Abstract

Purpose

To evaluate current apparel industry Misses grading practices in providing good fit and propose grading practices to improve fit.

Design/methodology/approach

Participants representing Misses sizes 6‐20 based on ASTM D 5585 were selected. The fit of garments from traditionally graded patterns was assessed. Garments were fit‐to‐shape on participants. Traditionally graded patterns were compared to fit‐to‐shape patterns using quantitative and qualitative visual analysis.

Findings

Current apparel industry grading practices do not provide good fit for consumers. The greatest variation between the traditionally graded patterns and the fit‐to‐shape patterns occurred between sizes 14 and 16. For size 16 and up, neck and armscye circumferences were too large and bust dart intakes were too small.

Research limitations/implications

This study was limited to a sheath dress in Misses sizes 6‐20. Future research should assess the fit of garments from traditionally graded patterns for other size ranges.

Practical implications

Multiple fit models are needed for a range of more than five sizes. The fit model should be at the middle of a sizing group that does not range more than two sizes up or down.

Originality/value

There are few studies on apparel grading that test fit of actual garments on the body. The analysis documents the real growth of the body across the size range and suggests that changes in body measurements and shape determine the fit of a garment. These findings impact future research in apparel and the practices of apparel manufacturers.

Details

International Journal of Clothing Science and Technology, vol. 20 no. 2
Type: Research Article
ISSN: 0955-6222


Article
Publication date: 7 December 2021

Ayten Yiğiter, Canan Hamurkaroğlu and Nazan Danacıoğlu


Abstract

Purpose

Acceptance sampling plans are a decision-making process based on a randomly selected sample from a lot, used when it is not possible to inspect every product for reasons such as limited time and cost or the destruction of products during inspection. For some products, the lifespan (time from first use to failure) may be an important quality characteristic. In this case, the quality adequacy of the products can be checked with an acceptance sampling plan based on a truncated life test with a censoring scheme for the lifetime of the products. In this study, group acceptance sampling plans (GASPs) based on life tests are studied under the Type-I censored scheme for the compound Weibull-exponential (CWE) distribution.

Design/methodology/approach

GASPs based on life tests under the Type-I censored scheme for the CWE distribution are developed by using both the producer's risk and the consumer's risk.
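The two-point idea can be sketched with a deliberately simplified acceptance rule: accept the lot if the total number of failures across all groups is at most c, under a plain binomial model. This stands in for, and is much simpler than, the paper's CWE-based life-test plan; the quality levels and plan parameters are invented.

```python
from math import comb

def accept_probability(p, groups, group_size, c):
    """P(lot accepted) when each item fails the truncated test with
    probability p, items are tested in `groups` groups of `group_size`,
    and the lot is accepted if total failures <= c (binomial model)."""
    n = groups * group_size
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Two-point check: high acceptance at good quality (producer's risk),
# low acceptance at bad quality (consumer's risk).
pa_good = accept_probability(0.02, groups=5, group_size=8, c=2)
pa_bad = accept_probability(0.15, groups=5, group_size=8, c=2)
```

Searching over the number of groups, group size and acceptance number for the smallest plan meeting both risk constraints is what yields the optimum plan parameters the paper reports.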

Findings

In this study, the optimum sample size, optimum number of groups and acceptance number are obtained under the Type-I censored scheme for the CWE distribution. A real data set illustration is given to show how GASPs can be used in industry applications.

Originality/value

Unlike acceptance sampling plans that consider only the producer's risk, the GASPs are constructed using a two-point approach that includes both the producer's risk and the consumer's risk for the CWE distribution.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 1
Type: Research Article
ISSN: 0265-671X

