Search results

1–10 of over 165,000 results
Article
Publication date: 12 July 2011

Stefan Ludwig and Wolfgang Mathis


Abstract

Purpose

This paper aims to present a method for the efficient reduction of networks modelling parasitic couplings in very‐large‐scale integration (VLSI) circuits.

Design/methodology/approach

The parasitic effects are modelled by large RLC networks and current sources for the digital switching currents. Based on the determined behaviour of the digital modules, an efficient description of these networks is proposed, which allows for a more efficient model reduction than standard methods.
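For orientation, the following is a minimal sketch of standard projection-based model order reduction (moment matching via a Krylov subspace) applied to a state-space description of an extracted parasitic network. It is not the authors' source-aware method; the descriptor form and all matrix names are assumptions for illustration.

```python
# Minimal sketch of standard projection-based model order reduction
# (moment matching at s = 0 via a Krylov subspace). This is NOT the
# authors' source-aware method; the descriptor form E x' = A x + B u,
# y = C x and all matrix names are illustrative assumptions.
import numpy as np

def reduce_model(E, A, B, C, r):
    """Galerkin projection of (E, A, B, C) onto an r-dimensional Krylov basis."""
    n = A.shape[0]
    V = np.zeros((n, r))
    v = np.linalg.solve(A, B[:, 0])            # first direction: A^{-1} b
    V[:, 0] = v / np.linalg.norm(v)
    for k in range(1, r):
        w = np.linalg.solve(A, E @ V[:, k - 1])
        w -= V[:, :k] @ (V[:, :k].T @ w)       # orthogonalise against the basis
        V[:, k] = w / np.linalg.norm(w)
    return V.T @ E @ V, V.T @ A @ V, V.T @ B, C @ V

# Example with a random stand-in for an extracted parasitic network
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
E = np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Er, Ar, Br, Cr = reduce_model(E, A, B, C, r=10)
print(Ar.shape)   # (10, 10): reduced system usable in a circuit simulator
```

The paper's contribution, by contrast, lies in exploiting the known behaviour of the digital switching-current sources when forming the reduced model.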

Findings

The proposed method enables a fast and efficient simulation of the parasitic effects. Additionally, an extension of the reduction method is presented for elements that incorporate a supply voltage dependence, modelling the internal currents more precisely than independent current sources.

Practical implications

The presented method can be applied to large electrical networks, used in the modelling of parasitic effects, for reducing their size. A reduced model is created which can be used in investigations with circuit simulators requiring a lowered computational effort.

Originality/value

Contrary to existing methods, the presented method includes the knowledge of the behaviour of the sources in the model to enhance the model reduction process.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 30 no. 4
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 1 June 2000

George K. Chako



Abstract

Briefly reviews previous literature by the author before presenting an original 12-step system integration protocol designed to ensure the success of companies or countries in their efforts to develop and market new products. Looks at the issues from different strategic levels, such as corporate, international, military and economic. Presents 31 case studies, ranging from the success of Japan in microchips to the failure of Xerox to sell its invention of the Alto personal computer three years before Apple; from the success of DNA and superconductor research to the success of Sunbeam in inventing and marketing food processors; and from the daring invention and production of atomic energy for survival to the successes of sewing-machine inventor Howe in co-operating on patents to compete in markets. Includes 306 questions and answers in order to qualify the concepts introduced.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 12 no. 2/3
Type: Research Article
ISSN: 1355-5855


Article
Publication date: 26 January 2022

Siang Miang Yeo, Ho Kwang Yow and Keat Hoe Yeoh


Abstract

Purpose

The semiconductor packaging industry has in recent years tightened the tolerance criteria for acceptable solder void size in semiconductor packages, owing to their high usage in automotive applications. Semiconductor packaging component makers have strengthened solder joint quality and electrical conductivity by reducing the maximum acceptable solder void size from 10-15% to 5% or below of the die size. This paper aims to reduce the solder void size to a minimum level that current industry practice cannot achieve and to introduce a new soldering process that combines vacuum reflow and pressure cure to effectively reduce solder voids.

Design/methodology/approach

This study uses empirical data collection to prove the feasibility of achieving this goal; it is an engineering approach. Each evaluation considers sufficient data (>22 units) to represent the actual performance.

Findings

The process successfully eliminated all of the hollow solder voids that current industry practice classifies as solder voids. EDX analysis showed that the compressed voids remaining in the solder are filled with solid carbon-based substances, which could originate from trapped flux residues. This is proven with empirical data at the feasibility stage.

Research limitations/implications

The study is able to produce void-less solder joints, and the method is also suitable for high-volume manufacturing; this may pave a new way for industry to resolve the solder void problem. However, the current pressure cure machine cannot apply temperatures above 200°C, which precludes testing of medium- and high-temperature solder pastes or alloys; therefore, only the low-temperature solder alloy Pb37Sn63 could be evaluated.

Originality/value

This study is original and has not been published elsewhere. It produces high-efficiency semiconductor packages in terms of the electrical path and heat dissipation, and it also improves package reliability, since the solder joint serves as the interconnect in semiconductor packaging.

Details

Soldering & Surface Mount Technology, vol. 34 no. 4
Type: Research Article
ISSN: 0954-0911


Article
Publication date: 6 September 2011

Matloub Hussain and Paul R. Drake



Abstract

Purpose

The purpose of this paper is to analyze the effect of batching on bullwhip effect in a model of multi‐echelon supply chain with information sharing.

Design/methodology/approach

The model uses the system dynamics and control theoretic concepts of variables, flows, and feedback processes and is implemented using iThink® software.

Findings

It has been seen that the relationship between batch size and demand amplification is non‐monotonic. Large batch sizes, when combined in integer multiples, can produce order rates that are close to the actual demand and produce little demand amplification, i.e. it is the size of the remainder of the quotient that is the determinant. It is further noted that the value of information sharing is greatest for smaller batch sizes, for which there is a much greater improvement in the amplification ratio.
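As an aside, the "remainder of the quotient" point can be illustrated with a toy order-batching loop. This is a stand-in sketch, not the authors' iThink system-dynamics model; the constant demand of 100 units and the batch sizes shown are arbitrary assumptions.

```python
# Toy illustration of the batching effect: with a constant demand of 100
# units per period, orders released only in integer multiples of a batch
# size Q oscillate unless Q combines cleanly into the demand.
# Stand-in sketch only, not the authors' iThink system-dynamics model.
def order_variance(Q, demand=100.0, periods=990):
    backlog, orders = 0.0, []
    for _ in range(periods):
        backlog += demand                 # quantity still to be covered
        released = int(backlog // Q) * Q  # whole batches only
        orders.append(released)
        backlog -= released               # remainder carries over
    mean = sum(orders) / len(orders)
    return sum((o - mean) ** 2 for o in orders) / len(orders)

for Q in (33, 50, 90, 100, 110, 150):
    print(f"Q={Q:>3}: order variance {order_variance(Q):8.1f}")
```

Batch sizes that divide the demand exactly leave no remainder and produce a flat order stream, while nearby batch sizes that leave a large remainder amplify the orders, which is consistent with the non-monotonic relationship reported above.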

Research limitations/implications

Batching is associated with the inventory holding and backlog cost. Therefore, future work should investigate the cost implications of order batching in multi‐echelon supply chains.

Practical implications

This is a contribution to the continuing research into the bullwhip effect, giving supply chain operations managers and designers a practical way into controlling the bullwhip produced by batching across multi‐echelon supply chains. Economies of scale usually favor large batch sizes, so reducing the batch size simply to reduce demand amplification is not a good solution.

Originality/value

Previous similar studies have used control theoretic techniques, and it has been pointed out that control theorists are unable to solve the lot sizing problem. Therefore, system dynamics simulation is applied to investigate the impact of various batch sizes on the bullwhip effect.

Details

International Journal of Physical Distribution & Logistics Management, vol. 41 no. 8
Type: Research Article
ISSN: 0960-0035


Article
Publication date: 29 April 2014

Mohammad Amin Shayegan and Saeed Aghabozorgi


Abstract

Purpose

Pattern recognition systems often have to handle the problem of large volumes of training data that include duplicate and similar training samples. This leads to large memory requirements for storing and processing the data and to high time complexity for the training algorithms. The purpose of the paper is to reduce the volume of the training part of a data set, in order to increase system speed, without any significant decrease in system accuracy.

Design/methodology/approach

A new technique for data set size reduction, using a version of the modified frequency diagram approach, is presented. In order to reduce processing time, the proposed method compares the samples of a class to other samples in the same class, instead of comparing samples from different classes. It only removes patterns that are similar to the generated class template in each class. To achieve this aim, no feature extraction operation was carried out, in order to produce a more precise assessment of the proposed data size reduction technique.
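For illustration only, here is a loose, minimal interpretation of the within-class reduction idea in Python: a per-class pixel-frequency template is built and samples that closely match it are dropped. The template construction, the similarity measure and the threshold are assumptions for the sketch, not the paper's exact modified frequency diagram procedure.

```python
# Loose sketch of within-class data set reduction: build a per-class
# pixel-frequency template and drop samples that closely match it.
# The similarity measure and threshold are illustrative assumptions,
# not the paper's exact modified frequency diagram method.
import numpy as np

def reduce_class(samples, keep_threshold=0.9):
    """samples: (n, h, w) binary images from ONE class; returns the kept subset."""
    template = samples.mean(axis=0)                   # pixel frequency diagram
    kept = []
    for s in samples:
        # normalised overlap of the sample with the class template
        sim = float((s * template).sum()) / max(float(s.sum()), 1.0)
        if sim < keep_threshold:                      # dissimilar enough: keep
            kept.append(s)
    return np.array(kept) if kept else samples[:1]    # never empty a class

# Synthetic class: near-duplicates of one base pattern plus sparse noise
rng = np.random.default_rng(0)
base = (rng.random((8, 8)) > 0.5).astype(float)
noise = (rng.random((500, 8, 8)) > 0.95).astype(float)
cls = np.clip(base + noise, 0.0, 1.0)
print(len(reduce_class(cls)), "of", len(cls), "samples kept")
```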

Findings

Experiments on Hoda, one of the largest standard handwritten numeral optical character recognition (OCR) data sets, show a 14.88 percent decrease in data set volume without a significant decrease in performance.

Practical implications

The proposed technique is effective for size reduction for all pictorial databases such as OCR data sets.

Originality/value

State-of-the-art algorithms currently used for data set size reduction usually remove samples near the class centers, or support vector (SV) samples between different classes. However, the samples near a class center carry valuable information about the class characteristics and are necessary to build a system model, and SVs are important samples for evaluating system efficiency. The proposed technique, unlike other available methods, keeps both the outlier samples and the samples close to the class centers.

Article
Publication date: 8 November 2011

Matloub Hussain and Paul R. Drake



Abstract

Purpose

The purpose of this paper is to understand the effect of batching on bullwhip effect in a model of multi‐echelon supply chain with information sharing.

Design/methodology/approach

The model uses the system dynamics and control theoretic concepts of variables, flows and feedback processes and is implemented using iThink® software.

Findings

It has been seen that the relationship between batch size and demand amplification is non‐monotonic. Large batch sizes that, when combined in integer multiples, produce order rates close to the actual demand generate little demand amplification; i.e. it is the size of the remainder of the quotient that is the determinant. It is further noted that the value of information sharing is greatest for smaller batch sizes, for which there is a much greater improvement in the amplification ratio.

Research limitations/implications

Batching is associated with the inventory holding and backlog cost. Therefore, future work should investigate the cost implications of order batching in multi‐echelon supply chains.

Practical implications

This is a contribution to the continuing research into the bullwhip effect, giving supply chain operations managers and designers a practical way into controlling the bullwhip produced by batching across multi‐echelon supply chains.

Originality/value

Previous similar studies have used control theoretic techniques, and it has been pointed out that control theorists are unable to solve the lot sizing problem. Therefore, system dynamics simulation has been applied to investigate the impact of various batch sizes on the bullwhip effect.

Details

International Journal of Physical Distribution & Logistics Management, vol. 41 no. 10
Type: Research Article
ISSN: 0960-0035


Article
Publication date: 7 January 2019

Mian Ilyas Ahmad, Peter Benner and Lihong Feng


Abstract

Purpose

The purpose of this paper is to propose an interpolation-based projection framework for model reduction of quadratic-bilinear systems. The approach constructs projection matrices from the bilinear part of the original quadratic-bilinear descriptor system and uses these matrices to project the original system.

Design/methodology/approach

The projection matrices are constructed by viewing the bilinear system as a linear parametric system, where the input associated with the bilinear part is treated as a parameter. The advantage of this approach is that the projection matrices can be constructed reliably by using an a posteriori error bound for linear parametric systems. The error bound allows good interpolation points and parameter samples to be selected for the construction of the projection matrices within a greedy-type framework.
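For context, quadratic-bilinear descriptor systems in this literature are commonly written in the form below and reduced blockwise by projection bases V and W; the notation is an assumed illustration rather than the authors' exact formulation, in which the bases are built from the bilinear part viewed as a linear parametric system with the corresponding input treated as a parameter p.

```latex
% Generic quadratic-bilinear descriptor system (assumed notation):
\begin{aligned}
E\dot{x}(t) &= A\,x(t) + H\,\bigl(x(t)\otimes x(t)\bigr) + N\,x(t)\,u(t) + B\,u(t),\\
y(t)        &= C\,x(t).
\end{aligned}

% Petrov-Galerkin projection with bases V, W:
\hat{E}=W^{\top}EV,\quad \hat{A}=W^{\top}AV,\quad \hat{H}=W^{\top}H\,(V\otimes V),\quad
\hat{N}=W^{\top}NV,\quad \hat{B}=W^{\top}B,\quad \hat{C}=CV.

% Linear parametric surrogate used to build the bases (input of the bilinear
% term treated as a parameter p, as described above):
E\dot{x}(t) = \bigl(A + p\,N\bigr)x(t) + B\,u(t).
```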

Findings

The results are compared with the standard quadratic-bilinear projection methods and it is observed that the approximations through the proposed method are comparable to the standard method but at a lower computational cost (offline time).

Originality/value

In addition to the proposed model order reduction framework, the authors extend the one-sided moment matching parametric model order reduction (PMOR) method to a two-sided method that doubles the number of moments matched in the PMOR method.

Article
Publication date: 4 January 2008

Ruth V. Sabariego and Patrick Dular


Abstract

Purpose

The aim of the present paper is to compare the performances of a finite‐element perturbation technique applied either to the h‐ conform magnetodynamic formulation or to its b‐ conform counterpart in the frame of nondestructive eddy‐current testing problems.

Design/methodology/approach

In both complementary perturbation techniques, the computation is split into a computation without the defect (the unperturbed problem) and a computation of the field distortion due to its presence (the perturbation problem). The unperturbed problem is conventionally solved in the complete domain. The source of the perturbation problem is then determined by projecting the unperturbed solution into a relatively small region surrounding the defect. The discretisation of this reduced domain is chosen independently of the dimensions of the excitation probe and the specimen under study and is thus well adapted to the size of the defect.
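Schematically, the splitting described above can be written as follows; the symbols are an assumed generic notation, not the authors' exact formulation.

```latex
% Field splitting for the perturbation technique (assumed generic notation):
h = h_u + h_p \quad\text{(h-conform)}, \qquad
b = b_u + b_p \quad\text{(b-conform)}.

% h_u (or b_u): unperturbed solution of the defect-free problem in the
% complete domain; h_p (or b_p): perturbation solved only in the small
% reduced domain surrounding the defect, driven by the projection of the
% unperturbed solution onto that domain.
```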

Findings

The accuracy of the perturbation model is evidenced by comparing the results of the two counterpart formulations to those obtained in the conventional way for different dimensions of the reduced domain. The size of the reduced domain increases with the size of the defect at hand. The proposed sub‐domain approach considerably eases the meshing process and speeds up the computation for different probe positions.

Originality/value

At a discrete level, the impedance change due to the defect is efficiently and accurately computed by integrating only over the defect itself and a layer of elements in the reduced domain that touches its boundary. Therefore, no integration of any flux variation in the coils is required.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 1
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 10 November 2014

Hamish D. Anderson and Yuan Peng


Abstract

Purpose

The purpose of this paper is to examine the impact on stock liquidity following the reduction of minimum tick size from $0.01 to $0.005 for a selection of dual-listed and property stocks on the New Zealand Exchange (NZX) during 2011.

Design/methodology/approach

Various liquidity measures were examined six months either side of the change in minimum tick size for the eligible stocks and these were compared to a sample of stocks matched on similar liquidity characteristics. Liquidity measures examined in the paper include quoted and effective spread, volume, depth and binding-constraint probability.
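As a point of reference, two of the listed measures can be computed from quote and trade data roughly as follows; the relative (midpoint-scaled) forms, the parameter names and the signing convention are illustrative assumptions rather than the paper's exact definitions.

```python
# Rough sketch of two standard liquidity measures mentioned above; the
# midpoint-scaled forms and the signing convention are illustrative
# assumptions, not the paper's exact definitions.

def quoted_spread(bid: float, ask: float) -> float:
    """Relative quoted spread: (ask - bid) / quote midpoint."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid

def effective_spread(price: float, bid: float, ask: float, buyer_initiated: bool) -> float:
    """Relative effective spread: 2 * d * (price - midpoint) / midpoint,
    where d = +1 for buyer-initiated and -1 for seller-initiated trades."""
    mid = (bid + ask) / 2.0
    d = 1.0 if buyer_initiated else -1.0
    return 2.0 * d * (price - mid) / mid

# Example: a buy at $1.008 against a $1.00 bid / $1.01 ask quote
print(round(quoted_spread(1.00, 1.01), 5))                  # 0.00995
print(round(effective_spread(1.008, 1.00, 1.01, True), 5))  # 0.00597
```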

Findings

After controlling for firms matched on similar pre-period liquidity characteristics, both spread and depth decline significantly. Evidence was also found that small firms experience significant declines in trading activity, and while firms with a higher binding-constraint probability have greater declines in spread, their decline in depth is greater still.

Research limitations/implications

The small sample of 17 stocks eligible for the $0.005 minimum tick size potentially impacts on the strength of the statistical analysis. As such, it is harder to detect statistically significant changes in liquidity.

Practical implications

These findings have important implications for policymakers as the hoped for benefits of smaller tick increments may only be fully realized by larger more active stocks.

Originality/value

The paper examines the impact of a change in minimum tick size on eligible New Zealand Exchange (NZX) stocks to determine whether it met the stated NZX goal of boosting liquidity.

Details

Pacific Accounting Review, vol. 26 no. 3
Type: Research Article
ISSN: 0114-0582


Article
Publication date: 20 November 2020

S. Madhu and M. Balasubramanian


Abstract

Purpose

The purpose of this study is to address several issues in production, including the processing of complex-shaped profiles, the machining of high-strength materials, and the achievement of a good surface finish with high-level precision and minimal waste. Among the various advanced machining processes, abrasive jet machining (AJM) is one of the non-traditional machining techniques used for applications such as polishing, deburring and hole making. Hence, an overview of the investigations done on carbon fiber-reinforced polymer (CFRP) and glass fiber-reinforced polymer (GFRP) composites becomes important.

Design/methodology/approach

Various approaches to AJM and the effects of process parameters on glass fiber and carbon fiber polymeric composites are presented. Kerf characteristics, surface roughness and various nozzle designs are also discussed.

Findings

It was observed that abrasive jet pressure, stand-off distance, traverse rate, abrasive size, nozzle diameter and angle of attack are the significant process parameters affecting machining time, material removal rate, top kerf, bottom kerf and kerf angle. When the particle size is at its maximum, the increased kinetic energy of the particles improves the penetration depth on the CFRP surface. As the abrasive jet pressure is increased, the cutting process proceeds without severe jet deflection, which in turn minimizes the waviness pattern and results in a decrease in surface roughness.

Research limitations/implications

The review is limited to glass fiber and carbon fiber polymeric composites.

Practical implications

The use of composites has gained wide acceptance in many applications; hence, the need to study the machining of composites has also grown.

Social implications

The use of composites reduces the use of very costly, high-density materials and also lowers the material cost.

Originality/value

This paper is a comprehensive review of machining composites with an abrasive jet. Unlike many studies that have focused broadly on general AJM of various materials, it covers in detail the machining of GFRP and CFRP composites only, with various nozzle designs.

Details

World Journal of Engineering, vol. 18 no. 2
Type: Research Article
ISSN: 1708-5284

