Technology breakthroughs – bending the design rules

Circuit World

ISSN: 0305-6120

Article publication date: 15 May 2009


Citation

Starkey, P. (2009), "Technology breakthroughs – bending the design rules", Circuit World, Vol. 35 No. 2. https://doi.org/10.1108/cw.2009.21735bac.002

Publisher: Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited


Technology breakthroughs – bending the design rules

Article Type: Conferences and exhibitions. From: Circuit World, Volume 35, Issue 2

A dull and drizzly November morning outdoors in Coventry, UK, but bright lights and a hum of activity in the exhibition suite of the imposing Ricoh Arena, as TechnologyWorld08 brought together 160 companies from overseas with 130 from the UK to talk business via focused “speed-meetings”. Organised by UK Trade and Investment, the UK Government’s International Business Development Organisation, TechnologyWorld08 provided a platform for companies to identify and build new business relationships in the areas of communications, industrial electronics, enterprise software, and technologies for retail and logistics.

A major attraction was the two-day seminar programme managed by SMART Group on behalf of the Electronics Knowledge Transfer Network, entitled “Technology breakthroughs – bending the design rules”.

Moderated by SMART Group Technical Director Bob Willis, the seminar included presentations from Joe Fjelstad of Verdant Electronics, Thomas Ahrens of Fraunhofer Institute, Craig Hillman of DfR Solutions, David Pedder of TWI and Markys Cain of NPL.

Joe Fjelstad captured the attention and imagination of the audience with his lateral-thinking appraisal of the benefits of solderless assembly. The electronics industry was continuously driven to change and evolve by the desire to meet customer needs and control cost. Changes mandated by legislation, for example the Restriction of Hazardous Substances (RoHS) directive, raised many questions, issues and concerns; it had been claimed that RoHS had already cost the electronics industry more than $38 billion. Reliability experts had reported that solder joint interconnections were increasingly becoming the limiting factor in overall product reliability. Improved approaches to electronics interconnection design and manufacture were therefore required to meet future requirements for cost, performance and reliability.

Introducing the Occam process concept, Fjelstad emphasised that although the approach might be novel, the technologies employed were well established. In essence, the process sequence was to position and bond various tested components on a temporary substrate or permanent carrier, encapsulate the components in place, remove the substrate, expose the component terminations and then interconnect them by an additive or semi-additive PCB fabrication method.

Obvious benefits of the process included low material use and near zero waste, a reduction in total manufacturing steps, no exposure to high temperatures, no substrate warping concerns, no expensive finishes, no shelf-life solderability problems, no post-cleaning issues, and reduced testing. Furthermore, RoHS restricted-material concerns were bypassed. A cost-comparison model indicated a potential 33 per cent saving over conventional surface-mount technology. From a design point of view, Occam enabled simplification of layout, tighter component placement, improved routing and layer reduction. It eliminated the risks associated with handling bare dice and offered the scope for a “Lego-brick” approach to electronics assembly. And co-design together with co-manufacture presented significant opportunities for compressing time-to-product.

Fjelstad made no wild claims that Occam was the universal solution, although the concept was becoming accepted and the paradigm shift was gaining momentum: “People are still going to be soldering electronics long after I have returned to my elemental form”. But change happens, and evolution favours species that adapt to change.

“Bending the design rules is not about bending the rules of physics” was how Thomas Ahrens introduced his presentation on the current and future challenges of heat dissipation in design, which focused first on structures for getting rid of heat and then on thermal simulation techniques. Of the functions of packaging – energy transport, signal distribution, heat transfer and protection against environmental impact – the dissipation of lost power was a fundamental requirement, and temperature was a major cause of failure. Consideration of thermal management served to evaluate and ensure the quality and reliability of the product. The thermal characteristics of components in real situations did not necessarily follow what might be predicted from data sheet values, which had generally been measured under ideal conditions. And there was a need not only to cool electronics, but also to increase the thermal endurance of packaging materials.

Ahrens reviewed the mechanisms of heat transfer – radiation, conduction, natural convection and forced convection – and showed several examples of technical solutions for heat dissipation. Efficient thermal coupling between component and heat sink was essential – anything is better than air! He explained the mathematics of heat conductivity in solids and heat transfer to fluids, and the definitions of thermal resistance, heat conduction resistance and heat transfer resistance. Heat transfer coefficients could be calculated using computational fluid dynamics, or determined by measurement. Modelling approaches could give good agreement with physical measurement provided that the model correctly described the most important criteria: geometry, materials and processes. He emphasised that microelectronics assemblies are complex combinations of materials and composites, and that understanding the behaviour of the composite system – thermal and mechanical as well as electrical – was the key issue.
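
As a minimal sketch of the thermal-resistance arithmetic Ahrens outlined, the following assumes a simple series chain from junction to ambient; the function, the component values and the resistance figures are illustrative placeholders, not taken from the presentation:

```python
# Steady-state junction temperature via a series thermal-resistance chain:
# T_j = T_ambient + P * (theta_jc + theta_cs + theta_sa).
# All numbers below are illustrative, not data from the talk.

def junction_temperature(p_dissipated_w: float,
                         t_ambient_c: float,
                         theta_jc: float,   # junction-to-case resistance, K/W
                         theta_cs: float,   # case-to-heatsink interface, K/W
                         theta_sa: float) -> float:  # heatsink-to-ambient, K/W
    """Return the junction temperature in degrees C for a steady power flow."""
    return t_ambient_c + p_dissipated_w * (theta_jc + theta_cs + theta_sa)

# Example: a 10 W device at 40 C ambient. The interface term (theta_cs) shows
# why "anything is better than air" - a dry air gap can add several K/W.
print(junction_temperature(10.0, 40.0, theta_jc=1.5, theta_cs=0.5, theta_sa=3.0))
# -> 90.0 C; with a poor interface (theta_cs = 4 K/W) it rises to 125.0 C.
```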

David Pedder discussed the architectural options and benefits of system-in-package electronics integration, with reference to a current TWI project being carried out on behalf of the Electronics Knowledge Transfer Network, aimed at providing a guide for UK engineers wishing to embark on SiP design. The guide would introduce the range of SiP technologies in current application and review their benefits, provide examples of SiP applications, set out the available design routes and design tools for SiP, and outline future trends in SiP technology and applications.

System-in-package was defined as a functional system or subsystem assembled into a single package containing two or more dissimilar dice, typically combined with other components such as passives (resistors, capacitors and inductors), passive networks (filters, baluns, antennas) and/or mechanical and optical components (micro-electro-mechanical systems (MEMS), micro-opto-electro-mechanical systems and photonic devices), mounted, embedded and/or integrated together on a substrate or package base to create a customised, highly integrated product for a given application. From a hierarchical perspective, Pedder described SiP architectures from active devices, passive components, and mechanical and optical devices through to packaging platforms. To aid the decision-making process, he listed key attributes of the different platform options: lead-frame, low-temperature co-fired ceramic, laminate or thin-film. Compared with system-on-chip, characterised by low cost in volume but high non-recoverable costs and some compromises in performance, SiP offered flexibility and faster time to market with low non-recoverable costs, whilst allowing higher performance in terms of signal propagation, power dissipation, noise and electromagnetic compatibility, together with high added value and good IPR protection.

The Coventry weather was brighter for day 2, which began with Craig Hillman’s analysis of the common mistakes made by designers when endeavouring to design for reliability. The foundation of any reliable product was robust design, which provided margin, mitigated the risk from defects and satisfied the customer. Hardware design was generally controlled by two separate groups – electrical designers who took responsibility for component selection and bill-of-materials, and mechanical designers who took responsibility for PCB layout and other aspects of packaging – and many mistakes resulted simply from these groups not communicating effectively with each other. There was often a poor understanding of supplier limitations, and customer expectations of reliability, lifetime and use environment were not correctly incorporated into the new product development process.

Avoiding hardware mistakes was becoming increasingly difficult because of the growing complexity of electronic circuits, rising power requirements, and the introduction of new component and material technologies and of less robust components. Consequently, there were multiple potential drivers for failure. Most hardware mistakes tended to occur in parts selection, particularly in component rating and de-rating, and in mechanical layout, and many problems with long-term reliability could be traced back to these areas. Component rating was the specification provided by the component manufacturer to guide the user as to the range of stresses over which the component was guaranteed to function. De-rating was the practice of limiting stress on electronic parts to levels below the manufacturer’s specified ratings, with the objectives of maintaining critical parameters during operation, providing a margin of safety against deviant lots and achieving the desired operating life. De-rating could be done by reference to guidelines from governmental organisations, original equipment manufacturers or component manufacturers, but could only be properly assessed by component stress analysis. Failure to perform component stress analysis could result in enormous eventual costs. The introduction of lead-free soldering and other changes in technology had added further challenges to the part-selection process.
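
To illustrate the de-rating practice Hillman described, here is a minimal sketch of a de-rating check; the 50 per cent factor and the function names are hypothetical, since real factors come from the governmental, OEM or manufacturer guidelines mentioned above and depend on part type and stress:

```python
# Simple de-rating check: compare applied stress against a de-rated limit.
# The factors used here are generic placeholders for illustration only.

def derated_limit(rated_value: float, derating_factor: float) -> float:
    """Maximum allowed stress after applying a de-rating factor (0..1)."""
    return rated_value * derating_factor

def check_part(applied: float, rated: float, factor: float) -> bool:
    """True if the applied stress stays within the de-rated limit."""
    return applied <= derated_limit(rated, factor)

# A 50 V capacitor de-rated to 50 per cent may carry at most 25 V:
print(check_part(applied=24.0, rated=50.0, factor=0.5))   # True  - acceptable
print(check_part(applied=30.0, rated=50.0, factor=0.5))   # False - flag for review
```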

Design-for-manufacture was often overlooked in the design process because the design team had insufficient understanding of best practices or of the limitations of the supply chain, or did not request, or receive, effective design for manufacturability (DfM) feedback. Hillman showed a number of examples and case studies where DfM had been successful in resolving reliability issues. Component wear-out was becoming a real problem as integrated circuit (IC) feature sizes became progressively smaller, and he demonstrated how physics-of-failure techniques were used in reliability modelling. He urged designers to maximize their knowledge of the design as early in the product-development process as possible, and to practice design for excellence, design for manufacturability, design for sourcing and design for reliability.

Joe Fjelstad returned to the platform to give an amazingly comprehensive overview of flexible circuit technology, and current and advanced applications, within his allotted timeframe of 45 minutes. He admitted at the start that the topic was so enormous that his presentation would provide an experience analogous to taking a drink from a fire hose! But he generously offered delegates the opportunity to download their own personal copy of the latest edition of his book.

Beginning with a review of the history of development of flexible circuits, he described basic flex structures for static, dynamic and even stretchable applications, the drivers for individual applications, and the many market sectors where the benefits of flex circuits had been realised. After reviewing materials and properties, he went on to explain how flexible circuit technology could be implemented once the end product, operating environment and reliability requirements were known. He demonstrated how package configuration, mechanical requirements and electrical requirements should be defined, then addressed a whole list of topics including basic design principles, tolerancing, mechanical issues of bending and forming, connection strategies, coverlay processing, finishes and profile cutting. He left his audience in no doubt that flexible circuits are varied and versatile, that new concepts for manufacture and use are being developed almost daily and that the designer must understand many electrical and mechanical issues. His parting invitation was “Enjoy the ride!”

The final speaker was Markys Cain, and his intriguing subject was the state of the art of vibrational energy harvesting – in effect, scavenging wasted energy from the surroundings. The objective was not to “save the planet”, or even to save money: the amounts of energy were very small, and the devices were likely to cost far more than the value of the energy saved. Rather, technologies were being developed to capture and store the latent energy associated with vibrating structures, bearings or machines and turn it into useful sources of electricity to power devices such as autonomous wireless sensors and transducers. He showed examples of inertial devices harvesting energy from the body, the self-powered wristwatch being a familiar example, and of direct-force devices such as piezo-electric generators in shoes. Vibrational energy harvesters could be based on electromagnetic or piezo-electric mechanisms, and piezo-electric energy scavengers offered very good performance, especially in microsystem applications. MEMS-based piezo-electric power generators were in development. Many technical challenges were associated with the realisation of commercial products, but the piezo-electric-powered tyre pressure sensor was an illustration of a proven application. Although expectations needed to be kept in realistic proportion, the development of successful energy harvesting technology could significantly reduce our dependence on batteries and other local power sources.
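
As a rough illustration of the orders of magnitude involved, the sketch below uses the widely cited Williams–Yates estimate of the power available from a resonant inertial harvester; the formula comes from the general energy-harvesting literature rather than from Cain’s talk, and all numbers are hypothetical:

```python
import math

# Resonant power estimate for an inertial vibration harvester, using the
# widely cited Williams-Yates result P = m * a^2 / (4 * omega_n * zeta),
# where m is the proof mass, a the base-acceleration amplitude, omega_n the
# natural (resonant) angular frequency and zeta the total damping ratio.
# The numbers are illustrative only, not figures from the presentation.

def resonant_power_w(mass_kg: float, accel_ms2: float,
                     freq_hz: float, zeta: float) -> float:
    """Maximum average power (W) extractable at resonance."""
    omega_n = 2.0 * math.pi * freq_hz
    return mass_kg * accel_ms2 ** 2 / (4.0 * omega_n * zeta)

# A 1 g proof mass on machinery vibrating at 120 Hz, 2.5 m/s^2, zeta = 0.02:
print(f"{resonant_power_w(1e-3, 2.5, 120.0, 0.02) * 1e6:.0f} microwatts")
# -> about 104 microwatts: enough for a duty-cycled wireless sensor node.
```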

Peter Starkey, Editorial Advisory Board Member, Circuit World.
