Emerald Group Publishing Limited
Copyright © 1998, MCB UP Limited
A wish list for industrial machine vision
The author: Don Braggins is the Guest Associate Editor of this issue and is based at Machine Vision Systems Consultancy, Royston, Herts, UK. Tel: +44 (0) 1763 260333; Fax: +44 (0) 1763 261961; e-mail: email@example.com
Keywords Inspection, Machine vision
When asked to set out what it was that still needed to be developed in order to make machine vision more widely used in manufacturing industry, my thoughts immediately turned to a presentation made by Valerie Bolhouse of the Ford Motor Company at the annual Business Conference of the Automated Imaging Association in Orlando, Florida. She has been involved in machine vision for over 20 years and came up with a "wish list" based on her recent review of the take-up of vision within Ford's North American operations.
She points out that Ford manufactures both electronics for automotive applications, and the vehicles themselves. The electronics sector is the model of success for machine vision. The equipment which assembles components onto boards simply cannot function without vision. The operator does not have to know vision terms and concepts such as pixels, grey-scale, thresholds and the like, but merely has to set up parameters in the physical domain specific to the product.
By contrast, the "mechanical" manufacturing side at Ford appears to have been most reluctant to accept vision, and in this part of Ford she found many instances where vision systems had been switched off or were not being used properly.
Her report contained several sad stories about vision systems turned off (for instance, one which detected grease on needle bearings and so rejected them unnecessarily) or reduced to acting as closed circuit TV, as in the case of one which was supposed to read serial numbers on engine blocks automatically. In one plant alone she identified three quarters of a million dollars' worth of vision systems switched off, with the added cost of labour introduced to carry out what should have been automated tasks.
Her conclusions are that:
The application solution needs to be trained, not programmed.
The user (meaning the person applying the system to a particular product or component) should not be required to think in terms of pixels, grey-scale, f/stops and all the other jargon beloved of the imaging specialist.
Vision algorithms should not be feature based, so that small changes in product do not require re-programming.
She suggests that the vision computer should guide the person doing the application work through the set-up procedure, providing a self-assessment of performance and the quality of decisions, and making suggestions on where to set parameters and on whether the image is acceptable for decision-making. Although she claimed not to know how to meet these demands, she identified artificial intelligence, expert systems, and the use of 3D sensors rather than reflected images as some of the possible routes to achieving the desired result.
Much of what she is seeking is already available, though not necessarily in a single package or cost effectively. Products have been on the market for several years which have certainly gone some way to meeting the need for very simple set-up procedures, possibly more from UK and European suppliers than North American ones.
One of the consequences of the general trend towards increasing processing power at reduced cost is that it becomes cost effective to apply vision systems to "single" applications which are easy to understand and develop. A few years back we saw robot-mounted cameras carrying out assembly checking operations by moving one camera all round the assembled product. Now we can afford a stationary camera checking each individual assembly operation, ideally before any more value is added to a potentially incorrect product; such single-task systems are also much simpler to set up.
Low cost systems which do indeed guide the user through the set-up operations, whether based on conventional algorithms, neural networks, or synergetic computing, are all well established commercially. One supplier even offers a "no questions asked" money back guarantee if users cannot develop their own applications satisfactorily within a month of acquiring the system; returns have been few and far between.
The technique of normalised correlation, once too computing-intensive to apply without very specialised hardware, can now be implemented on "ordinary" systems. In addition to being reasonably robust against small changes in product appearance, it can be made to give an assessment of confidence of "fit" between the ideal and what is observed.
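To make the idea concrete, here is a minimal sketch of normalised correlation between a template and an equally sized image patch; the function name and the arrays are illustrative, not from any particular vision product. The score in [-1, 1] serves directly as the "confidence of fit" mentioned above, and the normalisation is what makes the measure insensitive to uniform changes in brightness or gain.

```python
import numpy as np

def normalised_correlation(template, patch):
    """Normalised correlation coefficient between a template and an
    equally sized image patch, in [-1, 1]; near 1 means a confident
    match."""
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    if denom == 0:
        return 0.0  # a flat region carries no pattern information
    return float((t * p).sum() / denom)

# A patch that differs from the template only by a linear change in
# illumination still scores a perfect 1.0 -- the robustness noted above.
template = np.array([[10, 20], [30, 40]])
brighter = template * 2 + 5  # gain and offset change
print(normalised_correlation(template, brighter))  # → 1.0
```

In practice the template would be swept across the whole image and the peak score reported, but the per-patch computation is exactly this.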
Her suggestion about assessment of image suitability is well worth pursuing. The human visual system is so adaptable that it will put up with images that are far from ideal without realising that there is anything wrong. Some auto-focus systems look for "best" image quality and could easily be adapted to report on what they find. Every photo manipulation package these days comes with a facility to judge (and manipulate) contrast; vision systems could easily incorporate such a facility, though electronic manipulation of grey levels is not a substitute for good optical contrast as the input to a vision system.
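As a sketch of how a vision system might report on image suitability in the way suggested, the hypothetical function below combines a crude contrast measure (spread of grey levels) with a crude focus measure (variance of a simple Laplacian, which drops sharply for blurred images). The thresholds are purely illustrative; any real system would need values tuned to its optics and application.

```python
import numpy as np

def assess_image(img, min_contrast=50.0, min_focus=20.0):
    """Crude self-assessment of a grey-scale image (2-D array, 0-255).
    Thresholds are illustrative, not recommended values."""
    img = img.astype(float)
    # Contrast: spread between the 5th and 95th percentile grey levels.
    contrast = np.percentile(img, 95) - np.percentile(img, 5)
    # Focus: variance of a simple 4-neighbour Laplacian; blur scores low.
    lap = (img[1:-1, 2:] + img[1:-1, :-2] +
           img[2:, 1:-1] + img[:-2, 1:-1] - 4.0 * img[1:-1, 1:-1])
    focus = lap.var()
    return {
        "contrast": contrast,
        "focus": focus,
        "acceptable": contrast >= min_contrast and focus >= min_focus,
    }

# A featureless image fails both checks and would prompt the operator
# to improve lighting or focus before training begins.
print(assess_image(np.full((8, 8), 128.0))["acceptable"])
```

This is only a starting point, and, as noted above, stretching grey levels in software is no substitute for good optical contrast at the input.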
Perhaps the most important development she suggests is in the 3D field. Engineers are used to dealing with drawings showing dimensions and if these could simply be transferred to the vision system's inspection criteria the whole question of how a 2D image represents a 3D scene ceases to be a problem. Several years ago an Italian company did develop a system which had a limited understanding of perspective, and could indeed be programmed in real dimensions, but it relied on conventional 2D images as its input. There has been an upsurge in development of true 3D image capture systems recently, partly stimulated by the demands of virtual reality, and this could be the basis for an "engineering friendly" vision system.