Research in imaging

Kybernetes

ISSN: 0368-492X

Article publication date: 1 April 2001


Citation

Rudall, B.H. (2001), "Research in imaging", Kybernetes, Vol. 30 No. 3. https://doi.org/10.1108/k.2001.06730caa.006

Publisher: Emerald Group Publishing Limited

Copyright © 2001, MCB UP Limited


Research in imaging


1. Computer vision systems

Professor David Murray of the Department of Engineering Science at the University of Oxford, UK, has been exploring the use of zoom in automatic tracking. He explains that:

Most of us will have watched TV broadcasts where skilled camera work enhances the information flow to the viewer. There seems every reason to think that a similar automatic capability would be of benefit to a computer vision system.

The movements of a camera and lens used to follow the action in, for example, a televised sporting event are typified by rotational and translational motions of the camera, and zooming of the lens. By comparison with the considerable attention lavished on automated motion tracking, automatic zoom control is a rather unexplored area.

The department conducts this research through its Active Vision Group. To an extent, we are told, its work aims to:

Mimic human operator control, where camera zooming is initiated both purposefully and reactively. In the former case, some higher level process indicates that it would be valuable either to zoom in to collect more object detail, perhaps to help recognition, or to zoom out to obtain surrounding contextual detail. In the reactive case, the zoom setting is adjusted to preserve the image size of the target as it moves away from or towards the camera.
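The reactive case described above can be sketched under a simple pinhole camera model, in which image size scales as focal length times object size over depth. All names and numbers below are illustrative and are not taken from the Oxford system:

```python
# Sketch of reactive zoom control under a pinhole camera model.
# Illustrative only; not the Active Vision Group's implementation.

def image_size(focal_length_mm: float, object_size_m: float, depth_m: float) -> float:
    """Pinhole projection: image size is proportional to f * X / Z."""
    return focal_length_mm * object_size_m / depth_m

def reactive_zoom(focal_length_mm: float, old_depth_m: float, new_depth_m: float) -> float:
    """Scale the focal length in proportion to depth so the target's
    image size stays constant as it moves toward or away from the camera."""
    return focal_length_mm * (new_depth_m / old_depth_m)

# A 2 m target at 10 m, viewed through a 50 mm lens:
s0 = image_size(50.0, 2.0, 10.0)   # 10 mm on the image plane

# The target retreats to 20 m; zooming in to 100 mm compensates:
f1 = reactive_zoom(50.0, 10.0, 20.0)
s1 = image_size(f1, 2.0, 20.0)     # still 10 mm
```

The design choice here mirrors the quoted description: the zoom setting is driven purely by the estimated target depth, preserving apparent size without any higher-level goal.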

Methods have been developed in the laboratory for automatic calibration, for zoom-invariant tracking, for reactive zoom control and for Euclidean 3D scene reconstruction from multiple zooming cameras. With these in place, the researchers now aim to demonstrate the techniques in real-time surveillance applications.

Further information about this research project can be found at: www.robots.ox.ac.uk/ActiveVision/Research/ZAP/, with more general information at: www.robots.ox.ac.uk/ActiveVision/

2. Echocardiography image processing

A report on the development of 3D echocardiographic image processing methods by Dr Alison Noble of Oxford University, UK (November 2000), in Research File (Impact, No. 28, 2000), outlines the methods and the background to the research initiative. She writes that:

For many years cardiologists have used 2D ultrasound imaging (echocardiography) to view the wall-motion of the left ventricle as a way to assess the healthiness of heart function. For instance, an ischaemic heart appears "sluggish", with the ischaemic regions of the heart muscle exhibiting "abnormal" motion. However, over the past five to ten years we have seen a notable improvement in medical imaging technology, and a more prolific use of information technology in medicine. As a result we can now image the heart in 3D, and cardiologists are beginning to demand automated quantification (numbers) rather than visualisations of the heart to help guide diagnosis and monitor treatment, and to remove some of the subjectivity from clinical assessment.

This project concerns the development of 3D echocardiographic image processing methods, particularly those that focus on sparse-view (or few-view) reconstruction methods. The main difference from research on 2D methods, we are told, is that 3D echocardiography is a relatively new imaging method that has yet to establish its role in cardiac diagnosis and treatment planning and monitoring. It is emphasised that one of the key objectives of the research is to establish the foundations for clinical quantitative 3D echocardiography.

At the Medical Vision Laboratory at Oxford University, Dr Noble leads a research team that has been focussing on the development of a fully automated method for quantitative 3D echocardiographic image processing. A small clinical study on patients commenced in November 2000 at the Oxford John Radcliffe Hospital to compare the 3D echocardiographic analysis methods being developed with nuclear medicine-based methods of clinical assessment. The research report tells us that:

3D echocardiographic analysis stretches current machine vision techniques to a limit. Ultrasound images are very noisy compared to conventional visual images, meaning new approaches to image feature extraction (segmentation) are required, and the non-rigid deformation of a heart offers challenges in terms of tracking and motion interpretation. The Oxford group has developed some novel solutions to these problems. A key factor limiting the acceptance of 3D echocardiography in routine clinical practice is the relatively poor image quality compared with 2D echocardiography. Two-dimensional-array transducer technology, which provides "real-time" 3D echocardiography, is being developed as a solution to this problem by a number of the leading ultrasound physics groups around the world and should be available commercially in approximately three years' time.
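To illustrate why ultrasound noise defeats naive segmentation, a toy model is useful: ultrasound speckle is commonly modelled as multiplicative noise, which is one reason simple intensity thresholding fails and more robust feature-extraction schemes are needed. The sketch below is purely illustrative and is not the Oxford group's method:

```python
# Toy illustration of multiplicative speckle noise on a 1-D intensity
# profile, and of a simple median filter as a robust smoother.
# Illustrative only; not the Medical Vision Laboratory's approach.
import numpy as np

rng = np.random.default_rng(0)

# A clean "edge" profile: dark blood pool next to a bright ventricle wall.
clean = np.concatenate([np.full(100, 20.0), np.full(100, 200.0)])

# Multiplicative speckle: each sample scaled by gamma-distributed noise
# with mean 1.0 (shape * scale = 4.0 * 0.25).
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.size)
noisy = clean * speckle

def median_filter(signal: np.ndarray, width: int = 9) -> np.ndarray:
    """Sliding-window median: suppresses speckle while keeping the edge."""
    pad = width // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([np.median(padded[i:i + width]) for i in range(signal.size)])

smoothed = median_filter(noisy)
```

Because the noise scales with intensity, the bright wall is corrupted far more than the dark pool in absolute terms, so a fixed intensity threshold cannot separate the two regions reliably; this is the kind of behaviour that motivates the new segmentation approaches the report mentions.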

The researchers will, we are told, be working on the clinical validation of their methods with cardiologists at the Oxford John Radcliffe Hospital over the next two to three years. It is believed that as real-time 3D echocardiography becomes commercially available they will be in a position to move their methodology into routine clinical use.

More information is available about this and other projects on: www.robots.ox.ac.uk/~mv1

3. Developing a new 3D imaging sensor

At Heriot-Watt University, Scotland, UK, members of the Departments of Physics and of Computing and Electrical Engineering, in conjunction with Edinburgh Instruments and British Aerospace, have developed a new 3D imaging sensor based on time-correlated single photon counting (TCSPC). The researchers say that:

Three-dimensional imaging is becoming very important for a wide range of applications. In the automotive industry it is used to create CAD models from clay prototypes, providing vital input to assess the performance and style of the final production model. In the aerospace industry, accurate 3D data is required to check whether the aircraft meets tight specifications, and to act as input to packages which simulate aerodynamic performance. And 3D images are captured to allow avatars to roam through virtual environments in the entertainment and leisure industries.

The Heriot-Watt 3D imaging sensor has been used to create images from objects a few centimetres to several metres in working volume. It is claimed to have achieved accuracies of the order of 15µm on single-point measurements. It is reported that the sensor uses a "time of flight" technique; because it responds to single photons returned directly from the target surface, it has very accurate time, and hence distance, resolution. It is also explained that because it works at low levels of returned signal it is very sensitive, and it has been used to create 3D images of distant, poorly reflecting and even transparent objects. In addition to the innovative hardware design, the research team has also developed a new approach to modelling and interpreting the received signal data.
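The "time of flight" principle behind such a sensor is straightforward to sketch: a photon travels to the target and back, so range is half the speed of light times the round-trip time, and depth resolution is set by the timing resolution of the photon counter. The figures below are illustrative and are not the Heriot-Watt hardware's actual parameters:

```python
# Sketch of the time-of-flight principle behind TCSPC ranging.
# Illustrative values only; not the Heriot-Watt sensor's parameters.
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """A photon travels out and back, so range = c * t / 2."""
    return C * t_seconds / 2.0

def depth_resolution(timing_jitter_seconds: float) -> float:
    """Depth resolution implied by the counter's timing resolution."""
    return C * timing_jitter_seconds / 2.0

# A 10 m stand-off corresponds to a ~66.7 ns round trip:
t = 2 * 10.0 / C
d = distance_from_round_trip(t)   # 10.0 m

# Picosecond-scale timing, typical of single-photon counting, maps to
# roughly 0.15 mm of depth per picosecond of jitter; averaging over many
# detected photons refines this further.
dz = depth_resolution(1e-12)
```

This is why single-photon sensitivity matters: each detected photon carries a precise timestamp, so accuracy comes from timing statistics rather than from a strong returned signal.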

Moves are under way to find routes to industrial exploitation of the technique, with new work continuing in a number of areas.

The report of the work in progress was authored by Professor Andrew Wallace of the Computing and Electrical Engineering Department of Heriot-Watt University, who can be contacted at andy@cee.hw.ac.uk. More information is also available at: www.cee.hw.ac.uk/Research/cvip_index.html
