Greater than the sum of its parts?

Sensor Review

ISSN: 0260-2288

Article publication date: 1 December 2002

Citation

Monkman, G. (2002), "Greater than the sum of its parts?", Sensor Review, Vol. 22 No. 4. https://doi.org/10.1108/sr.2002.08722daa.002

Publisher: Emerald Group Publishing Limited

Copyright © 2002, MCB UP Limited


Greater than the sum of its parts?

Gareth Monkman

Keywords: Sensors, Fusion

The term data fusion, or more specifically sensor fusion, is normally considered an aspect of intelligent sensing coupled with modern computer-integrated measurement systems. However, its philosophical origins reach back to the beginning of the last century. Sensor fusion rests on the premise that we can achieve a state of enriched perception by collating data from as many sensory sources as possible. So why do many complicated multi-sensory systems fail, or work only with the assistance of considerable computing overhead? Perhaps the answer lies in the organisation.

We are often told that "the whole is always greater than the sum of its component parts", a statement which sounds so profound that it is seldom contradicted. However, the Russian philosopher Alexander Bogdanov (1873-1928) developed a general science of organisation, which he named Tektology and first published in 1912. In Tektology, "the whole is practically greater than the sum of its parts" only when it is organised. Bogdanov uses the example of two men clearing a field of stones. To be sure, two men organising themselves to work together will clear the field in less than half the time one man would need to complete the job alone. However, if they quarrel, sit and talk, get drunk together or otherwise conduct themselves in a disorganised manner, the job will take much longer, if it is completed at all!

Many of the first attempts at sensor fusion ran directly into this sort of problem. Sensor-driven control depends on reliable sensors, and when sensor outputs are less than 100 per cent reliable, sensor conflicts must be resolved. With a small number of sensors, a stochastic voting system can be implemented. At first sight one could be excused for thinking that the larger the number of sensors, the more likely we are to reach a consensus. However, such systems often tend to become polydesmic (possessing no unique solution), making decision certainty even more elusive. Vision-based systems employing high levels of redundancy are good examples – instead of identifying a single feature within an image, one is confronted with several plausible images. Moreover, most automation systems are highly dynamic, and sensor-transition-driven control demands signal stability in addition to reliability. Given a simple look-up table, every possible combination of sensor outputs could be catered for and decisions made accordingly. That is fine for purely Markovian systems, but what happens when dependency on past events also plays a role? A history stack is implementable only for small systems; otherwise one is quickly confronted with combinatorial explosion, as the sketch below illustrates.
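To make the voting and look-up-table arguments concrete, the short Python sketch below (an illustration added here, not part of the original editorial) simulates majority voting over a handful of unreliable binary sensors and then counts how quickly a naive sensor-history table grows. The sensor counts, the 80 per cent reliability figure and the history depths are arbitrary assumptions chosen only for the example.

```python
import random

# Minimal sketch (illustrative only): majority voting over N unreliable
# binary sensors, and the combinatorial growth of a naive history table.

def majority_vote(readings):
    """Return the consensus value of a list of binary sensor readings."""
    return int(sum(readings) > len(readings) / 2)

def simulate(true_value=1, n_sensors=5, reliability=0.8, trials=10_000):
    """Estimate how often a simple majority vote recovers the true value."""
    correct = 0
    for _ in range(trials):
        readings = [true_value if random.random() < reliability
                    else 1 - true_value
                    for _ in range(n_sensors)]
        if majority_vote(readings) == true_value:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    for n in (3, 5, 9):                      # odd counts avoid tied votes
        print(f"{n} sensors at 80% reliability: "
              f"consensus correct {simulate(n_sensors=n):.3f} of the time")

    # A naive look-up table over n binary sensors has 2**n entries; adding a
    # history of the last k time steps raises this to 2**(n*k) -- the
    # combinatorial explosion mentioned above.
    for n, k in ((8, 1), (8, 4), (16, 4)):
        print(f"{n} sensors, history depth {k}: {2**(n * k):,} table entries")
```

With a few reasonably reliable sensors the vote converges quickly, but the table sizes show why enumerating every combination of present and past readings soon becomes impractical.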

Dynamic stochastics with running averages, Kalman filtering, fuzzy logic and neural network systems all add to the computational overhead. That is not to say that such methods do not work; on the contrary, much success has been demonstrated with these numerical nutcrackers. Perhaps Isambard Kingdom Brunel's famous adage "good engineering is simple engineering" no longer rings true? I wonder.
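As an illustration of one such numerical nutcracker (again a sketch added here, not the author's own example), the following one-dimensional Kalman filter fuses a stream of noisy readings of a nominally constant quantity. The noise and process variances and the initial conditions are assumed values for demonstration only.

```python
# Minimal sketch (illustrative only): a one-dimensional Kalman filter fusing
# noisy readings of a slowly varying scalar quantity.

def kalman_1d(measurements, meas_variance=4.0, process_variance=1e-4,
              initial_estimate=0.0, initial_variance=100.0):
    """Return successive fused estimates from a stream of noisy readings."""
    x, p = initial_estimate, initial_variance
    estimates = []
    for z in measurements:
        # Predict: the quantity drifts slowly, so only the uncertainty grows.
        p += process_variance
        # Update: blend prediction and measurement by their relative certainty.
        k = p / (p + meas_variance)          # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    import random
    random.seed(0)
    true_value = 10.0
    noisy = [true_value + random.gauss(0.0, 2.0) for _ in range(20)]
    fused = kalman_1d(noisy)
    print(f"last raw reading : {noisy[-1]:6.2f}")
    print(f"last fused value : {fused[-1]:6.2f}  (true value {true_value})")
```

Even this simplest of filters carries state and arithmetic that a bare threshold comparison does not, which is the computational overhead the paragraph above refers to.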
