Quantitative Descriptive Analysis (QDA) - utilising the human instrument

Nutrition & Food Science

ISSN: 0034-6659

Article publication date: 1 December 1999


Citation

Armstrong, G.A. (1999), "Quantitative Descriptive Analysis (QDA) - utilising the human instrument", Nutrition & Food Science, Vol. 99 No. 6. https://doi.org/10.1108/nfs.1999.01799faf.001

Publisher: Emerald Group Publishing Limited

Copyright © 1999, MCB UP Limited



Introduction

As yet no instrument can duplicate the sensory and psychological responses of a human being (Szczesniak, 1987). Although instrumental methods have progressed from simple physical or chemical tests to sophisticated instrumental procedures, data obtained by such procedures must still be validated against sensory data collected by the human instrument (Noble, 1975). One category of sensory techniques that can provide analytical and reliable information on sensory perception is descriptive analysis, which is believed to contain the most sophisticated sensory methodology (Mancini, 1992; Stone and Sidel, 1993a). The objective of descriptive analysis is to provide quantitative descriptions of products based on the perceptions of a group of trained panellists. Over the last 40 years many descriptive techniques have been developed and a number of standardised methods have emerged (Gacula, 1997). Quantitative Descriptive Analysis (QDA) (Stone et al., 1974; Stone and Sidel, 1993b) is, however, the most commonly used technique (Watson, 1992), because it is product-specific, measures all perceived sensory properties, is accurate and reproducible, and may be adapted and applied in a wide range of experimental situations (Zook and Pearce, 1988). Data obtained from QDA have been related to physical and chemical analyses, product formulations, preferences and other consumer measures, concepts, pricing and so forth (Stone and Sidel, 1998).

Technique

The QDA technique was developed in response to the need for analytical and statistical analysis of qualitative profile data. Hence, the technique relies on statistical techniques to determine appropriate descriptors, procedures and panellists to be used for the analysis of a specific product.

QDA panellists undergo a rigorous screening, training and selection process to form a final panel of between five and 15 panellists (BSI, 1986; 1993; Stone and Sidel, 1993b) for each specific food or beverage product (Mancini, 1992). QDA involves individual development and panel agreement in generating a list of descriptors, establishing the order of descriptor occurrence and measuring the relative intensity of each descriptor (Einstein, 1991). The technique uses a continuous line scale, 15cm in length, for each descriptor; a line scale is considered to reduce the scaling bias which can occur with discrete number scales (Meilgaard et al., 1991). Word anchors are located approximately 1.5cm from each end of the scale, and intensity always increases from left to right (Stone and Sidel, 1993b). Panellists evaluate the intensity of each descriptor by placing a vertical mark at the point which best reflects their perception of the relative intensity of that attribute. To yield a numerical value, the distance along the scale to the mark is measured.
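By way of illustration, the conversion of a panellist's mark into a numeric score can be sketched in a few lines of Python. This is a minimal sketch only: the function name and the validation rule are assumptions, and the score is simply the measured distance in centimetres.

```python
# A minimal sketch of scoring a QDA line-scale mark. The 15cm scale length
# comes from the text; the function name and range check are illustrative.

SCALE_LENGTH_CM = 15.0   # continuous line scale used for each descriptor
ANCHOR_OFFSET_CM = 1.5   # word anchors sit approximately 1.5cm from each end

def score_from_mark(mark_cm: float) -> float:
    """Return the distance (cm) from the left end of the scale to the
    panellist's vertical mark; intensity increases from left to right."""
    if not 0.0 <= mark_cm <= SCALE_LENGTH_CM:
        raise ValueError("mark must lie on the 15cm scale")
    return mark_cm

print(score_from_mark(9.3))   # a mark 9.3cm along the scale scores 9.3
```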

Statistical analysis of data

Data obtained by QDA may be analysed and/or presented by a variety of means (Stone and Sidel, 1993a). The results from QDA may be presented graphically as a "spider's web", with lines radiating from the centre representing an individual descriptor and the distance from the centre point to a plotted point representing the measured intensity of that descriptor (Stone and Sidel, 1998). Plotted points are joined together to provide a "spider's web"/product profile (Zook and Wessman, 1977). The graphic representation of QDA data has been reported to be one of the unique features of QDA (compared to other descriptive techniques) and has proved to be a popular and easy-to-understand format for the presentation of data (Zook and Pearce, 1988; Spooner, 1996).
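A "spider's web" profile of this kind corresponds to a polar (radar) plot; a minimal Python sketch follows. The descriptor names and mean intensities are invented purely for the example.

```python
# A minimal sketch of a QDA "spider's web" profile as a polar (radar) plot.
# Descriptors and intensities are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

descriptors = ["sweet", "sour", "bitter", "crisp", "juicy", "apple aroma"]
intensities = [8.2, 3.1, 1.4, 10.5, 9.0, 7.3]   # mean panel scores (cm)

# One spoke per descriptor; repeat the first point to close the web.
angles = np.linspace(0, 2 * np.pi, len(descriptors), endpoint=False).tolist()
values = intensities + intensities[:1]
angles += angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(descriptors)
ax.set_ylim(0, 15)   # full 15cm scale from the centre outwards
plt.show()
```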

QDA data may also be statistically analysed to determine whether products are significantly different and to what degree the differences exist. The choice of statistical technique is determined by the experimental design of the QDA study. A Student's t test can be used when two products are being tested (O'Mahony, 1986). Analysis of variance (ANOVA) should be used if more than two products are being tested (Zook and Pearce, 1988; Stone and Sidel, 1993b), as conducting numerous t tests inflates the chance of obtaining a significant result purely by chance (O'Mahony, 1986). A repeated measures ANOVA is most suitable where the data are drawn from a number of replications of the experiment, and is more sensitive to product differences than the alternative analyses (Stone and Sidel, 1993b). The direction of any difference may be identified using multiple-comparison tests, of which there are a large number to choose from, e.g. Scheffé, Tukey HSD, Newman-Keuls, Duncan's multiple-range, Fisher's LSD, Dunn and Dunnett (O'Mahony, 1986).
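The decision rule above can be illustrated with a short Python sketch using scipy and statsmodels. The score arrays are invented; in practice there would be one score per panellist per replicate for each descriptor.

```python
# A minimal sketch of the analyses described above; all scores are invented.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

a = np.array([7.1, 8.0, 6.5, 7.7, 8.2])   # product A, one descriptor
b = np.array([5.2, 6.1, 5.8, 6.4, 5.5])   # product B
c = np.array([7.0, 7.4, 6.9, 8.1, 7.6])   # product C

# Two products: Student's t test.
print(stats.ttest_ind(a, b))

# More than two products: one-way ANOVA, then Tukey's HSD (one of the
# multiple-comparison tests named above) to locate the differences.
print(stats.f_oneway(a, b, c))
scores = np.concatenate([a, b, c])
products = ["A"] * 5 + ["B"] * 5 + ["C"] * 5
print(pairwise_tukeyhsd(scores, products))
```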

For the more experienced statistician, data from QDA can also be interpreted using multivariate analyses such as multivariate analysis of variance (MANOVA) (O'Mahony, 1986), principal component analysis (PCA), factor analysis (Chatfield and Collins, 1980) and generalised Procrustes analysis (GPA) (Gower, 1975).
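As a brief illustration of the multivariate route, the sketch below applies PCA to a products-by-descriptors matrix of mean scores; the matrix is invented for the example.

```python
# A minimal sketch of PCA on a QDA mean-score matrix (rows: products,
# columns: descriptors); the data are invented.
import numpy as np
from sklearn.decomposition import PCA

X = np.array([
    [8.2, 3.1, 1.4, 10.5],
    [5.4, 6.0, 2.2,  7.1],
    [9.1, 2.5, 1.1, 11.0],
    [4.8, 6.8, 3.0,  6.2],
])

pca = PCA(n_components=2)
positions = pca.fit_transform(X)        # product positions in PC space
print(positions)
print(pca.explained_variance_ratio_)    # variance explained per component
```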

Screening subjects for QDA

In an attempt to assess subject suitability, a product attitude survey (PAS) can be completed. The PAS produces demographic and background information and identifies individuals who dislike the product, or who exhibit atypical behaviour (liking or disliking all products in the extreme), and who therefore cannot discriminate accurately or reliably to the level required (Stone and Sidel, 1993b).

Further assessment of subject suitability is conducted by screening subjects, particularly in relation to their discriminatory abilities. Screening ensures that subjects are available, interested, healthy, sensitive to product differences, analytical, and are either a user or a potential user of the product class to be evaluated (Stone and Sidel, 1993b). Many techniques are available for screening subjects for descriptive panels (ASTM, 1981; Meilgaard et al., 1991; Stone and Sidel, 1993b).

Within the screening process, subjects are required to participate in a series of discrimination tests to assess their sensitivity to product differences. Suitable discrimination techniques, such as the duo-trio test or the triangle test (Meilgaard et al., 1991), provide objective results on the ability of subjects to perceive product differences (Zook and Wessman, 1977). Products used in the screening procedure should be product-category-specific, covering the sensory sensations to be described in the final QDA (Zook and Pearce, 1988).

To achieve adequate screening, the number of discrimination tests required varies between 18 and 30 different product tests, or nine to 15 tests carried out in duplicate (Stone and Sidel, 1995; Zook and Pearce, 1988). Respondents who respond correctly in more than 65 per cent of these screening discrimination tests would be suitable for inclusion on a QDA panel (ASTM, 1981; Zook and Pearce, 1988; Meilgaard et al., 1991; Stone and Sidel, 1993b). Of the subjects who volunteer, approximately 30 per cent have been reported to fail the screening process (BSI, 1993; Stone and Sidel, 1995). For this reason, at least 1.5 times the number of subjects required for the final panel need to be recruited initially. For experienced subjects, who have tested the relevant product category before, past performance in actual testing becomes the primary criterion (BSI, 1993; Stone and Sidel, 1993b).
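The pass criterion can be illustrated with a short sketch. The counts are invented; the 65 per cent mark comes from the text, while the comparison against the one-in-three chance level of a triangle test is standard practice rather than part of the sources cited.

```python
# A minimal sketch of the screening pass criterion; counts are invented.
# Requires SciPy >= 1.7 for binomtest.
from scipy.stats import binomtest

n_tests = 24       # triangle tests presented during screening
n_correct = 17     # odd samples correctly identified

proportion = n_correct / n_tests
print(f"correct: {proportion:.0%}, passes 65% criterion: {proportion > 0.65}")

# Supplementary check that performance exceeds the 1/3 chance level.
result = binomtest(n_correct, n_tests, p=1 / 3, alternative="greater")
print(f"p-value against chance: {result.pvalue:.4f}")
```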

The use of additional tests and interviews has been suggested to determine the ability of subjects to detect and describe product characteristics (Meilgaard et al., 1991). However, due to the additional time that these procedures require, any additional information on subjects is usually obtained through indirect observation and informal interaction with panellists during the screening process (Stone and Sidel, 1993b).

Training subjects for QDA

QDA training reduces variability in panel response (Stone and Sidel, 1995) by ensuring that a panel can produce valid and reliable results and so function as an analytical instrument (ASTM, 1981; Lawless and Claassen, 1993). The training process is designed to develop a panellist's ability to recognise and identify sensory descriptors. Training will also improve sensitivity and memory, ensuring that precise, consistent and standardised sensory measurements can be produced (ASTM, 1981). A QDA training course essentially covers product orientation, developing descriptors, grouping descriptors by modality, reaching agreement on descriptors, defining descriptors and familiarising panellists with test procedures (Stone and Sidel, 1993b).

Product orientation

The first stage in QDA training is product orientation, where individual panellists evaluate a typical product and try to generate descriptors which describe the total product. Subsequent sessions focus on the development of descriptors specific to organoleptic categories, such as the flavour, texture, aroma and appearance of the product. The QDA methodology remains one of the few methods that does not use standardised terminologies and that excludes the panel leader from direct participation in the language development (Stone and Sidel, 1993a). The development of descriptors is a necessary stage in summarising what the panellists perceive, as opposed to teaching panellists what they should perceive, i.e. "behaviour modification" (Stone and Sidel, 1993b).

Development of a consensus language

The second stage in QDA training is the development of a consensus language, where all descriptors are listed on one score sheet, duplicate descriptors are eliminated and decisions on descriptor order, definitions and the total number of descriptors are agreed. Meilgaard et al. (1991) describe this stage as the elimination of overlapping descriptors and their rearrangement into a working list, in which descriptors are comprehensive, yet discrete. In practice, the development and selection of descriptors can be difficult, as agreed descriptors should provide an accurate and precise description of all required characteristics; enable differentiation among products; be understood and agreed by all panellists; and be easily defined (Lawless, 1991; Piggott, 1991).

The use of reference samples, such as the raw material from which the product is produced, has been reported to promote the generation of descriptors (Lawless, 1991; Rainey, 1986). Such reference materials can help highlight a particular sensation that is not easily detected or described, such as an aroma or volatile flavour. They can also be used to document descriptors and to establish intensity ranges, which helps to resolve descriptors which cause disagreement. The ideal reference sample is simple, reproducible, identifies only one attribute, can be diluted without changing character and does not introduce sensory fatigue during the training process (Rainey, 1986). In QDA, using reference samples to promote descriptor understanding is permissible, as opposed to training panellists to give identical scores for a reference sample, i.e. "behaviour modification" (Stone and Sidel, 1993a).

Various techniques are used to ensure that panellists understand each descriptor and the associated sensory perception it is describing. One such technique is the quadrant rating technique (Zook and Pearce, 1988), in which panellists are asked if they perceive products to be in the first, second, third or fourth quarter of the line scale. The technique also encourages panel participation.
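The quadrant judgement itself is trivially mechanised; a one-function sketch follows, in which the function name and the 15cm default are assumptions for illustration.

```python
# A minimal sketch of mapping a mark on the 15cm line to its quarter (1-4).
def quadrant(mark_cm: float, scale_cm: float = 15.0) -> int:
    """Return 1-4 for the quarter of the scale containing the mark."""
    return min(int(mark_cm // (scale_cm / 4)) + 1, 4)

print(quadrant(9.3))   # 9.3cm falls in the third quarter -> 3
```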

Introduction of scaling to panellists

The third stage in QDA training is an introduction to using the QDA scale and developed descriptors. This stage ensures that panellists are competent in using the horizontal line scale to best reflect the relative intensity of descriptors and that the developed language covers all perceived attributes in test products. A range of products, such as a standard product and fully defined product variants produced either in-house or purchased commercially, may be used to validate the developed score sheet (Wolfe, 1979; Zook and Pearce, 1988).

Practice QDA scoring

The fourth and final stage of training, sometimes known as the replicated pilot stage or practice scoring session, involves panellists evaluating products similar to the test product. Examination of the practice scores obtained during this stage allows evaluation of individual panellist performance, in terms of consistency, discrimination and reliability of response (Armstrong et al., 1997).

Defining the end point in QDA training

The duration of a QDA training course cannot be predetermined, but depends on the complexity of the product and the objective of the final testing (Meilgaard et al., 1991). Previous workers have successfully produced trained panels from inexperienced QDA panellists within five to ten hours of training (Zook and Wessman, 1977; Zook and Pearce, 1988; Lawless and Claassen, 1993; Stone and Sidel, 1993b; 1998). Although Meilgaard et al. (1991) recommend a significantly longer training period (40-120 hours), they note that this could be substantially reduced for storage or quality control studies. For experienced panellists who are testing an entirely new product category, a maximum of five to seven hours of training has been reported (Stone and Sidel, 1993a).

In practice, the duration of many QDA training courses is determined by the time taken by panel members to reach a consensus on a suitable range and number of descriptors (Meilgaard et al., 1991; Wolters and Allchurch, 1994). Some authors (Stone and Sidel, 1993a; Wolters and Allchurch, 1994), however, believe that perfect agreement among panel members is not achievable (and should not be expected), given the variety of internal and external influences that affect panellist sensitivity. Taking this into consideration, along with the need to prevent panellist boredom and maximise retention of information from session to session, a number of studies have reported concentrated, product-specific training courses (Rutledge, 1992). In the final analysis, the duration of training is not in itself a measure of whether sufficient training has been carried out; that can only be established by evaluating panellist and descriptor performance during practice scoring sessions.

The role of panellist and descriptor evaluation

Scientific instruments which are selected for their ability to provide accurate and consistent measurement require regular calibration (ASTM, 1981; Meilgaard et al., 1991). QDA panels are no exception and descriptive panel evaluation has grown in importance as accreditation is increasingly demanded within consumer product industries and by customers (Lea et al., 1995). QDA panel evaluation is particularly important when working with trained panels of limited size, as each response has a significant effect on accuracy (Stone and Sidel, 1993a).

Panel evaluation seeks to identify individual panellists' ability to discriminate, to respond consistently and to agree with other members of the panel (Noronha et al., 1995). The evaluation process enables effective panellist selection, by identifying panellists who need more training or who should be removed from the panel (Stone et al., 1974; Stone and Sidel, 1993b; Powers, 1988a; BSI, 1993). Evaluating panel performance immediately after training may also indicate whether the developed descriptors are being used in a discriminatory, consistent and similar manner by panellists (Powers, 1988a). To ensure that the criteria by which panellists were initially selected continue to be met, the performance of panels must be evaluated on a regular basis (BSI, 1993; Stone and Sidel, 1993b). Consistent evaluation techniques, together with performance records which are easily accessible and understood by panellists, enable proficient monitoring on a continual basis. The documentation can also be used to provide feedback to panellists and to encourage improvement in performance (Malek et al., 1986; Stone and Sidel, 1993b).

Evaluation of panellist ability to discriminate

It is essential that a QDA panellist can detect subtle differences between products. This ability must therefore be established during initial screening and training, and reassessed and reconfirmed regularly thereafter, to ensure that test results remain valid (Stone and Sidel, 1993a).

The degree to which a panellist discriminates among samples can be calculated using one- or two-way Analysis of Variance (ANOVA) applied to each panellist's replicate scores for each descriptor (Lyon, 1980, 1987; Malek et al., 1986; Powers, 1988a; Wolters and Allchurch, 1994; Lea et al., 1995). Previous workers have successfully used a probability value of < 0.50 (Stone et al., 1974; Lyon, 1980) to ensure discriminatory ability within panels. Lower probability values, e.g. < 0.1, < 0.001, indicate greater discriminatory abilities (Stone and Sidel, 1993b; Lea et al., 1995).
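The per-panellist check can be sketched as follows; the replicate scores are invented, and the p < 0.50 criterion is the one cited above.

```python
# A minimal sketch of the one-way ANOVA check on one panellist's replicate
# scores for a single descriptor; scores are invented.
from scipy import stats

product_1 = [7.2, 7.8, 6.9, 7.5]   # four replicates per product
product_2 = [5.1, 5.6, 4.8, 5.3]
product_3 = [7.0, 6.5, 7.3, 6.8]

f_stat, p_value = stats.f_oneway(product_1, product_2, product_3)
discriminates = p_value < 0.50      # the p < 0.50 criterion cited above
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, discriminating: {discriminates}")
```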

Statistical analysis of the responses from the group in relation to specific descriptors can identify whether the descriptors are being used in a discriminatory manner. The elimination of a descriptor solely on the basis that it is not being used to discriminate effectively, however, may not always be justified. For example, a descriptor may effectively describe a sensory perception, but fail as a discriminator if the perception it represents exists in all the products at approximately the same intensity (Palmer, 1974).

Evaluation of panellist ability to respond consistently

If the technique of QDA is to be viewed as reliable, the QDA panellist must possess the ability to respond consistently to similar products on repeated occasions (Savocan, 1984; Stone and Sidel, 1993b). This measure of reproducibility must therefore be monitored and this may be achieved by calculation of the standard deviation (SD) of panellists' scores from replicate tests (Stone and Sidel, 1993b). ANOVA can also be used to evaluate consistency in scoring, through evaluation of probability or mean square error (MSE) values (Stone and Sidel, 1993b; Lea et al., 1995).
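Both consistency measures are straightforward to compute; a sketch follows with invented scores. The MSE here is the pooled within-product variance from a one-way ANOVA.

```python
# A minimal sketch of the consistency checks described above; data invented.
import numpy as np

# SD of one panellist's replicate scores for the same product.
replicates = np.array([6.8, 7.1, 6.5, 7.0])
print(f"replicate SD: {replicates.std(ddof=1):.2f}")   # lower = more consistent

# MSE: pooled within-product variance across two products.
groups = [np.array([7.2, 7.8, 6.9]), np.array([5.1, 5.6, 4.8])]
within_ss = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) - 1 for g in groups)
print(f"MSE: {within_ss / df_within:.3f}")
```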

Evaluation of panellist ability to agree with the panel

Individual panellists' ability to agree with other members of the panel in the definition and use of agreed descriptors contributes to QDA accuracy. This is the most difficult skill for a panellist to acquire and tends not to be used as the sole factor for excluding poorly performing panellists from the QDA panel (Zook and Pearce, 1988). Scores which do not reflect agreement may arise from product variability and/or poorly controlled preparation and handling practices, i.e. real differences between the test samples, rather than from an error in assessment.

Several types of univariate analysis can be used to evaluate the level of agreement between individual panellists and to identify attributes which are causing confusion to the panel. The simplest method is to compare each panellist's mean and SD for each attribute against the panel mean and SD. This method determines how well the individual means agree with the panel as a whole (Meilgaard et al., 1991) and may also be used to identify attributes which are not fully understood by all panellists (Naes and Solheim, 1991). Such simple statistical techniques can be useful within feedback sessions as they are easily understood, but producing and comparing them for large data sets may prove time-consuming.
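A sketch of this comparison follows. The scores and the one-SD flagging threshold are invented for illustration; in practice the panel leader would set the criterion.

```python
# A minimal sketch of comparing each panellist's mean for one attribute
# against the panel mean and SD; scores and threshold are invented.
import numpy as np

scores = np.array([
    [7.1, 7.4, 6.9],   # panellist 1 (three replicates)
    [6.8, 7.0, 7.3],   # panellist 2
    [2.1, 2.6, 2.3],   # panellist 3 -- out of line with the panel
])

panel_mean, panel_sd = scores.mean(), scores.std(ddof=1)
for i, row in enumerate(scores, start=1):
    flag = "check" if abs(row.mean() - panel_mean) > panel_sd else "ok"
    print(f"panellist {i}: mean {row.mean():.2f} vs panel {panel_mean:.2f} [{flag}]")
```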

When evaluating the level of intra-panel agreement, ANOVA is frequently used to determine whether significant differences are due to artefacts in the data or to real differences among the panellists (Lyon, 1980; Powers, 1988a; Sinesio et al., 1990).

The use of correlation coefficients is often recommended in the evaluation of panel agreement, as they provide numeric data, as opposed to relying on a subjective decision (Cross et al., 1978; Malek et al., 1986; McDaniel et al., 1987).
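One simple form of this is sketched below: each panellist's scores across products are correlated with the mean of the remaining panellists. The scores are invented, and the panel-minus-one mean is one common choice rather than a prescription from the sources cited.

```python
# A minimal sketch of correlation-based agreement checks; scores invented.
import numpy as np

# Rows: panellists; columns: products (one descriptor).
scores = np.array([
    [7.1, 5.2, 8.0, 3.4],
    [6.8, 4.9, 8.3, 3.1],
    [3.0, 7.8, 2.5, 6.9],   # scores in the opposite direction to the panel
])

for i in range(scores.shape[0]):
    others = np.delete(scores, i, axis=0).mean(axis=0)
    r = np.corrcoef(scores[i], others)[0, 1]
    print(f"panellist {i + 1}: r = {r:+.2f}")
```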

There are also several multivariate statistical techniques which can be used to evaluate panel agreement, including canonical, cluster, correspondence, discriminant and factor analyses, multidimensional scaling, principal component analysis and various regression procedures (Powers, 1988b). GPA (Gower, 1975) is a frequently used multivariate technique which graphically illustrates the level of panel agreement (Arnold and Williams, 1986; Oreskovich et al., 1991; Sivertsen and Risvik, 1994; Hunter and Muir, 1995). The main advantage of multivariate analysis is that it can provide a composite value, rather than several values to which the panel leader has to apply subjective weightings and judgements. Results obtained from multivariate techniques, however, may be open to subjective interpretation (Einstein, 1991), which limits their value in providing feedback to panellists.

Sustaining panel motivation

Sustaining panel motivation is an essential factor in QDA studies, where repeated judgements are made for each product. The effects of insufficient motivation have been reported to include careless testing, poor discrimination between products (ASTM, 1981) and generalisation in testing, known as the "halo effect" (ASTM, 1968).

Stone and Sidel (1993b) offer guidelines for sustaining motivation, which include providing panellists with indirect monetary rewards, positive acknowledgement of panellist efforts, feedback on performance and "holidays" from testing. The authors, however, note that although many of the recommendations have a high initial impact, they rapidly lose appeal.

McLellan and Cash (1983) recommend computerised sensory systems as a method of sustaining interest and motivation on a long-term basis. These authors report that panellists generally find the computer comfortable and convenient, while the direct entry method allows panellists to react clearly and independently to each question without being biased by seeing previous responses. Past performance, in relation to other panellists, can also be viewed easily using a computerised database (Seaman, 1996) to provide psychological rewards (Meilgaard et al., 1991). Critics of computerised sensory systems, however, report possible increases in anxiety and dehumanisation, which can have a negative effect on consumer responses (Armstrong et al., 1997).

It is therefore important to select a sensory system which minimises the demands on panellists and is interesting, versatile and user-friendly, to ensure sustained panel motivation (Lyon, 1986).

It is evident, on review of QDA, that analytical and quantitative measurements can be derived from this technique, when effective panel training and ongoing evaluation of panel performance are carried out. Hence, if QDA is correctly applied, the human instrument can successfully produce instrumental measures of sensory quality.

Gillian A. Armstrong, University of Ulster, Northern Ireland

References

American Society for Testing and Materials (1968), Basic Principles of Sensory Evaluation, ASTM Special Technical Publication No. 433, ASTM, PA.

American Society for Testing and Materials (1981), Guidelines for the Selection and Training of Sensory Panel Members, STP 758, ASTM, PA.

Armstrong, G., McIlveen, H., McDowell, D. and Blair, I. (1997), "Sensory analysis and assessor motivation: can computers make a difference?", Food Quality and Preference, Vol. 8 No. 1, pp. 1-7.

Arnold, G.M. and Williams, A.A. (1986), "The use of generalised Procrustes techniques in sensory analysis", in Piggott, J.R. (Ed.), Statistical Procedures in Food Research, Elsevier Applied Science, Barking, pp. 233-53.

British Standards Institution (1986), Methods for Sensory Analysis of Food -- Part 1: General Guide to Methodology, BS 5929: Part 1, British Standards Institution, London.

British Standards Institution (1993), Assessors for Sensory Analysis -- Part 1: Guide to the Selection, Training and Monitoring of Selected Assessors, BS 7667: Part 1, British Standards Institution, London.

Chatfield, C. and Collins, A.J. (1980), Introduction to Multivariate Analysis, Chapman & Hall, London.

Cross, H.R., Moen, R. and Stanfield, M.S. (1978), "Training and testing of judges for sensory analysis of meat quality", Food Technology, Vol. 32 No. 7, pp. 48-54.

Einstein, M.A. (1991), "Descriptive techniques and their hybridisation", in Lawless, H.T. and Klein, B.P. (Eds), Sensory Science Theory and Applications in Foods, Marcel Dekker Inc., New York, NY, pp. 317-38.

Gacula, M.C. (1997), Descriptive Sensory Analysis in Practice, Food and Nutrition Press Inc., Trumbull.

Gower, J.C. (1975), "Generalised Procrustes analysis", Psychometrika, Vol. 40 No. 1, pp. 33-51.

Hunter, E.A. and Muir, D.D. (1995), "A comparison of two multivariate methods for the analysis of sensory profile data", Journal of Sensory Studies, Vol. 10 No. 1, pp. 89-104.

Lawless, H. (1991), "The sense of smell in food quality and sensory evaluation", Journal of Food Quality, Vol. 14, pp. 33-60.

Lawless, H.T. and Claassen, M.R. (1993), "Application of the central dogma in sensory evaluation", Food Technology, Vol. 47 No. 6, pp. 139-46.

Lea, P., Rodbotten, M. and Naes, T. (1995), "Measuring validity in sensory analysis", Food Quality and Preference, Vol. 6, pp. 321-6.

Lyon, B.G. (1980), "Sensory profiling of canned boned chicken: sensory evaluation procedures and data analysis", Journal of Food Science, Vol. 45, pp. 1341-6.

Lyon, B.G. (1987), "Development of chicken flavour descriptive attribute terms aided by statistical procedures", Journal of Sensory Studies, Vol. 2, pp. 55-67.

Lyon, D.H. (1986), "Sensory analysis by computer", Food Technology, Vol. 61 No. 11, pp. 139-46.

McDaniel, M.C., Henderson, L.A., Watson, B.T. Jr and Heatherbell, D. (1987), "Sensory panel training and screening for descriptive analysis of the aroma of pinot noir wine fermented by several strains of malolactic bacteria", Journal of Sensory Studies, Vol. 2 No. 3, pp. 149-67.

McLellan, M.R. and Cash, J.N. (1983), "Computerised sensory evaluation: a prototype data collection system", Food Technology, Vol. 37 No. 1, pp. 97-9.

Malek, D.M., Munroe, J.H. and Schmitt, D.J. (1986), "Statistical evaluation of sensory judges", American Society Brewing Chemists Journal, Vol. 49 No. 1, pp. 23-7.

Mancini, L. (1992), "Taste trials: making sensory panels work", Food Engineering, Vol. 64 No. 9, pp. 121-6.

Meilgaard, M., Civille, G.V. and Carr, B.T. (1991), Sensory Evaluation Techniques, CRC Press Inc., Boca Raton, FL.

Naes, T. and Solheim, R. (1991), "Detection and interpretation of variation within and between assessors in sensory profiling", Journal of Sensory Studies, Vol. 6, pp. 159-77.

Noble, A.C. (1975), "Instrumental analysis of the sensory properties of food", Food Technology, December, pp. 56-60.

Noronha, R.L., Damasio, M.H., Pivatto, M.M. and Negrillo, B.G. (1995), "Development of the attributes and panel screening for texture descriptive analysis of milk gels aided by multivariate statistical procedures", Food Quality and Preference, Vol. 6 No. 1, pp. 49-54.

O'Mahony, M. (1986), Sensory Evaluation of Food -- Statistical Methods and Procedures, Marcel Dekker Inc., New York, NY.

Oreskovich, D.C., Klein, B.P. and Sutherland, J.W. (1991), "Procrustes analysis and its applications to free-choice and other sensory profiling" in Lawless, H.T. and Klein, B.P. (Eds), Sensory Science Theory and Applications in Foods, Marcel Dekker Inc., New York, NY, pp. 353-93.

Palmer, D.H. (1974), "Multivariate analysis of flavour terms used by experts and non-experts for describing tea", Journal of the Science of Food and Agriculture, Vol. 24, pp. 153-64.

Piggott, J.R. (1991), "Selection of terms for descriptive analysis", in Lawless, H.T. and Klein, B.P. (Eds), Sensory Science Theory and Applications in Foods, Marcel Dekker Inc., New York, NY, pp. 339-51.

Powers, J.J. (1988a), "Current practices and application of descriptive methods", in Piggott, J.R. (Ed.), Sensory Analysis of Foods, 2nd ed., Elsevier Applied Science Publishers, London, pp. 187-266.

Powers, J.J. (1988b), "Uses of multivariate methods in screening and training sensory panellists", Food Technology, Vol. 42 No. 10, pp. 123-7, 136.

Rainey, B.A. (1986), "Importance of reference standards in training panellists", Journal of Sensory Studies, Vol. 1, pp. 149-54.

Rutledge, K.P. (1992), "Accelerated training of sensory descriptive flavour analysis panellists", Food Technology, Vol. 11, pp. 114-18.

Savocan, M.R. (1984), "Computer applications in descriptive testing", Food Technology, Vol. 98, pp. 74-7.

Seaman, C.E.A. (1996), "Computerising the sensory evaluation laboratory", Nutrition & Food Science, Vol. 2, pp. 31-4.

Sinesio, F., Risvik, E. and Rodbotten, M. (1990), "Evaluation of panellist performance in descriptive profiling of rancid sausages: a multivariate study", Journal of Sensory Studies, Vol. 5, pp. 33-52.

Sivertsen, H.K. and Risvik, E. (1994), "A study of sample and assessor variation: a multivariate study of wine profiles", Journal of Sensory Studies, Vol. 9 No. 3, pp. 293-312.

Spooner, M. (1996), "Making sense of sensory analysis", Food Manufacture, Vol. 71 No. 12, pp. 32-3.

Stone, H. and Sidel, J.L. (1993a), "Descriptive analysis", in Macrae, R., Robinson, R.K. and Sadler, M.J. (Eds), Encyclopaedia of Food Science, Food Technology and Nutrition -- Volume 6 (Ph-Soy), Academic Press Ltd, London, pp. 4047-54.

Stone, H. and Sidel, J.L. (1993b), Sensory Evaluation Practices, Academic Press Inc., San Diego, CA.

Stone, H. and Sidel, J.L. (1995), "Strategic applications for sensory evaluation in a global market", Food Technology, February, pp. 80-9.

Stone, H. and Sidel, J.L. (1998), "Quantitative descriptive analysis: developments, applications and the future", Food Technology, Vol. 52 No. 8, pp. 48-52.

Stone, H., Sidel, J., Oliver, S., Woolsey, A. and Singleton, R.C. (1974), "Sensory evaluation by quantitative descriptive analysis", Food Technology, Vol. 28 No. 11, pp. 24-34.

Szczesniak, A.S. (1987), "Correlating sensory with instrumental texture measurements -- an overview of recent developments", Journal of Texture Studies, Vol. 18, pp. 1-15.

Watson, M. (1992), "Sensory characteristics of food", Nutrition & Food Science, Vol. 4, pp. 4-6.

Wolfe, K.A. (1979), "Use of reference standards for sensory evaluation of product quality", Food Technology, Vol. 33 No. 9, pp. 43-4.

Wolters, C.J. and Allchurch, E.M. (1994), "Effect of training procedure on the performance of descriptive panels", Food Quality and Preference, Vol. 5 No. 3, pp. 203-14.

Zook, K.L. and Pearce, J.H. (1988), "Quantitative descriptive analysis", in Moskowitz, H.C. (Ed.), Applied Sensory Analysis of Foods, Vol. 1, CRC Press Inc., Boca Raton, FL, pp. 43-71.

Zook, K. and Wessman, C. (1977), "The selection and use of judges for descriptive panels", Food Technology, November, pp. 56-61.
