
Perception-based Visualization

Funding

University of Groningen (RUG).

People:

Ronald obtained his PhD degree on Oct 16, 2009. As of Nov 1, 2009, he is a postdoctoral researcher at the Department of Neuroscience of the Baylor College of Medicine in Houston, Texas (funded by a Rubicon grant of NWO).

Collaboration

Background

We live in an age in which we are continuously confronted with all kinds of information, most of it coming to us in visual form. If we want to make sense of all this information, it is crucial that it is presented in an easily digestible way. While a given piece of abstract information can be visualized in virtually infinitely many ways, not all of these are equally effective. An important question, therefore, is whether we can find objective criteria for effective visualization of information. This requires an interdisciplinary approach: information visualization is a branch of computing science, but the question of how to visualize effectively belongs to vision and perceptual science.

The emphasis in vision research has always been on central or ‘foveal’ vision, with the result that we know relatively little about peripheral vision. However, knowledge about how we perceive objects in the periphery of our visual field is not only essential for a full understanding of human vision; we believe it also has important implications for the way we visualize information. Because of its high resolution, the main role of central vision is to inspect objects in high detail (the reason why we make eye movements all the time), but information in the peripheral field of view is essential when we are searching for objects, or when we want to keep track of the context of the centrally viewed objects. Ease of search and context tracking therefore require carefully designed information visualizations.

Visualizations in which it is hard to find objects and to keep track of context are often labelled ‘cluttered’. Maximizing the effectiveness of visual information therefore requires clutter to be minimized. However, we first need to know what ‘clutter’ exactly is. Although most people have an intuitive feeling of what it means for a visual scene to be ‘cluttered’, it has proven rather difficult to provide an objective definition of this term (Figure 1).

Aims and Methods

The main aim of our research is to find objective criteria for measuring visual ‘clutter’. More specifically, we are looking for computable properties of images and information visualizations that correlate with task performance (e.g., visual search) and with subjective judgements of clutter. To this end, we adopted methodologies from experimental and computational (visual) perception science to gain a better understanding of visual search and clutter in information displays. Since it is often difficult to blindly extrapolate results from such experiments (which are usually done with relatively simple stimuli) to more realistic visualization contexts, we performed additional experiments in settings that are visually more complex.

A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as colour, size and orientation. The first question that we addressed is whether perceptual interactions occur when a subject performs a conjunction search (e.g., is it more difficult to find an object with a particular orientation if we simultaneously have to search for a certain colour?). In an initial study, with relatively simple stimuli, we indeed found such interactions [Hannus et al 2006], with important implications for visual search. In an additional study, with more complex stimuli, we showed that these findings also have possible consequences for information visualizations [Van den Berg et al 2007a].
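As an illustration of this mapping approach, the sketch below assigns two data dimensions to two visual features (one to marker size, one to grey level). The data dimensions, value ranges, and function names are hypothetical, chosen for illustration; they are not taken from the studies:

```python
def map_to_features(temperature, pressure,
                    t_range=(0.0, 40.0), p_range=(900.0, 1100.0)):
    """Map two hypothetical data dimensions to two visual features:
    temperature -> marker size (pixels), pressure -> grey level (0-255)."""
    def norm(v, lo, hi):
        # Clamp to [0, 1] so out-of-range data still maps to a valid feature.
        return min(1.0, max(0.0, (v - lo) / (hi - lo)))
    size = 4 + 12 * norm(temperature, *t_range)   # 4..16 px
    grey = round(255 * norm(pressure, *p_range))  # 0..255
    return size, grey

print(map_to_features(20.0, 1000.0))  # mid-range values -> (10.0, 128)
```

Because each data dimension drives its own feature channel, a viewer searching for a particular temperature–pressure combination is effectively performing a conjunction search over size and grey level, which is exactly where the perceptual interactions described above matter.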

Figure 1: An illustration of visual clutter. Although the four images have approximately the same information content, they clearly have different ‘degrees’ of clutter.

Our next question concerns the perceptual basis of visual clutter. We hypothesize that a major constituent of clutter is a visual effect called ‘crowding’: the phenomenon that visual objects in the peripheral field of view are more difficult to recognize when they are surrounded by other objects (Figure 2). This effect is characterized by the fact that recognition of an object is hindered by other objects as far away as half the object’s eccentricity. This means that recognition of an object presented at, for example, ten degrees in the periphery is hindered by all objects at a distance shorter than five degrees. Although this effect has been studied extensively for recognition of shapes, letters and orientations, little is known about crowding in other feature dimensions.
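The spacing rule described above (often referred to as Bouma's law) can be written as a one-line formula. The function below is an illustrative sketch of that rule, not code from the studies:

```python
def critical_spacing(eccentricity_deg, bouma_factor=0.5):
    """Radius (in degrees of visual angle) within which flanking objects
    hinder recognition of a target at the given eccentricity."""
    return bouma_factor * eccentricity_deg

# A target at 10 degrees eccentricity is crowded by flankers
# closer than 5 degrees, matching the example in the text.
print(critical_spacing(10.0))  # 5.0
```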

Figure 2: An example of crowding. When fixating the cross, the B on the left is easy to recognize. The B on the right, which is at the same distance, is difficult to recognize, due to the surrounding letters.

In order to find out more about the generality of crowding, we performed an experiment in which we studied the effect for recognition of the size, hue, and saturation of visual objects [Van den Berg et al 2007b]. For all tested features, we found strong evidence for crowding, suggesting that it is a rather general principle of peripheral vision. Although a fully satisfying model of crowding does not yet exist, there are many suggestions that it is the result of some form of local feature pooling (which may have an important function in texture perception). Currently we are studying two related questions: (i) supposing that crowding is indeed the result of feature pooling, what is its neuro-computational basis? (ii) can a feature pooling model correctly predict visual clutter?

Results

The results from our first study [Hannus et al 2006] suggest that conjunction search is affected by perceptual interactions between feature channels. Additional experiments show that these interactions have possible implications for information visualization. From a visualization point of view, the most relevant findings from the second study [Van den Berg et al 2007a] are that (a) to equalise the saliency (and thus the bottom-up weighting) of size and colour, colour contrasts have to become very low; moreover, orientation is less suitable for representing information that spans a large range of data values, because it does not show a clear relationship between contrast and salience; (b) colour and size can be used independently to represent information, at least for the range of colours used in our study; (c) the earlier proposed concept of (static) feature salience hierarchies is wrong: how salient one feature is compared to another is not fixed, but a function of feature contrasts.

The main finding from our crowding study [Van den Berg et al 2007b] is that there is a striking similarity of crowding effects in all tested features. This has the important implication that crowding is a rather general phenomenon. If crowding is indeed the main constituent of visual clutter, and feature pooling the cause of crowding, this would mean that clutter can objectively be predicted by measuring the information loss due to local pooling in multiple feature channels. We are currently investigating this hypothesis.
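One simple way to make this hypothesis concrete is to replace each feature value by the average of its local neighbourhood (pooling) and measure how far the original values deviate from the pooled ones. The sketch below does this for a one-dimensional feature channel; it is a toy illustration under our own assumptions, not the model under investigation:

```python
def pooling_information_loss(features, window=3):
    """Estimate clutter as the information lost when each feature value
    is replaced by the average of its local neighbourhood (pooling)."""
    n = len(features)
    half = window // 2
    loss = 0.0
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        local_mean = sum(features[lo:hi]) / (hi - lo)
        loss += (features[i] - local_mean) ** 2
    return loss / n

# A homogeneous region survives pooling unchanged (no information loss)...
uniform = [1.0] * 12
# ...while strong local feature variation is averaged away, i.e. lost.
mixed = [1.0, 5.0] * 6

print(pooling_information_loss(uniform))  # 0.0
print(pooling_information_loss(mixed))    # clearly > 0
```

A full clutter predictor along these lines would apply such pooling over two-dimensional neighbourhoods in multiple feature channels (e.g., size, hue, saturation) and combine the per-channel losses into a single score.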

Conclusion

In conclusion, the results from our studies show that the effectiveness of visually presented information depends on choices regarding feature use. It is therefore important that perceptual factors are taken into account in the design of information visualizations.

References

  • A. Hannus, R. van den Berg, H. Bekkering, J. B. T. M. Roerdink and F. W. Cornelissen. Visual search near threshold: Some features are more equal than others. Journal of Vision, 6(4), pp. 523-540, 2006.
  • R. van den Berg, F. W. Cornelissen and J. B. T. M. Roerdink. Perceptual dependencies in information visualisation assessed by complex visual search. ACM Transactions on Applied Perception, 4(4), pp. 1-21, 2007a.
  • R. van den Berg, J. B. T. M. Roerdink and F. W. Cornelissen. On the generality of crowding: Visual crowding in size, saturation, and hue compared to orientation. Journal of Vision, 7(2), pp. 1-11, 2007b.