Pre-attentive processing is your visual system's first impression, formed before you consciously analyze a scene. It refers to the visual properties you notice within roughly a quarter of a second of glancing somewhere. After that, you can deliberately direct your eyes to differentiate by size, color, shape, and other features.
Pre-attentive design takes advantage of the parallel processing your visual system already does by mapping one data type to one visual feature.
For instance, in the Parkwhiz application I can immediately recognize where to find parking in San Francisco. But to get a sense of the relative cost of each parking spot, I need to wait for my attention to kick in and then physically look at each placemark, one by one.
The difference between these two tasks is what computer scientist Christopher G. Healey and other cognitive scientists call parallel versus serial visual processing. In Parkwhiz, we visually process location data in parallel and price data in series.
Color is another feature we process pre-attentively, in parallel. See how Google represents typical traffic by color? With pre-attentive processing, I can quickly tell which areas to avoid. But if I want to weigh this traffic against parking, I have to serially search the map while my long-term memory tries to retain what I previously saw on the Parkwhiz map.
Pre-attentive processing turns Big Data into little data
Parkwhiz and Google Maps each help people focus on the subset of data they need to make a choice. But what about when I want to predict the best place to park based on typical traffic and cost? Merging the two maps would be tricky, since Parkwhiz represents location with a colored shape while Google represents traffic density with color alone. By itself this is not an issue, but in both our enterprise and consumer lives we often have to consider many variables at once.
Pre-attentive design enables rapid decision making
Looking at one map while mentally comparing it to another demands a lot of our mind's limited capacity. Pre-attentive design can lessen the work by merging the two views. Thankfully, we can take pre-attentive processing further by mapping each visual feature to exactly one data category instead of blending them, thereby decreasing visual noise.
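As a minimal sketch of this one-feature-per-category rule (the category and channel names below are my own illustrative assumptions, not part of any Parkwhiz or Google API), we can write the mapping down explicitly and check that no visual channel carries more than one category:

```python
# Illustrative mapping of data categories to pre-attentive visual channels.
# The names are hypothetical; the point is the one-to-one constraint.
CHANNEL_MAP = {
    "parking_location": "shape",   # where I can park
    "parking_price": "size",       # relative cost of each spot
    "traffic_density": "color",    # typical traffic flow
}

def is_one_to_one(mapping):
    """A design passes pre-attentive muster only if each visual
    channel carries exactly one data category (no blending)."""
    channels = list(mapping.values())
    return len(channels) == len(set(channels))

print(is_one_to_one(CHANNEL_MAP))  # True: each category owns its channel
```

If two categories shared a channel, say both price and traffic encoded as color, the check would fail, which is exactly the blending problem the merged maps run into.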
If I could see parking location by shape and traffic flow by color, then I could quickly draw a line with my mind to the fastest route with the cheapest price.
For example, this crude visualization layers Parkwhiz on top of Google traffic. I changed the Parkwhiz placemarks to gray, so that parking location is represented by shape alone rather than by color. Then I adjusted the Google image to show only red and yellow, since free-flowing traffic is implied anywhere that is neither. Now I can quickly scan for low-cost parking options that are not adjacent to red or yellow lines.
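The adjustments described above can be sketched as two simple recoloring rules (the traffic level names and the gray value are my own assumptions, not Parkwhiz's or Google's actual data):

```python
# Recoloring rules for the merged-map sketch.
PARKING_GRAY = "#808080"  # every placemark gets the same neutral color

def traffic_color(level):
    """Keep only the alarming colors; free-flowing traffic is
    implied by the absence of red or yellow."""
    return {"heavy": "red", "slow": "yellow"}.get(level)  # None = draw nothing

def placemark_color(_spot):
    """Location is carried by shape alone, so color stays constant."""
    return PARKING_GRAY

print(traffic_color("heavy"))  # red
print(traffic_color("free"))   # None
```

Holding the placemark color constant is what frees the color channel for traffic, so neither feature has to compete for it.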
Visualization is science, not just aesthetics
With data visualization, function truly gives rise to form. Dr. Healey points out that search speed is a function of how quickly attention is allocated and how much competition there is for access to visual short-term memory (VSTM). If less color information is competing for my VSTM, I allocate attention faster and search faster.
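A toy model makes the difference concrete (the millisecond constants are my own rough assumptions, simplified from the parallel-vs-serial distinction, not Healey's measurements): serial search cost grows with the number of items on the map, while a pre-attentive "pop-out" search stays roughly flat.

```python
# Toy model of search time: serial search pays a per-item cost,
# pre-attentive pop-out does not. Constants are illustrative only.
GLANCE_MS = 250    # roughly the pre-attentive quarter-second glance
PER_ITEM_MS = 50   # assumed cost of attending to one item serially

def search_time_ms(n_items, preattentive):
    if preattentive:
        return GLANCE_MS  # pop-out: independent of map clutter
    return GLANCE_MS + PER_ITEM_MS * n_items

print(search_time_ms(100, preattentive=True))   # 250
print(search_time_ms(100, preattentive=False))  # 5250
```

Under this sketch, a hundred placemarks cost twenty times longer to scan serially than to take in at a glance, which is why moving a variable onto a pre-attentive channel pays off so quickly.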
Conscious application of cognitive science makes visualization useful, not just attractive.