Color brings beauty to our eyes, whether from the wings of a monarch butterfly or the broad brush strokes of a Van Gogh painting. Color also allows us to assign meaning and organization to items. At some point, most people have to ask how they should use color, whether they are animating a cartoon character, painting an accent wall or, in my case, making a graphical user interface.
Here I will explain how I would go about using color for utilities-specific augmented reality applications.
The use of color rests on how our eyes and brains process light and detail. When selecting interface colors, I ask myself: Which colors should I use, and how can I maximize readability while minimizing distraction?
It helps to think about how the visual system processes color. In the eye, there are two types of receptors that process light: rods and cones.
Rods are poor at perceiving color but highly sensitive to light, dark, and movement. Cones perceive color well and deliver fine detail, but they demand more processing.
Color perception arises partly from the activity pattern of three types of retinal cones, each tuned to a different range of wavelengths: short, medium, and long. These cones work in combination to send signals through the lateral geniculate nucleus to the visual cortex, which determines the color we perceive.
Your visual cortex processes most of its color information from red- and green-sensitive cones gathered in a small pit at the back of your eye, called the fovea, and more cortical space is devoted to processing red and green. What is the takeaway? Since blue-sensitive cones are largely absent from your fovea, your brain works less to process blue. Furthermore, rods also respond to blue light, meaning even less energy is devoted to perceiving it.
These differences in how we process light and color lead car designers to two opposing dashboard color philosophies: blue and red.
Red light stimulates mainly cones, leaving the rods unsaturated, which preserves your night vision. On the other hand, red light is processed through your fovea, so your visual cortex spends more resources on high-acuity processing. With blue dashboards, your cones are under less demand, so your visual cortex spends fewer resources. The trade-off is that your rods are now processing light from two sources, the road and your dashboard, and therefore work harder.
Hold up just one finger and look at it: your brain increases magnification in your visual cortex, relying on more cones and fewer rods. Now look at all five fingers on your hand: your brain lowers magnification, which consumes fewer resources in your visual cortex and relies on fewer cones and more rods.
Interestingly, if you hold both hands in front of you, with all five fingers extended on your right hand and only the index finger extended on your left, your visual cortex activates far more, and dedicates more total volume, to that single finger than to your right hand with all five fingers extended.
So, how does any of this apply to Augmented Reality? Let’s take a look.
Decreasing cortical magnification and acuity
Here’s an interface that utility workers might use to assess linear assets in the field. The colors are pleasing, modern, and unobtrusive, but that’s not their point. The color design helps field users visualize information more effectively and effortlessly by drawing attention only to what matters at present.
Remember that rods are most sensitive to changes in light and dark, to shape, and to movement, and they place the smallest demand on the visual cortex. So let’s push every UI element we can into peripheral vision, unless it represents the most important data at the current moment.
Contextual activation of receptors
Let’s make our buttons and elements blue or white wherever we can, so they are less taxing on our visual systems, and use green and red very sparingly, since those wavelengths fall right on our fovea. Red alerts us to where a problem has been reported through data uploaded to our system. Green directs our attention to the start and end of the span where we think our linear asset is experiencing trouble. We can drag, drop, and slide the placemarks around to better approximate and update the data source in real time, allowing asset planners to diagnose corrective steps more accurately.
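This palette policy can be sketched in code. The sketch below is a hypothetical illustration, not part of any real AR framework: the names (PALETTE, color_for), the hex values, and the severity labels are all assumptions chosen to show the rule of keeping ordinary chrome blue or white and spending red and green only where attention is required.

```python
# Hypothetical sketch of the palette policy described above.
# All names and hex values are illustrative assumptions.

PALETTE = {
    "chrome": "#FFFFFF",     # white: ordinary buttons and passive elements
    "accent": "#2D7DD2",     # blue: low-cost for the visual system
    "alert": "#D7263D",      # red: reserved for reported problems
    "extent": "#2E933C",     # green: start/end of the affected asset span
}

def color_for(element: str, severity: str = "none") -> str:
    """Pick a color, spending red and green only where attention is needed."""
    if severity == "problem":
        return PALETTE["alert"]
    if severity == "boundary":
        return PALETTE["extent"]
    # Everything else stays blue or white so it sits lightly on the fovea.
    return PALETTE["accent"] if element == "accent" else PALETTE["chrome"]

print(color_for("button"))             # ordinary chrome stays white
print(color_for("marker", "problem"))  # red only for the reported fault
```

Centralizing the decision in one function keeps the sparing use of red and green a policy rather than a per-screen choice, which matters when many field views share the same markers.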
Now that you understand more about how your brain works with light and detail, you can start to notice how products and programs around you are using color to do more than just look pretty.