I recently ran across a chart on Spiegel Online, the most popular German site for online news. The chart was a tilted 3D heatmap in fully saturated primary colors, with a thick black arrow beside it.
I quickly voiced my surprise at the presence of such a poorly designed chart – especially in such a high-profile online publication – in a snarky Twitter comment, and soon after, Robert Kosara posted a whole blog post defending the graphic and calling for “a bit more subtlety in our criticism”.
Well, I am not sure Twitter was ever optimized for subtlety, but I guess I should clarify the background of my judgement a bit (especially since Robert’s speculative assumptions about my train of thought are not accurate in all points).
The chart in question shows the amount of a certain soccer player’s presence in different areas of the field. The field is divided into cells, and in each cell, a little “tower” indicates by height and color the amount of the player’s presence in that cell. Essentially, this makes it a hybrid of a heatmap and a 3D bar chart overlaid on a soccer field.
The redundant encoding (i.e., using height and color to encode the same value) is nothing bad per se, and in this case quite justified: either the color encoding or the 3D bar height alone would be too weak a visual variable for the data.
The 3D-y-ness of the chart? I am not fond of it. I find it a very clear case of “Hmm, this looks a bit bland. Maybe we should tilt it a little? Ooh look, how awesome.” Frankly, to me this is just childish. Let me put it this way: bacon is a legitimate ingredient in many dishes, and can be quite tasty when used right. But if your cooking style is to start by cooking something bland, and then add bacon to make it less bland, then, trust me, you are not a great cook. A great cook makes a feast out of a simple egg, they say, and I think this is what we should aspire to.
The arrow? Well, it serves its purpose, but it is quite loud, isn’t it? The missing legend, title, and description of the data and its transformations? Why bother – we have a 3D chart!
Anyway, all of that is not that grave – maybe even nit-picking – but the one thing that is unforgivable about this chart is the color palette. If you make a heatmap, there is basically only one thing you need to get right, and that is the color palette. Yet this one has been given very little love.
Generally, a green, yellow, and red gradient can be justified when a “traffic lights” reading brings real benefits, but I cannot see how that would apply in our case. This leaves us with the screaming dissonance of complementary primary colors used at full saturation, lacking any difference whatsoever in value or saturation. I hope we don’t need to discuss the aesthetic shortcomings of this approach.
Conceptually, things fall apart further if we look closely: there is a huge gap between no presence (zero) and quite little presence (1 on the supposed scale above) in a cell. Of course, I understand that this is because green is the color of the playing field – but why not work with that self-imposed constraint, instead of just ignoring it?
Lastly, here is how the gradient looks desaturated:
You might say this is not an issue, as the color hue carries the information – but be reminded that a good proportion of the population is in fact red-green colorblind, and even for everyone else, brightness contrast is key to establishing contour and depth in an image.
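To make this concrete, here is a quick sketch in Python – assuming, roughly, that the palette ramps through the pure primaries green, yellow, and red (my approximation, not colors sampled from the actual chart). It computes the perceived brightness of each color, and shows that the ramp is not monotonic in lightness, which is exactly why it falls apart when desaturated:

```python
def luma(rgb):
    """Perceived brightness of an sRGB color (components in 0..1), Rec. 709 weights."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# approximated palette stops: fully saturated primaries, as in the chart
palette = {"green": (0, 1, 0), "yellow": (1, 1, 0), "red": (1, 0, 0)}
for name, rgb in palette.items():
    print(f"{name}: luma = {luma(rgb):.3f}")

# The middle of the scale (yellow) is the brightest, and the high end (red)
# is darker than the low end (green): desaturated, the ramp reads backwards.
```

In other words, the brightness peaks in the middle of the scale instead of rising (or falling) steadily, so a grayscale rendering cannot preserve the order of the values.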
Update: Mike reminds us in the comments that red-green blindness is quite different from simply not seeing the respective hues, which is correct. So I ran a test on the image on vischeck.com, and here is the result:
Well, to end on a more positive note – how could we fix this?
Starting with the colors, here is the lowdown: the recommended approach for encoding a “low to high” amount in a color palette is to use a small variation in hue combined with a larger variation in brightness (see, e.g., Stephen Few’s color primer). In our case, we might want to stick with the green of the playing field, but move toward a darker, more bluish color for the higher intensities of the data, achieving a harmonious palette. Second, we group the data into a smaller number of bins, to increase separability and to emphasize that the point of the chart is the overall patterns, not the exact numerical measurements. This could result in a palette like this:
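A minimal sketch of constructing such a palette in Python – the two endpoint colors are my own guesses, not sampled from the chart. We interpolate from the field green toward a dark blue in a handful of bins, and check that brightness falls monotonically, so the ramp also survives grayscale and red-green colorblindness:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def luma(rgb):
    """Perceived brightness of an sRGB color (components in 0..1), Rec. 709 weights."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

field_green = (0.45, 0.75, 0.30)   # assumed pitch color (low presence)
dark_blue = (0.05, 0.15, 0.35)     # assumed high-intensity color

bins = 5
palette = [lerp(field_green, dark_blue, i / (bins - 1)) for i in range(bins)]

# Brightness should fall monotonically across the bins, so the palette
# still encodes "low to high" after desaturation.
lumas = [luma(c) for c in palette]
assert all(lumas[i] > lumas[i + 1] for i in range(bins - 1))
```

Interpolating in plain RGB is a simplification; a perceptually uniform color space would be the more careful choice, but the principle – small hue shift, large brightness shift, few bins – stays the same.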
Moving to the heatmap itself, I find the 3D blocks emphasize the flaws of the measurement process over the information we want to measure. There is nothing blocky or square about the soccer player’s movement; that is just an artefact of the data gathering and the chosen representation. In a perfect world, we could measure the player’s position to the inch, every single second, resulting in data which we could use to model (thanks, Mike, for spotting my inaccuracy here) a smooth 3D manifold instead of the blocks. One way to approximate this could be to smooth the data and separate it with isolines into regions of similar intensity. This lets us focus on the resulting (estimated) topology, instead of the measurement process:
(Note: This is just a mock-up, as I did not have access to realistic data.)
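For the curious, the idea behind the mock-up can be sketched in a few lines of Python – with made-up random data, since I had no access to the real numbers, and a simple mean filter standing in for whatever smoothing one would actually use:

```python
import numpy as np

# Fake presence counts on a 10x6 grid of field cells (random stand-in data).
rng = np.random.default_rng(0)
cells = rng.random((10, 6))

# Smooth the blocky counts with a 3x3 mean filter (a stand-in for a proper
# kernel density estimate over the raw positions).
padded = np.pad(cells, 1, mode="edge")
smooth = sum(padded[i:i + 10, j:j + 6] for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0

# Quantize into 4 intensity bands; drawing each band in one palette color
# and outlining the band boundaries gives the isoline picture.
levels = np.digitize(smooth, np.quantile(smooth, [0.25, 0.5, 0.75]))
print(levels)
```

From here, a contouring routine (e.g. matplotlib’s `contourf`) would turn the smoothed grid into the actual isoline regions; the quantized bands above are just the discrete version of the same idea.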
As a bonus, this image also works at very, very small sizes, as well as in black and white (two quite effective tests, in my opinion):
I am not claiming that this is the perfect solution – there are myriad ways to work with this data, and this is just a quick sketch. But at least I can justify the design choices I made quite well, and I hope I have demonstrated that if our only goal is merely to “do no harm”, and not to try and make the best choices possible, we are missing out. And remember: don’t eat too much bacon, dear people. Thanks for your attention.