2021 has been an incredible year for me professionally and personally. Even so, I was especially excited to be selected as a Round 1 IronViz Qualifying Judge alongside so many talented, first-rate members of our community. With the announcement of our Top-10 finalists in the Data + Joy-themed IronViz Qualifier contest, I’m ready to write about the experience and help you decide what to consider for future qualifying events.

Credit: Tableau and Adam Mico

Before we get into this post, here are some things I will not share here or privately:

  • Specific visualizations reviewed
  • Itemized scoring breakdowns

However, I will cover the traits of a successful IronViz visualization, based on trends among the high-scoring entries in the 64 visualizations I reviewed and the tendencies of the ten finalists.

IronViz Qualifying Judging Background

Judges are expected to look at visualizations objectively. Tableau made significant efforts to remove personally identifiable information from the visualizations, and we did not use Tableau Public to find and review vizzes. Instead, the submitted visualizations were hosted elsewhere. The effort this took (notably by Tableau’s AB Commendatore and Andrew Grinaker) was no small feat, but it helped keep the scoring fair. To avoid seeing IronViz qualifying entries, I muted references to IronViz on Twitter, avoided Tableau Public, and would not engage if a person requested feedback on any Tableau visualization during that period. It wasn’t easy, as I’m super curious and felt I missed out on a lot of excitement, but objectivity was essential. Of course, IronViz judges do not submit their own entries to the contest, as that would be an apparent conflict of interest. In addition, we were given the option to recuse ourselves from any visualization we could not judge objectively (e.g., the potential breaches noted above).

Judging Process

Credit: Tableau

Each Round 1 judge was assigned 60+ visualizations drawn from the full pool of 320 entries, and every visualization was reviewed by multiple judges. The highest-scoring visualizations were passed along to the Round 2 judges. Every visualization was scored based on…

  1. Analysis — how the data supported the topic, its cleanliness, and its application to the study, including calculations
  2. Design — how the data elements worked together and were presented so the end user could process them visually. Questions like these should come to mind: Is this the proper use case for the chart? Can I quickly read the…

Continue reading: https://towardsdatascience.com/what-makes-a-successful-tableau-ironviz-visualization-from-an-ironviz-qualifying-judge-4e1025b3a3c2?source=rss----7f60cf5620c9---4

Source: towardsdatascience.com