A field guide in two halves. Eight before/after pairs grounded in Cleveland, Tufte, and Few, then seven newer charts coming out of the 2025–2026 visualization community that the canonical books were written too early to include.
I have been making charts for a living for more than a decade. I have made them well and I have made them badly. The single most-recurring observation across that decade is that the default chart is wrong, not occasionally, not in edge cases, but as a base rate. The pie chart your spreadsheet drew the moment you clicked the icon is, in almost every case I have ever seen, the wrong chart for the data underneath it. So is the dual-axis line graph the analyst dropped into the board deck. So is the boxplot in the engineering postmortem. So is the choropleth map in the policy brief. The defaults are wrong, and they are wrong for perceptual reasons that Cleveland, Tufte, and Few already wrote down decades ago.
This issue is not an opinion piece. It is a field guide. Eight chart pairs. The same data, drawn the way the default would draw it, and the way the research says you should. The "before" charts are not strawmen. They are the charts that get made every day in dashboards, decks, and press releases. The "after" charts are the upgrades that, for almost every audience and almost every dataset, make the actual point of the data legible.
Three names show up over and over below. William Cleveland ran the experimental work in 1984 that ranked the perceptual tasks underneath every chart: position more accurate than length, length more accurate than angle, angle more accurate than area. Edward Tufte built the vocabulary (sparkline, small multiple, slope graph) and the data-ink ratio principle that governs any honest chart drawn since. Stephen Few took both bodies of work into the boardroom and wrote out, in plain language, what they meant for your dashboard. Most of what follows is borrowed from one of the three.
I came across Tufte and Few about ten years ago, after I had already spent years in data, and finding them was one of the biggest shifts in my career I can name. I had been moving the numbers around for a long time at that point. What I had not yet learned was how to take those numbers and turn them into a chart that actually told the story they were trying to tell. Tufte and Few were the answer. Most of what I know about communicating with data, I learned by reading and re-reading the two of them.
Every chart in this issue is hand-drawn in pure SVG. Every "before" is a chart I have, depressingly often, seen in production. Every "after" cites the source that proved it works. If you make charts for a living, or for a deck that decides anything, this is the field guide I wish someone had handed me in 2013.
The two most common chart types in business dashboards are also the two best-documented offenders against perceptual research. Both are easy to fix. Neither is fixed by accident.
Cleveland & McGill (1984) ran the foundational experiment on graphical perception. Their ranking of elementary perceptual tasks placed position on a common scale at the top, length and angle further down, and area further down still. A pie chart asks the reader to compare angles. A Cleveland dot plot, drawn from the same numbers, asks them to compare positions. Same numbers. Different ask. Different chart.
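The difference in the ask is mechanical, and it shows up even in toy code. A minimal sketch, stdlib Python only, with invented numbers and a helper name (`dot_plot_svg`) that is mine, not any library's: the same four shares expressed first as the slice angles a pie asks the eye to compare, then as dot positions on a common scale.

```python
# Same numbers, two perceptual tasks: angles (pie) vs. positions (dot plot).
shares = {"North": 24, "South": 22, "East": 29, "West": 25}  # illustrative

# What the pie asks the eye to compare: slice angles in degrees.
total = sum(shares.values())
angles = {k: 360 * v / total for k, v in shares.items()}

def dot_plot_svg(data, width=300, row_h=22):
    """Cleveland dot plot: one dot per category, position on a common x scale."""
    vmax = max(data.values())
    rows = []
    for i, (label, v) in enumerate(sorted(data.items(), key=lambda kv: -kv[1])):
        y = row_h * (i + 1)
        x = width * v / vmax          # position encodes the value
        rows.append(f'<text x="0" y="{y}">{label}</text>')
        rows.append(f'<circle cx="{x:.1f}" cy="{y - 4}" r="4"/>')
    h = row_h * (len(data) + 1)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{h}">'
            + "".join(rows) + "</svg>")

svg = dot_plot_svg(shares)
```

Notice what the angle encoding costs: East's 29% and West's 25% become 104.4° and 90° wedges, a difference the eye reliably misjudges; as dot positions on a shared axis, the gap is unmissable.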
When you put two metrics on a single chart with two different y-axes, you choose the scale of each independently. You can make any pair of variables appear to track, diverge, or cross at whatever moment is convenient. Stephen Few has been arguing against this chart since 2008. The replacement: a connected scatterplot, where one variable is the x-axis, the other is the y-axis, and time becomes a path through the plane. There is only one frame of reference, and the analyst no longer gets to pick it.
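The construction is simple enough to sketch in a few lines. The numbers below are invented for illustration; the only structural idea is that the two series become one SVG path through the plane, with the time labels attached to the vertices instead of an axis.

```python
# Connected scatterplot: metric A on x, metric B on y, time becomes a path.
years        = [2019, 2020, 2021, 2022, 2023]
unemployment = [3.7, 8.1, 5.4, 3.6, 3.7]   # illustrative numbers, not real data
inflation    = [1.8, 1.2, 4.7, 8.0, 4.1]

def scale(v, lo, hi, out_lo, out_hi):
    return out_lo + (v - lo) / (hi - lo) * (out_hi - out_lo)

# y is flipped because SVG's y axis grows downward.
pts = [(scale(u, 3, 9, 20, 280), scale(i, 1, 9, 180, 20))
       for u, i in zip(unemployment, inflation)]

path = "M " + " L ".join(f"{x:.1f} {y:.1f}" for x, y in pts)
labels = "".join(f'<text x="{x + 4:.1f}" y="{y:.1f}">{yr}</text>'
                 for (x, y), yr in zip(pts, years))
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">'
       f'<path d="{path}" fill="none" stroke="black"/>{labels}</svg>')
```

Both variables share one fixed frame; there is no second axis left for anyone to tune.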
The chart is an argument before it is a decoration. The default chart is the argument the software wanted to make, not the one the data actually supports.
Tufte's career is a long argument that the highest-quality chart is the one with the highest data-ink ratio you can get away with. Two of the techniques he formalized, small multiples (1990) and sparklines (2006), solve problems that most dashboards in 2026 still get wrong.
A spaghetti chart is what happens when you ask a default chart library to compare a small group over time: half a dozen colored lines on a single axis, all overlapping, all fighting for the reader's attention. The reader cannot follow any single line without losing the others, and cannot compare any pair without working harder than the chart should make them work. Small multiples, the term Tufte formalized in Envisioning Information (1990), solve this by giving each series its own axis on a shared scale.
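The one non-negotiable detail is the shared scale, and it is worth seeing where it lives in code. A stdlib-only sketch with made-up series: the min and max are computed once across all series, before any panel is drawn, so every panel answers to the same y-axis.

```python
# Small multiples: one panel per series, all panels on ONE shared y scale.
series = {
    "alpha": [3, 4, 6, 5, 7],
    "beta":  [9, 8, 8, 10, 12],
    "gamma": [1, 2, 2, 3, 2],
}

# The shared scale is the whole point: compute min/max across ALL series once.
flat = [v for ys in series.values() for v in ys]
lo, hi = min(flat), max(flat)

def panel(ys, w=120, h=60):
    n = len(ys)
    pts = " ".join(
        f"{i * w / (n - 1):.1f},{h - (v - lo) / (hi - lo) * h:.1f}"
        for i, v in enumerate(ys))
    return f'<polyline points="{pts}" fill="none" stroke="black"/>'

panels = "".join(
    f'<g transform="translate(0,{k * 70})">'
    f'<text x="0" y="10">{name}</text>{panel(ys)}</g>'
    for k, (name, ys) in enumerate(series.items()))
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="130" '
       f'height="{len(series) * 70}">{panels}</svg>')
```

Scale each panel to its own min and max and the grid stops being a comparison at all; that per-panel autoscaling is exactly what most chart libraries do by default.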
Tables are a fine chart. Most reports underuse them. What most reports also do not do, even though Tufte specified it in Beautiful Evidence (2006), is add a sparkline: a word-sized graphic that compresses the trajectory behind the number into the line of the table. The sparkline does not replace the number. It explains it. The 2026 dashboard that ships with sparklines in every row is the dashboard the analyst trusts.
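A sparkline is small enough that the whole renderer fits in one function. A sketch under the same assumptions as above (stdlib only, helper name mine): a word-sized polyline, normalized to its own range, with a dot marking the current value the table cell reports.

```python
def sparkline(ys, w=60, h=12):
    """Word-sized SVG line for a table cell; final point marked with a dot."""
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1            # guard against a flat series
    pts = [(i * w / (len(ys) - 1), h - (v - lo) / span * h)
           for i, v in enumerate(ys)]
    poly = " ".join(f"{x:.1f},{y:.1f}" for x, y in pts)
    fx, fy = pts[-1]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
            f'<polyline points="{poly}" fill="none" stroke="black"/>'
            f'<circle cx="{fx:.1f}" cy="{fy:.1f}" r="1.5"/></svg>')

cell = sparkline([12, 14, 11, 15, 18, 17, 21])
```

Dropped inline next to the number "21", the cell now also says "and it got there by climbing."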
Stephen Few's contribution is operational. He took Tufte's principles and turned them into specifications a working dashboard designer could implement on Monday morning. The bullet graph and the strip plot are two of the most useful pieces of that work.
Few's 2005 design specification for the bullet graph is the most underused dashboard component in the working world. It replaces a circular gauge (eight square inches of decoration around a single needle) with a horizontal bar that shows the current value, the target, the qualitative ranges (poor / satisfactory / good), and the prior period, in roughly one-fifth the space and with several times the information. There is no good argument for the gauge in 2026. There has not been one since Few wrote the spec.
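The components Few specifies (qualitative range bands, a value bar, a target marker) translate directly into three layers of SVG. A sketch, not Few's reference implementation: the geometry and greys below are my choices; the layering order (widest, lightest band first, then the bar, then the tick) is what makes it read correctly.

```python
def bullet_svg(value, target, ranges, vmax, w=200, h=20):
    """Few-style bullet graph: qualitative range bands, value bar, target tick.
    `ranges` are the upper bounds of poor/satisfactory/good."""
    greys = ["#999", "#bbb", "#ddd"]   # darker = worse, per Few's convention
    # Draw the widest (lightest) band first so narrower, darker bands overlay it.
    bands = "".join(
        f'<rect x="0" y="0" width="{r / vmax * w:.1f}" height="{h}" fill="{c}"/>'
        for r, c in sorted(zip(ranges, greys), reverse=True))
    bar = (f'<rect x="0" y="{h / 4}" width="{value / vmax * w:.1f}" '
           f'height="{h / 2}" fill="#333"/>')
    tx = target / vmax * w
    tick = (f'<line x1="{tx:.1f}" y1="2" x2="{tx:.1f}" y2="{h - 2}" '
            f'stroke="black" stroke-width="2"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
            f'{bands}{bar}{tick}</svg>')

svg = bullet_svg(value=270, target=250, ranges=[150, 225, 300], vmax=300)
```

Twenty pixels of height, and it answers "where are we, against what, and is that good" in a single glance; the gauge it replaces answers only the first.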
A boxplot summarizes a distribution into five numbers: minimum, lower quartile, median, upper quartile, maximum. That is fine when the distribution is well-behaved. It is dangerously misleading when the distribution is bimodal, skewed, or has a fat tail, because two visibly different distributions can produce identical boxplots. The raincloud plot (Allen et al., peer-reviewed in Wellcome Open Research, 2019 / v2 2021) fixes this by adding back the raw data and the density curve. Same five numbers, plus everything the boxplot threw away.
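The failure is easy to reproduce. A stripped-down sketch (a full raincloud adds a density half-violin, which needs a kernel density estimate; this version keeps just the box statistics plus the raw-data layer): a bimodal sample whose five-number summary looks entirely unremarkable, with every observation drawn back in as a jittered dot.

```python
import random
import statistics

random.seed(42)
# A bimodal sample a boxplot would flatten into five unremarkable numbers.
data = ([random.gauss(10, 1) for _ in range(50)]
        + [random.gauss(20, 1) for _ in range(50)])

q1, med, q3 = statistics.quantiles(data, n=4)   # quartiles
five = (min(data), q1, med, q3, max(data))

def x(v, lo=5, hi=25, w=300):
    return (v - lo) / (hi - lo) * w

# Raw-data layer: one jittered dot per observation (what the boxplot throws away).
dots = "".join(
    f'<circle cx="{x(v):.1f}" cy="{40 + random.uniform(-8, 8):.1f}" '
    f'r="2" opacity="0.4"/>'
    for v in data)
box = (f'<rect x="{x(q1):.1f}" y="10" width="{x(q3) - x(q1):.1f}" height="16" '
       f'fill="none" stroke="black"/>'
       f'<line x1="{x(med):.1f}" y1="10" x2="{x(med):.1f}" y2="26" stroke="black"/>')
svg = f'<svg xmlns="http://www.w3.org/2000/svg" width="300" height="60">{box}{dots}</svg>'
```

The box alone says "centered around 15"; the dots say "two clusters and almost nothing at 15," which is the actual story.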
If your chart can be made by clicking one button, your chart is the chart that wins arguments your software wanted to win, not the chart that wins yours.
The slope graph and the hex tile cartogram are two of the post-Tufte/Few chart types that the contemporary data-visualization community (not the chart libraries) has standardized. Both are simple to draw. Both are vastly clearer than what they replace.
When your question is "what changed between two time points across many categories," the default chart is a grouped bar: two bars per category, one per period, lined up. The reader's task is to compute, mentally, the difference between each pair of bars. The slope graph (specified by Tufte in The Visual Display of Quantitative Information in 1983, popularized by Few and Cole Nussbaumer Knaflic) is a single line per category, drawn between two y-axis positions. The reader's task becomes: look at the slope. Up means up. Down means down. The eye does the math.
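A slope graph is barely more than a list of line segments, which is part of its charm. A sketch with invented category values, stdlib only: one shared y scale across both periods, one line per category, labels pinned to the endpoints so no legend is needed.

```python
# Slope graph: one line per category between two time points on a shared y scale.
before = {"Web": 42, "Retail": 35, "Phone": 15, "Partner": 8}   # illustrative
after_ = {"Web": 55, "Retail": 27, "Phone": 9, "Partner": 11}

lo = min(min(before.values()), min(after_.values()))
hi = max(max(before.values()), max(after_.values()))

def y(v, h=160):
    return 10 + (hi - v) / (hi - lo) * h   # higher value = higher on the page

parts = []
for name in before:
    y1, y2 = y(before[name]), y(after_[name])
    parts.append(f'<line x1="80" y1="{y1:.1f}" x2="220" y2="{y2:.1f}" stroke="black"/>')
    parts.append(f'<text x="0" y="{y1:.1f}">{name} {before[name]}</text>')
    parts.append(f'<text x="225" y="{y2:.1f}">{after_[name]} {name}</text>')
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="320" height="180">'
       f'{"".join(parts)}</svg>')
```

The grouped-bar version of the same data makes the reader compute four subtractions; here each subtraction is pre-drawn as a slope.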
A U.S. state map shaded by some metric (the choropleth) is one of the most-shipped charts in American journalism and policy, and one of the most distorted. Wyoming and Montana take more visual real estate than Massachusetts and Connecticut combined while containing a fraction of the population. The reader's eye reads "size = importance," but land area is uncorrelated with almost any metric worth mapping. The hex tile cartogram, used widely by NPR's visuals team, FiveThirtyEight, and the Bureau of Transportation Statistics among others, gives every state the same area. The metric becomes the only thing the eye can encode.
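The drawing is plain hexagon geometry; the hard part in practice is the tile layout, which is a hand-curated community artifact, not a computation. A sketch with an invented (col, row) layout for a handful of New England states — real tile maps use an agreed coordinate set, not these positions:

```python
import math

# (col, row) tile positions invented for illustration; production hex maps
# use a community-maintained coordinate set for all 50 states.
tiles = {"ME": (10, 0), "NH": (10, 1), "VT": (9, 1),
         "MA": (10, 2), "CT": (9, 3), "RI": (10, 3)}

def hexagon(col, row, r=14):
    """Pointy-top hexagon corner list; odd rows shift right by half a hex."""
    cx = r * math.sqrt(3) * (col + 0.5 * (row % 2))
    cy = r * 1.5 * row
    pts = [(cx + r * math.sin(math.radians(60 * k)),
            cy - r * math.cos(math.radians(60 * k))) for k in range(6)]
    return " ".join(f"{x:.1f},{y:.1f}" for x, y in pts)

polys = "".join(
    f'<polygon points="{hexagon(c, rw)}" fill="#ccc" stroke="white"/>'
    for c, rw in tiles.values())
svg = f'<svg xmlns="http://www.w3.org/2000/svg" width="320" height="120">{polys}</svg>'
```

Every state gets an identical hexagon, so a fill color keyed to the metric is the only visual variable left.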
The first four parts are grounded in research that pre-dates the iPhone. This part is grounded in research and practice that mostly post-dates GPT-3. These are charts the canonical books were written too early to include. The 2025–2026 working data-visualization community has, in the last few years, made them part of the default vocabulary. Almost none of them ship in your charting library yet.
Take a few hundred high-dimensional vectors (sentence embeddings, image features, customer reviews, agent traces) and project them into two dimensions with UMAP (McInnes, Healy & Melville, 2018) or t-SNE (van der Maaten & Hinton, 2008). The output is a scatterplot with no interpretable axes. Distance carries the signal. Five years ago this was an ML research chart. Today every interpretability dashboard, every retrieval-augmented system, and every embedding atlas (Nomic Atlas, latentscope, Anthropic's Clio) uses it.
Every transformer-based language model produces, as a side effect of inference, a matrix of attention scores: for each output token, how much weight did it place on each input token? Visualizing that matrix as a heatmap (one row per output token, one column per input token, color by attention weight) is the foundational technique of mechanistic interpretability. The chart is a 2017 paper (Attention Is All You Need) made visible. It has become a working tool for prompt debugging, alignment research, and model auditing in 2024–2026.
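The rendering itself is the simplest chart in this issue: a matrix mapped to a grid of shaded cells. A sketch with an invented two-token French translation of a three-token English input; the weights are made up, but the layout (rows = output tokens, columns = input tokens, darkness = weight) is the standard one.

```python
# Attention heatmap: one row per output token, one column per input token,
# cell darkness proportional to attention weight. Weights here are invented.
inp = ["The", "cat", "sat"]
out = ["Le", "chat"]
attn = [[0.7, 0.2, 0.1],    # "Le"   attends mostly to "The"
        [0.1, 0.8, 0.1]]    # "chat" attends mostly to "cat"

CELL = 30
cells = []
for i, row in enumerate(attn):
    for j, w in enumerate(row):
        shade = round(255 * (1 - w))          # weight 1.0 = black, 0.0 = white
        cells.append(f'<rect x="{j * CELL}" y="{i * CELL}" '
                     f'width="{CELL}" height="{CELL}" '
                     f'fill="rgb({shade},{shade},{shade})"/>')
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" '
       f'width="{len(inp) * CELL}" height="{len(out) * CELL}">{"".join(cells)}</svg>')
```

In a real pipeline the `attn` matrix comes out of the model's attention layers (one matrix per head per layer); each row is a probability distribution over the input, which is why each row here sums to one.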
Sankey diagrams have existed since 1898. The original was a steam-engine energy-efficiency diagram. They have always been the right chart for compositional flow: how much of this ended up as that. What changed in 2024–2026 is the use case. AI agent runs are flow problems: a thousand triggers fan out into a few planning strategies, fan out into a few tools, fan back into outcomes. Modern agent observability platforms (LangSmith, Datadog LLM Observability, OpenLLMetry) lean on flow diagrams to summarize what a thousand runs actually did.
A beeswarm or jittered-strip plot shows every observation as an individual dot, packed laterally so they don't overlap, with summary statistics drawn on top. The technique is older than the term. The annotated beeswarm, with labeled outliers, quantile bands, and direct callouts to specific points, has in the last three or four years become a modern editorial style for showing distributions in the New York Times Upshot, the Pudding, the Financial Times, and Bloomberg. It is what the boxplot wishes it were.
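The interesting algorithm in a beeswarm is the lateral packing, not the drawing. A sketch of one common greedy approach (the function name and step size are mine; production libraries use more sophisticated placement): each dot takes the smallest offset from the center line that avoids overlapping any dot already placed.

```python
# Beeswarm packing, greedy version: each dot takes the smallest lateral offset
# that avoids overlapping any already-placed dot of radius r.
def beeswarm_offsets(xs, r=4):
    placed = []                       # (x, offset) of dots already laid down
    offsets = []
    for x in sorted(xs):
        k = 0
        while True:
            for sign in (1, -1):      # try above the line, then below
                off = sign * k * r * 2
                if all((x - px) ** 2 + (off - po) ** 2 >= (2 * r) ** 2
                       for px, po in placed):
                    break
            else:                     # both candidate offsets collided
                k += 1
                continue
            placed.append((x, off))
            offsets.append(off)
            break
    return offsets

offs = beeswarm_offsets([10, 10, 10, 50, 52, 90])
```

Three identical values at x=10 stack at offsets 0, +8, and −8: every observation stays visible as its own dot, which is the entire point of the chart.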
A horizon chart compresses a time series by folding the y-axis: the area above a baseline is split into bands of increasing color intensity, and the area below the baseline is mirrored above and rendered in a contrasting hue. The result is a chart that uses a fraction of the vertical space of an ordinary line graph and stacks cleanly into a tight matrix of dozens of series. Heer, Kong & Agrawala (CHI 2009) formalized the technique and showed empirically that horizon charts preserve perceptual accuracy at much smaller chart sizes than ordinary line graphs. Modern observability tooling has been picking it up steadily ever since.
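The folding step is the whole trick, and it is a few lines of arithmetic. A sketch (helper name mine): slice each absolute value into fixed-height bands; each band becomes a clipped copy of the series drawn at full chart height, with deeper color for higher bands and a contrasting hue for the values that were negative before mirroring.

```python
# Horizon folding: slice |value| into bands of fixed height; each band is a
# clipped copy of the series, rendered at full chart height in a deeper color.
def horizon_bands(ys, band=10, n_bands=3):
    """Per band index, the clipped magnitudes (negatives already mirrored).
    Track the sign of each point separately to pick the hue."""
    out = []
    for b in range(n_bands):
        lo = b * band
        out.append([min(max(abs(v) - lo, 0), band) for v in ys])
    return out

bands = horizon_bands([4, 12, 25, -7, -18], band=10, n_bands=3)
signs = [v >= 0 for v in [4, 12, 25, -7, -18]]   # hue selector per point
```

A value of 25 fills band 0 and band 1 completely and puts 5 units into band 2; stacking the bands back up reconstructs the original magnitudes exactly, which is why the compression loses position resolution but not information.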
A 365-day metric, laid out as a grid of 52 weeks by 7 days, with each cell colored by intensity. GitHub put it on every developer profile in 2013. Strava, WakaTime, Duolingo, and most modern habit trackers followed. It is now the default chart for any daily-cadence quantity over a year, and it has a property no other chart shares: the eye can read weekly periodicity, monthly clusters, and "the dip when I went on vacation" simultaneously, without any axis labels at all.
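The only real logic in the chart is mapping each date to a (week column, weekday row) cell. A sketch using only the standard library, following the GitHub-style convention of Sunday as the top row (the function name is mine):

```python
import datetime

# Calendar heatmap layout: map each date of a year to (week_column, weekday_row).
def calendar_cells(year):
    jan1 = datetime.date(year, 1, 1)
    cells = {}
    d = jan1
    while d.year == year:
        day_index = (d - jan1).days
        weekday = (d.weekday() + 1) % 7          # shift so Sunday = row 0
        week = (day_index + (jan1.weekday() + 1) % 7) // 7
        cells[d] = (week, weekday)
        d += datetime.timedelta(days=1)
    return cells

cells = calendar_cells(2025)
```

From there, each cell is one small `<rect>` at `(week * size, weekday * size)`, filled by the day's intensity; the weekly periodicity the eye reads off the finished chart is literally the row structure of this mapping.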
A railroad diagram, formalized by Niklaus Wirth in 1977 and made famous by the SQLite documentation, visualizes a grammar as a track. The reader follows the line from left to right. Required elements sit on the main line. Optional elements have a skip branch arcing above them. Repetitions have a loop arcing below. The chart was originally invented for programming-language reference manuals, where it remains the cleanest way to render a syntax. The 2025–2026 angle is that the same chart is now the cleanest way to render a tool-call schema, a JSON contract, a prompt template, or a regex pattern. Tools like railroad-diagrams.js (Tab Atkins) and regexper.com have made it cheap to draw one.
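A minimal renderer makes the grammar-to-track mapping concrete. This is a sketch, far short of what railroad-diagrams.js produces: required tokens sit as boxes on the main line, and an optional token gets a skip arc over it; loops and stacked alternatives are left out. The toy grammar (`SELECT [DISTINCT] column`) and the helper name are mine.

```python
# Minimal railroad renderer: required tokens on the main line; optional
# tokens get a skip arc over them. Toy grammar: SELECT [DISTINCT] column
def railroad_svg(tokens, box_w=80, box_h=24, gap=20, y=60):
    parts, x = [], gap + 10
    for kind, label in tokens:
        parts.append(f'<line x1="{x - gap}" y1="{y}" x2="{x}" y2="{y}" stroke="black"/>')
        parts.append(f'<rect x="{x}" y="{y - box_h / 2}" width="{box_w}" '
                     f'height="{box_h}" fill="none" stroke="black"/>')
        parts.append(f'<text x="{x + 8}" y="{y + 4}">{label}</text>')
        if kind == "opt":   # skip branch arcs over the box
            parts.append(f'<path d="M {x - gap / 2} {y} C {x - gap / 2} {y - 40}, '
                         f'{x + box_w + gap / 2} {y - 40}, {x + box_w + gap / 2} {y}" '
                         f'fill="none" stroke="black"/>')
        x += box_w + gap
    parts.append(f'<line x1="{x - gap}" y1="{y}" x2="{x}" y2="{y}" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{x + 10}" height="100">'
            f'{"".join(parts)}</svg>')

svg = railroad_svg([("req", "SELECT"), ("opt", "DISTINCT"), ("req", "column")])
```

The reader traces the main line to get a valid sentence; taking the arc skips `DISTINCT`, staying on the line includes it. That is the entire reading protocol, which is why the chart needs no legend.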
The chart you draw is a claim about which dimension of the data is most important. A pie chart claims the angles matter. A dot plot claims the rank order matters. A slope graph claims the change matters. Pick the chart that makes the claim you are willing to defend. Defaults pick a claim for you.
Every gridline, every drop shadow, every legend entry, every 3D bevel, every "smooth" interpolation between points is ink the reader has to subtract before getting to the data. Tufte's rule still holds: the chart you ship should be the one from which no further pixel can be deleted without losing meaning.
Few is right. The reader's eye should never have to leave the chart to find out what a color means. Label the line where the line is. Annotate the point you want them to see. The legend is a mechanical device for charts where direct labeling is impossible, and it is almost never impossible.
Compiled and published from Columbus, Ohio. I have been working in data professionally since the early 2010s. About ten years ago I came across Edward Tufte and Stephen Few, and their work has been the largest single influence on how I think about charts, and on how I tell stories with data, ever since. This issue is one long thank-you to the two of them.
The writing is set in Source Serif 4 and Inter Tight; the charts are drawn in pure SVG by hand. There are no advertisements, sponsored sections, or affiliate links. There are also no pie charts, except the one in figure A.
Fifteen charts in total. Eight before/after pairs grounded in the canonical perceptual research, plus seven newer charts coming out of the 2025–2026 working community: UMAP embedding scatters, attention heatmaps, AI-agent Sankey diagrams, annotated beeswarms, horizon charts, calendar heatmaps, and railroad diagrams. Every numerical claim is sourced. Every "before" is intentionally bad in the specific ways most production charts are bad in the wild.
Reach me on LinkedIn →