Three Rules for Publishing High-Impact Research
I recently read The Content Experience Report by Uberflip with much interest. After all, content strategy is a big part of what we do here at the Marketing Advisory Network. In summary, I think the report offers some good tips and is worth reading, particularly the section about navigation. However, the report also falls victim to some of the classic content blunders that frequently crop up when writing a research report. Here are some tips to help you avoid these common missteps.
Tip #1: You can’t “over-review” a research document. In a report like this, let’s face it, typos are a credibility killer. Several people should review and edit the document, and the last review should be done by a grammar and spelling pro who hasn’t seen the report before. In the case of this report, that person could have caught the spacing issues in the introduction (not a big deal) and some of the mislabeled graphs throughout (a bigger gotcha that will confuse readers).
Tip #2: Dig deep into your data to find insights. People read research to learn something new, and it’s important to deliver on that since they’re dedicating their time to your content. There were a couple of places in the report that almost got there. For example, the report dedicates a section to “Putting content in more than one place can increase views by 8x on average!” It’s great to quantify the benefit of being in more places, but this is not groundbreaking information for the reader. Deeper insight would have come from digging a level further: for instance, a predictive model that shows how much larger the audience needs to be to generate different levels of viewership, or how much placement location impacts engagement for specific types of audiences. A sketch of that kind of analysis follows.
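To make that concrete, here is a minimal sketch of the kind of predictive model I mean, assuming you had raw audience-size and viewership numbers to work with. Every value below is made up for illustration; nothing comes from the Uberflip report.

```python
import numpy as np

# Hypothetical data: audience size vs. total views for a set of posts.
# These numbers are invented for illustration only.
audience = np.array([500, 1_000, 2_500, 5_000, 10_000, 25_000])
views = np.array([40, 95, 260, 480, 1_050, 2_400])

# Fit a simple log-log model: log(views) = a * log(audience) + b.
# The slope a tells you how viewership scales with audience size.
a, b = np.polyfit(np.log(audience), np.log(views), deg=1)

# Invert the model: how large an audience is needed to hit a
# target level of viewership?
target_views = 5_000
audience_needed = (target_views / np.exp(b)) ** (1 / a)

print(f"Scaling exponent: {a:.2f}")
print(f"Audience needed for {target_views:,} views: ~{audience_needed:,.0f}")
```

Even a simple fit like this turns “more places means more views” into an actionable answer to “how much bigger does my audience need to be to hit my goal?”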
Tip #3: Graphical representations are critical. Most people are visually oriented, which means that in a report like this, how you display the data is critical to comprehension. This report has a variety of graphical representations, and some are very effective, but a couple could use improvement. Here are some specific rules of thumb related to graphs that might help:
Use the right charts for the right purpose: When comparing data points, always put them on the same chart. Asking a reader to compare two charts against one another introduces a risk of misinterpretation and is not a smooth reader experience. Specifically, in this case, the reader is asked to compare two column graphs that each have two points. All four data points could have been placed on a single chart, plotted against shared axes and labeled, to make comparison much easier, as in the sketch below.
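Here is a hedged sketch of what that single combined chart might look like, using matplotlib. The group labels, values, and axis label are placeholders, not data from the report.

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder data: one metric measured for two groups under two
# conditions. In the report, these four values were split across
# two separate column graphs.
groups = ["Hub visitors", "Direct visitors"]  # hypothetical labels
single_placement = [1.2, 2.1]
multiple_placements = [3.4, 4.8]

x = np.arange(len(groups))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, single_placement, width, label="Single placement")
ax.bar(x + width / 2, multiple_placements, width, label="Multiple placements")

# One shared axis and a legend make the comparison direct.
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.set_ylabel("Average views")  # hypothetical unit
ax.legend()
plt.show()
```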
Be consistent: If you must have readers compare charts, put them side by side (not stacked vertically) and ensure that the scales on both axes are identical, so comprehension is as easy as possible. A sketch of this layout follows.
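As a minimal matplotlib sketch of that side-by-side, shared-scale layout (the values and panel titles are placeholders):

```python
import matplotlib.pyplot as plt

# Placeholder values for two charts the reader must compare.
labels = ["A", "B", "C"]
left_vals = [2, 5, 3]
right_vals = [4, 6, 1]

# sharey=True forces an identical y-axis scale on both charts,
# and the 1x2 layout places them side by side rather than stacked.
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.bar(labels, left_vals)
ax1.set_title("Segment 1")  # hypothetical title
ax2.bar(labels, right_vals)
ax2.set_title("Segment 2")  # hypothetical title
plt.show()
```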
Put charts through a robust review process: Labels on graphs are the key to comprehension. Make them part of the review process, as mislabeled axes or titles will cause confusion.
Follow color norms: OK, this one didn’t come from the Uberflip report but from a user interface I was studying from another vendor. In this case, they were showing saturation points on a spectrum. Color intensity can be very helpful here, but they made a mistake: green was used to signal low saturation, and yellow, orange, and red were used to signal increasing amounts. Unfortunately, we have all been trained that red is bad and green is good, so visually the data told the opposite of the story they wanted to express.
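One way to sidestep the red/green trap entirely is a single-hue sequential colormap, where intensity alone encodes magnitude. A quick sketch with made-up saturation values:

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up saturation levels for illustration.
saturation = np.array([[0.10, 0.30, 0.50],
                       [0.60, 0.80, 0.95]])

# "Blues" is a single-hue sequential colormap: darker simply means
# "more", with none of the good/bad connotation that a
# green-to-red scale carries.
fig, ax = plt.subplots()
im = ax.imshow(saturation, cmap="Blues")
fig.colorbar(im, ax=ax, label="Saturation")
plt.show()
```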
I applaud anyone who takes on a research project such as this one. It is a huge undertaking, and because it is such a huge undertaking, expectations for the results are high. Taking the extra steps and time to ensure a high-quality report makes for better content and is worth it in the end.