Midwinter Bump: Preliminary Findings

How about that Midwinter Bump, eh? If you follow EarlyWord, you’ve probably seen the dramatic, immediate effect the Caldecott, Newbery, and Printz awards have had on the Amazon rankings of their respective winners.

One week out, we’ve got to ask ourselves the inevitable question: is it sustainable?

So far, we’re seeing some qualified successes in a number of categories. Here’s the chart compiling the before-and-afters that I’ve received so far. Titles whose Amazon ranking has improved since the awards are marked in green, while those that have dropped post-announcement are marked in red.

Chart the First:

What do we see here? For many of the awards, the difference has been dramatic. Children’s books don’t tend to make the Amazon top 100, yet both Dead End in Norvelt and A Ball For Daisy cracked the top 20 within a day of the award announcement. Belpré winner Diego Rivera: His World and Ours reached #2200 by January 23rd.

Even awards given to a group of books showed considerable gains. Eight of the ten Alex Award winners leapt up significantly. Of the two that didn’t make the leap, The Night Circus held steady, and Salvage the Bones is likely coming down from a much bigger bump: the National Book Award.

There may also be evidence that an award can affect the other formats a title is available in. Upon seeing that Daniel Kraus’ Rotters had won the Odyssey Award (given to the best audiobook for children or young adults), I mistakenly logged the Amazon rankings for the print book.

The result? Pre-Odyssey, Rotters had an Amazon ranking of 412,089. The day after the award announcement, it had climbed to 37,357. That’s a pretty big jump! (By contrast, the ranking for the audio CD version of the book is somewhere in the 900,000-1 million range.) This is just one example, but it would seem that the award has had some effect on sales.

Where we go from here

I feel like we’re only scratching the surface here. I think there are a number of things we can do to flesh out these initial findings and create some truly bombproof data. Do you think you can pitch in? Here are the tacks I think we can take:

Find another measurement rubric to corroborate this information. Amazon metrics are nice, but they don’t give us the complete picture. Beyond the fact that Amazon is only one seller, the ranking number doesn’t give us any information about the number of books sold. (For all we know, the difference between positions #100 and #10,000 on the sales chart is just a handful of books.) I’m hoping to identify other sources of sales information. If I have to hack into the Bookscan database, so be it.
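
To make this concrete, here’s a minimal sketch of the before-and-after bookkeeping, assuming the rankings are hand-logged into a CSV. The file name and the title/pre_rank/post_rank columns are hypothetical placeholders, not an existing dataset.

```python
# Sketch: summarize hand-logged Amazon sales ranks before and after an
# award announcement. Assumes a hypothetical CSV with columns:
#   title,pre_rank,post_rank
# where a lower rank means a better spot on Amazon's sales chart.
import csv

def load_ranks(path):
    """Yield (title, pre_rank, post_rank) tuples from the CSV."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["title"], int(row["pre_rank"]), int(row["post_rank"])

def summarize(path):
    for title, pre, post in load_ranks(path):
        change = "gained" if post < pre else "dropped" if post > pre else "held steady"
        print(f"{title}: {pre:,} -> {post:,} ({change})")

if __name__ == "__main__":
    summarize("award_winners.csv")  # hypothetical file name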

Identify non-award winning titles to serve as a control group. Getting a bump in sales is one thing, but some of these awards can set a book’s sales on a wholly different trajectory. It’d be nice to gather some of the “contenders” from various key categories. Assuming I can find a way to get historical sales records, the differences should be pretty stark. If anyone can suggest a few control titles, it’d be a big help to demonstrate the comparisons.
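
If we do assemble a control group, the comparison could be as simple as the sketch below, which reuses the load_ranks helper from the previous sketch and compares the median rank improvement of winners against contenders. Again, the file names are placeholders.

```python
# Sketch: compare winners against a hypothetical control group of
# non-winning "contender" titles, reusing load_ranks from the sketch above.
# A ratio above 1 means the group's median ranking improved.
from statistics import median

def median_improvement(rows):
    """Median of pre_rank / post_rank across a group of titles."""
    return median(pre / post for _, pre, post in rows)

winners = list(load_ranks("award_winners.csv"))          # hypothetical files
contenders = list(load_ranks("contender_controls.csv"))

print("winners:   ", round(median_improvement(winners), 1))
print("contenders:", round(median_improvement(contenders), 1))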

Create new ways to measure how libraries can influence sales. Is there merit in libraries honoring books that libraries themselves cannot circulate? Neither the current Printz winner nor the current Newbery winner is available via today’s library eBook vendors. I don’t want to espouse scorched-earth tactics, but we’re putting ourselves at risk if we continue to celebrate titles that cannot be shared. Maybe we can create a better carrot, so that such a stick isn’t necessary.

Several weeks ago, I had a conversation on Twitter about creating tools to highlight titles that are available via Overdrive, Cloud Library, or Axis 360. If we have a good set of metrics in place, this could go a long way toward quantifying the things we’ve previously considered intangible. All we’d need is a framework and a list of titles to work with.
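
Here’s a rough sketch of what such a framework might look like, with placeholder vendor checks only; real lookups against each vendor’s catalog (OverDrive, Cloud Library, Axis 360) would have to be wired in.

```python
# Sketch of an availability-reporting framework: given a list of titles,
# record which library eBook vendors report carrying each one.
# The vendor checks are placeholders -- nothing here talks to a real API.
from typing import Callable, Dict, List

VendorCheck = Callable[[str], bool]

def availability_report(titles: List[str],
                        vendors: Dict[str, VendorCheck]) -> Dict[str, List[str]]:
    """Map each title to the vendors that report carrying it."""
    return {
        title: [name for name, has_title in vendors.items() if has_title(title)]
        for title in titles
    }

if __name__ == "__main__":
    # Placeholder checks; swap in real catalog lookups per vendor.
    vendors = {
        "OverDrive": lambda title: False,
        "Cloud Library": lambda title: False,
        "Axis 360": lambda title: False,
    }
    for title, carriers in availability_report(
            ["Dead End in Norvelt", "The Night Circus"], vendors).items():
        print(f"{title}: {', '.join(carriers) or 'no library eBook availability found'}")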

Cautious optimism

So far, the results of this experiment have been quite positive. On the eve of ALA’s big eBook summit with selected Big 6 poobahs, we’re finally starting to see some signs of being recognized as valuable parts of the publishing economy. The door’s been opened a crack. Let’s see if we can blow the sucker off its hinges.

Can you help? Leave a comment, or email me at theanalogdivide at gmail if you’d like to pitch in.


  • Anonymous

    It should go without saying that this project wouldn’t be possible without the help of folks on each of the award committees, as well as everyone else who made connections and otherwise pitched in. Shoutouts are due to:

    Mary Michell, Michele Farley, Todd Kruger, Erin Helmrich, Karen Keys, Kaite Stover, Sarah Dahlen, Barbara Klipper, Neal Wyatt, Heather McCormack, Alicia Ahlvers, Jenny Levine, and Steve Teeri. 

    Massive ups are also due to everyone who commented or emailed me on the first post. I’m hoping we’re on to something here, and you’ve all helped immeasurably.

  • Jllaubond

    The key is when the library world does something that reaches the wider world in a really overt way, like these award announcements do. We can review books in LJ, post lists on the ALA website, etc., but that’s mostly for librarians. We need more mechanisms that non-librarians care about. (What about the review blurbs from Booklist, SLJ, etc. on Amazon? What impact do those have on sales?) People do value our assessments of quality; if there were more ways for us to do that in a coordinated fashion (across libraries, not just within one library) and to get those assessments to people who care, I think our influence would increase. Partnerships seem like one possible avenue.
