Creating Aftershock: Part 2

With research and drafting in full flow, the next step was to decide on a suitable platform. Given our short timeframe, we were looking for one with a great feature set that wouldn’t require masses of development tweaks to get it behaving as we wanted. We played around with Aesop Story Engine, a WordPress plug-in, but eventually concluded it wasn’t up to the challenge. Firestorm and Snowfall have set the bar for the kind of quality we expect from long-form content. That means inconspicuous slide transitions, seamless background audio and video fades, tidy navigation and a natural scrolling pace.

After assessing a popular German platform, Pageflow, we decided it would be powerful enough to deliver our vision (in fact, it’s the platform Firestorm was built on). There’s a hosted offering that doesn’t break the bank, but we went down the free self-hosted route to give ourselves ample flexibility.

Throughout these technical decisions and the writing up of the story, we were conscious of aligning our editorial and design vision with that of the DEC. So, while the developers wrangled with the mechanics of the self-hosted platform, the design and copywriting team pieced together an initial run-through of Aftershock using Adobe Premiere. We created a ten-minute rough-cut video to act as a simulation of the reader’s experience – video, audio, scrolling text and all. This gave the DEC something tangible to comment on and went down a treat. It was well worth the time it took to produce, since drawing all the threads of the project together required everyone’s tight-knit collaboration, and the simulation provided us with an indispensable focal point.

By this point, the design team were flying through the archived video content, stripping out redundant audio, trimming down segments and creating background loops. We were lucky to have a rich trove to plunder, but in an ideal world we would have someone planning video content specifically for this task right from the start of the appeal – especially the loops, which are a pain to produce without advance warning (trying to read text over too short a video loop gets pretty tedious).
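That loop-length problem can be roughly quantified. As an illustrative sketch (the function and the 200 words-per-minute reading speed below are our own assumptions, not part of the production workflow), here is a quick way to check whether a background loop is long enough for the text shown over it:

```python
# Rough heuristic: a background loop should run at least as long as it
# takes to read the overlaid text, or the viewer sees it repeat mid-read.
# The 200 words-per-minute reading speed is an assumed average.

def min_loop_seconds(word_count: int, words_per_minute: int = 200) -> float:
    """Return the minimum loop duration (in seconds) for a slide's text."""
    return word_count / words_per_minute * 60

# e.g. a 150-word slide wants a loop of at least 45 seconds
print(min_loop_seconds(150))
```

Knowing that figure per slide before the edit starts would let whoever shoots the footage capture takes long enough to loop comfortably.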

The text itself was subjected to a close shearing: no flashy adjectives were spared in the great thinning. The words had to be refined to make room for time-consuming video segments, so superfluous embellishments got the boot and we pared the narrative down to its essentials. We found in the end that drop-off was highest during video playback, which might lead us to conclude that users get bored and distracted when the speed of their progress is defined by the video, rather than their own reading pace. But we’ll dwell more on that in part three, when we look at the analytics and results of Aftershock and suggest some improvements for next time.