I spent Friday afternoon last week at LIKE Ideas 2013 where the topic of discussion was Big Data.
I’m quite sceptical of the current hype – of any hype, usually – and Dom Pollard flashing up Gartner’s hype curve, with Big Data approaching the first peak, suggests it’s worth thinking carefully before being seduced.
Origins and examples
“Big data” has emerged as a topic of interest because, while ubiquitous computing (the “internet of things”) is producing data at ever faster rates, cheap cloud storage and distributed computing frameworks like Hadoop are putting the processing power to handle it within reach of smaller users. This creates the potential for real-time sensing applications that enable faster responses.
Dom’s examples centred on the marketing value of pattern recognition, including Shazam selling data showing which songs break out in particular regions of the US, allowing marketing teams to increase effort where awareness is lower. Dom claimed this data leads Billboard’s sales charts by 4–6 weeks, though he didn’t say whether feeding it back to a marketing team actually causes a track to go national or simply provides a useful indicator of which tracks the team should focus its efforts on. He also mentioned the spin-off potential of partnerships, citing McLaren and GSK sharing technology, and Songkick and Spotify sharing music consumption patterns.
Deployment needs thought, though – it’s not just a matter of slapping on a few sensors and setting up some open source tools. Dom warns against trawling data looking for patterns, saying that you first need a hypothesis to test. I’d add that you also need to consider how to react to the results. For a quick response to an emerging situation, staff need access to the data and the authority to respond, otherwise any time advantage is lost. The question then becomes: how will you change the way you do business to exploit the data you collect?
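Dom’s hypothesis-first point can be made concrete with a small sketch. Rather than trawling for patterns, you state a hypothesis up front – say, “this track is being played more in region A than region B” – and test it. The sketch below uses a simple permutation test in plain Python; the play-count figures are made-up illustrative data, not anything from the talk.

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_permutations=10_000, seed=42):
    """Estimate a p-value for the null hypothesis that the two samples
    come from the same distribution, by shuffling the pooled data and
    counting how often a random split shows a mean difference at least
    as large as the one observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical weekly play counts for one track in two US regions.
region_a = [120, 135, 150, 160, 180, 210]
region_b = [115, 118, 122, 125, 130, 128]

p = permutation_test(region_a, region_b)
print(f"p-value: {p:.4f}")  # a small value supports the breakout hypothesis
```

The point isn’t the statistics so much as the workflow: the question comes first, the data analysis answers it, and only then is there something for a marketing team to act on.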
Infographics and the importance of story
Data visualisation specialist Michael Agar emphasised the importance of storytelling when communicating information visually. There were more than a few infographic-fatigued communication professionals in the room, and Stephen Dale raised a pertinent question about how to trust infographic data when so few include citations. Like Dom, Michael talked about the importance of starting with a hypothesis or story, then exploring the data and finally producing a visualisation. See Michael’s website for some wonderful examples of graphical communication.
Manny Cohen of RM Online then changed the pace a little with a knowledge café-style session looking at innovation over the next five, ten and fifteen years. Responses from the various tables naturally overlapped, as most extrapolated from the present, but there were a few interesting outliers. It still surprises me how optimistic people generally are about technical advances. Despite seeing so many innovative ideas fail the first time around, usually held back by how slowly society accepts them, many still think in terms of the time taken to innovate technically rather than to adopt socially.
The final LIKEmic session varied the pace again, giving James Mullan, Monique Ritchie and Andrew Grave a chance to explore some of the practical challenges faced by organisations. James mentioned enterprise search and suggested the use of dashboards as an executive sensemaking tool. I know some organisations have experimented with this and it would be great to see it turned into a generic tool allowing the selection of measures and reporting interfaces.
Monique gave an excellent overview of the challenges faced by libraries, which need to expand their capacity to deal with the volume of data research is generating, and recommended Mendeley for sharing and organising research.
The afternoon went some way towards answering the hype around big data. As with anything sold as a “solution”, the onus must be on the buyer to properly understand their needs, the opportunities and the potential impacts.