Context, cross-platform and cost: priorities for measurement?
Richard Marks, asi’s Research Director, reports from the International Publishing & Data Conference in Lisbon
This month’s International Publishing & Data Conference in Lisbon was a landmark event. The conference has been held every two years since 1981, known variously as the Worldwide Readership Symposium and, more recently, the PDRF. Meanwhile, asi has run annual television conferences since 1991, more recently adding radio. With the retirement of the PDRF’s organiser Dawn Mitchell, it made sense to all parties for asi to take over the curation and running of the event, giving it a new lease of life and enabling asi to cover the three ways of consuming media – watching, listening and now reading.
So the two-day Lisbon event marked something of a transition, perhaps reflecting an industry itself also in a state of metamorphosis.
The conference was sub-titled ‘Context, cross-platform and cost: priorities for measurement?’ and organised not just to focus on measurement techniques but also to outline the challenges facing the industry and show how the provision and use of data can secure its future.
Will publishing survive until 2030?
Our first session was hosted by Colin Morrison, author of the influential publishing business blog ‘Flashes & Flames’. He introduced an extremely cohesive session, highlighting a number of themes that would recur throughout the rest of the conference.
Colin argued that the industry’s past reliance on the drug of advertising revenue had allowed it to forget the need to serve the readers themselves, and that a realignment was needed to place readers – and subscription revenue – at the heart of business strategy. This was echoed by James Hewes of FIPP, whose Digital Subscriptions Snapshot is tracking impressive growth in online subscriptions – now 3.5m globally for the New York Times.
This realignment may mean that the measurement industry needs a renewed focus on editorial research and CRM, which has so far received only a fraction of the data-gathering funding allocated to advertising planning, trading and effectiveness. VRT’s Hanne Brasseur is a researcher embedded in the newsroom, guiding editorial decisions in real time, whilst the Lesewert reading and noting system, described by its founder Ludwig Zeumer, gives real-time feedback on print copy readership: panellists in Germany use a laser pen linked to an app to show which content they are reading and how much of it.
So what about the future of publishing? Data from Comscore’s Stuart Wilkinson examined generational differences in news consumption, with a particular emphasis on Generation Z, who will determine that future. Social media dominates their news consumption, although they are less obsessed with sharing content than millennials. The challenge, as the BBC’s Santanu Chakrabarti convincingly argued, is that social platforms like Facebook offer huge potential for reach, but also significantly damage brand recall and attribution to the newsbrand itself. Facebook – the aggregator – takes the credit. Santanu concluded, ‘Reach without recognition is just numbers on a dashboard’. If the BBC is struggling with this, then the issue will be even more acute for smaller newsbrands.
Will publishing survive until 2030? Morrison concluded that to survive, publishers need a renewed focus on the readers themselves and on the subscription model, reducing their addiction to declining ad revenue. Content brands may also need to be willing to self-cannibalise their content and allow consumers to subscribe to the elements of the offer they want – crossword, sports – rather than forcing people to subscribe to the whole package. In that regard there are many similarities with the video industry and the debate around unbundling video and TV services in the face of competition from direct-to-consumer services like Netflix.
Personally, I left this session fairly confident that the industry can survive and thrive if it can refocus on readers and subscription revenue, but less convinced that the word ‘publishing’ itself will endure. The major media brands offer integrated text, video and audio services, so the competitive set widens to include companies that were never ‘publishers’ in the first place. Of course, where that leaves the name of this conference going forward could become something of an existential headache! Meanwhile, you can listen via these links to podcasts in which I interview Colin Morrison, James Hewes and Santanu Chakrabarti.
Advertising: Context matters
If the morning was focused on editorial, then the afternoon of our first day had its sights set very firmly on advertising effectiveness. This shone through UK work conducted by Newsworks, as Denise Turner showed that, according to econometric analysis of the effectiveness of 684 campaigns over the last decade, advertisers are under-investing in news brands in print and online. Further analysis demonstrated the importance of context and the quality online environment that news brands and magazines can provide – 42% more impact on average, driven by higher levels of visibility and viewing. Ipsos has teamed up with eye-tracking experts Lumen to examine whether online ads are more effective in the quality environments that Denise described. Ipsos’ Nicholas Watson described five key drivers of attention to online ads, with quality content delivering 2.6x the attention levels for ads compared to task-based sites. David Bassett of Lumen was unable to make the event, but you can hear him in this recent asi podcast.
Pete Hammer of Marketing Scientist Group in Australia presented a detailed study which combined actual video ad viewing behaviour with recall to examine the impact of ad length, skipping and branded content. Longer ads outperform shorter ones, even for younger viewers. Whilst younger consumers are more likely to skip video ads, skipped ads can still have some impact.
Of course, one approach to increasing attention is the use of ‘native advertising’ – defined by Britta Cleveland of Meredith as an ad experience that ‘…follows the natural form and function of the user experience in which it is placed.’
Data shown by GfK’s Mickey Galin indicated that native can be highly effective, but it remains a controversial topic given the need not to step over the boundary from ‘following the natural form’ to actively misleading the reader as to what is advertising and what isn’t. The reader experience and publisher brand integrity have to remain the priority, particularly in light of the conclusions of the morning session.
A major priority for publishing data going forward will be how it can interface with DMPs and the programmatic placement of advertising – a system in which content and advertising become divorced from one another. Ingvar Sandvik of Kantar described a Norwegian solution to this ‘fourth wave’ of digital advertising, whilst on the Friday Karin Schut of ViNEX in The Netherlands teamed up with Kantar’s Jonathan Brown to show how research demographics can be fed into DMPs. It will be critical for the future of publishing research, and the currencies in particular, that we move beyond the simple provision of respondent-level databases to designing APIs that interface with wider advertising ecosystems.
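To make that data flow tangible, here is a minimal sketch of what feeding survey-derived demographics into a DMP might look like at the payload level. The field names and segment labels are invented for illustration – no real DMP’s API is being described, and the Kantar/ViNEX work was not presented in code.

```python
# Hypothetical sketch of what "feeding research demographics into a DMP"
# might look like at the data level. The payload shape and segment names
# are invented for illustration, not any real DMP's API.
import json

def build_profile_update(matched_id: str, segments: list) -> str:
    """Build a profile-update payload attaching survey-derived segments
    to an online identifier (cookie or device ID)."""
    return json.dumps({
        "id": matched_id,
        "source": "readership-currency",  # provenance of the segments
        "segments": segments,             # e.g. modelled demographics
    }, indent=2)

# A matched respondent whose survey data classifies them as 25-34
# and a heavy newsbrand reader:
print(build_profile_update("cookie-abc-123",
                           ["age_25_34", "newsbrand_heavy_reader"]))
```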
A major challenge for the publishing industry up to now has been to provide advertisers with audience data across platforms, print and online, with net reach as well as aggregate numbers. NOM’s Irena Petric partnered with GfK MRI’s Jim Collins to show how fusion can be used to align data sets, whilst Jim went a step further in showing how the CN1 project in the States tracks the brand reach of Condé Nast across print, online and social media.
Our advertising session highlighted how readership research is adapting to the evolving needs of advertisers and media agencies, but the ‘elephant in the room’ remains their attraction to, and usage of, proprietary data sets from media platforms not subject to the same independent scrutiny as industry currencies. In a late addition to the programme I showcased work done in the UK for the IPA and ISBA to frame the argument to the industry as to why independent audited data is so vital. This included a call to action to advertisers with a five-point plan, which can be downloaded from the IPA website here. A lot of great work is being done to make the case for news brands and magazines as highly effective components of a media plan, but it is essential that the industry has a level playing field in which balanced, informed decisions can be made.
More for less? The future of publisher data
Friday’s presentations reflected a closer focus on methodological approaches to measuring audiences, with a particular emphasis on currency research. I remarked as chair for the session that a major challenge for the industry is the prevalence of ‘cognitive polyphasia’: the ability of clients to believe two apparently contradictory things at the same time. This is prevalent across many areas of life at the moment, but in currency research it really boils down to believing that readership research must a) expand to measure audiences across multiple platforms in a more granular and timely way and b) cost less. This is further compounded by the need to continue to measure print audiences, which are falling and therefore require larger samples to maintain robust measurement – the lower print readership gets, the more expensive it is to measure: a vicious circle. A back-of-envelope calculation below illustrates why.
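The figures here are illustrative assumptions, not from any conference paper. Under simple random sampling, the relative standard error of an estimated readership proportion p from a sample of n is approximately

\[
\mathrm{RSE}(\hat{p}) \approx \sqrt{\frac{1-p}{np}}
\quad\Longrightarrow\quad
n \approx \frac{1-p}{p \cdot \mathrm{RSE}^2}
\]

So to hold a 5% relative standard error, a title read by 10% of the population needs roughly n = 0.9 / (0.10 × 0.0025) ≈ 3,600 respondents; if readership halves to 5%, that rises to 0.95 / (0.05 × 0.0025) ≈ 7,600 – the sample must roughly double to deliver the same precision.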
Harald Amschler and Jella Hoffmann of the Swiss readership JIC WEMF got the second day underway with a paper titled ‘More for less! More for less?’, a title which we co-opted for this whole session as we examined a variety of ways in which measurement can become more cost-effective without compromising quality. WEMF has combined extended reporting periods (allowing smaller samples) with increased mobile recruitment and added user benefits. The paper stressed the importance of correct messaging to win industry approval of changes, and went on to win the Dawn Mitchell Award for best paper, voted for by conference delegates in real time via the asi app.
The use of face-to-face interviewing is under increasing pressure, not just because of the high cost of this ‘best in class’ approach, but also because of interviewer availability and the belief that other approaches may be better at reaching specific sub-groups of the population. Neil Farrer and Scott Jakeways of Ipsos gave a timely presentation examining the relative benefits of combining face-to-face interviewing with online self-completion, and the merits of online-first and online-second approaches.
A very popular way to reduce pressure on sample sizes is the use of modelling, which featured as a technique in many papers across the two days. Nielsen’s Jonathon Wells highlighted opportunities to use AI, and specifically Bayesian methods, to increase the stability of data from low sample sizes, whilst Ipsos’s Mario Paic provided a very useful definition of six forms of modelling, from weighting through to fusion. Katherine Page outlined how modelling is being deployed on the recently launched PAMCo survey, but reminded us that whilst it can enhance the work we do – particularly when it comes to combining on- and offline datasets – the data needs to be there in the first place as an input for the calculations. An algorithm or machine learning cannot produce data out of thin air. Modelling is part of the future for readership research but is not a replacement for it. The robots can help but will not take over.
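The conference models themselves were not published as code, but the core Bayesian idea – stabilising a noisy small-sample estimate by shrinking it towards a prior – can be sketched in a few lines. All priors and figures below are illustrative assumptions, not anything presented in Lisbon.

```python
# Minimal sketch of Bayesian shrinkage for a small-sample readership
# estimate (beta-binomial). Illustrative only: not the model presented
# at the conference.

def shrunk_readership(readers: int, sample: int,
                      prior_rate: float = 0.10,
                      prior_weight: float = 200) -> float:
    """Posterior mean of a readership proportion under a Beta prior.

    The prior has mean `prior_rate` and acts like `prior_weight` extra
    'virtual' respondents (e.g. informed by earlier survey waves)."""
    a = prior_rate * prior_weight        # prior "readers"
    b = (1 - prior_rate) * prior_weight  # prior "non-readers"
    return (a + readers) / (a + b + sample)

# A raw estimate of 3 readers in a cell of 12 would be a noisy 25%;
# the posterior pulls it back towards the 10% prior mean.
print(f"raw:    {3 / 12:.3f}")                     # 0.250
print(f"shrunk: {shrunk_readership(3, 12):.3f}")   # ~0.108
```

The larger the cell sample, the less the prior matters, which is exactly the behaviour wanted when the goal is to stabilise small cells without distorting well-measured ones.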
Whilst not motivated simply by a desire to cut costs, the Dutch TMAM initiative has received a lot of attention as it attempts to align video, audio and readership measurement into a joint initiative. An announcement on the form of the new contracts is imminent and Irena Petric of NOM gave an overview of the likely shape of the service, which is not one huge single-source survey but a series of aligned methodologies for each medium with a common online component.
One way in which publishers themselves are saving money is in curtailing their print versions and going online only. Neil Thurman of LMU Munich looked at the case study of the NME music paper in the UK as the first step towards a predictive model for the readership impact of making this transition.
Overall, this session demonstrated that a lot is changing in the design of readership currencies. Modelling, more efficient use of samples and mixed-mode interviewing will be key to delivering ‘more for less’, alongside greater co-operation and resource sharing with other industry currencies.
Readership around the world
Katherine Page got our final afternoon session underway with an overview of major developments around the world in readership measurement techniques. This was followed by a series of papers highlighting developments in specific markets. Key trends highlighted by Katherine included reductions in sample sizes and the adoption of less expensive data collection approaches, the targeting of hard-to-reach groups through adapted samples, and the use of mobile and online data collection. As print readership falls, stabilising estimates is an increasing challenge. Metrics beyond simple Average Issue Readership are also on the agenda.
For the Indian readership survey, a primary challenge is security, and Nielsen’s Dolly Jha outlined how the increased use of digital tracking and the release of sample to interviewers in near real time is protecting the service against tampering. After a series of papers describing necessary reductions in sample, the revelation that the sample size for the Indian Readership Survey is 330,000 caused many delegates’ jaws to hit the floor – but then it does serve a population of 1.4 billion!
The new French readership survey, as previewed by Gilbert Saint Joanis of ACPM, includes a number of innovations, including an app respondents use to scan the titles they are reading, whilst Pat Pellegrini described how Vividata in Canada is taking a range of steps to remain relevant to the market, including a new currency survey presented by Josh Cormie of Ipsos. It is striking how the newer readership contracts echo the approach that is increasingly the norm in television currencies – fusing online panel data to a core survey, blending passive measurement and recruited samples.
A common concern for new and existing readership currencies is the ability to interface with DMPs, and in Romania Arina Ureche and Kimmo Kiviluoto showed how BRAT, the currency JIC, and Syno International have teamed up to deliver, in effect, a national DMP for their SATI survey (Internet Audience and Traffic Measurement), via which websites can access and integrate profile data in real time.
Looking across the whole of our second day, it is clear that readership research is in a period of disruption – and I use the word in the ‘Silicon Valley’ sense, as a good thing as well as a challenge.
Readership research is successfully incorporating on- and offline audiences, using fusion techniques to bring those datasets together as well as improving data quality. There is a clear focus on survey methodologies – how they are designed – but also on the outputs: what systems currencies need to interface with, how the data will be used and an increased need for real-time availability of data.
It’s clear that we are moving further and further away from the world of the quarterly respondent-level database and into a world of APIs, DMPs and real-time delivery.
Where do we go from here?
As this was the first event in its new incarnation, we concluded with a debate chaired by Kantar’s Jennie Beck looking at the future of the conference – how can it evolve to meet the needs of the industry and what format should it take in the future?
One of the primary challenges, to circle back to that first session on Thursday, is an existential one: what is the collective noun for the industry this conference serves? Is ‘publisher’ an increasingly anachronistic term? The challenge is that ‘publishing’ isn’t just about readership anymore when the dominant ad format online is video, whilst audio podcasting also plays an important role for many publishers.
Katherine Page and Mario Paic quoted Rodney Harris, a UK media agency director in the 1980s, who said:
‘Media research is not designed to find out the truth, it is a treaty between interested parties.’
The mission of asi is always to get those interested parties talking to each other and we will be getting in touch early next year about future plans.
In the meantime, a number of the themes discussed in Lisbon will also feature at our Television & Video and Radio & Audio Conferences in Prague on 6th-8th November.
We hope to see you there.
Originally posted by asi
24th September 2019