- I recently attended the LSE's Future of Academic Impacts event primarily to learn about 'writing your impact case study'. This proved to be an extremely useful and informative session even though its focus was primarily on social science research impact. My comments in this storify relate to the insights generated from this event that I think are relevant for UoA36 media, communication and cultural studies. Let's start with the uncertainties, of which there are many...
Uncertainties around 'reach', 'significance' and 'dissemination'
Uncertainty remains over the importance of reach and significance and their relationship with dissemination. So, whilst dissemination is an increasingly important component of funded research, it is by itself insufficient to demonstrate reach (the extent and diversity of communities, environments, individuals or organisations that have benefitted or been affected by research outcomes) or significance (which relates to demonstrable effect and 'change'). From the REF2014 guidance, it appears that significance is more important than reach and, of course, more difficult to evidence, especially when talking about media outputs like radio or TV programmes, journalism articles, exhibitions or musical compositions. Nonetheless, there are alternative metrics now available that can perhaps help to demonstrate the significance of research impacts. Alistair Brown provides one example.
- In thinking about how to evidence significance in our REF2014 impact case studies, it is also worth working back from effects, changes and benefits to the underlying research. Interestingly, you might consider whether securing partnership funding as a result of the changes that your research brought about can be considered an impact, especially if a non-academic body is involved.
Envisioning the overall narrative
The overall narrative is incredibly important and this was reinforced by David Sweeney, responsible for REF2014 at HEFCE. As those responsible for narrating impact, it's important to ask ourselves the 'so what' question. A useful piece of advice from the LSE impact event was to start with the user side and work backwards to the underlying research rather than the other way round. Furthermore, try to avoid including only the most cited academic work and instead think about the work where you were most engaged with a wider public - this is likely to be the best route to impact. Within the narrative, it is also important to give equal weight to research and impact, especially in the opening short summary. In detailing the impact, it is important to build towards a strong, factoid-intensive narrative on impacts. While corroborating sources are important, try to avoid an over-reliance on weak testimonials. When used, testimonials need to be specific and evidence 'change' as linked to the underpinning research, not just act as a reference for individual academics. It is also advisable to use quotations extracted from testimonials in the narrative, especially if they describe change. Ultimately, for an arts and humanities researcher producing the case study narrative, it is accepted (and encouraged) that these will be iterative and dynamic accounts.
- Here, Jane Tinkler's (@janetinkler) advice on what is feasible in assessing impact is valuable. Of particular interest to UoA36 is how we go about defining the 'recorded occasions of influence' and what share of an outcome is attributable to our research. Could it be that when working collaboratively with an art practice community the co-creation of research and impact is an outcome worth including? In UoA36, where practice-based work is likely to be prevalent, demonstrating impact might be relatively easy, but tracing it back to underpinning research outputs (at least in the traditional sense of journal articles or monographs) may be more problematic.
What is impact and what isn't?
For the purposes of the REF2014 impact case studies, it is important to differentiate between net and gross impacts and non-impacts. So, we have to be careful not to over-claim or make infeasible estimates (that cannot be corroborated) and should try to maximise the level of detail and specificity when asserting impacts. I've already mentioned the importance of providing evidence to support a theory of change from research outcomes, and it is here where there is a danger of moving into the terrain of non-impact. Dissemination activities on their own, without any follow-on activity or evidence of take-up outside of academia, are likely to fall foul of all panels. Outline as clearly as you can the 'effect, change, or benefit' that your research has produced. Public outputs, like media coverage, will also need to go further than providing audience figures to trace change, whether on individuals, organisations or communities. Crucially, being perceived as an 'expert' in your field and providing expertise or advice is also insufficient to demonstrate impact. If that advice has led to the implementation of a new policy, new guidelines or changes to practice, then it will move above the threshold detailed in Jane Tinkler's slide, below.
- The Future of Impact
Moving slightly beyond the here and now of REF2014, the Future of Academic Impact event also generated some really valuable debate which is relevant for UoA36 researchers. For example, there was a lot of discussion about how different disciplines and academic fields assess impact. Cameron Neylon, Advocacy Director, Public Library of Science (PLOS) received a lot of support for his presentation in which he argued for a more nuanced understanding of impact and how to assess it beyond traditional academic boundaries. His follow up blog post is accessible, below.
- Neylon proposed a definition of impact as being about re-use.
- However, there was also some criticism of academic institutional navel-gazing over the impact agenda, expressed by a number of tweeters, and questions from the floor related to the lack of discussion about the 'public' and audiences.
- One of the most interesting discussions of the day related to the role of traditional academic publishers and impact measures with the emergence of crowdsourced environments like Mendeley and the altmetrics movement.
- Jason Priem, Co-Founder of ImpactStory, generated some significant debate by arguing that there is a need for a wider range of impact measures to be used, including Twitter data. He posted his slides via Twitter immediately following his presentation to emphasise the changing nature of academic work.
- Of importance to all of us who use social media to assess the 'reach' and, at times, the 'significance' of impacts from our research is whether these tools will be recognised in the future by those who judge the quality of research and distribute funding accordingly. One of the most fascinating debates at the event was over the extent to which the peer review process protects academics and gives them the opportunity to develop their 'authoritative voices'. Ziyad Mirar, from Sage, argued that it did, whilst those supporting the altmetrics perspective felt that the authoritative voice is increasingly held to account by a range of sources, including on social media.