How Do You Define Impact? (#whatisimpact)

Image credit: Shutterstock (http://ow.ly/LwRpd)

Let’s put our heads together and define ‘impact’.

What is impact? Is it a result? Is it influence? How do you define it?

I’ve yet to meet a person who doesn’t want to have an impact. Some people are clear about the impact they want to achieve, and they have methods in place to measure it. Many others, however, confuse impact with activity. And when people define impact in different ways, a clear disconnect often follows.

Clearly defining your desired impact will help you set better goals and choose the right metrics to measure your progress. It’s easy to get this wrong. Here are two simple examples:

  1. Media Clip Count: Some organizations treat the number of media placements they’ve earned as a measure of impact. If you work in one of those places, I’m sorry to break the news to you – but clip count is a useless statistic. It doesn’t measure impact. A single story in the right publication, targeting the right audience, may yield more results (greater impact) than a thousand stories shared with the general public. Impact is what happens in the weeks, months and years after stories run, and it’s much more difficult to measure. Despite this, many communicators are still evaluated on how many placements they earn. If increasing brand awareness or generating new business are your goals, choose metrics that tell you whether they’ve improved (or not). Don’t take the easy way out by creating a false connection between activity and results. (A minimal sketch contrasting the two kinds of metrics follows this list.)
  2. Published Paper Count: I roll my eyes every time I hear someone say they’re being evaluated on how many papers they publish. More papers, more impact, right? No! Not right. With 1.8 million scholarly papers published each year, I’d argue that we don’t need more papers – we need better papers. Valuing quantity over quality is a recipe for disaster. It causes people to focus on maximizing output (pumping out papers) instead of impact. What kind of influence is the research having on the rest of the scientific community? On society? Measure that, and you’re measuring impact. And citations don’t tell the whole story either.
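
To make the contrast concrete, here is a minimal sketch in Python – the numbers and field names are made up for illustration – of an activity metric (clip count) versus outcome metrics tied to a goal such as generating new business:

```python
# Hypothetical data: two media placements from the same campaign.
from dataclasses import dataclass

@dataclass
class Placement:
    outlet: str
    reach: int            # audience size (activity)
    referral_visits: int  # site visits attributed to the story (outcome)
    inquiries: int        # new-business inquiries attributed (outcome)

placements = [
    Placement("Trade journal read by target buyers", 8_000, 950, 40),
    Placement("General-interest wire pickup", 2_000_000, 300, 2),
]

# Activity metric: clip count treats both stories as equal.
print("Clip count:", len(placements))

# Outcome metrics: tie each story to the goal it was meant to serve.
for p in placements:
    rate = p.referral_visits / p.reach
    print(f"{p.outlet}: {p.inquiries} inquiries, {rate:.2%} of reach visited")
```

The small, targeted placement wins on every outcome metric, while clip count sees no difference between the two.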

What About Broader Impacts?

If you have an interest in winning a research grant from the National Science Foundation, you have to understand the potential impact of your work – and it had better be measurable. NSF won’t even consider your proposal unless you clearly show how the research benefits science and society. I’ve included links to more information about the NSF Broader Impacts Criterion below.

See, knowing and measuring impact has never been more important!


How do you define impact? Share your thoughts:

  1. Leave a comment on this blog.
  2. Share your thoughts on Twitter using the hashtag #whatisimpact.
  3. Leave a comment where you found this post on LinkedIn, Facebook or Flipboard.

Let’s work together to define impact. I look forward to sharing the results in a future post.


Additional Resources:

Measurement science – how should research and impact be assessed? James Wilsdon, The Royal Society Publishing Blog, April 10, 2015

Science Funding for the Masses – A Nature article examining NSF’s Broader Impacts Criterion

Broader Impacts Review Criterion – NSF Letter

NSF Broader Impacts Perspectives Brochure

NSF Merit Review Broader Impacts Criterion – Representative Activities (examples)

NSF Broader Impacts – Special Report (examples)

NSF Grant Proposal Guide – Section II.C.2.d – Project Description

NSF Grant Proposal Guide – Section III.A.2 – NSF Proposal Processing and Review 

10 thoughts on “How Do You Define Impact? (#whatisimpact)”

  1. Real impact is defined differently for each piece of work… and is usually accomplished by teams, over time. That makes it largely unsuitable as an individual performance metric, but HR depts and academic administrators seem to require something — thus, our current quandary.

    It might be possible to counter the impulse to *count* by putting together a portfolio of impact success stories — going from initial communication to publication to resulting action in the real world, but it would take that much time away from actually achieving impact.


  2. You make an excellent point about setting goals. I also think it’s important in any activity (publishing papers or pitching to media) to set both short-term (immediate) and long-term impact objectives. Curation of positive (meaningful) mentions over the longer term (weeks, months, years) demonstrates influence, longevity… impact.

    I’ve given this much thought lately. In drafting a post for the Perspectives blog, we analyzed the value of the publisher’s scicomm efforts and used one journal as an example. Positive influence and increased value among the target community is a good measure of impact (for me). http://www.digital-science.com/blog/perspectives/why-scicomm-the-give-and-take-of-not-for-profit-scholarly-publishing/


  3. This might be a little further afield from what you’re looking for, but as someone who helps scientists communicate with policymakers, I find impact is often harder to measure. Science is one of many inputs policymakers are dealing with, and increasingly we see politicians cherry-pick science instead of relying on resources like the National Academies. So impact for me is seeing a politician engage in a policy debate with established science as a baseline for decision-making. (Similarly, seeing more politicians understand that uncertainty and confidence are important for gauging risks, not an excuse to ignore science.)

    On a separate note, I also measure impact by looking at scientists who we’ve worked with over the years and seeing what sort of communications opportunities they’ve had. I can think of several scientists who have done workshops with us or some other communications-focused work that then prepared them to take on more publicly facing educational opportunities. Generally, the scientific community doesn’t do a great job measuring how things like that affect people’s careers, impact, etc. Like many others in the field, I’d like to figure out a way to do so that doesn’t feel too time consuming.

    -Aaron Huertas, Union of Concerned Scientists


  4. Thanks for stimulating an important discussion Kirk! The misrepresentation of outputs as outcomes is endemic across most areas of endeavor – not surprisingly, as measuring outputs is so much easier!

    In US academia, output metrics are still the primary benchmarks that determine promotions and pay increases – number of publications, citations (or increasingly, H-index, which is more nuanced), and grants/grant dollars. This is changing elsewhere from what I have seen (I’d be interested in a perspective from the UK), but output remains the primary metric of success or failure. I’ve seen very little interest in capturing outcomes or impact in faculty evaluations – possibly because they are harder to capture.

    In terms of communication, engagement and impact (and as part of this, the role of academics in society/public discourse), there have to be specific, measurable goals, otherwise the default position is bean-counting (and the ensuing depression when you feel you don’t have enough beans!). If the goals are chosen well, media clips, papers, citations, followers etc can be effective indicators of success – but only if there is a clear link between what is measured and how that relates to impact.

    For instance, when I was working in DC on safe and responsible nanotechnology development, one of our goals was to stimulate a public dialogue that led to informed policy decisions. Extensive media coverage over a period of time that not only came to the attention of congressional staffers, but also conveyed expertise and authority (e.g. above the fold on page one of WaPo, or AP coverage) was a highly effective metric for how impactful we were being.

    It’s particularly tough to evaluate the impact of online science communication – especially if you’re not sure what specific impact you want to have. Numbers can be helpful if you know your goals – for instance, with the YouTube channel Risk Bites, one of my goals is to convey basic principles of risk science to non-expert audiences in their late teens/early 20s – and so metrics of success include not only the number of video views, but also time watched (engagement) and media pickup (worthwhile content). (A quick sketch of the watch-time metric follows this comment.)

    And back to academic work. Citations can be a powerful indicator of impact if you know what you are setting out to achieve. In 2006, I was lead author on a Nature paper that outlined a research strategy for nanotechnology safety research – we wanted people to read this, and incorporate it into their own research agendas, and one of the more effective ways of gauging this was to look at citations (an indicator of where people thought that we had said something important enough to use and build on). Google Scholar citations are something over 900 for that paper – which is a strong indicator of value and relevance. Of course, to get a full sense of the impact of that particular paper, you also need to consider the evolution of the field as well as research, governance and policy decisions that have been made over the past 9 years – and this is where fully capturing impact gets tough, as in many cases, you can only do it through extensive and costly impact evaluations!

    But then again, there are people and groups with expertise in impact and outcome evaluation – maybe someone should be funding more work in this area within scicomm!

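    A quick sketch of the watch-time metric described above, with made-up numbers (YouTube Analytics reports views and total watch time per video):

    ```python
    # Hypothetical per-video stats of the kind YouTube Analytics reports.
    videos = [
        {"title": "What is risk?", "length_s": 180,
         "views": 12_000, "watch_time_s": 1_080_000},
        {"title": "Dose makes the poison", "length_s": 240,
         "views": 40_000, "watch_time_s": 2_400_000},
    ]

    # Views alone reward clicks; average fraction watched rewards engagement.
    for v in videos:
        avg_watched = v["watch_time_s"] / (v["views"] * v["length_s"])
        print(f"{v['title']}: {v['views']} views, {avg_watched:.0%} watched")
    ```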

  5. I’ve often seen how effective and engaging science communication leads to more opportunities opening up. I’m referring to a kind of snowball effect where scientists become trusted public experts and receive more invitations to talk, debate, present, etc. Therefore, one of the many ways in which we could possibly measure impact is by looking at researchers’ communication or engagement footprint, counting the number of events and people who were meaningfully engaged (this could include face-to-face events, media profile and social media engagements). Not easy, but it must be possible to develop such indicators.


  6. The hard thing is that so many current means of capturing impact–media clip counts, citations, altmetrics, and all the rest–are (at best) _proxies_ for impact.

    A good example can be found in Andrew’s point above, where he talks about citations as a means to understand readership and reuse of his research strategy paper. I’d argue that citations are a much rougher measurement than something like downloads (which can get at readership, but themselves can’t guarantee that everyone who’s downloaded the paper has actually read it) or “forks” (to use the OSS terminology; a fork indicating that a project/concept/dataset/etc has been reused by others).

    That said, Andrew’s point is striking – citations are the coin of the realm in academia, and papers are among the only recognized means of sharing developments in research. I hope that soon, we in academia will be able to recognize, respect, and use supplementary measures of impact – whatever they might be – to help us get at flavors of impact beyond citations. (A small sketch of pulling one citation proxy programmatically follows this comment.)

    On another note, I’ve been doing a lot of reading and thinking about the REF in the UK, and am impressed with how UK institutions communicate impact using “impact case studies”, like these [1]. They are narrative/qualitative, and in some ways communicate impact more effectively than metrics alone ever could.

    And yet preparing these case studies (and associated REF documentation) reportedly takes up to 40% of administrative time in UK higher ed institutions. That fact–along with the great points Jim makes in his blog post, about the differences between applied and basic research–are enough to make you ask, “Is determining impact even worth it?”

    Stacy Konkiel
    Research Metrics Consultant
    Altmetric

    [1] http://impact.ref.ac.uk/CaseStudies/Results.aspx?UoA=8

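    Citation counts, one of the proxies discussed above, can at least be pulled programmatically. Here is a minimal sketch against the public Crossref REST API; the DOI below is a placeholder, not a reference from this post:

    ```python
    # Fetch Crossref's citation proxy ("is-referenced-by-count") for a DOI.
    import requests

    def citation_count(doi: str) -> int:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        resp.raise_for_status()
        return resp.json()["message"]["is-referenced-by-count"]

    print(citation_count("10.1000/example-doi"))  # placeholder DOI
    ```

    As Stacy notes, any such number is only a proxy – it means something only once it’s tied to a goal, such as evidence that a paper is being read and built on.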

  7. Tough question, Kirk! For me it’s hard to even begin to think about when I’m not sure if you are referring to the individual, project or program, or institutional level. At the individual level there’s so much more to choose from, and new work is being done all the time to try to capture more “emerging” impact of scholarship, but as many point out, there are many flaws in the system. That is at least in part due to the fact that, as you and Marty and others point out, so much good work is done by teams. And over timescales that don’t lead to easy evaluation.

    I went to the American Evaluation Association meeting not too long ago, and was intrigued by how much work has already gone into this topic, which I wrote about here: http://scienceunicorn.blogspot.com/2013/10/thoughts-from-evaluation-association.html
    And I would tend to agree with Stacy that narratives are the most compelling. In so many cases, impact is no one thing. Here I attempted to tell a major “success” story that had so many elements of impact that it’s hard to even know where to start: http://scienceunicorn.blogspot.com/2014/02/beyond-buzz-change-making-with-science.html To complicate that story, I’d also note a whole lot of edginess around what success/impact looks like — one person’s impact can cause blowback for another…


