Defining Impact: Here’s What You Said About It!

Image: Two businessmen holding signs (Shutterstock, http://ow.ly/LDRbQ)

I kicked off the week by asking a pretty tough question: How do you define impact?

The response was incredible, resulting in a wave of opinions shared on this blog, on Twitter and LinkedIn, and via email.

Reading through your feedback, I noticed several common themes:

  1. Measuring true impact isn’t easy. It takes a lot of time and effort to get it right.
  2. Science and academia often measure activity and label it as impact. This happens because measuring activity IS easy.
  3. Often, impact is defined as ‘whatever your boss/organization considers the impact to be.’ This is important. You want to support your superiors (and your organization), but you also want to make a real difference. Finding that balance is difficult.
  4. There is value in tracking an academic’s/scientist’s communication activities to determine if there is a tangible benefit, or if audience opinions/behaviors change following the outreach. Unfortunately, this is a huge time commitment for communicators, and it is time they rarely have.
  5. Case studies provide an interesting way to tell an impact story using quantitative and qualitative information. They can describe how an activity, or series of activities, created impact – or a result of some sort. Again, this takes time.
  6. It is important to set good goals and to choose the right metrics. Determining the impact you want to have will help you measure how successful you’ve been.

Here’s what you had to say:

Marty Downs (@MartyDowns)

Independent science communicator and research development professional

“Real impact is defined differently for each piece of work….and is usually accomplished by teams, over time. That makes it largely unsuitable as an individual performance metric, but HR depts and academic administrators seem to require something — thus, our current quandary.

It might be possible to counter the impulse to *count* by putting together a portfolio of impact success stories — going from initial communication to publication to resulting action in the real world, but it would take that much time away from actually achieving impact.” 

Faith Kearns (@frkearns)

Scientist and communications practitioner with an academic background in ecology

“At the individual level there’s so much more to choose from, and new work is being done all the time to try to capture more “emerging” impact of scholarship, but as many point out, there are many flaws in the system. That is at least in part due to the fact that, as you and Marty and others point out, so much good work is done by teams, and over timescales that don’t lead to easy evaluation.

I went to the American Evaluation Society meeting not too long ago and was intrigued by how much work has already gone into this topic, which I wrote about here.

And, I would tend to agree with Stacy that narratives are the most compelling. In so many cases, impact is not one thing. Here I attempted to tell a major “success” story that had so many elements of impact that it’s hard to even know where to start. To complicate that story, I’d also note there’s a whole lot of edginess to what success/impact looks like — one person’s impact can cause blowback for another…”

Jenny Ryan (@JRyanCS)

Communications Manager, Canadian Science Publishing

“You make an excellent point about setting goals. I also think it’s important in any activity (publishing papers or pitching to media) to set both short-term (immediate) and long-term impact objectives. Curation of positive (meaningful) mentions over the longer term (weeks, months, years) demonstrates influence, longevity… impact.

I have given this much thought lately. In drafting [this] post for the Perspectives blog, we analyzed the value of the publisher’s scicomm efforts and used one journal as an example. Positive influence and increased value among the target community are good measures of impact (for me).”

Aaron Huertas (@aaronhuertas)

Science Communication Officer, Union of Concerned Scientists

“This might be a little further afield from what you’re looking for, but as someone who helps scientists communicate with policymakers, impact is often harder to measure. Science is one of many inputs policymakers are dealing with and increasingly, we see politicians cherry-pick science instead of relying on resources like the National Academies. So impact for me is seeing a politician engage in a policy debate with established science as a baseline for decision-making. (Similarly, seeing more politicians understand that uncertainty and confidence are important for gauging risks, not an excuse to ignore science.)

On a separate note, I also measure impact by looking at scientists who we’ve worked with over the years and seeing what sort of communications opportunities they’ve had. I can think of several scientists who have done workshops with us or some other communications-focused work that then prepared them to take on more publicly facing educational opportunities. Generally, the scientific community doesn’t do a great job measuring how things like that affect people’s careers, impact, etc. Like many others in the field, I’d like to figure out a way to do so that doesn’t feel too time consuming.”

Andrew Maynard (@2020science)

Director, University of Michigan Risk Science Center

“The misrepresentation of outputs for outcomes is endemic across most areas of endeavor – not surprisingly, as measuring outputs is so much easier! In US academia, output metrics are still the primary benchmarks that determine promotions and pay increases – number of publications, citations (or increasingly, H-index, which is more nuanced), and grants/grant dollars. This is changing elsewhere from what I have seen (I’d be interested in a perspective from the UK), but it is still the primary measure of success or failure. I’ve seen very little interest in capturing outcomes or impact in faculty evaluations – possibly because they are harder to capture.

In terms of communication, engagement and impact (and as part of this, the role of academics in society/public discourse), there have to be specific, measurable goals, otherwise the default position is bean-counting (and the ensuing depression when you feel you don’t have enough beans!). If the goals are chosen well, media clips, papers, citations, followers etc can be effective indicators of success – but only if there is a clear link between what is measured and how that relates to impact.

For instance, when I was working in DC on safe and responsible nanotechnology development, one of our goals was to stimulate a public dialogue that led to informed policy decisions. Extensive media coverage over a period of time that not only came to the attention of congressional staffers, but also conveyed expertise and authority (above the fold on page one of WaPo, or AP coverage, for example) was a highly effective metric for how impactful we were being.

Evaluating the impact of online science communication is particularly tough, especially if you’re not sure what specific impact you want to have. Numbers can be helpful if you know your goals. For instance, with the YouTube channel Risk Bites, one of my goals is to convey basic principles of risk science to non-expert audiences in their late teens/early 20s, and so metrics of success include not only the number of video views but also time watched (engagement) and media pickup (worthwhile content).

And back to academic work. Citations can be a powerful indicator of impact if you know what you are setting out to achieve. In 2006, I was lead author on a Nature paper that outlined a research strategy for nanotechnology safety research – we wanted people to read this, and incorporate it into their own research agendas, and one of the more effective ways of gauging this was to look at citations (an indicator of where people thought that we had said something important enough to use and build on). Google Scholar citations are something over 900 for that paper – which is a strong indicator of value and relevance. Of course, to get a full sense of the impact of that particular paper, you also need to consider the evolution of the field as well as research, governance and policy decisions that have been made over the past 9 years – and this is where fully capturing impact gets tough, as in many cases, you can only do it through extensive and costly impact evaluations!

But then again, there are people and groups with expertise in impact and outcome evaluation – maybe someone should be funding more work in this area within scicomm!”
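[Editor’s note: the H-index Andrew mentions is the largest number h such that a researcher has h papers cited at least h times each; it rewards a consistently cited body of work rather than a single highly cited paper. A minimal sketch of the calculation, using made-up citation counts:]

    def h_index(citations):
        """Return the largest h such that h papers have >= h citations each."""
        h = 0
        for rank, count in enumerate(sorted(citations, reverse=True), start=1):
            if count >= rank:
                h = rank  # the top `rank` papers all have >= rank citations
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers are cited at least 4 times each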

Jim Grange (@JimGrange)

Experimental cognitive psychologist and university lecturer

“I argued in a recent blog post that funders (the argument was mostly targeted at U.K. funders, but I suspect it applies to the global funding arena) should strongly consider applications with no clear impact. Funding only research with demonstrable impact is short-sighted and stymies fundamental basic science.”

Marina Joubert (@marinajoubert)

Science communication researcher at Stellenbosch University

“I’ve often seen how effective and engaging science communication leads to more opportunities opening up. I’m referring to a kind of snowball effect where scientists become trusted public experts and receive more invitations to talk, debate, present, etc. Therefore, one of the many ways in which we could possibly measure impact is by looking at researchers’ communication or engagement footprint, counting the number of events and people who were meaningfully engaged (this could include face-to-face events, media profile and social media engagements) – not easy, but must be possible to develop such indicators.”

Stacy Konkiel (@skonkiel)

Research Metrics Consultant, Altmetric

“The hard thing is that so many current means of capturing impact (media clip counts, citations, altmetrics, and all the rest) are, at best, proxies for impact.

A good example can be found in Andrew’s point above, where he talks about citations as a means to understand readership and reuse of his research strategy paper. I’d argue that citations are a much rougher measurement than something like downloads (which can get at readership, but themselves can’t guarantee that everyone who’s downloaded the paper has actually read it) or “forks” (to use the OSS terminology; a fork indicating that a project/concept/dataset/etc has been reused by others).

That said, Andrew’s point is striking – citations are the coin of the realm in academia, and papers are among the only recognized means of sharing developments in research. I hope that soon, we in academia will be able to recognize, respect, and use supplementary measures of impact – whatever they might be – to help us get at flavors of impact beyond citations.

On another note, I’ve been doing a lot of reading and thinking about the REF in the UK, and am impressed with how UK institutions communicate impact using “impact case studies”, like these. They are narrative/qualitative, and in some ways communicate impact more effectively than metrics alone ever could.

And yet preparing these case studies (and associated REF documentation) reportedly takes up to 40% of the administrative time in UK higher ed institutions. That fact, along with the great points Jim makes in his blog post about the differences between applied and basic research, is enough to make you ask, ‘Is determining impact even worth it?’”

Laurie Goodman

Writing and editing consultant and contractor (via LinkedIn)

“I define IMPACT as how many people died while you were waiting to publish/release data in your area.

That’s the impact of your research communication delays. 
Time to change how we define impact completely. Sharing, Collaboration, and Rapid Communication should count the most for any researcher. Those are the first things granting agencies and institutions should be looking at. Enablers.”

[Editor’s note: Many interesting stats were cited in this comment, but I didn’t have the capacity to independently verify them, so they’ve been removed. The commenter did indicate many of the stats can be found on the WHO website.]

Antonio Tugores

Biomedical researcher (via LinkedIn)

“Impact is a direct effect that can be measured: number of patients saved by a specific measure, number of lives saved by a novel safety device in a car, and so on. Impact is definitively not a publication per se, but it is easy for lazy and/or ignorant people to solely evaluate impact based on papers.”

Dale Yuzuki (@DaleYuzuki)

Marketing professional with large life science equipment and reagent firm

“I’ll leave the debate about the second question, publication paper count and related citations, to others better qualified to comment on that topic. This piece sums up your conclusion a different way – ‘the average academic paper is read by 10 people’.

Regarding the media, you are correct that appropriately targeting the audience is much more important than the sheer number of media placements; the right metric will depend on the goal(s) of the publicity effort and, as you mention, is of course difficult to measure.

In the corporate communications world, this effect may take many months or years to appear, measured through brand perception research in our vertical markets. In my current role (writing for a corporate blog called Behind the Bench) we do have revenue-tracking capabilities (when someone clicks through to purchase), but as that isn’t a primary goal we focus on other metrics for our target audience: overall reach, time on site, and source of referrals, which are rather conventional metrics for the social media world. And as far as social media’s influence on our scientist/customer buying decisions goes, social is definitely growing, but the largest influence (for scientists buying something) remains ‘general web search’ and publications; vendor websites, social media and scientific social media (e.g., ResearchGate and similar) rank a fair bit behind the first two.

But it is the most general context that piques my interest: in this world of dis-intermediated media and traditional journalism under huge upheaval, how do we measure impact with the general public? Click-through rates, views, and shares aren’t ends in themselves, but markers of better #scicomm with the general public. If a better-informed public is the goal, how is that impact measured?

Perhaps I’m meandering into a topic of ‘Communications’ in general, as far as what is impactful and how that is measured. Increasingly I see the importance of a better-educated public on science and science policy as a very important goal.”

Rosa Codina, Ph.D.

Director and Principal Scientist, Allergen Science & Consulting (via LinkedIn) 

“As far as I know, publication “impact” is associated with the number of times that published papers are cited in other published articles, which allows calculating a journal impact factor.”
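[Editor’s note: for readers unfamiliar with the calculation Rosa describes, the standard two-year journal impact factor for year Y is

\[
\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

so a 2015 impact factor of 3.0 means the journal’s 2013 and 2014 papers were cited, on average, three times each during 2015.]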

Ophelia Santos

Medical Secretary, King Fahad Medical City (via LinkedIn)

“In my own opinion, impact is the acceptance (or, just as possibly, rejection) expressed by the audience or reader in response to your presentation of something (whatever it is).”

Jennifer Donovan

Director of News & Media Relations, Michigan Technological University (via LinkedIn)

“Impact is a result – it’s what happens when people who matter to you pay attention to you. It can be a result of a campaign, a news story, a tweet or an image that goes viral. It can be planned and managed or pure serendipity – or anything in between. And it can be positive or negative. But when it happens, you’ll know it, and a wise PIO will analyze it carefully to see what went right (or wrong) and why.”

Robert Davidson

Data Scientist, GigaScience (via LinkedIn)

“There’s a quote often attributed to Linus Torvalds, creator of Linux: “Release early, release often”.  This differs considerably from the ‘hold back, build a story’ attitude that is forced upon researchers aiming for the ‘elite’ journals with big Impact Factor values.

Just throwing it out there, but perhaps some sort of ‘personal impact factor’ that could allow for cumulative impact from multiple outputs related to one project could be possible… in this day and age of ubiquitous computing and online networks etc.

Why is impact linked to a journal anyway? We know how many citations each article/dataset gets – we could easily calculate the citations for an individual’s outputs without the need for journal-level aggregation.

Laurie makes a good point about the benefits or rather the penalties for holding back on sharing outputs.”
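[Editor’s note: Robert’s ‘personal impact factor’ is easy to prototype. Below is a minimal sketch in Python; the author names, output IDs, and citation counts are entirely hypothetical, and real counts would have to come from a citation source such as Crossref or Google Scholar.]

    from collections import defaultdict

    # Hypothetical records of (author, output, citation count).
    # One project can produce many outputs: papers, datasets, code, preprints.
    outputs = [
        ("a_researcher", "paper-2014-01", 42),
        ("a_researcher", "dataset-2014-01", 17),
        ("a_researcher", "preprint-2015-03", 5),
        ("b_researcher", "paper-2013-07", 90),
    ]

    def personal_impact(records):
        """Sum citations across all of an individual's outputs,
        with no journal-level aggregation involved."""
        totals = defaultdict(int)
        for author, _output, citations in records:
            totals[author] += citations
        return dict(totals)

    print(personal_impact(outputs))  # {'a_researcher': 64, 'b_researcher': 90}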

Caryl Haddock

Medical Writer, Editor, and Proofreader (via LinkedIn)

“Impact is a forceful strike or collision. By extension, it can also be used in lieu of “influence” when one wants to connote a sudden or powerful influence, but should not be used as a synonym for gradual, minor, or subtle influences. As an analogy, consider the difference between the impact a speeding car has on a tree and the influence a pusher’s hands have on a car that’s out of gas.”

I owe a huge ‘Thank You’ to SciLogs Community Manager Paige Jarreau (@fromthelabbench), who captured the Twitter discussion in this excellent Storify.

https://storify.com/fromthelabbench/whatisimpact-initial-storify



***Editor’s Note: Comments from Jennifer Donovan, Robert Davidson and Caryl Haddock were added after this post was initially published.
