Thought experiments with ‘fake’ research abstracts help policy makers visualise actions to be taken

Authors: Penelope Hawe

This post was originally published by the Evidence & Policy blog on 2nd September 2020.

This article is re-issued from the Evidence & Policy blog, with thanks to its editorial board for kindly permitting republication.

Our university-policy maker partnership produces ‘fake’ abstracts of articles we’ve not written yet (on results we frankly don’t even know we’ve got) to loosen up thinking. It helps the team visualise pathways for policy action.

Penelope Hawe

Ours is a tricky situation, politically speaking. A health department is undertaking Australia’s largest ever scale-up of evidence-based childhood obesity programs into every school and childcare centre across the state.[1] It costs $45m. They have an electronic data monitoring system in place. It’s already telling them that targets are being met. But rather than just rest on their success, they invite a team of researchers to do a behind-the-scenes, no-holds-barred ethnography. It could reveal the ‘real’ story of what goes on at the ground level.[2]

Foolhardy or brilliant?

I’m opting for brilliant. But let me put this in context. New South Wales Health is renowned for being research-savvy. They invest heavily in research capacity building. I’m talking in-house research, strategic investment, partnership research and peer-reviewed publications.[3] Frankly, the CVs of some of the policy makers in our partnership put some of us at universities to shame.

So it was no surprise to me when our policy-maker co-investigators stretched themselves further.

Still, sending in observers on the ground was risky. Who knows what they might find?

But then we thought, “OK, let’s imagine it. Let’s imagine the results now. Let’s imagine a range of outcomes and insights, good and bad, and think ourselves out of the situations we could be placed in.” So that is what we did.

The purpose of fake abstracts

Our Evidence & Policy article, ‘Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership’, describes how we designed and wrote ten fake abstracts for ten ‘pretend’ papers we might publish together after the data had been fully analysed.[4] The abstracts were written as a thought experiment for use in-house. What they enabled us to do was not simply picture what the results might be; others had done that before.[5] Imagining the whole ‘fake’ article enabled us to write those vital ‘so what’ sentences at the end. Some of these were about what we would do if the ethnography revealed something grim. Writing the abstract enabled us to visualise a pathway out of any sticky situation we could think of. Plus there was something about seeing one’s name on ten papers that made it more engaging. Human vanity perhaps! The many purposes of the abstracts appear in Box 1.

Box 1: The purpose of fake abstracts.

And the real findings?

The real findings are coming out now. The risk taking is being rewarded with new insights about how practitioners orchestrate change processes.[6]

I’d recommend that other teams try what we did. Rehearsing the ideas in advance quickly increased trust. When we started out we were mostly strangers. And the good news is: we are on track for more than ten real papers. So that is a delight.

[1] Green A, Innes-Hughes C, Rissel C, Mitchell J, Milat A, Williams M, Persson L, Thackway S, Lewis N, Wiggers J. (2018) Co-design of the Population Health Information Management System to measure reach and practice change of childhood obesity programs. Public Health Research and Practice, 28(3): e2831822.

[2] Conte K, Groen S, Loblay V, Green A, Innes-Hughes C, Mitchell J, Milat A, Persson L, Thackway S, Williams M, Hawe P. (2017) Dynamics behind the scale up of evidence-based obesity prevention: protocol for a multi-site case study of an electronic implementation monitoring system in health promotion practice. Implementation Science, 12: 146.


[4] Hawe P, Conte K, Groen S, Loblay V, Green A, Innes-Hughes C, Mitchell J, Milat A, Persson L, Thackway S, Williams M. (2019) Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership. Evidence and Policy. Online in advance of print.

[5] Wutchiett, R, Egan D, Kohaut S, Markman HJ, Pargament KI. (1984) Assessing the need for needs assessment. Journal of Community Psychology, 12: 53–60.



You can read the original research in Evidence & Policy:

Hawe P, Conte KP, Groen S, Loblay V, Green A, Innes-Hughes C, Milat A, Persson L, Mitchell J, Thackway S, Williams M. (2019) Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership. Evidence & Policy, DOI: 10.1332/174426419X15679623018185.


Image credit: Author’s own.

By Penelope Hawe, Professor of Public Health, University of Sydney


If you enjoyed this blog post, you may also be interested to read:

To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review [Open Access]

Risk, uncertainty and medical practice: changes in the medical professions following disaster [Open Access]

Rethinking knowledge translation for public health policy [Open Access]


This post was originally published by Transforming Society on 2nd September 2020.


