Supporting early career researchers

Introduction

The definition of early career researchers varies.  For example, early career researchers may be defined as

 

  • researchers who completed their PhD within the last five years

  • all researchers who have yet to secure an ongoing contract, including graduate students

  • researchers who meet various other criteria, depending on the institution or funder

 

Yet, regardless of the definition, early career researchers experience a range of distinct barriers as well as some opportunities.  Job insecurity is especially high among these researchers.  Work hours are often excessive, because early career researchers are often told their position is a privilege they need to maintain at all costs.  Because of these pressures, many universities and other tertiary education institutions have developed resources and programs to assist this cohort—from entrepreneurial training (Treanor et al., 2021) to advocacy programs (e.g., www.futureofresearch.org).

 

Kent et al. (2022) propose a variety of reasons to justify greater investment in the support and development of early career researchers, including graduate researchers.  For example, early career researchers may be more receptive to practices that have only recently been discovered or advocated than senior researchers, whose values and behaviors may be more entrenched.  Consequently, early career researchers are vital to change and progress in research practices (e.g., Campbell et al., 2019).

 

To illustrate, as Campbell et al. (2019) revealed, early career researchers are more likely than their experienced counterparts to share data.  Specifically, these researchers requested raw datasets from 771 authors in animal biotelemetry.  Over 70% of the early career researchers agreed, but only 11% of more senior researchers agreed.

 

Similarly, early career researchers tend to be more diverse than senior researchers (Heggeness et al., 2017; Nikaj et al., 2018).  That is, impediments to career progress, such as unconscious biases or family responsibilities, have curtailed the diversity of older, senior cohorts more than the diversity of younger, novice cohorts.  Consequently, to increase this diversity among senior researchers, diverse early career researchers need to be granted more opportunities to thrive.

 

Training resources to support early career researchers: Self-management

Because their jobs often feel insecure, early career researchers often experience intense pressure to thrive in research specifically and in academia more generally.  Accordingly, a team of researchers—the Student and Postdoc Special Interest Group of the Organization for Human Brain Mapping, an international society—developed a series of recommendations that early career researchers should consider to withstand these pressures and to manage their own lives as effectively as possible.  A range of researchers, from Masters students to professors, contributed to this resource (Bielczyk et al., 2020).  According to this team,

 

  • early career researchers should develop expertise on a topic in which they can assist other individuals, such as a particular research tool or technology.  That is, they should develop a skill for which other researchers will often seek their contributions or advice.

  • early career researchers should join a peer coaching program—or initiate this program if necessary;

  • early career researchers should gradually accrue a range of mentors; besides their own contacts, they might consider online associations such as the National Research Mentoring Network, available at https://nrmnet.net/, designed to facilitate mentoring relationships of biomedical, behavioral, clinical, and social sciences researchers

  • early career researchers could share their insights on blog sites, their data on data sharing sites, and their code on repository sites, such as GitHub

  • early career researchers should join a hackathon, in which participants conduct research projects within a few days.  To find these hackathons, search for events such as BrainHack, HackOverflow, Odyssey Hackathon, and EduHack.

  • before they submit an article, early career researchers should invite other researchers to evaluate their preprints, perhaps by posting the manuscript on preprint servers such as bioRxiv—a strategy that has been shown to increase media coverage (Serghiou & Ioannidis, 2018)

  • early career researchers should maintain their profile and visibility on Twitter, LinkedIn, ResearchGate, and Google Scholar; platforms like Hootsuite enable researchers to manage these social media accounts more efficiently.

  • early career researchers should learn how to utilize online project management tools, such as Trello, to manage their projects more proficiently

  • early career researchers should join communities of individuals in similar roles, such as eLife Ambassadors. Isolation is one of the main sources of stress in early career researchers.

  • early career researchers should develop their own working style rather than conform to the patterns of their peers.  For example, they should consider the times of day in which they are most productive and creative, the sources of their procrastination, and other considerations.

  • to manage time effectively, early career researchers should abstain from activities that are not relevant to their future aspirations or vision.

 

Training resources to support early career researchers: Track records

Pickering and Byrne (2014) discuss the benefits that early career researchers enjoy if they publish systematic literature reviews.  Systematic literature reviews benefit early career researchers, including PhD candidates, for several reasons:

 

  • The reviews are relatively simple to publish because the methods are explicit and rigorous

  • Individuals can identify shortfalls in the literature that can guide their work over several years

  • The reviews can be updated and published again over time

 

Laudel and Gläser (2008) developed a model to characterize the events and circumstances that facilitate the transition of early career researchers from dependent individuals to independent research colleagues.  The authors showed that a successful PhD and a temporary role as a researcher—such as a research assistant or research fellow—facilitated this transition.  The main impediment to this transition, however, is the limited time available to dedicate to research.

 

Training resources to support early career researchers: Funding

Once early career researchers have published some research, they may be ready to seek research funding.  Many programs have been developed to help early career researchers submit persuasive grant applications, although few of these programs have been evaluated comprehensively. 

 

To address this limitation, Weber-Main et al. (2022) are conducting a study to evaluate one coaching program. In this study, pairs of early career researchers in biomedical science, across the United States, will choose a scientific advisor in their discipline as well as attend group coaching sessions, facilitated by experienced NIH-funded investigators who may not be specialists in the discipline.  The researchers will manipulate the level of coaching: Participants will either receive 5 months of group coaching or 5 months of group coaching plus 18 months of individual coaching.  The researchers will explore whether this intervention enhances both the rate of grant submission and the level of funding these participants attract.

 

Before these coaching sessions, participants share a draft of one or more sections of a funding application—such as the aims, introduction, significance, innovation, approaches, and so forth.  Both the coach and peers deliver feedback.  In one condition, the scientific advisor who is a specialist in the discipline interacts with the coach occasionally. 

 

Besides coaching, institutions can also curate insights that early career researchers can apply to attract funding. For example, Doyle et al. (2021) recommend that, when appropriate, early career researchers should resubmit grant applications that were previously rejected rather than submit only novel applications.  As these authors show, resubmitted applications are 2 to 4 times more likely to receive funding during the next 5 years than novel applications.

 

Training resources to support early career researchers: Reproducibility and replicability

Many scholars now recognize that reproducibility and replicability are vital to science in particular and research more generally.  To clarify, a finding is reproducible whenever other researchers would generate the same finding if they subjected the same data to the same analysis.  A study is replicable whenever other researchers would generate analogous results if they conducted a similar study.  Sometimes, these terms are used interchangeably.

 

Auer et al. (2021) outline the conditions and circumstances that tend to diminish the reproducibility and replicability of a finding or study. The first set of conditions and circumstances revolves around unsuitable research methods.  For example, reproducibility and replicability are limited whenever

 

  • researchers do not utilize a rigorous and valid research design—for example, they attempt a randomized controlled trial but fail to assign the participants randomly to conditions

  • researchers do not utilize the appropriate statistical analyses, such as when they overlook nested factors

  • researchers do not control relevant variables

  • the sample size is inadequate—and, therefore, either power is insufficient or outliers are hard to identify (a minimal power-check sketch follows this list)

  • researchers report only the findings that confirm their hypotheses and conceal other results
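
To illustrate the sample-size point above, the sketch below shows one common way to check, before data collection, how many participants per group a simple two-group comparison would need to reach adequate statistical power. The design, effect size, and thresholds are illustrative assumptions, not values drawn from Auer et al. (2021).

```python
# Minimal power check for an assumed two-sample t-test design, using statsmodels.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs: a medium standardized effect (d = 0.5), alpha = 0.05, desired power = 0.80.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided")
print(f"Participants needed per group: {n_per_group:.1f}")

# Conversely, the power achieved if only 30 participants per group can be recruited.
achieved = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
print(f"Power with 30 per group: {achieved:.2f}")
```

If the achieved power falls well below the desired threshold, the study is unlikely to be replicable, regardless of how carefully the remaining procedures are executed.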

 

The second set of conditions and circumstances revolves around technical limitations.  To illustrate, reproducibility and replicability may be compromised in a study whenever

 

  • cell lines have been contaminated

  • antibodies or kits have not been validated sufficiently, and so on

 

The third set of conditions and circumstances relates to human error.  To illustrate, reproducibility and replicability may be limited if

 

  • researchers did not describe the methods in sufficient detail to replicate

  • researchers did not share key information adequately, such as data, computer code, or reagents

  • researchers committed errors, and so forth

 

The final set of conditions and circumstances cannot be controlled by the researchers.  For example, several features of the research landscape diminish the likelihood that research is reproducible and replicable, such as

 

  • the scarce resources and the concomitant sense of competition that pervades the research sector

  • the fraud that may emanate from this sense of competition

  • the tendency of decision makers to reward only significant results rather than responsible research practices

 

Many resources have been developed to address these concerns and to promote reproducibility and replicability.  These resources are often directed to early career researchers and include

 

  • https://reproducibilitea.org/ - journal clubs that prioritize open science, reproducibility, and replicability

  • the Frictionless Data fellowships at https://fellows.frictionlessdata.io, a virtual training program, presented over nine months, that introduces tools early career researchers can utilize to enhance the reproducibility of their findings.  The organizers select eight fellows each year, and these fellows receive a stipend to complete the course.

  • R4E, or Reproducibility for Everyone—a comprehensive set of materials and modules that help early career researchers publish reproducible and replicable findings

 

The Reproducibility for Everyone program, outlined by Auer et al. (2021), delivers content on eight topics, including

 

  • how to manage data effectively, such as how to name and store data files effectively

  • the benefits of electronic lab notebooks over paper lab notebooks

  • how to create living protocols—tools that enable researchers to post updated versions of their protocols in suitable repositories

  • how to pre-register research

  • how to enable other researchers to access reagents, plasmids, seeds and organism strains to replicate studies

  • how to create suitable figures to display data, such as violin plots—rather than default plots that are often misleading (a brief plotting sketch follows this list)

  • how to share code and data effectively
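
As a minimal illustration of the figures point above, the sketch below contrasts a default bar plot of group means with a violin plot of the same simulated data. The simulated values and plotting choices are assumptions for demonstration only, not material from the R4E modules.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Two simulated groups with similar means but very different distributions.
group_a = rng.normal(5.0, 0.5, 100)
group_b = np.concatenate([rng.normal(3.0, 0.4, 50), rng.normal(7.0, 0.4, 50)])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Default bar plot of means: conceals the bimodal shape of group B.
ax1.bar(["A", "B"], [group_a.mean(), group_b.mean()])
ax1.set_title("Bar plot of means")

# Violin plot: reveals the full distribution of each group.
ax2.violinplot([group_a, group_b], showmeans=True)
ax2.set_xticks([1, 2])
ax2.set_xticklabels(["A", "B"])
ax2.set_title("Violin plot")

plt.tight_layout()
plt.show()
```

The bar plot suggests the two groups are similar; the violin plot shows that one group is in fact bimodal, which is precisely the kind of misleading default the module warns against.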

 

Training resources to support early career researchers: Translation of research to policy

Evans and Cvitanovic (2018) offered some insights on how early career researchers could improve their capacity to shape policy, one of the key goals of modern researchers.  First, early career researchers should identify the key stakeholders in policy change by contemplating

 

  • who could be favorably or unfavorably affected by their research?

  • why might other people be interested in this research?

  • which facets of this research are relevant to the goals of these individuals?

  • how influential are these stakeholders?

 

Answers to these questions may evolve over time.  To collate this information, early career researchers can initiate a Google Alert with relevant keywords to receive notifications of relevant media and changes in the field.  Similarly, they can gradually utilize public forums, seminars, conferences, community information sessions, and public hearings to learn about the topical issues in their field.

 

Second, early career researchers should gradually develop their public profile—perhaps with Google Scholar, a Twitter account, a personal blog (perhaps using Wix), a LinkedIn profile, and a ResearchGate profile.  They can seek advice from their university and utilize programs like Hootsuite to coordinate and to maintain these accounts as efficiently as possible.

 

Third, early career researchers should gradually learn to contribute towards policy discussions.  They might contribute to discussions in professional associations, community groups, government agencies, and so forth.  To contribute effectively, early career researchers should

 

  • recognize that even associations that seem homogenous, such as an industry body, might comprise individuals with diverse and conflicting perspectives.  So, early career researchers need to show their suggestions complement, rather than override, the diversity of existing opinions

  • often recount a story to express an opinion, because people accept, rather than challenge, these anecdotes

  • although they can express some passion, they should still demonstrate enough objectivity to show they are not biased

  • often commence with blogs, opinion pieces, or other formats that inform policymakers who may be too busy to remain abreast of the literature.  They could later send relevant policymakers articles or summaries as well as meet over coffee.

 

After exploring these informal channels, early career researchers might pursue formal opportunities, such as submissions to government consultations or policy briefs.

 

Fourth, early career researchers obviously need to extend and to maintain their network and relationships.  Although many individuals achieve this goal effectively, some early career researchers overlook some vital considerations.  For example, early career researchers should

 

  • prepare a brief summary of their existing role, the key problem they are attempting to solve with their research, an example of a unique approach they are applying, and the impediments they are attempting to overcome.  They should reduce this summary to a few sentences—and present this outline during serendipitous interactions with potential collaborators

  • consider how their existing network, such as colleagues, supervisors, or even friends, might be able to introduce them to relevant contacts

  • develop mutually beneficial relationships with mentors outside academia as well—especially mentors who might want to learn from these researchers

 

Fifth, many agencies and organizations now offer early career researchers internships, fellowships, summer schools, or other opportunities to apply their research to policy.  To illustrate

 

  • in the United States, these opportunities include the Duke Science Policy Summer Institute, Science Outside the Lab, Science and Technology Policy Fellows, and the UCS Science Network Mentor Program

  • in the UK, these opportunities include the BES POST fellowship

  • in Australia, the government offers the Science policy fellowship program

  • Germany offers the UFZ science-policy expert group

  • Canada offers the Canadian science policy fellowship

 

Communities to assist early career researchers

A variety of associations, typically online communities, have been designed to assist early career researchers.  In many instances, early career researchers established these communities, coordinate these communities, or, at least, are the primary contributors to them.  Some of these communities include

 

  • ecrcentral.org or ECR Central—a website that shares funding opportunities and resources that could assist early career researchers.  The resources include tools and information to assist career development, science communication, reproducibility, teaching, peer review, and diversity.

  • www.futureofresearch.org or Future of Research—a community that supplies resources to early career researchers, designed to improve their scientific endeavors.  The team also collects data about academia and scientific training, ultimately to improve decisions by universities around these matters

 

Priorities and goals of early career researchers: Challenges

To decide how to support early career researchers, institutions need to appreciate the main priorities, motivations, and concerns of these individuals.  Consequently, researchers have conducted and published an array of studies that explore these priorities, motivations, and concerns.   

 

To illustrate, Christian et al. (2021) conducted a survey of 658 early career researchers, working in Australia, to understand their experiences and challenges.  Over 31% of the respondents were dissatisfied with the workplace culture.  The main reasons they felt dissatisfied revolved around

 

  • inadequate support from more senior staff at their institution—a concern that 60% of respondents indicated was a problem or significant problem

  • feeling they are not valued

  • a conflict between work and life responsibilities, and

  • job insecurity.

 

Many early career researchers are reluctant to leave academia, despite concerns about their job and organization.  Some respondents felt they did not want to waste the time they had dedicated to their PhD and were not sure whether they had acquired the skills to pursue other avenues or occupations.

 

Early career researchers often experience marked stress in their role.   The survey identified several key sources of this stress.  For example

 

  • respondents indicated that job insecurity compromises their capacity to plan ahead, undermining their innovation and merely exacerbating their job insecurity—a vicious cycle.

  • these early career researchers were also reluctant to seek promotions or expand their research groups, because these changes would merely amplify their demands

  • respondents were swayed into perceiving their role as a privilege and thus felt obliged to work on weekends and weeknights

  • participants felt their productivity is judged relentlessly—and that they need to maintain this productivity despite some of the inevitable challenges and distractions they might experience in life

  • participants felt the expectations around grants and productivity are too steep

  • participants felt they were perceived as disposable and their contracts felt tenuous

 

Andrews et al. (2020) identified similar barriers in their study on the impediments that early career researchers experience when they attempt to pursue interdisciplinary research.  Specifically, they revealed that

 

  • early career researchers often feel inundated with work, partly because of pressures from colleagues or supervisors to collaborate on many projects and partly because of a perceived inability to refuse these requests

  • early career researchers often feel their funding is insecure and thus dedicate significant time to funding applications, teaching, and work outside academia

  • early career researchers felt their supervisor often dismissed their family obligations

  • early career researchers felt support for interdisciplinary research was especially unhelpful, because they were not sure which norms to follow and did not feel their skills were valued

 

Furthermore, early career researchers are not always granted opportunities to complete the tasks they enjoy or the tasks that will enhance their capabilities.  For example, Rodríguez‐Bravo et al. (2017) revealed that early career researchers enjoy contributing to peer review of manuscripts and feel they learn vital information from this participation. Early career researchers prefer blind reviews, in which authors and reviewers do not know the identity of one another, and are uncomfortable with the prospect of open peer review.  Yet, because their publication record may be limited and their workload is often high, early career researchers may not be granted opportunities to participate in peer review.

 

Likewise, many funding bodies have, in recent decades, shifted their funding to more experienced researchers.  For example, as Daniels (2015) underscored, in recent decades, the percentage of research grants in US biomedical research that is directed to early career researchers has declined steadily—despite recommendations from many policy makers to reverse this trend. 

 

Priorities and goals of early career researchers: Communication

Nicholas et al. (2017) reported a three-year longitudinal study of 116 early career researchers working in the sciences and social sciences.  The participants were interviewed about their attitudes and behaviors around scholarly communication, including social media, online communities, and Open Science.  Participants felt that ResearchGate in particular is becoming a vital platform to communicate their work—but high impact journals were still the most essential ingredient of their reputation.

 

References

  • Andrews, E. J., Harper, S., Cashion, T., Palacios-Abrantes, J., Blythe, J., Daly, J., ... & Whitney, C. K. (2020). Supporting early career researchers: insights from interdisciplinary marine scientists. ICES Journal of Marine Science, 77(2), 476-485.

  • Auer, S., Haeltermann, N. A., Weissgerber, T. L., Erlich, J. C., Susilaradeya, D., Julkowska, M., ... & Jadavji, N. M. (2021). Science Forum: A community-led initiative for training in reproducible research. eLife, 10, e64719.

  • Belkhir, M., Brouard, M., Brunk, K. H., Dalmoro, M., Ferreira, M. C., Figueiredo, B., ... & Smith, A. N. (2019). Isolation in globalizing academic fields: A collaborative autoethnography of early career researchers. Academy of Management Learning & Education, 18(2), 261-285.

  • Bielczyk, N. Z., Ando, A., Badhwar, A., Caldinelli, C., Gao, M., Haugg, A., ... & Group, P. S. I. (2020). Effective self-management for early career researchers in the natural and life sciences. Neuron, 106(2), 212-217.

  • Campbell, H. A., Micheli-Campbell, M. A., & Udyawer, V. (2019). Early career researchers embrace data sharing. Trends in Ecology & Evolution, 34(2), 95-98.

  • Christian, K., Johnstone, C., Larkins, J. A., Wright, W., & Doran, M. R. (2021). A survey of early-career researchers in Australia. eLife, 10.

  • Daniels, R. J. (2015). A generation at risk: young investigators and the future of the biomedical workforce. Proceedings of the National Academy of Sciences, 112(2), 313-318.

  • Doyle, J. M., Baiocchi, M. T., & Kiernan, M. (2021). Downstream funding success of early career researchers for resubmitted versus new applications: A matched cohort. PLoS ONE, 16(11), e0257559.

  • Evans, M. C., & Cvitanovic, C. (2018). An introduction to achieving policy impact for early career researchers. Palgrave Communications, 4(1), 1-12.

  • Gibson, E. M., Bennett, F. C., Gillespie, S. M., Güler, A. D., Gutmann, D. H., Halpern, C. H., ... & Zuchero, J. B. (2020). How support of early career researchers can reset science in the post-COVID19 world. Cell, 181(7), 1445-1449

  • Heggeness, M. L., Gunsalus, K. T., Pacas, J., & McDowell, G. (2017). The new face of US science. Nature, 541(7635), 21-23.

  • Herschberg, C., Benschop, Y., & Van den Brink, M. (2018). Precarious postdocs: A comparative study on recruitment and selection of early-career researchers. Scandinavian Journal of Management, 34(4), 303-310.

  • Kent, B. A., Holman, C., Amoako, E., Antonietti, A., Azam, J. M., Ballhausen, H., ... & Weissgerber, T. L. (2022). Recommendations for empowering early career researchers to improve research culture and practice. PLoS Biology, 20(7), e3001680.

  • Laudel, G., & Bielick, J. (2019). How do field-specific research practices affect mobility decisions of early career researchers? Research Policy, 48(9).

  • Laudel, G., & Gläser, J. (2008). From apprentice to colleague: The metamorphosis of early career researchers. Higher Education, 55(3), 387-406.

  • Machovcova, K., Mudrak, J., Cidlinska, K., & Zabrodska, K. (2022). Early career researchers as active followers: perceived demands of supervisory interventions in academic workplaces. Higher Education Research and Development.

  • Nicholas, D., Watkinson, A., Boukacem‐Zeghmouri, C., Rodríguez‐Bravo, B., Xu, J., Abrizah, A., ... & Herman, E. (2017). Early career researchers: Scholarly behaviour and the prospect of change. Learned Publishing, 30(2), 157-166.

  • Nicholas, D., Watkinson, A., Boukacem‐Zeghmouri, C., Rodríguez‐Bravo, B., Xu, J., Abrizah, A., ... & Herman, E. (2019). So, are early career researchers the harbingers of change? Learned Publishing, 32(3), 237-247.

  • Nikaj, S., Roychowdhury, D., Lund, P. K., Matthews, M., & Pearson, K. (2018). Examining trends in the diversity of the US National Institutes of Health participating and funded workforce. The FASEB Journal, 32(12), 6410-6422.

  • Pickering, C., & Byrne, J. (2014). The benefits of publishing systematic quantitative literature reviews for PhD candidates and other early-career researchers. Higher Education Research & Development, 33(3), 534-548.

  • Rimando, M., Brace, A. M., Namageyo-Funa, A., Parr, T. L., Sealy, D. A., Davis, T. L., ... & Christiana, R. W. (2015). Data collection challenges and recommendations for early career researchers. The Qualitative Report, 20(12), 2025-2036.

  • Rodríguez‐Bravo, B., Nicholas, D., Herman, E., Boukacem‐Zeghmouri, C., Watkinson, A., Xu, J., ... & Świgoń, M. (2017). Peer review: The experience and views of early career researchers. Learned Publishing, 30(4), 269-277.

  • Serghiou, S., & Ioannidis, J. P. (2018). Altmetric scores, citations, and publication of studies posted as preprints. JAMA, 319(4), 402-404.

  • Treanor, L., Noke, H., Marlow, S., & Mosey, S. (2021). Developing entrepreneurial competences in biotechnology early career researchers to support long-term entrepreneurial career outcomes. Technological Forecasting & Social Change, 164

  • Weber-Main, A. M., Engler, J., McGee, R., Egger, M. J., Jones, H. P., Wood, C. V., Boman, K., Wu, J., Langi, A. K., & Okuyemi, K. S. (2022). Variations of a group coaching intervention to support early-career biomedical researchers in Grant proposal development: a pragmatic, four-arm, group-randomized trial. BMC Medical Education, 22(1), 28–28


Writing grant applications

Introduction

Most universities and research institutions strive to attract more research income.  Typically, to receive this income, they need to submit grant applications or similar documents.  Consequently, many of these institutions have attempted to improve the capacity of academics to write grant applications effectively. 

 

Initiatives to improve grant writing: Aims Review Committees

Some institutions have introduced aims review committees.  In essence, before academics write a grant application, they are encouraged first to prepare, and then submit, a one-page overview of their research project.  According to Nigrovic (2017), this single page should characterize the problem the research addresses, the aims or hypotheses, preliminary data that have been collected, and the distinct and key features of the approach the researchers will adopt to fulfill the aims—like an elevator pitch.  A committee then comments on the aims, the writing style, and the strengths as well as the flaws of these arguments.  This committee might convene every month or two and will comprise a chair and 3 to 10 academics.  The discussion on each submission usually lasts 20 to 30 minutes.  Often, these discussions generate insights that only one person uncovered but other members could amplify and clarify.

 

Research indicates that reviews of these short overviews are more efficient than reviews of entire grant applications (Nigrovic, 2017).  This approach is effective for several reasons.  First, reviewers confine their attention to perhaps the most important section of the grant application.  Second, this approach enables reviewers to evaluate many features of the grant application in minimal time.  Third, most of the reasons applications are declined revolve around whether the aims are compelling and whether the approach is feasible and defensible—reasons that reviewers can evaluate rapidly.

 

As with most initiatives that have been implemented to improve grant writing, Nigrovic (2017) was unable to compare the effects of this committee to a control group.  However, most of the submissions received many suggestions, almost 80% of these submissions were eventually upgraded to actual grant applications, and almost 60% of these applicants were successful.

 

Initiatives to improve grant writing: Individual mentoring

In some institutions, academics, typically early career researchers, are assigned a research partner—a person who has accrued experience in research administration and writing.  The role of this partner is to help these researchers write grant applications and secure grants.  These partners are seldom academic specialists in the same field of research as the person they are assisting and, despite their experience in research, may not be academics.  These partners may supplement the role of an academic mentor.

 

To illustrate, Kulage et al. (2022) assigned partners to early career nursing researchers. The program comprised four key features: regular meetings, milestones, writing support, and administrative attention.   During the first meeting between the early career researcher and partner, the individuals reviewed a particular funding opportunity, scheduled regular meetings, and clarified the role of each person.  Typically, to maintain accountability, the individuals met every two weeks, although this frequency often increased in the month or so before the application was due.

 

Second, the individuals set a timeline, comprising a series of milestones, during the first meeting, and review this timeline regularly.  To facilitate this goal, the individuals receive a template the institution has developed, stipulating weekly activities, but can adjust this template.  To illustrate, one milestone might be a session in which a team of academics review one page—a page that outlines the aims of this research project, called a “specific objectives and aims review” (Kulage & Larson, 2018).

 

Third, the partner imparts advice on writing.  Specifically, the partner reviews draft sections of the application, imparting advice to improve the clarity, simplicity, cohesion, and quality—as well as correcting grammatical errors, diminishing redundancy, and limiting jargon or unnecessary abbreviations.  Nevertheless, to enable these researchers to develop their writing skills and maintain a sense of ownership, partners merely offer advice and do not rewrite sentences themselves.

 

Finally, the partners inform the early career researchers of policies, procedures, and guidelines of which they might be unaware—such as Protection of Human Subjects or Inclusion of Women and Minorities—and check the application conforms to these requirements.  To illustrate, the partner might confirm the application justifies the exclusion of a specific gender, ethnicity, or race.

 

An evaluation indicated this program significantly improved the research income of the faculty. In particular, over a 10-year period, 81% of the academics who participated in this program received funding—almost double the percentage observed among academics who did not participate.

 

Initiatives to improve grant writing: A series of seminars

Many institutions have implemented a series of workshops or seminars to improve the capacity of researchers to write grant applications.  Although the benefits of these seminars are uncontested, researchers continue to debate which topics should be addressed in this series.  To illustrate, one large American university introduced a weekly series of seminars, each lasting 1 to 2 hours, over three months (Stein et al., 2012).  The topics revolved around how to

 

  • initiate and outline preliminary studies, before submitting the grant application

  • write the section on statistics and power analyses convincingly

  • justify the significance of this research

  • justify the budget

  • respond to the feedback of reviewers

  • decide which funding opportunity to pursue

 

During the workshops, participants received samples of each section.  To complement the workshops, participants were prompted to complete various sections of a grant application, at particular times, and to organize a reviewer who could deliver feedback on these submissions. Before the workshops commenced, participants received a schedule that specified when the various writing sections were due.

 

After this series, participants were granted opportunities to complete evaluations of the seminars.  According to participants, future workshops should

 

  • impart more information about funding sources—such as the success rate and priorities of various options as well as details about which sources are directed to diverse academics

  • disseminate more samples and other materials

 

Regardless, the sessions were generally evaluated favorably.  After the workshops, participants indicated they were significantly more confident in their capacity to submit these applications.  Within one year, 40% of participants had submitted a grant application. 

 

Initiatives to improve grant writing: A blend of grant-writing groups and seminars

Some institutions have introduced programs that blend collaborative grant-writing sessions with seminars on how to write grant applications.  For example, the University of Windsor initiated a grant-writing group, comprising 14 researchers specializing in social sciences and humanities (Wiebe & Maticka-Tyndale, 2017).  The group comprised experienced, emerging, and graduate researchers and met for two hours on a Friday, once a month, over eight months.  A senior academic and a senior research coordinator facilitated the group.  During the first 50 minutes, in small teams of two to four individuals, participants discussed the work they had completed towards the grant and received feedback from the other members. Then, after a ten-minute break, participants attended a workshop about grant writing—presented by the facilitators or guests—and were assigned homework to complete by the next session.  During these workshops, the facilitators might disseminate exemplary answers on past applications, such as a response to the question about how the results will be communicated and applied in the future.  After the initiative, participants could submit their application to a competition that identified the best proposal.

 

To evaluate and to improve this program, the facilitators examined records of attendance, the homework of participants, and the submissions to the competition.  Participants could also submit anonymous feedback about the features they liked and disliked about the groups.  In addition, participants completed surveys that assessed their attitudes to various facets of this group, such as the format and homework.

 

Analysis of the data generated some insights that could be applied to improve future grant writing groups. For example

 

  • although participants valued the supportive atmosphere of these groups—and the capacity to deliver and receive feedback in a safe environment—some groups perhaps dedicated undue time to strengths.  In the future, during the first 50 minutes, participants could be obliged to suggest at least three improvements to each application.

  • the participants greatly valued samples of exemplary answers, derived from successful applications in the past.

  • the participants would have preferred more advice on how to estimate the cost of items in the budget

  • the initiative did increase the number of grant applications that were submitted and the percentage of grant applications that were successful.

 

Similar programs, introduced at other institutions, have also been shown to be effective.  For example, Glowacki et al. (2020) describe an analogous program, comprising about ten workshops on grant writing, samples of previous applications, and peer review of applications.  In general, if individuals had attended the workshops, their subsequent applications for internal grants were more likely to be successful.  In addition, these successful applications were more likely to be awarded to diverse staff—suggesting this program may have addressed some of the barriers these staff would otherwise experience.

 

Other effects of workshops in grant writing: Publications

Workshops in grant writing may not only help institutions attract research income but could generate other benefits as well.  For example, Hiatt et al. (2022) revealed how workshops in grant writing actually increased the number of publications.

 

Specifically, this study revolved around the BUILD program or Building Infrastructure Leading to Diversity.  This program funded ten institutions to develop the research capabilities of diverse academics.  Academics from under-represented backgrounds received funding to develop research projects, collect data, and submit grant applications.  Specifically, institutions may utilize this funding to finance workshops on grant applications, workshops on teaching, sabbaticals, travel, new courses, and several other approaches.  If individuals utilized this funding to finance workshops on grant applications, the number of publications tended to increase.  

Other effects of workshops in grant writing: Overall

von Hippel and von Hippel (2015) suggested many other benefits that researchers may enjoy after they write and submit grant applications. Specifically, these researchers administered a survey to almost 200 researchers.  Respondents were asked, on a 7-point scale, the degree to which grant applications generate a set of benefits, such as fostering collaborations.  The average score on each of these benefits was about

 

  • 5.7 out of 7 on improvements in scientific thinking

  • 5.4 out of 7 on consolidation of research plans

  • 5.1 out of 7 on generating new ideas—although this mean was appreciably higher in psychologists and lower in astronomers

  • 4.8 out of 7 on fostering collaborations

  • 4.6 out of 7 on generating text that could be used in subsequent papers or conferences

 

Recommendation on how to improve grant applications: Common suggestions

Scholars have written many books, papers, and guidelines on how to secure more grants.  Typically, scholars suggest that researchers should

 

  • dedicate between 100 and 200 hours of work, over at least three months, to larger grant applications (Guyer et al., 2020)

  • seek feedback from a diversity of academics (Guyer et al., 2020)

  • learn how to write as simply and unambiguously as possible; learn to use active voice and commence paragraphs with topic sentences, and minimize jargon and abbreviations (Guyer et al., 2020; Wisdom et al., 2015)

  • insert words into the grant application that correspond to the mission, priorities, and language of the funding agency (Wisdom et al., 2015).

  • avoid long paragraphs or streams of text that are devoid of white space (Guyer et al., 2020)

  • confine your study to a few key aims—and discard peripheral research questions (Guyer et al., 2020; Wisdom et al., 2015); the subsequent methods and potential outcomes should explicitly correspond to one or more of these aims (Wisdom et al., 2015)

  • the innovation should usually emanate from integrating two or more familiar theories or approaches rather than from the latest technologies; reviewers prefer methods that have been established (Guyer et al., 2020)

  • if the research comprises multiple aims, these aims should be independent if possible.  That is, the likelihood that one aim is fulfilled should not depend on whether another aim is fulfilled.  Otherwise, problems with one facet of the research might impede other facets of the research (Guyer et al., 2020)

  • whenever possible, include preliminary data, such as a pilot or feasibility study (Bai et al., 2022; Guyer et al., 2020)

  • the grant application should impart evidence that indicates the researchers on this grant have collaborated before or could readily collaborate in the future (Wisdom et al., 2015)

  • the grant application should document the facilities, space, equipment, and resources the researchers can already utilize (Wisdom et al., 2015).   

 

Recommendation on how to improve grant applications: Writing style

The writing style that researchers utilize can affect whether grant applications will be successful.  To illustrate, Van Den Besselaar and Mom (2022) conducted a linguistic analysis of 3207 applications to a funding scheme that was directed to early career researchers.  About 12% of these applications were successful.  Interestingly, as Van Den Besselaar and Mom (2022) revealed, reviewers tended to assign a higher score to applications in which

 

  • the authors were native speakers of English or lived in nations, such as the Netherlands, Austria, Denmark, and Norway, in which residents tend to speak English proficiently and to understand the nuances of this language

  • the text comprises longer sentences, uncommon words, and terms that imply confidence and certainty—such as “will” rather than “may” (a brief sketch of how these features might be screened follows this list)

  • the description of this project feels more like a story or narrative than a sequence of facts
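
To make these stylistic features concrete, the sketch below shows one hypothetical way a researcher might screen a draft application for average sentence length, rare words, and hedged versus confident terms. The word lists and the example draft are illustrative assumptions, not the measures Van Den Besselaar and Mom (2022) used.

```python
import re

# Illustrative word lists -- assumptions for this sketch, not the published measures.
CONFIDENT_TERMS = {"will", "demonstrates", "establishes", "shows"}
HEDGED_TERMS = {"may", "might", "could", "possibly", "perhaps"}
COMMON_WORDS = {"the", "a", "an", "of", "to", "and", "in", "we", "this", "that", "is", "are"}

def style_features(text: str) -> dict:
    """Compute rough stylistic features of a draft grant application."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "uncommon_word_ratio": sum(w not in COMMON_WORDS for w in words) / max(len(words), 1),
        "confident_terms": sum(w in CONFIDENT_TERMS for w in words),
        "hedged_terms": sum(w in HEDGED_TERMS for w in words),
    }

draft = "This project will establish a new platform. The approach may generalise to other fields."
print(style_features(draft))
```

Such a screen cannot replace careful editing, but it offers a quick, repeatable way to notice a draft that hedges heavily or relies on very short, choppy sentences.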

 

Recommendation on how to improve grant applications: Identify limitations and justify remedies

To demonstrate that a research project is rigorous and the results are likely to be reproducible, researchers obviously need to acknowledge the key limitations of their methods and then propose, as well as justify, measures to address these limitations.  Wilson and Botham (2021) presented a simple illustration of this approach but also demonstrated why researchers do not always apply this approach effectively.  Their example referred to a proposed study in which the researchers want to assess whether some chemical compound will prevent strokes in mice. In this example

 

  • the limitation is that chemical compounds that prevent strokes in mice might not prevent strokes in humans

  • the measure or procedure to address this limitation is to use a stroke mouse model—that is, laboratory mice that have been genetically manipulated to experience strokes

  • the justification or evidence of this measure may be that past studies indicate that determinants of strokes in stroke mouse models overlap closely with determinants of strokes in humans.   

 

The researchers could include one sentence to outline this limitation, measure, and justification, such as “To assess the benefits of this chemical compound, we will test whether this compound prevents strokes in a stroke mouse model—a model that past research shows mirrors humans in how their symptoms respond to drugs (Smith & Jones, 2020)”.  However, researchers often fail to acknowledge the essential limitations of their methods for several reasons:

 

  • first, because the method is ubiquitous in their field, they assume this approach is uncontroversial—unaware the reviewer might not be as familiar with these protocols

  • second, the researchers might not even be aware of these limitations

  • third, they do not want to underscore limitations the reviewers might otherwise overlook—unaware that reviewers tend to be more concerned about limitations they identified themselves than limitations the researchers conceded and addressed

 

To overcome the first two reasons in particular, researchers should

 

  • prompt one or two peers—especially peers who study in similar but distinct fields—to attempt to uncover actual or perceived limitations of the methods

  • carefully read the instructions to authors in relevant journals.  Many journals will refer to protocols or principles that underscore the limitations and biases that are especially likely to concern reviewers

  • read books or papers that discuss these methods and attempt to locate references to limitations

 

Concerns about grant writing: Research integrity

Despite the importance of writing and submitting grants, this task is potentially contentious.  Specifically, as Conix et al. (2021) have observed, the requirements of grant applications inadvertently motivate behaviors that counter the values, principles, and norms that epitomize research integrity.

 

First, as Conix et al. (2021) argue, grant applications may encourage behaviors that diminish accountability—the notion that researchers should present only arguments they can justify.  Yet, grant applications encourage researchers to insert content they cannot readily justify or defend.  They might, for example, need to include a detailed timeline and specify the likely outcomes, both of which are notoriously hard to predict, partly because these outcomes are often serendipitous.  Reviewers of grants also need to pretend they can readily estimate the likely return or value of applications, even though research projects are unpredictable.

 

Second, grant applications may encourage dishonesty.  For example, to attract grants, researchers often include senior professors as co-investigators, aware these individuals have not contributed and may not contribute in the future.  In addition, researchers often withhold information about the project, a variant of dishonesty, because they are concerned the reviewer might lift or exploit this knowledge.  Furthermore, reviewers tend to prefer research projects that are not too innovative or risky, because they want to be confident the results will be useful.  Accordingly, many researchers submit applications on research projects they have already completed, but then use the funds they received on this application to support more innovative pursuits.   

 

Third, grant applications may encourage researchers or reviewers to violate the value of impartiality—in which individuals strive to diminish the effect of personal interests or preferences on their decisions and judgments.  To illustrate, in some nations, researchers may shape their research question to match the political leanings of their state.  In addition, to diminish the workload of funding agencies, the writers of a grant application are often granted the right to specify potential reviewers.  The problem is they might choose reviewers who are biased or whose work they have cited heavily. 

 

Fourth, grant applications may limit the capacity of researchers or reviewers to consider the broader interests of society, called responsibility.  To illustrate, academics are more likely to be promoted if they can attract substantial research income.  Consequently, they might feel inclined to inflate the budget, squandering more public money than perhaps necessary.  Similarly, on many projects, the researchers must exhaust all unspent funds before specific deadlines.  To fulfill this goal, researchers often squander their surpluses. 

 

Concerns about grant writing: Costs

Many academics do not write grant applications because they feel the costs—in time and resources—are likely to outweigh the expected benefits.  To explore this dilemma, von Hippel and von Hippel (2015) surveyed 113 astronomers and 82 psychologists, working mainly in large universities, to examine the costs these submissions incur.  The survey collected data on the applications these participants had submitted, the funding they requested, whether these applications were successful, as well as the time they dedicated to background reading, data analyses, grant writing, budget preparation, and other relevant tasks.  In addition, the survey invited participants to consider other benefits of grant writing, such as the degree to which grant writing improves their scientific thinking, consolidates their research plans, inspires novel ideas, fosters collaborations, facilitates the development of graduate students and postdocs, generates text that could be used in future papers, or clarifies aspirations.

 

The survey generated some key insights.  For example

 

  • on average, each principal investigator dedicates about 166 hours to every grant application

  • the time that individuals dedicate to a grant application is not significantly related to whether this application is successful

  • however, the number of grant applications submitted is positively associated with the number of successful applications

  • researchers are unlikely to persist with grant applications if their success rate is below 20%.

 

References

  • Bai, J., Booker, S. Q., Saravanan, A., & Sowicz, T. J. (2021). Grant Writing Doesn't Have to Be a Pain: Tips for Preparation, Writing and Dissemination. Pain Management Nursing, 22(5), 561-564.

  • Banta, M., Brewer, R., Hansen, A., & Heng-Yu, K. (2004). An innovative program for cultivating grant writing skills in new faculty members. Journal of Research Administration, 35(1), 17.

  • Bienen, L., Crespo, C. J., Keller, T. E., & Weinstein, A. R. (2018). Enhancing institutional research capacity: Results and lessons from a pilot project program. The Journal of Research Administration, 49(2), 64.

  • Conix, S., De Block, A., & Vaesen, K. (2021). Grant writing and grant peer review as questionable research practices. F1000Research

  • Glowacki, S., Nims, J. K., & Liggit, P. (2020). Determining the impact of grant writing workshops on faculty learning. Journal of Research Administration, 51(2), 58-83.

  • Guyer, R. A., Schwarze, M. L., Gosain, A., Maggard-Gibbons, M., Keswani, S. G., & Goldstein, A. M. (2021). Top ten strategies to enhance grant-writing success. Surgery, 170(6), 1727-1731.

  • Hiatt, R. A., Carrasco, Y. P., Paciorek, A. L., Kaplan, L., Cox, M. B., Crespo, C. J., ... & Diversity Program Consortium. (2022). Enhancing grant-writing expertise in BUILD institutions: Building infrastructure leading to diversity. PLoS ONE, 17(9).

  • Inouye, S. K., & Fiellin, D. A. (2005). An evidence-based guide to writing grant proposals for clinical research. Annals of internal medicine, 142(4), 274-282.

  • Jones, H. P., McGee, R., Weber-Main, A. M., Buchwald, D. S., Manson, S. M., Vishwanatha, J. K., & Okuyemi, K. S. (2017, December). Enhancing research careers: an example of a US national diversity-focused, grant-writing training and coaching experiment. In BMC proceedings (Vol. 11, No. 12, pp. 183-192). BioMed Central.

  • Kahn, R. A., Conn, G. L., Pavlath, G. K., & Corbett, A. H. (2016). Use of a grant writing class in training PhD students. Traffic, 17(7), 803-814.

  • Keogh, P. (2013). Motivation for grant writing among academic librarians. New Library World.

  • King, K. M., Pardo, Y. J., Norris, K. C., Diaz‐Romero, M., Morris, D. A., Vassar, S. D., & Brown, A. F. (2015). A community–academic partnered grant writing series to build infrastructure for partnered research. Clinical and Translational Science, 8(5), 573-578.

  • Kulage, K. M., Corwin, E. J., Liu, J., Schnall, R., Smaldone, A., Soled, K. R., ... & Larson, E. L. (2022). A 10-year examination of a one-on-one grant writing partnership for nursing pre-and post-doctoral trainees. Nursing Outlook.

  • Kulage, K. M., & Larson, E. L. (2018). Intramural pilot funding and internal grant reviews increase research capacity at a school of nursing. Nursing Outlook, 66(1), 11-17.

  • Nigrovic, P. A. (2017). Building an ARC to grant success: The Aims Review Committee. Arthritis Care & Research, 69, 459-461.

  • Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56(1), 85-107.

  • Smith, J. L., Stoop, C., Young, M., Belou, R., & Held, S. (2017). Grant-writing bootcamp: an intervention to enhance the research capacity of academic women in STEM. BioScience, 67(7), 638-645.

  • Stein, L. A. R., Clair, M., Lebeau, R., Prochaska, J. O., Rossi, J. S., & Swift, J. (2012). Facilitating grant proposal writing in health behaviors for university faculty: A descriptive study. Health Promotion Practice, 13(1), 71-80.

  • Talbert, P. Y., Perry, G., Ricks-Santi, L., Soto de Laurido, L. E., Shaheen, M., Seto, T., ... & Rubio, D. M. (2021). Challenges and Strategies of Successful Mentoring: The Perspective of LEADS Scholars and Mentors from Minority Serving Institutions. International Journal of Environmental Research and Public Health, 18(11)

  • Thorpe Jr, R. J., Vishwanatha, J. K., Harwood, E. M., Krug, E. L., Unold, T., Boman, K. E., & Jones, H. P. (2020). The impact of grantsmanship self-efficacy on early-stage investigators of the national research mentoring network steps toward academic research (NRMN STAR). Ethnicity & Disease, 30(1), 75.

  • Van Den Besselaar, P., & Mom, C. (2022). The effect of writing style on success in grant applications. Journal of Informetrics, 16(1), 101257.

  • von Hippel, T., & von Hippel, C. (2015). To apply or not to apply: A survey analysis of grant writing costs and benefits. PLoS ONE, 10(3).

  • Wiebe, N. G., & Maticka-Tyndale, E. (2017). More and better grant proposals? The evaluation of a grant-writing group at a Mid-Sized Canadian University. Journal of Research Administration, 48(2), 67-92.

  • Wilson, J. L., & Botham, C. M. (2021). Three questions to address rigour and reproducibility concerns in your grant proposal. Nature, 596(7873), 609-610.

  • Wisdom, J. P., Riley, H., & Myers, N. (2015). Recommendations for writing successful grant proposals: An information synthesis. Academic Medicine, 90(12), 1720-1725.


Strategies that increase citation rates

Introduction

Most researchers would like their publications to be cited frequently—perhaps 50 times or more—for a variety of reasons.  First, when researchers apply for promotions, jobs, or grants, they often need to record the number of times their publications are cited.  Citations, therefore, could attract promotions, jobs, or grants. Second, if none of their publications are cited, researchers will recognize their work neither shapes academic discourse nor generates benefits to society.  Fortunately, besides attempts to enhance the quality of their studies, researchers can apply a range of other practices to attract more citations.

 

Practices before publication that encourage citations: Number of authors

The number of authors on a publication may also affect its citation rate.  To illustrate, Lopez et al. (2016) explored whether a range of characteristics, including the number of authors, affects the likelihood that publications in the field of plastic surgery would be cited.  This analysis of almost 1000 publications revealed that the number of authors was positively associated with citation rate.  Research that was published by only one author was not as likely to be cited over the next five years. Perhaps, when the number of authors increases, the likelihood that one of these individuals can promote the publication effectively may rise as well.

 

The geographical distribution, and not merely the number of authors, may affect the rate of citation as well.  As Aksnes (2003) and Douglas et al. (2020) revealed, if the authors are collaborators from multiple nations, the publication is more likely to attract many citations.    

 

Nevertheless, this association between number of authors and number of citations may not be observed in all circumstances.  Alabousi et al. (2019), for example, uncovered no association between number of authors and number of citations in the Canadian Association of Radiologists Journal.

 

Practices before publication that encourage citations: Funding

Some practices before a study is published, or even before a study is conducted, can affect the likelihood of citations.  To illustrate, Kulkarni et al. (2007) showed that industry funding may affect the rate of citations.  Specifically, the researchers compared the citation rates of two sets of publications: those that had received industry funding and generated results likely to favor or please that industry, and those that had neither received industry funding nor generated such results.  The industry-funded publications with industry-favorable results attracted over 25% more citations.

 

These findings imply that industries, if pleased with results, are likely to promote the publication.  This promotion of the publication is likely to increase readership and thus citation.  Nevertheless, this study explored only research that had been published in prestigious medical journals: Lancet, JAMA, and New England Journal of Medicine.  Whether this pattern applies to other academic disciplines may warrant further research. 

Features of the study that encourage citations: Sample size

Some features of the research itself may affect the rate of citation.  For example, in studies published in the Lancet, JAMA, and New England Journal of Medicine, research with a large, rather than small, sample size was more likely to be cited (Kulkarni et al., 2007).  This finding was also observed in the spine literature (Yom et al., 2020). Alabousi et al. (2019) observed the same pattern in the Canadian Association of Radiologists Journal—although this effect of sample size, while significant, was not especially pronounced.  Conceivably, when the sample size is large, readers might feel the research is more legitimate and thus worthy of citation.

Features of the study that encourage citations: Interdisciplinary research

Interdisciplinary research—research that utilizes or integrates multiple fields of research—can affect the rate of citations.  However, as Fontana et al. (2020) reveal, this association between interdisciplinary research and citation rates is multifaceted and nuanced.  In particular, whether interdisciplinary research attracts more or fewer citations depends on the measure of interdisciplinarity.  That is, researchers differentiate between three measures of interdisciplinarity:

 

  • variety or the number of fields in a publication

  • balance or the degree to which the fields are equally important to the publication, and

  • disparity or the extent to which the fields in this publication are disparate rather than similar

 

As Fontana et al. (2020) showed, high variety—that is, publications that span multiple fields—tends to attract more citations.  Publications that are relevant to many fields attract more readers and, hence, more citations. In contrast, high levels of balance—publications in which each field is only moderately or modestly relevant—tend to attract fewer citations.  These publications are not perceived as especially useful or informative to any particular field, diminishing citations.  For similar reasons, high levels of disparity, in which the fields in a publication are very different from each other, also diminish citations.  Scholars in each field may perceive the publication as diluted by information that is not relevant to their pursuits.
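To make these three measures concrete, the sketch below shows one way they might be operationalized for a single publication, based on the distribution of fields across its cited references.  The field shares, the similarity matrix, and the specific choices of Shannon evenness for balance and mean pairwise dissimilarity for disparity are illustrative assumptions only—they are not necessarily the operationalizations Fontana et al. (2020) used.

# Illustrative sketch (Python): compute variety, balance, and disparity for one
# publication from the fields represented in its reference list (hypothetical numbers)
from itertools import combinations
from math import log

# Hypothetical share of references in each field, and pairwise field similarity
# (1 = very similar fields, 0 = completely unrelated fields)
field_shares = {"ecology": 0.5, "statistics": 0.3, "computer science": 0.2}
similarity = {
    ("ecology", "statistics"): 0.4,
    ("ecology", "computer science"): 0.2,
    ("statistics", "computer science"): 0.6,
}

variety = len(field_shares)  # number of distinct fields

# Balance as Shannon evenness: approaches 1 when all fields are equally represented
shannon = -sum(p * log(p) for p in field_shares.values())
balance = shannon / log(variety) if variety > 1 else 0.0

# Disparity as the mean dissimilarity (1 - similarity) across pairs of fields
pairs = list(combinations(field_shares, 2))
disparity = sum(1 - similarity[pair] for pair in pairs) / len(pairs)

print(f"variety = {variety}, balance = {balance:.2f}, disparity = {disparity:.2f}")

Under these illustrative choices, a publication that cites many, equally weighted, and closely related fields would score high on variety and balance but low on disparity.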

Features of the study that encourage citations: Level of evidence

Some research has explored whether level of evidence—such as whether the study is a randomized control trial or not—affects the citation rate.  Oravec et al. (2019) showed that level of evidence is positively associated with citation rate in neurosurgical research.  Researchers may be more inclined to cite more compelling studies.   Nevertheless, this finding has not been replicated in many other fields.

 

Features of the writing that encourage citations: Number of references and words

Publications that comprise many references are more likely to be cited.  Alabousi et al. (2019), for example, revealed a positive, albeit small, association between number of references and number of citations in the Canadian Association of Radiologists Journal.  Perhaps scientists are more likely to read, and subsequently to cite, publications that refer to their work.

 

Some features of the reference list may also impinge on the citation rate.  For example, as Aksnes (2003) revealed, if a publication cites a diversity of journals—journals in both the same academic discipline and other academic disciplines—the article is more likely to attract many citations.   

 

Number of references might also coincide with another determinant of citation rate: length of publications.  In general, longer publications tend to attract more citations (e.g., Ball, 2008).  This finding could be ascribed to the possibility that longer publications entail more premises or propositions to cite.

 

Features of the writing that encourage citations: Clarity of language

The clarity and precision of language may also affect the citation rate.  To illustrate, as Martínez and Mammola (2021) revealed, if the title or abstract contains jargon, the publication is less likely to be cited.  Presumably, readers are more inclined to disregard publications that are replete with jargon—even publications that might otherwise be relevant to their work.  Perhaps for similar reasons, when the title of a publication is longer, the article is less likely to be cited (Jamali & Nikzad, 2011). A clear abstract not only attracts more citations but has also been shown to increase media attention, as measured by Altmetric scores (Jin et al., 2021).

 

Features of the writing that encourage citations: Title and abstract

Other features of the title and abstract may also influence the citation rate.  To illustrate, as many scholars recommend, researchers should repeat the keywords of the article in the title and abstract.  Search engines are more likely to retrieve publications in which the keywords are repeated in the title and abstract (for the rationale, see Jones & Evans, 2013).

 

Features of the writing that encourage citations: English versus other languages

Many researchers are not native English speakers but can write and communicate in this language.  These researchers may therefore choose to publish in their native language or in English.  Many decide to publish in English, assuming this language will attract more citations.

 

To assess this possibility, Di Bitetti and Ferreras (2017) examined the effect of language on the number of citations a publication attracted.  The analysis revealed that English-language publications are more likely than other publications to attract citations.  This benefit of English persisted even after controlling for the journal, the length of the articles, and the year of publication.
 

Features of the journal that encourage citations

Several features of the journal affect the likelihood and frequency of citations.  For example, as reviewed by Ale Ebrahim et al. (2013), publications attract more citations if

 

  • the impact factor of the journal is high (Vanclay, 2013)

  • the publication is available to the public at no cost. 

 

Research that is available to the public at no cost is called open access.  Interestingly, research that is open access, rather than unavailable to the general public, receives about 20% more citations (see Gargouri et al., 2010; MacCallum & Parthasarathy, 2006).  Three models are commonly used to arrange open access.

 

  • Gold open access: Researchers pay the journal an additional fee, and the article is then available to the public at no cost.

  • Diamond open access: All articles in the journal are available to the general public at no cost.  Typically, a society or association manages these journals, and membership fees cover the costs of production.

  • Green open access: Researchers are permitted to deposit a version of their paper—usually the accepted manuscript—in a repository that anyone can access.

 

Practices after publication that encourage citations: Shared data

Researchers can now share their data on a range of platforms, such as figshare.  After researchers share their data, the publications that report these data are more likely to be cited extensively.

 

Piwowar et al. (2007) explored this possibility.  Specifically, they examined 85 publications of cancer microarray clinical trials.  In approximately half of these publications, the dataset was publicly available.  These publications were more likely than the other studies to attract citations.  Indeed, 85% of the citations to these 85 publications were directed at the articles in which the dataset was available.  More precisely, publications whose datasets are released to the public tend to receive 69% more citations.  This pattern was observed even after the impact factor of the journal, country of origin, and date of publication were controlled.
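The term "controlled" here refers to a statistical adjustment: the association between data sharing and citations is estimated while other influences are held constant.  The sketch below illustrates one common way to perform such an adjustment with a count-based regression.  The toy data, the variable names, and the choice of a Poisson model are assumptions for illustration only; they are not Piwowar et al.'s (2007) actual data or analysis.

# Illustrative sketch (Python): an adjusted comparison of citation counts
# between papers that did and did not share data
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data
papers = pd.DataFrame({
    "citations":     [12, 30, 8, 45, 22, 5, 60, 18],
    "data_shared":   [0, 1, 0, 1, 1, 0, 1, 0],
    "impact_factor": [3.2, 6.1, 2.8, 7.5, 5.0, 2.1, 9.3, 4.4],
    "pub_year":      [2003, 2002, 2004, 2001, 2003, 2005, 2000, 2002],
})

# Poisson regression is a common choice for count outcomes such as citations;
# impact factor and publication year are included as control variables
model = smf.poisson("citations ~ data_shared + impact_factor + pub_year",
                    data=papers).fit()
print(model.summary())

# exp(coefficient of data_shared) approximates the multiplicative change in
# expected citations associated with sharing data, holding the other variables constant

In analyses of this kind, a reported figure such as "69% more citations" typically corresponds to the exponentiated coefficient of the indicator of interest.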

 

Practices after publication that encourage citations: Social media

Social media, such as Twitter and LinkedIn, can also affect the citation rate of publications.  For example, as a variety of studies have shown, after research is publicized on Twitter, the research is more likely to be cited (see Clavier et al., 2020).  This pattern, for example, has been observed in several medical fields, such as coloproctology (Jeong et al., 2019), radiation oncology (Paradis, et al., 2020), gastrointestinal endoscopy (e.g., Smith et al., 2019), and urology (e.g., Hayon et al., 2019).

 

Indeed, Eysenbach (2011) showed this effect of Twitter on citations can be pronounced.  This study examined all tweets that included links to articles in the Journal of Medical Internet Research. The publications that were mentioned frequently on Twitter were more likely to be cited than publications that were mentioned only a few times or not at all.  About 75% of publications that were mentioned frequently on Twitter were cited extensively, whereas only about 7% of publications that were mentioned infrequently on Twitter were cited extensively.

 

Despite these findings, media exposure is not always related to citation rate.  For example, O'Connor et al. (2017) explored the association between citation rate and Altmetrics—a measure of media exposure—in research on urology.  This research did not uncover a significant association between citation rate and Altmetrics.  Arguably, the qualities of research that attract other scholars do not always overlap with the qualities that attract media attention or the public.  Likewise, publications that are cited in Wikipedia do not necessarily attract more citations in scholarly journals (e.g., Marashi et al., 2013).

 

References

  • Abt, H. A. (1998). Why some papers have long citation lifetimes.  Nature, 395(6704), 756-757.

  • Aksnes, D. W. (2003). Characteristics of highly cited papers. Research Evaluation, 12(3), 159-170.

  • Alabousi, M., Zha, N., & Patlas, M. N. (2019). Predictors of citation rate for original research studies in the Canadian Association of Radiologists Journal. Canadian Association of Radiologists Journal, 70(4), 383–387.

  • Ale Ebrahim, N., Salehi, H., Embi, M. A., Habibi, F., Gholizadeh, H., Motahar, S. M., & Ordi, A. (2013). Effective strategies for increasing citation frequency. International Education Studies, 6(11), 93-99.

  • Ball, P. (2008). A longer paper gathers more citations. Nature, 455(7211), 274-275.

  • Chiang, A. L., Rabinowitz, L. G., Alakbarli, J., & Chan, W. W. (2016). Social media exposure is independently associated with increased citations of publications in gastroenterology. Gastroenterology, 150(4), S845.

  • Clavier, T., Besnier, E., Blet, A., Boisson, M., Sigaut, S., Frasca, D., Abou Arab, O., Compère, V., Buléon, C., & Fischer, M.-O. (2020). A communication strategy based on Twitter improves article citation rate and impact factor of medical journals. Anaesthesia Critical Care & Pain Medicine, 39(6), 745–746.

  • Dhawan, S., & Gupta, B. (2005). Evaluation of Indian physics research on journal impact factor and citations count: A comparative study. DESIDOC Journal of Library & Information Technology, 25(3), 3-7.

  • Di Bitetti, M. S., & Ferreras, J. A. (2017). Publish (in English) or perish: The effect on citation rate of using languages other than English in scientific publications. Ambio, 46(1), 121–127

  • Douglas, T. S., Chimhundu, C., Harley, Y. X., & De Jager, K. (2020). Collaboration and citation impact: Trends in health sciences research at the University of Cape Town. South African Journal of Science, 116(3-4), 51–58.

  • Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of medical Internet research, 13(4).

  • Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., & Brody, T., et al. (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PLoS ONE, 5(10), e13636.

  • Giuffrida, M. A., & Brown, D. C. (2012). Association between article citation rate and level of evidence in the companion animal literature. Journal of Veterinary Internal Medicine, 26(2), 252–258

  • Hamrick, T. A., Fricker, R. D., & Brown, G. G. (2010). Assessing what distinguishes highly cited from less-cited papers published in interfaces. Interfaces, 40(6), 454-464.

  • Hayon, S., Tripathi, H., Stormont, I. M., Dunne, M. M., Naslund, M. J., & Siddiqui, M. M. (2019). Twitter mentions and academic citations in the urologic literature. Urology, 123, 28-33.

  • Jamali, H. R., & Nikzad, M. (2011). Article title type and its relation with the number of downloads and citations.  Scientometrics, 88(2), 653-661.

  • Jeong, J. W., Kim, M. J., Oh, H. K., Jeong, S., Kim, M. H., Cho, J. R., ... & Kang, S. B. (2019). The impact of social media on citation rates in coloproctology. Colorectal Disease, 21(10), 1175-1182.

  • Jin, T., Duan, H., Lu, X., Ni, J., & Guo, K. (2021). Do research articles with more readable abstracts receive higher online attention? Evidence from Science. Scientometrics, 126, 8471-8490.

  • Jones, K., & Evans, K. (2013). Good practices for improving citations to your published work. University of Bath.

  • Kulkarni, A. V., Busse, J. W., & Shams, I. (2007). Characteristics associated with citation rate of the medical literature. PloS One, 2(5), e403–e403.

  • Lopez, J., Calotta, N., Doshi, A., Soni, A., Milton, J., May, J. W., & Tufaro, A. P. (2016). Citation rate predictors in the plastic surgery literature. Journal of Surgical Education, 74(2), 191–198

  • MacCallum, C. J., & Parthasarathy, H. (2006). Open access increases citation rate. PLoS Biol, 4(5)

  • Marashi, S.-A., Seyed Mohammad Amin, H.-N., Alishah, K., Hadi, M., Karimi, A., & Hosseinian, S., et al. (2013). Impact of Wikipedia on citation trends. EXCLI Journal, 12, 15-19.

  • Martínez, A., & Mammola, S. (2021). Specialized terminology reduces the number of citations of scientific papers. Proceedings of the Royal Society B, 288(1948), 20202581.

  • O'Connor, E. M., Nason, G. J., O'Kelly, F., Manecksha, R. P., & Loeb, S. (2017). Newsworthiness vs scientific impact: are the most highly cited urology papers the most widely disseminated in the media? BJU international, 120(3), 441-454.

  • Oravec, C. S., Frey, C. D., Berwick, B. W., Vilella, L., Aschenbrenner, C. A., Wolfe, S. Q., & Fargen, K. M. (2019). Predictors of Citations in Neurosurgical Research. World Neurosurgery, 130, e82–e89

  • Paradis, N., Knoll, M. A., Shah, C., Lambert, C., Delouya, G., Bahig, H., & Taussky, D. (2020). Twitter: a platform for dissemination and discussion of scientific papers in radiation oncology. American Journal of Clinical Oncology, 43(6), 442-445.

  • Piwowar, H. A., Day, R. S., & Fridsma, D. B. (2007). Sharing detailed research data is associated with increased citation rate. PLoS ONE, 2(3), 308.

  • Quintana, D. S., & Doan, N. T. (2016). Twitter article mentions and citations: an exploratory analysis of publications in the American Journal of Psychiatry. American Journal of Psychiatry, 173(2), 194-194.

  • Shiri, R. (2016). Predictors of citation rate. Annals of Epidemiology, 26(2), 160–160.

  • Smith, Z. L., Chiang, A. L., Bowman, D., & Wallace, M. B. (2019). Longitudinal relationship between social media activity and article citations in the journal Gastrointestinal Endoscopy. Gastrointestinal endoscopy, 90(1), 77-83.

  • Tonia, T., Van Oyen, H., Berger, A., Schindler, C., & Künzli, N. (2016). If I tweet will you cite? The effect of social media exposure of articles on downloads and citations. International Journal of Public Health, 61(4), 513-520.

  • Vanclay, J. K. (2013). Factors affecting citation rates in environmental science. Journal of Informetrics, 7(2), 265-271.

  • Webster, G. D., Jonason, P. K., & Schember, T. O. (2009). Hot topics and popular papers in evolutionary psychology: Analyses of title words and citation counts in Evolution and Human Behavior, 1979-2008.  Evolutionary Psychology, 7, 348-362.

  • Yom, K. H., Jenkins, N. W., Parrish, J. M., Brundage, T. S., Hrynewycz, N. M., Narain, A. S., ... & Singh, K. (2020). Predictors of citation rate in the spine literature. Clinical Spine Surgery, 33(2), 76-81.


Every 6 or so years, universities in Australia complete the Engagement and Impact Assessment—an exercise that evaluates the utility of their research.  Similar exercises have been implemented, or may soon be implemented, in many other nations as well.  Although this program is likely to be refined or even superseded quite soon, the evaluation of impact is likely to be an enduring feature of universities.  Therefore, universities need to be attuned to how they can improve their performance on these evaluations.    

 

Predecessors of the Engagement and Impact Assessment in Australia

In May 2004, the Australian Government released a series of announcements—under the package called “Backing Australia’s Ability—Building our Future through Science and Innovation” that included a plan to assess the quality of Australian research that is publicly funded.  In 2005, an Expert Advisory Group, chaired by Professor Gareth Roberts, endorsed a paper on the Research Quality Framework.  After some debate, an updated model was signed off by Brendan Nelson, the Minister for Education, Science and Training.  To assess the quality of research, this model considered both academic merit—as recognized by peers—and broader impact, as recognized by end users.  Nevertheless, the report did not clarify how this broader impact would be measured. In January 2006, the new minister, Julie Bishop, replaced the Expert Advisory Group with the RQF Development Advisory Group, chaired by the Chief Scientist at the time.

 

This RQF Development Advisory Group established working groups on metrics, impact, and IT. In September 2006, the working group on research impact, after meeting four times, released a report on principles that should be applied to measure impact.  For example, they recommended that

 

  • when the impact of research is assessed, impacts over the last six years should be included, even if the research was conducted more than six years ago

  • the exercise should assess the impact of research groupings, comprising five or more academics

  • to assess impact, the submission of a research grouping should be fewer than 10 pages and include a statement that outlines the impact of research, according to various criteria, four case studies to substantiate this statement, and end users who can act as referees

  • indicators should be verifiable and auditable.

  • expert assessment panels should comprise six core members, three end-users to assess impact, and three relevant researchers to assess quality

 

The working group suggested possible indicators of research impact but acknowledged the limitations of these metrics. These recommendations significantly shaped subsequent reports and practices.  At this time, the degree to which impact assessments would shape the distribution of funding was uncertain but was assumed to be about 10% (Grant et al., 2009). In November 2006, the Minister announced the Research Quality Framework exercise would be implemented in 2008 and would measure both quality and impact.

 

In 2007, 13 assessment panels were established, and the submission and technical specifications were published.  However, after the new Labor government was installed in late 2007, the Minister for Innovation, Industry, Science and Research revealed the Research Quality Framework would be abandoned, because the exercise was unnecessarily expensive and the measure of impact was unverifiable.

 

Around this time, the UK announced a plan to update its Research Assessment Exercise in response to previous concerns.  This exercise and its predecessor, the Research Selectivity Exercise, had been conducted six times before 2008.  Between November 2007 and February 2008, the Higher Education Funding Council for England (HEFCE) coordinated a consultation, primarily to streamline the costs of these exercises and to prioritize quantitative measures over peer review, recognizing the tension between efficiency and accuracy.  This review, however, uncovered ardent support for the introduction of impact criteria rather than a reliance on citation rates.

 

Therefore, in 2009, the HEFCE commissioned a review of how research agencies around the globe measure impact.  The report suggested the Australian Research Quality Framework could underpin the updated UK model, called the Research Excellence Framework.  A subsequent consultation did uncover some reservations, however, especially about the notion that measures of impact depend on political priorities.  In 2010, the HEFCE piloted an impact exercise across 29 institutions and developed criteria to measure the significance—the degree to which research has influenced change—and reach—the extent to which these changes have influenced many end users—of research.  The pilot was deemed a success, suggesting that narrative case studies are feasible and effective.  Despite this apparent success, over 17,500 UK academics signed a petition to remove this impact assessment, maintaining the exercise shifts the meaning and definition of research excellence.

 

In response to these concerns, in 2010, the incoming UK government deferred the Research Excellence Framework by one year, enabling the HEFCE to improve the design, especially the measure of impact. In March 2011, the Research Excellence Framework was updated.  Specifically, 20% of the weight of each submission was assigned to the measure of impact, 5% less than originally planned.  Research outputs were assigned 65%, and research environment was assigned 15%.

 

Around this time, the Department of Innovation, Industry, Science and Research recommended an exercise that measures impact, in parallel to the updated assessment of research quality in Australia—the Excellence in Research for Australia—to assess the benefits of publicly funded research.  In 2012, the equivalent department, after conducting a feasibility study, announced two schemes to assess impact: one scheme that explores the impact of research at each university, with reference to case studies and metrics, and a second scheme that integrates data from government departments, publicly funded research agencies, and universities.  Nevertheless, in 2013, the CEO of the Australian Research Council, Aidan Byrne, expressed doubts about measures of impact, because of the potential to manipulate these metrics. He also suggested that funding applications already encourage researchers to consider and to enhance the impact of their research.

 

Tensions between the Australian Research Council and the government culminated in the publication of the National Innovation and Science Agenda in December 2015.  This publication foreshadowed the engagement and impact assessment (Watt, 2015), administered in parallel with the Excellence in Research for Australia.  In May 2016, the Australian Research Council released a consultation paper on this engagement and impact assessment—a paper that was significantly shaped by the UK Research Excellence Framework.  A pilot was completed in the first half of 2017 to assess the approach.  This pilot distinguished two measures that are combined in the UK version: engagement and impact.  Engagement was defined as the interactions between researchers and end users to exchange knowledge, technologies, methods, and resources.  To demonstrate engagement, universities submit a set of indicators—such as patents, patent citation data, co-authorship, and research income—and an accompanying narrative that delineates the content and other engagement activities.  Impact was defined as the contribution of research to the economy, society, and the environment.  To demonstrate impact, universities submit impact studies that correspond to one field of research (FOR) code and delineate how they promote or enable research impact.

 

This evaluation of impact is not confined to the UK or Australia.  For example, the Research Quality Evaluation in Italy and the Standard Evaluation Protocol in the Netherlands also measure the impact of research (Zheng et al., 2021).   

 

Outline of the approach: The ratings

The Australian Research Council evaluates the engagement and impact of each discipline at Australian universities, provided the discipline produces enough publications. Each discipline is assigned a rating of high, medium, or low on three measures: engagement, approach to impact, and impact (see the EI 2018 Submission Guidelines).  Specifically

 

  • high engagement implies the research team and end users exchange knowledge, technologies, methods, and resources very effectively to the benefit of all parties

  • high approach to impact implies the research team has introduced very effective mechanisms to translate research into benefits outside academia

  • high impact implies the impact of this research outside academia was highly significant

 

Medium engagement, approach to impact, and impact are defined similarly, except strong terms like “very” or “highly” are replaced with moderate terms like “quite”.  In addition

 

  • low engagement implies the evidence that the research team and end users exchange knowledge, technologies, methods, and resources is limited

  • low approach to impact implies the mechanisms the research team has introduced to translate research into benefits outside academia are inadequate or unsustainable

  • low impact implies the impact of this research outside academia was negligible or limited.

 

Outline of the approach: Submissions to demonstrate engagement

To gauge engagement, the Australian Research Council considers both quantifiable and objective data—such as patents, patent citation data, co-authorship, and research income—as well as the narrative that universities supply.  As the Australian Research Council recommends, these narratives may include information about

 

  • engagement with stakeholders, such as co-location of industry partners on campus, secondments, joint conferences with industry partners, an innovation hub, and citizen science

  • shared advice and resources, such as development of training workshops, webinars, and videos with industry partners or shared use of specialized equipment, infrastructure, and resources

  • other information, such as awards and alternative metrics

 

Outline of the approach: Submissions to demonstrate impact

Universities also need to submit impact studies, detailing how the research of a team benefited society.  To illustrate, universities may refer to data or tangible changes that demonstrate how their research

 

  • delivered technologies in partnership with government, business, or community beneficiaries

  • inspired changes to the resilience, cohesion, and performance of communities, such as parenting programs.

  • addressed an important problem in society, such as climate change, food security, or cultural preservation in Aboriginal and Torres Strait Islander communities

  • enhanced safety and justice, such as improvements in health and safety standards, defense, sentencing, and medical treatments  

  • improved experiences that relate to everyday life, such as education campaigns to promote health or better transport planning

 

Outline of the approach: Submissions to demonstrate effective approach to impact

Universities must also delineate how they attempted to enhance the impact of their research on society.  In these submissions, they may refer to

 

  • engagement with local, state, and federal governments, such as joint projects or contribution to government committees

  • MOUs, joint ventures, and other agreements to help the institution collaborate with industry partners

  • forums, workshops, or seminars that engage industry leaders and the general public

  • staff exchanges

 

Examples of impact

Since the advent of the Engagement and Impact Assessment in Australia and the Research Excellence Framework in the UK, institutions have submitted many case studies to exemplify the impact of their research.  To uncover the key themes and features of these case studies, Zheng et al. (2021) subjected almost 7000 of these submissions to a set of analyses, such as word co-occurrence network analysis and topic modeling—an unsupervised machine learning technique that can unearth underlying themes in a collection of documents. Specifically, topic modeling uncovers clusters of words or phrases that exhibit similar patterns, such as words that tend to appear in the same documents (a minimal code sketch of this technique appears after the following list).  For example, one analysis explored how research can translate to impact.  As this analysis revealed, research can

 

  • increase the number or diversity of people and groups that undertake some act, such as wash their hands

  • increase the duration in which people and groups undertake some act

  • increase the extent to which people and groups undertake some act

  • enhance the efficiency, or reduce the cost, time, and effort, to undertake some act

  • enhance the benefits or utility of some act

  • enhance the experience of some act, such as by diminishing the level of stress

  • increase awareness of some act or opportunity

  • inform debates and discussions around potential solutions and problems—especially relevant to philosophical matters, such as the ethics of AI
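As flagged above, the following is a minimal sketch of topic modeling, using latent Dirichlet allocation from scikit-learn.  The three toy case-study sentences and all parameter settings are invented for illustration; Zheng et al. (2021) may have used different tools, corpora, and settings.

# Minimal topic-modeling sketch (Python, illustrative only)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for impact case-study texts
case_studies = [
    "Our parenting program improved community cohesion and family wellbeing",
    "The new diagnostic device reduced treatment costs for rural patients",
    "Findings informed public debate on the ethics of artificial intelligence",
]

# Convert the documents into word counts
vectorizer = CountVectorizer(stop_words="english")
word_counts = vectorizer.fit_transform(case_studies)

# Fit a small model; each topic is a distribution over words
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(word_counts)

# Display the words that best characterize each topic
terms = vectorizer.get_feature_names_out()
for index, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {index}: {', '.join(top_words)}")

In a real application, thousands of case studies and more topics would be used, and each document's mix of topics would then indicate which impact themes dominate.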

 

Potential changes in the future

After their analysis of the UK and Australian assessments of impact, Williams and Grant (2018) underscored a range of practices that could be updated and debated in the future to gauge impact.  For example

 

  • the councils may introduce mandatory and optional fields to guide the case studies or impact studies that universities submit

  • the councils may consider how impact should be attributed—because impact can be ascribed to specific research studies or general research activity, designed to translate the gamut of research in a field to practice

  • the councils may consider whether the quality of research should affect measures of impact; for example, research that is deemed low in quality could be excluded from case studies

  • the councils may consider audits to verify the claims that are included in case studies or impact studies

  • the councils may need to decide whether case studies or impact studies that were submitted in one round can be submitted in future rounds

 

Furthermore, the councils may vary in the degree to which they prioritize the outcomes of research over the practices that were implemented to enhance impact (for a discussion, see Gunn & Mintrom, 2018).  The assessment could primarily weight actual impact.  However, the actual impact of research depends on many considerations the university cannot shape, such as the resources of organizations.  If outcomes are weighted heavily, researchers might be reluctant to collaborate with organizations that are bereft of capital—the very organizations that might benefit from research.  Conversely, the assessment could primarily weight the practices the institutions applied to enhance impact.  However, evaluation of these practices may not be as robust, consistent, or beneficial.

 

Potential improvements to the Engagement and Impact Assessment in Australia: Measure of engagement

In 2020, the Australian Business Deans Council submitted some concerns about the engagement and impact assessment exercise in response to the review that was conducted by the Australian Research Council.  For example, according to their perspective (see Grant, 2020)

 

  • HERDC (Higher Education Research Data Collection) data, used to measure cash support from research end users, may not always measure engagement accurately.  The income contributions of an NGO, even if small relative to the income contributions of a large corporation, might be significant to this organization and thus demonstrate strong engagement.

  • institutions should be able to submit more than one narrative for each broad discipline.  The Australian Business Deans Council recommends that the number of narratives depend on the number of staff within the discipline, to characterize the range of engagement activities effectively.

  • the engagement narratives should include other evidence, such as testimonials from end users, to corroborate the narrative

  • the threshold between medium engagement and high engagement seems hazy

 

Potential improvements to the Engagement and Impact Assessment in Australia: Recommendations adapted from the Stern Review

In 2016, Lord Stern completed a review of the Research Excellence Framework in the UK.  The aim of this review was to diminish the costs and burden of this exercise, while identifying and fostering excellence.  The review integrated data from 40 interviews with universities, academics, and end users, stakeholder meetings, and over 300 written submissions. According to Bolingford (2017), some of these recommendations could be applied to the Engagement and Impact Assessment exercise in Australia.  For example

 

  • Stern recommends that impact should also encompass the effect of research on teaching, pedagogy, cultural life, public engagement, and the generation of disciplines.  The existing approach considers impact outside academia—but disregards impact that spans the boundary between academia and society.

  • Stern also recommended that institutions be able to submit case studies that span many disciplines and represent the impact of institutions holistically.  An expert interdisciplinary panel would assess these case studies.  Institutional case or impact studies could be included in the Engagement and Impact Assessment as well.

 

Concerns raised about the Engagement and Impact Assessment in Australia: Accountability

Some researchers underscore the paradox that research assessment exercises, such as the Engagement and Impact Assessment in Australia, are designed to enhance the accountability and transparency of research.  And yet, these exercises are not especially accountable or transparent, according to these scholars (e.g., Sawczak, 2018).  That is, researchers cannot readily evaluate whether the costs of these exercises outweigh the benefits, given that

 

  • the costs of these exercises are uncertain because, since 2018, universities have not reported the time they dedicated to the preparation of submissions to the Excellence in Research for Australia exercise.

  • the effect of these exercises on the quality and impact of research is uncertain as well

 

Practices to improve submissions of engagement and impact assessments in Australia: Support roles

To submit engagement and impact assessments successfully, staff at universities need to have developed a range of competencies.  Nicholson and Howard (2018) subjected position descriptions and job advertisements of research support roles in Australia and the UK to content analysis.  To locate suitable position descriptions and job advertisements, the researchers entered search terms such as research impact officer, engagement impact officer, impact officer, research officer, and research support officer. Furthermore, the researchers compared the competencies that emerged from this analysis with skills audits in library and information science.

 

The analysis uncovered a set of personal and interpersonal skills, such as communication, negotiation, leadership, teamwork, problem solving, initiative, and organizational skills, that are central to supporting the engagement and impact assessment.  More relevant to this study, however, the analysis also uncovered more technical skills, such as industry knowledge, project management, software, IT, and web knowledge, funding knowledge, report writing, as well as information and data management.  Interestingly, skills audits in library and information science encompassed virtually all these technical competencies—except social media management.  This study, therefore, suggests that library and information science professionals are appropriately positioned to support the administration and submission of the engagement and impact assessment.

 

Practices to improve ratings on engagement and impact assessments in Australia: Research pitch tools

To enhance ratings on engagement and impact assessments, scholars have recommended that researchers learn how to design research that fulfills the needs of industry partners more effectively and, therefore, is more likely to foster engagement and impact.  To illustrate, Faff et al. (2021) developed a research pitch tool that achieves this goal, called Pitching Research for Engagement and Impact or PR4EI. This tool is predicated on some key assumptions, such as

 

  • researchers need to direct their research to the actual problems of key stakeholders

  • to understand these problems, researchers must interact with the most important stakeholders—the individuals, teams, organizations, or communities this problem affects to the greatest extent

  • research, however, is not applicable to problems that must be solved too rapidly; in these instances, stakeholders must apply existing policies, expert intuition, and other approaches

  • engagement that demands secrecy, expedient solutions, and rampant pragmatism but precludes careful analysis or publication is unlikely to benefit from research

 

The pitching template prompts researchers to consider various determinants of engagement and impact.  Faff et al. (2021) recommend that researchers first utilize a template to develop a research proposal.  Specifically, the researchers should specify

 

  • a working title

  • a basic research question—that is, the controversy or limitation in the research they want to solve

  • key papers: a few recent publications that underscore this controversy or limitation

  • the puzzle or paradox the research is designed to solve

  • an idea, method, or approach to solve this puzzle

  • the data or information the researcher will collect to solve this puzzle—such as survey data, secondary data, interview data, and so forth

  • the tools or equipment the researcher will use to collect the data or information

  • the unique features of this research

  • how this research could be applied in policy or practice.

 

After they design one or more provisional research projects, researchers should complete another template that is designed to shift their perspective to the needs of stakeholders.  For example, they should record

 

  • the working title: similar to the previous research project, but tilted towards impact

  • the basic impact goal: one sentence to describe who or what could benefit from your research and how

  • external triggers: important or controversial events or reports from industry, government, or scholarly literature that underscore or illustrate the problem

  • motivation or problem: a brief description of the problem you want to address and why

  • stakeholders: three stakeholders who are interested in this problem—together with why they are interested in this problem, how they are striving to solve this problem, their limitations or shortcomings, whether they could invest money to solve the problem, and whether the stakeholders are collaborators or competitors of one another

  • value proposition: the benefits these stakeholders enjoy if this problem is solved, such as business outcomes or capability

  • resources: the resources you need to achieve this impact, such as funds, time, people, intellectual property, and skills—and how you will access these resources, such as grants

  • communication strategy: how will the key stakeholders learn about your research—such as industry publications, social media, presentations, mentoring, participation in committees, emails, joint ventures, or meetings in person

  • metrics: measures to assess engagement and impact, such as shares on social media, reach of publications, consulting revenue, or measured effects on people and organizations

  • impact: two sentences to delineate the essence of this impact—such as whether the research is likely to guide policy, management practices, professional practices, social behavior, or public discourse.  Does this impact overlap with academic objectives too?

  • other considerations: also consider the role of collaborators, IP ownership with collaborators, potential to commercialize, and an analysis of reputational, competition, and compliance risks

 

Once researchers complete this template, they can utilize their responses to inform subsequent activities.  Researchers could use these responses to guide their conversations with potential collaborators, industry partners, or other stakeholders.  Or they could use these answers to improve grant applications, promotion applications, and so forth.  Over time, they might, iteratively and gradually, improve this template.  To illustrate, they might construct a more comprehensive stakeholder analysis matrix—in which they evaluate the influence and importance of many relevant stakeholders.  They could then apply this tool to more than three stakeholders.

 

References

  • Bolingford, I. (2017). The Research Excellence Framework and the Stern Review–Implications for Australia’s Engagement and Impact Assessment Framework. Available at SSRN 3414086.

  • Faff, R., Kastelle, T., Axelsen, M., Brosnan, M., Michalak, R., & Walsh, K. (2021). Pitching research for engagement and impact: A simple tool and illustrative examples. Accounting and Finance, 61(2), 3329–3383.

  • Grant, D. (2020). Re: Review of Excellence in Research for Australia (ERA) and the Engagement and Impact Assessment (EI).

  • Gunn, A., & Mintrom, M. (2018). Measuring research impact in Australia. The Australian Universities’ Review, 60(1), 9–15.

  • Nicholson, J., & Howard, K. (2018). A study of core competencies for supporting roles in engagement and impact assessment in Australia. Journal of the Australian Library and Information Association, 67(2), 131–146.

  • Sawczak, K. (2018). The hidden costs of research assessment exercises: the curious case of Australia. Impact of Social Sciences Blog.

  • Tsey, K. (2019). Working on wicked problems: A strengths-based approach to research engagement and impact. Springer.

  • Watt, I. J. (2015). Report of the review of research policy and funding arrangements. Department of Education and Training, Commonwealth of Australia

  • Williams, K., & Grant, J. (2018). A comparative review of how the policy and procedures to assess research impact evolved in Australia and the UK. Research Evaluation, 27(2), 93–105

  • Zheng, H., Pee, L. G., & Zhang, D. (2021). Societal impact of research: a text mining study of impact types. Scientometrics, 126(9), 7397–7417.
