
Bridging the Gap between Research and Policy: Lessons from Co-Creation in the Aid Sector

There is an increasing focus in academic and policy circles on research-policy partnerships. These partnerships are often achieved through co-creation, or “the joint production of innovation between combinations of industry, research, government and civil society.” Co-creation is central to innovation in the hard sciences and technology, but its role in international relations scholarship and aid policy remains underdeveloped.

As scholars of international aid practice, we believe that co-creation can help us design and conduct more relevant, rigorous, and impactful research. It is also a core mission of the Research on International Policy Implementation Lab (RIPIL), whose co-creation process engages policymakers and practitioners in: 1) the generation of important, policy-relevant research questions; 2) research on these questions, through regular validation and consultation; and 3) the development and dissemination of findings and their policy implications, which often leads to the identification of important new research questions and opportunities.

In this piece, we focus on the first phase: the co-creation of research questions. This is one of the trickiest phases of the co-creation process because it requires researchers and policymakers to find a common question and research design that aligns with academics’ incentive to publish rigorous research and policymakers’ incentive to feed evidence into the policy process. Future blog posts will discuss how to implement co-created research and disseminate co-created findings.

At the beginning of the COVID-19 pandemic, we initiated a collaboration intended to generate policy-relevant research questions on the changing nature of international aid. Our aim was to get a sense of whether the combined shocks of COVID-19, growing calls to decolonize aid, and the rise of populism and popular protest had changed the underlying power dynamics in aid. 

Importantly, in this project, we did not just want to learn from practitioners based in Western Europe or North America. We wanted insights from key thinkers and actors from the context where aid dependency has been most acute: the African continent. We wanted to understand how these thought leaders viewed aid-related power dynamics and how research could help answer their most puzzling questions.

Between 2020 and 2022, we conducted one-on-one interviews, organized virtual focus groups, and hosted a high-level roundtable in Geneva with donor governments and international non-governmental organizations (INGOs) on power in aid, all to better understand the changing nature of aid and the research questions that matter to policymakers, practitioners, and key African thinkers.

A synthesis of our thematic findings is available here. In this blog, we discuss our four most important lessons learned about the co-creation process itself. 

First, co-creation requires scholars to bring knowledge to the table and to put the voices of others at the center

We saw our discussions as an exchange of knowledge. Therefore, we wanted to make sure that we brought something to the table. Before each meeting and workshop, we circulated a summary of the existing research and our discussions from previous meetings. Having set the stage with these syntheses, we then focused each interview and workshop on listening (not talking). This allowed us to build on the existing academic knowledge, and to use the conversations to identify how it diverged from the everyday experiences of our interviewees and workshop participants. It also allowed each participant to arrive feeling well-prepared, in part through the materials we provided.

Importantly, we began each workshop and roundtable with presentations by African scholars and practitioners. They helped shape the power dynamics of the conversation from the outset.

The process worked. Our preparation, planning, and careful facilitation enabled open and respectful communication among key African thinkers and representatives of INGOs, donor governments, recipient governments, and the United Nations.

Second, co-creation requires regular communication, persistence, and respect

We wanted to hear what donors, recipients, and key observers had to say about changing power dynamics around aid. We wanted to understand the perspectives of people from different recipient and donor countries to see if there might be broader trends.

Most of the people who participated in our discussions did not know us or each other. To enable an open conversation, we had to create an environment where they could trust us, and each other. This took time. We had to reach out to people repeatedly; build relationships through one-on-one conversations at the beginning of the process; and use these one-on-one conversations and our repeated meetings over time to establish our own credibility. This paid off in the quality of the conversations we were eventually able to have in our focus-group discussions and, subsequently, in our high-level panel in Geneva.

Third, co-creation requires researchers to be flexible and willing to let go of their prior expectations

If the point of co-creation is to spark new lines of inquiry, researchers involved in co-creation must be willing to let go of the questions they think they should be asking and be open to the questions that others think are most important.

In our initial one-on-one conversations, we focused on asking open-ended, big picture questions to get a sense of whether participants thought power dynamics had changed and, if so, how. In some cases, their answers confirmed our assumptions. In others, we were surprised by new insights.

For example, respondents indicated that the rise of populism in Africa was leading to a backlash against aid recipients and donors. This led us to start a new research project on aid and populism that we could not have imagined at the beginning of the process. 

Fourth, co-creation requires a considerable time investment

This is hard to overstate. Co-creating research questions involves the translation and transfer of ideas between science, policy, and practice over an extended period. This means that researchers should not engage in co-creation expecting quick wins or immediate research results. Co-creation is not a quick strategy to increase your research output, but a long-term commitment to identifying important research questions and building the relationships necessary to answer them.

When done well, co-creation has the potential to improve the relevance and impact of research, foster greater collaboration and understanding between researchers and practitioners, and ultimately contribute to positive change in the aid sector. But it is time-consuming and requires patience, careful planning, regularly questioning one’s assumptions, and continuous communication.

We believe that the investment of adequate time up front has been worthwhile, greatly enhancing our understanding of the power dynamics in aid today and enabling us to ask (and answer) cutting-edge research questions. It has also given us the connections necessary to conduct research on these dynamics, ensuring that our research authentically reflects the views shared by African stakeholders and is relevant to aid policymakers and practitioners globally.

To learn more about RIPIL, visit https://bridgingthegapproject.org/ripil/.

Bridging the Gap to Nowhere?

Francis Gavin recently declared that “the gap” between policymaking and academic research “has been bridged!” As evidence of International Relations’ newfound influence on the making of U.S. national-security policy, Dr. Gavin points to a handful of scholars who, having crossed the Gap on their own two feet, now occupy prominent government positions. This underscores, he argues, that programs like the International Policy Scholars Consortium and Network and the Nuclear Studies Research Initiative—as well as his own Texas National Security Review—have successfully constructed a communications channel between academia and policymakers.

This is a common line of argument among “gap-bridgers”—scholars who emphasize the importance of forging close connections between the academic study of international relations and the world of policymaking. The Bridging the Gap team’s recent, thorough review provides a good example. It focuses on opportunities and challenges for connecting academic knowledge to policymakers, while also celebrating the influence of a number of scholars on the policy landscape.

The gap-bridgers do, indeed, have much to celebrate. But I worry that the bridges that they aim to construct and maintain suffer from some critical design flaws. Chief among them: they are made by academics for academics. The engineers ask little of the policymakers on the other side.

Bridge-building must move beyond catering to policymakers. Academics bridge no gaps if they, for example, content themselves with providing validation for policymakers’ existing beliefs. It means little if someone with an academic background occupies an influential policy position but performs the role no differently than any other appointee would.

The problem is that gap-bridgers tend to start with the wrong set of questions. Instead of asking “What do policymakers want from us?” we need to ask questions like “What should policymakers want from us?” and “What do we want from policymakers?”

The Gap is More Than Knowledge: It’s Epistemic

Perhaps no scholar did more to advance the conversation about the divide between academia and policymaking than Alexander George. In Bridging the Gap: Theory and Practice in Foreign Policy, he identified three types of policy-relevant knowledge: abstract conceptual models of foreign policy strategies, generic knowledge (that is, empirical laws and causal patterns), and actor-specific behavioral models.

Other scholars have built upon George’s arguments. Dan Reiter wrote that scholars help policymakers know their tools. Michael Horowitz offered four measures of policy-relevant knowledge: policy significance, accessibility, actionability, and agenda-setting impact on the public debate.

None of this knowledge matters, however, if government officials ignore it. Henry Kissinger, perhaps the most (in)famous academic-turned-policymaker, noted that policymakers have no time to study while on the job; they can only bring to the table what they learned before entering the policy arena.

“Even the most highly developed general knowledge of a strategy cannot substitute for competent policy analysis within the government,” Alexander George wrote. But participants in the policy process suggest that, in recent years, it has gone from bad to worse. A report from two influential officials called the State Department’s clearance process “hell.” Putatively serious analysis in foreign policy occurs infrequently, and much of what does emerge is ad hoc rather than rigorous and systematic. It almost never includes consultations with academics or draws from scholarly research.

Even if policymakers do engage with academic work, there’s no guarantee that officials will understand it. Preparation for careers in foreign policy rarely emphasizes training in methods and methodology. Surveys of policymakers demonstrate their deep distrust of social science methods.

Many, as I have learned in my frequent interactions with policymakers, are downright hostile to them.

Gap-bridgers must pay more attention to the degree that the foreign policy community simply dismisses much of the knowledge produced by social scientists. “It’s almost impossible to quantify what we do, and in fact, I think that there’s a great danger in trying,” said an influential former Ambassador at a recent event on the State Department’s Congressionally-mandated Learning Agenda. A high-ranking official concurred, “Diplomacy is an art, not a science.”

When policymakers extol the art of foreign policy, they are advancing a theory of knowledge—that is, they are taking a position on epistemology that ex ante rejects a lot of what we do in social-scientific research.

This is not, at heart, a matter of ‘qualitative versus quantitative methods,’ or ‘constructivism versus rationalism.’ Rather, their stance entails a rejection of the usefulness of systematic method altogether. It is a claim that the most important tools for policymaking are rooted in the ‘gut instincts’ and idiosyncratic beliefs of professional policymakers—and that the only ones qualified to assess foreign-policy decisions are, naturally, those who share the necessary experience to develop comparable instincts and hunches.

The epistemology of the U.S. foreign-policy community bears a strong resemblance to what scholar Robert A. Kagan termed “adversarial legalism.” Instead of prioritizing solutions aimed at achieving policy objectives, the policy process weighs legal and political risks between entrenched bureaucratic interests. The “right policy” is whatever emerges from that process. The “top expert” is the official who has achieved decision-making authority.

Social-scientific epistemology is very different. Its goal is to produce the “right” answers. Even if one believes—as many academics do—that we will probably never know the exact “truth” or provide the “best” answers, social-scientific training emphasizes self-consciousness and transparency when it comes to epistemic choices.

The defining features of academia—the practices of citations, peer-review, hypothesis testing, university training and certification, a focus on methodology, and so on—aim to facilitate intellectual progress. In principle, scholarly authority derives from the quality of scholarship, not the other way around.

Efforts to “bridge the Gap” need to better wrestle with these differences.

Building a Better Bridge

Bridge-builders often encourage scholars to ‘hide’ their methods and evidence when speaking with policymakers. But this renders even the best scholarship indistinguishable from opinions, guesses, and even misinformation.

It is this state of affairs, not the nature of social science itself, that makes “lab leaks” from social science so dangerous.

At the very least, academics should avoid validating the anti-scientific views of many policymakers who dismiss social science as irrelevant.

Alexander George understood the challenge. In Bridging the Gap, he explained:

Quite obviously, substantive knowledge of foreign affairs can have no impact on policy unless it enters into the process of policymaking. Substantive knowledge must combine with the effective structuring and management of the policymaking process in order to improve the analytic (versus the political) component of policymaking.

But George never answers the question of how policy analysis within the government should work. One of the most important tasks of bridge-building involves providing answers.

At the very least, scholars need to use their privileged position to hold policymakers accountable for making decisions that violate basic scientific norms. They must speak up when officials subordinate hard-won substantive knowledge to intuitive judgment and parochial political considerations. In the words of the longest-serving member of Congress in history, John Dingell, “If I let you write the substance and you let me write the procedure, I’ll screw you every time.”

Improving Policymakers’ Epistemology

Dominant methods of policy analysis and decision making are badly outdated, if not outright anti-scientific. Rather than expecting scholars to conform to policymakers’ ‘ways of seeing,’ scholars should push policymakers to think more like scientists.

If social scientists believe their work has value, then they (necessarily) believe in the value of their methods and epistemological beliefs.

Scholars have the tools and training to help improve every stage of the policy process. An improved foreign policy epistemology must:

  • Provide support for policymakers to research the big questions at the heart of their work;
  • Focus on getting policymakers to prioritize the accumulation of knowledge as a way of constructing an organizational culture capable of learning and evolution;
  • Encourage policymakers to think in terms of hypothesis testing—that is, investing in policy interventions that show the most promise while dispensing with those that repeatedly fail;
  • Make the case that the policy world should more frequently emulate aspects of the peer review process, with its emphasis on transparency and constructive critique.

In short, a true “bridge” between scholarship and foreign-policy making should be constructed around the evidence-based policy movement and other efforts to improve the effectiveness of foreign policy.

The U.S. foreign policy community can learn a great deal from those other sectors of government that foster a much closer connection between research and practice, including public health, economic policy, and education. Even within the national security community, some agencies do a better job than others. Evidence-based methods play a larger role in international development, the intelligence community, and the Department of Defense than in the State Department or the National Security Council.

The good news is that bureaucratic footholds are emerging for scholars interested in advancing more scientific foreign policymaking. The Department of State recently launched its Learning Agenda, which Congress mandated in the Foundations for Evidence-Based Policymaking Act of 2018. The Global Fragility Act is prioritizing evidence and learning to reform the way the US government prevents and responds to conflict. Both the Departments of Defense and State are making high-profile investments in analytics and data.

Ultimately, the responsibility of building a bridge does not lie solely with academics. Those of us who care about the quality of foreign policy must help policymakers close the Gap between research and practice. Neither academics nor policymakers have all the answers. But Americans – and billions around the globe affected by our decisions – deserve the best possible foreign policy.

LIVE recording of Whiskey & IR Theory at BISA 2023

Don’t miss the live recording of episodes 32 and 33 of Whiskey & IR Theory on June 21, 2023, starting at 3pm. We’ll be taping at the BISA annual conference. Rumors suggest that there may be whisky for tasting and schwag for… something.

Episode 32 will be in “classic format.” We’ll discuss Robert Cox’s classic 1981 article, “Social Forces, States and World Orders: Beyond International Relations Theory.”

Episode 33 will be a “whiskey optional” on status and international-relations theory.

BISA attendees should register in advance for one or both of the special sessions.

Deterrence can never fail, it can only be failed

The government of a country makes explicit or implicit threats to another: “if you cross this line, we will inflict harm upon you.” The threat fails; the government crosses the designated line. Has deterrence failed?

Well, yes. Of course. By definition. It is, for example, unequivocally true that the United States did not deter Russia from invading Georgia in 2008, nor Ukraine in 2014, nor Ukraine (again) in 2022. Should you have any doubts about this, you can always go read a nearly four-thousand word Foreign Policy article on the subject.

I agree with its authors, Liam Collins and Frank Sobchak, that U.S. policymakers made a number of mistakes in handling Russia. Trump’s rhetoric concerning NATO, Russia, and Ukraine did not exactly help make U.S. deterrence credible; then again, Trump wasn’t in office when Putin ordered the invasion. In retrospect, Obama’s decision to withhold lethal aid from Ukraine was probably a mistake, as not much seemed to happen when the Trump administration reversed course. But do we really think that providing more Javelins in 2015 or 2016 would have deterred Putin’s invasion?

Apparently, yes. For Collins and Sobchak, Washington’s failure to deter Russia means that U.S. policymakers should, ipso facto, have adopted a more hardline policy toward Russia. But much like the opposite claim—that Georgia and Ukraine “prove” that the U.S. should have adopted a more accommodating approach toward Russia, for example, by not expanding NATO—we’re looking at reasoning that is less “ipso facto” than “post hoc ergo propter hoc.”

That is, just because X preceded Y does not mean X caused Y. In the context of policy analysis we might add that just because Y is bad doesn’t mean Y’ would be better.

Sometimes, X isn’t even X. The fact that ‘deterrence failed’ doesn’t imply that any attempt to accommodate Russia was a capitulation to Moscow. Sometimes the opposite is true.

For instance, Collins and Sobchak argue that Ukraine shows the folly of Obama’s decision to cancel the “Third Site” anti-ballistic missile system, which involved placing a radar in the Czech Republic and interceptors in Poland.

But the Obama administration replaced the “Third Site” with the European Phased Adaptive Approach (EPAA), which (as the Russians soon figured out) was easier for the United States to upgrade into the kind of system Moscow worried about. EPAA also entailed eventual deployments in Romania; Obama committed to stationing Patriots on Polish territory and “left open the door to stationing new types of missile defense interceptors in Poland, an offer the Poles later agreed to accept.” Moreover, at the Wales NATO summit Obama convinced NATO to affirm that missile defense was part of its collective mission.

Given all of this, it seems bizarre to claim, as Richard Miniter did in 2014, that after “Obama delayed deployment of missile defenses in Eastern Europe, Putin knew he had a free hand to reassemble the old Soviet Union piece-by-piece. Invading his neighbors would now be cost free.”

Now, Collins and Sobchak don’t write anything quite so ridiculous. But they sometimes come within striking distance.

Consider the very opening of the article, which discusses the U.S. response to the Russia-Georgia war:

Recall the aftermath of the 2008 invasion of Georgia. The Bush administration airlifted Georgian soldiers serving in Iraq back to Georgia to fight, provided a humanitarian aid package, and offered tersely worded denouncements and demarches. But it categorically rejected providing Georgia with serious military assistance in the form of anti-tank missiles and air defense missiles and even refrained from implementing punishing economic sanctions against Russia. The United States’ lack of resolve to punish Russia for its gross violation of international law was underscored when U.S. National Security Advisor Stephen Hadley’s remark “Are we prepared to go to war with Russia over Georgia?”—made during a National Security Council meeting after the war started—was later released to the media.

Keep in mind that they’re talking about an effort to provide anti-tank missiles and air-defense systems during a war that lasted five days—one in which Russia systematically annihilated the shiny systems that the United States and its partners had previously provided. If the argument is that the United States should have given Georgia anti-tank weapons or air-defense missiles after the conflict, then (while that might have been a good idea) it’s not clear to me how that would’ve signaled U.S. resolve.

(Stephen Hadley’s remark first appeared, if I remember correctly, in Ron Asmus’ book about the Georgia war. So the passive voice is definitely doing some work here. At the time, Hadley refused to comment on the specific quotation but did confirm that the Bush administration decided that the risks of using force outweighed the benefits. This “revelation” shouldn’t have surprised anyone, including Moscow, since, you know, the United States did not, in fact, use force. What’s particularly strange about this example is that it’s backwards. What surprised people was the extent of support within the administration for a more aggressive response. The headline of the Politico article that I linked to above wasn’t “The United States didn’t risk war for Georgia.” It was “U.S. pondered military use in Georgia.”)

It is not obvious that the United States could have secured support for, say, more punishing sanctions. The Georgia War did not deter France from closing a deal to sell two Mistral-class helicopter carriers to Russia. Paris only cancelled that sale after the 2014 invasion of Ukraine, when Hollande (rather than Sarkozy) was president (interesting side note here).

But, as is typical for this genre, the article never seriously considers either the viability or the downside risks of alternative policies. This is… problematic… given that it is very difficult to assess what the world would look like after fifteen years of concatenating changes produced by different policy decisions.

None of this means that we shouldn’t evaluate past policies and work through counterfactuals. That’s a crucial element of policy analysis, social-scientific inquiry, and policymaking. But Collins and Sobchak, like too many others, don’t even do the bare minimum—in their case, despite writing a piece that runs as long as a short academic article in International Relations.

That failure is particularly pernicious when an obviously “bad outcome” makes such questions easy to gloss over. In fact, the last sentence of Collins and Sobchak’s article gives the game away:

The sad irony is that U.S. leaders, of both parties, chose to avoid deterrence for fear of escalating conflict—only to find themselves continually escalating their support once conflict started. Time after time, the United States chose the option that was perceived as the least provocative but that instead led to the Russians becoming convinced that they were safe to carry out the most provocative action of all: a full-scale invasion of Ukraine.

The United States ignored the eternal wisdom of the Latin phrase Si vis pacem, para bellum (“If you want peace, prepare for war”) and instead hoped that half-steps and compromise would suffice. While so far those decisions have prevented direct conflict between two nuclear-armed superpowers, they have caused Russia and the West to be locked in a continuing series of escalations with an increasing danger of a miscalculation that could lead to exactly that scenario.

Episode 27: Everything is Relational

It’s a nostalgia episode for our two hosts, Patrick and Dan. 

They tackle Mustafa Emirbayer’s 1997 article in the American Journal of Sociology, “Manifesto for a Relational Sociology.” According to Emirbayer, “Sociologists today are faced with a fundamental dilemma: whether to conceive of the social world as consisting primarily in substances or processes, in static ‘things’ or in dynamic, unfolding relations.” 

Was that also true of International Relations? PTJ and Dan certainly thought so back in 1999. 
Is it still true today? The two may or may not answer this question, but they do work through Emirbayer’s article in no little detail.

Additional works alluded to in this podcast include Bhaskar, A Realist Theory of Science (1975); Emirbayer and Goodwin, “Network Analysis, Culture, and the Problem of Agency” (1994); Emirbayer and Mische, “What Is Agency?” (1998); Mann, The Sources of Social Power, Volume II (1993); Pratt, “From Norms to Normative Configurations: A Pragmatist and Relational Approach to Theorizing Normativity in IR” (2020); Somers, “The Narrative Constitution of Identity: A Relational and Network Approach” (1994); Tilly, Durable Inequality (1998); and Wiener, Contestation and Constitution of Norms in Global International Relations (2018).

https://www.podomatic.com/podcasts/whiskeyindiaromeo/episodes/2023-01-29T14_48_01-08_00