If the goal of policy communications is to tell our audience how we will create a more equitable and prosperous society, then where do we begin?

If you ask professional writers, they will tell you to find out everything there is to know about your setting and characters and, more than anything else, to tell an interesting story. You may not know whether your main character missed the bus this morning or what the rain smells like on a hot summer day in their hometown, but there is a wealth of things you can find out if you ask the right questions. Input will always be greater than output in this process: your job is to filter and refine the answers for your audience, filling in the areas where their knowledge is thin and building on the areas where it is strong.

Together, the three policy narratives will help you construct a clear and convincing story of the policy problem for your reader, though depending on your reader’s needs, some narratives may be shorter, longer, or simply unnecessary. Polishing these narratives will engage your audience by extending their current understanding and illuminating any areas of uncertainty.

Descriptive Questions

What do the data tell us is happening? What did you find the policy problem to be?

Evaluative Questions

Which policy option, if any, has been shown to be effective at addressing the policy problem?

Prescriptive Questions

What policy option should be implemented to address the policy problem? Why?

Descriptive Narratives

Descriptive narratives are exactly as the name implies: you, the policy expert, are tasked with providing your reader a comprehensive account of the problem, including how it came to be and who it affects. In your rigorous analysis, you will dig deep beneath the surface of the perceived problem to uncover the actual problem, tracing it from its roots to its various downstream effects and undesirable outcomes.

Comprehensive? Rigorous? That sounds exhausting, right? Well, it doesn’t have to be. Chances are your reader will already have some knowledge about the problem, but depending on the depth of this knowledge, you will need to adjust your description to best suit their needs. Let’s take a look at our first case study…

To what extent are women represented on US corporate boards of directors?

This is clearly a “What’s happening?” question, which means it needs a descriptive answer. Put simply, we’re going to describe what is happening with the representation of women on US corporate boards—nothing more.

Over the last decade, women have increased their representation on US corporate boards to about 16 percent.

Asked and answered. Our work here is done, right? Actually, no; this is where the fun starts. Once we have an answer to the question, we need to think about what follow-up questions our audience will likely have for us. If we want to tell a persuasive policy story, we’re going to need answers to questions like these as well:

  • What is supposed to be happening? Is 16 percent too low? Too high? Just right?
  • What factors may affect the number of women who sit on US corporate boards?
  • What barriers, if any, may be preventing more women from serving?
  • What policy options, if any, could be implemented to increase the representation of women on US corporate boards?
  • What might happen if nothing is done to address what’s happening?

Evaluative Narratives

To determine what is supposed to be happening, we need criteria. It’s not enough to point out an issue and expect your audience to agree that what you’ve found is problematic. To determine what should be happening, we can look to laws, regulations, contracts, grant agreements, internal control measures, industry best practices, expected performance measures, clearly defined business practices, and benchmarks—basically anything against which we can compare what’s happening.

Other times, when there may not be accepted criteria to use, we can look to our audience for cues about what should be happening. If, for example, our audience is private sector executives, members of corporate boards, and company shareholders, we may want to ask what their goals are. What do they care about? Perhaps the goal should be to ensure that women hold the same percentage of corporate board seats as they represent in the overall population: about 51 percent. Or perhaps the goal should be to ensure that women hold the same percentage of corporate board seats as they represent in the respective company’s customer base. Target’s shopper base, for example, is 60 to 63 percent women, on average, so perhaps 60 percent of Target’s corporate board should also be women (currently, women make up 36 percent of its board).

Or perhaps the audience cares less about “optics” and “political acceptability” and cares more about effectiveness and financial performance. In that case, they may be interested to know that some research shows that having a broader range of perspectives represented on a diverse corporate board results in better decisions because the board members need to work harder to reach consensus. Other research has found that boards with more women have a positive impact on their company’s financial performance. Your audience may be interested in taking steps to increase short-term profits, or they may be more concerned with ensuring long-term sustainability. It’s our job to understand what our audience wants and needs. Once we know what they want and need, we can tailor the policy answers we provide to help our audience solve a problem and achieve their goals.

How effective is the Choose to Change program in reducing arrests for violent crime among the program’s participants?

Any time you’re trying to determine how effective something has been, you’re going to need to give an evaluative answer that tells the reader what works, what doesn’t, and why.

Those who participated in Choose to Change had 48 percent fewer arrests for violent crimes than their peers who did not take part in it.

In 2015, two Chicago nonprofit groups developed Choose to Change to help prevent youth violence in the city. Choose to Change is a six-month intervention that connects participants with mentors who use trauma-informed cognitive behavioral therapy to help young people process trauma and develop a new set of decision-making skills.

To determine how effective the program was, the University of Chicago’s Crime and Education Labs designed a randomized controlled trial to evaluate the program’s impact on academic engagement and justice system involvement by comparing those who were selected by lottery to participate in the program with those who were not. In addition to showing a substantial reduction in the number of arrests for violent crime, the trial’s preliminary results showed that participants in the program

  • were 39 percent less likely to have been arrested for any offense compared to the control group,
  • attended an additional seven days of school in the year after the program began (a 6 percent increase), and
  • had 32 percent fewer misconduct incidents in school compared to their peers who did not take part.

Readers won’t be satisfied with this answer, however promising it seems to be. They may understandably have follow-up questions:

  • How do we know the positive result is attributable to the program and not to some other variable or intervention?
  • Why was the program so successful?
  • How long after the program ends can we expect the benefits to last?
  • Could Choose to Change work on a larger scale? Or in a different city?

All good questions, right? Any reasonable person trying to figure out how to reduce violence among Chicago’s young people will surely want to know the answers to these questions and more. Our job, in turn, is to figure out what kinds of follow-up questions readers will probably have and to answer those too. If we do that, our policy story will be that much more responsive to readers and their needs, which will make it that much more convincing.

Luckily, we know many of the answers to the questions above. We know that the positive results can be attributed to the program because the researchers set up the gold standard of evaluation: a randomized controlled trial. Doing so allowed them to account for other differences that may have affected the outcomes being evaluated. Because they randomly assigned the young people to one group or the other, whatever differences existed among them should have balanced out, as long as there were enough young people who participated. The researchers were confident enough to conclude that the program caused the differences they had observed in the outcomes evaluated. This methodology isn’t perfect, of course. There is no such thing as perfection in policy analysis. But it’s the best tool we’ve got.
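The logic of random assignment can be illustrated with a toy simulation. The sketch below is purely hypothetical: the synthetic “risk” trait, arrest rates, and effect size are illustrative assumptions, not the Crime and Education Labs’ actual data. It shows why a lottery lets us attribute the difference in outcomes to the program: the unobserved confounder averages out across both groups.

```python
import random

random.seed(42)

# Illustrative sketch only: synthetic numbers, not the actual study data.
# Each youth carries an unobserved "risk" trait that drives arrests
# regardless of the program.
def simulate_trial(n=10_000, program_effect=-0.48):
    treated, control = [], []
    for _ in range(n):
        risk = random.uniform(0.5, 1.5)   # hidden confounder
        baseline = 2.0 * risk             # expected arrests without the program
        if random.random() < 0.5:         # the lottery: random assignment
            treated.append(baseline * (1 + program_effect))
        else:
            control.append(baseline)
    mean = lambda xs: sum(xs) / len(xs)
    # Relative difference in mean arrests, treated versus control
    return (mean(treated) - mean(control)) / mean(control)

print(f"Estimated program effect: {simulate_trial():.0%}")
```

Because the coin flip balances risk across the two groups, the estimate lands near the true effect even though we never measured risk directly. With enough participants, whatever differences exist among them wash out, which is exactly the property the researchers relied on.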

As for why the program was so successful, the secret seems to lie in positive effects from cognitive behavioral therapy. “The keys to the success of [Choose to Change] are teaching youth the cognitive behavioral skills needed to stop, think, and choose while providing a supportive mentoring relationship to practice these skills with the mentor and with other youth in the program,” said Amanda Whitlock, senior vice president of behavioral health at Children’s Home & Aid.

But how long can we expect these positive effects to persist? “Typically, what we see with many evaluations of adolescent programming is that the positive benefits diminish as soon as programming ends,” said Nour Abdul-Razzak, a postdoctoral fellow at the Crime and Education Labs. Not so with Choose to Change. Researchers found in their follow-up, which occurred two years after the first cohort had finished the program, that many of the positive outcomes had not diminished. “The fact that the impacts of [Choose to Change],” Abdul-Razzak said, “last even after the program ends is very encouraging, and speaks to the ability of the program to support a safer and brighter future for young people.”

Based on the promising preliminary outcomes published by the Crime and Education Labs, the City of Chicago and Chicago Public Schools expanded Choose to Change to serve even more young people. On February 21, 2020, Chicago’s mayor announced that the city would fund an expansion of the program’s reach over three years. “Our children are the future of Chicago and as a City, we have a fundamental obligation to ensure young people who are involved in gun violence have the resources and supports they need to get back on the right path, pursue their dreams and live a life free from violence,” said Mayor Lightfoot. “That is why through our landmark multi-year expansion of Choose to Change, we are not only investing in these young people, we are transforming their lives and shaping Chicago’s future for the better.”

Could Choose to Change work in other cities, such as New York or Los Angeles? It’s possible. The only way to find out for sure is to pilot the program, collect relevant data, analyze the data, and make adjustments to the program as necessary.

Prescriptive Narratives

Whenever you’re thinking about what could or should be done next to solve a policy problem, you’re going to need a prescriptive policy recommendation that tells the reader what needs to be done and why.

According to the Generally Accepted Government Auditing Standards (GAGAS), issued by the US Government Accountability Office, audit reports have two main objectives: first, to communicate the audit findings effectively to the responsible parties, including those charged with governance, the relevant officials of the audited entity, and oversight officials; and second, to provide a mechanism for monitoring and verifying whether the recommended corrective actions have been taken. In other words, audit reports aim to convey results clearly to the relevant parties and to ensure that the necessary steps are taken to address any issues identified during the audit.

Prescriptive policy narratives should be evidence-based and grounded in the established criteria—laws, regulations, and industry best practices—uncovered in your evaluation of the policy problem. The resulting recommendation should clearly communicate the findings to the relevant parties, with the ultimate goal of facilitating corrective actions when necessary. By basing recommendations on evidence and established criteria, we can ensure that policy decisions are grounded in reality and have the greatest chance of success.

How could the Pension Benefit Guaranty Corporation better ensure the long-term sustainability of its business model and protect the retirement security of American workers?
  • The Pension Benefit Guaranty Corporation should revise how it charges premiums to better reflect the risk posed by private companies’ pension plans.

I don’t expect any of you to have heard of the Pension Benefit Guaranty Corporation (PBGC). I had never heard of it until I was asked in 2011 to assist a team of researchers in writing a report that would help the officials who oversaw PBGC’s single-employer insurance program pull themselves out of a $26 billion hole.

PBGC was established in 1974 under the Employee Retirement Income Security Act, and its mission from the start has been to protect the pension benefits of American private sector workers by charging insurance premiums to companies that offer pension benefits to their employees. If a company were to go out of business, PBGC would take over the company’s pension plan and use the money it had collected to continue paying the benefits. By 2011, however, PBGC was in dire financial straits. That year PBGC collected $2.1 billion in premiums to insure the benefits of 44 million workers, retirees, and beneficiaries, but PBGC paid out $5.5 billion in benefits. It doesn’t take a math whiz to figure out PBGC had a real problem on its hands.

That’s what was happening. PBGC was hemorrhaging money, with no relief in sight. The next question we had to ask was why. Why was PBGC paying out so much more than it was taking in? It turns out that since its inception, the premiums PBGC charged employers had not accurately reflected the risks PBGC insured against, namely, the risk of an employer with an underfunded pension plan filing for bankruptcy and triggering the need for PBGC to take responsibility for paying its benefits. Instead, PBGC generally charged a flat-rate premium based on the number of people covered by an employer’s pension plan.

Why didn’t PBGC adjust the way it factored in risk when determining how much to charge employers? PBGC could, for example, factor in the employer’s overall financial strength. It could also evaluate the employer’s investment strategy. When we asked PBGC that question, the officials who oversaw PBGC said they were afraid they’d put an undue burden on employers if they raised premiums. Think about that for a second. What they were telling us was that they would rather go out of business slowly than risk going out of business more quickly. That was the primary reason they were going out of business and jeopardizing the retirement security of millions of Americans: they were too averse to risk.

To help solve PBGC’s financial problems, we had to figure out what questions PBGC needed answered. After learning about the aversion to raising premiums, we decided that what PBGC’s officials needed to know was whether their fears were warranted. In other words, what would happen if PBGC raised its premiums? Would that cause employers with underfunded pensions to file for bankruptcy and hasten the implosion of PBGC? To answer that important question, some wicked-smart data scientists at the US Government Accountability Office developed a predictive model from a data sample of about 2,700 pension plans to analyze the potential effects of different premium structures. Under one structure, relatively financially healthy employers would pay less while relatively risky employers would pay more.

We then shared the model’s findings with several retirement experts to get their take. They initially echoed PBGC’s concern that higher premiums would lead employers to terminate their pension plans, but they suggested that if PBGC capped premium levels and averaged employers’ pension funding levels over multiple years to reduce volatility, an updated premium structure incorporating relevant risk factors could help PBGC reduce its deficit. Prior analyses conducted by the Government Accountability Office and others had shown that employers file for bankruptcy and terminate their pension plans because of other factors, such as the size of the employer, whether its employees could collectively bargain, and the overall costs of the employer’s pension plan; those factors mattered more than the cost of premiums. Unless PBGC incorporated relevant risk factors when determining how much it charged in premiums, we concluded, it risked destroying the retirement security of millions of Americans.
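To make the contrast with a flat-rate premium concrete, here is a minimal sketch of what a risk-based structure could look like, incorporating the experts’ two suggestions: a cap on premium levels and multi-year averaging of funding levels. Every name, rate, and formula below is a hypothetical illustration, not PBGC’s actual methodology or the GAO model.

```python
# Hypothetical illustration of a risk-based premium structure; the
# rates, names, and formula are assumptions, not PBGC's actual method.
def risk_based_premium(participants, funding_ratios, financially_healthy,
                       flat_rate=35.0, risk_rate=40.0, cap_multiplier=3.0):
    """Per-plan premium: a flat base plus a surcharge that grows with
    underfunding, smoothed over several years and capped."""
    base = flat_rate * participants
    # Average funding over multiple years to reduce volatility.
    avg_funding = sum(funding_ratios) / len(funding_ratios)
    underfunding = max(0.0, 1.0 - avg_funding)   # zero if fully funded
    surcharge = risk_rate * participants * underfunding
    if financially_healthy:
        surcharge *= 0.5                         # stronger employers pay less
    # Cap the total so small, risky employers aren't overwhelmed.
    return min(base + surcharge, cap_multiplier * base)

# A fully funded plan at a healthy employer pays only the flat base...
print(risk_based_premium(1_000, [1.05, 1.02, 0.98], True))   # 35000.0
# ...while a chronically underfunded plan at a shaky employer pays more.
print(risk_based_premium(1_000, [0.70, 0.65, 0.60], False))
```

The exact functional form matters far less than the principle the experts endorsed: premiums track the insured risk, while the cap and the multi-year averaging keep them predictable for employers.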

As with descriptive and evaluative policy answers, prescriptive policy answers will lead to more questions that need to be answered if you expect your audience to be persuaded by your policy recommendations. Questions like these:

  • How exactly will the prescribed policy recommendation address the identified policy challenge?
  • Why is the prescribed policy recommendation better than alternative steps that could be taken?
  • How will we know if the prescribed policy recommendation is effective in addressing the identified policy challenge?

In July 2012, eight months after the Government Accountability Office had published its report, Congress approved premium increases to better reflect the risk posed to PBGC by certain pension plans. Congress did so again in December 2013 and again in November 2015. PBGC also proposed two policy options to help prevent hardships for small employers with a higher default risk that might result from higher risk-based premiums. First, PBGC wanted to have a phase-in period to allow enough time for employers to improve the funding status of their pension plan and to prepare for the premium increase. Second, PBGC wanted to create a premium cap for smaller companies.

In November 2019, PBGC noted in its annual report for fiscal year 2019 that its single-employer program had a surplus of $8.7 billion. In eight short years, the corporation went from a $26 billion deficit to a nearly $9 billion surplus. The turnaround happened in large part because the corporation had implemented a risk-based premium structure.

We figured out what questions PBGC needed answers to. We answered them. PBGC took our answers and solved a problem. Solving that problem had a positive impact on millions of Americans. In short, we helped change the world.