This post is adapted from a talk I gave at the Forum for Justice and Opportunity organized by Episcopal Community Services in Philadelphia.

Why look at data?

Why do we look at data? In my mind, the answer is to make decisions. Research suggests we make thousands of decisions every day, and we want information to guide those decisions. In particular, we want to make different decisions. Usually we want to make things better, to grow and to improve, and we recognize that doing so will require us to make different decisions than the ones we have been making. Data leads to improvement through decision-making.

Frequently, however, we measure things but don’t make different decisions that lead to improvement. Take perhaps the most famous example of this: inequity in educational outcomes across racial/ethnic groups. The chart below is from a 2017 report by NCES and shows college enrollment for 18- to 24-year-olds by race/ethnicity from 1990 through 2015.

It’s clear that, while there may have been a slight upward trend in college enrollment overall, the gaps between racial and ethnic groups have remained basically the same. Even more stark is this quote from the report:

At grade 12, the White-Black achievement gap in reading was larger in 2015 (30 points) than in 1992 (24 points), while the White-Hispanic reading achievement gap in 2015 (20 points) was not measurably different from the gap in 1992.

Jerry Muller, in his book The Tyranny of Metrics, takes on this topic. He says:

Such outcomes might lead one to conclude that the achievement gap cannot in fact be closed by education and that the reasons lie beyond the schoolhouse door. Yet measurement continues unabated. That is perhaps because … the idea that some problems are insoluble is morally unacceptable to a substantial portion of Americans. When it comes to gaps in school achievement, it seems that in the absence of discernible progress in results, the resources devoted to ongoing measurement becomes itself a sign of moral earnestness. … Not everything that can be measured can be improved - at least not by measurement.

We have focused so much on measuring the problem, yet the decisions we’ve made based on this data haven’t led to any improvement.

Technical vs. adaptive problems

The Harvard professor Ronald Heifetz, whose work focuses on organizational leadership, makes a distinction between technical and adaptive problems. Technical problems are those in which the problem itself is clear and the solution comes from experts. A leaky faucet is a technical problem: you know what the issue is, and you just need to call a plumber. Adaptive problems are those in which “the problem definition requires new kinds of thinking” and the solution will come from the group rather than from experts. A home remodel is an adaptive problem: a family has to think hard about what the problem with their current home is and how different options will lead to a solution within their budget.

For a basic data-based example, take a personal monthly budget like the one (which I made up) below:

This person is right on or under every budget item except for entertainment: things like going out to eat, getting drinks, or going to the movies. If we approach this as a technical problem, the solution is clear: stop going out to eat! However, if we add additional context, the solution becomes less clear. For example, imagine this person just moved to a new city and doesn’t have any family or friends there. Her co-workers go out for drinks a few times a week, and this is pretty much her only social outlet. I haven’t changed the problem at all, just the context surrounding it, and now we have to think differently about a solution. It requires new kinds of thinking and hard (internal) conversations about priorities and resources. It might require her to be in the red for a few months until her community is established.
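If it helps to see the comparison spelled out, below is a minimal sketch in Python of the kind of budget I have in mind; the numbers are hypothetical, invented purely for illustration rather than taken from the chart above. Read strictly as a technical problem, the data points straight at the entertainment line.

# Hypothetical numbers, invented only for this sketch:
# every category is at or under budget except entertainment.
budget = {"rent": 1200, "groceries": 400, "transportation": 150,
          "utilities": 100, "entertainment": 200}
actual = {"rent": 1200, "groceries": 380, "transportation": 140,
          "utilities": 95, "entertainment": 310}

for category, planned in budget.items():
    diff = actual[category] - planned  # positive means overspent
    status = f"over by ${diff}" if diff > 0 else "at or under budget"
    print(f"{category:15} planned ${planned:>5}  actual ${actual[category]:>5}  {status}")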

I would posit an addition to Heifetz’s technical/adaptive distinction. Technical problems are those in which a solution emerges from the data. Adaptive problems are those in which the data doesn’t necessarily point to a clear solution.

Too often in social service and mission-driven organizations, we approach data as if it will hand us a technical solution, when in fact our problems are adaptive.

Building shared understandings

In my experience, we often skip the important intermediate step of building shared understandings about our work. We need to have strong foundations in order for data to truly impact decisions.

I was working with an educational training program whose graduation rate was about 30%. No matter how you sliced the data (by subgroup, cohort, etc.), about 30% of participants completed the program. Everyone had different ideas both about what was causing this and about what could be done to improve this key outcome. However, when we dug deeper, we found that everyone also had a different perspective on what the program should be doing. Even more striking, everyone had different ideas about what was actually happening. Without a shared understanding of the goals and activities of the program, and a better picture of current implementation, it’s not possible to come to agreement about what we should do differently that will lead to improvement. Without a strong foundation, new ideas are spaghetti thrown against the wall, and the one chosen is usually thrown by the HiPPO (the highest-paid person’s opinion).

Theories of change

One approach to establishing strong foundations is to develop a theory of change. The document and the process for creating it can take many forms, but the basic approach is to start with the change you are seeking and, working backwards, detail specifically how you plan to go about achieving that change.

Below is an example theory of change from a UK-based research program for alleviating poverty in a developing country:

I use this example intentionally because the work it describes is complex. It is not easily described by a few bullet points; if it is, you probably need to think hard about what you’re missing. You can see that there is a general upward trend that ends with the ultimate change they seek, but it is not completely linear; there are a number of places where the work is cyclical. They also have a number of call-out boxes that describe things like assumptions and success criteria that are important to articulate clearly.

Key questions

Below I offer some key questions that organizations embarking on the process of building shared understandings and documenting their theories of change can begin with. This list is not meant to be exhaustive or authoritative. In my mind, there is not one best process or tool for developing a theory of change. The most important thing is to ask challenging questions and document your conclusions. When it comes time to make difficult decisions, you should have something you can turn to as a guide.

What specifically are we trying to change?

Get really clear on the key outcomes, but also on the problem you are seeking to solve. Challenge yourself to get as specific as possible. The discussion will bring out differences in thinking that are important to acknowledge.

For whom?

We can design the best program in the world, but if it’s not offered to the people for whom it was designed, it’s unlikely we’ll see progress on our key outcomes, even if it is successful with those who show up. If you create an afterschool program to improve school attendance, but only kids who already come to school every day show up to your program, the school won’t experience a measurable increase in attendance. Figure out exactly who you are designing for.

What are the root causes?

Your initiative is unlikely to be successful if it’s not directly addressing, or at least acknowledging, the root causes of the challenge you want to solve. Articulating root causes requires new learning, through reading research and looking at data, combined with local expertise from those directly affected. Be sure to get a healthy dose of both.

What assumptions are we making?

Too often we need dozens of things to fall into place for our program to work, and yet we just assume they will. What needs to happen for your work, and your participants, to be successful? Why is the problem we are addressing even considered a problem? What does the research say, and how have people who have done this before been successful?

What are some leading indicators?

Frequently, evaluation work is treated as an autopsy, examining something that has already happened. It’s really hard for data to impact decisions if you’ve already made the decisions. Put on paper some ideas about things that will tell you whether or not you are on the right track. This gives you an opportunity to course-correct. Say you want to provide recycling bins to increase recycling in a neighborhood. After you hand out the bins, interview or survey folks to see if they think they recycled more after getting the bin. It’s not definitive proof, but it gives you some idea about whether you are on the right track.

Resources and conclusion

The website www.theoryofchange.org has some good resources and examples for those who want to learn more.

I also like this tool for developing a theory of change with a specific racial-equity focus.

The book The Tyranny of Metrics gives plenty of examples of how data use can go wrong, as well as ideas for how to use data more responsibly and effectively.

Finally, don’t get too focused on one specific tool or on creating a flashy thing you can put in your annual report. If you want data to impact decisions, the most important thing is to have a strong foundation built on shared understandings, and to have these ideas documented so that they can be returned to regularly.

Thoughts? What did I miss? Experience in your work? Feel free to share in the comments.