Focused Framework: A New Approach to Policy Research

A Focused Framework for Community Development
Build, Measure, Learn through Event Mapping
July 2015
Virginia Carlson, DataRubrics

I. Overview

The Focused Framework (Framework) is a measurement tool that supports a paradigm shift in social sector work. Rather than developing products or programs that may meet a need, social sector actors are beginning to understand that it is critical for philanthropy and its partners to continuously explore the causes of the problems they wish to address in order to achieve scalable and enduring change on behalf of low-income people and under-resourced places. Such exploration may result in initiatives that are not necessarily new formal programs, such as aligning existing funding or partnerships, activating existing assets, or changing administrative procedures. There is a growing sense that these latter kinds of initiatives are more likely to have a long-run systemic impact that endures after formal funding has ended.

The Framework is meant to assist an initiative in focusing and developing its approach by stipulating a series of “thinking and measuring steps” that elucidates connections between problems and solutions and forces actors to lay out and test assumptions, hypotheses, and logical connections between interventions and expected outcomes. Connection-testing is done with build/measure/learn activities via event mapping.

Embedded tenets:
• Actors work best when they are open to questioning their own assumptions, engaging in a build/measure/learn feedback culture to explore them
• Much can be learned through lean learning: quick trials, research, and experiments that inform strategy design
• Solution-finding must involve the people whose stories we are trying to change
• The “measures” in build/measure/learn activities can become key metrics for understanding how healthy communities function

II. The Learning Framework

Recognizing that the practice of systems intervention is neither linear nor predetermined, the Learning Framework is nevertheless laid out as a phased process. Approaching the collective table’s thinking in a step-by-step manner forces everyone to confront fuzzy thinking, helps slow the impulse to implement large-scale (as opposed to scalable) projects or programs, and gives sites a tool by which they can monitor their own progress. In brief, the steps are as follows (a minimal code sketch of the loop appears after the list):
• Define Problem
→ articulate the dimensions of the problem to be addressed
• State Goal
→ set a realistic goal
• Identify Key Assumptions and Choose Strategy Areas
→ articulate understanding of the root causes of the problem: “This problem exists because”
→ choose a few strategy areas, based upon assumptions about why the problem exists and where the actor has leverage or experience
• Articulate Hypotheses and Choose Specific Efforts
→ make connections between efforts and possible outcomes: “If we do X, then Y will come about”
• Trial-run Efforts
→ using Build Measure Learn (BML) activities with event mapping, trial-run the chosen “X’s”
• Reflect
→ reflect on learnings from the BML activity with stakeholders
• Act
→ take next steps based on learnings, which may include returning to a prior step, scaling up the effort, or re-running the activity
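
To make these steps concrete, here is a minimal sketch of the loop as a set of Python data structures. Everything here (class names, fields, the decision string) is an illustrative assumption layered onto the Framework's steps, not part of the Framework itself.

    from dataclasses import dataclass, field

    # Illustrative sketch only: class names and fields are assumptions
    # layered onto the Framework's steps, not part of the Framework itself.

    @dataclass
    class Effort:
        """A specific, testable 'X' chosen under a hypothesis."""
        description: str                                # e.g., backpack flyers
        learnings: list = field(default_factory=list)   # filled by BML trials

    @dataclass
    class Hypothesis:
        """An 'if we do X, then Y will come about' connection."""
        statement: str
        efforts: list = field(default_factory=list)     # candidate Efforts

    @dataclass
    class LearningCycle:
        """One pass through the stepped learning process."""
        problem: str          # Define Problem
        goal: str             # State Goal
        assumptions: list     # "This problem exists because..."
        strategy_areas: list  # chosen where the actor has leverage
        hypotheses: list      # each links efforts to expected outcomes

    def reflect_and_act(effort: Effort, learnings: list) -> str:
        """Record BML learnings, then take next steps: return to a prior
        step, scale up the effort, or re-run the activity."""
        effort.learnings.extend(learnings)
        return "return-to-prior-step | scale-up | re-run"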

Example
In order to address the problem of under-resourced immigrant communities, we’ve set a goal to increase family income by 2% within a year. Several assumptions have presented themselves through field research, brainstorming, etc.: immigrant communities are under-resourced because of language barriers, because of a lack of skills, and because families are reluctant or unable to access supplemental income programs such as food stamps and energy assistance. We decide to focus our strategy on income support programs. We hypothesize that if only immigrant families knew about the programs, they would sign up for them.

In order to test this, the specific effort (linked to the hypothesis) we undertake can be summarized as “if we send home information in children’s school backpacks, then their parents will be enticed to access supplemental income programs.” (We might have chosen another effort within this same hypothesis, such as information via telenovelas.)

Yet instead of jumping to a large program that creates materials and targets all public school children, we decide to run a BML activity. We target just one school where we know that 99% of the families qualify for free or reduced-price lunch (so they are likely eligible for supplemental programs), and one supplemental program (say, energy assistance) for the test activity.

We begin by creating a simple event map matrix that marks out the goal, the steps, and the measures for each step. As we run the test, we also collect data that tell us what happened. (See below.)
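
The matrix itself is not reproduced here; as an illustration only, an event map might be structured like the following sketch. The step names and the materials-timing entry are hypothetical; the 25% card-return figure and the two enrollments come from the example that follows.

    # Hypothetical event map for the backpack trial. Step names and the
    # timing entry are invented; the card-return and enrollment figures
    # come from the example in the text.
    event_map = {
        "goal": "~10% of eligible families take up energy assistance",
        "steps": [
            {"step": "develop materials",
             "measure": "materials ready by target date",
             "actual": "expectations on timing not met"},
            {"step": "send materials home in backpacks",
             "measure": "share of families confirming receipt via return card",
             "actual": "about 25% of families returned the card"},
            {"step": "families enroll in energy assistance",
             "measure": "new enrollments reported by the program",
             "actual": "2 new families enrolled"},
        ],
    }

    for row in event_map["steps"]:
        print(f"{row['step']}: measure [{row['measure']}] -> {row['actual']}")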

In this example we learn that our expectations were not met in terms of timing for the development of the materials; that we aren’t sure how many of the families received and read the materials (only 23 of the school’s 118 families, about 20%, sent back the “yes, we saw this material” card); and that the energy assistance program reported only two new families from this school taking up energy assistance.

Reflecting upon the data from this experiment, we might decide several things: 1) sending home materials through school is not a viable option, since we have no way of knowing whether or not families saw the material; but at the same time, 2) of the 23 families who DID report having seen the materials, two took up energy assistance, so we did make our goal in percentage terms (we expected approximately 10% of the 118 families to take up; we got about 10% of the 23 families we know saw the materials). Perhaps we can conclude that once families got the information they did take action, but we are not sure whether the delivery mechanism is the right one.
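
As a quick check of the take-up arithmetic in that reflection, using only the figures from the example:

    # Check of the take-up arithmetic; all figures are from the example above.
    families_in_school = 118
    returned_cards = 23            # families we know saw the materials
    new_enrollments = 2
    goal_rate = 0.10               # expected ~10% take-up

    print(f"card-return rate: {returned_cards / families_in_school:.0%}")              # 19%
    print(f"take-up among confirmed readers: {new_enrollments / returned_cards:.1%}")  # 8.7%
    # 8.7% is roughly the 10% goal, which is why the reflection treats the
    # goal as met among the families we know saw the materials.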

We also get feedback from stakeholders, including the voice of the client, the teachers themselves, the energy assistance program, etc. From them we learn that not all teachers knew what to do with the cards that were returned, and a bundle ended up in the dust bin; that a number of parents believed that by returning the card they’d get more information on the energy program from the school; that a number of parents don’t bother reading any of the flyers/information that comes home in backpacks; that the materials were written in formal Spanish while most of the families speak Latin American Spanish; and so on.

Taking action from these learnings, we are likely to decide that our effort (“send information home in children’s backpacks”) is not a good approach. We don’t throw away the entire hypothesis, which was that if we gave families information about the supplemental income programs, they would sign up for them. Instead we’d perhaps test other efforts: peer learning through CBOs, or infomercials on daytime television. If neither of these worked, we might then conclude that the hypothesis is wrong: even when families know about the programs, they still do not sign up for them.

We also don’t throw out the assumption: that families are reluctant or unable to access supplemental income programs such as food stamps and energy assistance. We’d turn to the other hypotheses suggested by the assumption (e.g., families need a safe space to apply) and test efforts within them. When we find something that works, we then consider whether it really is scalable, and what would have to be realigned to create population-level change.

In this way, the process of discovery and learning repeats itself. As we learn, other assumptions and strategies emerge and are fed back into the “stepped learning process” we have described here.

A note about measuring change and whether we are moving toward longer-term goals. In this example, by involving a local administrator for energy assistance, we have been able to attain our person-level, short-term measure of “whether or not enrolled in a supplemental income program.” What we will want to track in the medium and long term, then, are more aggregate measures related to income assistance and children living in poverty. It is likely that state-run assistance programs release aggregate data on families receiving income assistance, and American Community Survey data will give us the information we need to look at the relationship between assistance take-up and poverty levels.
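
As one illustration of pulling such an aggregate measure, here is a minimal sketch against the Census Bureau’s public ACS 5-year API. The variable (B17001_002E, people below the poverty level) and the Cook County, Illinois geography are assumptions chosen for illustration; a real tracking effort would pick variables and geographies to match its goal.

    import requests

    # Sketch only: the variable and geography choices are illustrative.
    URL = "https://api.census.gov/data/2015/acs/acs5"
    params = {
        "get": "NAME,B17001_002E",   # B17001_002E: people below poverty level
        "for": "county:031",         # Cook County
        "in": "state:17",            # Illinois
    }

    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    header, row = resp.json()        # the API returns a header row plus data rows
    print(dict(zip(header, row)))    # e.g. {'NAME': 'Cook County, Illinois', ...}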


Getting the Digital Goods

Clive Thompson tells me in the April 2011 edition of Wired that government should open its data catalogs in order to foster private-sector investment, businesses, and creativity. He tells the story of BrightScope, which built a multi-million-dollar business by getting the Department of Labor to “[cough] up the digital goods in bulk.” He then goes on to suggest that if private-sector companies lead the way, perhaps activists will get the goods, too.

I’d like to meet Clive Thompson. I’d like to tell him about the technical assistance work I was doing in 1987 for local community organizations in Chicago with digital data on occupations, released on magnetic tape. What happened when the Illinois Department of Employment Security realized that we charged these organizations a small fee to run and help interpret the data? They yanked the digital version, for everyone, and disseminated only hard copies.

So, I was there first, Clive. And I don’t say this only because I’m feeling left out. I’m saying this because I’m at the Association of Public Data Users (“PublicData”) conference at GWU in Washington, DC, bemoaning the drop in funding for the federal statistical agencies, while the O’Reilly Strata Conference is happening in New York, celebrating the era of Big Data, including the energetic Open Government data movement (“OpenGov”). And while it would seem that OpenGov might have the same agenda as PublicData, it doesn’t. And that makes me crabby.

See, the difference is that OpenGov isn’t necessarily interested in statistical data. Statistical data, the raison d’être of PublicData, are gathered in order to understand characteristics of the US population and economy (think the US Census), and mostly by federal statistical agencies (think Census Bureau; Bureau of Labor Statistics). Statistical data have been used for decades for drawing electoral districts, setting public policy and programming, disbursing federal funding, and planning infrastructure investments like highways. (Do I see an OpenGov Yawn?)

OpenGov primarily wants administrative and operational data. Administrative data are gathered as a result of governments administering programs or overseeing regulations: the 401(k) data used by BrightScope, noted above, or EPA data generated as a result of environmental regulations. Operational data are records generated as a result of government going about its own business: the visitor’s log to the White House at the federal level, or 311 calls at the local level. Neither is statistical data, which surveyors and researchers collect through observation and experimentation.

So while the Census Bureau’s budget is set to be slashed by an amount that means the end of the quinquennial Economic Census (http://www.entrepreneurship.org/en/Blogs/Policy-Forum-Blog/2011/September/The-Quality-of-Economic-Statistics-is-About-to-Erode.aspx), Clive Thompson is telling me that “members of the Obama administration intervened” so that BrightScope could have (government) administrative data. So while I won’t be able to use statistical data to help local workforce agencies target job training more effectively by understanding growing and declining economic sectors, private-sector businesses can be built on administrative and operational data.

I know it isn’t this stark. I know from my nonprofit-sector vantage point that the loss of statistical data will perhaps shift me to other activities. I’ll be able to use 311 data to help target food pantry resources, for example, while a private-sector food delivery service might use the same data to beef up deliveries to seniors. And I’m all for increased private-sector economic activity.

But what is true is that I (and others) have been working with nonprofit organizations for decades to wedge better data out of governments, and the federal statistical system is currently under siege; yet the OpenGov movement seems to be flourishing with evident private-sector support (e.g., Google and Yahoo’s sponsorship of the Open Government Working Group meeting in Sebastopol, CA, in 2007, which resulted in the “8 Principles of Open Government Data”). And that makes me surly.

So yes to Clive. I’ll wait for “pushy start ups [to] pressure governments to release more info [so that] activists will get to use it too.” But in return, I ask that pushy start-ups understand that there are those who have gone before, and who are standing in line.

Data, Data, Everywhere but Nary a Byte to Eat

Originally posted on July 22, 2011 by VL Carlson

In 1985 I started my “data scientist” career as the head of the DataBank at the Center for Urban Economic Development at UIC in Chicago. Ahh yes, these were the days when universities were investing in mainframe computers, Home Mortgage Disclosure Act data were becoming available, and the phrase “data-driven decision making” was entering our lexicon. Heady days. My first action item was to visit local community and public-sector agencies to find out what data they wanted and needed to run their programs. The primary ask was for city- or neighborhood-level data on health, employment, and housing: could the DataBank somehow find these data?

I’m now head of the Metro Chicago Information Center (MCIC) and am still a part of the civic data movement, now in the era of Big Data and Open Government. MCIC is running the Apps Competition for Metro Chicago, Illinois (www.appsformetrochicago.org), an unprecedented government partnership with conscious outreach to, and connection-building between, developers and community organizations. By design, MCIC is collecting data desires from coders and community groups: what do you wish you had?

The funny thing is that these folks want the same stuff the civic groups wanted in 1985. As I look at the data wish list compiled by our outreach folks at MCIC, the similarities to 1985 are striking: neighborhood housing data, local health indicators, more data on city businesses. The difference is that now there is a false belief that these data exist somewhere, if only they could be delivered to potential users in a structured and consumable manner. Unlock the data!

But are there really more civic data to be easily accessed in the Big Data era? To a certain extent, yes: technology opens new data possibilities. But I believe we’ve also been lulled into a false sense of abundance. The reality is that timely data are not as available as one might think; the amount of data varies widely by subject, by governmental source, and by geography; and most data are not easily mash-able/app-able for quick digestion. Data emerge through a complex socio-operational context of which technological change is only a part. Most important, the operational needs of cities lie at the core of any determination of civic data availability. City governments collect data that help them operate cities.

Think about it. Data on housing vacancies don’t exist from city governments because homeowners have to file a certificate of “occupancy” but not a certificate of “non-occupancy.” Public health incident data aren’t available because myriad health facilities are run by nonprofits, federal agencies, and city health departments, and they very rarely share data. We don’t know the characteristics of businesses in cities because local governments generally don’t collect the kind of information we want. Yes, they do inspections and licensing as part of operations, but that doesn’t give us number of employees, or sales history, or lines of business. The (federal) Bureau of Labor Statistics DOES collect this information, but does NOT publish economic data for cities—maybe that’s what we should be advocating for.

Coupling the energy of amazing civic coders with the multifaceted knowledge of data geeks is the best way to bring about real data liberation.

Open Government Data – Measuring Urban America

Home Depot made national news when it opened its Mill Basin, Brooklyn store in 2002. Its urban format partially reflected a new realization that, for city neighborhoods, urban income densities were a better measure of potential retail demand than the over-used measure of median income. The argument for measuring urban areas differently began in the late 1990s with research done by outfits such as MetroEdge, a subsidiary of ShoreBank Advisory Services in Chicago, and the national not-for-profit initiative Social Compact. I kicked off the Urban Markets Initiative (UMI) at Brookings with Pari Sabety by noting how important higher-quality, more reliable information about urban and inner-city areas is to building healthy communities: Using Information to Drive Change: New Ways to Move Urban Markets (2004). There we point out that the relative homogeneity of rural and suburban areas makes them easier to measure than diverse urban landscapes. Cities tend to be “under-measured.” (A toy illustration of the income-density point follows.)
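
As a toy illustration of why income density can matter more than median income for retail demand (all numbers below are hypothetical):

    # Toy comparison (all numbers hypothetical): a dense urban tract with a
    # modest median income can carry more aggregate buying power per square
    # mile than a sparser, higher-median-income suburban tract.
    def income_density(households, median_income, area_sq_mi):
        """Approximate aggregate income per square mile."""
        return households * median_income / area_sq_mi

    urban = income_density(households=4_000, median_income=38_000, area_sq_mi=0.5)
    suburb = income_density(households=1_200, median_income=72_000, area_sq_mi=2.0)

    print(f"urban tract:    ${urban:,.0f} per sq mi")    # $304,000,000
    print(f"suburban tract: ${suburb:,.0f} per sq mi")   # $43,200,000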

Why? Characteristics of physical form contribute to the richness and hard-to-measure nature of cities. Housing stock is more irregular: boarding houses and smaller, aged apartment buildings may have no individual mailboxes, so official address lists miss occupants; garages with “in-law” apartment additions are often off the record. Mixed living spaces miss people or economic activity: live/work units (a common way old warehouses are repurposed) are both a home and a business, and home-based retail businesses are more common.

Our task at UMI was to address this “urban data shortage” problem primarily by making the case for better data from federal statistical agencies (although we had local initiatives as well). Why can’t economic data be made available at the city level, for example? What about a retail census that broke out cities from suburbs? Although we found our work challenging then, the situation for federal data now looks even worse. Recent reports that data transparency initiatives at the federal level are to be severely curtailed are coupled with an attack on long-standing federal statistical programs (such as the American Community Survey) that produce critical economic and demographic data.

Here is the historic opportunity for the open government movement. Ten years ago open gov was in its infancy (Malamud’s “8 Principles of Open Government Data” wasn’t published until 2007); now open data catalogs are appearing in cities across the US. What can’t be measured at the federal level, whether because of a lack of political will or because of the inflexible nature of the federal statistical system, may be found in data collected locally.

This is my hope for the Apps4MetroChicago competition: not only the opportunity for fabulous apps, but apps that reveal the rich and diverse nature of the urban landscape. Build measures of the local food environment that incorporate permits, licensing, and inspections data as a way of tracking retail locations that would otherwise slip through the cracks. Put together building permit and occupancy data in a way that might yield a leading indicator of economic activity (a sketch of one such mash-up follows). Show us hospital records in order to estimate the health-care uninsured. Use 311 and crime data to describe neighborhoods: trendy, industrial, entertainment district, family-oriented.
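
As a sketch of the permit-and-occupancy mash-up (the file names and column names here are invented for illustration):

    import pandas as pd

    # Sketch only: file names and column names are invented for illustration.
    permits = pd.read_csv("building_permits.csv", parse_dates=["issue_date"])
    occupancy = pd.read_csv("occupancy_certificates.csv", parse_dates=["cert_date"])

    # Join on a shared parcel identifier to follow projects from permit
    # issuance through to a completed, occupied building.
    projects = permits.merge(occupancy, on="parcel_id", how="left")
    projects["months_to_occupancy"] = (
        (projects["cert_date"] - projects["issue_date"]).dt.days / 30.4
    )

    # A shrinking permit-to-occupancy lag, tracked by quarter, might serve
    # as a leading indicator of neighborhood economic activity.
    completed = projects.dropna(subset=["cert_date"]).copy()
    completed["quarter"] = completed["issue_date"].dt.to_period("Q")
    lag_by_quarter = completed.groupby("quarter")["months_to_occupancy"].median()
    print(lag_by_quarter)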

In short, use government data to help us reveal the rich nature of urban areas.

About VL Carlson

Virginia Carlson is a data geek who believes that the right systems, physical and social, create optimal outcomes. She’s been a professor, a researcher, a storyteller, a photographer, and an architectural historian.