Published March 2015

Last year I turned 50, with half of those years spent in evaluation (and 15 of those in philanthropy). The milestone prompted some reflection about life and past work, but mostly it just added to the To Do list or refocused me on old items still undone.
Like finishing blog posts and actually posting them. With apologies and thanks to Chris Lysy, Sheila Robinson, and Ann Emery, who have provided much encouragement, advice, tips, and technical assistance on evaluation blogging. The student still has to do the homework.
Years ago, while working at the Annie E. Casey Foundation, we struggled with how to organize, summarize, and communicate very diverse grantmaking strategies and results (from direct service work to community change, from technical assistance and capacity building to policy and advocacy). We had plenty of numbers and examples but no common framework for communicating them. This triggered both a return to basics around an intentional outcomes focus (with Results Based Accountability) and common definitions for the “types” of outcomes and results we were aiming for. There was a lot of experience with naming and describing outcomes for child and family well-being (e.g., increased employment, improved school attendance), but the struggle was often with summarizing more developmental outcomes like organizational capacity, changes in attitudes and beliefs, and early policy and advocacy investments. Collective memories may be fuzzy, but I credit Miriam Shark at Casey with advancing a set of three result categories:
- Impact – results that were intended to achieve direct change and impact on people
- Influence – results that described intended change in organizations, beliefs and behaviors, contexts, and policies and practices
- Leverage – changes in resources and funding, in this case, especially where the foundation investments influenced others to change how they invest in the same or similar strategies
We later discussed including a second “L” for Learning outcomes – especially where there were intentional strategies to acquire knowledge needed to inform other work. This list may seem simple (and even obvious in retrospect), but it provoked some key thinking and behaviors.
First, it helped program officers and grantees organize and report results in all four categories, which was especially helpful for activities and investments that could not measure community impact directly or within a short time period. The influence and leverage results could describe the early evidence that change was happening on a path to impact outcomes. In addition, it not only allowed results from different strategies within a portfolio or across the foundation to be consolidated, it also helped people look at all four types of outcomes – impact, influence, leverage, and learning – for individual grants or activities.
Second, it prompted everyone to define their intended results (and intentional strategies) for all four categories at the beginning of the planning and work. Again, this is certainly obvious within most results- and outcome-based planning, but often the focus is only on the long-term impact, with less upfront attention given to the earlier influence and behavior change outcomes needed to achieve changes in people and places. What often happens is that impact results are defined up front and measured, but if they are not fully achieved, both foundation and grantee fall back on narrative or bullet-form examples of “other changes” that occurred – often influence and leverage – defined and documented in retrospect.
Starting with this initial set of ideas, I asked ORS-Impact to prepare a tool for Making Connections community change sites and grantees to understand how to define and measure influence and leverage. This initial guide brought the concepts and early definitions to a limited audience of grantees, and it provided examples of indicators and ways to measure both influence and leverage. Later, in 2006, we focused on the policy and advocacy aspects of influence, which took the form of other manuals and guides and also contributed to the growing policy advocacy evaluation work.
I continued to use the initial framework, with the inclusion of learning outcomes, in work with multiple organizations, and ORS-Impact also returned to it in work with other clients. It is deceptively simple but helpful as an organizing framework when an array of investments and strategies targets different levels of change, operates on different timeframes, and yet is meant to relate and be complementary. Certainly deeper and more comprehensive theory of change exercises help to define these same elements in different ways, but those can be challenging to summarize and communicate to audiences not immersed in the work (like board members or the general public).
So we decided to go back to the original ideas and publications and spend time documenting good case examples of how the framework has been used and what organizations have gained from it. Jane Reisman, Anne Gienapp, Sarah Stachowiak, Marshall Brumer, Paula Rowland, and the ORS-Impact team worked with current and past clients and colleagues to assemble these examples. We also shared the examples at the American Evaluation Association conference and other meetings, which helped to develop the version you can read here as I2L2.
We continue to receive positive feedback, especially around the I2L2 framework’s ability to help organize thinking and definitions of expected change and results. Again, this doesn’t replace theory of change and other in-depth planning, but when community change strategies and their intended outcomes are complex and highly interrelated (sometimes without distinct sequencing), I2L2 helps groups organize, define, document, and communicate the results they aim for and achieve.
So where are we now? We have spent a lot of time and effort defining terms and examples for influence and leverage. (Others have also contributed their work on these categories – see the Jim Casey Youth Opportunity Initiative’s Assessing Leverage guide.) Now we would like to help people and organizations focus on defining intentional and planned learning results and the strategies to get there. Here we define learning not only as the lessons acquired from (usually) failing to achieve impact or successfully reach targets, but more importantly as an intentional agenda for acquiring needed knowledge – defined at the beginning and evaluated along the way. We hope to discuss this more with others, including at this year’s AEA meeting.
Do you have examples of work defining learning results? Learning outcomes? How have you evaluated learning?
We’d love to hear from you.