Over the past year, I’ve been intentional about transitioning away from solely designing and implementing evaluations and toward working with my clients to build their capacity to do that work themselves.

And sometimes, I like to offer aspects of my services for free as a way of establishing a connection with a potential client. From a skills perspective, this keeps everything sharp. From a networking perspective, it puts me on the radar for future paid consulting work and referrals.

Recently, I offered to revise an evaluation tool for an organization that provides small grants to community groups seeking to reduce abortion stigma. They want to develop an easy-to-understand evaluation tool that measures stigma before and after grantee project interventions and measures overall project success. They are finding their current tool, a survey requiring grantees to provide open-ended answers, challenging for most of their grantees to understand, and they want something that makes feedback easier to give and to analyze.

In my work with past and current evaluation clients, some themes I’ve observed related to evaluation are A) a level of anxiety around evaluation as a whole, given that evaluation tends to get a bad reputation; B) an interest in developing engaging ways to gather feedback that build confidence in evaluation and allow for the necessary follow-up with staff and stakeholders; C) questions about how to implement feedback; or D) a mixture of A, B, and C.

I see this more often with clients who are grassroots or have a community organizing background, as they ultimately rely on direct community engagement for feedback rather than on formal evaluations. There is also a sense of protectiveness toward their programs, services, and campaigns; no one wants to learn that what they’re doing isn’t resonating with the communities they serve. So finding a way to gather meaningful feedback while remaining objective about feedback that may be interpreted as negative is a balancing act.

For some background: grantees receive small grants to develop a project or intervention that speaks to one level of abortion stigma: individual, community, institutional, media, or policy. Grantees have free rein to create what they want, with support from the organization. I made some preliminary edits to the organization’s evaluation tool and provided some additional feedback:

First, before starting any process, understand what you want to evaluate. In this organization’s case, the current tool focuses more on the process grantees went through in developing their project or intervention, from conception to implementation. This is called a process evaluation.

An impact evaluation, on the other hand, would focus on the impact the grantees’ interventions had on their target audience(s). If a grantee expects A to occur as a result of their target audience(s) being exposed to their project, did it really happen? If not, what factors may have contributed, and how can they be addressed?

Next, I offered a possible rationale for why grantees may be getting tripped up evaluating their projects: there is only one evaluation tool asking each grantee to evaluate their project, without much consideration of how each question on the tool translates to the level of stigma the project is addressing. After doing some research, I found that the International Network for the Reduction of Abortion Discrimination and Stigma (inroads) has shared best practices for adapting scales that measure abortion stigma at various levels, and included examples of various scales, including:

• Individual Level Abortion Stigma Scale (ILAS): Measures individual-level stigma among women who have had abortions. Adapted by The Sea Change Program. Access here.

• Stigmatizing Attitudes, Beliefs, and Actions Scale (SABAS): Measures community-level stigma toward women who have had abortions. Adapted by Ipas. Access here.

• Abortion Provider Stigma Scale (APSS): Measures institutional-level stigma toward women who have had abortions. Access here.

Measuring the impact of an intervention or project may not seem feasible if the target audience(s) have only been exposed to it once. One way to remedy this is to recommend that grantees continue their project, with support from the organization, and report back in 3–6 months. Another way to measure impact is to develop a larger evaluation in which the organization compares how grantees are engaging with their projects based on the level of stigma addressed. What are the similarities and differences, if any? This may give more insight into which levels of stigma are easier to address and evaluate, and help the organization provide resources for the levels that are harder to address.

Key Takeaway

This is part of the evaluative thinking process.

Researcher and evaluator Tom Archibald describes evaluative thinking as “a cognitive process in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves skills such as identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking and making informed decisions in preparation for action.”

After reviewing my revisions to their evaluation tool, the organization shared that my feedback will be helpful as they consider evaluation tools that are meaningful to both their grantees and the organization. They also shared that planning at the beginning of a project will help their evaluation process at the end. They appreciated that I was able to translate the evaluation questions into “common sense language” so that grantees can feel comfortable with their evaluation tools. Given the list of stigma-measuring scales above, they are interested in adjusting their evaluation tool by stigma level. This would be more work, but they believe it would yield better results.

One reason I enjoy program design and evaluation work is the learning process that takes place between me and my clients. It’s very rewarding to witness my clients’ “a-ha” moments, and to experience my own “a-ha” moments when I’m able to see things from their perspective. In this case, I had the opportunity to share my knowledge about impact and process evaluations while also learning more about various abortion stigma measurement scales and this organization’s evaluation needs.

RAISE YOUR VOICE:  Do you focus more on the process or the impact? Or both? Share below in the comments section.

Like this post? Subscribe to the Raise Your Voice newsletter to receive resources, advice, and tips to help you raise your voice for women and girls of color.
