If you have a question that you’d like to share with the Raise Your Voice community, contact me.
It’s the worst thing ever: that moment when you’ve been working with a client, community members, or other stakeholders, and you have to deliver bad news.
I recently got this email from a nonprofit professional (and FYI: I’ve removed identifying information):
My nonprofit has created a program that seeks to promote physical activity among Indigenous youth in a rural community where there’s a lack of access to gyms and other places that would make it easier for youth to be more active. The stakeholders were expecting that the activities included in the program would resonate with the youth. I’ve been charged with carrying out an evaluation of this program. We used surveys and focus groups with the youth participants. The results of the evaluation were that the participants weren’t interested in the activities, which aligned with the lack of participation. In fact, the results showed that the participants have developed more creative ways to get in physical activity, but they brought up the need for other quality-of-life services that the program wasn’t addressing. The results could potentially impact the funding that was given to this program, as the funders were expecting the program to be a success. What’s the best way to handle this?
Dealing with funders and leadership can be tricky, and nonprofits know all too well the stress of proving to stakeholders that a program or service is successful.
So, how do you share unexpected results in a way that is diplomatic and addresses concerns head-on?
Make it participatory from the start
I’ve worked with clients who expected me to come in, ready to go, with all the surveys, focus group questions, and in-depth interviews already scheduled. They just wanted someone to come in and do the work for them. When I noticed this happening, I began to push back against working this way, and started encouraging current and potential clients to develop a participatory way of working together. From determining data collection tools to developing questions to ask participants (and even getting everyone together to interpret the data), making feedback gathering participatory from the start creates buy-in, puts everyone on the same page, and makes everything more transparent. When people are more involved, the process is more fun (at least for me), and everyone learns along the way.
And here’s a secret: When you make it participatory, it improves the likelihood that recommendations from the evaluation are actually implemented.
Address expectations and potential consequences
When you ask your stakeholders what they intend the outcome of their program to be, also ask this:
“What if what we’re expecting doesn’t happen?”
Ideally, we create programs or services based on theory, research, and what’s happening in our communities. That builds the foundation for meaningful work. Can you believe some nonprofits actually create programs or services just because they sound like a good idea? You’d be surprised. So can we really feel some type of way when we get results we weren’t expecting? In the case of the nonprofit above, it sounds like the program was created to address a need the community had already dealt with on its own.
But what about when we follow the theory, research, and community input, yet the outcome still isn’t what we expected?
Determine if it really is bad
Not all feedback is negative, and not all feedback is positive.
To take it further, getting mostly negative feedback is not always bad, and getting all positive feedback is not always a good thing, either. While I don’t go in expecting a program to be unsuccessful, I do question programs that have almost exclusively positive feedback and positively written evaluation reports. Sometimes evaluations are designed in ways that skew results to make a nonprofit look good (which is a whole ’nother conversation).
Were your feedback and evaluation activities a waste? No. You can always find a way to make the results work in your favor, and you do this by course-correcting. Go back to your evaluation plan. Review the data collection tools you used, be it a survey, in-person interview, or focus group. Were they the most appropriate for your program or service? Were they suggested by the funder or other stakeholders, and could they have been designed to generate more authentic feedback? Did they show no progress when there really was progress made (and vice versa)?
Is something really bad? It depends on who you’re talking to and the meaning they give it. One way to find out is to hold a data interpretation meeting using data placemats. Data placemats are a visualization technique that helps stakeholders understand results. Instead of coming up with the themes and analysis on your own (though you should still do your own analysis before the meeting), you’re forcing the stakeholders to think it through. Since it’s their program or service, they may have insight into why the results look a certain way. This is particularly useful if you’re an external evaluator.
Another creative approach I recently learned of is “data dice.” Get one or two empty boxes that are easy to hold, and cover them with blank paper. On each side of a die, write out one finding, be it a percentage or a quote from a focus group. Have everyone in the meeting take turns “rolling the dice” and discuss what each finding means to them and for the program or service. This not only makes learning about data more collaborative, but it breaks the monotony of sitting around a table viewing a PowerPoint presentation.
Tell the truth (and tell it sooner)
Here in New York City, the subway system has a safety phrase: “If you see something, say something.” How many times have we heard on the news about something that resulted in negative consequences and could have been prevented? While evaluation results may not be that dire, no one likes surprises, and lying about what you’re finding isn’t beneficial to your stakeholders or to yourself. This is one of the reasons I always encourage clients who typically request feedback at the end of a program to build in different opportunities for feedback throughout the life of the program (which, for many, is a fiscal year). It’s also why course-correcting is so important. If something unexpected is occurring, don’t wait until the evaluation is over to tell your stakeholders. As you create your evaluation and feedback plan, strategically place these activities throughout the life of the program so stakeholders can address issues before the evaluation is complete.
Frame it as an opportunity, rather than a failure
It’s my belief that most programs and services have some positive outcomes, even if those outcomes aren’t relevant to your stakeholders or funders.
Also, sometimes you’re not able to course-correct. This is where the evaluation report comes into play. The evaluation report has several sections, two of which are Findings and Recommendations. When something unexpected occurs, address it in the report, and instead of labeling it a “failure,” use these sections to tell a positive story: how the unexpected was identified, how it impacted the results, and how knowing this information will not only make the program or service better next time, but may also make it worth the funder’s while to provide additional funding to look into it further. Win-win.